ReportWire

Tag: Facebook

  • Mark Zuckerberg Worried Facebook Listening To Him After Being Pushed Shirt That Says ‘I Just Laid Off 10,000 Employees’

    PALO ALTO, CA—Noting the eerie feeling of being surveilled, Meta CEO Mark Zuckerberg reportedly expressed concern Tuesday that Facebook was listening to him after he received a targeted ad for a shirt that read “I Just Laid Off 10,000 Employees.” “How could it even know I just said that? It’s got to be using my goddamn microphone,” said Zuckerberg, adding that he had all his privacy settings turned on, and yet Facebook was pushing this item perfectly suited to his tastes. “It must have been listening to the party I was having to celebrate the layoffs. Sheesh, I’ve got to delete my cookies more often. It’s so invasive to feel tracked like this. It’s a little dystopian how it just showed me a shirt that says ‘I went to HARVARD and destroyed my FRIENDSHIP and love hunting with SPEARS.’” At press time, Zuckerberg confirmed he had bought the shirt from the advertisement.

    [ad_2]

    Source link

  • Meta is laying off 10,000 more workers as part of


    Meta Chief Executive Officer Mark Zuckerberg on Tuesday said the company is laying off an additional 10,000 workers to hedge against economic instability that could persist for “many years.”

    The move is part of a number of steps, including slowing hiring and canceling some projects, that Meta is taking to cut costs and improve financial performance during what Zuckerberg dubbed its “Year of Efficiency.” 

    The layoffs will be conducted “over the next couple of months,” Zuckerberg said in a memo to employees. 

    Recruiting team members will know by tomorrow whether or not they still have jobs. Meta, the parent company of Facebook and Instagram, will announce layoffs in its tech groups in late April, and across its business teams toward the end of May. The company will also scrap plans to hire an additional 5,000 workers to fill open roles. 

    “This will be tough and there’s no way around that,” Zuckerberg said in his address to workers. 

    “Removing jobs” is one of the ways Meta is executing on its goal of becoming more efficient, according to the memo. 

    The new round of layoffs comes after the company cut about 11,000 jobs — or 13% of Meta’s workforce — in November.

    Zuckerberg said that the earlier round of layoffs has helped the company “execute its highest priorities faster” as a leaner organization.

    At its peak in 2022, Meta employed 87,000 full-time workers.



  • Facebook-parent Meta plans to lay off another 10,000 employees | CNN Business




    CNN —

    Facebook-parent Meta plans to lay off another 10,000 workers, marking the second round of significant job cuts announced by the tech giant in four months.

    The latest layoffs, announced on Tuesday, come after Meta said in November that it was eliminating approximately 13% of its workforce, or 11,000 jobs, in the single largest round of cuts in the company’s history.

    In a Facebook post Tuesday, CEO Mark Zuckerberg said the job cuts will take place “over the next couple of months.”

    “We expect to announce restructurings and layoffs in our tech groups in late April, and then our business groups in late May,” he wrote. In a “small number of cases, it may take through the end of the year to complete these changes.”

    “Overall, we expect to reduce our team size by around 10,000 people and to close around 5,000 additional open roles that we haven’t yet hired,” Zuckerberg said.

    As of September 2022, Meta reported a headcount of 87,314, per a securities filing. With 11,000 job cuts announced in November and the 10,000 announced Tuesday, that would bring Meta’s headcount down to around 66,000.

    Meta is far from the only Big Tech company to undergo layoffs amid higher inflation, recession fears and whiplash from pandemic-induced demand. In the first months of this year, Amazon, Google-parent Alphabet and Microsoft have all confirmed major job cuts impacting tens of thousands of tech workers.

    Shares of Meta rose more than 4% in early trading Tuesday following the announcement.

    When the first round of job cuts was announced in November, Zuckerberg blamed himself for the company’s over-hiring earlier in the pandemic. Meta nearly doubled its headcount between March 2020 and September of last year, as the Covid-19 crisis led to a surge in demand for digital services.

    But the situation changed radically for the social media giant and other tech companies last year as pandemic restrictions eased and people returned to their offline lives. Meta’s core business was also hit by privacy changes implemented by Apple and advertisers tightening budgets amid recession fears.

    In its most-recent quarterly earnings report, Meta posted a sharp drop in profits and reported its third straight quarterly decline in revenue. But during the earnings call, Zuckerberg promised investors that 2023 would be the “year of efficiency” for the company, following years of heavy investment in growth and a more immersive version of the internet called the metaverse.

    On that call, Zuckerberg also suggested that more job cuts could be coming.

    “We closed last year with some difficult layoffs and restructuring some teams. When we did this, I said clearly that this was the beginning of our focus on efficiency and not the end,” Zuckerberg said during the earnings call in early February. He added that the company would be focused on “flattening” its org structure and “removing some layers of middle management to make decisions faster.”

    “As part of this, we’re going to be more proactive about cutting projects that aren’t performing or may no longer be as crucial, but my main focus is on increasing the efficiency of how we execute our top priorities,” Zuckerberg said.


  • China censors women modeling lingerie on livestream shopping — so men are doing it | CNN Business



    Hong Kong CNN —

    Donning a sassy piece of silk lingerie, a male model grooves to the beat and forms a heart shape with his fingers during a livestreaming session on Douyin, one of China’s most popular video-sharing platforms.

    His modeling performance is the latest illustration of the kind of entrepreneurial innovation sometimes needed to bypass China’s rigorous internet censorship, a dragnet that can ensnare seemingly innocuous activities – in this case retailers selling women’s underwear online.

    China deploys one of the world’s most stringent censorship regimes, with a track record of blocking out not just politically sensitive information but images of women’s bodies deemed marginally racy.

    Several businesses specializing in selling lingerie through livestreaming have had their sessions cut short after featuring a female model; their brush with internet censorship came to light in January.

    Hence the use of men instead.

    On one of the sales channels, a man is seen dressed in black lingerie, standing next to a mannequin showing a similar outfit, in what appears to be a screenshot of a livestream broadcast on Alibaba (BABA)’s Taobao Live, a streaming platform for the e-commerce giant.

    In another image, a different male model wears a pink slip dress and silky shawl, accessorized with cat ear headbands.

    In one livestream clip, carried by multiple state media outlets, an owner of an online venture said he was simply trying to play it safe.

    “This is not an attempt at sarcasm. Everyone is being very serious about complying with the rules,” the man, who identified himself as Mr Xu, said.

    The emergence of male lingerie models has caused mixed views online in China, from merriment and annoyance to reluctant acceptance.

    “So what should I do if I want to promote and showcase lingerie in the live broadcast session? It’s very simple, find a man to wear it,” read one comment on China’s microblogging site Weibo.

    A man in a mini slip dress and velvet robe models beside a woman in pajamas in a video posted on Douyin on February 17, 2023.

    Livestreaming sales of products is a multibillion-dollar industry in mainland China, and was given a major boost during the three years of the country’s strict Covid lockdowns that battered many brick-and-mortar businesses.

    As of June last year, the number of livestreaming e-commerce users in mainland China was over 460 million, according to the Academy of China Council for the Promotion of International Trade, a body affiliated with Beijing’s commerce ministry.

    A 2021 report by iResearch, a Beijing-based firm specializing in measuring audience growth online, predicted the livestream sector would be worth as much as $720 billion this year.

    Male models are not the only workaround.

    On Douyin, the Chinese domestic version of TikTok, some female models have circumvented the censorship by showcasing the latest lingerie styles over a t-shirt they are already wearing.

    Others displayed the items on mannequins.

    In 2015, China launched a crackdown on television shows exposing actresses’ cleavage, forcing some of the most popular costume dramas to zoom in on actresses’ faces to avoid getting into trouble with the broadcast authorities.

    Having male influencers promote female-oriented products is not new in China, either.

    One of the industry’s most successful livestream shopping influencers is Austin Li Jiaqi, who made his name as the “Lipstick King” after selling 15,000 lipsticks in just five minutes in 2018.

    As one of China’s biggest internet celebrities, Li also peddles cosmetics, skincare products and fashion apparel, often applying products he’s selling to his own face.

    Even outside of China, platforms such as Facebook and Instagram have faced criticism for restricting the sharing of images involving partial nudity, especially of women.

    Facebook and Instagram’s parent company, Meta, restricts the sharing of breasts, although it says it intends “to allow images that are shared for medical or health purposes.” But even Meta’s own Oversight Board has called on the company to make its policy less confusing and more gender inclusive.

    YouTube says it prohibits “the depiction of clothed or unclothed genitals, breasts, or buttocks that are meant for sexual gratification,” but it may age-restrict other images or videos involving nudity.


  • Meta’s Instagram back up after brief global outage | CNN Business


    Meta Platforms’ Instagram was back up for most users after a global outage, the photo-sharing platform said on Thursday, adding that an hours-long technical issue had been resolved.

    “Earlier tonight, a technical issue caused people to have trouble accessing Instagram. We resolved this issue for everyone as quickly as possible,” Instagram said in a tweet.

    Downdetector, which tracks outages, reported more than 53,000 incidents of users unable to access Instagram at the peak of the outage. The website collates status reports from a number of sources, including user-submitted errors on its platform.

    As Instagram was coming back online, Downdetector said reports of outages had fallen below 1,000 in the United States.

    Reports of issues fell below 100 in the UK, India, Japan and Australia, the outage-tracking website showed.


  • Facebook tests bringing back in-app messaging features as it competes with TikTok | CNN Business



    New York CNN —

    Nearly a decade after Facebook angered some users by splitting off messaging features from its flagship social networking application and forcing people to download a separate app to chat with friends, the company is now testing out reversing the move.

    In an interview with CNN, Facebook head Tom Alison said the platform is testing bringing messaging capabilities back to the Facebook app so users can more easily share content without having to use the Messenger app. The test comes as Facebook looks to beat back competition from TikTok by bolstering its position as a platform both to discover new content and to discuss it.

    “We believe that content feeds into not just you consuming it but being conversation starters and starting that message thread with your friends or being something that you can share into a group of people who share your same interests,” Alison said. “I think the thing that will differentiate Facebook and Instagram from TikTok and others is just the depth of being able to start a conversation with your friends from this content and have that kind of social dimension.”

    The move, which Alison also announced in a blog post Tuesday, comes after Facebook revised its strategy last year amid concerns about a stagnant and aging user base. No longer would the platform simply be about connecting friends and family. Instead, founder Mark Zuckerberg wanted Facebook to become a “discovery engine.”

    Facebook redesigned its home feed to surface more entertaining posts from across the platform, with AI-powered content recommendations, rather than just showing posts from those specifically in a user’s network. (A new, separate tab fulfilled the desire for the latter.) The goal was clear: to keep users engaged longer and help the platform better compete with TikTok and its steady stream of recommended content.

    Nine months later, that shift has begun to pay off, Alison told CNN. The platform last month reported that it hit 2 billion daily active users in the December quarter.

    “A lot of the narrative leading up to this has been that Facebook is in decline or Facebook’s best days are behind it,” Alison said, “and part of what we’re trying to do with this milestone is say, ‘hey, look, that’s actually not true.’”

    There has been no shortage of rumors of Facebook’s demise over the years, from its admission of having a “teen problem” a decade ago to the more recent series of PR debacles for the social network and its parent company, Meta. TikTok’s rapid rise and even the success of Facebook’s sister service, Instagram, have also taken some of the shine off the aging social network Zuckerberg launched in a dorm room nearly 20 years ago. But its audience has resumed growing, for now.

    Alison, who has been in charge of the Facebook app since July 2021, said the introduction of the “discovery engine” strategy is just the beginning of a larger shift for the platform, as Facebook works to forge a path to continued growth and relevance over the next two decades.

    “For the last almost 20 years … we’ve been really known for friends and family, but over the next 20 years, what we’re really working toward is being known for social discovery,” he said. “It’s going to be about helping you connect with the people that you know, the people that you want to know and the people that you should know.”

    While Facebook and Instagram have struggled in their attempts to keep pace with TikTok, including through copycat features like Reels, Alison argues Facebook has a leg up on TikTok thanks to its roots in helping people connect with their networks.

    For some creators, for example, Facebook has become a place to create groups of fans and hold conversations beyond the content they share to Instagram and TikTok, Alison said. “I think it’s helping them get closer to their fans on Facebook in a way they can’t do on other platforms.”

    As Facebook plots its evolution, it will have to contend with what Zuckerberg has called the company’s “year of efficiency,” an effort to cut costs after a broader reckoning in the tech industry and investor skepticism around its pricey plan to center its business model around the future version of the internet it calls the metaverse.

    “One of the things that we are embracing with the year of efficiency is prioritization and, frankly, just focusing more effort on some of our bigger bets,” Alison said. The platform has over the past year shuttered some smaller efforts, such as its Bulletin newsletter subscription service, in favor of investing in key areas like AI. “That’s a lot of the culture that we’re kind of instituting across Meta is just like, how do we do fewer things better? And how do we do them, sometimes, more quickly? Efficiency is not just about cost savings.”


  • Facebook revamps controversial content moderation process for VIPs | CNN Business



    New York CNN —

    Facebook-parent Meta on Friday announced a revamp of its “cross-check” moderation system after facing criticism for giving VIPs special treatment by applying different review processes for VIP posts versus those from regular users.

    But Meta stopped short of adopting all the recommended changes that had previously been put forward by its own Oversight Board, including a suggestion to publicly identify which high-profile accounts qualify for the program.

    The cross-check program came under fire in November 2021 after a report from the Wall Street Journal indicated that the system shielded some VIP users — such as politicians, celebrities, journalists and Meta business partners like advertisers — from the company’s normal content moderation process, in some cases allowing them to post rule-violating content without consequences.

    As of 2020, the program had ballooned to include 5.8 million users, the Journal reported. Meta’s Oversight Board said in the wake of the report that Facebook had failed to provide it with crucial details about the system. At the time, Meta said that criticism of the system was fair, but that cross-check was created in order to improve the accuracy of moderation on content that “could require more understanding.”

    Meta’s Oversight Board in a December policy recommendation called out the program for being set up to “satisfy business concerns” and said it risked doing harm to everyday users. The board — an entity financed by Meta but which says it operates independently — urged the company to “radically increase transparency” about the cross-check system and how it works.

    On Friday, Meta said it would implement in part or in full many of the more than two dozen recommendations the Oversight Board made for improving the program.

    Among the changes it has committed to make, Meta says it will aim to distinguish between accounts included in the enhanced review program for business versus human rights reasons, and detail those distinctions to the board and in the company’s transparency center. Meta will also refine its process for temporarily removing or hiding potentially harmful content while it’s pending additional review. And the company also said it would work to ensure that cross-check content reviewers have the appropriate language and regional expertise “whenever possible.”

    The company, however, declined to implement such recommendations as publicly marking the pages of state actors and political candidates, business partners, media actors and other public figures included in the cross-check program. The company said that such public identifiers could make those accounts “potential targets for bad actors.”

    “We are committed to maintaining transparency with the board and the public as we continue to execute on the commitments we are making,” Meta said in a policy statement regarding the cross-check program.

    The Oversight Board said in a tweet Friday that the company’s proposed changes to the cross-check program “could render Meta’s approach to mistake prevention more fair, credible and legitimate, addressing the core critiques” in its December policy recommendation.


  • Mark Zuckerberg looks to ‘turbocharge’ Meta’s AI tools after viral success of ChatGPT | CNN Business




    CNN —

    Mark Zuckerberg said Meta is creating a new “top-level product group” to “turbocharge” the company’s work on AI tools, as it attempts to keep pace with a renewed AI arms race among Big Tech companies.

    In a Facebook post late Monday, Zuckerberg said the elite new group will initially be formed by pulling together teams across the company currently working on generative AI, the technology that underpins the viral AI chatbot, ChatGPT. This group will be “focused on building delightful experiences around this technology into all of our different products,” Zuckerberg said, starting with “creative and expressive tools.”

    “Over the longer term, we’ll focus on developing AI personas that can help people in a variety of ways,” Zuckerberg said. Those AI features may include new Instagram filters as well as chat tools in WhatsApp and Messenger, he said.

    The planned efforts come amid a heightened AI frenzy in the tech world, kicked off in late November when Microsoft-backed OpenAI released ChatGPT publicly. The tool quickly went viral for its ability to generate compelling, human-sounding responses to user prompts. Microsoft later announced it was incorporating the tech behind ChatGPT into its search engine Bing. A day before Microsoft’s announcement, Google unveiled its own AI-powered tool called Bard.

    Meta, by comparison, has been quiet so far. Yann LeCun, Meta’s chief AI scientist, has expressed some skepticism surrounding the ChatGPT hype. “It’s not a particularly big step towards, you know, more like human level intelligence,” LeCun said in one interview late last month. “From the scientific point of view, ChatGPT is not a particularly interesting scientific advance,” he added.

    Generative AI tools are built on large language models that have been trained on vast troves of online data to create written and visual responses to user prompts. But these systems also have the potential to perpetuate biases and misinformation. Already, both Microsoft and Google’s AI tools have run into controversies for producing some inaccurate or uncanny responses.

    As with Microsoft and Google, there are some risks for Meta in embracing this technology. Last year, before the ChatGPT hype, Meta publicly released an AI-powered chatbot dubbed “BlenderBot 3.” It didn’t take long, however, for the chatbot to start making offensive comments.

    In his post Monday, Zuckerberg said: “We have a lot of foundational work to do before getting to the really futuristic experiences, but I’m excited about all of the new things we’ll build along the way.”


  • New Meta platform aims to prevent sextortion of teens on Facebook and Instagram | CNN Business




    CNN —

    Meta is taking steps to crack down on the spread of intimate images of teenagers on Facebook and Instagram.

    A new tool, called Take It Down, takes aim at a practice commonly referred to as “revenge porn,” where someone posts an explicit picture of an individual without their consent to publicly embarrass or cause them distress. The practice has skyrocketed in the last few years on social media, particularly among young boys.

    Take It Down, which is operated and run by the National Center for Missing and Exploited Children, will allow minors for the first time to anonymously attach a hash – or digital fingerprint – to intimate images or videos directly from their own devices, without having to upload them to the new platform. To create a hash of an explicit image, a teen can visit the website TakeItDown.NCMEC.org to install software onto their device. The anonymized number, not the image, will then be stored in a database linked to Meta so that if the photo is ever posted to Facebook or Instagram, it will be matched against the original, reviewed and potentially removed.
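    The hash-matching flow described above can be sketched in a few lines. This is a deliberately simplified illustration, not Meta’s or NCMEC’s implementation: it uses a plain SHA-256 digest as the “digital fingerprint,” whereas the real service may use different hashing, and the function and variable names here are invented for the example.

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    # The hex digest stands in for the anonymized "hash" a teen submits;
    # only this string, never the image itself, goes into the database.
    return hashlib.sha256(image_bytes).hexdigest()

# Fingerprints submitted via the takedown service (illustrative data only).
reported_hashes = {fingerprint(b"example-image-bytes")}

def needs_review(uploaded_bytes: bytes) -> bool:
    # At upload time the platform hashes the photo and checks for a match;
    # a hit would route the post to human review and potential removal.
    return fingerprint(uploaded_bytes) in reported_hashes
```

    In this sketch, `needs_review(b"example-image-bytes")` returns `True` because its digest matches a stored fingerprint, while any other byte string returns `False`.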

    “This issue has been incredibly important to Meta for a very, very long time because the damage done is quite severe in the context of teens or adults,” said Antigone Davis, Meta’s global safety director. “It can do damage to their reputation and familial relationships, and puts them in a very vulnerable position. It’s important that we find tools like this to help them regain control of what can be a very difficult and devastating situation.”

    The tool works for any image shared across Facebook and Instagram, including Messenger and direct messages, as long as the pictures are unencrypted.

    People under 18 years old can use Take It Down, and parents or trusted adults can also use the platform on behalf of a young person. The effort is fully funded by Meta and builds off a similar platform it launched in 2021 alongside more than 70 NGOs, called StopNCII, to prevent revenge porn among adults.

    Since 2016, NCMEC’s cyber tip line has received more than 250,000 reports of online enticement, including sextortion, and the number of those reports has more than doubled since 2019. In the last year, 79% of the offenders were seeking money to keep photos offline, according to the nonprofit. Many of these cases played out on social media.

    Meta’s efforts come nearly a year and a half after Davis was grilled by Senators about the impact its apps have on younger users, after an explosive report indicated the company was aware that Facebook-owned Instagram could have a “toxic” effect on teen girls. Although the company has rolled out a handful of new tools and protections since then, some experts say it has taken too long and more needs to be done.

    Meanwhile, in his latest State of the Union address, President Biden demanded more transparency about tech companies’ algorithms and how they impact young users’ mental health.

    In response, Davis told CNN that Meta “welcomes efforts to introduce standards for the industry on how to ensure that children can safely navigate and enjoy all that online services have to offer.”

    In the meantime, she said the company continues to double down on efforts to help protect its young users, particularly when it comes to keeping explicit photos off its site.

    “Sextortion is one of the biggest growing crimes we see at the National Center for Missing and Exploited Children,” said Gavin Portnoy, vice president of communications and branding at NCMEC. “We’re calling it the hidden pandemic, and nobody is really talking about it.”

    Portnoy said there’s also been an uptick in youth dying by suicide as a result of sextortion. “That is the driving force behind creating Take It Down, along with our partners,” he said. “It really gives survivors an opportunity to say, look, I’m not going to let you do this to me. I have the power over my images and my videos.”

    In addition to Meta’s platforms, OnlyFans and Pornhub’s parent company MindGeek are also adding this technology into their services.

    But limitations do exist. To get around the hashing technology, people can alter the original images, such as by cropping, adding emojis or doctoring them. Some changes, such as adding a filter to make the photo sepia or black and white, will still be flagged by the system. Meta recommends teens who have multiple copies of the image or edited versions make a hash for each one.
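    The asymmetry described above, where a uniform tone filter still gets flagged but a crop defeats the match, is characteristic of perceptual hashing. Below is a toy “average hash” over a made-up grid of brightness values; it illustrates the general idea only and is not the algorithm these platforms actually use.

```python
def average_hash(pixels):
    # One bit per pixel: set when the pixel is brighter than the image's
    # mean luminance. Real perceptual hashes are far more elaborate.
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(p > mean for p in flat)

image   = [[10, 200], [30, 220]]                          # toy 2x2 brightness grid
sepia   = [[int(p * 0.9) for p in row] for row in image]  # uniform tone shift
cropped = [[200], [220]]                                  # crop drops a column

# A uniform filter preserves which pixels sit above the mean...
assert average_hash(image) == average_hash(sepia)
# ...but cropping changes the pixel layout entirely, defeating the match.
assert average_hash(image) != average_hash(cropped)
```

    This is why Meta’s guidance in the paragraph above suggests hashing each edited copy separately: edits that change layout produce an entirely different fingerprint.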

    “There’s no one panacea for the issue of sextortion or the issue of the non-consensual sharing of intimate images,” Davis said. “It really does take a holistic approach.”

    The company has rolled out a series of updates to help teens have an age-appropriate experience on its platforms, such as adding new supervision tools for parents, an age-verification technology and defaulting teens into the most private settings on Facebook and Instagram.

    This is not the first time a major tech company has poured resources into cracking down on explicit imagery of minors. In 2022, Apple abandoned its plans to launch a controversial tool that would check iPhones, iPads and iCloud photos for child sexual abuse material following backlash from critics who decried the feature’s potential privacy implications.

    “Children can be protected without companies combing through personal data, and we will continue working with governments, child advocates, and other companies to help protect young people, preserve their right to privacy, and make the internet a safer place for children and for us all,” the company said in a statement provided to Wired at the time.

    Davis did not comment on whether Meta expects criticism of its approach, but noted “there were significant differences between the tool that Apple launched and the tool that NCMEC is launching today.” She emphasized Meta will not be checking for images on users’ phones.

    “I do welcome any member of the industry trying to invest in efforts to prevent this kind of terrible crime from happening on their apps,” she added.


  • The Last Of Us Episode 7 Recap: Just Like Heaven


    Screenshot: HBO

    The release of The Last of Us in 2013 already marked a remarkable shift in narrative tone for big-budget, so-called “AAA” games. However, for some of us, 2014’s DLC chapter, The Last of Us: Left Behind, proved to be even more remarkable. It took mechanics that, in the game proper, had been used in nail-biting sequences of life-or-death desperation and repurposed them as the stuff of bonding and relationship-building, leading us to feel Ellie’s connection with Riley not just through cutscenes and pre-written dialogue but through play, in the purest sense of the word.

    Now, the episode of HBO’s adaptation based on Left Behind is here, and it’s very good on its own terms. The storytelling fundamentals still work, even with the interactivity that made the game so striking removed. (A number of sequences built around that interactivity, including one in which Ellie and Riley have a contest in which they throw bricks to break car windows, and one in which they hunt each other with water rifles, are understandably totally absent in the episode.) However, because Left Behind was a particularly remarkable example of what’s possible when AAA mechanics are used in new and exciting ways, I don’t feel that there was really any hope of this episode reaching the same highs. The game was one of the very best, most innovative and moving AAA experiences of the decade in which it was released. This is—and I don’t mean this as an insult at all—a very good episode of a mostly very good TV series, and it does benefit from a few music cues that the game lacks. On top of that, Bella Ramsey and Storm Reid are both exceptional, and definitely make this story and its deeply felt emotions their own. Let’s get into it.

    A tale of two malls

    First, let me touch on the biggest change between this episode and the game on which it’s based. In both, Joel’s been seriously injured, and Ellie must find some supplies with which to treat his wound. Here in the show, we experience Ellie’s mall flashback while she rummages for supplies in a house where she and Joel are hiding out, and the only real thematic throughline between the action of the “present” and the “past” of the episode is that what Ellie goes through in the past informs our understanding of why she’s so desperate not to lose Joel in the present.

    Ellie looks at a statue of an archer in a snowy Colorado mall in the game The Last of Us: Left Behind.

    Screenshot: Naughty Dog

    In the game, she’s actually got Joel locked up in an old storefront at a Colorado mall, and the flashbacks to her night at the mall with Riley are interspersed with action set in the “present” in which she searches this other mall high and low for medical supplies. Playing the DLC, you probably spend about as much time in the Colorado mall as you do in the Boston one, and as Ellie, you must fight infected stalkers, solve some environmental puzzles, and survive some very challenging combat encounters with men who are hunting Joel and Ellie. The Colorado mall also has a number of details that trigger associations for us as players with the Boston mall. For instance, both have a restaurant chain called Fast Burger, and in the pocket of a body she’s searching, Ellie finds a strip of photos created by the same type of photo booth she and Riley use at the mall in Boston.

    Meanwhile, all TV show Ellie has to do is look in the kitchen for a needle and thread. She doesn’t know how easy she’s got it.

    This hopeless situation

    In the episode’s opening scene, the injured Joel tells her to leave, and she says, “Joel, shut the fuck up!” reminding us, as the last episode emphasized and this one will drive home, that she has known too much loss already, and she’s not about to give up on him.

    He tells her to go to Tommy. She covers him with a jacket, gives him a fuck you look, and walks out of the room, and into the flashback that dominates the episode.

    She’s running listlessly in circles in a high school gymnasium. On her Walkman (yes, an actual Sony Walkman, which she also has in the game) she’s listening to “All or None” by Pearl Jam. It’s from the 2002 album Riot Act, so it would exist in the show’s timeline where the outbreak occurred in 2003. Without spoiling anything for those who haven’t played The Last of Us Part II, Pearl Jam does figure into the game in a way that likely won’t, for timeline reasons, play out the same in the show, so this at least lets the band’s work be heard in the TV series.

    Ellie, in gym sweats, looks angrily at another girl in the foreground in a moment from HBO's The Last of Us.

    Screenshot: HBO

    (Incidentally, none of this stuff with Ellie in school is from the game. Some of it may be based on material in the comic book series The Last of Us: American Dreams, but as I haven’t read that series, I can’t say for sure.)

    Soon, a bigger girl starts giving Ellie shit, telling her to pick up the pace so that the whole group doesn’t get punished. When Ellie says she doesn’t want to fight about it, the girl says tauntingly, “You don’t fight. Your friend fights. She’s not here anymore, is she?” With that, Ellie decides she does want to fight after all.

    Cut to some time later, and Ellie’s sporting a nasty shiner. A FEDRA official, Cpt. Kwong, notes that her behavior has been particularly bad for the past few weeks and that his bad-cop approach in response—tossing her in the hole multiple times—hasn’t worked, so he tries the good-cop approach, giving her a heartfelt talk in which he suggests that she’s too smart to throw her life away, but that seems like exactly what she’s determined to do. She can either keep misbehaving and end up a grunt, doing grunt work until she dies in one unfortunate circumstance or another, he says, or she can swallow her pride and someday become an officer. His impulse is rooted in a bleak view of humanity—”if we go down, the people in this zone will starve or murder each other, that much I know”—but Ellie nonetheless seems persuaded, for the moment.

    Ellie’s room, featuring a poster for Mortal Kombat II

    Later, Ellie’s in her room as the rain falls outside. She’s reading an issue of Savage Starlight, the significance of which I first talked about in my recap of episode five.

    Setting the comic down, she stares at the vacant bed across the room before a lights out call prompts her to try going to sleep. For a bit, the camera lingers on details in the room, like a small stack of cassettes that includes A-ha’s greatest hits compilation and an Etta James tape, both of which feature songs we’ll be hearing before the night is out. Also on Ellie’s wall are dinosaur drawings, space shuttle diagrams, and, amusingly, a poster for the 1987 sci-fi comedy Innerspace starring Martin Short, Meg Ryan, and Dennis Quaid.

    We also see a poster for Mortal Kombat II. Yes, this reflects one of the biggest changes to the source material, which we’ll get to later in the episode. However, what you may not know is that, when Left Behind was remade for The Last of Us Part I, the developers also snuck a Mortal Kombat II poster into Ellie’s room there, confirming (via retcon) that MKII at least exists in the game’s universe as well, likely because they knew by that point that it was going to be taking the place of The Turning in the TV adaptation.

    Read More: The Last Of Us Show Made One Of The Best Game Moments Worse

    A rocky reunion with Riley

    Riley and Ellie’s reunion gets off to a rough start when Riley (Storm Reid, Euphoria) sneaks into the room and puts her hand over the mouth of the sleeping Ellie. Ellie panics, knocks Riley to the floor, and grabs her switchblade before she realizes who her attacker is. When she sees that it’s actually her best friend, the exposition starts flying fast. Riley’s been gone for three weeks because, after a long time spent “talking about liberating the QZ,” she’s actually decided to do something.

    In a shot from the game, Ellie says to an offscreen Riley, "All this time - I thought you were dead."

    Screenshot: Naughty Dog

    This triggers complicated feelings in Ellie, who refuses Riley’s request to come with her and have “the best night of your life” because she has to get up in a few hours for drills “where we learn to kill Fireflies.” Yeah, these friends are in a tough spot, seemingly on opposite sides of an ideological (and real) conflict. As Riley predicted, though, Ellie quickly relents, the chance to spend a few hours with the friend she’s been missing so much apparently too tough to pass up.

    What’s FEDRA vs. Fireflies between friends?

    After they make their escape, Ellie is surprised that Riley seems less inclined toward conflict than usual, telling her, “You can’t fight everything and everyone. You can pick and choose what’s important.” “Are they teaching you this at Firefly University?” Ellie asks, and it turns out they are. A minute later, as they’re sneaking through an old apartment building, Ellie’s flashlight starts giving out. “Firefly lights are better,” Riley teases. When Ellie declares “one point for the anarchists,” Riley says, “We prefer freedom fighters.”

    In a moment that’s new for the show, Ellie and Riley find a man’s body in a hallway, with some pills and a bottle of hard liquor nearby, which they snag and take swigs from on the rooftop. In the game, they instead raid the camp of a man they were on friendly terms with named Winston, who, remarkably for someone in their world, died of natural causes. He has some booze in a cooler that you can drink. The show’s Ellie handles the liquor much better than her game counterpart, who spits it out.

    After begging Riley to let her hold her gun, Ellie asks, “So, what happened, you started dating some Firefly dude and was like, ‘Uhhh, this is cool, I think I’ll be a terrorist’?” It’s a striking line because it’s both an obvious joke and it also seems to be Ellie perhaps trying to feel out Riley’s attitude toward boys, as if she’s trying to determine if there’s any chance Riley reciprocates her feelings. (Nothing like this is said in the game.) Soon, Riley tells the truth: she encountered a woman—Marlene—who asked her what she thought of FEDRA. Riley replied with her honest opinion, “they’re fascist dickbags,” and with that, she was in. Ellie starts to push back, regurgitating some of the same bullshit Cpt. Kwong told her earlier about FEDRA holding everything together, but rather than let it devolve into an argument, Riley says they’re on a mission, and leads them onward, hopping across many a rooftop on the way to their destination: the mall.

    Riley promises to show Ellie the four wonders of the mall in a moment from HBO's The Last of Us.

    Screenshot: HBO

    When they arrive, Riley arranges a pretty cool reveal for Ellie, having her friend stand in the darkened shrine to capitalism before flipping on the power. Ellie gazes in awe as everything becomes illuminated. Riley promises to show her “the four wonders of the mall,” and their adventure truly begins.

    Take on me

    The Last of Us becomes the latest prestige TV series to use the A-ha hit “Take on Me,” a song that also figures into the game’s sequel, as Ellie experiences the wonder of escalators, or as she calls them at first, “electric stairs,” for the first time. Amazed by the contraption, she races down them, races back up them, walks in place, and, perhaps trying to impress her crush and probably feeling the effects of that swig of alcohol she took earlier, just generally acts like a total goofball.

    As they make their way toward Riley’s first wonder (which is now the second wonder because Ellie was so wowed by the escalator), they pass a movie theater with a poster out front for a film in the Dawn of the Wolf series, the Last of Us universe’s stand-in for Twilight. Briefly stopping to regard the display at a Victoria’s Secret, Riley comments on how strange it is to her that people once wanted that stuff, then starts laughing while trying to imagine Ellie wearing the lacy lingerie. Riley moves on, but Ellie takes a moment to check her look in the window, clearly concerned about the impression she might make on Riley tonight.

    Just like heaven

    Riley tells Ellie to close her eyes, and as she leads her by the hand to the mall’s next wonder, we’ve gotten enough insight into Ellie’s feelings that we can imagine how exciting it must be for her, that high school electricity you might feel at the slightest physical contact with the person you’ve been dreaming about.

    Ellie says to Riley, "Fuck you, you found another pun book?" while both ride a carousel in a moment from the game The Last of Us: Left Behind.

    Screenshot: Naughty Dog

    The wonder is indeed worthy of the build-up: a stunning carousel, lit up in golden lights. This is, of course, straight out of the DLC, the source of some of its most iconic images, but new here is the fact that the carousel plays a music-box version of The Cure’s “Just Like Heaven,” and I think the lyrics of that song sum up how Ellie feels in this moment pretty well. Like the game on which it’s based, this episode is full of unspoken emotion, which makes it all the more effective. Ellie’s smile, beaming at Riley as the carousel spins, says more than words ever could. Find someone who looks at you the way Ellie looks at Riley here. The two have another drink, and Ellie continues to bask in Riley’s presence.

    But such moments never last, and as the carousel grinds to a halt, Ellie’s mind is interfering with what her heart feels, turning over questions again about Riley’s allegiance to the Fireflies. “Did you really leave because you actually think you can liberate this place?” she asks, making the question sound every bit as dismissive as it reads. When Riley protests that it’s not a fantasy, that the Fireflies have set things right in other QZs, Ellie tells her that they could do that too, “if you come back. We’re, like, the future.”

    Ellie and Riley look at each other while riding a carousel in HBO's The Last of Us.

    Screenshot: HBO

    Riley doesn’t seem hopeful about her prospects with FEDRA, telling Ellie that Kwong has her lined up for sewage detail. To Kwong, Riley is doomed to the kind of grunt work he told Ellie she could avoid if she plays her cards right. This is new for the show, and makes it that much more clear why Riley wants a life outside of what FEDRA has in store for her.

    Pictures of you

    Next up on Riley’s tour of wonders is the photo booth, another classic moment from the game. When the DLC first launched in 2014, this moment felt impactful because it featured some then-novel Facebook integration, allowing you to upload images of the specific poses you had Ellie and Riley strike to your feed. It was a way for people to share the experience and connect over their feelings about it. It’s a bit strange to see a moment that was initially designed not just for interactivity but for social media integration be recreated without these elements that once made it so special. It’s still a sweet scene, of course, but this is one case where the game will always be the definitive experience for me. At least the show’s Ellie and Riley actually get a printout of their photos, albeit faded and colorless. The game’s duo got only their memories of the experience.

    As they head to the next wonder, Riley talks it up, saying “it’s pretty dang awesome and it might break you.” Ellie tells her not to oversell it, but she hasn’t. She tells Ellie to stop and listen, and in the distance is the unmistakable cacophony of a video arcade. Yeah, Ellie is stoked. Standing before Raja’s Arcade in all its noisy glory, she says, “This is the most beautiful thing I’ve ever seen.”

    Mortal Kombat II vs. The Turning

    The arcade’s got Centipede and Tetris, Frogger and Daytona USA, all alive and ready to be played. But there’s one game they want to play most: Mortal Kombat II.

    This is one of the episode’s biggest departures from the game. There, the machines in the arcade remain off, and the most Ellie can do is imagine playing with them. (As I discovered when re-playing Left Behind for this recap, there’s a hidden trophy you can get here, a little self-deprecating joke from Naughty Dog. If you approach and interact with a Jak X Combat Racing arcade machine in the back corner, Ellie will imagine playing it for a bit. When she’s done, she comments to herself, “That game is stupid,” and you get the trophy, called Nobody’s Perfect. Oof, was Jak X really that bad?)

    Riley's face is lit by the blue glow of a screen while Riley narrates the action of a fighting game for her in a screen from the game The Last of Us: Left Behind.

    Screenshot: Naughty Dog

    In the game, it’s not Mortal Kombat II that they play, but a fictional fighting game called The Turning, and Ellie can only play it with her imagination. As Riley narrates the action, and as Ellie imagines it so vividly that she can hear the game’s announcer as well as the sound effects of battle, you enter a series of onscreen inputs to pull off attacks, blocks, dodges, and, finally, an ultra kill. Yes, The Turning was clearly inspired by Mortal Kombat, so the genuine article makes for a pretty fitting replacement.

    In his own commentary piece, my colleague Kenneth makes a strong argument that something is lost by having the characters actually play a game, rather than merely imagining one. I definitely agree that the way it plays out in the game is much more poignant. It’s just one more thing that Ellie will never get to really experience. At the same time, I think the interactivity of the sequence was central to its impact; merely watching Ellie imagine the game and input the sequences would have little of the effect the scene conjures by having you do it yourself. In lieu of that, swapping in Mortal Kombat II, a game so many of us have our own memories of playing, allows us to feel some deeper connection to the scene. For me, it’s another instance, like the photo booth, where the TV show was never going to fully recapture the power of the game on which it’s based.

    Ellie and Riley stand before a Mortal Kombat II machine in HBO's The Last of Us.

    Screenshot: HBO

    Kiss me, kill me

    Bella Ramsey does a great job of capturing the intense excitement and supreme cluelessness of a gamer girl who’s literally never played an arcade game before, and it’s fun to watch both her and Reid react to the game’s legendary sound effects, and to Mileena’s famous fatality. Eventually, playing as Baraka, Ellie gets a win on Riley, who tells her how to do his fatality. Baraka impales Mileena on his blades and the girls lose it, and in the excitement, we can tell, even if Riley can’t, that Ellie really wants to kiss her. The moment passes, though, and Ellie protests that she has to be back home in bed soon. However, Riley tells her that she got her a gift, and that’s enough to get Ellie to tag along for a bit longer.

    In the food court, Riley’s got a little camp, where she gives Ellie volume two (actually “volume too” lol) of Will Livingston’s series of pun books, the same one she’s been torturing Joel with throughout the series. In the game, Riley gives it to Ellie just after you ride the carousel, and you can spend a while reading jokes to Riley if you like. (My favorite of the bunch: What’s a pirate’s favorite letter? ‘Tis the C.)

    In the show, however, Ellie’s delight in the new treasure trove of punny goodness is short-lived, as she finds a bunch of explosives Riley has made. Riley says that she would never let them be used on or anywhere near Ellie, but Ellie doubts that her supervisors would care what Riley has to say about that, and she storms off.

    Riley gives chase and tells Ellie that she’s leaving, that this is her last day in Boston, which is enough to get Ellie to stop. “I asked if you could join so we could go together,” Riley says, “but Marlene said no.” In the game, Riley phrases this sentiment a bit differently, telling Ellie that Marlene “wants you safe at that stupid school. I’m not even supposed to come see you.” The reasons why Marlene might be looking out for Ellie from afar—even before knowing Ellie was immune to cordyceps—will become clear in time, if you don’t know them already. Despite Riley’s heartfelt plea, expressing her desire to spend some of her little time left in Boston with Ellie and to say goodbye on good terms, Ellie remains furious, and storms off again.

    Love and truth in the Halloween shop

    She thinks better of it, though, and turns around before she gets too far. Trudging back through the mall, she hears screams and fears the worst. Charging into the store the screams are coming from, she’s confronted with a spooky sight indeed: some sort of mechanical Halloween jumpscare device letting out the pre-recorded shrieks. Here it is, the Halloween store, the final wonder Riley had in store for her. (In the game, you actually enter the Halloween store first upon arriving at the mall. This scene effectively combines that one and one near the end of the DLC.)

    Riley’s hiding out in the Halloween store, and tells Ellie she was saving it for last because she thought she’d like it the best. “I guess it was stupid,” she says. “I’m fucking stupid.” Ellie sits down. It’s time to talk about some real shit.

    Ellie says "Don't go" to Riley in a moment from the game The Last of Us: Left Behind.

    Screenshot: Naughty Dog

    “So you leave me. I think you’re dead. All of a sudden, you’re alive. And you give me this night. This amazing fucking night. And now you’re leaving again, forever, to join some cause I don’t even think you understand. Tell me I’m wrong.” Yeah, I can see how Ellie’s got some emotional turmoil going on at the moment.

    Riley tells Ellie that she doesn’t know everything. Unlike Ellie, Riley remembers what it was to have a family, for a little while at least, and the real sense of belonging that came with that. Now the Fireflies have chosen her, and she senses a chance for that kind of belonging and purpose again. “I matter to them.”

    Ellie kisses Riley in HBO's The Last of Us.

    Screenshot: HBO

    Ellie softens a bit, and tells Riley that she’s her best friend and that she’ll miss her. Riley proposes “one last thing,” and Ellie agrees, before Riley tosses her a werewolf mask and grabs a spooky clown mask for herself, masks they both also wear in the game. She puts on Etta James’ “I Got You Babe,” the same song that features so prominently in the game at this pivotal moment, and begins dancing atop the display case.

    For a while they just enjoy the moment, but what Ellie is feeling is too strong to be contained, so she takes off her mask and pleads with Riley, “Don’t go.” Just as in the game, Riley agrees, almost as if she’s been waiting, hoping that Ellie would ask her this. Ellie kisses her, then apologizes, to which Riley responds, “For what?” It’s a beautiful and cathartic moment, and a painful one, too, since we know their happiness ends even before it has a chance to start. It makes for a fascinating contrast with the third episode, which charted the love story of Bill and Frank across decades. Here, we get the love story of Ellie and Riley, not quite in real time but not too far off. This night lasts only a matter of hours, and yet the memory of it will be with Ellie forever.

    I feel like “don’t go” is a bigger ask on Ellie’s part here in the show than it is in the game, since she knows that FEDRA has Riley pegged for grunt work, and it’s a lot to ask someone you love to resign themselves to a life of such limited possibility just to be with you. But I’m sure that in that moment, she thinks that together, they can create something better. And who knows, maybe they could have.

    They barely even get a chance to imagine what that future might look like, however, before the infected we saw earlier roars and runs in, putting up one hell of a fight before Ellie finally finishes it with her switchblade. Not before both of them are bitten, however, and just like that, their dream future evaporates.

    “I’m not letting you go”

    Ellie clutches a medical kit while saying "I'm not letting you go" in HBO's The Last of Us.

    Screenshot: HBO

    And while present-day Ellie rummages desperately through the house for something to treat Joel with, past Ellie, thinking her fate is sealed, smashes shit in a rage before collapsing next to Riley. Riley says they could just off themselves with her gun, but she’s not a fan of that idea. Taking Ellie’s hand, she says, “Whether it’s two minutes or two days, we don’t give that up. I don’t want to give that up.”

    Ellie's fingers intertwine with Joel's in a shot from HBO's The Last of Us.

    Screenshot: HBO

    Rummaging in the kitchen, Ellie finds a needle and thread and returns to Joel. For a moment, she takes his hand, interlocking her fingers with his. She’s not letting him go. Then, she begins to sew.


    Carolyn Petit


  • Will the Supreme Court Blow Up the Internet?



    Since at least 2020, Justice Clarence Thomas has essentially been pleading with lawyers to bring him “an appropriate case” challenging the scope of a statute Republican politicians love to hate: Section 230 of the Communications Decency Act. The law, which has been around since the mid-1990s, has been hailed as the Magna Carta of the internet and the 26 words that built the modern World Wide Web. Its text isn’t exactly a model of clarity, clearly belonging to the era of dial-up modems and free AOL CD-ROMs: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

    Whatever the precise meaning of these words, they’ve been a boon to internet companies, which have been broadly immunized from what everyday users post, and then some. The law allows companies to moderate what users post, without fear that they’ll be held liable if some truly harmful content, like posts inciting an insurrection on the Capitol, goes unchecked. In that regard, it’s a somewhat conservative-minded provision, and even good for business, but many Republicans, including the likes of Donald Trump and Josh Hawley, see Section 230 as a threat to conservative speech and viewpoints on Facebook, Twitter, and other platforms. Democrats, for their part, have their own reasons for disliking Section 230.

    After Trump was banned for shitposting about overturning the last presidential election, he went as far as to sue Twitter, Facebook, and YouTube, declaring Section 230 “an unconstitutional delegation of authority.” He was laughed out of court in his Twitter case in short order. Thomas, however, has echoed some of these same grievances. “We will soon have no choice but to address how our legal doctrines apply to highly concentrated, privately owned information infrastructure such as digital platforms,” he acknowledged in an opinion that name-checked Trump and his troubles.

    That very brief primer on Section 230 brings us to Gonzalez v. Google and Twitter v. Taamneh, a pair of cases the Supreme Court considered on Tuesday and Wednesday that, until this week, many believed could break the internet. They stem from lawsuits accusing YouTube and Twitter of facilitating the spread of content that led to Islamic State terrorist attacks in France and Turkey. The allegations in the two cases are essentially the same—the companies should be held liable for content that ultimately led to people dying. But only the case against YouTube turns on Section 230, and thus it’s the one that’s received the bulk of attention from scholars and advocates. (The case against Twitter turns on a different law that imposes liability whenever someone provides material support to someone else in an act of international terrorism.)

    By their very nature, neither case is a walk in the park. For more than five hours in all, the justices were clearly struggling with what to do in the disputes—and at times appeared completely confused by the legal issues at stake, if not the workings of the internet itself. One area of consensus: the understanding that the Court’s eventual ruling could have serious consequences. “Would Google collapse and the internet be destroyed if YouTube and, therefore, Google were potentially liable for posting and refusing to take down videos that it knows are defamatory and false?” Justice Samuel Alito pondered at one point.

    Of course, much has changed since the passage of Section 230, which arose at a time when chat rooms, message boards, and comments sections on news sites were the primary modes of interaction. This was long before targeted algorithms, personalized ads, and recommendations of all kinds—now the bread and butter of just about any major platform on the internet. There’s little dispute that all of that content curation and spoon-feeding is very much generated by the platforms. Should Section 230 immunize Instagram if, say, its algorithm feeds teens content that could lead to self-harm or other ills? “Every other industry has to internalize the costs of its conduct,” Justice Elena Kagan said on Tuesday. “Why is it that the tech industry gets a pass? A little bit unclear.”

    Indeed, the Justice Department and a number of advocates are hoping the Supreme Court doesn’t give Big Tech a pass, but reaches some sort of middle ground: continuing immunity for content moderation decisions regarding third parties—think Trump getting booted off Twitter—but no immunity for a platform’s own targeted recommendations. In other words, YouTube can’t be blamed for wittingly or unwittingly failing to take down an ISIS video, but an algorithm that feeds that same video to a person who later becomes radicalized and commits an unspeakable act of violence may be fair game under the law.

    The Supreme Court may well be tempted to go that route—or other routes, like drawing a distinction between speech and conduct—but anything that breaks new ground would mark a major policy shift, opening the door to the charge that they’re yet again making up law on the fly. “Isn’t it better … to keep it the way it is for us, and Congress—to put the burden on Congress to change that and they can consider the implications and make these predictive judgments?” Justice Brett Kavanaugh asked at one point. Along the same lines, Kagan, who hasn’t been shy about calling out the Supreme Court’s recent excesses, seemed keenly aware of the dangers of drawing lines in an area where the justices are novices at best. “I mean, we’re a court,” she said. “We really don’t know about these things. You know, these are not like the nine greatest experts on the Internet.” (As CNN recently reported, the justices’ own information-technology practices leave much to be desired.)

    On Section 230, as in other areas of law with far-reaching consequences, more of Kagan’s humility would be welcome. Less would be more. If the Supreme Court wants a way out of the messiness of deciding these cases, or if the justices fear their “solution” may make matters worse, they could always dismiss them “as improvidently granted”—lawspeak for “Oops, we should’ve never taken these cases.” As internet law expert Eric Goldman wrote reflecting on the Google case, almost any way the Supreme Court goes could make a hot mess out of things. “I am slightly relieved about the tenor of the justices’ questions,” he wrote in a blog post. “However, I remain nervous that the court’s opinion will still change the status quo, potentially significantly, by opening up new doors for plaintiffs to explore.”


    Cristian Farias


  • Supreme Court hears case that could reshape the



    Washington — Kati Morton was a reluctant adopter of YouTube.

    A therapist working toward her license in California, Morton was first encouraged by her then-boyfriend and now-husband to explore posting videos on the platform as a way to disseminate mental health information.

    The year was 2011, and Morton, like many others, thought YouTube primarily consisted of videos of cats playing the piano and make-up tutorials. But after seeing other content posted on the site, Morton decided to give it a shot.

    Her audience started small, with her videos garnering a handful of views. But in the more than a decade since then, Morton’s YouTube channel has grown to more than 1.2 million subscribers.

    Crucial to the growth of Morton’s audience is YouTube’s system for recommending content to users, which the company began building in 2008. It relies on a highly complex algorithm to predict what videos will interest viewers and keep them watching. Today, half of Morton’s views come from recommendations, she said.

    “If you could see the entire life of the channel, it was really, really slow and steady,” Morton told CBS News. “And then through recommendations, as well as collaborations, things have grown as you’re able to reach a broader audience and YouTube is better able to understand the content.”

    YouTube’s recommendations algorithm, and those used by platforms like TikTok, Facebook and Twitter, are now at the heart of a legal dispute that will go before the Supreme Court on Tuesday, in a case that involves the powerful legal shield that helped the internet grow.

    “We’re talking about rewriting the legal rules that govern the fundamental architecture of the internet,” Aaron Mackey, senior staff attorney at the Electronic Frontier Foundation, told CBS News of what’s at stake in the case, known as Gonzalez v. Google. 

    “A backbone of online activity”

    Section 230 of the Communications Decency Act immunizes internet companies from liability over content posted by third parties and allows platforms to remove content considered obscene or objectionable. The dispute before the Supreme Court marks the first time the court will consider the scope of the law, and the question before the justices is whether Section 230’s protections for platforms extend to targeted recommendations of information.

    The court fight arose after terrorist attacks in Paris in November 2015, when 129 people were murdered by ISIS members. Among the victims was 23-year-old Nohemi Gonzalez, an American college student studying abroad who was killed at a bistro in the city. 

    Gonzalez’s parents and other family members filed a civil lawsuit in 2016 against Google, which owns YouTube, alleging that the tech company aided and abetted ISIS in violation of a federal anti-terrorism statute by recommending videos posted by the terror group to users.

    Google moved to dismiss the complaint, claiming it was immune from the claims under Section 230. A federal district court in California agreed and, regarding YouTube’s recommendations, found that Google was protected under the law because the videos at issue were produced by ISIS.

    The U.S. Court of Appeals for the 9th Circuit affirmed the district court’s ruling, and Gonzalez’s family asked the Supreme Court to weigh in. The high court said in October it would take up the dispute.

    The court fight has elicited input from a range of parties, many of which are backing Google in the case. Platforms like Twitter, Meta and Reddit — all of which rely on Section 230 and its protections — argue algorithmic recommendations allow them to organize the millions of pieces of third-party content that appear on their sites, enhancing the experience for users who would otherwise be forced to sift through a mammoth amount of posts, articles, photos and videos.

    “Given the sheer volume of content on the internet, efforts to organize, rank, and display content in ways that are useful and attractive to users are indispensable,” lawyers for Meta, the parent company of Facebook and Instagram, told the court.


    Video: What is Section 230 and why do people want it repealed? (12:55)

    Even the company that operates online dating services Match and Tinder pointed to Section 230 as “vital” to its efforts to connect singles, as the law allows “its dating platforms to provide recommendations to its users for potential matches without having to fear overwhelming litigation.”

    But conservatives are using the case as a vehicle to rail against “Big Tech” firms and amplify claims that platforms censor content based on political ideology.

    Citing lower court decisions they believe have led to a “broad grant of immunity,” a group of Republican senators and House members told the Supreme Court that platforms “have not been shy about restricting access and removing content based on the politics of the speaker, an issue that has persistently arisen as Big Tech companies censor and remove content espousing conservative political views, despite the lack of immunity for such actions in the text of” Section 230.

    The case has presented the justices with a rare opportunity to hear directly from the co-authors of the legislation at issue. Ron Wyden, now a Democratic senator from Oregon, and Chris Cox, a former GOP congressman from California, crafted Section 230 in the House in 1996. The bipartisan pair filed a friend-of-the-court brief explaining the plain meaning of their law and the policy balance they sought to strike.

    “Section 230 protects targeted recommendations to the same extent that it protects other forms of content curation and presentation,” they wrote. “Any other interpretation would subvert Section 230’s purpose of encouraging innovation in content moderation and presentation. The real-time transmission of user-generated content that Section 230 fosters has become a backbone of online activity, relied upon by innumerable internet users and platforms alike.”

    Google, they argued, is entitled to liability protection under Section 230, since the platform’s recommendation algorithm is merely responding to user preferences by pairing them with the types of content they seek. 

    “The algorithm functions in a way that is not meaningfully different from the many curatorial decisions that platforms have always made in deciding how to present third-party content,” Wyden and Cox said. 

    The battle also highlights competing views about the internet today and how Section 230 has shaped it. For tech companies, the law has laid the groundwork for new platforms to come online, an industry of online creators to form and free expression to flourish. For Gonzalez’s family and others, the algorithmic recommendations have proven deadly and harmful.

    Like the Gonzalezes, Tawainna Anderson has fought to hold a social media platform responsible for the content it recommends to users.

    Last May, Anderson sued TikTok and its parent company, China-based ByteDance, over the death of her 10-year-old daughter Nylah, who died in late 2021 after trying to perform the dangerous “Blackout Challenge,” in which users are pushed to strangle themselves until they pass out and then share videos of the experience.

    The challenge, which went viral on TikTok, was recommended to Nylah through her account’s “For You” page, a curated feed of third-party content powered by TikTok’s algorithmic recommendation system.

    “They are actually feeding it to our children. They are sending them videos that they never even searched before,” Anderson told CBS News chief legal correspondent Jan Crawford. 

    Anderson’s lawsuit sought to hold TikTok accountable for deliberately funneling dangerous content to minors through the challenges and encouraging behavior that put their lives in danger. TikTok asked the federal district court in Pennsylvania to dismiss the suit, invoking Section 230. 

    U.S. District Judge Paul Diamond tossed out the case in October, writing that the law shielded TikTok from liability because it was promoting the work of others. But he acknowledged in a brief order that TikTok made the Blackout Challenge “readily available on their site” and said its algorithm “was a way to bring the challenge to the attention of those likely to be most interested in it.”

    “The wisdom of conferring such immunity is something properly taken up with Congress, not the courts,” Diamond wrote.

    Mackey, of the Electronic Frontier Foundation, noted that if people disagree with the reach of Section 230 as the courts have interpreted it, the right remedy is for Congress, not the Supreme Court, to rewrite the law.

    “When they passed it, they set this balance and said not that they didn’t believe there wouldn’t be harmful content, but they believed on balance the creation of opportunities and forums for people to speak, for the growth of the internet and development of a tool that became central to our lives, commerce, political expression — that was what they valued more,” Mackey said. “Congress is free to rewrite that balance.”

    A new creator economy

    In the 27 years since Section 230 became law, the explosive growth of the internet has fueled a multi-billion-dollar industry of independent online creators who rely on large tech platforms to reach new audiences and monetize their content.

    In Morton’s case, her YouTube channel has allowed her to expand beyond her office in Santa Monica, California, and reach patients around the country, including in areas where mental health resources may be scarce.

    “The ability for me to get over a million views on YouTube means that I’m able to reach so many more people, and mental health information isn’t held behind a paywall,” she said.

    Alex Su, a lawyer by training who runs the TikTok account LegalTechBro, first began sharing content on LinkedIn in 2016 as a way to drive awareness of his employer, a technology company. After building up a following of lawyers and others in the legal industry on LinkedIn, Su began experimenting with TikTok in 2020.

    His TikTok videos, which touch on insider experiences of working at a law firm, resonated with other lawyers and people with ties to the profession. He said LinkedIn’s recommendation system has been instrumental in helping him reach his target audience and market his company’s services.

    “These algorithms let me go viral among people who can relate to my jokes,” he told CBS News. “If I put this type of content in front of a general audience, they probably wouldn’t find it as funny.”

    Internet companies and supporters of Section 230 note that the law has allowed new and emerging companies to grow into industry leaders without incurring significant litigation costs fighting frivolous claims.

    Su, an early adopter of LinkedIn and TikTok for those in the legal field, noted that creators are often quick to take advantage of new platforms, where they can reach new audiences.

    “I think it’s no accident that there are these shifts where new entrants come in and you can take advantage of it as a content creator because then you can go viral on that platform with a new audience quickly,” he said. “Without those different platforms, I would not have been able to grow in the way that I did.”

    Few clues from the court

    The Supreme Court has given little indication of how it may approach Section 230. Only Justice Clarence Thomas has written about lower courts’ interpretations of the legal shield.

    “Courts have long emphasized non-textual arguments when interpreting [Section] 230, leaving questionable precedent in their wake,” Thomas wrote in a 2020 statement urging the court to consider whether the law’s text “aligns with the current state of immunity enjoyed by internet platforms.”

    The Supreme Court could issue a ruling that affirms how Section 230 has been interpreted by lower courts, or narrow the law’s immunity.

    But internet companies warned the court that if it limits the scope of Section 230, it could drastically change how they approach content posted to their sites. With a greater risk of costly litigation with fewer protections, companies may be more cautious about letting content appear on their sites that may be problematic, and only allow content that has been vetted and poses little legal risk.

    “If you’re concerned about censorship, the last thing you want is a legal regime that is going to punish platforms for keeping things online,” Mackey said. “It’s going to be increased censorship, more material will be taken down, a lot won’t make it online in the first place.” 

    A decision from the Supreme Court is expected by the summer.


  • Meta to launch paid verification service on Facebook and Instagram, following Twitter’s lead


    Meta will begin testing its new subscription service later this week, which will offer a blue badge to verified accounts on its Facebook and Instagram platforms. Louise Matsakis, a technology reporter for Semafor, joins CBS News to discuss what this new subscription plan entails.


  • Megan Fox criticizes ‘baseless’ rumors about relationship in return to Instagram | CNN





    Megan Fox returned to Instagram on Sunday to address rumors about her relationship with fiancé Machine Gun Kelly.

    “There has been no third party interference in this relationship of any kind,” Fox’s Instagram post read. “That includes, but is not limited to… actual humans, DMs, AI boys or succubus demons.”

    Fox and Kelly announced their engagement in January 2022.

    The actress drew the attention of tabloids last week after she deactivated her Instagram account.

    The move prompted speculation that Fox’s actions – including a February 12 photo and caption that referenced Beyoncé’s biting ballad about suspected infidelity “Pray You Catch Me” – were related to her relationship with Kelly.

    Fox dismissed the rumors as “random baseless news stories” in her statement this weekend and asked those reading to leave “innocent people alone now.”

    Kelly has not addressed the situation.

    Both are next set to make guest appearances on the upcoming third season of the FX series, “Dave.”


  • Meta launching paid verification system for Facebook and Instagram


    Video: Meta reports a rapid rise of AI-generated profiles used by threat actors (04:01)

    Meta, the parent company of Facebook and Instagram, announced Sunday it will begin rolling out a paid subscription program allowing users and businesses to verify their accounts with a blue badge.

    In a Facebook post, CEO Mark Zuckerberg said the new verification system, called “Meta Verified,” will cost $11.99 a month on the web or $14.99 a month for iPhone users. 

    The announcement comes after Elon Musk, the billionaire Tesla founder and owner of Twitter, created a paid-for verification system known as Twitter Blue after taking over the company last year. Twitter also announced Friday that users who do not subscribe to Twitter Blue will soon have to give up using text messages as a two-factor authentication method to secure their accounts, and instead must use other verification methods.
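    Authenticator apps are one such alternative: instead of receiving a code by SMS, the user's device derives a short-lived code from a shared secret and the current time. As a minimal sketch of how that works in general, not anything specific to Twitter's rollout, the standard TOTP scheme (RFC 6238, built on RFC 4226's HOTP) can be implemented with the Python standard library:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret, counter, digits=6):
    """RFC 4226: HMAC-based one-time password for a given counter value."""
    msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def totp(secret, for_time=None, step=30, digits=6):
    """RFC 6238: time-based one-time password (a new code every `step` seconds)."""
    t = int(time.time()) if for_time is None else for_time
    return hotp(secret, t // step, digits)

# RFC 6238 Appendix B test vector: ASCII secret "12345678901234567890", T = 59 s
print(totp(b"12345678901234567890", for_time=59, digits=8))  # → 94287082
```

    A server and an authenticator app that share the secret compute the same code independently, which is why no SMS delivery is needed.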

    According to Zuckerberg, the subscription service will increase authenticity and security across Meta’s services by verifying users’ accounts with a government ID. He said this will create extra protection against impersonators, and subscribers will have direct access to customer support.

    The new product will be available in Australia and New Zealand starting this week.


  • ‘Fire-breathing demon’ dog Ralphie returned to Niagara shelter | CNN





    Will a fourth adoption be the charm for this seemingly unadoptable pup?

    Ralphie, a New York shelter’s adorable “jerk” dog, has been returned to the shelter again after his most recent (and unsuccessful) adoption.

    “Ralphie proved to be more than she could handle,” the shelter explained in an update posted to Facebook on Tuesday after the woman who adopted the French bulldog brought him back.

    News of the canine menace went viral in late January after the Niagara SPCA posted an eye-catching ad for potential adopters. Shelter employees described Ralphie as “a terror in a somewhat small package.”

    “Everything belongs to him. If you dare test his ability to possess THE things, wrath will ensue,” they wrote at the time. “If you show a moment of weakness, prepare to be exploited.”

    This is Ralphie’s third unsuccessful adoption, according to the shelter. The pup’s first family rehomed him after training was unsuccessful. His second family surrendered him to the shelter after he “annoyed” their older dog.

    “What they actually meant was: Ralphie is a fire-breathing demon and will eat our dog, but hey, he’s only 26lbs,” reads a Facebook post from the Niagara SPCA.

    The ornery pup is now enrolled in an intensive six-week boarding and training program that will start on February 20, according to the Tuesday Facebook post. The shelter said that they would start vetting prospective adopters immediately and that the ideal adopter would work with the trainer while Ralphie’s at the residential training program.

    The shelter noted that those who believe “that all Ralphie needs is love” should not apply to adopt the fearsome pup. “He will totally exploit that,” they wrote.

    Neither should families with children or other pets, as he has a history of biting.

    Dog lovers who aren’t intimidated by Ralphie’s formidable reputation can apply to adopt him with a letter of interest and “dog experience ‘resume,’” according to the Facebook post.

    The shelter is also raising money to cover the $6,000 tuition for the training program.

    “No one likes that it didn’t work out for Ralphie, but he will receive the training he needs,” shelter employees added in the post.



  • Meta Employees Are Being ‘Paid to Do Nothing’: Report


    It’s misery at Meta.

    According to the New York Post, cost-cutting measures and mass layoffs have left employees feeling unliked and demotivated.

    A Financial Times report reveals that the malaise is being felt by workers in the trenches all the way up to senior management. What was promised to be a “year of efficiency” by CEO Mark Zuckerberg has been a year of employees sitting on their hands.

    Project budget approvals have been delayed, bringing work to a halt, according to FT. The outlet quoted insiders who say “zero work” is getting done, including on its vital metaverse and advertising initiatives.

    “Honestly, it’s still a mess,” one Meta employee said. “The year of efficiency is kicking off with a bunch of people getting paid to do nothing.”


    Last year, Meta’s stock fell by 72% as its metaverse push landed with a virtual thud, erasing an estimated $700 billion in market value. After laying off around 11,000 employees, Zuckerberg admitted to staffers that Meta over-invested after the pandemic led to a temporary surge in online activity and told employees, “I got this wrong, and I take responsibility for that.”

    Zuck and the Meta gang are not alone. Google, Amazon, Twitter, and Microsoft are among the other tech giants forced to lay off thousands in 2022.


    On an earnings call last week, Zuckerberg said he would restructure the company to find efficiencies. “We’re working on flattening our org structure and removing some layers of middle management to make decisions faster, as well as deploying AI tools to help our engineers be more productive,” Zuckerberg said, adding that he will be quick to shut down projects that are not performing or no longer seen as crucial.

    That oughta boost morale.


    Dan Bova


  • Meta employees are reportedly bracing for more layoffs amid delays to finalized budgets: ‘It’s still a mess’


    Facebook parent Meta conducted its biggest-ever layoffs last November, shedding about 11,000 workers. But more jobs, it appears, are about to be axed.

    CEO Mark Zuckerberg noted in a Facebook post on Feb. 1, “We closed last year with some difficult layoffs and restructuring some teams. When we did this, I said clearly that this was the beginning of our focus on efficiency and not the end.” During an earnings call that same day, he announced 2023 will be Meta’s “year of efficiency.”

    While Meta workers wonder who will be deemed inefficient, the company has delayed finalizing multiple teams’ budgets, according to the Financial Times. Employees who spoke to the British paper on condition of anonymity said morale at the company was low and that little work was getting done on some teams as they awaited abnormally slow budget decisions. 

    Meta declined to comment when contacted by Fortune.

    “Honestly, it’s still a mess,” one employee told the FT. “The year of efficiency is kicking off with a bunch of people getting paid to do nothing.”

    Other workers told the paper the next job cuts are expected next month.

    Middle managers have reason to be nervous.

    ‘More proactive about cutting projects’

    Zuckerberg wrote in his Facebook post, “We’re working on flattening our org structure and removing some layers of middle management to make decisions faster, as well as deploying AI tools to help our engineers be more productive. As part of this, we’re going to be more proactive about cutting projects that aren’t performing or may no longer be as crucial, but my main focus is on increasing the efficiency of how we execute our top priorities.”

    One of those priorities is the metaverse, a largely unrealized virtual world that has underwhelmed users and could take years to become profitable, if it ever does. The company’s metaverse division, Reality Labs, notched a loss of $13.7 billion for 2022, up from a $10.2 billion loss in 2021. 

    Investors have tried pressuring Zuckerberg to scale back the metaverse investments, to no avail. 

    In December, John Carmack, a virtual reality pioneer, left his high-level consulting role at Meta, where he worked on the metaverse. He tweeted on the way out, “I have always been pretty frustrated with how things get done at FB/Meta.  Everything necessary for spectacular success is right there, but it doesn’t get put together effectively.”

    Slow going with the metaverse and three consecutive quarters of year-over-year revenue declines, however, are not stopping stock buybacks at Meta. In its latest earnings statement, Meta said it had increased its share repurchase authorization by $40 billion, noting that last year it bought back about $28 billion.

    Many tech companies that over-hired during the pandemic, when demand for their services surged, have conducted large layoffs in recent months, creating a sense of clashing headlines as the latest U.S. jobs report shows the lowest unemployment in 50 years.

    Steve Mollman

  • US senators seek answers from Meta on whether user data was accessed by China, Russia and others | CNN Business


    Washington (CNN) — Top US lawmakers on the Senate Intelligence Committee want answers from Meta on a newly disclosed internal investigation it conducted in 2018, which found that tens of thousands of software developers in China, Russia and other “high-risk” countries may have had access to detailed Facebook user data before the company clamped down on that access beginning in 2014.

    In a letter to Meta CEO Mark Zuckerberg on Monday, Sens. Mark Warner and Marco Rubio, the chair and vice-chair of the Senate committee, cited a document unsealed last week in an ongoing privacy lawsuit involving the company.

    That document, an internal slide presentation from 2018, suggested that nearly 87,000 developers in China, 42,000 in Russia and a handful based in Cuba, Iran and North Korea had access to Facebook user information through an earlier version of the company’s programming interfaces. The presentation provides an interim update on the probe, which found, among other things, that Iran was home to a “significant number of seemingly Russian developers” of Facebook apps.

    The document does not explicitly outline what types of information the developers could have accessed, but it focuses on a period prior to 2014, before Facebook had restricted third-party access to data such as political views, relationship statuses and education history, among other things.

    The congressional letter seeks more information about the outcome of the investigation, with a particular focus on whether Facebook users’ data could have ended up in the hands of Chinese or Russian intelligence agencies.

    “We have grave concerns about the extent to which this access could have enabled foreign intelligence service activity, ranging from foreign malign influence to targeting and counter-intelligence activity,” the lawmakers wrote.

    The findings are “especially remarkable given that Facebook has never been permitted to operate in [China],” they added.

    Meta’s investigation, launched after the company’s Cambridge Analytica data privacy scandal, had focused on third-party app developers with access to “large amounts of information” and whose software had exhibited “suspicious activity.”

    On Tuesday, Meta told CNN in a statement that the document cited in the letter references data practices that are no longer in effect at the company.

    “These documents are an artifact from a different product at a different time,” said Meta spokesman Andy Stone. “Many years ago, we made substantive changes to our platform, shutting down developers’ access to key types of data on Facebook while reviewing and approving all apps that request access to sensitive information.”

    Meta declined to answer whether the app developer investigation is still ongoing or how many apps have been reviewed since the 2018 slide presentation, which was unsealed in court last week. The document had projected the probe would continue at least through 2020.

    In recent years, policymakers have increasingly sounded the alarm about data leakages to foreign adversaries. Hostile governments could seek to use Americans’ personal information to spread disinformation or identify intelligence targets, US officials have said.

    Those fears have culminated most visibly in tensions with the short-form video app TikTok, whose links to China through its parent company have prompted the US government and numerous states to ban the app from official devices. US officials have also sought to block Chinese telecom firms from the US market over similar concerns.

    But the lawmakers’ letter highlights how worries about data access by foreign adversaries extend beyond TikTok and encompass some of the largest social media platforms.

    Although Meta has moved on with different, more restrictive policies for developers, Warner and Rubio called for the company to explain what information may have been transferred to China, Russia and other nations in the past, and for any evidence the company may have that the data has been abused to target Americans or engage in propaganda campaigns.
