Meta has thrown billions of dollars at its artificial intelligence efforts. Somehow, that is apparently resulting in fewer people being employed. According to a report from Axios, about 600 people lost their jobs in Meta’s “superintelligence” lab in an effort to create a less “bureaucratic” structure.
The cuts reportedly fall primarily on Meta’s FAIR, the company’s long-standing AI research lab, as well as its product-related AI teams and AI infrastructure units. “By reducing the size of our team, fewer conversations will be required to make a decision, and each person will be more load-bearing and have more scope and impact,” Meta chief AI officer Alexandr Wang said in a memo obtained by Axios. TBD Lab, which is tasked with “developing the next generation” of the company’s large language models, was reportedly spared from the layoffs.
The company also reportedly encouraged the employees affected by the layoffs to apply for other open positions within the company, with Wang writing, “This is a talented group of individuals, and we need their skills in other parts of the company.” No word on whether there were efforts to move people into those roles before telling them to put their belongings in a box.
The restructuring is just the latest example of Meta desperately playing catch-up in the AI race. Earlier this year, the company made waves with a hiring spree that saw it throw massive, multi-million dollar paydays at top talent in an effort to poach them from its rivals. It succeeded in luring them away, but hasn’t necessarily figured out what comes next. Some recipients of those big signing bonuses threatened to leave within weeks of joining the company, according to the Financial Times, presumably over the lack of direction within the company. Others did dip, reportedly including people who had been with Meta for years.
Zuck’s company has seemingly yet to figure out what the shape of its AI operation should be. In addition to shelling out NBA max contract-sized payouts, the company poured $15 billion into Scale to get the company’s talent and infrastructure. Since absorbing all that, it has failed to figure out what to do with it. It announced its Superintelligence initiative first to unify its efforts in the AI space, but broke it up into multiple divisions within a matter of weeks. In the meantime, it looks like it’s the employees that Meta isn’t spending millions of dollars on who will be penalized for organizational incompetence.
Meta CEO Mark Zuckerberg, Instagram chief Adam Mosseri and Snap CEO Evan Spiegel will have to testify in an upcoming trial that deals with social media safety and whether the executives’ platforms are addictive. A Los Angeles judge ruled that the three men will need to testify in the trial set to begin in January, according to CNBC.
“The testimony of a CEO is uniquely relevant, as that officer’s knowledge of harms, and failure to take available steps to avoid such harms could establish negligence or ratification of negligent conduct,” Judge Carolyn B. Kuhl wrote. As CNBC points out, the January trial will be closely watched as it’s the first of many lawsuits alleging harms to young social media users that will head to trial.
Lawyers for Meta and Snap had argued that the executives should be spared from testifying at the upcoming trial. Meta’s lawyers reportedly argued that forcing testimony from Zuckerberg and Mosseri would “set a precedent” for future trials. Meta is currently facing numerous lawsuits over alleged harms to younger users of its platforms. The company didn’t immediately respond to a request for comment.
Snap is also facing a number of lawsuits over alleged safety issues. In a statement, the law firm representing Snap said that the judge’s order “does not bear at all on the validity of Plaintiffs claims” and that they “look forward to the opportunity to explain why Plaintiffs’ allegations against Snapchat are wrong factually and as a matter of law.”
OpenAI might be the center of the AI development world these days, but the competition has been heating up for quite a while. And few competitors are bankrolled on the same level as Meta. With a market capitalization of more than $1.75 trillion and a CEO who’s not afraid to spend heavily, Meta has been on a hiring spree in the AI world for months, poaching top tier talent from a variety of competitors.
It appeared recently that the wave of high-profile (and high-dollar) recruitments was coming to an end. In August, Meta quietly announced a freeze on hiring after adding roughly 50 AI researchers and engineers. This month, though, two more big names have joined the Meta roster.
While Meta might have a gap to close with its AI rivals, the company has assembled an all-star team to catch up and move forward. Here are some of the most notable experts to come on board.
Andrew Tulloch, co-founder of Thinking Machines Lab
Tulloch partnered with OpenAI’s former chief technical officer Mira Murati to launch Thinking Machines Lab in February of this year. Now he’s returning to his roots. Considered a leading researcher in the AI field, Tulloch previously spent 11 years at Meta, leaving in 2023 to join OpenAI, then departing with Murati. Meta founder Mark Zuckerberg has been chasing Tulloch for a while, reportedly making an offer with a $1.5 billion compensation package at one point, which Tulloch rejected. (Meta has called the description of the offer “inaccurate and ridiculous.”) There’s no word on what Tulloch was offered that made him decide to move.
Ke Yang, Senior Director of Machine Learning at Apple
Yang, who was appointed to lead Apple’s AI-driven web search effort just weeks ago, is another big October Meta hire. At Apple, his team (Answers, Knowledge and Information, or AKI) was working to make Siri more Chat-GPT-like by pulling that information from the web, making his departure one of Meta’s most notable poachings. Meta convinced him to come over after recruiting several of his colleagues.
Shengjia Zhao, co-creator of OpenAI’s ChatGPT
Zhao joined Meta in June to serve as chief scientist of Meta Superintelligence Labs. Beyond co-creating ChatGPT, he also played a role in building GPT-4 and led synthetic data at OpenAI for a stint. “Shengjia has already pioneered several breakthroughs including a new scaling paradigm and distinguished himself as a leader in the field,” Zuckerberg wrote in a social media post in July. “I’m looking forward to working closely with him to advance his scientific vision.”
Daniel Gross, co-founder of Safe Superintelligence
As it did with Murati’s Thinking Machines Lab, Meta tried to acquire Safe Superintelligence, the AI startup co-founded by OpenAI’s former chief scientist, Ilya Sutskever. When that offer was rejected, Zuckerberg began looking for talent, luring co-founder and CEO Gross in June. Gross is working on AI products for Meta’s superintelligence group. By joining Meta, he’s reunited with former GitHub CEO Nat Friedman, with whom he once created the venture fund NFDG.
Ruoming Pang, Apple’s head of AI models
Pang was one of the first high-profile departures from Apple to Meta, making the jump in July. At the time, he was Apple’s top executive overseeing AI models and had been with the company since 2021. While there, he helped develop the large language model that powers Apple Intelligence and other AI features, such as email and webpage summaries.
Matt Deitke, co-founder of Vercept
Vercept is a start-up that’s attempting to build AI agents that use other software to autonomously perform tasks, something that caught Zuckerberg’s attention. Deitke proved hard to lure, though. He reportedly turned down a $125 million, four-year offer, but a direct appeal by Zuckerberg (and a reported doubling of that offer) convinced him to make the move (with the blessing of his peers). Kiana Ehsani, his co-founder and CEO, announced his departure on social media, joking, “We look forward to joining Matt on his private island next year.”
Alexandr Wang, founder and CEO of Scale AI
Wang left his startup to join Meta after the social media company made a $14.3 billion investment into Scale AI (without any voting power in the company). “As you’ve probably gathered from recent news, opportunities of this magnitude often come at a cost,” Wang wrote in a memo to staff. “In this instance, that cost is my departure.” Wang joined Meta’s superintelligence unit. Scale made its name by helping companies like OpenAI, Google and Microsoft prepare data used to train AI models. Meta was already one of its biggest customers.
Nat Friedman, former CEO of GitHub
Friedman was already a part of Meta’s Advisory Group before he was brought on full-time. That external advisory council provides guidance on technology and product development. Now, he’s working with Wang to run the superintelligence unit. Friedman previously was CEO of GitHub, a cloud-based platform that hosts code for software development. Most recently, he was a board member at the AI investment firm he started with Safe Superintelligence’s Gross.
As for what Zuck is going to do with all this talent, the sky’s the limit, but there’s some catching up to do first. The company’s Llama large language models haven’t quite measured up to those of OpenAI or Google, but with Meta’s gargantuan user base (3.4 billion people use one of the company’s apps each day), Meta’s AI could still be one of the most widely used in the years to come.
Attorney General Pam Bondi announced Tuesday that the U.S. Justice Department had succeeded in getting Facebook to remove a group on the platform that allowed people in Chicago to alert their neighbors when ICE was in the area.
“Today following outreach from @thejusticedept Facebook removed a large group page that was being used to dox and target @ICEgov agents in Chicago,” Bondi wrote on X.
“The wave of violence against ICE has been driven by online apps and social media campaigns designed to put ICE officers at risk just for doing their jobs,” Bondi continued. “The Department of Justice will continue engaging tech companies to eliminate platforms where radicals can incite imminent violence against federal law enforcement.”
Right-wing influencer Laura Loomer tweeted about a Facebook group called “ICE Sighting-Chicagoland” on October 8, which appears to be the group that was removed. Loomer claimed on Monday that she had been informed by a source at the Department of Justice that the agency had contacted Facebook’s parent company, Meta, about the page.
“DOJ source tells me they have seen my report and they have contacted Facebook and their executives at META to tell them they need to remove these ICE tracking pages from the platform,” Loomer tweeted. “We will see if they comply. There are DOZENS of pages like the one below that endanger the lives of @ICEgov agents.”
In an emailed statement to Gizmodo, Meta didn’t confirm the name but said a group “was removed for violating our policies against coordinated harm.” Meta didn’t respond to follow-up questions about what kind of specific activity was taking place that violated the policy. But the policy does note that it’s against the Facebook rules to out the undercover status of “law enforcement, military, or security personnel if the content contains the agent’s name, their face or badge.”
It’s completely legal to share information about where police are operating. In fact, the idea that sharing such information should be illegal is only common in authoritarian countries. But Bondi and the Department of Justice have declared war on anyone sharing such basic information, characterizing it as dangerous or violent. And it is against Facebook’s policies for anyone to share information about the secret police that are currently roaming American streets, abducting people. It’s not clear, however, that “doxxing” is what led to the group being removed. That’s just the suggestion from the tech giant.
ICE agents in the U.S. have been criticized for their lack of identification and their frequent use of masks while kidnapping people off the streets. Free countries typically don’t have masked agents of the state refuse to identify themselves, and experts have warned that the practice is just one more step in America’s descent into fascism. Meta CEO Mark Zuckerberg, it should be noted, has become quite chummy with President Trump over the past year.
ICE is currently engaging in a campaign of terror against residents of Chicagoland, and the stories that have been emerging are sometimes hard to believe. For example, the Chicago Tribune reported on Monday that a 44-year-old woman named Greeley, getting off a double shift at a bar downtown earlier this month, was suddenly grabbed by three federal agents who zip-tied her hands. They questioned her for an hour, simply because she looked Latina, and didn’t believe her passport was real.
Greeley, who was born at Illinois Masonic hospital and is adopted, carries a copy of her passport just in case she runs into federal agents.
“I am Latina and I am a service worker,” Greeley said. “I fit the description of what they’re looking for now.”
During the encounter, Greeley said they told her she “doesn’t look like” a Greeley.
“They said this isn’t real, they kept telling me I’m lying, I’m a liar,” Greeley recalled. “I told them to look in the rest of my wallet, I have my credit cards, my insurance.”
When the agents let her go, Greeley got home and screamed when she saw the shadow on her door. Days after the incident, Greeley said, it’s still “terrifying.”
Another recent story, from the Chicago Sun-Times, explained how a Sept. 30 raid on an apartment building saw residents trying to protect their neighbors from being kidnapped. One resident hid a mother and her 7-year-old daughter for three days in a way that many have compared to the story of Anne Frank.
There are countless stories like that emerging from the Chicago area, which makes it that much more galling when Bondi insists that sharing information about ICE is somehow creating a danger for the ICE agents.
Apple also recently removed apps that allowed people to share the location of federal agents spotted in their neighborhoods. Bondi took credit for pressuring Apple to disappear the apps. The removals are eerily similar to what happened during pro-democracy protests in Hong Kong in 2019, where Apple also removed an app that allowed users to see Hong Kong police movements via crowdsourced information.
Is the U.S. becoming more and more like China with each passing day? Even the Wall Street Journal seems to think so. And when Bondi calls Americans who don’t like to see their neighbors abducted “radicals,” you know we’re in a very bad place as a country.
Eddington writer-director Ari Aster didn’t have to entertain The Hollywood Reporter‘s questions about an embryonic draft of his COVID-19 Western, but he did so anyway, further illustrating how the writing and rewriting process doesn’t truly end until picture is locked. The film always introduced its fictional small-town setting of Eddington, New Mexico through the perspective of a troubled local vagrant named Lodge (Clifton Collins Jr.), but according to an earlier script, the sequence originally contained a real-life tech billionaire with a notable history on the big screen.
As Lodge babbles and walks barefoot back to town, Aster establishes a sign for a proposed data center, one of numerous issues that have divided Eddington’s sub-3,000 population and the nearby reservation known as Santa Lupe Pueblo. Similar data centers are being built all over the U.S. right now in order to support Big Tech’s overwhelming investment in AI infrastructure. However, there have been widespread objections over these facilities’ potential resource depletion, particularly of water.
Meta’s own data centers have been in the news due to this very concern, so it makes sense why Aster once scripted a quick scene involving Meta chairman Mark Zuckerberg. In that draft, Lodge watched the tech CEO emerge from a stretch limousine with a map in hand so he could assess Eddington’s offerings. But the appearance was scrapped during development, never advancing to the point of assembling a casting list.
“That fell by the wayside a long time before we started making it,” Aster tells THR during a recent FYC conversation. “That was an early idea, and it was only one moment.”
The battle for Eddington’s soul is primarily waged by Joaquin Phoenix’s Sheriff Joe Cross and Pedro Pascal’s Mayor Ted Garcia. The two men have opposing views on just about everything: politics, the aforementioned data complex and COVID-19 safety protocols as of May 2020. Furthermore, they have longstanding personal grievances, mainly involving Joe’s wife, Louise (Emma Stone).
After a dust-up with Ted over the local grocery store’s adherence to the state’s mask mandate, Joe impulsively announces his rival candidacy for mayor of Eddington, and tensions eventually boil over to the point of deadly violence. Eddington contains a number of images, story points and themes that struck a chord at the time of its theatrical release in July 2025, but a number of them have proven to be quite prophetic of more recent events within America’s fraught political landscape.
“I’m pretty heartbroken about where we are. I’m very scared. I feel immense dread all the time. This movie came out of that sense of dread, and I certainly see how the film is prescient,” Aster says. “There are things that have happened since [the theatrical release] that the film anticipates, but the film is also the product of me just trying to look unblinkingly at where we are. If I’m not using the world right now for my work, then it’s just going to be using me. This is a very, very dark moment, and so I hope that the film feels reflective of where we are.”
Below, Aster also discusses other adjustments he made to his ever-evolving script, including the substantial dialogue removal during Joe and Ted’s duel over the volume of Katy Perry’s “Firework.”
***
The final shot has lingered in my mind since July. My first thought in the theater was, “This is who won, and this is who was always going to win.” Is that reading on your wavelength?
Yep! (Aster smiles.)
I read an early version of the script that does not end with that shot. It ended with invalid Joe (Joaquin Phoenix) and Dawn’s (Deirdre O’Connell) unique bedroom arrangement, minus the third party. When did it occur to you that the data center shot should be the exclamation point on the piece?
Well, it was in the shooting script before we started production, so you probably read a version that was maybe half a year before we began shooting. But it felt like it came to be a very important part of the film’s spine before we began. And now, it’s the heart of the film. It’s the point of the film.
Ari Aster and Pedro Pascal on the set of Eddington
Richard Foreman/A24
There have been recent stories about the water-related impact of a Meta data center in Georgia, and that’s one of several ways in which Eddington has become even more relevant since its theatrical release. On one hand, it might be reaffirming to know you had your finger on the pulse, but on the other hand, I can’t imagine you want to be right about all these things. Do you actually feel conflicted about the film’s prescience?
I’m pretty heartbroken about where we are. I’m very scared. I feel immense dread all the time. This movie came out of that sense of dread, and I certainly see how the film is prescient. There are things that have happened since [the theatrical release] that the film anticipates, but the film is also the product of me just trying to look unblinkingly at where we are. As a storyteller, I take as many pieces of this landscape, this culture, and build a house out of it, create a piece of architecture. If I’m not using the world right now for my work, then it’s just going to be using me. This is a very, very dark moment, and so I hope that the film feels reflective of where we are.
I don’t have any answers, and the movie doesn’t pretend to have any answers, but it’s very easy to lose the forest for the trees. So I hope that the movie is able to pull back far enough to give a broader picture of where we are. Of course, I have a very limited picture of where we are because I’m also just completely mired in my own identity and, honestly, in my own algorithm. I have access to the information that I have access to, and I do what I can to get as broad a picture of what everybody is seeing, especially while I was making this film. I really tried to do that.
Eddington is a dark film, and I’ve heard people describe it as mean-spirited. But again, it’s trying to reflect the mood of the country, and things have gotten really mean. Things are very cruel. This culture is incredibly cruel, and things have gotten really obscene. So, in some ways I had to tamp all that stuff down in the film because it could have easily been much more alienating and much more unpleasant. So it was interesting to have to actually sand off the edges in some cases just so it could be digestible.
Micheal Ward, Joaquin Phoenix and Luke Grimes in Ari Aster’s Eddington
Richard Foreman/A24
A Mark Zuckerberg character was once scripted to appear during Lodge’s (Clifton Collins Jr.) opening sequence. (Per Lodge’s POV, he sees Zuckerberg get out of a stretch limo at night and survey the town while holding a map.) Did that quickly fall by the wayside?
Oh, so you read a much older version. Yeah, that fell by the wayside a long time before we started making it.
So you never got as far as thinking about casting?
No, that was an early idea, and it was only one moment, as you know.
[The following question contains major spoilers for Eddington.]
The former opening also had the first of two major jurisdictional standoffs between Sheriff Joe Cross and Santa Lupe Pueblo police. Did you decide that it would be more dramatic to save that type of conflict for the investigation into the Garcia murders?
Well, we actually did shoot one version of that first standoff, the one with the charred body near the wheelchair and the land grant. That was something that we did shoot, and it was just too long and complicated. It was something that was meant to never quite come back into the story. So that was something that we reshot in the middle of editing. There were a few pickups we needed, and we decided, “Let’s do something simpler at the beginning here so that we can just get going.” And we didn’t need to repeat the jurisdictional issue. It worked right in the middle of the film, well enough that it didn’t need the doubling. [Writer’s Note: Joe’s opening scene instead became a more streamlined squabble with reservation police over his resistance to wearing a mask on their soil.]
Joaquin Phoenix’s Sheriff Joe Cross and Pedro Pascal’s Mayor Ted Garcia in Ari Aster’s Eddington
Courtesy of Cannes
I thought it was interesting how you removed most of the scripted dialogue from two big scenes: the party fight between Joaquin and Pedro’s characters, and Louise’s (Stone) departure. Did you make that determination? Or did the actors insist that they could sell most of it with just their expressions and body language?
No, that was changed [by me]. Yeah, you read a really early version that shouldn’t be available to read.
Sorry, I just didn’t want to give you the same interview you’ve already been given.
No, it’s fine. Things always leak. I changed that [party scene] as I was working on the script and polishing it and seeing what we needed. A lot of the dialogue that’s in that scene we pulled earlier, so it’s in [Joe and Ted’s] interaction on the street. But that was all just work that I had been doing to make the film leaner. It then became clear to me that, at that point in the film, enough words have been exchanged, and the scene would be much stronger with just the specific action of what’s happening.
There were a number of stories this year about Joaquin and how he tends to go through a period of self-doubt in the lead-up to a project. This is not unheard of among artists. He usually works through it, but sometimes he doesn’t. Assuming he’s had phases on your two movies where he gets in his head, what’s the key when that happens? Do you just talk things through and find a happy medium?
Joaquin completely throws himself into whatever he’s doing, and he takes the decision to actually commit to something very seriously. I think he suffers over it, and I certainly understand that. I have nerves about everything I’m doing and wondering whether it’s the right thing. With Joaquin, I think he faces that with every scene. For every scene, he comes in and asks, “How do I find this? How do I find something interesting, true and urgent that is worth expressing?” I think he lives in horror of the idea of acting and just giving a performance. I think he even recoils at that word performance, and that’s why I really love working with him.
He will challenge everything you put in front of him, and that very often yields something surprising and, sometimes, electrifying. What you want from any actor is for the scene or the movie to come to life, or to get away from you and take on its own energy. And there is a magic to what Joaquin does. He’s trying to summon that. He’s trying to summon something that is beyond him. He’s also a very technical actor, surprisingly technical. He knows what he’s doing, and he’s very conscious of craft.
Joaquin Phoenix’s Sheriff Joe Cross in Eddington
A24
I’m always fascinated by the fraternity between filmmakers. I routinely hear stories of Guillermo del Toro spending a day in a filmmaker’s editing room and whatnot. Zach Cregger also just told me about a major contribution that your buddy Bill Hader made during the rewriting of Weapons. You’ve thanked people such as Chris Abbott on your last couple films, and the same goes for one or both of the Coens, too. Can you talk about the support or contributions you receive from your community?
I live in New York, and I know a lot of New York filmmakers. We’ll be called into a feedback screening to watch something before it’s done and give notes. Typically, when you see somebody in the thank you section, you’re thanking them for giving feedback, or showing up and just watching the film before it’s done. Sometimes, you’re getting a lot of feedback from somebody, meaning, if you’re close with them, you’ll talk to them for a while. Yeah, Joel [Coen] was very helpful, Ethan [Coen] was helpful. I’m friends with Bo Burnham, and he’s always helpful. He’s very smart. They’re all incredibly smart people. Bill [Hader] is also somebody that I’ll often bounce stuff off of, and he’ll bounce stuff off of me. So it’s great to have friends like that.
You’ve mentioned previously that you have a follow-up of sorts in the world of Eddington. Based on the ending, I’m guessing that it would involve the Michael character. What’s your temperature on that potential project at the moment?
Well, I just want to keep making films that are engaging with the world and with the moment and with where we are. We’re living in such a combustible time, and things are changing so quickly and so drastically. So it feels important to be engaged with that and to not retreat from that.
*** Eddington is currently available on digital ahead of its exclusive 4K release via A24 on Oct. 21.
A Meta executive in charge of building the company’s metaverse products told employees that they should be using AI to “go 5X faster,” according to an internal message obtained by 404 Media.
“Metaverse AI4P: Think 5X, not 5%,” reads the message posted by Vishal Shah, Meta’s VP of Metaverse (“AI4P” is short for “AI for Productivity”). The idea is that programmers should be using AI to work five times more efficiently than they currently do, not just 5 percent more efficiently.
“Our goal is simple yet audacious: make AI a habit, not a novelty. This means prioritizing training and adoption for everyone, so that using AI becomes second nature—just like any other tool we rely on,” the message read. “It also means integrating AI into every major codebase and workflow.” Shah added that this doesn’t just apply to engineers. “I want to see PMs, designers, and [cross functional] partners rolling up their sleeves and building prototypes, fixing bugs, and pushing the boundaries of what’s possible,” he wrote. “I want to see us go 5X faster by eliminating the frictions that slow us down. And 5X faster to get to how our products feel much more quickly. Imagine a world where anyone can rapidly prototype an idea, and feedback loops are measured in hours—not weeks. That’s the future we’re building.”
Zuckerberg has spoken extensively about how he expects AI agents to write most of Meta’s code within the next 12 to 18 months. The company also recently decided that job candidates would be allowed to use AI as part of their coding tests during job interviews. But Shah’s message highlights a fear that workers have had for quite some time: That bosses are not just expecting to replace workers with AI, they are expecting those who remain to use AI to become far more efficient. The implicit assumption is that the work that skilled humans do without AI simply isn’t good enough.
At this point, most tech giants are pushing AI on their workforces. Amazon CEO Andy Jassy told employees in July that he expects AI to completely transform how the company works—and lead to job loss. “In the next few years, we expect that this will reduce our total corporate workforce as we get efficiency gains from using AI extensively across the company,” he said.
Meta Platforms said on Wednesday it would begin using people’s interactions with its generative AI tools to personalize content and advertising across its apps such as Facebook and Instagram starting on December 16.
Users will be notified of the changes from October 7 and they will not have an option to opt out, the social media giant said, though the update applies only to those who use Meta AI.
Meta said users’ interactions with its AI features, whether by voice or text, would be added to existing data such as likes and follows to shape recommendations for content, including Reels, and for ads. For example, a user talking about hiking with Meta AI could later be shown hiking groups, friends’ trail updates or ads for boots.
“People’s interactions simply are going to be another piece of the input that will inform the personalization of feeds and ads,” said Christy Harris, privacy policy manager at Meta. “We’re still in the process of building the first offerings that will make use of this data.”
When people have conversations with Meta AI about more sensitive topics such as their religious views, sexual orientation, political views, health, racial or ethnic origin, Meta will not use those topics to show them ads, it said.
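Meta hasn’t described the mechanics of that exclusion, but conceptually it amounts to a filter sitting between AI-interaction signals and the ad-targeting pipeline. Here is a minimal sketch of that idea in Python; the topic labels, function name, and data shapes are all hypothetical, not Meta’s actual categories or code:

```python
# Hypothetical sketch of the stated policy: strip sensitive topics from
# AI-chat signals before they reach ad targeting. The labels and function
# are illustrative; Meta has not published its actual categories or code.

SENSITIVE_TOPICS = {
    "religious_views",
    "sexual_orientation",
    "political_views",
    "health",
    "racial_or_ethnic_origin",
}

def ad_targeting_signals(interaction_topics: list[str]) -> list[str]:
    """Keep only topics eligible to personalize ads."""
    return [t for t in interaction_topics if t not in SENSITIVE_TOPICS]

# A chat that touches hiking and a health question: per the policy,
# only "hiking" may inform ads; "health" is discarded.
print(ad_targeting_signals(["hiking", "health"]))  # -> ['hiking']
```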
The rollout will begin in most regions on December 16 and expand over time, excluding the UK, the European Union and South Korea.
Meta AI now has 1 billion monthly active users across the company’s family of apps.
CEO Mark Zuckerberg said at the company’s annual shareholder meeting this year that the “focus for this year is deepening the experience and making Meta AI the leading personal AI with an emphasis on personalization, voice conversations and entertainment.”
Meta launched its first consumer-ready smart glasses with a built-in display at its annual Connect conference last month.
The company’s use of AI interactions for ads comes as other tech giants, including Google and Amazon, have begun monetizing AI tools, often through cloud-based services. But few have used AI chat interactions to personalize content and advertising across multiple platforms at the scale Meta is attempting.
Reporting by Echo Wang in New York; Editing by Jamie Freed.
In a move no one asked for, Meta is introducing “Vibes,” a new feed in the Meta AI app and on meta.ai for sharing and creating short-form, AI-generated videos. Think TikTok or Instagram Reels, but every single video you come across is essentially AI slop.
Meta CEO Mark Zuckerberg announced the rollout of Vibes in a post on Instagram that features a series of AI-generated videos. In one video, a group of fuzzy-looking creatures hops from one fuzzy cube to another. In another, a cat kneads some dough. A third video shows what appears to be an ancient Egyptian woman taking a selfie on a balcony overlooking Ancient Egypt.
According to Meta, as you browse the new feed, you’ll see AI-generated videos from both creators and other users. Over time, Meta’s algorithm will begin to show you personalized content.
You have the option to generate a video from scratch, or remix a video that you see on your feed. Before publishing, you can add new visuals, layer in music, and adjust styles. You can then post the video directly to the Vibes feed, DM it to others, or cross-post to Instagram and Facebook Stories and Reels.
Meta’s chief AI officer Alexandr Wang shared in a post that the company has partnered with AI image generators Midjourney and Black Forest Labs for the early version of Vibes, while Meta continues developing its own AI models.
Since no one really wants an AI-generated version of TikTok, the user comments in response to Zuckerberg’s announcement were about what you’d expect. The top comment on the post reads: “gang nobody wants this,” while another popular comment says: “Bro’s posting ai slop on his own app.” Another comment reads: “I think I speak for everyone when I say: What….?”
The new feed likely won’t be welcomed by users, especially since the rise of AI technology has caused social media platforms to become flooded with AI slop. The problem has become so widespread that companies like YouTube are now looking to crack down on the issue. This makes Meta’s move particularly puzzling, given that the company said earlier this year that it was tackling “unoriginal” Facebook content and advised creators that they should focus on “authentic storytelling,” not short videos offering little value.
The launch of the new feed comes as Meta has recently invested heavily in revamping its AI efforts amid concerns that it was falling behind competitors like OpenAI, Anthropic, and Google DeepMind.
The Meta AI app — you know, the one where people publicly shared their private conversations with the chatbot by accident — now has a dedicated feed for AI slop. The Vibes feed is a home for AI-generated short-form videos in the Meta AI app and website. Users can scroll the creations of other people, or can make their own clips, either by building from scratch or adapting other videos from the feed. The videos people make can also be shared via DM or cross-posted to Instagram or Facebook.
The company said it plans to add more features for AI-generated creation in the future. According to a Threads post by CEO Mark Zuckerberg, Vibes is “an early look at some of the new product directions we’re exploring.” He added that Meta Superintelligence Labs will work with Midjourney and Black Forest Labs on upcoming AI projects.
Here at WIRED, we tend to stick to journalism. We talk about our work to anyone who will listen—during podcasts, on social media, over dinner with our politely listening friends—but we tend to confine our bragging to the scoops we get, the stories we write. For our new politics issue, though, we decided to do something different and bring WIRED’s work outside, to you, directly.
Over the past few days we’ve been posting the cover of our latest issue in New York, Los Angeles, San Francisco, Austin, and Washington, DC. It’s being displayed as wheatpasted posters, digital billboards, and even a mural. Hopefully, they’re easy to spot if not downright hard to miss.
Here’s where you come in. We’re not going to tell you exactly where the cover is displayed. (Where’s the fun in that?) Instead, we want you to go on a treasure hunt. If you live in or near one of the five cities listed above, keep your eyes peeled. If you see WIRED’s new cover out in the wild, snap a photo, tell your friends.
Also, scan the QR code to read the stories in the politics issue, like editor at large Steven Levy’s deeply reported piece on watching Silicon Valley transform from a countercultural tech utopia to a business sector looking to curry favor with President Trump. Or, perhaps, our investigation into how much geopolitical power Elon Musk is amassing through his SpaceX rocket launches and Starlink satellites.
One more thing: If you do see one of WIRED’s covers, let us know. Tag us on Instagram, TikTok, Bluesky, or X. Or, of course, leave a comment below. See you out there.
Mark Zuckerberg has poached a high-ranking OpenAI researcher to be the research principal of Meta Superintelligence Labs (MSL). Yang Song, who previously led the strategic explorations team at OpenAI, is now reporting to Shengjia Zhao, another OpenAI alum who has overseen the buzzy AI effort since July, according to multiple sources. He started earlier this month.
The move comes after Zuckerberg went on a hiring blitz earlier this summer, bringing in at least 11 top researchers from OpenAI, Google, and Anthropic.
Song had been at OpenAI since 2022. His research there focused on improving models’ ability to process large, complex datasets across different modalities. While still a graduate student at Stanford University, he developed a breakthrough technique that helped inform the development of OpenAI’s DALL-E 2 image generation model. Both he and Zhao attended Tsinghua University in Beijing as undergraduates, and worked under the same advisor, Stefano Ermon, while pursuing PhDs at Stanford.
In a staff-wide memo sent this summer, Zuckerberg touted Zhao’s impressive resume as the cocreator of ChatGPT, GPT-4, all mini models, 4.1, and o3 at OpenAI—but he did not specify Zhao’s new role at Meta. In July, Zuckerberg wrote in a Threads post that while Zhao had “cofounded the lab” and “been our lead scientist from day one,” Meta had decided to “formalize his leadership role” as the lab’s chief scientist. The move came after Zhao threatened to return to OpenAI, even going as far as to sign employment documents, WIRED previously reported.
A small number of researchers have left Meta Superintelligence Labs since the initiative was first announced in June. Two staffers have returned to OpenAI, WIRED previously reported. One of these researchers went through onboarding but never showed up for their first day of work at Meta.
Another AI researcher, Aurko Roy, also left Meta in July, WIRED has learned. He’d worked at the tech giant for just five months, according to his personal website, which also says he now works on Microsoft AI. Roy did not immediately respond to a request for comment from WIRED. Yang Song, OpenAI, and Meta also did not immediately respond to a request for comment from WIRED.
Song joins an already crowded field of big-name AI talent within Meta’s increasingly complicated AI division. When Zhao was hired in July, some speculated that he had replaced Yann LeCun, Meta’s longstanding chief AI scientist. In a LinkedIn post, LeCun clarified that he remained chief AI scientist for Facebook AI Research (FAIR), the company’s longstanding foundational AI research lab.
YouTube creators whose accounts were banned for violating previous policies against COVID-19 and election misinformation will be given the chance to rejoin the platform, Alphabet, YouTube’s parent company, said on Tuesday.
In a letter submitted in response to subpoenas from the House Judiciary Committee, attorneys for Alphabet said the decision to bring back banned accounts reflected the company’s commitment to free speech.
“No matter the political atmosphere, YouTube will continue to enable free expression on its platform, particularly as it relates to issues subject to political debate,” the letter read, noting that a number of accounts were kicked off the platform between 2023 and 2024 for violating misinformation rules that don’t exist anymore. Now, it said, “YouTube will provide an opportunity for all creators to rejoin the platform if the Company terminated their channels for repeated violations of COVID-19 and elections integrity policies that are no longer in effect.”
The company in its letter also said it “values conservative voices on its platform and recognizes that these creators have extensive reach and play an important role in civic discourse” and added that YouTube “recognizes these creators are among those shaping today’s online consumption, landing ‘must-watch’ interviews, giving viewers the chance to hear directly from politicians, celebrities, business leaders, and more.”
The move is the latest in a cascade of content moderation rollbacks from tech companies, who cracked down on false information during the pandemic and after the 2020 election but have since faced pressure from President Trump and other conservatives who argue they unlawfully stifled right-wing voices in the process.
It comes as tech CEOs, including Alphabet CEO Sundar Pichai, have sought a closer relationship with the Republican president, including through high-dollar donations to his campaign and attending events in Washington.
YouTube in 2023 phased out its policy to remove content that falsely claims the 2020 election, or other past U.S. presidential elections, were marred by “widespread fraud, errors or glitches.” Claims of fraud in the 2020 election have been debunked.
The platform in 2024 also retired its standalone COVID-19 content restrictions, allowing various treatments for the disease to be discussed. COVID-19 misinformation now falls under YouTube’s broader medical misinformation policy.
Among the creators who have been banned from YouTube under the now-expired policies are prominent conservative influencers, including Dan Bongino, who now serves as deputy director of the FBI. For people who make money on social media, access to monetization on YouTube can be significant, earning them large sums through ad revenue.
House Judiciary Committee Chairman Jim Jordan and other congressional Republicans have pressured tech companies to reverse content moderation policies created under former President Joe Biden and accused Biden’s administration of unfairly wielding its power over the companies to chill lawful online speech.
In Tuesday’s letter, Alphabet’s lawyers said senior Biden administration officials “conducted repeated and sustained outreach” to coerce the company to remove pandemic-related YouTube videos that did not violate company policies.
“It is unacceptable and wrong when any government, including the Biden Administration, attempts to dictate how the Company moderates content, and the Company has consistently fought against those efforts on First Amendment grounds,” the letter said.
Meta CEO Mark Zuckerberg has also accused the Biden administration of pressuring employees to inappropriately censor content during the COVID-19 pandemic. Elon Musk, the owner of the social platform X, has accused the FBI of illegally coercing Twitter before his tenure to suppress a story about Hunter Biden.
The Supreme Court last year sided with the Biden administration in a dispute with Republican-led states over how far the federal government can go to combat controversial social media posts on topics including COVID-19 and election security.
Asked for more information about the reinstatement process, a spokesperson for YouTube did not immediately respond to a request for comment.
On an earnings call this summer, Meta CEO Mark Zuckerberg made an ambitious claim about the future of smart glasses, saying he believes that someday people who don’t wear AI-enabled smart spectacles (ideally his) will find themselves at a “pretty significant cognitive disadvantage” compared to their smart-glasses-clad kin.
Meta’s most recent attempt to demonstrate the humanity-enhancing capabilities of its face computing platform didn’t do a very good job of bolstering that argument.
In a live keynote address at the company’s Connect developer conference on Wednesday, Zuckerberg tossed to a product demo of the new smart glasses he had just announced. That demo immediately went awry. When a chef was brought onstage to ask the Meta glasses’ voice assistant to walk him through a recipe, he spoke the “Hey Meta” wake word, and every pair of Meta glasses in the room—hundreds, since the glasses had just been distributed to the crowd of attendees—sprang to life and started chattering.
In an Instagram Reel posted after the event, Meta CTO Andrew Bosworth (whose own bit onstage had run into technical problems) said the hiccup happened because so many instances of Meta’s AI running in the same place meant they had inadvertently DDOS’d themselves. But a video call demo failed too, and the demos that did work were filled with lags and interruptions.
This isn’t meant to be just a dunk on the kludgy Connect keynote. (We love a live demo, truly!) But the weirdness, the timid exchanges, the repeated commands, and the wooden conversations inadvertently reflect just how graceless this technology can be when used in the real world.
“The main problem for me is the raw amount of times where you do engage with an AI assistant and ask it to do something and it doesn’t actually understand,” says Leo Gebbie, a director and analyst at CCS Insights. “The failure risk just is high, and the gap is still pretty big between what’s being shown and what we’re actually going to get.”
Eyes of the World
Live captions seen on the Meta Ray-Ban Display. Courtesy of Meta
Clearly, we are a long way from Zuckerberg’s vision of smart glasses being the computing platform that elevates humanity to some higher-thinking, higher-functioning state. Sure, wearing internet-connected hardware on your face can make it easier and faster to access information, and that may help you become—or at least appear to become—smarter or more capable. But as the clumsiness of the Connect demo very publicly demonstrated, the act of simply wearing a chatbot and a screen on your face might cancel out any cognitive advantage. Smart glasses put the wearer at a significant social disadvantage.
Deutsche Bank called it “the summer AI turned ugly.” For weeks, with every new bit of evidence that corporations were failing at AI adoption, fears of an AI bubble have intensified, fueled by the realization of just how top-heavy the S&P 500 has grown, along with warnings from top industry leaders. An August study from MIT found that 95% of AI pilot programs fail to deliver a return on investment, despite over $40 billion being poured into the space. Just prior to MIT’s report, OpenAI CEO Sam Altman rang AI bubble alarm bells, expressing concern over the overvaluation of some AI startups and the intensity of investor enthusiasm. These trends have even caught the attention of Fed Chair Jerome Powell, who noted that the U.S. was witnessing “unusually large amounts of economic activity” in building out AI capabilities.
Mark Zuckerberg has some similar thoughts.
The Meta CEO acknowledged that the rapid development of, and surging investment in, AI could be inflating a bubble, with spending potentially outpacing practical productivity and returns and risking a market crash. But Zuckerberg insists that the risk of over-investment is preferable to the alternative: being late to what he sees as an era-defining technological transformation.
“There are compelling arguments for why AI could be an outlier,” Zuckerberg hedged in an appearance on the Access podcast. “And if the models keep on growing in capability year-over-year and demand keeps growing, then maybe there is no collapse.”
Zuckerberg then joined the Altman camp, saying that large capital-expenditure buildouts like today’s AI infrastructure push, seen largely in the form of data centers, have tended to end in similar ways. “But I do think there’s definitely a possibility, at least empirically, based on past large infrastructure buildouts and how they led to bubbles, that something like that would happen here,” Zuckerberg said.
Bubble echoes
Zuckerberg pointed to past bubbles, namely railroads and the dot-com bubble, as key examples of infrastructure buildouts leading to a stock-market collapse. In these instances, he claimed that bubbles occurred due to businesses taking on too much debt, macroeconomic factors, or product demand waning, leading to companies going under and leaving behind valuable assets.
The Meta CEO’s comments echoed Altman’s, who has similarly cautioned that the AI boom is showing many signs of a bubble.
“When bubbles happen, smart people get overexcited about a kernel of truth,” Altman told The Verge, adding that AI is that kernel: transformative and real, but often surrounded by irrational exuberance. Altman has also warned that “the frenzy of cash chasing anything labeled ‘AI’” can lead to inflated valuations and risk for many.
The consequences of these bubbles are costly. During the dot-com bubble, investors poured money into tech startups with unrealistic expectations, driven by hype and a frenzy for new internet-based companies. When the results fell short, the stocks involved in the dot-com bubble lost more than $5 trillion in total market cap.
An AI bubble stands to have similarly significant economic impacts. In 2025 alone, the largest U.S. tech companies, including Meta, have spent more than $155 billion on AI development. And, according to Statista, the current AI market value is approximately $244.2 billion.
But, for Zuckerberg, losing out on AI’s potential is a far greater risk than losing money in an AI bubble. The company recently committed at least $600 billion to U.S. data centers and infrastructure through 2028 to support its AI ambitions. According to Meta’s chief financial officer, this money will go towards all of the tech giant’s US data center buildouts and domestic business operations, including new hires. Meta also launched its superintelligence lab, recruiting talent aggressively with multi-million-dollar job offers, to develop AI that outperforms human intelligence.
“If we end up misspending a couple hundred billion dollars, that’s going to be very unfortunate obviously. But I would say the risk is higher on the other side,” Zuckerberg said. “If you build too slowly, and superintelligence is possible in three years but you built it out assuming it would be there in five years, then you’re out of position on what I think is going to be the most important technology that enables the most new products and innovation and value creation in history.”
While he sees the consequences of not being aggressive enough in AI investing outweighing overinvesting, Zuckerberg acknowledged that Meta’s survival isn’t dependent upon AI’s success.
For companies like OpenAI and Anthropic, he said “there’s obviously this open question of to what extent are they going to keep on raising money, and that’s dependent both to some degree on their performance and how AI does, but also all of these macroeconomic factors that are out of their control.”
Meta chief technology officer Andrew Bosworth took to his Instagram to explain, in more technical detail, why multiple demos of Meta’s new smart-glasses technology failed at Meta Connect, the company’s developer conference, this week. At several points during the event, the live demos simply refused to work.
In one, cooking content creator Jack Mancuso asked his Ray-Ban Meta glasses how to get started with a particular sauce recipe. After he repeated the question “What do I do first?” with no response, the AI skipped ahead in the recipe, forcing him to stop the demo. He then tossed it back to Meta CEO Mark Zuckerberg, saying he thought the Wi-Fi might be messed up.
Jack Mancuso at Meta Connect. Image Credits: Meta
In another demo, the glasses failed to pick up a live WhatsApp video call between Bosworth and Zuckerberg; Zuckerberg eventually had to give up. Bosworth walked onstage, joking about the “brutal” Wi-Fi.
“You practice these things like a hundred times, and then you never know what’s gonna happen,” Zuckerberg said at the time.
After the event, Bosworth took to his Instagram for a Q&A session about the new tech and the live demo failures.
On the latter, he explained that it wasn’t actually the Wi-Fi that caused the issue with the chef’s glasses. Instead, it was a mistake in resource management planning.
“When the chef said, ‘Hey, Meta, start Live AI,’ it started every single Ray-Ban Meta’s Live AI in the building. And there were a lot of people in that building,” Bosworth explained. “That obviously didn’t happen in rehearsal; we didn’t have as many things,” he said, referring to the number of glasses that were triggered.
That alone wasn’t enough to cause the disruption, though. The second part of the failure had to do with how Meta had routed the Live AI traffic to its development server to isolate it during the demo. That routing applied to everyone in the building on those access points, which included all the headsets.
“So we DDoS’d ourselves, basically, with that demo,” Bosworth added. (A DDoS attack, or a distributed denial of service attack, is one where a flood of traffic overwhelms a server or service, slowing it down or making it unavailable. In this case, Meta’s dev server wasn’t set up to handle the flood of traffic from the other glasses in the building — Meta was only planning for it to handle the demos alone.)
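To see why a handful of demo sessions’ worth of capacity collapses under a room full of devices, here is a toy simulation in Python. The capacity and device counts are invented, and nothing here reflects Meta’s actual infrastructure; it just shows the arithmetic of a server sized for one demo receiving everyone’s traffic:

```python
# Toy model of the Connect failure: a dev server provisioned for a few
# demo sessions gets traffic from every pair of glasses in the building.
# CAPACITY and DEVICES are invented numbers for illustration only.
import threading
import time

CAPACITY = 4    # concurrent Live AI sessions the dev server can hold
DEVICES = 300   # glasses that all heard "Hey Meta, start Live AI"

slots = threading.BoundedSemaphore(CAPACITY)
counts = {"served": 0, "dropped": 0}
lock = threading.Lock()

def start_live_ai(device_id: int) -> None:
    # Try to grab a server slot without waiting; a real client would
    # time out or error, which onstage looked like a silent assistant.
    if slots.acquire(blocking=False):
        time.sleep(0.05)  # pretend the session does some work
        with lock:
            counts["served"] += 1
        slots.release()
    else:
        with lock:
            counts["dropped"] += 1

threads = [threading.Thread(target=start_live_ai, args=(i,)) for i in range(DEVICES)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counts)  # with these numbers, the overwhelming majority get dropped
```

The sketch makes the lesson plain: routing that isolates demo traffic only helps if the isolated server sees only demo traffic; once every access point in the building funnels into it, saturation follows.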
The issue with the failed WhatsApp call, on the other hand, was the result of a new bug.
The smart glasses’ display had gone to sleep at the exact moment the call came in, Bosworth said. When Zuckerberg woke the display back up, it didn’t show the answer notification to him. The CTO said this was a “race condition” bug, or where the outcome depends on the unpredictable and uncoordinated timing of two or more different processes trying to use the same resource simultaneously.
“We’ve never run into that bug before,” Bosworth noted. “That’s the first time we’d ever seen it. It’s fixed now, and that’s a terrible, terrible place for that bug to show up.” He stressed that, of course, Meta knows how to handle video calls, and the company was “bummed” about the bug showing up here.
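For readers unfamiliar with the term, here is a tiny, hypothetical illustration of a race condition in the spirit of Bosworth’s description. The state names are invented and this is not Meta’s code: two threads touch shared display state with no coordination, so whether the call notification ever renders depends entirely on which thread happens to run first.

```python
# Toy race condition: a display sleep timer and an incoming-call handler
# share state with no locking. Names are invented for illustration.
import random
import threading
import time

display_awake = True
notification_shown = False

def sleep_timer():
    # Display power management fires at an unpredictable moment.
    global display_awake
    time.sleep(random.uniform(0, 0.01))
    display_awake = False

def incoming_call():
    # Call handler only draws the "answer" UI if the display is awake.
    global notification_shown
    time.sleep(random.uniform(0, 0.01))
    if display_awake:  # check-then-act with no lock: the race window
        notification_shown = True

t1 = threading.Thread(target=sleep_timer)
t2 = threading.Thread(target=incoming_call)
t1.start(); t2.start()
t1.join(); t2.join()

# Sometimes True, sometimes False, depending purely on timing;
# the onstage equivalent was an answer button that never appeared.
print(notification_shown)
```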
Despite the issues, Bosworth said he’s not worried about the results of the glitches.
“Obviously, I don’t love it, but I know the product works. I know it has the goods. So it really was just a demo fail and not, like, a product failure,” he said.
When Mark Zuckerberg announced Meta’s latest smart glasses at the company’s Connect 2025 keynote, he encountered two glitches that prevented him from properly demonstrating some of the devices’ features. Now, Meta’s Chief Technology Officer, Andrew Bosworth, said in an AMA on Instagram that they were demo failures and not actual product failures. The first glitch took place in the middle of a live demo with a cooking content creator, who asked Live AI for instructions on how to make a Korean-inspired steak sauce on his Meta glasses. Instead of giving him detailed instructions, his glasses’ AI skipped ahead by several steps and continued glitching. The chef told Zuckerberg that the “WiFi might be messed up” in the venue.
Bosworth said, however, that it was not the case. Apparently, when the chef said “Hey Meta, start Live AI,” it fired up every single Meta Ray-Ban’s Live AI in the building. And since the event was all about the company’s smart glasses, there were a lot of them in the venue at the time. The company had also routed Live AI’s traffic to its dev server to isolate it, but it ended up routing the Live AI traffic of everyone’s glasses in the building to its server. “We DDoS’d ourselves, basically,” he said. He continued that it didn’t happen at rehearsal, because there weren’t as many people wearing the glasses when they tested it out.
Zuckerberg also ran into an issue when he tried demonstrating taking WhatsApp video calls on the Meta Ray-Ban Display. The audience could see him getting calls on the glasses’ HUD, but he couldn’t answer them to start the call. Bosworth said that it was caused by a “never-before-seen bug” that had put the display to sleep at the very instant that the notifications came in that someone was calling. Even after Zuckerberg woke up the display, there was no option to answer the call. The CTO said Meta had never come across that bug before the demo and that it has since been fixed. “You guys know we can do video calling… we got WhatsApp, we know how to do video calling,” he said, but admitted that it was a missed opportunity to be able to show on stage that the feature actually works.
The whole experience, from the quality of the display itself to the gesture controls and the on-glasses capabilities, feels polished and intuitive, particularly considering this is Meta’s first commercial stab at such a product.
But here’s the problem: As impressive as they are, I still wouldn’t buy them. Outside of tech fans and early adopters, I don’t think a lot of people will. Not this iteration, anyway. And that’s not even because of the arguably punchy $800 price tag.
The thing that truly lets them down is their aesthetic, and that’s not what I expected from the company that made such a success of the original Ray-Ban Metas because of their design. While the originals (and their just-announced successors) basically look like Ray-Ban glasses, these, in what can only be described as a glaring faux pas, are far from being fashion-first. They look like smart glasses, but the old kind you don’t really want to be seen wearing.
The chunk factor cannot be ignored.
Oh, there is a whiff of the Wayfarer about the Meta Ray-Ban Display; you can tell the intention is there to try and replicate the success of the most popular Ray-Ban style. But somehow distant alarm bells are ringing. Even though “statement glasses” are fashionable, these are just a bit too chunky to blend in.
At a glance, you can tell that something is going on with them. We’ve arrived in the uncanny valley of smart glasses, where the subtle bulges and added girth of the frames demand your attention, but not in a good way.
Interestingly, there is a subtle nod to this shift in aesthetics in the naming structure. While the original Ray-Ban Meta glasses lead with the Ray-Ban branding in their name, the Meta Ray-Ban Display switches that focus around. Which of the two brands made that call hasn’t been made clear, but these are Meta’s self-branded, tech-first glasses, and that feels like a misstep, especially considering the experience Meta already has in the market.
At Meta Connect 2025’s kickoff event, Mark Zuckerberg unveiled a trio of new smart glasses, including the company’s first model with augmented reality. Meta’s boss also announced the second-generation Ray-Ban Meta, as well as a pair of Oakley-branded sunglasses designed for athletes. In addition, Zuckerberg launched Horizon TV, a new entertainment hub for Quest headsets that will give you easy access to Disney+, Prime Video and other streaming services in virtual reality. Here’s everything you might have missed.
Ray-Ban Meta “Gen 2”
The second-gen Ray-Ban Meta glasses come with improved battery life, which the company says can now reach up to eight hours of “typical use.” Their accompanying charging case provides an additional 48 hours of juice, up from the previous version’s 32 hours. The model is equipped with 32GB of storage and a 12-megapixel camera that can capture video in 3K Ultra HD at up to 60 frames per second with HDR support. This fall, Meta will also roll out updates that bring hyperlapse and slow-motion video capture to all of its glasses, including this one. The Gen 2 Ray-Ban Meta glasses are available now in the same three base frames as their predecessor, namely Wayfarer, Skyler and Headliner, and will cost you at least $379.
Oakley Meta Vanguard
Unlike the original Oakley Meta glasses, the Vanguard was clearly designed to cater to athletes. It features the wraparound frames Oakley is known for, with reflective, swappable lenses in different colors. Because of how it curves around the face, Meta placed its 12-megapixel camera in the center of the frames so that helmets and hats don’t ruin your shots. The camera on this model has a wider, 122-degree lens and adjustable video stabilization, so you can still take videos while moving. Meta told us the device’s battery was also optimized for a wider range of temperatures, allowing it to hold up better in harsh environments. In addition, the Vanguard has louder onboard speakers and will come with integrations for Strava and Garmin. The Oakley Meta Vanguard glasses are available for preorder now for $499 and will officially go on sale on October 21.
Meta Ray-Ban Display
The Meta Ray-Ban Display is the company’s first pair of AR glasses. Its lenses function as a translucent heads-up display (HUD) that can show you texts, AI prompts, turn-by-turn pedestrian navigation and video calls. The dedicated EMG wristband the glasses are paired with lets you interact with the HUD’s interface and even type out responses. Video calling didn’t work properly during the on-stage demo, but Zuckerberg was able to play a song on Spotify, demonstrate a real-time subtitle feature that could be a huge help for people with hearing impairments, and capture and view images. The Meta Ray-Ban Display will be available through a limited number of brick-and-mortar stores, including Best Buy, LensCrafters, Ray-Ban and Verizon, since you’ll have to be fitted for the wristband. You’ll be able to get it for $799 starting on September 30 in the US, and starting early next year in Canada, France, Italy and the United Kingdom.
Horizon TV
Near the end of the Meta Connect keynote, Zuckerberg announced a new entertainment hub for Quest headsets. Called Horizon TV, it’s a unified interface for the streaming services available on the device, including Prime Video and Peacock. The Meta CEO also revealed that Disney+ is coming to Quest headsets.
After revealing his company’s latest augmented reality and smart glasses at Meta Connect this year, Mark Zuckerberg has introduced a new entertainment hub for its Quest headsets called Horizon TV. Zuckerberg said Meta believes watching video content is going to be a huge category for both virtual reality headsets and glasses in the future. Meta has already teamed up with several major streaming services to provide shows and movies you can enjoy in VR. One of those partners is Disney+, which will give users access to the Marvel Cinematic Universe on their headsets, as well as to content from ESPN and Hulu.
Based on the interface Zuckerberg showed at the event, which featured a lineup of streaming apps that will be available on the hub, Meta has also teamed up with Prime Video, Spotify, Peacock and Twitch. That will allow you to watch shows such as The Boys and Fallout on your virtual reality devices. Meta also partnered with Universal Pictures and iconic horror studio Blumhouse, so you can watch horror flicks like M3GAN and The Black Phone on your Quest “with immersive special effects you won’t find anywhere else.”
The Horizon TV hub supports Dolby Atmos for immersive sounds, with Dolby Vision arriving later this year for richer colors and crisper details. For a limited time, you’ll be able to watch an exclusive 3D clip of Avatar: Fire and Ash on Horizon TV, as well, as part of Meta’s partnership with James Cameron’s Lightstorm Vision.