I’ve been getting a lot of questions about Meta’s new smart glasses with a display, appropriately dubbed the Meta Ray-Ban Display. In fact, just 30 minutes before typing this, I was fielding questions about how they looked, what they did, and my personal thoughts from an interested individual at a site called Gizmodo.com. And the curiosity is warranted! As I wrote in my hands-on with them at Connect last week, these are the smart glasses you’ve been waiting for.
But as much as I can tell you that, it’s hard to fully understand until you try them for yourself, and you’re about to have more locations to do just that. According to Meta, it’s launching a few new pop-ups where you’ll be able to get a “premium” demo experience. I can’t say for sure what that entails, but I would hope it involves a more “hands-on” experience with the smart glasses and Meta’s Neural Band that lets you control them with finger gestures. Heck, maybe you’ll even get the same experience I got!
Without further ado, here are all the pop-up locations incoming (the Burlingame one is an existing location and is actually being made permanent, but I included it for reference):
Burlingame, Calif.: 322 Airport Boulevard (open now)
Las Vegas, Nev.: Wynn Plaza (opens Oct. 16, 2025)
Los Angeles, Calif.: 8600 Melrose Avenue (opens Oct. 24, 2025)
New York, N.Y.: 697 5th Avenue (opens Nov. 13, 2025)
To be clear, these won’t be the only locations doing demos, but Meta is suggesting that you’ll get a more nuanced experience by going to one of its stores specifically. If you aren’t in one of those cities, you can search Meta’s list of retailers here, where you’ll be able to get a non-Meta demo. I searched New York, where I live, and it looks like I can currently book a demo at Sunglass Hut and LensCrafters, but Best Buy is not taking demo appointments yet. Your mileage on that front may vary.
According to Meta, demo appointments are going fast. In its words, “appointments in many major cities [are] already booked out through mid-October.” So, if you want to try the Meta Ray-Ban Display before November, you may want to start looking for retailers offering demos now. If you’re one of the six people not interested in the smart glasses with a display, you can also try out any of Meta’s AI glasses, including its Oakley Vanguard and HSTN, as well as Gen 1 and Gen 2 of its Ray-Ban Meta AI glasses at both pop-ups and select retailers.
So, if you’re dying to find out what Meta’s Ray-Ban Display glasses are like for yourself, now’s the time. Get in there and start booking, but know that they just might convince you to drop $800. You’ve been warned.
Mark Zuckerberg has poached a high-ranking OpenAI researcher to be the research principal of Meta Superintelligence Labs (MSL). Yang Song, who previously led the strategic explorations team at OpenAI, is now reporting to Shengjia Zhao, another OpenAI alum who has overseen the buzzy AI effort since July, according to multiple sources. He started earlier this month.
The move comes after Zuckerberg went on a hiring blitz earlier this summer, bringing in at least 11 top researchers from OpenAI, Google, and Anthropic.
Song had been at OpenAI since 2022. His research there focused on improving models’ ability to process large, complex datasets across different modalities. While still a graduate student at Stanford University, he developed a breakthrough technique that helped inform the development of OpenAI’s DALL-E 2 image generation model. Both he and Zhao attended Tsinghua University in Beijing as undergraduates, and worked under the same advisor, Stefano Ermon, while pursuing PhDs at Stanford.
In a staff-wide memo sent this summer, Zuckerberg touted Zhao’s impressive resume as the cocreator of ChatGPT, GPT-4, all mini models, 4.1, and o3 at OpenAI—but he did not specify Zhao’s new role at Meta. In July, Zuckerberg wrote in a Threads post that while Zhao had “cofounded the lab” and “been our lead scientist from day one,” Meta had decided to “formalize his leadership role” as the lab’s chief scientist. The move came after Zhao threatened to return to OpenAI, even going as far as to sign employment documents, WIRED previously reported.
A small number of researchers have left Meta Superintelligence Labs since the initiative was first announced in June. Two staffers have returned to OpenAI, WIRED previously reported. One of these researchers went through onboarding but never showed up for their first day of work at Meta.
Another AI researcher, Aurko Roy, also left Meta in July, WIRED has learned. He’d worked at the tech giant for just five months, according to his personal website, which also says he now works on Microsoft AI. Roy did not immediately respond to a request for comment from WIRED. Yang Song, OpenAI, and Meta also did not immediately respond to a request for comment from WIRED.
Song joins an already crowded field of big-name AI talent within Meta’s increasingly complicated AI division. When Zhao was hired in July, some speculated that he had replaced Yann LeCun, Meta’s longstanding chief AI scientist. In a LinkedIn post, LeCun clarified that he remained chief AI scientist for Facebook AI Research (FAIR), the company’s longstanding foundational AI research lab.
Meta is allowing more governments to access its suite of Llama AI models. The group includes France, Germany, Italy, Japan, and South Korea, as well as organizations associated with the European Union and NATO, the company said.
The move comes after the company took similar steps to bring Llama to the US government and its contractors. Meta has also made its AI models available to the UK, Canada, Australia and New Zealand for “national security use cases.”
Meta notes that governments won’t just be using the company’s off-the-shelf models. They’ll also be able to incorporate their own data and create AI applications for specific use cases. “Governments can also fine-tune Llama models using their own sensitive national security data, host them in secure environments at various levels of classification, and deploy models tailored for specific purposes on-device in the field,” the company says.
Meta says the open source nature of Llama makes it ideally suited for government use as “it can be securely downloaded and deployed without the need to transfer sensitive data through third-party AI providers.” Recently, Mark Zuckerberg said that “safety concerns” could potentially prevent Meta from open-sourcing its efforts around building “real superintelligence.”
It takes a lot of computing power to run an AI product – and as the tech industry races to tap the power of AI models, there’s a parallel race underway to build the infrastructure that will power them. On a recent earnings call, Nvidia CEO Jensen Huang estimated that between $3 and $4 trillion will be spent on AI infrastructure by the end of the decade – with much of that money coming from AI companies themselves. Along the way, they’re placing immense strain on power grids, and pushing the industry’s building capacity to its limit.
Below, we’ve laid out everything we know about the biggest AI infrastructure projects, including major spending from Meta, Oracle, Microsoft, Google, and OpenAI. We’ll keep it updated as the boom continues, and the numbers climb even higher.
Microsoft’s $1 billion investment in OpenAI
This is arguably the deal that kicked off the whole contemporary AI boom: in 2019, Microsoft made a $1 billion investment in a buzzy non-profit called OpenAI, known mostly for its association with Elon Musk. Crucially, the deal made Microsoft the exclusive cloud provider for OpenAI – and as the demands of model-training became more intense, more of Microsoft’s investment started to come in the form of Azure cloud credit rather than cash. It was a great deal for both sides: Microsoft was able to claim more Azure sales, and OpenAI got more money for its biggest single expense. In the years that followed, Microsoft would build its investment up to nearly $14 billion – a move that is set to pay off enormously when OpenAI converts into a for-profit company.
The partnership between the two companies has since begun to unwind. In January, OpenAI announced it would no longer use Microsoft’s cloud exclusively, instead giving the company a right of first refusal on future infrastructure deals while remaining free to pursue other providers if Azure couldn’t meet its needs. More recently, Microsoft began exploring other foundation models to power its AI products, establishing even more independence from the AI giant.
OpenAI’s arrangement with Microsoft was so successful that it’s become a common practice for AI services to sign on with a particular cloud provider. Anthropic has received $8 billion in investment from Amazon, while making kernel-level modifications to the company’s hardware to make it better suited for AI training. Google Cloud has also signed on smaller AI companies like Lovable and Windsurf as “primary computing partners,” although those deals did not involve any investment. And even OpenAI has gone back to the well, receiving a $100 billion investment from Nvidia in September, giving it capacity to buy even more of the company’s GPUs.
The rise of Oracle
On June 30th, 2025, Oracle revealed in an SEC filing that it had signed a $30 billion cloud services deal with an unnamed partner, more than the company’s cloud revenue for all of the previous fiscal year. OpenAI was eventually revealed as the partner, securing Oracle a spot alongside Google in OpenAI’s string of post-Microsoft hosting partners. Unsurprisingly, the company’s stock went shooting up.
A few months later, it happened again. On September 10th, Oracle revealed a five-year, $300 billion deal for compute power, set to begin in 2027. Oracle’s stock climbed even higher, briefly making founder Larry Ellison the richest man in the world. The sheer scale of the deal is stunning: OpenAI does not have $300 billion to spend, so the figure presumes immense growth for both companies, and more than a little faith. But before a single dollar is spent, the deal has already cemented Oracle as one of the leading AI infrastructure providers – and a financial force to be reckoned with.
Building tomorrow’s hyperscale data centers
For companies like Meta that already have significant legacy infrastructure, the story is more complicated – although equally expensive. Mark Zuckerberg has said that Meta plans to spend $600 billion on US infrastructure through the end of 2028. In just the first half of 2025, the company spent $30 billion more than the previous year, driven largely by the company’s growing AI ambitions. Some of that spending goes toward big ticket cloud contracts, like a recent $10 billion deal with Google Cloud, but even more resources are being poured into two massive new data centers. A new 2,250-acre site in Louisiana, dubbed Hyperion, will cost an estimated $10 billion to build out and provide an estimated 5 gigawatts of compute power. Notably, the site includes an arrangement with a local nuclear power plant to handle the increased energy load. A smaller site in Ohio, called Prometheus, is expected to come online in 2026, powered by natural gas.
That kind of buildout comes with real environmental costs. Elon Musk’s xAI built its own hybrid data center and power-generation plant in South Memphis, Tennessee. The plant has quickly become one of the county’s largest emitters of smog-producing chemicals, thanks to a string of natural gas turbines that experts say violate the Clean Air Act.
The Stargate moonshot
Just two days after his second inauguration, President Trump announced a joint venture between SoftBank, OpenAI and Oracle, meant to spend $500 billion building AI infrastructure in the United States. Named “Stargate” after the 1994 film, the project arrived with incredible amounts of hype, with Trump calling it “the largest AI infrastructure project in history.” Sam Altman seemed to agree, saying, “I think this will be the most important project of this era.”
In broad strokes, the plan was for SoftBank to provide the funding, with Oracle handling the buildout with input from OpenAI. Overseeing it all was Trump, who promised to clear away any regulatory hurdles that might slow down the build. But there were doubts from the beginning, including from Elon Musk, Altman’s business rival, who claimed the project did not have the available funds.
As the hype has died down, the project has lost some momentum. In August, Bloomberg reported that the partners were failing to reach consensus. Nonetheless, the project has moved forward with the construction of eight data centers in Abilene, Texas, with construction on the final building set to be finished by the end of 2026.
The agreement sets out hiring timelines that the company must also hit to receive these tax incentives: Meta can receive the highest property tax exemption as long as it hires the equivalent of 300 “full-time” jobs by 2030, 450 by 2032, 475 by 2033 and 500 by December 31, 2034.
Louisiana’s agreements ask for more than some other states’ tax subsidies. According to Good Jobs First, nearly half of state tax subsidies for data centers don’t require any new jobs to be created. But Miller has concerns that the tax breaks were not necessary at all to entice a company as large as Meta. “While everyone likes to avoid taxes, they’re not going to hire people in Richland [Parish] just because they’re going to get a tax break,” Miller says.
Louisiana had already amended a tax rebate to create an exemption for data centers in 2024 to entice Meta; in its latest iteration, it says data centers can receive a full sales tax exemption for equipment purchases in the state as long as they hire 50 full-time jobs and invest at least $200 million by July 1, 2029. A separate contract viewed by WIRED affirms that this applies to the Richland Parish data center, in addition to the PILOT agreement.
Good Jobs First says that at least 10 states have subsidies for data centers that are worth more than $100 million each, and “have suffered estimated losses of $100 million each in tax revenue for data centers,” according to its data. In total, these states forgo more than $3 billion in taxes annually for data centers. Texas revised the cost of its data center subsidy in 2025 from $130 million to $1 billion. In 2024, a pause on data center subsidies was passed in Georgia but vetoed by Governor Brian Kemp.
The Franklin Farms site in Holly Ridge, the area of Richland Parish where Meta’s data center is being built, was purchased by Louisiana specifically for economic development projects. In its ground lease with Meta, Louisiana offered the 1,400-acre plot to the company for $12 million, which the lease says was the cost to the state of acquiring and maintaining the land. The lease also says Meta’s $732,000 a year “rent” is “credit toward the Base Purchase Price,” meaning the company will have paid for the property by a little over 16 years into its 30-year lease.
The price for the potential sale would be slightly higher if Meta does not reach minimum hiring and investment thresholds: As an example, the lease says if Meta only spends $4 billion in the state instead of $5 billion, the property would end up costing it $19 million. Louisiana Economic Development reserves the right to reclaim the property if Meta doesn’t invest at least $3.75 billion and hire the equivalent of 225 “full-time” jobs by 2028. When asked if Meta plans to purchase the property, Clayton said, “We’ll keep you updated on our future plans for this site.”
Meta’s presence has already caused land values to jump. A nearby tract of 4,000 acres of land in Holly Ridge is for sale for $160 million, or $40,000 per acre—more than 4.5 times the price paid by Louisiana for the data center’s site.
But there’s also a concern that Meta could delay or abandon the data center project. The PILOT agreement its subsidiary signed with the state says the company’s timeline will depend on “numerous factors outside of the control of the lessee, such as market orientation and demand, competition, availability of qualified laborers to construct and/or weather conditions.”
“My general fear is that too many data centers are being built,” Miller says. “That means some of the data centers are just going to be abandoned by the owners.”
She says in the scenario that Big Tech cuts back investments in data centers, Meta would not even be able to find another buyer. “Essentially, the state will be stuck with this warehouse full of computers,” Miller says.
Historically, most clinical trials and scientific studies have primarily focused on white men as subjects, leading to a significant underrepresentation of women and people of color in medical research. You’ll never guess what has happened as a result of feeding all of that data into AI models. It turns out, as the Financial Times calls out in a recent report, that AI tools used by doctors and medical professionals are producing worse health outcomes for the people who have historically been underrepresented and ignored.
The report points to a recent paper from researchers at the Massachusetts Institute of Technology, which found that large language models including OpenAI’s GPT-4 and Meta’s Llama 3 were “more likely to erroneously reduce care for female patients,” and that women were told more often than men to “self-manage at home,” ultimately receiving less care in a clinical setting. That’s bad, obviously, but one could argue that those models are more general purpose and not designed to be used in a medical setting. Unfortunately, a healthcare-centric LLM called Palmyra-Med was also studied and suffered from some of the same biases, per the paper. A look at Google’s LLM Gemma (not its flagship Gemini) conducted by the London School of Economics similarly found the model would produce outcomes with “women’s needs downplayed” compared to men.
A previous study found that models similarly had issues with offering the same levels of compassion to people of color dealing with mental health matters as they would to their white counterparts. A paper published last year in The Lancet found that OpenAI’s GPT-4 model would regularly “stereotype certain races, ethnicities, and genders,” making diagnoses and recommendations that were more driven by demographic identifiers than by symptoms or conditions. “Assessment and plans created by the model showed significant association between demographic attributes and recommendations for more expensive procedures as well as differences in patient perception,” the paper concluded.
That creates a pretty obvious problem, especially as companies like Google, Meta, and OpenAI all race to get their tools into hospitals and medical facilities. It represents a huge and profitable market—but also one where bad information has pretty serious consequences. Earlier this year, Google’s healthcare AI model Med-Gemini made headlines for making up a body part. That should be pretty easy for a healthcare worker to identify as wrong. But biases are subtler and often unconscious. Will a doctor know enough to question whether an AI model is perpetuating a longstanding medical stereotype about a person? No one should have to find that out the hard way.
With any new device category comes a whole host of novel and sometimes exhaustingly complex questions. Smartphones, for example, no matter how mundane they seem right now, are still nagging us with existential quandaries. When should we use them? How should we use them? What in God’s name happens to us when we use them, which, last I checked, is literally all the time?
These are important questions, and most of us, even if we’re not spending all day ruminating on them, tackle the complexity in our own way, setting (or resetting) social norms for ourselves and other people as we trudge along. The only thing is, in my experience, we tend to ask these questions mostly in retrospect, which is to say after the cat (or phone, or smartwatch, or earth-shattering portal into the online world) is out of the proverbial bag. It’s easy to look back and say, “That was the time we should have thought about this,” and when I put Meta’s new smart glasses with a screen on, I knew that the time, for smart glasses in particular, was now—like, right f**king now.
In case you missed it, Meta finally unveiled the Meta Ray-Ban Display, which are its first smart glasses with an in-lens display. I flew out to Meta headquarters for its annual Connect conference to try them, and the second I put them on, it was clear: these are going to be big. It probably seems silly from the outside to make a declaration like that. We have screens everywhere all the time—in our hands, on our wrists, and sometimes, regrettably, in our toasters. Why would smart glasses be any different? On one hand, I get that skepticism, but sometimes function isn’t the issue; it’s form. And when it comes to smart glasses, there is no other form like it.
Meta’s Ray-Ban Display aren’t just another wearable. The screen inside them opens up an entirely new universe of capabilities. With these smart glasses and Meta’s wild new “Neural Band,” a wristband that reads the electrical signals in your arm and translates them to inputs, you’re able to do a lot of the stuff you normally do on your phone. You can receive and write messages, watch Reels on Instagram, take voice calls and video calls, record video and take pictures, and get turn-by-turn navigation. You can even transcribe conversations that are happening in real time. You’re doing this on your face in a way that you’ve never done it before—discreetly and, from my experience, fairly fluidly.
If there were any boundaries between you and a device, Meta’s Ray-Ban Display are closing them to a gap that only an iPhone Air could slide through. It’s incredibly exciting in one way, because I can see Meta’s smart glasses being both useful and fun. The ability to swipe through a UI in front of my face by sliding my thumb around like some kind of computer cursor made of meat is wild and, at times, actually thrilling. While not everything works seamlessly yet, the door to smart glasses supremacy feels like it’s been swung wide open. You are going to want a pair of these smart glasses whether you know it or not. These are going to be popular, and as a result, potentially problematic.
We may have a solid grasp on where and when we’re supposed to use phones, but what happens when that “phone” in question becomes perfectly discreet, and the ability to use it becomes almost unnoticeable to those around us? When I use a smartphone, you can see me pick it up—you know there’s a device in my hand. When I use Meta’s Ray-Ban Display, however, there’s almost no indication. Yes, there’s a privacy light that tells outside people that a picture or video is being taken, but there’s also less than 2% light leakage through the lens, meaning you can’t tell when the screen inside the glasses is on. I certainly couldn’t tell when I watched others use them. It’s as ambient as any ambient computing I’ve witnessed so far.
I talked to Anshel Sag, a principal analyst at Moor Insights & Strategy who covers the wearable market, and he says the privacy framework around technology like this is still in flux.
“We are still very much in the infancy of the smart glasses, AI wearable, and AR privacy and etiquette era,” he said. “I think that the reality is that having a wearable with a camera on your face is going to change things, and there are going to be places where these things are banned explicitly.”
Some of those environments, Sag said, are private areas like bathrooms or locker rooms, but it could extend beyond just places where you might catch a glimpse of someone naked. Driving, for example, is a major question. Meta’s Ray-Ban Display have navigation built in, and while the company tells me that the feature is designed for walking right now, it’s not actually preventing anyone from using its smart glasses in the car. Instead, it will provide a warning before you do so by detecting what speed you’re moving at. Other companies like Amazon seem not to have even thought that navigating on smart glasses while driving could be a safety hazard at all. Early reports indicate that Amazon is plowing forward, making smart glasses that are specifically designed for its delivery drivers to use in a car.
While regulators like the NHTSA have issued warnings about people using VR headsets while driving (yes, people were actually doing that), it hasn’t, according to my research or knowledge, addressed the impact of smart glasses, which are much more likely—especially if they become widespread—to enter the equation while driving. I reached out to the NHTSA for comment, but have not yet received a response.
Privacy concerns shouldn’t just stem from the form factor, either. You also have to think about the company that’s making the thing you’re wearing on your face all the time and whether it has shown itself to be a good steward of your data and privacy. In Meta’s case? Well, without going into an entirely separate diatribe, I think it could do a lot better. And other companies that are also in hot pursuit of screen-clad glasses, like Google? Well, they haven’t been much better.
And makers of smart glasses shouldn’t be surprised if, when these things wind up on people’s faces, they get some shit for it. Google Glass, which came out in 2013, may seem like a different age, and in a lot of ways it is (people’s expectations for privacy are almost nonexistent now), but we also haven’t had to confront the idea of pervasive camera-clad wearables in a long time, so who’s to say things have really changed? Sag says, while he expects some backlash, it may not be like the Glasshole days of yore.
“I think there will be some backlash, but I don’t think it’s gonna be as bad as Google Glass,” he says. “Google Glass had such an invasive appearance. You know, it didn’t really look normal, so it really caught people’s attention more. And I think that’s really what has made these glasses more successful, is that they’re just inherently less intrusive in terms of appearance.”
I may not be an industry analyst, but I agree with Sag. I’m not sure there really will be a category-ending backlash like we saw back in the Google days, and a part of me doesn’t want there to be. As I mentioned, I got a chance to use Meta’s Ray-Ban Display, and they all but knocked my socks off. These are the smart glasses that anyone interested in the form factor has been waiting for. What I really want is to be able to live in a world where we can all use them respectfully and responsibly, and one where the companies that are making them give us the same responsibility and respect back. But in my experience, the only way to get toward a more respectful, harmonious world is to try everything else first, and in this case, the first step might be your next pair of Ray-Bans.
On an earnings call this summer, Meta CEO Mark Zuckerberg made an ambitious claim about the future of smart glasses, saying he believes that someday people who don’t wear AI-enabled smart spectacles (ideally his) will find themselves at a “pretty significant cognitive disadvantage” compared to their smart-glasses-clad kin.
Meta’s most recent attempt to demonstrate the humanity-enhancing capabilities of its face computing platform didn’t do a very good job of bolstering that argument.
In a live keynote address at the company’s Connect developer conference on Wednesday, Zuckerberg tossed to a product demo of the new smart glasses he had just announced. That demo immediately went awry. When a chef was brought onstage to ask the Meta glasses’ voice assistant to walk him through a recipe, he spoke the “Hey Meta” wake word, and every pair of Meta glasses in the room—hundreds, since the glasses had just been distributed to the crowd of attendees—sprang to life and started chattering.
In an Instagram Reel posted after the event, Meta CTO Andrew Bosworth (whose own bit onstage had run into technical problems) said the hiccup happened because so many instances of Meta’s AI were running in the same place that they had inadvertently DDoS’d themselves. But a video call demo failed too, and the demos that did work were filled with lags and interruptions.
This isn’t meant to just be a dunk on the kludgy Connect keynote. (We love a live demo, truly!) But the weirdness, the timid exchanges, the repeated commands, and the wooden conversations inadvertently reflect just how graceless this technology can be when used in the real world.
“The main problem for me is the raw amount of times where you do engage with an AI assistant and ask it to do something and it doesn’t actually understand,” says Leo Gebbie, a director and analyst at CCS Insights. “The failure risk just is high, and the gap is still pretty big between what’s being shown and what we’re actually going to get.”
Eyes of the World
Live captions seen on the Meta Ray-Ban Display. Courtesy of Meta
Clearly, we are a long way from Zuckerberg’s vision of smart glasses being the computing platform that elevates humanity to some higher-thinking, higher-functioning state. Sure, wearing internet-connected hardware on your face can make it easier and faster to access information, and that may help you become—or at least appear to become—smarter or more capable. But as the clumsiness of the Connect demo very publicly demonstrated, the act of simply wearing a chatbot and a screen on your face might cancel out any cognitive advantage. Smart glasses put the wearer at a significant social disadvantage.
Deutsche Bank called it “the summer AI turned ugly.” For weeks, with every new bit of evidence that corporations were failing at AI adoption, fears of an AI bubble have intensified, fueled by the realization of just how top-heavy the S&P 500 has grown, along with warnings from top industry leaders. An August study from MIT found that 95% of AI pilot programs fail to deliver a return on investment, despite over $40 billion being poured into the space. Just prior to MIT’s report, OpenAI CEO Sam Altman rang AI bubble alarm bells, expressing concern over the overvaluation of some AI startups and the intensity of investor enthusiasm. These trends have even caught the attention of Fed Chair Jerome Powell, who noted that the U.S. was witnessing “unusually large amounts of economic activity” in building out AI capabilities.
Mark Zuckerberg has some similar thoughts.
The Meta CEO acknowledged that the rapid development of and surging investments in AI stand to form a bubble, potentially outpacing practical productivity and returns and risking a market crash. But Zuckerberg insists that the risk of over-investment is preferable to the alternative: being late to what he sees as an era-defining technological transformation.
“There are compelling arguments for why AI could be an outlier,” Zuckerberg hedged in an appearance on the Access podcast. “And if the models keep on growing in capability year-over-year and demand keeps growing, then maybe there is no collapse.”
Then Zuckerberg joined the Altman camp, conceding that large capital expenditure buildouts like the current AI infrastructure push, seen largely in the form of data centers, have tended to end in bubbles. “But I do think there’s definitely a possibility, at least empirically, based on past large infrastructure buildouts and how they led to bubbles, that something like that would happen here,” Zuckerberg said.
Bubble echoes
Zuckerberg pointed to past bubbles, namely railroads and the dot-com bubble, as key examples of infrastructure buildouts leading to a stock-market collapse. In these instances, he claimed that bubbles occurred due to businesses taking on too much debt, macroeconomic factors, or product demand waning, leading to companies going under and leaving behind valuable assets.
The Meta CEO’s comments echoed those of Altman, who has similarly cautioned that the AI boom is showing many signs of a bubble.
“When bubbles happen, smart people get overexcited about a kernel of truth,” Altman told The Verge, adding that AI is that kernel: transformative and real, but often surrounded by irrational exuberance. Altman has also warned that “the frenzy of cash chasing anything labeled ‘AI’” can lead to inflated valuations and risk for many.
The consequences of these bubbles are costly. During the dot-com bubble, investors poured money into tech startups with unrealistic expectations, driven by hype and a frenzy for new internet-based companies. When the results fell short, the stocks involved in the dot-com bubble lost more than $5 trillion in total market cap.
An AI bubble stands to have similarly significant economic impacts. In 2025 alone, the largest U.S. tech companies, including Meta, have spent more than $155 billion on AI development. And, according to Statista, the current AI market value is approximately $244.2 billion.
But, for Zuckerberg, losing out on AI’s potential is a far greater risk than losing money in an AI bubble. The company recently committed at least $600 billion to U.S. data centers and infrastructure through 2028 to support its AI ambitions. According to Meta’s chief financial officer, this money will go towards all of the tech giant’s US data center buildouts and domestic business operations, including new hires. Meta also launched its superintelligence lab, recruiting talent aggressively with multi-million-dollar job offers, to develop AI that outperforms human intelligence.
“If we end up misspending a couple hundred billion dollars, that’s going to be very unfortunate obviously. But I would say the risk is higher on the other side,” Zuckerberg said. “If you build too slowly, and superintelligence is possible in three years but you built it out assuming it would be there in five years, then you’re out of position on what I think is going to be the most important technology that enables the most new products and innovation and value creation in history.”
While he sees the consequences of not being aggressive enough in AI investing outweighing overinvesting, Zuckerberg acknowledged that Meta’s survival isn’t dependent upon AI’s success.
For companies like OpenAI and Anthropic, he said “there’s obviously this open question of to what extent are they going to keep on raising money, and that’s dependent both to some degree on their performance and how AI does, but also all of these macroeconomic factors that are out of their control.”
Meta chief technology officer Andrew Bosworth took to his Instagram to explain, in more technical detail, why multiple demos of Meta’s new smart-glasses technology failed at Meta Connect, the company’s developer conference, this week.
At several points during the event, the live technology demos had failed to work.
In one, cooking content creator Jack Mancuso asked his Ray-Ban Meta glasses how to get started with a particular sauce recipe. After he repeated the question, “What do I do first?,” with no response, the AI skipped ahead in the recipe, forcing him to stop the demo. He then tossed it back to Meta CEO Mark Zuckerberg, suggesting that the Wi-Fi may have been messed up.
Jack Mancuso at Meta Connect. Image Credits: Meta
In another demo, the glasses failed to pick up a live WhatsApp video call between Bosworth and Zuckerberg; Zuckerberg eventually had to give up. Bosworth walked onstage, joking about the “brutal” Wi-Fi.
“You practice these things like a hundred times, and then you never know what’s gonna happen,” Zuckerberg said at the time.
After the event, Bosworth took to his Instagram for a Q&A session about the new tech and the live demo failures.
On the latter, he explained that it wasn’t actually the Wi-Fi that caused the issue with the chef’s glasses. Instead, it was a mistake in resource management planning.
Image Credits: Instagram (screenshot)
“When the chef said, ‘Hey, Meta, start Live AI,’ it started every single Ray-Ban Meta’s Live AI in the building. And there were a lot of people in that building,” Bosworth explained. “That obviously didn’t happen in rehearsal; we didn’t have as many things,” he said, referring to the number of glasses that were triggered.
That alone wasn’t enough to cause the disruption, though. The second part of the failure had to do with how Meta had routed the Live AI traffic to its development server in order to isolate it during the demo. When it did so, it rerouted that traffic for everyone in the building on its access points, which included all of the glasses in the building.
“So we DDoS’d ourselves, basically, with that demo,” Bosworth added. (A DDoS attack, or a distributed denial of service attack, is one where a flood of traffic overwhelms a server or service, slowing it down or making it unavailable. In this case, Meta’s dev server wasn’t set up to handle the flood of traffic from the other glasses in the building — Meta was only planning for it to handle the demos alone.)
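To make that failure mode concrete, here’s a minimal sketch (with entirely invented numbers, not Meta’s actual setup) of what happens when a server sized for a handful of demo devices suddenly fields requests from every device in the building:

```python
# Hypothetical illustration of the self-DDoS described above. The numbers
# are invented; the point is the mismatch between planned and actual load.

SERVER_CAPACITY = 10        # concurrent sessions the dev server was sized for
DEVICES_IN_REHEARSAL = 5    # roughly what was present in rehearsal
DEVICES_IN_BUILDING = 1200  # what "Hey Meta, start Live AI" actually woke up

def sessions_served(requests: int, capacity: int) -> tuple[int, int]:
    """Return (served, dropped) for a burst of simultaneous session requests."""
    served = min(requests, capacity)
    return served, requests - served

# In rehearsal, everything fits comfortably:
print(sessions_served(DEVICES_IN_REHEARSAL, SERVER_CAPACITY))  # (5, 0)

# Onstage, the wake word triggers every pair of glasses at once, and the
# dev server drowns, taking the one session the demo needed down with it:
print(sessions_served(DEVICES_IN_BUILDING, SERVER_CAPACITY))   # (10, 1190)
```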
The issue with the failed WhatsApp call, on the other hand, was the result of a new bug.
The smart glasses’ display had gone to sleep at the exact moment the call came in, Bosworth said. When Zuckerberg woke the display back up, it didn’t show him the answer notification. The CTO said this was a “race condition” bug, where the outcome depends on the unpredictable and uncoordinated timing of two or more processes trying to use the same resource simultaneously.
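For readers who want to see the pattern, here’s the textbook version of a race condition in a few lines of Python (a generic illustration, not Meta’s code): two threads read and write shared state without coordination, and the result depends on timing.

```python
# Classic race condition: two threads read-modify-write a shared value
# without a lock, so one thread's update can silently clobber the other's.
import threading

counter = 0

def increment(n: int) -> None:
    global counter
    for _ in range(n):
        current = counter   # read
        current += 1        # modify
        counter = current   # write (may overwrite the other thread's work)

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Expected 200000, but lost updates usually leave it lower, and the exact
# result changes from run to run. Timing-dependent bugs like this are why
# "display goes to sleep" colliding with "call comes in" is so hard to catch.
print(counter)
```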
“We’ve never run into that bug before,” Bosworth noted. “That’s the first time we’d ever seen it. It’s fixed now, and that’s a terrible, terrible place for that bug to show up.” He stressed that, of course, Meta knows how to handle video calls, and the company was “bummed” about the bug showing up here.
Despite the issues, Bosworth said he’s not worried about the results of the glitches.
“Obviously, I don’t love it, but I know the product works. I know it has the goods. So it really was just a demo fail and not, like, a product failure,” he said.
OpenAI chairman Bret Taylor has held many notable titles in tech. Katelyn Tucker/Slava Blazer Photography
A.I. agents are the next big platform shift in tech, on par with the dawn of the internet 30 years ago and the rise of mobile apps a decade ago, according to OpenAI chairman Bret Taylor, who also runs his own A.I. startup, Sierra. Speaking at the Skift Global Forum in New York City yesterday (Sept. 18), the tech executive argued that enterprises are now racing to adopt A.I. agents much like they once scrambled to build websites or launch mobile apps.
“I think this is an opportunity that, probably, the closest analog would be the birth of the internet,” Taylor said during an onstage interview.
Taylor has seen several waves of disruption firsthand. At Google in the early 2000s, he helped launch Google Maps. He went on to serve as chief technology officer at Facebook (now Meta), co-CEO of Salesforce, and chair of Twitter’s board during Elon Musk’s tumultuous takeover. In 2023, he was tapped as chairman of OpenAI’s board after the ChatGPT-maker briefly ousted and reinstated CEO Sam Altman.
Now, his focus is on Sierra, the conversational A.I. startup he co-founded two years ago with former Google colleague Clay Bavor. The company has quickly become a “decacorn,” hitting a $10 billion valuation earlier this month after raising $350 million from Greenoaks Capital. Sierra already counts hundreds of enterprise customers across financial services, health care and retail. A fifth of Sierra’s customers have annual revenue over $10 billion.
Taylor insists that A.I. agents are more than just cost-cutting tools. Increasingly, they’re revenue drivers. Sierra’s platform is helping companies sell mortgages, make outbound sales calls and even manage payroll for small businesses. “These agents are not only doing services, but also doing sales,” he said.
And the form factor is evolving. While chatbots dominate today’s landscape, Taylor believes voice-enabled A.I. is “as, or more important, of a channel than chat.” Multi-modal agents are also emerging. For instance, retailers are beginning to process warranty claims by analyzing photos of damaged products.
Just as the internet gave rise to search engines and aggregation platforms, Taylor expects agentic A.I. to spawn entirely new business categories. The challenge will be ensuring that they meet consumer expectations as their desires inevitably evolve with the technology’s development. “Consumers are moving faster than most companies can make decisions,” Taylor warned, noting that ChatGPT became the fastest-growing consumer app in history. “It’s on all of us leaders to push decisively towards this new world.”
Meta CEO Mark Zuckerberg gave the keynote address at Meta Connect on Wednesday, revealing a trio of new smart glasses. The products have been a hit with consumers — the tech giant has sold more than two million pairs of its Ray-Ban glasses since the line launched in October 2023.
“It is no surprise that AI glasses are taking off,” Zuckerberg said at the event. “The sales trajectory that we’ve seen is similar to some of the most popular consumer electronics of all time.”
The product that took center stage was the Meta Ray-Ban Display, an entirely new pair of glasses ($799) that has a small screen on the bottom right lens and comes with an accompanying neural wristband that tracks hand movements as commands. The screen allows the user to look at messages, take video calls where they can see the person on the other end, see walking directions, watch Instagram Reels, and get a preview of pictures before taking them.
Only the person wearing the glasses is able to see the display, and they can turn off the screen when it is not in use. The user controls the display through the wristband, which allows them to click, scroll, and write out messages with different hand gestures. For example, tapping the thumb and index finger together plays music. The glasses also allow users to generate a live transcription of the speech around them, which they can view on the screen.
Zuckerberg said the screen enables people to “put subtitles on the world.”
The Meta Ray-Ban Display glasses function for six hours on a single charge, and the case adds up to 30 hours of battery life. The water-resistant wristband has 18 hours of power. The glasses and the wristband are sold together and will hit shelves in the U.S. on September 30 at retailers like Best Buy, LensCrafters, and Sunglass Hut.
Meta CEO Mark Zuckerberg wears a pair of Meta Ray-Ban Display AI glasses with an accompanying neural wristband at Meta Connect 2025. Photographer: David Paul Morris/Bloomberg via Getty Images
At $799, the Meta Ray-Ban Display glasses are priced more like a smartphone substitute than an AI accessory. For comparison, the iPhone 17, which Apple announced earlier this month, starts at $799, while the Samsung Galaxy S25, announced in January, is priced a little bit higher at $859.
Other smart glasses with built-in displays, like the $269 RayNeo Air 3s and the $429 Rokid Max 2, are hundreds of dollars cheaper — but don’t include a neural wristband.
Zuckerberg also revealed updated Ray-Bans ($379) with double the battery life, improved cameras, and an $80 price hike. The glasses allow users to take photos and videos, make calls, listen to music, and send text messages through voice commands. Facebook’s founder also introduced the new $499 Oakley Vanguard glasses, which feature water resistance and a centered camera. These glasses, which join the $400 Oakley Meta HSTN introduced earlier this year as another Meta offering from the Oakley brand, start shipping on October 21.
Meta’s smart glasses partner, EssilorLuxottica, stated in July in an earnings report that revenue from the Ray-Ban Meta frames unexpectedly tripled over the past year, making the glasses the No. 1 bestselling frames on the market.
The partnership between Meta and EssilorLuxottica is so lucrative that Meta acquired a stake worth $3.5 billion in the eyewear company in July.
In addition to new hardware announcements, Meta had software news to share during its Meta Connect 2025 conference today. The company revealed that Discord will be making a native app for the Meta Quest headset. According to Meta, the native window app will be available sometime in 2026.
The development makes sense. VR is a platform with a lot of gaming presence, so having Discord for easy social and voice connections while playing is a win for players and a natural match for the two businesses. Having a native app can make a big difference in the ease of use. I’m primarily a member of the PlayStation nation, and I swear I heard an angelic choir singing when the PS5 finally got call support.
Meta positioned the upcoming availability of the native app as a boon for the developers of VR experiences to reach new audiences, thanks to Discord’s more than 200 million monthly active players. We’ve reached out to Discord for additional comment and will update with any more details we receive.
If you can’t resist the urge to check your phone over and over, even if you’re out with friends, Meta has a solution: check your glasses instead.
“The promise of glasses is to preserve this sense of presence that you have with other people,” said CEO Mark Zuckerberg at the Meta Connect 2025 keynote. “I think that we’ve lost it a little bit with phones, and we have the opportunity to get it back with glasses.”
In reality, Meta wants its own hardware to eat into the market share of Apple and Google so that it doesn’t have to keep siphoning profits to them via app stores. But nevertheless, this is the angle Meta is taking to sell its most sophisticated smart glasses yet, the Meta Ray-Ban Display, which the company hopes could one day eclipse the market share of smartphones.
Meta’s Reality Labs division burns cash at an alarming rate, which has concerned investors over the years. But Wednesday’s event finally showed us a glimpse of what the division’s $70 billion in losses since 2020 have gone toward.
Meta has had its fair share of flops, like the entire promise of its social metaverse. (Remember when they announced that metaverse avatars would finally get legs?) But with the Meta Ray-Ban Display, Meta has created a remarkable piece of technology, unlike any other consumer-facing product on the market — we have yet to test it ourselves, so we can’t quite say just how groundbreaking this really is, but it looks promising.
Like Meta’s existing smart glasses, which have sold millions of pairs, the new model has cameras, speakers, microphones, and an on-board AI assistant. The display on the glasses, which is offset so as not to obstruct one’s sightline, can display Meta apps like Instagram, WhatsApp, and Facebook, as well as directions and live translations.
What most sets the Meta Ray-Ban Display apart is the Meta Neural Band, a wristband that uses surface electromyography (sEMG) to pick up on signals sent between your brain and your hand when performing a gesture.
Meta’s keynote didn’t get into the specifics of how Zuckerberg was writing these texts, but according to Reality Labs’ research on sEMG, users can write out messages like this by holding their fingers together as if they were gripping a pen and “writing” out the text.
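To give a rough sense of what reading electrical signals at the wrist involves, here’s a purely illustrative toy pipeline. None of these names, numbers, or thresholds come from Meta, and the Neural Band’s actual decoder is a trained model running on multi-channel signals; this only shows the shape of the problem: a window of signal in, a predicted gesture out.

```python
# Purely illustrative sketch of sEMG gesture decoding, with invented
# thresholds standing in for a trained classifier.
import math
import random

random.seed(0)  # deterministic toy data

def emg_window(n_samples: int = 64) -> list[float]:
    """Stand-in for one window of rectified sEMG samples from the wrist."""
    return [abs(random.gauss(0.0, 1.0)) for _ in range(n_samples)]

def features(window: list[float]) -> tuple[float, float]:
    """Two classic sEMG features: mean absolute value and RMS energy."""
    mav = sum(window) / len(window)
    rms = math.sqrt(sum(x * x for x in window) / len(window))
    return mav, rms

def classify(feats: tuple[float, float]) -> str:
    """Hypothetical decision rule; a real system learns this from data."""
    mav, rms = feats
    if rms < 0.5:
        return "rest"  # hand relaxed, no input
    return "pinch" if mav < 0.8 else "write"  # invented gesture labels

print(classify(features(emg_window())))
```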
While some live AI demos at the keynote failed — Zuckerberg blamed the Wi-Fi — we at least got to see the wristband in action, which is more novel. Zuckerberg quickly wrote out text messages, then sent them on his Ray-Bans.
“I’m up to about 30 words a minute on this,” Zuckerberg said on stage at the company’s Menlo Park headquarters. “You can get pretty fast.”
On a touchscreen smartphone like an iPhone, research has estimated that people text at about 36 words per minute, making Zuckerberg’s claim impressive. Reality Labs’ research participants averaged closer to 21 words per minute.
Unlike past Meta Ray-Bans, this technology allows people to actually use the glasses without speaking aloud, which isn’t always natural in public settings. While Apple Watch users can send texts without voice prompting, the process is so tedious and slow that it’s only useful as a last resort.
Other gesture controls on the wristband seem more similar to technology that consumers have used before, like Nintendo Joy-Cons and Apple Watches. But if the voiceless texting interface is as good as it seems, then the wristband will likely be capable of more complex gestures than we’re used to.
Image Credits: Meta
Meta has invested heavily in research on sEMG since 2021, even showing us a prototype of a heftier product called Orion. Like Apple and Google, Meta is preparing for a not-so-impossible future where these smart glasses could potentially eclipse the smartphone.
But as is the risk with any massive hardware investment, there’s no way to know if this will actually feel more natural to people in their day-to-day lives than pulling a sleek aluminum rectangle out of their pocket to tap out messages to their friends.
This might be Meta’s biggest bet — perhaps a bigger bet than its subpar metaverse. That’s why it’s so striking that Zuckerberg is unveiling this technology as not just a fascinating innovation, but something that he wants to portray as more prosocial than the smartphone. It’s a way for him to capitalize on our growing malaise with our ever-increasing screen time, even though he’s the one making the apps that demand our attention.
“The technology needs to get out of the way,” Zuckerberg said.
Will the smartphone become an obsolete relic like a Nokia with a T9 keyboard? That depends on whether or not there’s truth to Zuckerberg’s narrative that these glasses will help us feel more present. But Meta and its competitors are betting big on the cultural shift from smartphones to smart glasses, and the Ray-Ban Display will give consumers their first taste of this possible future.
The whole experience—from the quality of the display itself to the gesture controls and the on-glasses capabilities—feels polished and intuitive, particularly considering this is Meta’s first commercial stab at such a product.
But here’s the problem: As impressive as they are, I still wouldn’t buy them. Outside of tech fans and early adopters, I don’t think a lot of people will. Not this iteration, anyway. And that’s not even because of the arguably punchy $800 price tag.
The thing that truly lets them down is their aesthetic, and that’s not what I expected from the company that made such a success of the original Ray-Ban Metas because of their design. While the originals (and their just-announced successors) basically look like Ray-Ban glasses, these, in what can only be described as a glaring faux pas, are far from being fashion-first. They look like smart glasses, but the old kind you don’t really want to be seen wearing.
The chunk factor cannot be ignored.
Courtesy of Verity Burns
Oh, there is a whiff of the Wayfarer about the Meta Ray-Ban Display; you can tell the intention is there to try and replicate the success of the most popular Ray-Ban style. But somehow distant alarm bells are ringing. Even though “statement glasses” are fashionable, these are just a bit too chunky to blend in.
At a glance, you can tell that something is going on with them. We’ve arrived in the uncanny valley of smart glasses, where the subtle bulges and added girth of the frames demand your attention, but not in a good way.
Interestingly, there is a subtle nod to this shift in aesthetics in the naming structure. While the original Ray-Ban Meta glasses lead with the Ray-Ban branding in their name, the Meta Ray-Ban Display flips that order. Which of the two brands made that call hasn’t been made clear, but these are Meta’s self-branded, tech-first glasses, and that feels like a misstep, especially considering the experience Meta already has in the market.
After revealing his company’s latest augmented reality and smart glasses at Meta Connect this year, Mark Zuckerberg has introduced a new entertainment hub for its Quest headsets called Horizon TV. Zuckerberg said Meta believes watching video content is going to be a huge category for both virtual reality headsets and glasses in the future. Meta has already teamed up with several major streaming services to provide shows and movies you can enjoy in VR. One of those partners is Disney+, which will give users access to the Marvel Cinematic Universe on their headsets, as well as to content from ESPN and Hulu.
Based on the interface Zuckerberg showed at the event, which had a lineup of streaming apps that will be available on the hub, Meta has also teamed up with Prime Video, Spotify, Peacock and Twitch. That will allow you to watch shows such as The Boys and Fallout on your virtual reality devices. Meta also partnered with Universal Pictures and iconic horror company Blumhouse, so you can watch horror flicks like M3GAN and The Black Phone on your Quest “with immersive special effects you won’t find anywhere else.”
The Horizon TV hub supports Dolby Atmos for immersive sounds, with Dolby Vision arriving later this year for richer colors and crisper details. For a limited time, you’ll be able to watch an exclusive 3D clip of Avatar: Fire and Ash on Horizon TV, as well, as part of Meta’s partnership with James Cameron’s Lightstorm Vision.
Ray-Ban wasn’t the only collaboration that got some shine at Meta Connect. The company also took the wraps (no pun intended) off a pair of wraparound shades designed by Oakley and, like its recently released HSTN smart glasses, designed more with sporty types in mind.
Outside of the differing glasses shape, the $499 Oakley Meta Vanguard (yes, that’s the official name in that order) specs also have a centered camera that’s meant to be better suited for capturing footage during “action” sports like snowboarding or cycling. Similar to Oakley’s HSTN glasses, the Vanguard have upgraded camera specs and are capable of capturing video in up to 3K resolution with their 12-megapixel camera, which has a 122-degree field of view.
There are some new fitness integrations, specifically with Garmin and Strava, that let you use the smart glasses as a sort of companion for health wearables. For instance, you can ask Meta AI how you’re doing on your fitness goals, or get updates on other fitness metrics in real time.
While the tech inside the Vanguard is significant, the form factor is equally important. Wraparound shades, while probably not the style most normies would spring for, are ideal for skiing and snowboarding because of the superior wind blockage. Having used Meta’s HSTN smart glasses a little myself, I think the Vanguard will appeal more to people interested in the action sports side of things, since the HSTN double as regular, everyday specs.
One of the biggest upgrades, which I got to hear for myself, is the speakers. According to Meta, the Vanguard are 6 decibels louder than the HSTN glasses, which is clutch if you’re tearing down a hill at 30 mph on a snowboard. Meta also optimized the design for sports in a number of ways, including an IP67 rating, which means the glasses are sealed against dust and can survive brief immersion in water. I don’t know any professional water skiers, but if I did, I’d probably recommend these smart glasses.
Battery-wise, the Vanguard have decent longevity on paper. According to Meta, they get 9 hours of battery life with “mixed usage,” or up to 6 hours if you’re playing music continuously. With the charging case, Meta says the glasses get 36 hours, and they can go from 0 to 50% in 20 minutes. There are all sorts of lens variations this go-around too, including black, sapphire, 24K (which is gold), and something called “Road.” Those lenses can be swapped or replaced, but it’ll cost you a whole $85.
I haven’t had a chance to really test the Vanguard in depth, but I can see how they’d appeal to someone who wants a sturdy pair of action-sports-oriented smart glasses. They’re available Oct. 21 if that’s your thing, or you can preorder now.
Although today’s Meta Connect developer conference was largely about new smart glasses, the social networking company did announce a handful of metaverse updates during Wednesday’s keynote. Of these, one of the largest was the introduction of Hyperscape, first demoed at last year’s event, which allows developers and creators to build more photorealistic spaces in virtual reality.
The company announced that Hyperscape Capture is now rolling out in Early Access, meaning Quest device owners will be able to scan a room in a few minutes, then turn it into an immersive and photorealistic world that’s like a digital replica of a real-world space.
The capture process itself only takes a few minutes, but the room’s rendering will actually take a few hours, Meta notes.
At launch, users won’t be able to invite others into their digital spaces, though Meta says that functionality will arrive in time via a private link.
However, the tech has already been used to render some featured Hyperscape worlds, including Gordon Ramsay’s home kitchen in L.A., Chance the Rapper’s House of Kicks, The Octagon at the UFC Apex in Las Vegas, and Happy Kelli’s room filled with her Crocs shoe collection.
Meta first demoed Hyperscape at last year’s Connect conference, showing how it used Gaussian Splatting, cloud rendering, and streaming to make the digital worlds appear on a Meta Quest 3 headset. Now, it’s rolling the feature out to users 18 and up who have either a Quest 3 or Quest 3S.
The rollout will be gradual, starting today, so not all users may see it immediately.
Meta also introduced more metaverse updates at today’s event, including a fall lineup of VR games: Marvel’s Deadpool VR, ILM’s Star Wars: Beyond Victory, Demeo x Dungeons & Dragons: Battlemarked, and Reach.
Its streaming app, Horizon TV, will add support for Disney+, ESPN, and Hulu, while a partnership with Universal Pictures and horror company Blumhouse will offer movies like “M3GAN” and “The Black Phone” with immersive special effects. A 3D clip of “Avatar: Fire and Ash” will also be available for a limited time.
There’s one thing people want to know when they see my first-gen Ray-Ban smart glasses, and it’s got nothing to do with AI, or cameras, or the surprisingly great open-ear audio they put out. They want to know what’s probably front-of-mind right now as you’re reading this: Do they have a screen in them? The answer? Sadly, no… until now.
At Meta Connect 2025, Meta finally unveiled its Ray-Ban Display smart glasses that, as you may have gathered from the name, have a screen in them. It doesn’t sound like much on the surface—we have screens everywhere, all the time. Too many of them, in fact. Having used them ahead of the unveiling, though, I regret to inform you that you will most likely want another screen in your life, whether you know it or not. But first, you probably want to know exactly what’s going on in this screen I speak of.
The answer? Apps, of course. The display, which is actually full-color and not monochrome like previous reporting suggested, acts as a heads-up display (HUD) for things like notifications, navigation, and even pictures and videos. For the full specs of that display, you can read the news companion to my hands-on here. For now, though, I want to focus on what that screen feels like. The answer? A little jarring at first.
While the Ray-Ban Display, which weigh 69 grams (about 10 more than the first-gen glasses without a screen), do their best not to shove a screen in front of your face, it’s still genuinely there, hovering like a real-life Clippy, waiting to distract you with a notification at a moment’s notice. And no matter what your feelings are about smart glasses with a screen, that’s a good thing, since the display is the whole reason you might spend $800 to own a pair. Once your eyes adjust to the screen (it took me a minute or so), you can get cracking on doing stuff. That’s where the Meta Neural Band comes in.
The Neural Band is Meta’s sEMG wristband, a piece of tech it’s been showing off for years that’s now been shrunk down to the size of a Whoop fitness band. It reads the electrical signals in your hand to register pinches, swipes, taps, and wrist turns as inputs for the glasses. I was worried at first that the wristband might feel clunky or too conspicuous on my body, but I can inform you that’s not the case—this is about as lightweight as it gets. The smart glasses also felt light and comfortable on my face despite being noticeably thicker than the first-gen Ray-Bans.
More important than being lightweight and subtle, though, is that it’s very responsive. Once the Neural Band was snug on my wrist (it was a little loose at first, but better after I adjusted it), using it to navigate the UI was fairly intuitive. An index-finger-and-thumb pinch is the equivalent of “select,” a middle-finger-and-thumb pinch is “back,” and for scrolling, you make a fist and then use your thumb like it’s a mouse made of flesh and bone over the top of said fist. It’s a bit of Vision Pro and a bit of Quest 3, but with no hand-tracking needed. I won’t lie to you: it feels like a bit of magic when it works fluidly.
Personally, I still ran into some variability with inputs—you may have to try an input once or twice before it registers—but I’d say it works well most of the time (at least much better than you’d expect from a literal first-of-its-kind device). I suspect the experience will only get more fluid over time, and even better once you really train yourself to navigate the UI properly. Not to mention the applications for the future! Meta is already planning a handwriting feature, though it won’t be available at launch. I got a firsthand look… kind of. I wasn’t able to use handwriting myself, but I watched a Meta rep use it, and it seemed to work, though I have no way of knowing how well until I try it for myself.
But enough about controls; let’s get to what you’re actually doing with them. I got to briefly experience pretty much everything the Meta Ray-Ban Display have to offer, and that includes the gamut of phone-adjacent features. One of my favorites is taking pictures in a POV mode, which superimposes a window on the glasses display showing exactly what you’re shooting, right in the lens—finally, no more guess-and-check when you’re snapping pics. Another “wow” moment here is the ability to pinch your fingers and twist your wrist (like you’re turning a dial) to zoom in. It’s a subtle thing, but you feel like a wizard when you can control a camera by just waving your hands around.
Another standout feature is navigation, which superimposes a map on the glasses display to show you where you’re going. Obviously, I was limited in testing how that feature works since I couldn’t wander off with the glasses during my demo, but the map was quite sharp and bright enough to be used outdoors (I did test this stuff in sunlight, and the 5,000-nit brightness was sufficient). Meta is leaving it up to you whether you use navigation while you’re in a vehicle or on a bike, but it will warn you of the dangers of looking at a screen if it detects that you’re moving quickly. It’s hard to say how distracting a HUD would be if you’re biking, and it’s something I plan to eventually test in full.
Another interesting feature you might actually use is video calling, which pulls up a video of the person you’re calling in the bottom-right corner. The interesting part is that the feed you send is POV, so the person on the other end sees what you’re looking at. It’s not something I’d use in most situations, since usually the person you’re calling wants to see you and not just what you’re looking at, but I can confirm that it works at least.
Speaking of just working, there’s also a live transcription feature that can listen in on your environment and superimpose what the other person is saying onto the display of the smart glasses. I had two thoughts when using this feature. The first is that it could be a game-changer for accessibility: if your hearing is impaired, being able to actually see a live transcript could be hugely helpful. The second is that such a feature could be great for translation, which is something Meta has already thought of here. I didn’t get a chance to use the smart glasses for translating another language, but the potential is there.
One problem I foresee here, though, is that the smart glasses may pick up other conversations happening nearby. Meta thought of this too and said that the microphones in the Ray-Ban Display actually beamform to focus just on who you’re looking at, and I did get a chance to test that out. While one Meta rep spoke to me in the room, others had their own conversations at a fairly normal volume. The results? Kind of mixed. While the transcription focused mostly on the person I was looking at, it still picked up stray words here and there. This feels like a bit of an inevitability in loud scenarios, but who knows? Maybe beamforming and AI can fill in the gaps.
If you’re looking for a killer feature of Meta’s Ray-Ban Display smart glasses, I’m not sure there necessarily is one, but one thing I do know is that the coupling of the glasses with the Neural Band should be nothing short of a game-changer. Navigating the UI in smart glasses has been a constant issue in the space, and until now, I hadn’t seen what I thought was a killer solution. Based on my early demos, though, I’d say Meta’s “brain-reading” wristband could be the breakthrough we’ve been waiting for—at least until hand or eye tracking at this scale becomes possible.
I’ll know more about how everything works when I get a chance to use the Meta Ray-Ban Display on my own, but for now, I’d say Meta is still clearly the frontrunner in the smart glasses race, and its head start just got pretty massive.