Things tend to move fast in the world of gadgets—blink your eyes and you’re liable to go from “What’s an iPhone?” to seeing one in the hands of everyone you’ve ever known. That rapid acceleration can be hard to identify in the moment, but when you’ve written about gadgets as extensively as I have, you start to see the early signs, and I’m here to tell you: if you’re a fan of smart glasses, it’s time to buckle up.
If smart glasses are just cropping up on your radar, I don’t blame you. While the form factor has been creeping along for years now, Meta’s entry into the equation with a pair of smart glasses that actually have a display has made quite a splash, especially after the company made the Meta Ray-Ban Display the highlight of its annual Connect conference. Not only that, but Meta now has its name attached to not one, but five pairs currently on sale—yes, five. And the thing is, it’s not just about Meta, nor is it just about Samsung and Apple, both of which are most likely in varying stages of releasing their own pairs of smart glasses.
Beneath all of those massive names, there are actually tons of companies that already have their own smart glasses for sale (with screens and without), some of them already multiple generations in. Companies like Rayneo, Viture, Even Realities, Solos, Brilliant Labs, Inmo, Rokid… Should I keep going? You see where I’m going with this—things are getting crowded all of a sudden, which in a lot of ways is great. The more companies making smart glasses, the more options you’ll have as a consumer, and theoretically, the more innovation you’ll see in the category.
I say “theoretically” here because, despite all of that attention, smart glasses haven’t been an easy nut to crack. While early startups have offered some surprisingly compelling use cases (navigation and open-ear audio, to name a couple of my favorites), the category still has a bit of an Apple Watch problem, which is to say, compelling hardware without the Holy Grail killer feature that makes people rush out in droves to buy their own pair.
And whether companies will be able to stick around long enough to figure those issues out fully is a whole different question. Without resources to burn like Google, Apple, Samsung, or Amazon, which also recently dipped its toes into the smart glasses world with a pair of delivery driver-focused glasses, hanging in won’t be an easy task. And when Google, Samsung, Apple, and friends do arrive on the scene… What then?
As surprisingly functional as smart glasses made by startups are right now, they don’t exactly have the robust ecosystem of mobile titans like Apple and Google, which own the platforms that smart glasses have to connect to. Why does that matter? Well, the more interconnectivity, the easier and more useful smart glasses become; suddenly, critical features like messaging, calling, and taking pictures feel seamless. Call me a cynic, but I doubt that third-party smart glasses will ever enjoy the same level of integration as a pair of Apple-made smart glasses connecting to an iPhone.
I’m leaving room to be wrong here, too. Maybe a startup will figure out something game-changing and beat Samsung or Apple to the punch, though the clock is ticking for that to happen, since Samsung and Google have already gone as far as to preview a prototype pair of smart glasses. Or maybe the tech titans pushing toward smart glasses just don’t have the chops to make things work, and a startup with some groundbreaking waveguides, or an awe-inspiring input system, will swoop in and steal their thunder. But if I’m being honest, that feels less likely to me, which is a bummer—to think that all this excitement could actually just be one big bubble.
Everyone wants a piece of the smart glasses pie. With Meta plowing full steam ahead, releasing three new pairs of smart glasses in one go at its Connect conference this year (including the Ray-Ban Display with a screen), other big-name competitors are following suit. Apple, for example, is reportedly attempting to expedite its first pair of smart glasses by diverting resources from the Vision Pro team. To give you a sense of how urgent that pivot is, Apple is reportedly de-prioritizing the development of a cheaper and lighter Vision Pro that people might actually, ya know, buy, to pursue said smart glasses.
Now, it looks like another smartphone titan is being swept up in that push, and the result could be Samsung-made smart glasses on shelves sooner than you think. According to a report from the Financial News in South Korea, we could see “Project Haean,” Samsung’s rumored (Google-powered) AR glasses, as soon as early next year. It’s hard to say how much stock to put in that rumor without any official timeline from Samsung, but if you’ll allow me to speculate for a moment, it does feel very possible.
On top of the palpable push toward smart glasses, there’s also solid evidence that both Google and Samsung are heavily invested in AR. This year at I/O, Google showed off a preview of its XR glasses, which have a feature set similar to Meta’s Ray-Ban Display. While there was no indication of when those glasses may see the light of day, it’s clear that this isn’t some pie-in-the-sky prototype. Gizmodo’s Senior Editor, Consumer Tech, Raymond Wong, got to try the XR glasses briefly, and while the demo only ran for a grand total of 90 seconds, they were at least real in the sense that Google was letting people try them on.
That’s all to say that new hardware is clearly in the pipeline, and while Samsung hasn’t gone as far as announcing anything official (it certainly hasn’t offered demos in a public setting), we have gotten some strong hints. At the I/O keynote in May, for example, Shahram Izadi, GM of Android XR, said, “We’re taking our partnership with Samsung to the next level by extending Android XR beyond headsets to glasses. We’re creating the software and reference hardware platform to enable the ecosystem to build great glasses alongside us. Our glasses prototypes are already being used by trusted testers.” Well, well, look at that; Samsung and Google sittin’ in a tree…
That’s not a clear yes or no, to be sure, but it’s not a no, either. No matter how this plays out (or when), I’m personally looking forward to Samsung entering the fray. Even more so than Google, Samsung has a chance to make smart glasses that feel truly useful. Given the breadth of its presence in phones and other hardware, it could offer tighter integration between mobile devices and smart glasses than Meta could ever dream of, and that’s huge for delivering a quality smart glasses experience right now, since they still rely on phones for all the big-time computing.
In the U.S., Samsung might not have the same weight as an Apple-scale ecosystem, but it’s still a huge player and would be the biggest one (sorry, Meta) that smart glasses with a screen have seen yet. Personally, my body (or my face, I guess) is ready for the varied and non-Meta-dominated smart glasses field that many of us have been waiting for.
As a nice digestif, we even had Qualcomm’s Snapdragon Summit, which gave us a preview of the future of computing from the inside. The wild part is, all of those conferences and events don’t even cover all the gadgets we liked in September, so we made this list to make sure you don’t miss any of last month’s wild releases.
Apple might not be the first of the big phone makers to go super-thin in a flagship phone (that distinction belongs to Samsung’s Galaxy S25 Edge), but this is still the first hyper-thin iPhone, and that’s a big deal. My colleague, Gizmodo’s Senior Editor, Consumer Tech, Raymond Wong, called it a “magic slab of glass,” and while I haven’t had a chance to use the phone in-depth myself, I did get to at least hold it, and I see the appeal.
It’s as thin and light as promised, and the fact that Apple managed to cram all of the compute power into the top portion of the phone and still deliver serviceable battery life really is a feat of engineering. You don’t need an iPhone that’s this thin and light, but once you have one in your hand, you’re going to be tempted to buy one, even if the camera is barebones.
After trying Insta360’s new action camera out, we’re going to have to add swordplay to our list of usual tests. In case you missed it, Gizmodo Staff Writer Kyle Barr tried out the Insta360 Go Ultra and, yes, it survived a blow from a sword, which is good news for anyone who’s bringing an action camera to a Renaissance fair or to a reenactment of the movie Hook.
It’s not just durability; the Insta360 Go Ultra can record in 4K at 60 fps and comes with a magnetic mount that allows you to fix the camera in a lot of places, including square in the middle of your chest if you’re wearing a shirt while filming, which you probably should be. If you’re looking for a high-res, portable action camera that can survive sword attacks, this is worth a look.
After several long years, Apple’s AirPods Pro 2 finally got a true numbered successor, and it was worth the wait. While the AirPods Pro 3 retain the same $250 starting price as the last generation, they get a few key upgrades, including better active noise cancellation, a redesigned fit, more ear tip sizes, and perhaps most importantly, two brand new capabilities: heart rate tracking and live translation.
Apple seems to be carving out a new identity for its AirPods with health features in particular, and if you’re at all interested in keeping tabs on your biometrics but don’t feel the need to strap an Apple Watch on your wrist, the AirPods Pro 3 could be the perfect gateway. The AirPods Pro 3 are proof that you don’t need a generational update every single year—you just need one that feels worth the anticipation.
Would you buy a mouse that doesn’t click? Sounds like a trick question, but in the case of Logitech’s G Pro X2 Superstrike, that question is kind of literal. This mouse uses haptics to simulate clicking, which sounds like a gimmick, but is actually useful if you’re a competitive gamer. According to Logitech, the architecture of its G Pro X2 Superstrike mouse (Haptic Inductive Trigger System, for anyone interested) offers 30 milliseconds lower latency than a mouse with an optical switch, which uses a beam of infrared light to determine when you press the button.
Are you fast enough to even take advantage of technology like this? Probably not, but the fact that you could is impressive, and using haptics in a mouse instead of real-life clicks is objectively interesting if nothing else.
Sure, the iPhone Air may have stolen the show, but the base iPhone 17 and the 17 Pro/17 Pro Max also had a lot to like. We called the base iPhone 17 the “best iPhone value in years” thanks to its 120Hz always-on display, its great battery life, and its excellent performance, while the 17 Pros also held it down with the longest battery life, the best performance, the best cameras, and a new “Cosmic Orange” color. Sure, the scratching didn’t help the fanfare, but you’re probably going to slap a case on these things anyway. If you’re in need of an iPhone upgrade, now may be the time.
What can I say about the Meta Ray-Ban Display? I’ve delved deeper and deeper into the burgeoning world of smart glasses over the last year, and Meta’s Ray-Ban Display (the company’s first pair of smart glasses with a screen in them) feels like the pair I’ve been waiting for.
They come with navigation abilities, message notifications, translation, a POV camera feature, and Instagram integration for watching Reels—and that’s on top of doing all the stuff that previous non-display smart glasses have done.
Sure, privacy problems abound, and they’re not quite a phone replacement yet, but based on my hands-on with them at Meta’s annual Connect conference, the Meta Ray-Ban Display are an exciting start and might just be the first pair of smart glasses you want to buy. Trust me, Meta’s Neural Band (a wristband that lets you control the smart glasses’ screen with your fingers) is just as magical as it sounds.
Wired security cameras might be a pain to set up, but they have one big advantage: a higher likelihood of staying powered up—no battery and no climbing ladders to charge them when they die. The Reolink Elite Floodlight WiFi security camera follows that formula, delivering fairly high-res 4K footage without requiring a subscription. It’s got a huge 180-degree FOV, too. You will have to buy a microSD card to store your footage—there’s no cloud storage here—but the simplicity will likely appeal to some.
If the iPhone 17 is the best-value iPhone in years, the Apple Watch SE 3 may take the title for the smartwatch side of things. For $250, you get the proverbial “greatest hits” from the Apple Watch feature set, including an always-on display, an S10 chip, and even Apple’s “double-tap” gesture. There’s also 32 hours of battery life, which may not be enough for people who need the most out of an Apple Watch (that’s what the Apple Watch Ultra 3 is for), but should be plenty for most.
We would have liked to see some new colors here, but still, a good value is hard to beat, and that’s a note that the Apple Watch SE 3 hits perfectly in tune.
It’s hard to believe, but we now live in a world where you have to account for generations of smart glasses. That’s great for variety’s sake, but for choosing which smart glasses to buy (in this case, which Ray-Ban-branded smart glasses in particular), things might get a little confusing. Having used both generations of Meta’s Ray-Ban AI smart glasses myself, I’m here to give you the guidance you need, though.
If Meta Connect had you considering taking the plunge into smart glasses for the first time, here’s everything you need to know before you go and drop several hundred real, non-Metaverse dollars on a pair.
The biggest thing (or one of them) you’re going to want to know is that the second generation of Ray-Ban Meta AI smart glasses improves greatly in the battery department. Meta claims that the Gen 2 have double the battery life of the Gen 1, which, no matter which way you spin it, is a major win for anyone who plans to wear their smart glasses for long periods.
I’ve already gotten a chance to use the second-gen Ray-Bans for about a week, and while I can’t fully declare that these achieve that lofty battery improvement quite yet, they do appear to be much improved. That’s thanks in part to what Meta says is a new “ultra-narrow steelcan” battery design that crams more battery life into a space that’s no bigger than the last generation’s. The case also gets a battery bump, going from 32 hours to 48 hours in the second-gen version.
This one is a no-brainer; Meta’s Gen 2 Ray-Bans are far and away the winner in the battery department.
An equally big improvement when it comes to the Meta Ray-Ban Gen 2 is in the video capabilities. While the first-gen Meta Ray-Bans top out at 1080p at 30 fps, the Meta Ray-Ban Gen 2 ratchets things up to 3K resolution. If you want to record at 60 fps, you’re limited to 1080p.
This should make the Meta Ray-Ban Gen 2 ideal for anyone who plans to record themselves in motion (i.e. while playing sports or riding a bike), since a higher fps is just generally better suited for those activities. As for still pictures, the news is less exciting. Both smart glasses have the same 12-megapixel ultra-wide camera sensor, which makes them a tie resolution-wise, gen over gen.
Still, it’s hard not to give Ray-Ban Meta Gen 2 the point here since the frame rate and resolution are such a drastic improvement over the original. If you’re going to use your smart glasses for video, there’s only one generation you need.
While battery life and video capabilities are where the two generations stand out from each other, things start to look fairly similar in feature set. Both pairs of smart glasses come with Meta AI, and while Meta recently announced some new features at its annual Connect conference, those are going to eventually roll out to all of its “AI glasses,” including the Meta Ray-Ban Gen 1 and existing Oakley smart glasses, including the HSTN and Vanguard.
Those features, in case you missed them, are hyperlapse, which condenses longer videos into a quicker timelapse, and slo-mo. Again, those features should look nicer on the new generation, though, thanks to the improved video capabilities. Conversation Focus, which helps you hear the person you’re talking to, is also slated to roll out on all of Meta’s AI smart glasses.
If you’re buying either pair of smart glasses for Meta AI, you should know that Meta’s voice assistant is still hit and miss in my experience. While it works well for simpler voice commands like “Hey Meta, play music,” anything more complex, like “Hey Meta, what kind of car is that?” might send its AI spiraling.
Another point of parity between these two glasses is in their size. The smart glasses, despite the difference in battery life between the two, have the same dimensions. There is a slight difference in weight—the second-gen glasses are about 2 grams heavier—but that’s hardly anything to write home about. Having used the first-gen Ray-Bans extensively and the Meta Ray-Ban Gen 2 for almost a week, I can’t really notice a difference.
This one is a tie, though points also go to the Meta Ray-Ban Gen 2 smart glasses for packing more battery into the same footprint.
As you’ve probably already guessed, the Ray-Ban Meta Gen 2 cost a little bit more than the original, given the improvements, but for the quality of life you get from longer battery life and higher-res video recording, I’d say that price bump feels pretty fair. Meta’s Ray-Ban Gen 1 are available starting at $329, while the second-gen version starts at $379. It’s up to you whether that $50 is worth it, but if you plan on wearing your smart glasses for extended periods, the choice kind of makes itself.
With any new device category comes a whole host of novel and sometimes exhaustingly complex questions. Smartphones, for example, no matter how mundane they seem right now, are still nagging us with existential quandaries. When should we use them? How should we use them? What in God’s name happens to us when we use them, which, last I checked, is literally all the time?
These are important questions, and most of us, even if we’re not spending all day ruminating on them, tackle the complexity in our own way, setting (or resetting) social norms for ourselves and other people as we trudge along. The only thing is, in my experience, we tend to ask these questions mostly in retrospect, which is to say after the cat (or phone, or smartwatch, or earth-shattering portal into the online world) is out of the proverbial bag. It’s easy to look back and say, “That was the time we should have thought about this,” and when I put Meta’s new smart glasses with a screen on, I knew that the time, for smart glasses in particular, was now—like, right f**king now.
In case you missed it, Meta finally unveiled the Meta Ray-Ban Display, which are its first smart glasses with an in-lens display. I flew out to Meta headquarters for its annual Connect conference to try them, and the second I put them on, it was clear: these are going to be big. It probably seems silly from the outside to make a declaration like that. We have screens everywhere all the time—in our hands, on our wrists, and sometimes, regrettably, in our toasters. Why would smart glasses be any different? On one hand, I get that skepticism, but sometimes function isn’t the issue; it’s form. And when it comes to smart glasses, there is no other form like it.
Meta’s Ray-Ban Display aren’t just another wearable. The screen inside them opens up an entirely new universe of capabilities. With these smart glasses and Meta’s wild new “Neural Band,” a wristband that reads the electrical signals in your arm and translates them to inputs, you’re able to do a lot of the stuff you normally do on your phone. You can receive and write messages, watch Reels on Instagram, take voice calls and video calls, record video and take pictures, and get turn-by-turn navigation. You can even transcribe conversations that are happening in real time. You’re doing this on your face in a way that you’ve never done it before—discreetly and, from my experience, fairly fluidly.
If there were any boundaries between you and a device, Meta’s Ray-Ban Display shrink them down to a gap that only an iPhone Air could slide through. It’s incredibly exciting in one way, because I can see Meta’s smart glasses being both useful and fun. The ability to swipe through a UI in front of my face by sliding my thumb around like some kind of computer cursor made of meat is wild and, at times, actually thrilling. While not everything works seamlessly yet, the door to smart glasses supremacy feels like it’s been swung wide open. You are going to want a pair of these smart glasses whether you know it or not. These are going to be popular, and as a result, potentially problematic.
We may have a solid grasp on where and when we’re supposed to use phones, but what happens when that “phone” in question becomes perfectly discreet, and the ability to use it becomes almost unnoticeable to those around us? When I use a smartphone, you can see me pick it up—you know there’s a device in my hand. When I use Meta’s Ray-Ban Display, however, there’s almost no indication. Yes, there’s a privacy light that tells outside people that a picture or video is being taken, but there’s also less than 2% light leakage through the lens, meaning you can’t tell when the screen inside the glasses is on. I certainly couldn’t tell when I watched others use them. It’s as ambient as any ambient computing I’ve witnessed so far.
I talked to Anshel Sag, a principal analyst at Moor Insights & Strategy who covers the wearable market, and he says the privacy framework around technology like this is still in flux.
“We are still very much in the infancy of the smart glasses, AI wearable, and AR privacy and etiquette era,” he said. “I think that the reality is that having a wearable with a camera on your face is going to change things, and there are going to be places where these things are banned explicitly.”
Some of those environments, Sag said, are private areas like bathrooms or locker rooms, but it could extend beyond just places where you might catch a glimpse of someone naked. Driving, for example, is a major question. Meta’s Ray-Ban Display have navigation built in, and while the company tells me that the feature is designed for walking right now, it’s not actually preventing anyone from using its smart glasses in the car. Instead, it will provide a warning before you do so by detecting what speed you’re moving at. Other companies like Amazon seem not to have even thought that navigating on smart glasses while driving could be a safety hazard at all. Early reports indicate that Amazon is plowing forward, making smart glasses that are specifically designed for its delivery drivers to use in a car.
While regulators like the NHTSA have issued warnings about people using VR headsets while driving (yes, people were actually doing that), it hasn’t, according to my research or knowledge, addressed the impact of smart glasses, which are much more likely—especially if they become widespread—to enter the equation while driving. I reached out to the NHTSA for comment, but have not yet received a response.
Privacy concerns shouldn’t just stem from the form factor, either. You also have to think about the company that’s making the thing you’re wearing on your face all the time and whether it has shown itself to be a good steward of your data and privacy. In Meta’s case? Well, without going into an entirely separate diatribe, I think it could do a lot better. And other companies that are also in hot pursuit of screen-clad glasses, like Google? Well, they haven’t been much better.
And makers of smart glasses shouldn’t be surprised if, when these things wind up on people’s faces, they get some shit for it. Google Glass, which came out in 2013, may seem like a different age, and in a lot of ways it is (people’s expectations for privacy are almost nonexistent now), but we also haven’t had to confront the idea of pervasive camera-clad wearables in a long time, so who’s to say things have really changed? Sag says, while he expects some backlash, it may not be like the Glasshole days of yore.
“I think there will be some backlash, but I don’t think it’s gonna be as bad as Google Glass,” he says. “Google Glass had such an invasive appearance. You know, it didn’t really look normal, so it really caught people’s attention more. And I think that’s really what has made these glasses more successful, is that they’re just inherently less intrusive in terms of appearance.”
I may not be an industry analyst, but I agree with Sag. I’m not sure there really will be a category-ending backlash like we saw back in the Google Glass days, and a part of me doesn’t want there to be. As I mentioned, I got a chance to use Meta’s Ray-Ban Display, and they all but knocked my socks off. These are the smart glasses that anyone interested in the form factor has been waiting for. What I really want is to be able to live in a world where we can all use them respectfully and responsibly, and one where the companies that are making them give us the same responsibility and respect back. But in my experience, the only way to get toward a more respectful, harmonious world is to try everything else first, and in this case, the first step might be your next pair of Ray-Bans.
Meta’s Ray-Bans are back with a new generation, and this time they’re finally giving people the one thing they really want—a screen. At Meta’s annual Connect developer conference, the company officially took the wraps off its Meta Ray-Ban Display, which are, as the name suggests, its first pair of AI-infused smart glasses to come with a full-color in-lens display.
The smart glasses, which still bear the same Ray-Ban branding, will cost $799 and are available for preorder today. As you might expect, they can do quite a few things that their predecessor can’t, including message notifications, turn-by-turn navigation, and showing you when queries to Meta AI are processing. There are several app integrations, including WhatsApp and Instagram, allowing you to watch Reels and make video calls natively in the glasses. One major upgrade on the message notifications front is that the Meta Ray-Ban Display will not be limited to only WhatsApp, meaning it will be able to show notifications on both iOS and Android devices.
That’s not the only major shift in this generation. Meta says its first-ever display has a 600 x 600 resolution and 20-degree field of view. The display is monocular, which means it’s only in one lens—at the bottom right-ish corner—and has a refresh rate of 90Hz. Brightness goes up to 5,000 nits and as low as 30 nits, which makes them usable outdoors in full light. One of the coolest parts of the display is that Meta claims that there’s less than 2% light leakage, which means that you can’t see when someone has their display activated.
Speaking of light leakage, all of Meta’s Ray-Ban Display smart glasses will come with transition lenses. On one hand, that feels like a weird choice, but it also makes sense since this is a gadget you’re going to want to use indoors as well as out, and for $800, you should be able to use them for as long as you like without having to take them off. “As long as you like,” in this case, will be no more than 6 hours, according to Meta, which is a lot longer than I was expecting. That solid battery life is thanks in part to what Meta is calling “ultra-narrow steelcan batteries.” I wish I knew exactly what that meant, but for now, I can only look forward to getting more of a deep dive in the future.
Glasses are only half the appeal of Meta’s Ray-Ban Display, though. The other half is the sEMG (surface electromyography) wristband that you use to control the UI in the glasses.
The Meta Neural Band, as Meta is calling it, is arguably the most innovative part of its new Ray-Ban package, since no other product like it exists on a commercial scale. Outside of being a first, it also offers a potential solution to a problem that no other maker of smart glasses has quite solved—that problem being, how the hell do you actually use smart glasses? While most smart glasses (Meta’s first-gen Ray-Bans included) have a voice assistant for shouting commands like “take a picture” and a fairly simple touch-sensitive bar for physical inputs (i.e., pause/play), neither is ideal in every situation.
The fact is, adding a screen complicates smart glasses—the more you can do, the more you’ll need to convey to your glasses, and in order to do that, you need an input system as nuanced as the eyewear itself. Not only that, but if you want to use your smart glasses discreetly (or in a normal fashion at all, really), shouting into a crowded subway car is less than ideal. With the Neural Band, however, you can navigate the UI discreetly by pinching, swiping, and tapping through various menus in the smart glasses. My favorite gesture is a pinch to zoom for taking photos and videos. It’s Vision Pro-esque, but all done without cameras. In case you’re wondering, yes, the Neural Band is included in that $800 cost.
Like previous iterations of Meta’s Ray-Ban glasses, this year’s edition will also come equipped with cameras and speakers. Camera-wise, Meta is going with a 12-megapixel ultra-wide sensor that can also capture video at 30 fps in a 1,440 x 1,920 resolution. There’s also a 3x digital zoom.
The camera is used for the computer vision in the glasses, aka Meta AI, as well. Despite being a fan of Meta’s Ray-Bans (they’re the only device I ever want to take calls with), Meta AI has been a weak spot for me. While the voice assistant works well most of the time for basic stuff like taking pictures/videos, playing Spotify, and asking what your battery life is, the heavier AI lifting is hit-or-miss at best. Whether Meta’s new smart glasses fix that remains to be seen since I haven’t had a chance to use them, but I’m hoping for an upgrade here.
But even if AI is still finicky and the cameras and audio are about the same, these smart glasses still have a freaking screen. That’s a big step forward, even if functionality is limited for now. When people ask me about my first-gen Ray-Bans, the first thing they want to know is whether they have a display in them, and they’re inevitably very disappointed when I have to let them down. Now, I’ll actually have something to show them, and if Meta’s wristband works, I’ll even have something to show them that only Meta can provide.
Smart glasses are an exciting idea right now. In theory, they’re a new gadget that does lots of the stuff that our phones do, but in an always-there form factor. They can take pictures, make calls, translate menus, and—if the tech and the investment get there—they might slap a screen right onto your eyeballs for notifications, navigation, and maybe even augmented reality à la Pokémon Go.
I say “in theory” because just because smart glasses can do all of those things on paper doesn’t mean they can do them well, and if they can’t do them well… they may as well not do them at all. We’re still in the early stages of the grand ascension of smart glasses as a device category, but a brave few are venturing to do it all right now, and one of those few (at least in the U.S.) is a company based in China called Rokid (pronounced rock-id).
I got a chance to try Rokid’s plainly named Rokid Glasses, and while there was a lot that intrigued me, I can say for certain that the kinks are still being worked out. One thing that these smart glasses have that big-time entrants from the likes of Meta and its Ray-Bans don’t have is a screen. That screen is a very simple dual micro-LED display that only shows things in a very Matrix-style green. I got to use the Rokid Glasses for 15 minutes and was surprised at how sharp the screen was, even if the display functions were fairly basic (and yes, you can see the screen from outside the glasses). That sharpness matters, because it’s crucial for some of the things that make the Rokid Glasses unique.
One of those distinct capabilities is a teleprompter feature that displays a presentation in front of your eyes, so you can read along and not sound like a total moron during your big keynote. What I found very cool is that the Rokid Glasses actually use the onboard microphone to listen to your words and scroll the prompter in stride with them. Even in a crowded room with lots of noise, the feature worked smoothly, which is no small feat.
Another screen-centric feature I got to try was translation, which—though my conversation was fairly brief—seemed to work better than you’d expect. My demo companion spoke to me in Mandarin Chinese, and the Rokid Glasses were able to translate his speech in small snippets and slap them onto the screen. Again, the microphone did all of this in a noisy room, which was legitimately impressive. The microphones on Rokid’s smart glasses work so well that I’m pretty sure you could use them for spying—they picked up bits of conversations across the room that I wasn’t even able to make out with my own ears. Cool! Also scary!
To use all of this stuff, it’s best to couple the Rokid Glasses with an app (Android-only right now) where everything you’re doing is displayed. As sharp as the screen is, it’s also quite small, and words are pushed off when new information arrives at a fairly quick pace. If you need to see something, it’s best to have the app ready, lest you ask someone to repeat themselves multiple times. And in case you’re wondering, you can control the display from the smart glasses by swiping the right arm and using a tap to select things like settings, translation, and other stuff, but it’s not exactly the smoothest experience. That’s why voice assistants—for Rokid and any company making smart glasses right now—are also critical. That brings me to a not-so-bright spot.
The Rokid Glasses voice assistant, which is supposed to activate with the wake phrase “Hi, Rokid,” was basically broken. No matter how many times I screamed “Hi, Rokid” into the smart glasses, it wouldn’t answer my calls. Others around me were having the same issue, which is not great from a usability perspective. The interesting thing is that when a native Mandarin-speaking representative said the phrase, it seemed to work every time. American English speakers, not so much. I thought maybe it was the loud, crowded room at first, but after noticing that strange quirk, I think it may be a problem with how the voice assistant is trained. I can’t say for sure without testing the Rokid Glasses more thoroughly, but it’s definitely a concern for anyone buying a pair in the U.S.
Like Meta’s Ray-Bans, the Rokid Glasses can also use AI for computer vision-based tasks like asking your glasses to read a menu in a different language using the built-in camera. I wasn’t able to launch that task myself, given the aforementioned voice assistant issues, but when a Rokid representative asked the smart glasses to translate a menu in Finnish, it did so (at least I think) fairly well, displaying translations of the Finnish words in the Rokid app. Again, I’d need to test this feature more thoroughly in a better environment to verify the translation separately and determine how well (or terribly) it actually works on a consistent basis.
As long as we’re talking about computer vision, I was pleasantly surprised with the camera, which is a 12-megapixel sensor from Sony. Just one sensor, not two, though. I would say it’s on par with Meta’s Ray-Bans, but I didn’t get to test video recording in my demo. I wouldn’t try to use the Rokid Glasses to win any photo contests, but then again, I wouldn’t do that with any pair of smart glasses.
I won’t know until I get to try Rokid Glasses for a longer period, but I get the sense that they’re smart glasses with some peaks and valleys. Translation could be impressive, as could computer vision, and they’re incredibly light (as light as Ray-Bans), but if there isn’t a functional, English-ready voice assistant to tie it all together, that’d be a big problem for anyone in the U.S. who wants to buy a pair. That’s potentially not the best news for smart glasses enthusiasts in America, but I assume Mark Zuckerberg would welcome that quirk with open arms.
Two college students have used Meta’s smart glasses to build a tool that quickly identifies any stranger walking by and brings up that person’s sensitive information, including their home address and contact information, according to a demonstration video posted to Instagram. And while the creators say they have no plans to release the code for their project, the demo gives us a peek at humanity’s very likely future—a future that used to be confined to dystopian sci-fi movies.
The two people behind the project, AnhPhu Nguyen and Caine Ardayfio, are computer science students at Harvard who often post their tech experiments on social media, including 3D-printed images and wearable flamethrowers. But it’s their latest experiment, first spotted by 404 Media, that’s probably going to make a lot of people feel uneasy.
An Instagram video posted by Nguyen explains how the two men built a program that feeds the visual information from Meta Ray-Ban smart glasses into facial recognition tools like Pimeyes, which have essentially scraped the entire web to identify where that person’s face shows up online. From there, a large language model infers the likely name and other details about that person. That name is then fed to various websites that can reveal the person’s home address, phone number, occupation or other organizational affiliations, and even the names of relatives.
“To use it, you just put the glasses on, then as you walk by people, the glasses will detect when somebody’s face is in frame. This photo is used to analyze them, and after a few seconds, their personal information pops up on your phone,” Nguyen explains in the Instagram video.
Nguyen and Ardayfio call their project I-XRAY, and it’s pretty stunning how much information they’re able to pull up in a short amount of time. They’re quick to point out that many of these tools have only become widely available in the past few years. For example, Meta’s smart glasses with camera capabilities that look like regular eyeglasses were only released last year. And the kind of LLM data extraction they’re achieving has only been possible in the past two years. Even the ability to look up partial Social Security numbers (thanks to all those data leaks you read about every day now) has only been possible at the consumer level since 2023.
As you can see in the video, they also approached strangers and acted like they knew those people from elsewhere after instantly looking up their information.
“The system leverages the ability of LLMs to understand, process, and compile vast amounts of information from diverse sources–inferring relationships between online sources, such as linking a name from one article to another, and logically parsing a person’s identity and personal details through text,” the creators say in an explanation document posted to Google Drive. “This synergy between LLMs and reverse face search allows for fully automatic and comprehensive data extraction that was previously not possible with traditional methods alone.”
The creators list the tools they used in their release, noting that anyone can request that those services remove their information. For reverse facial search engines, there’s Pimeyes and Facecheck ID. For search engines that include personal information there’s FastPeopleSearch, CheckThem, and Instant Checkmate. As for the social security number information, there’s no way to get that stuff removed, so the students recommend freezing your credit.
The students didn’t immediately respond to questions from Gizmodo on Wednesday morning. Meta also didn’t respond to a request for comment. We’ll update this post if we hear back. But in the meantime, we should all probably get ready for this kind of tech to emerge more widely since this kind of technological mash-up feels inevitable at this point—especially if any of the new smart glasses that guys like Mark Zuckerberg love so much really become mainstream.
It may take quite a while for the biggest tech companies to get behind it, but just as we saw OpenAI essentially fire the starting gun for consumer-facing generative AI, any small upstart could plausibly make this product happen and start the dominoes falling for larger tech companies to get this future started. Let’s cross our fingers and hope for the best, given the privacy implications. It really feels like nobody will have any semblance of anonymity in public once this ball gets rolling.