We hope you like more AI in your Gmail inbox, because Google is “bringing Gmail into the Gemini era.” It’ll be on by default, but the good news is that you can disable it. The threat, issued on Thursday by Google’s VP of product Blake Barnes, sees the company expanding the reach of a trio of inbox AI features that were previously only available to Google AI Pro and Ultra subscribers. You know – the folks who actually wanted the stuff. Now, Barnes explained, everyone will be getting a dose of Google’s much-derided AI Overviews in their Gmail inbox. The Register
After several shifting rumors, it now looks almost certain that Samsung will launch its Galaxy S26 Ultra later in the year than its predecessor, the Galaxy S25. A new leak from a highly reputable tipster confirms previous reports about a February launch date for the upcoming flagship. Long-time smartphone leaker Evan Blass posted on X that Samsung will hold its Galaxy S26 Unpacked event on February 25th. Blass quoted Ice Universe, who originally broke the news, stating that the date was “100% correct.” Forbes
Repeat shoplifters are being caught by face recognition cameras and prevented from stealing goods or abusing staff a record 1,400 times a day, analysis shows. More than 100 retailers, including Sainsbury’s, Budgens, Sports Direct, Iceland and Home Bargains, have deployed the cameras operated by Facewatch in thousands of stores across England and Wales. The system uses AI to cross-reference faces against a watchlist of prolific and repeat offenders shared by local stores. Telegraph
DS has unveiled a sporty reimagining of its No4 hatchback, which showcases some new features coming to its models in the future, according to design boss Thierry Métroz. Called the No4 Taylor Made (because it was partially penned by DS Formula E driver Taylor Barnard), the concept is based on the standard EV but sits lower, has a wider track and gains a host of aero features. That points to the upmarket French brand’s racing influence, said Métroz, but the concept remains a proposition “for the road, not the track”. Autocar
The UK government says Elon Musk’s platform X limiting Grok AI image edits to paid users is “insulting” to victims of misogyny and sexual violence. Speaking on Friday, Downing Street said the move “simply turns an AI feature that allows the creation of unlawful images into a premium service”. It follows significant backlash after Grok digitally altered images of others by undressing them – something X says it will now only do for those who pay a monthly fee. BBC
The Honor Magic8 Pro was announced in China in October, then was released in other parts of Asia in November, and now it’s officially launching in the UK, where it’s finally available to purchase for £1,099 in Sunrise Gold, Sky Cyan, and black. It will be offered by Amazon, Argos, EE, Virgin Media O2, Vodafone, Three, Tesco Mobile, Currys, Very, and John Lewis. Available in a single 12/512 GB version, the Honor Magic8 Pro comes with a £200 discount for early birds. GSM Arena
Dutch designer Sabine Marcelis has created an updated version of the popular Varmblixt lights she designed for IKEA, revamping the company’s most-sold lamp with a matte white finish and a smart bulb. Originally launched in 2023 as part of a 20-piece collection, the bulbous, doughnut-shaped lamp quickly sold out and became a viral hit for its playful and tactile form.
Now, IKEA is presenting an updated version of the lamp as part of its first-ever presentation at the Consumer Electronics Show (CES) in Las Vegas, complete with a dimmable, colour-changing bulb. Dezeen
Zoë Schiffer: Yeah, I think that one thing that everyone can agree on is that Nvidia is undoubtedly one of the companies that has gone all in during this AI acceleration moment. For better or worse, about 90 percent of Nvidia’s sales, which were once dominated by chips for personal gaming computers, now come from its data center business, and it feels like every time one of these partnerships between OpenAI and another company is announced, Nvidia’s in there somewhere. It just feels like it’s attached to everyone else in this industry at this point.
Max Zeff: Yeah, it’s done a great job of infusing itself with every AI company, but also, I mean, that’s been a major concern. There’s been a lot of talk of these circular deals where Nvidia really depends on a lot of these startups that it’s also funding. It’s a customer, it’s an investor. Nvidia is so wrapped up in this. So I guess in that way, it’s not that surprising that Jensen is defending the AI bubble constantly now.
Zoë Schiffer: Yeah. It’s also worth saying that one of the fears that people who have the fear of the AI bubble will talk about is the fact that the GPUs are the majority of the cost of building out a data center, and they need to be replaced, what, every three years? Nvidia releases new chips and they’re cutting edge, and companies need to buy them in order to compete. I think the fear is that that renewal cycle isn’t quite factored into the pricing, but as long as people continue to buy chips, what Jensen is saying is, “No, no, we’re insulated right now.”
Max Zeff: Right. We’ll see if that’s really true though.
Zoë Schiffer: One more story before we go to break, and to get through this one, we both have to be extra professional. Which we always are, Max, but just a little extra. You will see what I mean. WIRED contributor Mattha Busby reported on how two young Mormons created an app to help other men break their porn addiction and gooning habits. I’m going to be real. I had never heard this term before reading this story, and I was shocked. OK, if you’re not familiar with gooning, it’s basically just another word for edging. That is long hours of masturbation without release. This app called Relay was created by 27-year-old Chandler Rogers with the mission of providing his Gen Z peers a way to stop doing this and to generally escape from the clutches of porn. I have some other ideas. I feel like go outside, talk to a human, but I don’t want to be mean, because I do feel like this could be really difficult for people.
Despite its excessive spending on data centers with no clear path to revenue generation in front of it, it seemed that if OpenAI had just one thing it could count on, it was audience capture. ChatGPT seemed like it would get the brand verbification treatment, being the term people used to reference AI. Now it seems like that might be slipping away. Since the release of Google’s Gemini 3 model, it’s like all anyone on the AI-obsessed corners of the web can talk about is how much better it is than ChatGPT.
Marc Benioff, the CEO of Salesforce and longtime ChatGPT fanboy, is perhaps the loudest convert out there. On X, the exec said, “Holy shit. I’ve used ChatGPT every day for 3 years. Just spent 2 hours on Gemini 3. I’m not going back.” He called the improvement of the model over past versions “insane,” claiming that “everything is sharper and faster.”
He’s not alone in that assessment. Former OpenAI co-founder Andrej Karpathy called Gemini 3 “clearly a tier 1 LLM” with “very solid daily driver potential.” Stripe CEO Patrick Collison went out of his way to praise Google’s latest release, too, which is noteworthy given Stripe’s partnership with OpenAI to build AI-driven transactions. Apparently, what he saw with Gemini was too hard not to comment on.
The feedback from the C-suites around the tech world follows weeks of buzz over on AI Twitter that Gemini was going to be a game-changer. It certainly got presented as such right out of the gate, as Google made a point to highlight how its latest model topped just about every benchmarking test that was thrown at it (though your mileage may vary on just how meaningful any of those are).
But even the folks behind the benchmark measures appear to be impressed. According to The Verge, the cofounder and CTO of AI benchmarking firm LMArena, Wei-Lin Chiang, said that the release of Gemini 3 represents “more than a leaderboard shuffle” and “illustrates that the AI arms race is being shaped by models that can reason more abstractly, generalize more consistently, and deliver dependable results across an increasingly diverse set of real-world evaluations.”
Google’s resurgence in the AI space could not come at a worse time for OpenAI, which currently cannot shake questions from skeptics who are unclear on how the company is ever going to make good on its multi-billion-dollar financial commitments. The company has been viewed as a linchpin of the AI industry, and that industry has increasingly received scrutiny for what seems to be some circular investments that may be artificially propping up the entire economy. Now it seems that even its image as the ultimate innovator in that space is in question, and it has a new problem: the fact that Google can definitely outspend it without worrying nearly as much about profitability problems.
Matter 1.5 adds support for security cameras, which could vastly improve a seriously fractured landscape, letting you easily add and access your cameras on whatever platform you choose. It’s also something that can potentially be delivered in a software update, so some of the cameras you already own might get Matter support.
You may be worrying about limitations, but the supported feature list is impressive, including video and audio streaming, two-way communication, local and remote access, multiple streams, pan-tilt-zoom controls, and both detection and privacy zones. There’s also support for continuous or event-based recording, either locally or to the cloud. What it won’t handle is how that storage is managed, meaning some camera manufacturers will still require you to use their cloud-based subscription models.
Pleasingly, there are no limitations on resolution, unlike Apple HomeKit Secure Video, or restrictions on AI detection features. Matter is using WebRTC technology, with remote access handled via the STUN and TURN protocols, meaning that manufacturers can choose to implement end-to-end encryption for footage. TCP transport support is designed to allow more efficient and reliable transmission of the large amounts of data that video cameras produce, which should reduce the load on your Wi-Fi and the impact on camera battery life.
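The spec itself is the authority on how this will work in practice, but as a rough illustration of how STUN and TURN typically figure into a WebRTC video connection, here is a minimal sketch using the standard browser RTCPeerConnection API. It is a generic WebRTC example, not Matter’s camera protocol: the server URLs, credentials, and the offer-exchange step are placeholder assumptions.

```typescript
// Generic WebRTC viewer sketch: STUN helps discover a public address for a
// direct connection; TURN relays media when no direct path exists.
// Servers and credentials below are placeholders; Matter's actual camera
// signaling is defined by the 1.5 spec and is not shown here.
async function watchCamera(videoElement: HTMLVideoElement) {
  const peer = new RTCPeerConnection({
    iceServers: [
      { urls: "stun:stun.example.com:3478" }, // address discovery (assumed host)
      {
        urls: "turn:turn.example.com:3478",   // relay fallback (assumed host)
        username: "viewer",
        credential: "placeholder-secret",
      },
    ],
  });

  // Show the camera's stream once its video track arrives.
  peer.ontrack = (event) => {
    videoElement.srcObject = event.streams[0];
  };

  // We only want to receive video, not send any.
  peer.addTransceiver("video", { direction: "recvonly" });

  const offer = await peer.createOffer();
  await peer.setLocalDescription(offer);
  // Sending the offer to the camera and handling its answer are omitted;
  // that exchange is exactly what an ecosystem standard has to define.
}
```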
While this is very exciting news and the potential backwards compatibility is laudable, there’s no telling when you’ll see it in a camera in your home. The big trio of Apple, Amazon, and Google has yet to announce any plans to adopt Matter in their cameras.
Matter 1.5 isn’t just about cameras, though—it also revamps support for closures, from garage doors to smart window shades, allowing for different motion types and configurations. There’s soil sensor support, too, to measure moisture and temperature and potentially trigger Matter-based water valves and irrigation systems.
Enhanced energy management features are the final addition. Matter 1.5 lets devices exchange data on energy pricing, tariffs, and grid operation, so you can potentially get a picture of what your gadgets really cost in energy use, money, and carbon impact. EV charging has also been bolstered, with state-of-charge reporting and bi-directional charging that could enable vehicle-to-grid schemes in the future.
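As a back-of-the-envelope illustration of what that pricing and grid data makes possible, here is a small sketch; the tariff and carbon-intensity numbers are made-up placeholders, and nothing here reflects Matter’s actual data model.

```typescript
// Hypothetical readings a smart plug or EV charger might report under an
// energy-aware scheme: energy used, the tariff in effect, and grid carbon
// intensity at the time. All numbers below are invented for illustration.
interface EnergyReading {
  kWh: number;             // energy consumed in the interval
  dollarsPerKWh: number;   // reported tariff for that interval
  gramsCO2PerKWh: number;  // reported grid carbon intensity
}

function summarize(readings: EnergyReading[]) {
  const dollars = readings.reduce((sum, r) => sum + r.kWh * r.dollarsPerKWh, 0);
  const kgCO2 =
    readings.reduce((sum, r) => sum + r.kWh * r.gramsCO2PerKWh, 0) / 1000;
  return { dollars, kgCO2 };
}

// Example: 1.2 kWh off-peak, then 0.8 kWh at a pricier, dirtier peak hour.
console.log(
  summarize([
    { kWh: 1.2, dollarsPerKWh: 0.18, gramsCO2PerKWh: 120 },
    { kWh: 0.8, dollarsPerKWh: 0.32, gramsCO2PerKWh: 300 },
  ])
);
// -> roughly { dollars: 0.47, kgCO2: 0.38 }
```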
While the Matter 1.5 spec is now available, it will take developers a while to adopt it and get their devices certified by the CSA. Expect some announcements at CES 2026. —Simon Hill
Google’s Gemini Rolls Out on Android Auto
Google has been gradually replacing its long-lived Google Assistant with the souped-up Gemini AI chatbot on all its platforms for the past year. After deploying it on its Wear OS smartwatches and, more recently, adding it directly to Google Maps, the company is bringing it to Android Auto. Google says the rollout will take place over the coming months for any Android Auto users who have upgraded from Google Assistant to Gemini on their phones.
Google has introduced Gemini 3, its smartest artificial intelligence model to date, with cutting-edge reasoning, multimodal, and coding skills. As talk of an AI bubble grows, the company is keen to stress that its latest release is more than just a clever model and chatbot—it’s a way of improving Google’s existing products, including its lucrative search business, starting today.
“We are the engine room of Google, and we’re plugging in AI everywhere now,” Demis Hassabis, CEO of Google DeepMind, an AI-focused subsidiary of Google’s parent company, Alphabet, told WIRED in an interview ahead of the announcement.
Hassabis admits that the AI market appears inflated, with a number of unproven startups receiving multibillion-dollar valuations. Google and other AI firms are also investing billions in building out new data centers to train and run AI models, sparking fears of a potential crash.
But even if the AI bubble bursts, Hassabis thinks Google is insulated. The company is already using AI to enhance products like Google Maps, Gmail, and Search. “In the downside scenario, we will lean more on that,” Hassabis says. “In the upside scenario, I think we’ve got the broadest portfolio and the most pioneering research.”
Google is also using AI to build popular new tools like NotebookLM, which can auto-generate podcasts from written materials, and AI Studio, which can prototype applications with AI. It’s even exploring embedding the technology into areas like gaming and robotics, which Hassabis says could pay huge dividends in years to come, regardless of what happens in the wider market.
Google is making Gemini 3 available today through the Gemini app and in AI Overviews, a Google Search feature that synthesizes information alongside regular search results. In demos, the company showed that some Google queries, like a request for information about the three-body problem in physics, will prompt Gemini 3 to automatically generate a custom interactive visualization on the fly.
Robby Stein, vice president of product for Google Search, said at a briefing ahead of the launch that the company has seen “double-digit” increases in queries phrased in natural language, which are most likely targeted at AI Overviews, year over year. The company has also seen a 70 percent spike in visual search, which relies on Gemini’s ability to analyze photos.
Despite investing heavily in AI and making key breakthroughs, including inventing the transformer model that powers most large language models, Google was shaken by the sudden rise of ChatGPT in 2022. The chatbot not only vaulted OpenAI to center stage when it came to AI research; it also challenged Google’s core business by offering a new and potentially easier way to search the web.
The developers of the big generative AI chatbots are continuing to push out new features at a rapid rate, as they bid to make sure their bot is the one you turn to whenever you need some assistance from artificial intelligence.
One of the latest updates to Google Gemini gives you the ability to set up scheduled actions. These are exactly what they sound like: Tasks that you can get Google Gemini to run automatically, on a schedule. Maybe you want a weather and news report every morning at 7 am, or perhaps you want a meal suggestion every evening at 7 pm. Anything you can already get Gemini to do, you can schedule.
It brings Gemini up to speed in this regard with the ChatGPT app, which introduced scheduled tasks several months ago. The idea here is more or less the same: The bot can carry out your commands at a specific point in the future, and keep repeating them if you need to. Here’s how the feature works on both platforms.
Using Scheduled Actions in Gemini
Editing a scheduled action in Gemini. David Nield
At the time of writing, this requires a subscription to Google’s AI service, which starts at $20 a month for Google AI Pro. The chatbot can keep track of up to 10 scheduled actions at once, so you need to be quite selective about how you use it. You can use scheduled actions in Gemini on the web, and in the mobile apps for Android and iOS.
All you need to do to create a scheduled action in Gemini is to describe it, and include the scheduling details in the prompt. For example, you might tell Gemini to “generate an image of a cat playing with a ball of yarn, every Monday at 12 pm,” or “give me a general knowledge trivia question every evening at 7 pm.”
Scheduled actions can be set to happen once (like next Friday at 3 pm), so something happens on a specific day at a specific time. Alternatively, your actions can run on a recurring daily, weekly, or monthly basis. They can’t be set on a more complicated cadence (such as every second Tuesday in the month), or surprise you at random.
Gemini should recognize that you’ve asked it to schedule something, and will present a recap: What you’ve asked it to do, when, and how frequently. Assuming it’s got all of this information correct, you don’t need to do anything else. The action runs regardless of whether you have Gemini open at the time, and you’ll be alerted to an action running by a notification on your devices (if you’ve got them turned on) and an email.
The latest version of the Google Nest Cam Outdoor (wired, 2nd gen)—yeah, that’s the name—is a real Jekyll and Hyde of a product. The hardware and software interface are expertly crafted and a delight to use. But once you start looking more closely at Google’s AI-forward security camera, it gets ugly and annoying.
The $150 Nest Cam Outdoor’s biggest problem is that it’s not cheap to use. As with Ring and Arlo, Google’s security camera subscription plans have gotten more expensive than ever in recent years, pressing at the boundaries of what’s affordable for people at a time when everything else is also harder to pay for. And until now, they’ve done so without adding any tangible benefit.
The calculus has changed a bit for Google. The company’s subscription service, now called Google Home Premium instead of Nest Aware, has expanded beyond its core product—a month or two of cloud video storage—to become a full-on smart home suite, complete with Gemini as a buddy. I didn’t get to test Gemini’s smart speaker features, as they’re in the midst of a timid early access rollout I’m not part of, but I did get to test the AI portion of this new experience that’s come to Google’s cameras. And so far, at least, it is very much not worth $10 or $20 a month.
Google Nest Cam Outdoor (wired, 2nd Gen, 2025)
Great camera hardware is hindered by subscription features that aren’t worth their asking price.
Pros:
Searchable video history
Up to 60 days of video storage
Clear and crisp video
No need to worry about battery life
Easy to install

Cons:
Too many paywalled features
Inaccurate AI summaries
AI notifications aren’t that useful
“Wired” means plugging into an outlet
Footage is constantly sent to Google
As bad as the AI features are, there are real things to like about the Nest Cam Outdoor, especially if you’ve enjoyed these cameras in the past. Its video feed, now in 2K resolution, is sharper than ever and delivers accurate colors by day and crisp, black-and-white infrared-lit images by night, making it easy to tell who someone is on camera. It records HDR footage at 30 fps and has a broad 152-degree diagonal field of view. That’s up from the 130-degree FOV of its battery-powered predecessor and makes it better for covering a large area like my backyard. Because the camera connects to the base magnetically, it’s very easy to point it where you want to. It’s also just a nice-looking piece of hardware, even if Google hasn’t really updated its appearance in many years.
There’s no floodlight on the new Nest Cam Outdoor—instead, a pair of infrared LEDs light up the area as far as 20 feet in front of the camera at night. It’s got a speaker and microphone inside so you can chat with people via Google Home on an iOS or Android device. Its microphone does a good job picking up voices on the other end, and its speaker is clear, but not any louder than those of other cameras like this.
Installation is about as dead simple as an outdoor wired camera’s can be. You don’t connect this product directly to your house; a short cable sprouts from the device itself and runs through a magnetic base that you mount to your exterior wall using a couple of screws. You plug that short cable into a longer one, which you’ll then need to route through a window, door, or hole in the wall to an interior outlet. You can also plug it into an outdoor outlet, of course, if you’re not concerned about it being so easy to get to. Either way, it’s not as elegantly wire-free as battery-powered or hardwired cameras are, but at least it’s easy to set up.
As for its software, you’ll configure and use the camera via the clean, easy-to-navigate Google Home app. There are several features standard on security cameras like this; you can set up specific zones for different recording and notification behaviors on the screen, or crop the image closer and keep it that way. If you want to talk to someone in sight of the camera, you can do that, and it’ll come through nice and clear. It’s easy to poke through recorded events, which sit just below the camera’s live feed in the app. The camera’s settings are few but useful, including options to configure night vision or rotate the chassis 180 degrees if it’s mounted upside-down.
The Nest Cam Outdoor, with or without a paid subscription, does the everyday security camera things well. It never took more than 10 seconds for the Google Home app to notify me when something happened in my backyard, and it was generally good at identifying animals and people—although there’s an asterisk on that animal part, which I’ll get to later. It also does something I wish every camera did: it stops sending notifications if it detects the same kind of event repeatedly in a short span of time, so your phone won’t just buzz incessantly when someone is doing yard work. While I wish there were a way to tweak how this works or turn it off entirely, it’s a welcome feature.
The product does lack a few features that are common on this type of camera. Unlike the similarly priced Reolink Altas and the cheaper Ring Outdoor Cam, it has no siren, nor do you get the option to black out sections of the image—for example, if you don’t want the camera to record your neighbors’ houses. The most disappointing thing is that Google continues to refuse to offer local storage. You get six hours of cloud-based video history—that is, you can see any clips the camera recorded in the past six hours, which is double what the company had offered in the past and still not enough to make up for the omission of local storage. Anything more, and you’re on the hook for a subscription plan that’s only cheaper than streaming TV because streaming costs so much now.
Google has always lost me with its security camera storage approach, and that’s not just because of the price—$10 a month for 30 days of history and $20 a month for 60 days’ worth is heftier than Ring’s subscriptions, though roughly in line with what Arlo asks for. It’s fine for companies to charge for cloud storage, but only if there are other options—see the microSD card slots of the old Netatmo Presence or the Reolink Elite, or hub storage approaches like Eufy’s HomeBase. In all of those cases, you can browse your local storage via those companies’ apps. In the case of both Netatmo and Reolink, even if you’re having trouble seeing recordings in the app, you can always snatch the microSD card and look at recordings on your computer. The very existence of all of that as free, baseline features makes Google’s cloud subscription-only approach seem deeply cynical.
Of course, as I said above, there’s more to Google Home Premium than just cloud video storage. If you pay for the pricier Google Home Premium Advanced plan, you get the promise of AI features that let you pinpoint specific moments by searching your video history using vague, natural language in the Google Home app. You can also opt into 10 days of searchable, 24/7 video history and get AI summaries that resemble Apple Intelligence summaries on iOS. (We know how well those work.)
The absolute best part of all of this is that you can search that continuously recorded footage, and it will pick up clips even if they weren’t actually recorded as events. Toss out searches like “person carrying a box” or “me in a hat,” and you’ll get real matches. But don’t expect miracles—I wanted to see if it could tell me where I’d left my phone, so I asked if it had seen anyone leave a phone outside. It surfaced three clips of me walking outside and looking at my phone from days prior, but not a moment from that day when I had set a smartphone on a table in front of the camera. When I asked, “What about today?” it responded, “I don’t track personal items like phones.” Rude!
Google’s system does a decent job sending notifications when it sees a person or an animal, as cameras have for many years; the difference now is that it also tells you what they’re doing. So instead of “person spotted” or whatever, it says “a person exited the house,” or it might say that “a cat walked along the path towards the house.” It gets the details wrong a lot, though, like telling me a person left the house when they’ve only opened the door to let dogs out to pee, or repeatedly misidentifying the pets—dogs as cats and vice versa.
I could live with those issues, but things are more broken when you get to the “Home Brief” feature. It sometimes did surprisingly well, like when it said a person (me) was “observed carrying an Amazon package” in the garage, although it also said I set the box down, which I didn’t. Another time, it, uh, made it seem like my house was under siege:
“Wednesday began with a black cat running towards the house and sitting by the door in the morning. Several dogs, including a brown and white one and a black dog, were also seen walking along the path and into the yard. Around midday, various dogs, including a black dog and a brown dog wearing a blue vest, were observed walking along the path.
“In the afternoon, a person wearing a teal jacket departed the house, followed shortly by someone with a backpack entering. The evening saw a cat exiting the house, and later, multiple instances of people exiting the house, sometimes accompanied by a dog. A person was also seen carrying a box and a bag, sitting down to look into the box, before another person in a hoodie entered. The day concluded with more arrivals, including a person carrying an object and someone with a backpack entering the house.”
Cue the Star Wars: The Last Jedi meme in which Luke tells Rey, “Impressive; every word in that sentence is wrong.” The black cat was actually my dog. There weren’t several or various dogs; just two. The person in a teal jacket and the one with a backpack were the same. And that bit at the end was me taking out the trash—I never sat down or looked into the box I was taking out to the recycling bin.
I like the idea of this feature, but the execution—as ever, when AI isn’t ready for the task it’s being given—comes off sloppy and unfinished. Some of the problems could be fixed with wording tweaks to account for the fact that the AI system isn’t recognizing that when a person leaves the frame and another person enters a couple of minutes later wearing the same-colored clothes, it’s probably the same person. And it would be more useful if it only called out unusual occurrences, and if Google’s facial recognition were better at identifying me—it correctly did so a few times throughout my week of testing and otherwise only saw “a person.”
All of this is undeniably a meaningful step forward for smart home security cameras and digital smart home assistants. But it’s also so flawed, and there are so many free alternatives that are almost as good—Reolink, for example, recently debuted a similar AI search feature for some of its cameras that’s not quite as robust but is free and on-device—that it’s not worth $20 a month.
What about the $10-per-month “Standard” plan that gets you 30 days of event history, no 24/7 recording, and fewer new AI features? It does unlock facial recognition (unless you live in Illinois) and notifications for things like when you’ve left your garage door open, which I tested and which, like the AI summaries, was often wrong, telling me the garage door had opened when, in fact, it had not. To its credit, it did tell me the one time I left the door open, notifying me five times, at five-minute intervals, that I’d done so.
Oh, you also get access to Gemini via smart speakers, but again, that’s in early access for now. You’ll also get notifications if a Google smart speaker or Nest camera has heard an alarm (smoke or carbon monoxide) or breaking glass. The best new feature that comes with this subscription, though, is “Help me create,” a button in the Google Home app’s automations tab that tasks AI with creating automations for you, based on descriptions you type into a text field. Of all the AI features I tried with the Nest Cam Outdoor, this one might have worked the best, creating automations that did a great job approximating what I was going for, even with vague descriptions like “Make it look like there’s a party happening if the backyard camera detects an unfamiliar face.” The automation was far from the fake party that Kevin McCallister threw to ward off The Wet Bandits in Home Alone; it announced “It’s party time” on all my Google Home speakers and set all the lights to turn on and off in a one-minute cycle. It’s not what I would’ve done, but it took five seconds to enable and was, maybe, good enough to make someone think twice about breaking in.
Google’s AI isn’t ready to pull its weight
It’s easy to see what Google is going for with the AI upgrade to its smart home ecosystem. I would love to be able to casually ask a digital assistant where I left my phone or what time my kid got home and have it give me an accurate answer right away. It would be great if the AI models peering through my security cameras could tell me if something truly unusual happened, rather than making a mundane day of me doing a little tidying up in my garage or backyard sound like a full-on home invasion. Hell, it’d be nice just to have it tell me when it sees my dogs at the back door so I don’t have to stand there waiting for them to be done peeing.
Looking at the Google Nest Cam Outdoor not as a security camera but as eyes for Google’s Gemini AI system to see with makes a spendy subscription start to make sense—kind of—if it offers all the things I described above. But it doesn’t, and I can’t bring myself to pay $20 every month for an AI model that lies to me so often about what’s happening around my house. Especially not at a time when I’ve canceled almost every streaming service I love because I can’t afford them, and I don’t buy steak because it costs nearly double what it used to. If I’m going to spend a bunch of money on something these days, it had better be good. And the Google Nest Cam Outdoor just isn’t, with or without a subscription.
Starting next year, some GM drivers will be able to have natural language conversations with their cars, thanks to Gemini.
The automaker announced on Wednesday that Google’s AI assistant is coming to its vehicles beginning in 2026. The partnership will function like an evolution of what GM already offers with Google Cloud, but with added functionality and more vehicle controls, a spokesperson says. This comes even as GM teases work on developing its own AI platform that it hopes will anticipate a driver’s needs, assist with route optimization, and build on in-vehicle safety service OnStar.
“We want it eventually to be more than just saying, ‘Hey, roll the windows up or down,’” says GM’s SVP of software and services engineering, Dave Richardson. “There’s a big opportunity around maintenance. We’ve talked about detecting drowsy drivers and helping on the safety aspect as well.”
GM officially announced the news at its GM Forward media event in New York City on Wednesday, alongside a series of other updates about advancements in autonomous driving, a new computing platform for GM vehicles, scaling robotics in GM factories, and new financing for its battery systems.
With autonomous driving, GM is preparing to level up its vehicles—literally—starting as soon as 2028. GM already offers Super Cruise, which is considered level 2 autonomy, meaning that drivers can take their hands off the steering wheel, but are responsible for the vehicle and must be ready to take over. Super Cruise is currently available on more than 600,000 miles of mapped roads across North America.
Starting in 2028 with the Cadillac Escalade IQ electric SUV, GM is aiming to introduce updates that will allow drivers to take their eyes off the road, unlocking a new tier of autonomy that Richardson calls Super Cruise 3.
“Where we’re going in 2028 with the Escalade IQ, is the ability to have that same experience [as Super Cruise], but you as the driver no longer have to keep your eyes on the road,” Richardson says. “You can be talking with people in the vehicle. You can be dozing off. I think the real appeal to people is that’s giving people tons of time back.”
Cadillac’s Escalade IQ will also be the first vehicle on which GM will debut its next-generation electrical architecture, which applies to both internal combustion and electric vehicles. It plans to introduce so-called “software defined vehicles” in 2028.
“That’s really going to make it easy for us to do scalable, efficient software and deliver all the technology that we’re talking about here through the next years and beyond,” Richardson says.
GM also announced that it is deploying robots that are safe for human workers to be around, called cobots, into its factories, and announced that new leasing options will start in 2026 for the GM Energy Home system, which includes both bi-directional EV charging and a stationary home battery. All of these updates seem intended to position GM as a tech-heavy mobility company that leverages robotics and AI, rather than a simple automaker—much like Tesla considers itself a robotics company.
The updates come as the auto industry braces for a possible tumble in EV sales, following the Trump administration’s elimination of consumer EV credits. GM had previously been among the most bullish legacy automakers on EVs, at one point pledging to be all-electric by 2035.
“Despite slower EV industry growth, we believe the long-term future is profitable electric vehicle production. This continues to be our north star,” a GM spokesperson said in a statement. “We are guided by our customers and committed to offering them the choice and convenience they want — which means both EVs and gas-powered vehicles.”
A cat jumped up on my couch. Wait a minute. I don’t have a cat.
The alert about the leaping feline is something my Google Home app sent me when I was out at a party. Turns out it was my dog. This notification came through a day after I turned on Google’s Gemini for Home capability in the Google Home app. It brings the power of large language models to the smart home ecosystem, and one of the most useful features is more descriptive alerts from my Nest security cameras. So, instead of “Person seen,” it can tell me FedEx came by and dropped off two packages.
In the two weeks since I allowed Gemini to power my Google Home, I’ve enjoyed its ability to detect delivery drivers the most. At the end of the day, I can ask in the Google Home app, “How many packages came today?” and get an accurate answer. It’s nice to know that it’s FedEx at the door, per my Nest Doorbell, and not a salesperson offering to replace my windows. Yet for all its smarts, Gemini refuses to understand that I do not have a cat in my house.
Person Seen
Screenshot: Google Home via Julian Chokkattu
Google isn’t the only company souping up its smart-home ecosystem with AI. Amazon recently announced a feature on its Ring cameras called Search Party that will use a neighborhood’s worth of outdoor Ring cameras to help someone find their lost dog. (I don’t need to stretch to imagine something like this being used for nefarious purposes.)
In early October, Google updated the voice assistant on its smart-home devices—some of which have been around for a decade—by replacing Google Assistant with Gemini. For the most part, the assistant is better. It can understand multiple commands in a spoken sentence or two, and you can very easily ask it to automate something in your home without fussing with the Routines tab in the Google Home app. And when I ask it a simple question, it generally gives me some kind of a reliable answer without punting me to a Google Search page.
Smarter camera alerts are indeed more helpful at a glance. Most of the time, I dismissed Person Seen notifications because they’re often just people walking by my house. Now the alerts actually say “Person walks by,” which gives me greater confidence to dismiss those. Some alerts accurately say “Two people opened the gate,” though sometimes it will hallucinate: “Person walks up stairs,” when no one actually did. (They just walked on the sidewalk.) It has fairly accurately noted when UPS, FedEx, or USPS are at the door, which is nice to know when I’m busy or out and about, so I can make sure to check for a package when I get home—no need to hunt through alerts.
But with my indoor security cameras, Gemini routinely says I have a cat wandering the house. It’s my dog. Even in my Home Brief—recaps at the end of the day from Gemini about what happened around the home—Gemini says, “In the early morning, a white cat was active, walking into the living room and sitting on the couch.” It’s amusing, especially considering my dog hates cats.
CatDog
Screenshot: Google Home via Julian Chokkattu
You would think then that I would be able to just tell this smarter assistant, “Hey, I don’t have a cat. I have a dog,” and it would adjust its models and fix the error. Well, I did exactly that. In the Ask Home feature, you can talk to Gemini and ask it anything about the home. This is where you can ask it to set up automations, for example. I asked it to turn on the living room lights when the cameras detect my wife or me arriving home, and it understood the action. It even guessed that I wanted the lights to come on only when arriving at night, despite me forgetting to mention that.
Google is adding multiple new AI features to Chrome, the most popular browser in the world. The most visible change is a new button in Chrome that launches the Gemini chatbot, but there are also new tools for searching, researching, and answering questions with AI. Google has additional cursor-controlling “agentic” tools in the pipeline for Chrome as well.
The Gemini in Chrome mode for the web browser uses generative AI to answer questions about content on a page and synthesize information across multiple open tabs. Gemini in Chrome first rolled out to Google’s paying subscribers in May. The AI-focused features are now available to all desktop users in the US browsing in English; they’ll show up in a browser update.
On mobile devices, Android users can already use aspects of Gemini within the Chrome app, and Google is expected to launch an update for iOS users of Chrome in the near future.
When I wrote about web browsers starting to add more generative AI tools back in 2023, it was primarily something that served as an alternative to the norm. The software was built by misfits and change-makers who were experimenting with new tools, or hunting for a break-out feature to grow their small user bases. All of this activity was dwarfed by the commanding number of users who preferred Chrome.
Two years later, while Google’s browser remains the market leader, the internet overall is completely steeped in AI tools, many of them also made by Google. Still, today marks the moment when the concept of an “AI browser” truly goes mainstream, with Gemini woven so closely into the Chrome browser.
Google’s Gemini strategy has long been to integrate the assistant into as many of its in-house products as possible, from Gmail to Google Docs. So, the decision to AI-ify the Chrome browser for a wider set of users does not come as a shock.
Even so, the larger rollout will likely be met with ire by some users who are either exhausted by the onslaught of AI-focused features in 2025 or want to abstain from using generative AI, whether for environmental reasons or because they don’t want their activity to be used to train an algorithm. Users who don’t want to see the Gemini option will be able to click on the Gemini sparkle icon and unpin it from the top right corner of the Chrome browser.
The new button at the top of the browser will launch Gemini. Users in the US will see these changes first.
Google’s Nano Banana image-generation model, officially known as Gemini 2.5 Flash Image, has fueled global momentum for the Gemini app since launching last month. But in India, it has taken on a creative life of its own, with retro portraits and local trends going viral — even as privacy and safety concerns begin to emerge.
India has emerged as the No. 1 country in terms of Nano Banana usage, according to David Sharon, multimodal generation lead for Gemini Apps at Google DeepMind, who spoke at a media session this week. The model’s popularity has also propelled the Gemini app to the top of the free app charts on both the App Store and Google Play in India. The app has also climbed to the top of global app stores’ charts, according to Appfigures.
Given India’s scale — the world’s second-largest smartphone market and second-biggest online population after China — it is no surprise the country is leading in adoption. But what is catching Google’s attention is not just how many people are using Nano Banana, it is how: Millions of Indians are engaging with the AI model in ways that are uniquely local, highly creative, and in some cases, completely unexpected.
One of the standout trends is Indians using Nano Banana to re-create retro looks inspired by 1990s Bollywood, imagining how they might have appeared during that era, complete with period-specific fashion, hairstyles, and makeup. This trend is local to India, Sharon told reporters.
A variation of the retro trend is what some are calling the “AI saree,” where users generate vintage-style portraits of themselves wearing traditional Indian attire.
Another trend local to India is people generating their selfies in front of cityscapes and iconic landmarks, such as Big Ben and the U.K.’s retro telephone booths.
“We saw a lot of that in the beginning,” Sharon said.
Indian users are also experimenting with Nano Banana to transform objects, create time-travel effects, and even reimagine themselves as retro postage stamps. Others are generating black-and-white portraits or using the model to visualize encounters with their younger selves.
Some of these trends did not originate in India, but the country played a key role in helping them gain global attention. One example is the figurine trend, where people generate miniature versions of themselves, often placing them in front of a computer screen. The trend first emerged in Thailand, spread to Indonesia, and became global after gaining traction in India, Sharon said.
In addition to Nano Banana, Google has observed a trend where Indian users are utilizing its Veo 3 AI video-generation model on the Gemini app to create short videos from old photos of their grandparents and great-grandparents.
All of this has helped drive Gemini’s popularity on both the App Store and Google Play in India. Between January and August, the app saw an average of 1.9 million monthly downloads in the country — about 55% higher than in the U.S. — accounting for 16.6% of global monthly downloads, per Appfigures data shared exclusively with TechCrunch.
India’s downloads have totaled 15.2 million this year through August; the U.S., on the other hand, has had 9.8 million downloads so far this year, per Appfigures data.
Daily downloads of the Gemini app in India significantly surged following the release of the Nano Banana update, beginning on September 1 with 55,000 installs across both app stores. Downloads peaked at 414,000 on September 13 — a 667% increase — with Gemini holding the top overall spot on the iOS App Store since September 10 and on Google Play since September 12, including across all categories, Appfigures data shows.
Image Credits: Jagmeet Singh / TechCrunch
Despite India leading in downloads, the country does not top in-app purchases on the Gemini app, which has generated an estimated $6.4 million in global consumer spending on iOS since launch, per Appfigures. The U.S. accounts for the largest share at $2.3 million (35%), while India contributes $95,000 (1.5%). However, India posted a record 18% month-over-month growth in spending, reaching $13,000 between September 1 and 16 — compared to an 11% global increase during the same period. That puts India seven percentage points above the global rate and more than 17 points ahead of the U.S., where growth was under 1%.
That said, as with other AI apps, there are concerns about users uploading personal photos to Gemini to transform their appearance.
“When a user asks us to fulfill their query, we do our best to fulfill that query. We don’t try to assume what the user’s intent is,” Sharon said while addressing questions on how Google is dealing with data misuse and privacy concerns among users in India and other top markets. “We’ve really tried to improve that, and we have improved that to be bold and fulfil your request.”
Google places a visible, diamond-shaped watermark on images generated by the Nano Banana model and also embeds a hidden marker using its SynthID tool to identify AI-generated content. SynthID allows Google to detect and flag whether an image was created using its models.
Sharon told reporters that Google is testing a detection platform with trusted testers, researchers, and other experts. The company also plans to launch a consumer-facing version that would allow anyone to check whether an image is AI-generated.
“This is still day one, and we’re still learning, and we’re learning together. There are things that we might need to improve on in the future, and it’s really your feedback from users, press, academia, and experts that helps us improve,” Sharon said.
AI might be coming for our jobs, but capitalist pressures appear to be coming for the people responsible for developing AI. Wired reported that over 200 people working on Google’s AI products, including its chatbot Gemini and the AI Overviews it displays in search results, were recently laid off—joining the ranks of unfortunate former employees of xAI and Meta, who have also been victims of “restructuring” as companies that poured billions of dollars into AI development are trying to figure out how to make that money back.
Per Wired, most of the people working on Google’s AI products were contractors rather than Google employees. Many worked at GlobalLogic, a software development company owned by Hitachi. According to the report, most of the GlobalLogic workers who got cut off from Google were working as raters, tasked with ensuring the quality of AI responses. Most are based in the US and work with English-language content, and many have a master’s or a PhD in their field of expertise.
At least some workers hit by this layoff were told the cuts were the result of a “ramp-down” on the project, but at least a few workers seem skeptical of that reason. Some believe the cuts may be related to worker protests over pay and job security concerns, per Wired. The publication also reported that documents from GlobalLogic indicate the company may be using human raters to train a system that can automate the rating process, which would leave AI to moderate AI.
The folks tasked with tightening up Google’s AI outputs are far from the only ones in the industry getting squeezed. According to Business Insider, Elon Musk’s xAI recently laid off at least 500 workers who were tasked with doing data annotation. The layoffs appear to be a part of a shuffling of efforts within the company, which is moving away from “generalist” data annotators and ramping up its “specialists.” Given that Google just cut contractors who would likely fall under that “specialist” label, it probably feels a bit precarious out there.
It’s been a tough go for people who are actually handling the data that feeds AI tools. Shortly after Meta invested in data labeling firm Scale AI, the company cut 14% of its staff, including 200 full-timers and about 500 contractors. Meta itself is reportedly looking seriously at downsizing its AI department as it keeps shifting priorities and trying to figure out how to get a leg up in the AI race.
It’s also hard to look at the layoffs of lower-level workers and contractors without thinking about the multi-million dollar job offers being thrown at AI specialists to secure their talents, but this tends to be how things go: the people doing the grunt work that keeps the gears turning are considered replaceable, while more and more money flows to the top, to people whose jobs no one can quite explain but who get paid so much it must be important.
I’m wearing a pair of thick-rimmed glasses on my face. They don’t feel heavy, but they feel chunky. I walk over to a poster of a painting—Girl With a Pearl Earring—and ask out loud what was so special about it. A brief answer detailing its expert use of light and color by Johannes Vermeer floats into my ears, and when I ask a follow-up about when it was painted, I quickly hear the same voice say, “around 1665.” I’m not talking to myself, I swear. Nor am I hearing imaginary voices. No, I’m wearing a prototype of Google’s upcoming smart glasses, powered by its Gemini voice assistant. The company teased these smart glasses at its I/O developer conference earlier this year, showing a proof-of-concept video of AI-powered smart glasses using the name Project Astra. The pair I gazed through and chatted with uses that same Astra technology, but here it’s been built into a functioning product. Even though the glasses are still in their development phase, Google plans to release them sometime in 2025.

These smart glasses are one part of Google’s big announcement today: Android XR. This “extended reality” platform marks the 10th year of Google’s mobile operating system expanding to new platforms beyond phones, joining the ranks of Wear OS, Google TV, and Android Auto. It sets the stage for a new wave of virtual and augmented reality headsets and glasses with a customized version of Android running on them.
Glass, a Decade Later
I remember watching the first-ever Google Glass demo in my college dorm room—truly an iconic moment at Google I/O 2012, where people skydived toward the Moscone Convention Center wearing cyborg smart glasses that were streaming video of their approach over a Hangouts call. These Android XR–powered smart glasses don’t command that much fanfare but, in my limited time with them, I can say this: Of all the smart glasses I’ve tried, they come the closest to realizing the original vision of Glass.

But Google is also in a very different place as a company than it was in 2012. A judge recently ruled Google Search to be an illegal monopoly, and the US Department of Justice has called for the company to sell off Google Chrome. Yet Google (with Samsung) now wants to be the platform for the next wave of spatial computing. VR has also had a rocky road due to wavering consumer interest, and given Google’s history of killing off projects, it’s difficult to glean whether a face computing platform that requires special (and expensive) hardware will meet the fate of so many apps and services that came before. Izadi says the platform approach helps in that regard: “I think once you’re established as an Android vertical, we’re not going away anytime soon, so that’s kind of a guarantee we can give.”

The big bet seems to be around Gemini and AI. Oh, and the synergy between Google and Samsung. As Kihwan Kim, the executive vice president at Samsung spearheading Project Moohan, says, “This is not about just some teams or company making this—this is different. It’s completely starting from the ground up, how AI can impact VR and AR.” He went on to say the collaboration with Google felt like “one single spirit,” adding that it’s something he’s never experienced before in this line of work.
If you’re using a personal Adobe account, it’s easy to opt out of the content analysis. Open up Adobe’s privacy page, scroll down to the Content analysis for product improvement section, and click the toggle off. If you have a business or school account, you are automatically opted out.
Amazon: AWS
AI services from Amazon Web Services, like Amazon Rekognition or Amazon CodeWhisperer, may use customer data to improve the company’s tools, but it’s possible to opt out of the AI training. This used to be one of the most complicated processes on the list, but it’s been streamlined in recent months. Outlined on this support page from Amazon is the full process for opting out your organization.
Figma
Figma, a popular design software, may use your data for model training. If your account is licensed through an Organization or Enterprise plan, you are automatically opted out. On the other hand, Starter and Professional accounts are opted in by default. This setting can be changed at the team level by opening the settings, going to the AI tab, and switching off Content training.
Google Gemini
For users of Google’s chatbot, Gemini, conversations may sometimes be selected for human review to improve the AI model. Opting out is simple, though. Open up Gemini in your browser, click on Activity, and select the Turn Off drop-down menu. Here you can just turn off the Gemini Apps Activity, or you can opt out as well as delete your conversation data. While this does mean in most cases that future chats won’t be seen for human review, already selected data is not erased through this process. According to Google’s privacy hub for Gemini, these chats may stick around for three years.
Grammarly
Grammarly updated its policies, so personal accounts can now opt out of AI training. Do this by going to Account, then Settings, and turning the Product Improvement and Training toggle off. Is your account through an enterprise or education license? Then, you are automatically opted out.
Grok AI (X)
Kate O’Flaherty wrote a great piece for WIRED about Grok AI and protecting your privacy on X, the platform where the chatbot operates. It’s another situation where millions of users of a website woke up one day and were automatically opted in to AI training with minimal notice. If you still have an X account, it’s possible to opt out of your data being used to train Grok by going to the Settings and privacy section, then Privacy and safety. Open the Grok tab, then deselect your data sharing option.
HubSpot
HubSpot, a popular marketing and sales software platform, automatically uses data from customers to improve its machine-learning model. Unfortunately, there’s not a button to press to turn off the use of data for AI training. You have to send an email to privacy@hubspot.com with a message requesting that the data associated with your account be opted out.
LinkedIn
Users of the career networking website were surprised to learn in September that their data was potentially being used to train AI models. “At the end of the day, people want that edge in their careers, and what our gen-AI services do is help give them that assist,” says Eleanor Crum, a spokesperson for LinkedIn.
You can opt out from new LinkedIn posts being used for AI training by visiting your profile and opening the Settings. Tap on Data Privacy and switch off the toggle labeled Use my data for training content creation AI models.
OpenAI: ChatGPT and Dall-E
People reveal all sorts of personal information while using a chatbot. OpenAI provides some options for what happens to what you say to ChatGPT—including allowing its future AI models not to be trained on the content. “We give users a number of easily accessible ways to control their data, including self-service tools to access, export, and delete personal information through ChatGPT. That includes easily accessible options to opt out from the use of their content to train models,” says Taya Christianson, an OpenAI spokesperson. (The options vary slightly depending on your account type, and data from enterprise customers is not used to train models).
Later in the year, Google will imbue Gemini Live with Project Astra, the computer vision tech it teased at its developer conference in May. This will allow you to use your phone’s camera app and, in real time, ask Gemini about the objects you are looking at in the real world. Imagine walking past a concert poster and asking it to store the dates in your calendar and to set up a reminder to buy tickets.
Talk to Me
Our experiences using voice assistants have until this point been largely transactional, so when I chatted with Gemini Live, I found initiating a conversation with the bot to be a little awkward. It’s a big step beyond asking Google Assistant or Alexa for the weather report, to open your blinds, or whether your dog can eat celery. You might have a follow-up here and there, but those assistants were never built around the flow of a conversation the way Gemini Live is.
Hsiao tells me she enjoys using Gemini Live in the car on her drive home from work. She started a conversation about the Paris Olympics and about Celine Dion singing at the opening ceremony. “Can you tell me a little bit about the song she sang?” Hsiao asked. The AI responded with the song’s origin, writer, and what it meant, and after some back and forth, Hsiao discovered Celine Dion could sing in Chinese.
“I was so surprised,” she says. “But that just gives you an example of how you can find out stuff; it’s an interaction with technology that people couldn’t have before this kind of curiosity and exploration through conversation. This is just the beginning of where we’re headed with the Gemini assistant.”
In my demo, I asked Gemini what I should eat for dinner. It asked if I wanted something light and refreshing or a hearty meal. We went back and forth, and when Gemini suggested a shrimp dish, I lied and said I was allergic to shrimp; it then recommended salmon. I said I didn’t have salmon. “You could always grill up some chicken breasts and toss them in a salad with grilled salad and a light vinaigrette dressing.” I asked for a recipe, and it started going through the instructions step by step. I interrupted it, but I can go back into the Gemini app to find the recipe later.
Cameras have never been a strong suit for Motorola, but it’s giving special emphasis to the new “Photo Enhancement Engine” that’s exclusive to the Razr+. The company says it “uses AI” to produce finer image details, better dynamic range, improved bokeh, and more advanced noise reduction, all on the uncompressed raw image data. The Razr+ also gets a few extra camera features, such as Adaptive Stabilization for smoother videos, Action Shot for when you capture moving subjects, Long Exposure to create light trails, and Super Zoom, which enhances your zoomed-in photos. I’m not sure how much “AI” has to do with some of these.
There are two generative AI features, too: Style Sync and Magic Canvas. The former lets you snap a picture of your outfit (or any kind of special texture), and it’ll generate four images using that pattern that you can then use as a wallpaper. Magic Canvas lets you generate images via a text prompt. These two features are available on both Razrs.
Later in the fall, Motorola will launch “Moto AI,” which it says is powered by both in-house and Google’s large language models. This will include features like “Catch me up,” which will summarize your clutter of notifications so you can focus on what’s important. A “Pay attention” feature will enable the phone to start recording instantly and transcribe and summarize the recording automatically. Then there’s “Remember this,” which can save onscreen information that you can ask the device for later.
Unfortunately, all this AI power doesn’t help Motorola improve its software update policy. These new Razr smartphones will only get three Android OS updates (they launch with Android 14), and four years of security updates.
For comparison, Google and Samsung offer seven years of software updates on their flagship phones. Longer software support means more features down the road, bug fixes, and security patches.
Accompanying these new phones is the Moto Tag, a small AirTag-like accessory that supports Bluetooth LE and ultra-wideband tech to help locate lost devices. It uses Google’s Find My Device network and will work with any Android phone. However, if you use it with a Moto smartphone, you can press the multifunction button on the Tag to remotely capture a photo.
There’s a lot riding on next week’s WWDC 2024 keynote. The presentation’s stakes are far higher than your standard post-event market moves. The pressure on Tim Cook and crew to deliver the goods is, in a very real sense, even higher than it was in the lead-up to last year’s Vision Pro announcement.
On Monday, Apple will lay out its AI plans. The subject has been a massive question mark looming over Cupertino for the last few years, as competitors like Google and Microsoft have embraced generative AI. There’s a broad industry consensus that systems powered by large language models like ChatGPT and Gemini will profoundly affect how we interact with our devices.
Apple is expected to announce a partnership with OpenAI that will bring the company’s smarts to the iPhone and Mac. Apple’s near-term strategy is a deep integration between existing properties and generative AI, with Siri at the center. Since Siri’s debut in 2011, Apple has pushed to make the voice assistant an integral part of all its operating systems.
In the intervening 13 years, however, Siri has fallen short of the revolution Apple promised. There are plenty of reasons for this, though the primary one is capability. The concept of an artificial voice assistant predates Siri by decades, but no one fully cracked it for a reason. As phone makers and app developers have transformed smartphones into everything devices, these assistants’ jobs have become increasingly complex.
As impressive as the Stanford Research Institute’s work was, the technology required for a frictionless experience simply wasn’t ready. Siri co-founder Norman Winarsky addressed the underlying issue in 2018, noting that Apple’s initial plan was a far more limited assistant that handled things like entertainment and travel. “These are hard problems, and when you’re a company dealing with up to a billion people, the problems get harder yet,” Winarsky noted at the time. “They’re probably looking for a level of perfection they can’t get.”
Generative AI isn’t at that level of perfection, either — not yet, at least. Hallucinations are still a problem. That’s precisely why, even after the massive buzz of the past few years, it still feels like we’re very much in the baby steps phase. If anything, I would say that Google, for one, has been overly aggressive in places. The best example of this is the company’s decision to surface Gemini results at the top of searches.
When something is prioritized above trusted resources in the world’s dominant search engine, it needs to get things right as much as humanly possible, and not, you know, tell people to eat glue. Google labels Gemini results a product of its “Search Labs,” but surely a majority of users don’t understand what that means in terms of product maturity, nor can they be bothered to click through for more information.
Over the past few years, I’ve met several researchers who have used the term “magic” to describe the results of the “black box” processes surrounding large language models. This isn’t a knock against all of the amazing work happening in the space, so much as a realization that there’s still so much we don’t know about the technology.
Arthur C. Clarke put it best: “Any sufficiently advanced technology is indistinguishable from magic.”
One place Google has been more intentional, however, is with its integration of Gemini into Android. Rather than replacing Assistant outright, Google has been integrating its generative AI platform into different applications. Users can also opt in to making Gemini their default by assigning it to the Assistant button on Pixel devices. This implementation requires deliberate action on the user’s part, at least thus far.
While Gemini hasn’t completely conquered Android yet, Google is clearly signaling that a day is coming, in the not-too-distant future, when it replaces Assistant outright. I half expected an announcement along those lines at I/O last month, though I’m glad the company ultimately opted to give Gemini more time to bake.
Whether the Assistant name sticks around is ultimately a branding decision. For its part, Apple is very wedded to the Siri name. It has, after all, spent well over a decade pitching the product to consumers. Sooner rather than later, however, generative AI will eat the smart assistant space.
Voice assistants in general are having an existential moment. Smart speakers are a broader bellwether for platforms like Siri, Alexa, and Google Assistant: shipments have declined after heating up during the pandemic. It’s unfair to characterize the category as doomed, but without the proper shot in the arm, it will be in the long run.
Generative AI is poised to be the logical successor, but the first round of hardware devices built around these models, including the Humane Ai Pin and Rabbit R1, have only been testaments to how far the category has to go before it can be considered a consistent experience for mainstream users.
Apple will finally show its hand on Monday. While rumors point to the company transitioning a number of employees to generative AI operations following its electric car implosion, all signs point to Apple having ceded a significant head start to the competition. As such, its most logical play is a partnership with a reigning powerhouse like OpenAI.
Shortly after the Siri acquisition was announced, Steve Jobs was asked whether the company was trying to beat Google at its own game. “It’s an AI company,” Jobs noted. “We’re not going into the search business. We don’t care about it. Other people do it well.”
The company’s approach to generative AI is currently in the same place. At this stage, Apple can’t beat OpenAI at its own game, so it’s partnering instead. But even the best of the current models have a way to go before they’re ready to fully replace the current crop of smart assistants.
The first notable feature is Help Me Write, which works in any text box. Select some text and right-click—you’ll see a box next to the standard right-click context menu. You can ask Google’s AI to rewrite the selected text, rephrase it in a specific way, or change the tone. I tried to use it on a few sentences in this story but did not like any of the suggestions it gave me, so your mileage may vary. Or maybe I’m a better writer than Google’s AI. Who knows?
Google’s bringing the same generative AI wallpaper system you’ll find in Android to ChromeOS. You can access this feature in ChromeOS’s wallpaper settings and generate images based on specific parameters. Weirdly, you can create these when you’re in a video-calling app too. You’ll see a menu option next to the system tray whenever the microphone and video camera are being accessed—tap on it and click “Create with AI” and you can generate an image for your video call’s background. I’m not sure why I’d want a background of a “surreal bicycle made of flowers in pink and purple,” but there you go. AI!
Here’s something a little more useful: Magic Editor in Google Photos. Yep, the same feature that debuted in Google’s Pixel 8 smartphones is now available on Chromebook Plus laptops. In the Google Photos app, you can press Edit on a photo and you’ll see the option for Magic Editor. (You’ll need to download more editing tools to get started.) This feature lets you erase unwanted objects in your photos, move a subject to another area of the frame, and fill in the backgrounds of photos. I successfully erased a paint can in the background of a photo of my dog, and it worked pretty quickly.
Then there’s Gemini. It’s available as a stand-alone app, and you can ask it to do pretty much anything. Write a cover letter, break down complex topics, ask for travel tips for a specific country. Just, you know, double-check the results and make sure there aren’t any hallucinations. If you want to tap into Google’s Gemini Advanced model, the company says it is offering 12 months free for new Chromebook Plus owners through the end of the year, so you have some time to redeem that offer. This is technically an upgrade from Google One, and it nets you Gemini for Workspace, 2 terabytes of storage, and a few other perks.
Google also showed off its new DJ Mode in MusicFX, an AI music generator that lets musicians generate song loops and samples based on prompts. (DJ mode was shown off during the eccentric and delightful performance by musician Mark Rebillet that led into the I/O keynote.)
An Evolution in Search
Google began as a search-focused company, and it remains the most prominent player in the search industry (despite some very good, slightly more private options). Its newest AI updates are a seismic shift for its core product.
New contextual awareness abilities help Google Search deliver more relevant results.
Some new capabilities include AI-organized search, which allows for more tightly presented and readable search results, as well as the ability to get better responses from longer queries and searches with photos.
We also saw AI overviews, which are short summaries that pool information from multiple sources to answer the question you entered in the search box. These summaries appear at the top of the results so you don’t even need to go to a website to get the answers you’re seeking. These overviews are already controversial, with publishers and websites fearing that a Google search that answers questions without the user needing to click any links may spell doom for sites that already have to go to extreme lengths to show up in Google’s search results in the first place. Nonetheless, these newly enhanced AI overviews are rolling out to everyone in the US starting today.
A new feature called Multi-Step Reasoning lets you find several layers of information about a topic when you’re searching for things with some contextual depth. Google used planning a trip as an example, showing how searching in Maps can help find hotels and set transit itineraries. It then went on to suggest restaurants and help with meal planning for the trip. You can deepen the search by looking for specific types of cuisine or vegetarian options. All of this info is presented to you in an organized way.
Advanced visual search in Lens.
Lastly, we saw a quick demo of how users can rely on Google Lens to answer questions about whatever they’re pointing their camera at. (Yes, this sounds similar to what Project Astra does, but these capabilities are being built into Lens in a slightly different way.) The demo showed a woman trying to get a “broken” turntable to work, but Google identified that the record player’s tonearm simply needed adjusting, and it presented her with a few options for video- and text-based instructions on how to do just that. It even properly identified the make and model of the turntable through the camera.
WIRED’s Lauren Goode talked with Google head of search Liz Reid about all the AI updates coming to Google Search, and what it means for the internet as a whole.
Security and Safety
Scam Detection in action.
One of the last noteworthy things we saw in the keynote was a new scam detection feature for Android, which can listen in on your phone calls and detect any language that sounds like something a scammer would use, like asking you to move money into a different account. If it hears you getting duped, it’ll interrupt the call and give you an onscreen prompt suggesting that you hang up. Google says the feature works on the device, so your phone calls don’t go into the cloud for analysis, making the feature more private. (Also check out WIRED’s guide to protecting yourself and your loved ones from AI scam calls.)
Google has also expanded SynthID, its watermarking tool meant to distinguish media made with AI. This can help you detect misinformation, deepfakes, or phishing spam. The tool leaves an imperceptible watermark that can’t be seen with the naked eye but can be detected by software that analyzes the pixel-level data in an image. The latest updates extend the feature to content in the Gemini app, on the web, and in Veo-generated videos. Google says it plans to release SynthID as an open source tool later this summer.
Nearly a decade ago, Google showed off a feature called Now on Tap in Android Marshmallow—tap and hold the home button and Google will surface helpful contextual information related to what’s on the screen. Talking about a movie with a friend over text? Now on Tap could get you details about the title without having to leave the messaging app. Looking at a restaurant in Yelp? The phone could surface OpenTable recommendations with just a tap.
I was fresh out of college, and these improvements felt exciting and magical—its ability to understand what was on the screen and predict the actions you might want to take felt future-facing. It was one of my favorite Android features. It slowly morphed into Google Assistant, which was great in its own right, but not quite the same.
Today, at Google’s I/O developer conference in Mountain View, California, the new features Google is touting in its Android operating system feel like the Now on Tap of old—allowing you to harness contextual information around you to make using your phone a bit easier. Except this time, these features are powered by a decade’s worth of advancements in large language models.
“I think what’s exciting is we now have the technology to build really exciting assistants,” Dave Burke, vice president of engineering on Android, tells me over a Google Meet video call. “We need to be able to have a computer system that understands what it sees and I don’t think we had the technology back then to do it well. Now we do.”
I got a chance to speak with Burke and Sameer Samat, president of the Android ecosystem at Google, about what’s new in the world of Android, the company’s new AI assistant Gemini, and what it all holds for the future of the OS. Samat referred to these updates as a “once-in-a-generational opportunity to reimagine what the phone can do, and to rethink all of Android.”
Circle to Search … Your Homework
The upgraded Circle to Search in action.
It starts with Circle to Search, which is Google’s new way of approaching Search on mobile. Much like the experience of Now on Tap, Circle to Search—which the company debuted a few months ago—is more interactive than just typing into a search box. (You literally circle what you want to search on the screen.) Burke says, “It’s a very visceral, fun, and modern way to search … It skews younger as well because it’s so fun to use.”
Samat claims Google has received positive feedback from consumers, but Circle to Search’s latest feature hails specifically from student feedback. Circle to Search can now be used on physics and math problems when a user circles them—Google will spit out step-by-step instructions on completing the problems without the user leaving the syllabus app.
Samat made it clear Gemini wasn’t just providing answers but was showing students how to solve the problems. Later this year, Circle to Search will be able to solve more complex problems like diagrams and graphs. This is all powered by Google’s LearnLM models, which are fine-tuned for education.
Gemini Gets More Contextual on Android
Gemini is Google’s AI assistant that is in many ways eclipsing Google Assistant. Really—when you fire up Google Assistant on most Android phones these days, there’s an option to replace it with Gemini instead. So naturally, I asked Burke and Samat whether this meant Assistant was heading to the Google Graveyard.
“The way to look at it is that Gemini is an opt-in experience on the phone,” Samat says. “I think obviously over time Gemini is becoming more advanced and is evolving. We don’t have anything to announce today, but there is a choice for consumers if they want to opt into this new AI-powered assistant. They can try it out and we are seeing that people are doing that and we’re getting a lot of great feedback.”