Google is partnering with a UK nonprofit to fight non-consensual intimate imagery (NCII). (You may know it better as revenge porn.) Over the coming months, the company will begin using StopNCII’s hashes. These user-uploaded digital fingerprints can block individuals’ unwanted intimate content from appearing in search results.
StopNCII has a pretty neat system to combat revenge porn. Say you have some images you most definitely don’t want surfacing online. Select the picture on your device, and StopNCII will create a digital fingerprint of the file. That hash will be uploaded to the service. The photo itself never leaves your device. The organization then shares the hash (again, not the spicy pic) with participating platforms.
Then, if an asshole ex takes the liberty of uploading said photo to one of those companies’ services, it should be removed automatically. If the platform uses real-time hash matching, it can even block the upload immediately before it reaches anyone’s eyes. It’s a pretty solid defense against a very ugly problem.
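The matching flow can be sketched in a few lines. This sketch uses a cryptographic hash for simplicity; StopNCII actually uses perceptual hashing (such as Meta's open source PDQ), which also matches resized or re-encoded copies of an image rather than only byte-identical files. All function and variable names here are illustrative, not from any real API.

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    # Simplified stand-in: a cryptographic hash only matches
    # byte-identical files. StopNCII's perceptual hashes (e.g. PDQ)
    # also catch re-encoded or resized copies of the same image.
    return hashlib.sha256(image_bytes).hexdigest()

# The platform stores only the hashes shared by StopNCII -- never the photos.
blocked_hashes = {fingerprint(b"victim-photo-bytes")}

def screen_upload(upload_bytes: bytes) -> bool:
    """Return True if the upload matches a blocked hash and should be rejected."""
    return fingerprint(upload_bytes) in blocked_hashes

print(screen_upload(b"victim-photo-bytes"))  # blocked: True
print(screen_upload(b"unrelated-photo"))     # allowed: False
```

The key privacy property is visible in the sketch: the platform's block list contains only fingerprints, so neither StopNCII nor the participating service ever needs the original image.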
The system isn’t bulletproof. First, it only works for known images. So, if someone else has an intimate photo that you don’t have a copy of, you’ll have to fight that using other means. StopNCII doesn’t work for AI-generated images, audio recordings or saucy text chats.
The system also won’t help if the content is uploaded to a non-partner platform. In addition to Google, StopNCII partners with Meta, Reddit, Pornhub, OnlyFans, Snap, Microsoft Bing, X and more.
This is far from Google’s first move to combat NCII. A decade ago, it created a system for submitting revenge porn takedown requests. In 2024, it made it easier to remove deepfake NCII. On Wednesday, Google product manager Griffin Hunt explained that “given the scale of the open web, there’s more to be done to reduce the burden on those who are affected by it.”
If you’re 18 or older and have any photos of yourself that you want to flag proactively, you can start using StopNCII right now. Head to the org’s website to create a case. Note that the service only works for pictures that are nude, semi-nude or show a sexual act. And remember, the photo itself never leaves your device, so your privacy remains intact.
Nvidia CEO Jensen Huang is in London, standing in front of a room full of journalists, outing himself as a huge fan of Gemini’s Nano Banana. “How could anyone not love Nano Banana? I mean Nano Banana, how good is that? Tell me it’s not true!” He addresses the room. No one responds. “Tell me it’s not true! It’s so good. I was just talking to Demis [Hassabis, CEO of DeepMind] yesterday and I said ‘How about that Nano Banana! How good is that?’”
It looks like lots of people agree with him: The popularity of the Nano Banana AI image generator—which launched in August and allows users to make precise edits to AI images while preserving the quality of faces, animals, or other objects in the background—has driven a surge of 300 million images generated in Gemini in the first few days of September alone, according to a post on X by Josh Woodward, VP of Google Labs and Google Gemini.
Huang, whose company was among a cohort of big US technology companies to announce investments into data centers, supercomputers, and AI research in the UK on Tuesday, is on a high. Speaking ahead of a white-tie event with UK prime minister Keir Starmer (where he plans to wear custom black leather tails), he’s boisterously optimistic about the future of AI in the UK, saying the country is “too humble” about its potential for AI advancements.
He cites the UK’s pedigree in areas as varied as the industrial revolution, steam trains, DeepMind (now owned by Google), and university research, as well as other tangential skills. “No one fries food better than you do,” he quips. “Your tea is good. You’re great. Come on!”
Nvidia announced a $683 million equity investment in datacenter builder Nscale this week, a move that—alongside investments from OpenAI and Microsoft—has propelled the company to the epicenter of this AI push in the UK. Huang estimates that Nscale will generate more than $68 billion in revenues over six years. “I’ll go on record to say I’m the best thing that’s ever happened to him,” he says, referring to Nscale CEO Josh Payne.
“As AI services get deployed—I’m sure that all of you use it. I use it every day and it’s improved my learning, my thinking. It’s helped me access information, access knowledge a lot more efficiently. It helps me write, helps me think, it helps me formulate ideas. So my experience with AI is likely going to be everybody’s experience. I have the benefit of using all the AI—how good is that?”
“I really like using an AI word processor because it remembers me and knows what I’m going to talk about. I could describe the different circumstance that I’m in and yet it still knows that I’m Jensen, just in a different circumstance,” Huang explains. “In that way it could reshape what I’m doing and be helpful. It’s a thinking partner, it’s truly terrific, and it saves me a ton of time. Frankly, I think the quality of work is better.”
His favorite one to use “depends on what I’m doing,” he says. “For something more technical I will use Gemini. If I’m doing something where it’s a bit more artistic I prefer Grok. If it’s very fast information access I prefer Perplexity—it does a really good job of presenting research to me. And for near everyday use I enjoy using ChatGPT,” Huang says.
“When I am doing something serious I will give the same prompt to all of them, and then I ask them to, because it’s research oriented, critique each other’s work. Then I take the best one.”
In the end though, all topics lead back to Nano Banana. “AI should be democratized for everyone. There should be no person who is left behind, it’s not sensible to me that someone should be left behind on electricity or the internet of the next level of technology,” he says.
“AI is the single greatest opportunity for us to close the technology divide,” says Huang. “This technology is so easy to use—who doesn’t know how to use Nano?”
If you have been experimenting with Gemini AI to generate stylish portrait images for Instagram, you are already familiar with its power. But getting those aesthetic results consistently is not just about typing a creative prompt. Sometimes, small mistakes like choosing an incorrect reference image, missing details in the prompt, or offering irrelevant details can make your photo look subpar.
In this guide, we will walk through the most common errors people make while creating portrait images with Gemini AI, and how avoiding them will help you achieve cleaner, sharper, and more realistic outputs.
Tips for Using Gemini AI to Create Portrait Images
1. Don’t Use Group Images
To achieve the best results, avoid using a group photo when creating portrait images with Gemini AI. The tool finds it difficult to process multiple people at once, and the final result will be degraded. Even if you ask the chatbot to prioritize a certain individual from the group, the result will still fall short, because the tool uses generative AI to reprocess the entire image.
2. Ensure That Your Face Is Clearly Visible
Many people upload candid photos in which only one side of their face is visible. In such cases, Gemini AI does not get enough visual information to produce a proper portrait shot. Your reference image should show your face clearly, with a neutral expression or a natural smile, so the tool can generate a decent aesthetic image while retaining maximum detail from your face.
3. Avoid Vague Styling Terms
A simple prompt like “make the photo aesthetic” or “add a cinematic effect” is not enough. You should properly describe how you want the result. For example, mention details like “add a warm and soft light from the right side, with the background reflecting some light. The clothes should contrast the lighting, and the styling should look like streetwear fashion from 2025”.
4. Describe Pose Properly
As per multiple reports from people on social media, Gemini AI creates vague poses that do not look cinematic enough. To tackle this, you should describe the pose in detail and add some references from a popular film pose, if that’s something you are looking for. For example, use a prompt like, “the subject should look away from the camera, while one arm is adjusting the sleeve of the other hand. The bracelet should be visible, and the legs should be straight, with a confident body language”.
5. Skip Unrealistic Outfits
If you ask Gemini AI to style you in a glittery or ultra-glamorous outfit, something funky like what Ranveer Singh is known for, the tool will struggle. When you force it to be that creative, it often fails to blend the outfit with the aesthetic theme. To get the best results, stick to simple outfits that are easy to style; piling on designs and accessories often produces a cluttered, unconvincing result.
Be Mindful of Daily Limits and Edits
While Gemini AI can generate stunning portraits, it’s worth remembering that the free version comes with daily usage caps. If you are experimenting heavily with prompts, you may hit those limits quickly. Decide beforehand what you want from the portrait so you don’t burn through your quota on trial and error.
If you hit the daily limit on Gemini AI, you will have to wait for it to reset. Gemini normally restores your full limit within about six hours, though the cooldown period may vary depending on server load. You can also get a Gemini AI Pro subscription for higher limits, and if you are a student, here’s how you can get 12 months of Gemini AI Pro free.
FAQs
Q. Why is Gemini AI not generating images?
The free version of Gemini typically handles only two or three portrait generations at a time. After that, responses may slow down, or the chatbot may reject your request entirely until your daily limit resets.
Q. Does Gemini AI steal face data?
Gemini AI does not specifically steal your facial data. However, Google mentions in its terms and conditions that it may use your images to train its AI models, which could be a privacy concern for some users.
Q. Is Gemini better than ChatGPT for generating images?
Google’s new Nano Banana image generation has proven to be superior to OpenAI’s image models in ChatGPT. Hence, currently, Gemini is slightly better than ChatGPT for generating and editing images.
Wrapping Up
By taking simple steps like using a clear picture of yourself and including the right details in the prompt, you can generate much better portrait images with Gemini AI. Powered by the new Nano Banana model, the results can look as polished as a magazine cover. With well-described lighting and a bit of creative prompting, you can surprise your friends with near-perfect portraits.
In an increasingly divided world, one thing that everyone seems to agree on is that artificial intelligence is a hugely disruptive—and sometimes downright destructive—phenomenon.
At WIRED’s AI Power Summit in New York on Monday, leaders from the worlds of tech, politics, and the media came together to discuss how AI is transforming their intertwined worlds. The Summit included voices from the AI industry, a current US senator and a former Trump administration official, and publishers including WIRED’s parent company, Condé Nast. You can view a livestream of the event in full below.
Livestream: WIRED’s AI Power Summit
“In journalism, many of us have been excited and worried about AI in equal measure,” said Anna Wintour, Condé Nast’s chief content officer and the global editorial director of Vogue, in her opening remarks. “We worry about it replacing our work, and the work of those we write about.”
Leaders from the world of politics offered contrasting visions for ensuring AI has a positive impact overall. Richard Blumenthal, the Democratic senator from Connecticut, said policymakers should learn from social media and figure out suitable guardrails around copyright infringement and other key issues before AI causes too much damage. “We want to deal with the perfect storm that is engulfing journalism,” he said in conversation with WIRED global editorial director Katie Drummond.
In a separate conversation, Dean Ball, a senior fellow at the Foundation for American Innovation and one of the authors of the Trump Administration’s AI Action Plan, defended that policy blueprint’s vision for AI regulation. He claimed that it introduced more rules around AI risks than any other government has produced.
Figures from within the AI industry painted a rosy picture of AI’s impact, too, arguing that it will be a boon for economic growth and will not be deployed unchecked.
Alphabet hit just over $3 trillion in market cap on Monday as investors continue to reward it after a federal judge declined to break the company up.
On Sept. 2, U.S. District Court Judge Amit P. Mehta outlined softer-than-feared remedies for his year-ago ruling that Google maintained an illegal monopoly in search. The DOJ had proposed stronger remedies, including that Alphabet-owned Google be forced to sell Chrome. Tech companies like Perplexity and Ecosia lined up with unsolicited bids. But that possibility has been nixed.
Beyond Google’s cash cow of search, its cloud computing business is also growing rapidly on the strength of its AI offerings. Alphabet now joins Nvidia ($4.3T), Microsoft ($3.8T) and Apple ($3.5T) in the $3 trillion-plus club, with Amazon next up but a lap behind ($2.5T).
Elon Musk’s A.I. firm is scaling back on “generalist A.I. tutors.” Allison Robbert/POOL/AFP via Getty Images
Data annotation may not be the most glamorous job in Silicon Valley, but it’s indispensable for A.I. developers and has made companies like Scale AI multibillion-dollar ventures overnight. Training large language models requires armies of humans to label text, images and video so A.I. systems can learn from them. Now, Elon Musk’s xAI is reshaping how that work is done by shifting away from general contractors and toward experts in specialized fields it calls “A.I. tutors.”
In that vein, xAI recently laid off at least 500 generalist annotators, as reported by Business Insider. The cuts affected about one-third of the company’s 1,500-person annotation team. In emails cited by the outlet, executives described a “strategic pivot” toward hiring domain experts as specialist A.I. tutors.
“Specialist A.I. tutors at xAI are adding huge value,” xAI said in a Sept. 12 post on X that declared the company will “immediately surge” its specialist A.I. tutor team tenfold. The company did not respond to requests for comment from Observer.
What data annotation is and why it matters
Human annotators play a crucial role in fine-tuning raw data, ensuring it can be used effectively to train models. But the work has long been fraught. Firms that outsource this work, like Scale AI, have faced lawsuits from contractors alleging wage theft, misclassification and exposure to disturbing content without safeguards.
Unlike rivals that rely heavily on third parties, xAI employs a large in-house annotation team. Other A.I. leaders—including OpenAI and Google—have worked with Scale in the past, though both distanced themselves from the firm after Meta took a 49 percent stake and hired its CEO, Alexandr Wang, to lead its new superintelligence division. Today, many also contract with competitor Surge AI, which counts Anthropic and Microsoft among its clients.
xAI itself has previously tapped third-party annotators, but is now doubling down on its own staff. The company has posted openings for more than a dozen specialist tutor roles spanning A.I. safety, data science, STEM, finance, Japanese and even “memes and headline commentary.” The latter position involves improving Grok’s ability to “recognize and analyze memes, trolling and virality mechanisms,” according to the listing.
Qualifications for these roles are steep. For STEM specialists, candidates must hold a master’s or Ph.D. in a relevant field—or have earned medals in competitions like the International Mathematical Olympiad. xAI says tutors can work part-time or full-time and earn between $45 and $100 per hour.
The changes come as xAI faces wider turnover beyond its annotation team. In July, the company’s head of infrastructure, Uday Ruddarraju, left for rival OpenAI. Co-founder Igor Babushkin departed the following month to launch a venture capital firm. And in September, Mike Liberatore resigned after just three months as chief financial officer.
Google’s parent company, Alphabet, is now worth $3 trillion, a feat only achieved by three other tech giants: Nvidia, Microsoft, and Apple.
Alphabet shares gained more than 4% in value on Monday, allowing the company to achieve a historic market capitalization of $3.03 trillion at the time of writing. Market capitalization measures the total value of a company by multiplying its share price by the number of outstanding shares.
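The calculation behind that headline number is simple multiplication. As a rough illustration (the share price and share count below are assumed approximations, not Alphabet’s exact figures):

```python
# Market cap = share price x shares outstanding.
# Both inputs are illustrative approximations, not exact values.
share_price = 252.50            # dollars per share (assumed)
shares_outstanding = 12.0e9     # total shares (assumed)

market_cap = share_price * shares_outstanding
print(f"${market_cap / 1e12:.2f} trillion")  # -> $3.03 trillion
```

Because the share count changes slowly, day-to-day swings in market cap track the stock price almost exactly, which is why a 4% share move translates directly into a roughly 4% jump in valuation.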
Alphabet hit the $3 trillion mark just over two decades after Google first went public in 2004, and more than 10 years after its own creation as Google’s parent company.
Alphabet’s market cap has grown tremendously, roughly 70%, from a low of $1.8 trillion in April. The recent surge in value is partially due to an antitrust ruling earlier this month in the case Department of Justice (DOJ) v. Google, which resulted in lighter penalties than initially suggested by the DOJ. The ruling caused Alphabet shares to rise by over 20% over the past month.
Alphabet CEO Sundar Pichai. Photographer: David Paul Morris/Bloomberg via Getty Images
In the week following the ruling, Alphabet gained $234 billion in market cap. The company’s stock is up more than 30% year-to-date. For context, the Nasdaq as a whole is up 15% for the year, per CNBC.
Wall Street generally views Alphabet stock favorably. More than 80% of Wall Street analysts recommend buying the stock as of Monday, per Bloomberg.
Alphabet joins other tech giants that have made it into the $3 trillion club — and beyond. Apple achieved the $3 trillion milestone in June 2023, while Nvidia and Microsoft have taken it a step further by passing the $4 trillion mark.
Alphabet’s focus in recent years has been on artificial intelligence, as the company strives to compete with Meta, OpenAI, and other key players in the AI race. While announcing its second-quarter earnings in July, Alphabet mentioned that it was increasing its AI expenditures from $75 billion to $85 billion amid growing demand for its cloud and AI services.
“AI is positively impacting every part of the business, driving strong momentum,” Alphabet and Google CEO Sundar Pichai stated in the earnings report.
The publishing company behind USA Today and 220 other publications is today rolling out a chatbot-like tool called DeeperDive that can converse with readers, summarize insights from its journalism, and suggest new content from across its sites.
“Visitors now have a trusted AI answer engine on our platform for anything they want to engage with, anything they want to ask,” said Mike Reed, CEO of Gannett and the USA Today Network, at the WIRED AI Power Summit in New York, an event that brought together voices from the tech industry, politics, and the world of media, “and it is performing really great.”
Most publishers have a fraught relationship with AI, as the chatbots that trained on their content are now summarizing it and eating the traffic that search engines used to send them.
Reed said that Google’s AI Overview feature has dramatically cut traffic to publishers across the industry. “We are watching the same movie as everyone else is watching,” Reed said ahead of today’s announcement. “We can see some risk in the future to any content distribution model that is based primarily on SEO optimization.”
Like other publishers, Gannett has signed some deals with AI companies, including Amazon and Perplexity, to license its content. The company actively blocks the web scrapers that crawl websites in order to steal content.
DeeperDive represents a bet that harnessing the same generative artificial intelligence technology could help publishers capture readers’ attention by engaging with them in new ways.
The tool replaces a conventional search box and automatically suggests questions that readers might want to ask. For example, today it offers as one prompt, “How does Trump’s Fed policy affect the economy?”
DeeperDive generates a short answer to the query along with relevant stories from across the USA Today network. Reed says it is crucial that DeeperDive bases its output on factually correct information and does not draw from opinion pieces. “We only look at our real journalism,” he says.
The interface of DeeperDive on the homepage of USA Today
Photograph: USA Today
Reed adds that his company hopes that the tool will also reveal more about readers’ interests. “That can help us from a revenue standpoint,” he said.
DeeperDive was developed by the advertising company Taboola. Adam Singolda, Taboola’s CEO, says his firm developed DeeperDive by fine-tuning several open-source models.
Singolda says DeeperDive benefits from data gathered from Taboola’s own network of around 11,000 publishers, which reaches more than 600 million daily readers. He says the tool “grounds every answer in articles retrieved from our publisher partners and requires sentence-level citations to those sources” and will avoid generating an output if information from two sources seems to conflict.
Gannett’s CEO Reed said ahead of today’s event that, together with Taboola, his firm is interested in exploring agentic tools for readers’ shopping decisions. “Our audiences have a higher intent to purchase to begin with,” he says. “That’s really the next step here.”
Google parent Alphabet (GOOG, GOOGL) became the fourth company to hit a market cap of $3 trillion Monday. The stock move comes after the company avoided the worst of the potential consequences in its antitrust trial, with federal district court Judge Amit Mehta ruling earlier this month that Google won’t have to sell its Chrome browser.
Google joins the likes of Apple (AAPL), Microsoft (MSFT), and Nvidia (NVDA) in the $3 trillion space, though Nvidia has since climbed to $4 trillion on the strength of its AI chips.
Google is one of the key players in the AI race thanks to its Gemini models and chatbot. The company, like Microsoft and Amazon, also bakes its AI into its cloud business, making for a powerful option for customers entering the space or upgrading their existing enterprise cloud subscriptions.
Citi analyst Ronald Josey wrote in an investor note Monday that Google’s antitrust trial results and cloud offerings will only help it continue to grow.
“We believe the pace of Google’s product velocity is ramping aided in part by Judge Mehta’s ruling as it provides more clear operational guidelines for Google,” Josey wrote.
“More specifically, we believe Gemini’s tools, capabilities, integration (across Google’s products), and adoption continues to expand across Google’s product halo and Google’s size and scale give it an inherent advantage (15 products with 500 [million monthly active users]).”
Google, however, also faces increasing competition in the search space from AI upstarts including OpenAI, Anthropic, and Perplexity.
During Google’s trial, Apple SVP of Services Eddy Cue testified that Apple had seen searches in its Safari browser decline for the first time in April. Google, however, later pushed back on the statement saying that the company is seeing growth in other areas.
Google isn’t out of the regulatory woods yet, either. The company still has to contend with its ongoing online advertising antitrust lawsuit. And according to Bloomberg, the Federal Trade Commission is probing Google and Amazon’s ad business practices.
Google has insisted that its AI-generated search result overviews and summaries have not actually hurt traffic for publishers. The publishers disagree, and at least one is willing to go to court to prove the harm they claim Google has caused. Penske Media Corporation, the parent company of Rolling Stone and The Hollywood Reporter, sued Google on Friday over allegations that the search giant has used its work without permission to generate summaries and ultimately reduced traffic to its publications.
Penske’s argument is pretty simple: by showing an AI-generated summary of an article at the top of the page via Google’s AI Overview panel, users have little reason to click through to read the full article, resulting in dwindling traffic finding its way to the publisher’s platforms, which it needs in order to monetize its content, either through ads or subscriptions. The search engine, the company argues, uses its monopoly over search to basically make publishers give up access to their content for next to nothing.
Notably, Penske claims that in recent years, Google has basically given publishers no choice but to give up access to their content. The lawsuit claims that Google now only indexes a website, making it available to appear in search, if the publisher agrees to give Google permission to use that content for other purposes, like its AI summaries. If you think you lose traffic by not getting clickthroughs on Google, just imagine how bad it would be to not appear at all.
A spokesperson for Google, unsurprisingly, said the company doesn’t agree with the claims. “With AI Overviews, people find Search more helpful and use it more, creating new opportunities for content to be discovered. We will defend against these meritless claims,” Google spokesperson Jose Castaneda told Reuters.
That has basically been the company line since rumbles of traffic declines started getting louder. Last month, the company published a blog post in which it claimed that click volume from Google Search results to websites has been “relatively stable year-over-year”—notably without offering a definition for what “relatively stable” is. The company also made the case that “click quality” has increased, so people who do click through are spending more time on the sites they get sent to.
That doesn’t match up with what publishers claim to be seeing. DMG Media, owner of the Daily Mail, claims its click-through rates have fallen by as much as 89% since AI Overviews were rolled out. A Wall Street Journal report from earlier this year said Business Insider, The Washington Post, and HuffPost have all reported traffic declines. Pew Research also found that people click through far less often when an AI overview is available: users served search results without an AI summary click through to an article nearly twice as often as those who see an AI-generated result.
Just for kicks, if you ask Google Gemini if Google’s AI Overviews are resulting in less traffic for publishers, it says, “Yes, Google’s AI Overview in search results appears to be resulting in less traffic for many websites and publishers. While Google has stated that AI Overviews create new opportunities for content discovery, several studies and anecdotal reports from publishers suggest a negative impact on traffic.” It might be fun to ask Google, “Are you lying about AI Overview’s impact on traffic, or is your AI assistant providing false and unreliable information?”
Even though Google’s AI Overviews were introduced with a comically rocky start, it’s about to face a far more serious challenge. Penske Media, the publisher for Rolling Stone, Variety, Billboard and others, filed a lawsuit against Google, claiming the tech giant illegally powers its AI Overviews feature with content from its sites. Penske claimed in the lawsuit that the AI feature is also “siphoning and discouraging user traffic to PMC’s and other publishers’ websites,” adding that “the revenue generated by those visits will decline.”
The lawsuit, filed in Washington, DC’s federal district court, claims that about 20 percent of Google searches that link to one of Penske’s sites now have AI Overviews. The media company argued that this percentage will continue to increase and that its affiliate revenue through 2024 dropped by more than a third from its peak. Google spokesperson Jose Castaneda said that the tech giant will “defend against these meritless claims” and that “AI Overviews send traffic to a greater diversity of sites.”
Earlier this year, Google faced a similar lawsuit from Chegg, an educational tech company that’s known for textbook rentals. Like Penske Media, this lawsuit alleged that Google’s AI Overviews hurt website traffic and revenue for Chegg. However, the Penske lawsuit is the first time that Google has faced legal action from a major US publisher about its AI search capabilities.
Beyond Google’s legal troubles, other AI companies have also been facing their own court cases. In 2023, the New York Times sued OpenAI, claiming the AI company used published news articles to train its chatbots without offering compensation. More recently, Anthropic agreed to pay a $1.5 billion settlement in a class action lawsuit targeting its Claude chatbot’s use of copyrighted works.
Google faces a new lawsuit accusing the company of illegally using news publishers’ content to create AI summaries that damage their business.
The lawsuit comes from Penske Media Corporation (PMC), which owns industry publications such as Rolling Stone, Billboard, Variety, The Hollywood Reporter, Deadline, Vibe, and Artforum. While Penske’s suit is the first to target Google and its parent company Alphabet over showing AI-generated summaries in search, both publishers and authors have sued other AI companies over related copyright concerns. Google is also facing an antitrust complaint over AI Overviews in Europe.
“As a leading global publisher, we have a duty to protect PMC’s best-in-class journalists and award-winning journalism as a source of truth,” said Penske Media CEO Jay Penske in a statement. “Furthermore, we have a responsibility to proactively fight for the future of digital media and preserve its integrity — all of which is threatened by Google’s current actions.”
Since launching its AI Overviews last year, Google has been criticized for threatening the business models of the same publishers it relies on to provide the content needed to create accurate AI summaries and answers.
The new lawsuit goes further, accusing Google of continuing to “wield its monopoly to coerce PMC into permitting Google to republish PMC’s content in AI Overviews” and to use that content to train its AI models.
Google spokesperson José Castañeda said in a statement that AI Overviews make Google search “more helpful” and create “new opportunities for content to be discovered.”
“Every day, Google sends billions of clicks to sites across the web, and AI Overviews send traffic to a greater diversity of sites,” Castañeda said. “We will defend against these meritless claims.”
The lawsuit argues that while Penske Media allows Google to crawl its websites in an “exchange of access for traffic” that is “the fundamental bargain that supports the production of content for the open commercial Web,” Google has recently “begun to tie its participation in this bargain to another transaction to which PMC and other publishers do not willingly consent.”
“As a condition of indexing publisher content for search, Google now requires publishers to also supply that content for other uses that cannibalize or preempt search referrals,” the lawsuit claims, adding that the only way for Penske to opt out would be to remove itself from Google search entirely, which would be “devastating.”
The lawsuit also claims that Penske has seen “significant declines in clicks from Google searches since Google started rolling out AI Overviews.” That means less ad revenue for the publisher, and it also threatens subscription and affiliate revenue, the company says: “These revenue streams rely on people actually visiting PMC sites.”
And while Google has pushed back against complaints that AI Overviews reduce traffic to publishers, the lawsuit says, “Google has offered no credible competing information regarding search referral traffic.”
Penske’s suit comes after Google seemingly dodged an antitrust bullet — while a federal judge had ruled the company acted illegally to maintain a monopoly in online search, the judge did not order the company to break up its businesses (for example, by selling Chrome), due in part to increasing competition in AI.
This post has been updated with a statement from Jay Penske.
All of the prices above are for a single line paid monthly. Google periodically offers half off and other specials, usually only if you bring your own phone.
Activate Your Chip
Once you’ve picked your plan and signed up, Google will mail out a SIM card. It took a couple of days for my physical SIM to arrive, but I’ll gladly take the slight delay if it saves me from setting foot in a physical carrier store. If you’re using an iPhone, Google Pixel, Samsung phone, or other device that supports eSIM, you can set up Fi with an eSIM instantly.
Once your chip arrives, you’ll need to use a SIM tool to pull out the SIM tray and insert the SIM card into your phone. Then, download the Google Fi app (you’ll need to be on Wi-Fi to do this since your chip won’t connect to the network yet), and follow the steps there. If you’re porting in your old phone number, it may take a little longer. For me, after setting up a new number, Fi was up and running after about 5 minutes. That’s it, you’re done.
I have traveled and lived in rural areas for the past 7 years, and I’ve tried just about every phone and hotspot plan around—none of them are anywhere near this simple. The only one that comes close is Red Pocket Mobile, which I still use in addition to Google Fi. There are cheaper plans out there, but in terms of ease of use and reliability, Fi is hard to beat.
Using Google Fi as a Hotspot
You can use Google Fi as a simple way to add cellular connectivity to any device that accepts a SIM card, like a mobile hotspot. You’ll need to activate your Google Fi SIM card with a phone using the Google Fi app, but once the activation is done, you can put that chip in any device your plan allows. If you go with the Unlimited Plus plan, that means you can put your chip in an iPad, Android tablet, or a 4G/5G mobile hotspot. You are still bound by the 50-gigabyte data limit, though, so make sure you don’t go too crazy with Netflix.
Alternatively, consider ordering a data-only SIM. Google allows you to have up to four if you’re on the Unlimited Premium or Flexible plans, meaning you can keep four gadgets—a spare phone or tablet—connected to the internet. The caveat is that they can’t place phone calls or receive texts. You don’t have to use your phone to activate the SIM first. You can order a data-only SIM in the Plan section of your account, under Devices & subscriptions. If you have an eSIM-only device you want to connect, you can tap Connect your tablet and Fi will offer a QR code you can scan to activate the SIM.
Frequently Asked Questions
Do I need a Google account? Yes, you do need a Google account to sign up for Google Fi, but you don’t need to be all-in on Google to use Fi. I have an Android phone, and I use Google apps since that’s what we use here at WIRED, but outside of work I do not use any Google services other than Fi, and it still works great.
Is Google Fi tracking my every move? Yes, but so is your current provider. Google Fi’s terms of service say Google doesn’t sell what’s known as customer proprietary network information—things like call location, details, and features you use—to anyone else.
I’m traveling and want to use Google Fi abroad. Will that work? Fi’s terms of service require you to activate your service in the US, but after that, in theory, it should work anywhere Fi has partnered with an in-country network. WIRED editor Julian Chokkattu has used Fi in multiple countries while traveling. However, based on feedback from WIRED readers, and reading through travel forums, it seems that most people are being cut off if they’re out of the US for more than a few weeks. I would say don’t plan on using Google Fi to fulfill your digital nomad dreams.
Tips and Tricks
There are several features available through the Google Fi app that you might not discover at first. One of my favorites is an old Google Voice feature that lets you forward calls to any phone you like; it works in Google Fi too. All you need to do is add a number to Fi’s forwarding list, and any time you get a call, it will ring both your cell phone and that secondary number—whether it’s a home phone, a second cell, or the phone at your Airbnb. This is very handy in places where your signal strength is iffy—just route the call to a landline. Similarly, it can be worth enabling Wi-Fi calling for times when you have access to Wi-Fi but not a cell signal.
Another feature that’s becoming more and more useful as the number of spam calls I get goes ever upward is call blocking. Android and iOS calling apps can block calls, but that sends the caller directly to voicemail, and you still end up getting the voicemail. Block a call through the Google Fi app, and the caller instead gets a message saying your number has been disconnected or is no longer in service. As far as they know, you’ve changed numbers. To set this up, open the Fi app and look under Privacy & security > Manage contact settings > Manage blocked numbers, then add any number you like to the list. If you change your mind, just delete the listing.
One final thing worth mentioning: I have not canceled my Google Fi service despite switching to Starlink for most of my hotspot needs. Instead, I just suspended my Fi service using the app. That way, should I need it for some reason, I can reactivate it very quickly.
Google has accidentally leaked its new Nest security cameras and video doorbell line. Setup options appeared in the Google Home app for wired versions of the Nest Cam Indoor (3rd gen), Nest Cam Outdoor (2nd gen), and Nest Doorbell (3rd gen), as reported by Android Authority. The options now appear to have been removed, but an eagle-eyed Redditor also found the new products locked up at Home Depot, ready to go on sale.
Google has already confirmed that it plans to unveil new information about the infusion of its Gemini voice assistant into Google Home on October 1, replacing Google Assistant. That’s likely when we’ll see the new hardware, too. These overdue updates are rumored to include a resolution bump to 2K, a new zoom and crop feature, fresh colors, and a switch to Gemini for Home. There’s also talk of a new subscription option as Nest Aware turns into Google Home Premium, and a new Google Home Premium Advanced plan. Details haven’t been confirmed, so take all of this with a pinch of salt.
As for the design of the new lineup, the devices look almost identical to the existing range, aside from the colors, which include an eye-catching red. Perhaps in preparation for the new releases, the Nest team recently updated the Home app to show a preview image from the last event before the live view loads, add swiping between the timeline and events views, and improve notifications with a static thumbnail that expands into a large animated preview. There was also a raft of performance improvements and some much-needed polish. —Simon Hill
Sony’s Xperia 10 VII Won’t Launch in the US
Courtesy of Sony
Sony stopped selling its flagship Xperia phones in the US last year, and that seems to be continuing with the latest midrange Xperia 10 VII, announced on Friday. It’ll launch in Asia, Europe, and the UK, and it debuts a fresh design language with a horizontal camera bar, much like Google’s Pixel phones (and even the iPhone Air).
It has a 6.1-inch screen, which may sound nice and compact, but the phone itself is slightly bigger than the 6.1-inch iPhone 16. That’s probably because the bezels at the top and bottom of the screen are a little chunky for a modern phone. Still, you get a 120-Hz refresh rate, and some folks will be excited to see the 3.5-mm headphone jack and microSD card slot. It’s powered by the Qualcomm Snapdragon 6 Gen 3 chip with a 5,000-mAh battery in tow, and no wireless charging.
As for the cameras, Sony has a 50-megapixel main camera paired with a 13-MP ultrawide, and you can use the dedicated shutter button on the side to snap pics. It’ll cost £399 or €449 in the UK and Europe and goes on sale September 19, the same day as the latest iPhone 17 lineup.
Qualcomm Debuts Quick Charge 5+
This week, Qualcomm announced the next evolution of its fast-charging technology, known as Quick Charge 5+. Qualcomm calls it its “fastest and most versatile charging solution,” which can recharge phones from 0 to 50 percent in five minutes. That was true of the original Quick Charge 5, though, which is now more than 5 years old. The advances in Quick Charge 5+ revolve around adding “advanced thermal control” and “intelligent power delivery” to the standard. It’s less about increasing charging speed and more about maintaining that speed sustainably.
For example, Quick Charge 5+ doesn’t just flow all that juice to the device uninhibited; instead, it “dynamically” regulates that power using a “reduced-voltage approach.” This means it can lower the voltage on the fly to prevent overheating while charging, without impacting performance or battery health.
Qualcomm says its fast-charging technology powers over 1 billion devices, but we’ll have to see if Quick Charge 5+ picks up more mainstream adoption in phones and accessories in the US. Qualcomm’s annual Snapdragon Summit is coming up on September 23, and the company says devices announced at the conference will support Quick Charge 5+. —Luke Larsen
Ultraloq Enables NFC Unlock for Android Phones
Courtesy of Ultraloq
Smart-lock brand Ultraloq is adept at adding support for the latest smart-home standards to its devices, from Matter to HomeKit. Now, an update to its Bolt NFC smart lock ($200) gives Android users an experience similar to Apple Home Key, letting NFC-enabled Android devices unlock the door with a tap, much like how you tap to pay. Tap-to-unlock is a feature often touted for iPhones, and usually a lock supports only one ecosystem at a time; the updated Bolt NFC allows both Apple and Android devices to wirelessly unlock it with a tap.
Universal translators were once a science fiction dream, appearing in shows like “Star Trek” as devices capable of translating any language into English.
Now, new advancements in AI have made it possible for big tech companies like Apple, Meta, and Google to create gadgets that translate from one language to another in real-time. And analysts expect the products to be popular.
Here’s what they’re working on:
Apple
Apple introduced the $250 AirPods Pro 3, a pair of earbuds capable of live translation in real-time, at its “Awe Dropping” product launch event earlier this week. The earbuds support translations from French, German, Portuguese, and Spanish into English. Older AirPods models, such as the AirPods 4 and AirPods Pro 2, will get the live translation feature as an update next week.
Apple AirPods Pro displayed at Apple headquarters in Cupertino, California, on Sept. 9. Photo by Justin Sullivan/Getty Images
In a demo video, Apple showcased how AirPods could be used in a live conversation between an English speaker buying flowers and a Spanish-speaking vendor. The AirPods translated the vendor’s words from Spanish to English in real time through in-ear audio. When the English speaker responds, their words are translated into Spanish and shown as written text on a phone screen. If both people in a conversation are wearing AirPods, they can speak different languages and have the earbuds translate in real time.
Analysts expect the product to entice users to upgrade their Apple devices.
“If we can actually use the AirPods for live translations, that’s a feature that would actually get people to upgrade,” DA Davidson analyst Gil Luria told CNBC.
Google

Google announced last month that its Pixel 10 phone can translate from one language to another during phone calls — and preserve the speaker’s natural voice with translations. The feature, called Voice Translate, applies to real-time phone call conversations in languages like Spanish, Japanese, and Hindi, and will become available through a software update on Monday.
The translate feature processes translations on the Pixel 10 device, so conversations are kept private.
Google Pixel 10 smartphone. Photo by Andrej Sokolow/picture alliance via Getty Images
“Voice Translate allows you to break down language barriers during phone calls,” Google stated in a blog post announcing the feature.
Meta
Meta, meanwhile, has recently brought translation capabilities to both new and existing devices. Since April of this year, the bestselling Ray-Ban Meta smart glasses have been capable of live translation.
The glasses, which have sold more than two million pairs since launch, can help English-speaking users understand speech in French, Italian, and Spanish with a simple voice command: “Hey Meta, start live translation.”
Meta CEO Mark Zuckerberg (left) and Mixed Martial Arts Fighter Brandon Moreno (right) at Meta Connect 2024. Photographer: David Paul Morris/Bloomberg via Getty Images
Meta CEO Mark Zuckerberg first introduced the live AI translation feature for the Ray-Ban Meta smart glasses at Meta Connect 2024. He demonstrated how he was able to understand Brandon Moreno, a mixed martial arts fighter, speaking in Spanish while he responded in English.
“You can simply speak to someone in Spanish, and hear the English translation directly in your ear,” Zuckerberg said at the event.
If you’re hunting for a well-priced Android tablet that’s perfect for occasional use around the house, look no further than the Samsung Galaxy Tab S10 FE, which is currently discounted at Amazon to just $430. It’s one of our favorite Android tablets, with the right balance of features, power, and battery life for most people.
Courtesy of Samsung
Despite using an LCD screen instead of the increasingly common AMOLED, the Samsung’s 10.7-inch panel is vivid and clear for most use cases. It’s great for curling up with a movie in bed and bright enough to use outside if you’re keen on adventuring with your devices.
At its core is the Samsung Exynos 1580 processor, the same chip found in the Galaxy A56, with 8 GB of memory. It isn’t the most high-performance tablet around, but it’s fully capable of playing games like Magic: The Gathering Arena, and can even handle more demanding titles like Asphalt Legends: Unite. At 497 grams, it’s light enough to carry around and hold without much effort, but our reviewer did note that the corners are slightly uncomfortable over longer sessions.
Despite the sizable screen and midrange performance, the Samsung manages an impressive 20 hours of mixed use without needing to be plugged in. It also has wireless charging, which can take it to full battery in under two hours. That’s all with a slightly smaller battery than we’re used to seeing, so good optimization and component selection helps a lot with longevity here.
Samsung also sweetens the deal by including a stylus, something most of our other favorite tablets can’t claim. It’s a basic but helpful addition, great for occasional note-taking or just keeping smudges off your screen. The tablet also has a built-in fingerprint sensor, which was a little hard to find at first but ended up being a more reliable option than face detection.
If you’re already a Samsung smartphone user, you’ll recognize the Samsung One UI 7 software, which is based on Android 15. It adds some great functionality without being as disruptive as other manufacturer launchers can be, but if that’s a deal-breaker, make sure to check out our other favorite Android tablets for more options.
The Federal Trade Commission is investigating whether Amazon and Google misled advertisers regarding the pricing and terms for their ads. The investigation is reportedly being conducted by the agency’s consumer protection unit and centers on the auction-style sale of advertising space by the companies.
Google sells ads using automated auctions that run after a user enters a search query. These auctions take place in less than a second. Amazon uses real-time auctions to place ads within its listings, which users would recognize as “sponsored listings” or “sponsored ads” when searching for specific products.
The investigation questions whether Amazon disclosed so-called “reserve pricing” for some of its ads, which is a price floor that advertisers must meet before they can buy an ad. For Google’s part, the FTC is looking at certain practices by the search giant, including its internal pricing process and whether it was surreptitiously increasing the cost of ads in ways advertisers couldn’t see.
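The reserve-price mechanic at the heart of the Amazon questions is easiest to see in a toy model. Below is a minimal sketch of a second-price auction with a price floor; the function, bidders, and figures are all hypothetical and are not meant to reflect Amazon’s or Google’s actual auction mechanics, which are far more complex.

```python
# Hypothetical sketch: a second-price auction with a reserve (floor) price.
# All names and numbers are illustrative, not any real ad platform's logic.

def run_auction(bids, reserve):
    """Return (winner, price) for a second-price auction with a price floor."""
    # Only bids at or above the reserve are eligible to win.
    eligible = {bidder: amt for bidder, amt in bids.items() if amt >= reserve}
    if not eligible:
        return None, None  # no bid clears the floor; the slot goes unsold

    ranked = sorted(eligible.items(), key=lambda kv: kv[1], reverse=True)
    winner, _top_bid = ranked[0]
    # The winner pays the greater of the runner-up bid and the reserve,
    # which is why an undisclosed reserve can quietly raise prices.
    runner_up = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, max(runner_up, reserve)

bids = {"ad_a": 1.20, "ad_b": 0.90, "ad_c": 0.40}
print(run_auction(bids, reserve=0.50))  # ad_a wins at the $0.90 runner-up bid
print(run_auction(bids, reserve=1.00))  # ad_a wins, but the floor lifts the price to $1.00
print(run_auction(bids, reserve=1.50))  # nothing clears the floor
```

Note how raising the reserve from $0.50 to $1.00 changes nothing about who wins, yet increases what the winner pays, which is exactly why advertisers care whether a floor is disclosed.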
The FTC isn’t the only federal agency keeping a close eye on big tech. Earlier this year, a federal judge ruled that Google held a monopoly in online ad tech after the Department of Justice (DOJ) sued to break up the giant’s ad business. Google also recently escaped a DOJ push to force the sale of its Chrome browser.
FTC Chair Andrew Ferguson has previously said that big tech is one of the agency’s top priorities. These investigations move forward against a backdrop of top tech CEOs continuing to try to curry favor with President Trump, including with sweeping (if potentially unrealistic) promises of investment in the US economy.
Google is rolling out an update for Gmail on mobile and the web that will make it easier to track emails for your deliveries. The most prominent change you’ll see is a new Purchases tab, where Gmail will put all your delivery emails so you can view them in one place. In the app, you’ll be able to access the new view via the side menu. Just click the hamburger icon in the text box at the top of the interface.
Even though deliveries now have their own tab, Gmail will still show packages that are set to arrive within the day as cards at the top of your primary inbox. Each card comes with a “See item” or a “Track Package” button that you can click or tap without having to search for the original delivery email. The new tab will start showing up in personal Gmail accounts today.
In addition, Google is updating Gmail’s Promotions tab, allowing you to sort the emails in it by “most relevant.” Gmail will decide which brands and emails are most relevant for you based on what you’ve interacted with the most in the past. It will also send you “nudges” on upcoming deals and offers that are set to expire soon. You’ll see the changes to the Promotions tab in the coming weeks.
Here’s a Mandela effect event that you probably thought was real: The Department of Government Efficiency, the pseudo-agency run by Elon Musk to cut “fraud, waste, and abuse” from federal operations, didn’t actually exist. At least, that is what Google’s AI Overview response will tell you if you search certain content related to DOGE’s operations.
A Bluesky user who goes by iucounu first pointed out this mistake in Google’s comprehension skills, finding that querying the search engine for information about the number of deaths caused by DOGE’s cuts to essential programs results in a response that claims the agency is “fictional” and from “a political satire or conspiracy theory.” Gizmodo was able to recreate these results:
According to Google, “There is no actual government department named DOGE, and the term is used in critical or satirical contexts to refer to policies or actions taken by the Trump administration.” The results expand on this later, stating, “It is crucial to understand that there is no actual government entity named DOGE, and the discussion around it is part of political discourse or satire, not a factual government action.”
The closest it gets to a source outright saying DOGE doesn’t exist is a link to the Democrats’ House Committee on the Budget, which has a page titled “The So-Called ‘DOGE,’” but even that offers a pretty clear statement that DOGE is not some mass delusion: “DOGE is an organization in the Executive Office of the President. It is not a cabinet-level agency with Senate-approved leadership and has no statutory authority to alter Congressionally appropriated funds.” The other sources, places like Lawfare and the Center on Budget and Policy Priorities, don’t even come close to suggesting the agency is a satire.
So what gives? Google didn’t offer any explanation when contacted, though a spokesperson for the company did tell Gizmodo, “This AI Overview is clearly incorrect. It violated our policies around civic information, and we are taking action to address the issue.”
So it looks like DOGE wasn’t all in our collective heads after all. Ain’t that a shame?
Apple’s AirPods Pro 3 are here, and with their arrival comes a lot of questions. One of the big questions after Apple’s annual iPhone event is, “Should I buy new AirPods right now?” But before you can answer that, it’s important to know the competition, and the main one is Google’s Pixel Buds Pro 2. While both bear the “Pro” moniker in their names, they’re not created equal, and small differences in features could have a big impact on which pair you ought to buy.
If you’re wondering which trigger to pull, here’s a breakdown of which pair of wireless earbuds does what.
AirPods Pro 3 vs. Pixel Buds Pro 2: Sound Quality

While I haven’t gotten to hear Apple’s AirPods Pro 3 for myself yet, these wireless earbuds have a new architecture that Apple says should bring some improvements over the last generation. According to Apple, AirPods Pro 3 have a new “multiport acoustic architecture” that better controls the airflow and the way the sound carries to the ear. How demonstrable that change is remains to be seen, but it should be the best-sounding pair of AirPods yet, if Apple’s messaging is any indication.
Similarly, the Pixel Buds Pro 2 mark a significant boost in sound quality over the original iteration, with 11mm drivers that help augment both high and low ends. Which architecture delivers better sound quality will come down to preference, and we won’t know for sure until we try AirPods Pro 3 for ourselves, but both should be the best-sounding version in their respective product lines. AirPods Pro 3 will have tough competition, though—we thought the Pixel Buds Pro 2 were damn near perfect.
AirPods Pro 3 vs. Pixel Buds Pro 2: Noise Cancellation

Apple is promising some big improvements gen-over-gen with active noise cancellation (ANC), claiming that its AirPods Pro 3 have twice the ANC capability of the AirPods Pro 2. Apple generally offers better-than-average ANC (it’s not Bose QuietComfort Ultra, but it’s good), so double the ANC is an enticing offer. Google’s Pixel Buds Pro 2 also offer double the ANC over the first generation, and as we stated in our review, it is one of the highlights of the buds overall.
One thing that could give AirPods Pro 3 the edge, however, is a redesigned eartip that contains foam inside. That should make a very tight seal in your ear and provide good passive noise cancellation on top of ANC. Again, it’s hard to say without hearing the AirPods Pro 3 for ourselves, but there’s a chance that AirPods Pro 3 could have an X factor here.
AirPods Pro 3 vs. Pixel Buds Pro 2: Battery Life

One of the biggest AirPods improvements gen-over-gen, according to Apple, is in the battery life department. AirPods Pro 3 now have an 8-hour battery life outside of the case with ANC on, which is two more hours than the AirPods Pro 2. The thing is, we’re comparing to Google’s Pixel Buds Pro 2, which also have 8 hours of battery life outside the case, meaning this part of the showdown could be a tie.
That being said, the Pixel Buds Pro 2 do have a better battery life in the case. While AirPods Pro 3 have a 24-hour battery life in the case with ANC on, Google’s Pixel Buds Pro 2 have 30 hours. Case battery isn’t the biggest metric for success, but more is more when it comes to battery.
AirPods Pro 3 vs. Pixel Buds Pro 2: Features

Features are where things get interesting, and potentially where AirPods Pro 3 pull away. While both pairs of wireless earbuds have AI integrations (Google has Gemini, and AirPods Pro have Apple Intelligence), conversation detection, support for head gestures, adaptive ANC, and even live translation abilities, Apple’s AirPods Pro 3 lean into health sensing as well.
AirPods Pro 3 introduce a heart rate sensor that allows the wireless earbuds to be used for tracking workouts and even calories burned, while Google’s Pixel Buds Pro 2 have no such health features. Whether that’s a game-changer is entirely up to you, but it’s clear that AirPods Pro 3 simply do more in that department. Maybe Google will close the gap with its next pair of wireless earbuds, but for now, Apple has the advantage, especially if you’re using an iPhone.
AirPods Pro 3 vs. Pixel Buds Pro 2: Fit

Apple clearly spent a lot of time redesigning its AirPods Pro 3. Specifically, Apple says that it used “over 10,000 ear scans with more than 100,000 hours of user research” to tweak the fit of AirPods Pro 3. It also changed the “external geometry of the eartip,” which now aligns to the center of the body for more stability. Those changes could very well result in an even more comfortable fit and give AirPods an edge here.
With that said, we gave the Pixel Buds Pro 2 high marks for comfort, so Apple has its work cut out for it. Apple does now objectively offer more eartip sizes than Google—five instead of Google’s four—but more fit options don’t necessarily mean a more comfortable fit. If I were a betting man, I’d put my money on Apple in the fit metric, if only because it seems to have exhaustively redesigned the AirPods Pro 3 with a focus on weight and feel.
AirPods Pro 3 vs. Pixel Buds Pro 2: Price
While the AirPods Pro 3 are more expensive than the Pixel Buds Pro 2, they’re also brand new, and the price isn’t drastically different. Apple’s AirPods Pro 3 are $250, while Google’s Pixel Buds Pro 2 are $230. What’s notable is that Apple didn’t raise the price of its wireless earbuds, making the AirPods Pro 3 feel like a solid deal. Google’s Pixel Buds Pro 2 are almost exactly a year old at this point, and while $230 isn’t the most expensive starting price for wireless earbuds, it’s not a massive discount. That being said, we’ll be able to tell you which price is worth it once we actually test the AirPods Pro 3 ourselves.