ReportWire

Tag: Google

  • Founder of $100 million company never unplugs from work, but encourages her team to have work-life balance: ‘They didn’t sign up to be entrepreneurs’ | Fortune


    Founders can find it hard to step away from work when their company rests on their shoulders. The concept of having “work-life balance” has sparked fierce debate among entrepreneurs, who question if it’s even possible to have the best of both worlds: scaling a multimillion-dollar business, with enough downtime to recharge. Two-time founder Nicole Bernard Dawes is a strong advocate of unplugging from the job—but only for her employees. 

    “I think I probably am a little bit of a hypocrite, because I don’t unplug. I never do,” Dawes tells Fortune. “I never want to be the person that’s holding up a member of our team.”

    The serial entrepreneur encourages her staffers to totally disconnect from work once they’re off the clock, but doesn’t give herself the same breathing room. Having scaled two companies to success, she’s assumed the responsibility of always being on for decades. Dawes first founded the organic, non-GMO tortilla chip brand Late July in 2003, which currently lines the aisles of Target, Whole Foods, Kroger, and Walmart stores across the country. Campbell’s acquired a majority stake in the business in 2014, eventually buying the rest of the $100 million company in 2017. In 2018, Dawes broke into the consumer packaged goods (CPG) market again, this time with the zero-sugar, sustainably packaged soda line Nixie. The brand raised $27 million in new funding earlier this year, and its products are sold in over 11,000 major grocery stores. 

    With more than two decades of entrepreneurship under her belt, Dawes pushed through economic downturns and many sleepless nights at Late July. But the hardships didn’t stop her from returning to the startup scene as Nixie’s founder—having grown up in the business world, Dawes is not so easily deterred. However, she doesn’t want work to overtake her staffers’ lives.

    “I signed up for this. I am the entrepreneur, I did this to myself—a self-inflicted situation. [My employees] didn’t sign up to be entrepreneurs,” Dawes says. “I am very comfortable taking downtime, but also making sure I’m available.”

    Dawes says never unplugging is “my life”—and she grew up in it

    Many leaders out there, like Google cofounder Sergey Brin, expect their staffers to clock more hours than the typical nine-to-five job. But Dawes doesn’t hold her employees to the relentless work ethic of entrepreneurs who pride themselves on having no personal lives. 

    “I think that where a lot of [leaders] differ, is extending that to their team. I feel very strongly that it should not extend to the team,” Dawes explains. “But I also feel like that is how I grew up. My father missed a lot of stuff because he felt like that was what you had to do. So I was determined I wasn’t gonna do that. I wanted to be present at things for my kids, and I wanted [it] to be okay for our team to be that way, too.”

    Dawes witnessed the pitfalls of entrepreneurship as a kid growing up in her parents’ food businesses. She spent her childhood years working the front counter of her mother’s health-food store, and roaming the floors of her late father’s $4.87 billion snack empire: Cape Cod Chips. As a kid in a family running two businesses, Dawes says it could be difficult for her parents to step away from the job. So when she decided to follow in their footsteps as a two-time founder of successful CPG brands, she knew exactly what to expect. 

    “When you decide to become an entrepreneur, there’s a lot of people [saying], ‘It’s stressful, it’s lonely, it’s all these things.’ And that’s true, but this is where I was really fortunate: I grew up in this business, so I entered eyes wide open,” Dawes says. “That’s why it’s really important to be passionate about your mission, passionate about your products. Because you do have to sacrifice a lot on the other side.”

    Dawes still makes time for the important things

    While Dawes admits she has difficulty stepping away from the grind, she still makes time for the things that keep her sane. 

    “You have to choose what’s the most important thing in that moment. I don’t think as an entrepreneur—at least for me—I’ve never really, truly, been able to shut off completely,” Dawes says. “But I also make time to have family dinner almost every night. There were things that were priorities to me, and I still make them priorities, like going out for a walk every day or exercising.”

    The entrepreneur also loves hitting the beach, reading, and cooking—and despite it feeling like a chore to many, Dawes really enjoys going to the grocery store. She calls it her “hobby”: observing what new products are stocked on shelves, and what items shoppers are gravitating towards. It’s gratifying to witness people pick up a bag of Late July or a case of Nixie drinks to bring home to their families, something she feels immensely grateful for. While getting her brands into those grocery aisles has been no easy feat, it’s all been worth it in the end. Dawes says passion is what eases the weight of her work-life balance. 

    “Sometimes when I wake up in the morning like, ‘I can’t even believe I’m this lucky that I get to do this job,’” Dawes says. “And because I feel that way, it doesn’t feel like working. I’m getting to do something fun all the time.”

    Emma Burleigh

  • Google Nest Doorbell Cam (2025) Review: I’m So Tired of Subscriptions


    Google is betting that AI can justify the high price of its smart home security camera subscriptions. The idea is that with AI, your notifications would read more like a human looked outside and told you what they saw. And instead of you scrolling through endless video footage to see what happened, AI can summarize the day for you. Sounds good, right? Sounds great to me.

    If you already read my Nest Cam Outdoor (wired, 2nd gen) review, you’ll know the reality, as I experienced it, is underwhelming. Notifications, generated by Google’s Gemini AI chatbot, constantly misidentified my pets and gave weird and wrong descriptions of events taking place in triggered recordings. Daily summaries of my family’s comings and goings made it sound like my house was being mobbed with people and animals. None of it helped justify the pricey cloud storage service at the heart of the Google Home Premium (formerly Nest Aware) subscriptions. And without those subscriptions, the Nest Cam Outdoor just doesn’t do enough to make it worth buying over some of the more capable, less cloud-reliant alternatives out there.

    Does the Google Nest Doorbell (wired, 3rd gen) fare any better? Well, the AI features are still broken the same way, but it may still be a better purchase, depending on how deep your roots are within the walled garden of the Google Home ecosystem. If you’re not a big Google Home user, though, it’s best to look elsewhere.


    Google Nest Doorbell (wired, 3rd gen)

    Janky AI summaries and spendy subscription aside, the Nest Doorbell is good enough if you’re deep in the Google Home ecosystem.

    Pros:

    • Clear, wide field of view
    • Nice integration with Google Home speakers and displays
    • Attractive design
    • Quick notifications

    Cons:

    • Inconsistent AI notifications
    • AI summaries are useless
    • Expensive hardware and subscriptions
    • No local storage

    The Nest Doorbell might be the nicest-looking video doorbell on the market. Its slender, bar-shaped housing is rounded on both ends, curving tightly around the camera and the LED ring-lit doorbell button. The whole thing has the same gentle, pleasingly symmetrical vibe that characterizes the other Google Nest cameras. It’s a lot nicer to look at than chunky, blocky video doorbells from the likes of Ring or Eufy.

    Beyond the pretty design, Google’s third-gen wired doorbell has solid specs like a 2K resolution camera sensor with a generous 166-degree diagonal field of view that spreads out over a square aspect ratio. It captures HDR video at 30 frames per second; clips come in vibrant color during the day and, using infrared LEDs, black and white at night. The Nest Doorbell also has a microphone and speaker that enable two-way audio. Connectivity-wise, the camera uses both 2.4GHz and 5GHz Wi-Fi and Bluetooth Low Energy. Thanks to that fast Wi-Fi and its always-on nature, its live feed loads almost instantly in the Google Home app.

    Installation is straightforward, assuming you’ve got the requisite doorbell wiring by your door. The Nest Doorbell comes with a mounting plate and a second angled adapter that you can use if you want to have the camera pointing more toward people at your door. Google includes wire extenders if you need them, and the Google Home app, which you use for setup, guides you through installation.

    [Image: © Wes Davis / Gizmodo]

    It’s easy to connect the Nest Doorbell to the Google Home app—the only place you’ll ever use it, since this is exclusively a Google Home-compatible product—but a word of advice: Setup requires a QR code included in the box. Lose it and you’ll have to undo all of your physical installation work to get at the same QR code on the back of the doorbell itself.

    Once set up, it works like most other video doorbells. You’ll get notifications when someone presses its button, or when the Nest Doorbell detects the sorts of objects—people, pets, and vehicles—you’ve set it to notify you about. Unfortunately, you’ll need a subscription if you want those notifications to feature a zoomed-in preview of whatever triggered the recording, as well as for package detection. Seems stingy, but I guess thumbnail images and machine-learning cardboard box recognition don’t grow on trees?


    Despite those omissions, Google is more generous with free features for the Nest Doorbell than the Nest Cam Outdoor. It works with existing mechanical and digital chimes, for instance, and if you don’t have a functioning chime (like me) then you’ve also got the option to use Google’s smart speakers or displays. They can be configured to announce when someone has rung your doorbell and—in the case of the Google Nest Hub or Hub Max—start streaming the camera’s live feed. Through the display you can also chat with the person who rang your doorbell or, if you’re not into chatting, pick an automated response such as one telling a delivery person to leave the payload there.

    In testing, my second-generation Nest Hub was fairly quick to announce that someone had pressed the button, and chatting back and forth with them was easy enough. The only problem was that I had to deal with the Nest Hub itself, which has an interface that’s absolutely sluggish in 2025. Still, it’s a cool integration. Now, if only I could get it to do this on the Google TV-equipped OLED TV in my basement.

    And that’s it for the Nest Doorbell, sans subscription. There’s no local recording, although Google did bump the amount of time it’ll keep a recorded event on its servers from a scant three hours in the previous Nest Doorbell to a still-meager six hours. Either way, it’s paltry compared to the free local storage offered for video doorbells from the likes of Eufy, Reolink, Blink, and Aqara.

    AI works better on the doorbell camera

    [Image: The Nest Doorbell in the Google Home app. © Screenshots by Wes Davis / Gizmodo]

    If you want more out of the Nest Doorbell, you’ll have to pay for a $10 or $20 per month Google Home Premium subscription. That’ll give you more cloud video storage history—to the tune of 30 days or 60 days, respectively, with the latter also adding 10 days of 24/7 recording that you can search using Gemini.

    The lower Standard tier also gets you facial recognition, package detection, and alerts if one of your Google Home devices hears glass breaking or smoke alarms. Those features, as well as local storage, are all things the Reolink Elite I recently reviewed offers for free. In fact, the only thing this subscription nets you that you can’t get with a lot of other cameras is a feature called “Help me create,” which lets you create automations by describing them in a text box in the Google Home app. It worked well for creating simple automations, although one thing that bothers me is that if you ask it to do something that Google Home’s automations aren’t capable of, Gemini won’t tell you that. It’ll just deliver a non-functioning automation.

    Eventually, the Standard plan will also include a wide rollout of Gemini to smart speakers. That includes features like Gemini Live, Google’s LLM-powered assistant’s back-and-forth voice chatting feature. As of this review, it’s best to hold off on the subscription if you want access to Gemini on your speakers, as that’s only available to some in early access.

    You have to subscribe to the $20/month Google Home Premium Advanced plan to get the headlining AI camera features like daily summaries and AI-created notifications for events. You can read a lot more about my issues with these features over in my review of the Nest Cam Outdoor, but to summarize: Google’s AI system has a tendency to misinterpret what’s happening in front of it, confidently misidentifies animals, and its summaries often describe a person coming and going in a way that makes it seem like I’m having a house party every day.

    That said, the system seems more accurate in the context of a video doorbell, perhaps because the camera is closer to the ground and can see what’s in front of it more clearly. Or maybe it’s just because what happens in front of my house is a lot more routine than in the backyard—it’s not trying to make sense of dogs going in and out or people doing yardwork or taking out the trash. Gemini still called my cat a dog sometimes, but it accurately called out when most packages were delivered and even noted that one was from Amazon.

    These features are slick when they work, and—again—like I said in my Nest Cam Outdoor review, they’re a clear technological leap forward for home security cameras. But Google’s AI descriptions are still wrong often enough that it’s like paying $20 a month to beta test, and that just doesn’t feel good to me. Heck, even when they aren’t flat wrong, they’re not much more useful than the generic, non-AI descriptors of “Person,” “Person with Package,” or “Activity or animal” of the subscription-free experience. Also, AI video search might be very cool, but as the Reolink Elite shows, you can get similar AI search from an on-device AI model. Like with local video storage, it feels like Google could make a camera with on-device AI search for free, and just didn’t do it because, well, more money via subscriptions is better than less money without them.

    Good buy if you’re all-in on Google Home

    [Image: Google Nest Doorbell (wired, 3rd gen). © Wes Davis / Gizmodo]

    The Google Nest Doorbell (wired, 3rd gen) serves a pretty specific niche—people heavily invested in the Google Home ecosystem—very well. If you have a home full of Google Nest speakers and smart displays and you love using Google Gemini for things, you’ll probably like the Nest Doorbell. And if you’re already paying for a spendy Google Home Premium plan and don’t have a Nest Doorbell or you’ve only got the first-generation model, it’s a no-brainer.

    But for anyone else, the Nest Doorbell isn’t meaningfully useful on its own, and the Google Home Premium subscription is a raw deal at a time when your weary dollar won’t go as far as it used to. It’s hard to feel good about paying $20 a month for useless AI summaries, or for AI-written notifications that are only slightly more helpful than generic “person spotted” alerts, when I’m canceling streaming services to save money. I’d much rather buy one of the many cheaper alternative video doorbells that offer local video storage and reactivate my Netflix account for a couple more months with the money I saved.


    Wes Davis

  • A decision about breaking up Google’s adtech monopoly is on the horizon


    Google made its final arguments in a longstanding case brought by the US Department of Justice over whether it has to split up its ad tech business. The judge presiding over the case may be looking to wrap it up before Google has a chance to appeal, according to a report from Reuters.

    On Friday, both sides made their closing statements in the lawsuit, in which the Justice Department accused the tech giant of illegally monopolizing the ad tech market. US District Court Judge Leonie Brinkema ruled in April that Google held a monopoly in the online adtech space, and she recently asked the Justice Department how quickly a remedy could go into effect, adding that “time is of the essence.”

    Google’s attorney, Karen Dunn, argued that forcing Google to sell its advertising tech subsidiary would be extreme and hurt customers in the process, according to the report. Google is also reportedly planning to appeal the latest decision. According to Reuters, Brinkema noted that any sort of remedy “most likely would not be as easily enforceable while an appeal is pending,” meaning that Google could delay the forced sale until the appeal is concluded. At the same time, Google is facing a $3.5 billion fine for violating the European Union’s antitrust laws within the adtech industry.

    Jackson Chen

  • Gear News of the Week: Matter 1.5 Adds Smart Home Camera Support, and Gemini Comes to Android Auto


    The promise of interoperability for your smart home gadgets that Matter was supposed to bring has been a slow process, but it is starting to deliver, and the addition of cameras in the 1.5 release may be its biggest win yet. The Connectivity Standards Alliance (CSA) says the latest release supports all kinds of cameras, so we’re talking indoor security cameras, outdoor security cameras, video doorbells, baby monitors, and pet cameras.

    This could vastly improve a seriously fractured landscape, enabling you to easily add and access your cameras on whatever platform you choose. It’s also something that can potentially be delivered in a software update, so some of the cameras you already own might get Matter support.

    You may be worrying about limitations, but the supported feature list is impressive, including video and audio streaming, two-way communication, local and remote access, multiple streams, pan-tilt-zoom controls, and both detection and privacy zones. There’s also support for continuous or event-based recording, either locally or to the cloud. What it won’t handle is how that storage is managed, meaning some camera manufacturers will still require you to use their cloud-based subscription models.

    Pleasingly, there are no limitations on resolution, unlike Apple HomeKit Secure Video, or restrictions on AI detection features. Matter is using WebRTC technology, with remote access handled via the STUN and TURN protocols, meaning that manufacturers can choose to implement end-to-end encryption for footage. TCP transport support is designed to allow more efficient and reliable transmission of lots of data, like video cameras produce, which should reduce the load on your Wi-Fi and the impact on camera battery life.

    While this is very exciting news and the potential backwards compatibility is laudable, there’s no telling when you’ll see it in a camera in your home. The big three (Apple, Amazon, and Google) have yet to announce any plans to adopt Matter in their cameras.

    Matter 1.5 isn’t just about cameras, though—it also revamps support for closures, from garage doors to smart window shades, allowing for different motion types and configurations. There’s soil sensor support, too, to measure moisture and temperature and potentially trigger Matter-based water valves and irrigation systems.

    Enhanced energy management features are the final addition. Matter 1.5 enables devices to exchange data on energy pricing, tariffs, and grid operation, enabling you to potentially get a picture of the true cost of your gadgets in energy usage, cost, and carbon impact. EV charging has also been bolstered, with state-of-charge reporting and bi-directional charging that could enable vehicle-to-grid schemes in the future.

    While the Matter 1.5 spec is now available, it will take developers a while to adopt it and get their devices certified by the CSA. Expect some announcements at CES 2026. —Simon Hill

    Google’s Gemini Rolls Out on Android Auto

    Google has been gradually replacing its long-lived Google Assistant with the souped-up Gemini AI chatbot on all its platforms for the past year. After deploying it on its Wear OS smartwatches and, more recently, adding it directly to Google Maps, the company is bringing it to Android Auto. Google says the rollout will take place over the coming months for any Android Auto users who have upgraded from Google Assistant to Gemini on their phones.

    Julian Chokkattu

  • Google Adds Gemini AI-Assistant to Android Auto


    Google is starting to add its Gemini AI assistant to all of its products, and now, Android Auto is getting the chatbot treatment. 

    The company announced on November 20 that Android Auto, which it says is available in more than 250 million vehicles on the road, will support Gemini for Android phone users who have upgraded from the previous Google Assistant system. Gemini has been rolling out to Android smartphones since last year, including the Google Pixel, certain Samsung Galaxy models, and the Galaxy Z Fold 7 and Z Flip 7.

    Launched in 2023 as Bard and renamed Gemini in 2024, the assistant has conversational abilities that Google says are suited to in-car use: it can get directions to categories of destinations rather than just a specific location, and send composed messages rather than canned responses. It also integrates with apps such as Gmail, Google Calendar, Google Tasks, Samsung Calendar, Samsung Reminder, and Samsung Notes. The company says other third-party apps will be supported in the future.

    Google is also rolling Gemini out to vehicles with Google Built-In as it replaces the old Google Assistant. In May, Volvo was announced as the first automotive brand to receive Gemini, with 2026 models shipping with a revised Google Automotive OS-based infotainment system. Polestar, an all-electric brand in which Volvo owns a minority stake, announced this week it would add Gemini to its 2026 vehicles through a future software update.

    Last month, General Motors said it would add Google Gemini to its vehicles that use Android Automotive-based infotainment systems in 2025, including its electric vehicles such as the Chevrolet Equinox EV and Cadillac Escalade IQ. However, the automaker said it was also developing its own AI software for future models.

    Zac Estrada

  • Google Exec Claims Company Needs to Double Its AI Serving Capacity ‘Every Six Months’: Report


    Tech companies are racing to build out their infrastructure as their increasingly resource-intensive AI products gobble up capacity, clean out chipmakers’ supply, and require more power. Google, once dubbed the “King of the Web,” is one of those companies, and a high-level exec for The Big G is reported to have told staff that the company needs to scale up its serving capabilities exponentially if it wishes to keep up with the demand for its AI services.

    CNBC got its hands on a recent presentation given by Amin Vahdat, VP of Machine Learning, Systems, and Cloud AI at Google. The presentation includes a slide on “AI compute demand” that asserts that Google “must double every 6 months…. the next 1000x in 4-5 years.”
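    The slide’s numbers are consistent with simple compounding: doubling every six months means two doublings per year, which reaches roughly 1,000x only at the five-year end of the stated range. A quick sketch of the arithmetic (an illustration, not Google’s internal model):

```python
# Compounding check for "must double every 6 months ... the next 1000x in 4-5 years".
def capacity_multiple(years: float) -> float:
    """Growth factor if capacity doubles every 6 months (two doublings per year)."""
    return 2 ** (2 * years)

for years in (4, 4.5, 5):
    print(f"{years} years -> {capacity_multiple(years):.0f}x")
# 4 years -> 256x
# 4.5 years -> 512x
# 5 years -> 1024x
```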

    “The competition in AI infrastructure is the most critical and also the most expensive part of the AI race,” Vahdat reportedly said at the all-hands meeting where the presentation took place. Google’s “job is of course to build this infrastructure, but it’s not to outspend the competition, necessarily,” he added. “We’re going to spend a lot,” he said, in an effort to create AI infrastructure that is “more reliable, more performant and more scalable than what’s available anywhere else.”

    Since CNBC’s story was published, Google has quibbled with the reporting. While CNBC originally quoted Vahdat as saying that the company would need to “double” its compute capacity every six months, a Google spokesperson told Gizmodo that the executive’s words were taken out of context. The spokesperson further explained that Vahdat “was not talking about a capital buildout of anything approaching the magnitude suggested. In reality, he simply noted that demand for AI services means we are being asked to provide significantly more computing capacity, which we are driving through efficiency across hardware, software, and model optimizations, in addition to new investments.” 

    CNBC has since updated its reporting from “compute” to “serving” capacity. Serving capacity refers to Google’s ability to handle a rising tide of user requests, while compute capacity would refer to the company’s overall infrastructure dedicated to AI, including what is needed to train new models and other expenditures. When asked for further clarification about the difference between the two, the spokesperson said that the original headline “read as if he was implying that we are doubling the amount of compute we have — either measured by the # of chips we operate or the amount of MW of electricity.” Instead, “the capacity increases Amin described will be reached in a number of ways, including new more capable chips and model efficiency and optimization,” they added.

    Whatever’s happening under the hood, it would appear that Google—like its competitors—needs to scale up its operations to support its nascent AI infrastructure business. Vahdat’s comments come not long after the tech giant reported some chunky profits from its Cloud business, with the company announcing it plans to ramp up spending in the coming year.

    During his presentation, Vahdat also reportedly claimed that Google needs to “be able to deliver 1,000 times more capability, compute, storage networking [than its competitors] for essentially the same cost and increasingly, the same power, the same energy level.” He admitted that it “won’t be easy” but said that “through collaboration and co-design, we’re going to get there.”

    The race to build data centers—or “AI infrastructure” as the tech industry calls it—is getting crazy. Like Google, Microsoft, Amazon, and Meta all claim they are going to ramp up their capital expenditures in an effort to build out the future of computing (cumulatively, Big Tech is expected to spend at least $400 billion in the next twelve months). As these facilities go up, they are causing all sorts of drama in the communities where they reside. Environmental and economic concerns abound. Some communities have begun to protest data center projects—and, in some cases, they’re successfully repelling them. Still, given the sheer amount of money invested in this industry, it will be an ongoing fight for Americans who don’t want the AI colossus in their backyards.

    Lucas Ropek

  • Google Just Put a Massive Crack in Apple’s Walled Garden and It’s Good News For Everyone


    I like my iPhone. I currently use an iPhone 17 Pro Max, and it’s great. It has great cameras, a great display, and it has more than enough power inside for any of the things I want to do, which usually means some combination of taking photos, responding to email and Slack, and looking up random medical questions on ChatGPT.

    Of course, I can do all of those things on really any smartphone. The reason I love the iPhone has very little to do with the camera or the chip. For me, as for millions of iPhone users, it’s the fact that things “just work,” especially across my other Apple devices: my Mac, Apple Watch, and iPad.

    Maybe the best example of this is AirDrop. When you think about it, the fact that you can just beam photos or files from one device to another is the result of an extraordinarily complex set of technologies. For the user, however, it’s incredibly simple. It just works.

    AirDrop is one of those features you don’t think about until you use a device that doesn’t have it. Then you realize how much friction it quietly removes from your life. It’s not fancy, but it’s pure Apple: invisible until the exact moment you need it, and then absolutely effortless.

    That’s why Google’s announcement this week is so surprising. For the first time, the Pixel 10 can send files directly to an iPhone using Apple’s own AirDrop system. It isn’t some convoluted workaround or some cloud-based link. It’s basically AirDrop, but between Android and Apple. And Google did it without Apple’s help at all.

    AirDrop is Apple at its best

    The magic of AirDrop isn’t that it exists. Dozens of file-sharing protocols exist. The magic is that it works everywhere, instantly, and without any setup. Take a photo, tap share, choose a device, and it just appears. No accounts. No pairing. No QR codes. No asking whether the other person has the same app. No converting file formats. No compression.

    That level of simplicity is extremely difficult to engineer, and even harder to replicate across different hardware and software. It’s also one of Apple’s purest “it just works” moments—something the company does better than almost anyone else.

    And because Apple controls the hardware, software, radios, and protocols, AirDrop has always been strictly an Apple-to-Apple feature. That exclusivity turned AirDrop into one of Apple’s most interesting lock-in advantages. In fact, I think you can argue that AirDrop is far more powerful, in practice, than people give it credit for. If you’ve ever tried to send a video from an Android phone to a Mac user over text message, you understand.

    Which is why what Google pulled off here is a big deal.

    How Google made this work

    On paper, what Google did looks almost impossible. AirDrop isn’t documented publicly. The protocol isn’t designed to accept devices that aren’t signed and trusted within Apple’s ecosystem. And Google says that Apple wasn’t involved in making this happen; its engineers figured it out anyway. Pixel-to-iPhone transfers now behave almost exactly the way AirDrop normally does.

    [Image courtesy of Google]

    The short version is that Google effectively built its own compatible implementation of the underlying AirDrop discovery and transfer behavior. It uses the same kinds of signals—Bluetooth LE for discovery, peer-to-peer Wi-Fi for the actual transfer—and wraps it in a security-hardened layer that Apple devices are willing to talk to.

    Google rewrote major portions of the logic in Rust, submitted it to independent security testing, and ensured that everything happens entirely on-device. There is no cloud service or servers involved, and Google isn’t collecting any data. It’s just one device sending bits directly to another.
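    The pattern described above (advertise on one channel, then push bytes directly to the peer, with no server in between) can be sketched in miniature. The sketch below is purely illustrative: Google’s actual implementation is not public, so the names and structure here are invented, with a plain dict standing in for the BLE advertisement and a loopback TCP socket standing in for the peer-to-peer Wi-Fi link.

```python
import socket
import threading

def advertise(port: int) -> dict:
    # Stand-in for a BLE advertisement: just enough info for a peer to find us.
    return {"name": "Pixel-demo", "endpoint": ("127.0.0.1", port)}

def receive_once(listener: socket.socket, inbox: list) -> None:
    # Accept one direct connection and collect the whole payload.
    conn, _ = listener.accept()
    chunks = []
    while chunk := conn.recv(4096):
        chunks.append(chunk)
    conn.close()
    inbox.append(b"".join(chunks))

def send_file(advert: dict, payload: bytes) -> None:
    # Device-to-device transfer: connect straight to the advertised
    # endpoint; no cloud relay sits in the middle.
    with socket.create_connection(advert["endpoint"]) as s:
        s.sendall(payload)

# Demo over loopback: the "receiver" advertises, the "sender" discovers and sends.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
advert = advertise(listener.getsockname()[1])

inbox: list = []
t = threading.Thread(target=receive_once, args=(listener, inbox))
t.start()
send_file(advert, b"photo-bytes")
t.join()
listener.close()
print(inbox[0])  # b'photo-bytes'
```

    The hard part of the real system is everything this sketch omits: mutual authentication, encryption, and convincing an unmodified Apple device to trust the other end.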

    There is one catch: to receive files from a Pixel, an iPhone must temporarily be set to “Everyone for 10 Minutes,” Apple’s AirDrop visibility mode that loosens the usual “contacts only” restrictions. It’s not quite as seamless as Apple-to-Apple sharing, but it’s surprisingly close, and assuming Apple doesn’t make a change to nuke this capability, it’s a win for everyone.

    Of course, because Apple didn’t formally approve this, the company could break it at any time through a protocol change. And historically, Apple hasn’t been shy about doing exactly that when it believes a feature threatens security, privacy, or its overall user experience.

    This is good for everyone, including Apple

    Here, however, the risk is different. If Apple shuts this down, it won’t look like it’s protecting users. It will look like it’s protecting its walled garden and taking away a capability that genuinely makes using an iPhone better.

    The reality is, people who use iPhones don’t only know other people who use iPhones. I recently talked to a couple in which the wife uses an iPhone and the husband has a Pixel. For them, this is the kind of thing that will make sharing photos of their children infinitely easier.

    AirDrop is great because it’s useful and removes friction. And frictionless experiences are more valuable when they work for everyone, not just for the people who bought a specific piece of hardware.

    Apple already knows this. It’s why Messages is adopting RCS. It’s why Apple brought Apple TV to smart TVs. It’s why Apple Music ships on Android. Even Apple—the world’s most successful walled garden—understands there are moments when expanding the garden is better than adding more walls.

    This is also smart for Google in that it positions the Pixel 10 as the Android phone to get if you want to reduce friction with the iPhone users in your life. That’s a powerful competitive advantage that shouldn’t go overlooked.

    Make the experience better for everyone

    There’s a broader takeaway here that applies far beyond smartphones:

    AirDrop is the kind of feature people love because it solves a real problem in the simplest of ways. People want things that reduce friction to exist everywhere. If you won’t provide that interoperability yourself, someone eventually will—whether it’s a competitor, a regulator, or an enterprising engineer on a deadline.

    Google didn’t beat Apple by creating a replacement for AirDrop. It beat Apple, at least temporarily, by making AirDrop more useful. That should get Apple’s attention—not because it undermines the iPhone, but because it reinforces what made the iPhone successful in the first place.

    The opinions expressed here by Inc.com columnists are their own, not those of Inc.com.


    [ad_2]

    Jason Aten

    Source link

  • Google starts testing ads in AI Mode

    [ad_1]

    Google has started inserting ads into query results from its AI Mode, a change first spotted by SEO consultant Brodie Clark. These ads show up at the bottom of search results in the Gemini-powered AI Mode. They are labeled as “sponsored” content, but otherwise look similar to other links whipped up by the chatbot.

    Google says this is just a test and that ads shouldn’t be showing up for all users. The company also told 9to5Google that there are no current plans to fully update AI Mode to incorporate ads. Those are nice words, but AI has to make money somehow and ads seem to be a good way to do that.

    For now, the software seems to be prioritizing organic links over sponsored links, but we all know how insidious ads can be once the floodgates open. AI presents an especially slippery slope here, as these chatbots are often advertised as personal assistants. I don’t really want a personal assistant barking ads at me 24/7.

    Incidentally, there doesn’t seem to be any way to hide the aforementioned sponsored links; regular Google Search, by contrast, lets users hide sponsored results once they scroll past them.

    It sure looks like our free ride is already coming to an end, as AI companies are really speedrunning through that whole enshittification thing. X recently announced that it would be incorporating ads into query results.

    There are also rumors that OpenAI has been hiring people to turn ChatGPT into an ad platform. That company’s recently-launched AI social media slop factory Sora reportedly burns through $15 million a day generating videos of Sam Altman eating pizza in space or whatever.

    [ad_2]

    Lawrence Bonk

    Source link

  • Hands On With Google’s Nano Banana Pro Image Generator

    [ad_1]

    Corporate AI slop feels inescapable in 2025. From website banner ads to outdoor billboards, images generated by businesses using AI tools surround me. Hell, even the bar down the street posts happy hour flyers with that distinctly hazy, amber glow of some AI graphics.

    On Thursday, Google launched Nano Banana Pro, the company’s latest image-generating model. Many of the updates in this release are targeted at corporate adoption, from putting Nano Banana Pro in Google Slides for business presentations to integrating the new model with Google Ads for advertisers globally.

    This “Pro” release is an iteration on its Nano Banana model that dropped earlier this year. Nano Banana became a viral sensation after users started posting personalized action figures and other meme-able creations on social media.

    Nano Banana Pro builds out the AI tool with a bevy of new abilities, like generating images in 4K resolution. It’s free to try out inside Google’s Gemini app, with paid Google One subscribers getting access to additional generations.

    One specific improvement is going to be catnip for corporations in this release: text rendering. From my initial tests generating outputs with text, Nano Banana Pro improves on the wonky lettering and strange misspellings common in many image models, including Google’s past releases.

    Google wants the images generated by this new model—text and all—to be more polished and production-ready for business use cases. “Even if you have one letter off it’s very obvious,” says Nicole Brichtova, a product lead for image and video at Google DeepMind. “It’s kind of like having hands with six fingers; it’s the first thing you see.” She says part of the reason Nano Banana Pro is able to generate text more cleanly is the switch to a more powerful underlying model, Gemini 3 Pro.

    An example of how the tool can create a composite from multiple images.

    Courtesy of Google

    [ad_2]

    Reece Rogers

    Source link

  • Google steps up AI scam protection in India, but gaps remain | TechCrunch

    [ad_1]

    Google is bringing more AI muscle to India’s fight against digital fraud, rolling out on-device scam detection for Pixel 9 devices and new screen-sharing alerts for financial apps.

    Digital fraud continues to rise in India as more people come online for the first time and increasingly rely on smartphones for payments, shopping, and accessing government services. Fraud involving digital transactions accounted for more than half of all reported bank fraud in 2024 — 13,516 cases resulting in losses of ₹5.2 billion (about $58.61 million), according to the Reserve Bank of India (RBI). Online scams caused an estimated ₹70 billion (roughly $789 million) in losses in the first five months of 2025, the Ministry of Home Affairs said. Many incidents likely go unreported, either because victims are unsure how to file a complaint or wish to avoid additional scrutiny.

    On Thursday, Google announced the expansion of its real-time scam-detection feature, which uses Gemini Nano to analyze calls on-device and flag potential fraud without recording audio or sending data to Google’s servers. The feature is off by default and applies only to calls from unknown numbers, and it plays a beep during the conversation to notify participants. It debuted in the U.S. in March as a beta for English-speaking Pixel 9 users.
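    To give a rough sense of what “analyzing calls on-device” means in practice, the toy Python heuristic below scans a locally held call transcript for common scam phrases and surfaces only a flag, so no audio or text ever leaves the device. This is a hypothetical illustration, vastly simpler than the Gemini Nano model Google actually uses.

```python
# Toy on-device scam heuristic -- NOT Google's Gemini Nano model.
# Illustrates the privacy property: analysis happens locally, and only
# a warning flag (never the audio or transcript) is surfaced to the user.
SCAM_PATTERNS = [
    "gift card",
    "wire transfer",
    "act now",
    "one-time password",
    "your account will be suspended",
]

def flag_transcript(transcript: str) -> list[str]:
    """Return the suspicious phrases found in a locally held transcript."""
    text = transcript.lower()
    return [p for p in SCAM_PATTERNS if p in text]

hits = flag_transcript(
    "Your account will be suspended unless you act now and "
    "read me the one-time password we just sent."
)
print(hits)  # ['act now', 'one-time password', 'your account will be suspended']
```

    A real model obviously goes far beyond keyword matching, but the shape of the feature is the same: input stays on the handset, and only the verdict reaches the screen.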

    Google confirmed to TechCrunch that its on-device scam detection will initially work only on Pixel 9 and later models in India and will be limited to English-speaking users, with its warning also English only. That restricts its reach in a market where Android accounts for nearly 96% of smartphones, per Statcounter, but Pixel devices held less than 1% share in 2024. The language limitation is also notable in a country where most users primarily rely on non-English languages — an audience that Google and others like Amazon have acknowledged by adding support for Indian languages across their services in recent years.

    Image Credits: Google

    The tech giant did say it was working to bring scam detection to non-Pixel Android phones, as well, without offering a timeline.

    Google also announced a pilot in India with financial apps Navi, Paytm, and Google Pay aimed at limiting screen-sharing scams, in which fraudsters persuade victims to share their screens to obtain one-time passwords, PINs, and other credentials during a call. The feature was first announced at Google I/O in May and initially tested in the U.K.

    Users with devices running Android 11 or later will be able to access the alerts, which include a one-tap option to end the call and stop screen sharing. Google confirmed to TechCrunch that it plans to add more app partners and the feature will display alerts in Indian languages as well but did not provide details.


    For several months, Google has also been using its Play Protect service to restrict predatory loan apps in India by blocking the sideloading of third-party apps that request sensitive permissions often exploited for fraud. The company said the service blocked more than 115 million such installation attempts this year. Google Pay, meanwhile, surfaces more than a million warnings each week for transactions flagged as potentially fraudulent, according to the company.

    Google is also running its DigiKavach awareness campaign on digital fraud, which it said has reached more than 250 million people. The company has worked with the Reserve Bank of India to publish a public list of authorized digital lending apps and their associated non-banking financial companies to help limit malicious actors.

    Earlier this year, Google launched a Safety Charter in India to expand its AI-driven fraud detection and security efforts, part of a broader plan to deploy more AI tools in the country to address rising fraud.

    Yet Google still faces significant gaps in curbing digital fraud in India. The company — like Apple — has been questioned for allowing fake and misleading apps to appear on its app store despite review processes meant to block fraudulent submissions.

    In recent years, police and security researchers have flagged investment and loan apps used in scams that remained available on the Play Store until intervention. These cases underscore the challenges Google faces in policing an ecosystem that dominates the country’s smartphone market.

    [ad_2]

    Jagmeet Singh

    Source link

  • Got a Pixel 10? Google’s Android Phone Can Now Share Files With Apple’s AirDrop

    [ad_1]

    The caveat is that the iPhone user will need to switch AirDrop into the “Everyone for 10 Minutes” mode instead of “Contacts Only” mode. Google says this isn’t some kind of workaround solution. It’s a direct, peer-to-peer connection; your data isn’t routed through a server, shared content isn’t logged, and no extra data is shared. Naturally, iPhone owners will be able to send data back to Pixel 10 phones as well.

    Google has not worked with Apple on this cross-compatibility, though the company says it “welcomes the opportunity” to work with Apple so that this sharing function can work in the Contacts Only mode. “We accomplished this through our own implementation,” a Google spokesperson tells WIRED. “Our goal is to provide an easy and secure file-sharing experience for our users, regardless of who they are communicating with.”

    In a security blog post, Google says the underlying strategy for what makes this new synergy between Quick Share and AirDrop work is the memory-safe Rust programming language. “These overlapping protections on both platforms work in concert with the secure connection to provide comprehensive safety for your data when you share or receive,” writes Dave Kleidermacher, vice president of Google’s platforms security and privacy.

    Google tapped NetSPI, a third-party and independent penetration testing firm, to validate the security of the new sharing feature. The findings? The interoperability is “notably stronger” than other industry implementations. That’s pretty important, considering what happened the last time someone tried to improve cross-compatibility between iOS and Android without Apple: the startup Beeper tried to make texts from Android phones show up as blue bubbles on iPhones and caused all kinds of drama.

    The number of people who can actually use this feature is limited because it’s only available on Google’s latest Pixel 10 smartphones, which just launched this past August. However, Google says it’s looking to expand the feature to more Android devices in the future.

    This new feature in Quick Share is rolling out starting today to the Pixel 10 series, which includes the Google Pixel 10, Pixel 10 Pro, Pixel 10 Pro XL, and Pixel 10 Pro Fold. Because the rollout is gradual, you may not see it immediately on your device. To use it, all you need to do is select something to share, whether it’s a file, contact, or photo, choose Quick Share in the sharing menu, and make sure the iPhone owner has their AirDrop set to “Everyone for 10 Minutes.” The iPhone will then be able to see the Pixel 10 device and can receive or send data.

    [ad_2]

    Julian Chokkattu

    Source link

  • Are tech companies training their AI with private data?

    [ad_1]

    Leading tech companies are in a race to release and improve artificial intelligence products, leaving U.S. users to puzzle out how much of their personal data could be extracted to train AI tools.

    Meta (which owns Facebook, Instagram, Threads and WhatsApp), Google and LinkedIn all have rolled out AI app features that have the capacity to draw on users’ public profiles or emails. Google and LinkedIn offer users ways to opt out of the AI features, while Meta’s AI tool provides no means for its users to say no thanks.

    “Gmail just flipped a dangerous switch on October 10, 2025 and 99% of Gmail users have no idea,” a Nov. 8 Instagram post said. 

    Posts warned the platforms’ AI tool rollouts make most private information available for tech company harvesting. “Every conversation, every photo, every voice message, fed into AI and used for profit,” a Nov. 9 X video about Meta said. 

    Technology companies are rarely fully transparent when it comes to the user data they collect and what they use it for, Krystyna Sikora, a research analyst for the Alliance for Securing Democracy at the German Marshall Fund, told PolitiFact.

    “Unsurprisingly, this lack of transparency can create significant confusion that in turn can lead to fear mongering and the spread of false information about what is and is not permissible,” Sikora said.

    The best — if tedious — way for people to know and protect their privacy rights is to read the terms and conditions, since they often explicitly outline how the data will be used and whether it will be shared with third parties, Sikora said. The U.S. doesn’t have any comprehensive federal laws on data privacy for technology companies.

    Here’s what we learned about how each platform’s AI is handling your data:

    Meta

    Social media claim: “Starting December 16th Meta will start reading your DMs, every conversation, every photo, every voice message fed into AI and used for profit.” — Nov. 9 X post with 1.6 million views as of Nov. 19.

    The facts: Meta announced a new policy to take effect Dec. 16, but that policy alone does not result in your direct messages, photos and voice messages being fed into its AI tool. The policy involves how Meta will customize users’ content and advertisements based on how they interact with Meta AI. 

    For example, if a user interacts with Meta’s AI chatbot about hiking, Meta might start showing that person recommendations for hiking groups or hiking boots.

    But that doesn’t mean your data isn’t being used for AI purposes. Although Meta doesn’t use people’s private messages in Instagram, WhatsApp or Messenger to train its AI, it does collect user content that is set to “public” mode. This can include photos, posts, comments and reels. If the user’s Meta AI conversations involve religious views, sexual orientation and racial or ethnic origin, Meta says the system is designed to avoid parlaying these interactions into ads. If users ask questions of Meta AI using its voice feature, Meta says the AI tool will use the microphone only when users give permission.

    There is a caveat: The tech company says its AI might use information about people who don’t have Meta product accounts if their information appears in other users’ public posts. For example, if a Meta user mentions a non-user in a public image caption, that photo and caption could be used to train Meta AI.

    Can you opt out? No. If you are using Meta platforms in these ways — making some of your posts public and using the chatbot — your data could be used by Meta AI. There is no way to deactivate Meta AI in Instagram, Facebook or Threads. WhatsApp users can deactivate the option to talk with Meta AI in their chats, but this option is available only per chat, meaning that you must deactivate the option in each chat’s advanced privacy settings.

    The X post inaccurately advised people to submit this form to opt out. But the form is simply a way for users to report when Meta’s AI supplies an answer that contains someone’s personal information.

    David Evan Harris, who teaches AI ethics at University of California, Berkeley, told PolitiFact that because the U.S. has no federal regulations about privacy and AI training, people have no standardized legal right to opt out of AI training in the way that people in countries such as Switzerland, the United Kingdom and South Korea do.

    Even when social media platforms provide opt out options for U.S. customers, it’s often difficult to find the settings to do so, Harris said. 

    Deleting your Meta accounts does not eliminate the possibility of Meta AI using your past public data, Meta’s spokesperson said.

    Google

    Social media claim: “Did you know Google just gave its AI access to read every email in your Gmail — even your attachments?”  — Nov. 8 Instagram post with more than 146,000 likes as of Nov. 19.

    The facts: Google has a host of products that interact with private data in different ways. Google announced Nov. 5 that its AI product, Gemini Deep Research, can connect to users’ other Google products, including Gmail, Drive and Chat. But, as Forbes reported, users must first give permission to employ the tool.

    Users who want to allow Gemini Deep Research to have access to private information across products can choose what data sources to employ, including Google search, Gmail, Drive and Google Chat.

    There are other ways Google collects people’s data:

    • Through searches and prompts in Gemini apps, including its mobile app, Gemini in Chrome or Gemini in another web browser

    • Through any video or photo uploads the user entered into Gemini

    • Through interactions with apps such as YouTube and Spotify, if users give permission

    • Through messaging and phone call apps, including call logs and message logs, if users give permission

    A Google spokesperson told PolitiFact the company doesn’t use this information to train AI when registered users are under age 13. 

    Google can also access people’s data when they have smart features activated in their Gmail and Google Workspace settings (which are on by default in the U.S.); these give Google consent to draw on email content and user activity data to help users compose emails or suggest Google Calendar events. With optional paid subscriptions, users can access additional AI features, including in-app Gemini summaries.

    Turning off Gmail’s smart features can stop Google’s AI from accessing Gmail, but it doesn’t stop Google’s access on the Gemini app, which users can either download or access in a browser.

    (Screenshot shows a permission pop-up that appeared in the Gemini app after a PolitiFact reporter asked Gemini to summarize an email. Gemini asked permission to access that email.)

    A California lawsuit accuses Gemini of spying on users’ private communications. The lawsuit says an October policy change gives Gemini default access to private content such as emails and attachments in people’s Gmail, Chat and Meet. Before October, users had to manually allow Gemini to access the private content; now users must go into their privacy settings to disable it. The lawsuit claims the Google policy update violates California’s 1967 Invasion of Privacy Act, a law that prohibits unauthorized wiretapping and recording confidential communications without consent.

    Can you opt out? If people don’t want their conversations used to train Google AI, they can use “temporary” chats or chat without signing into their Gemini accounts. Doing that means Gemini can’t save a person’s chat history, a Google spokesperson said. Otherwise, opting out of having Google’s AI in Gmail, Drive and Meet requires turning off smart features in settings.

    LinkedIn

    Social media claim: Starting Nov. 3, “LinkedIn will begin using your data to train AI.” — Nov. 2 Instagram post with more than 18,000 likes as of Nov. 19.

    The facts: LinkedIn, owned by Microsoft, announced on its website that starting Nov. 3, it will use some U.S. members’ data to train content-generating AI models. 

    The data the AI collects includes details from people’s profiles and public content users post.

    The training does not draw on information from people’s private messages, LinkedIn said.

    LinkedIn also said, aside from the AI data access, Microsoft started receiving information about LinkedIn members — such as profile information, feed activity and ad engagement — as of Nov. 3 in order to target users with personalized ads.

    Can you opt out? Yes. Autumn Cobb, a LinkedIn spokesperson, confirmed to PolitiFact that members can opt out if they don’t want their content used for AI training purposes. They can also opt out of receiving targeted, personalized ads.

    To stop your data from being used for training purposes, go to data privacy settings, click the option that says “Data for Generative AI Improvement” and then turn off the feature that says “use my data for training content creation AI models.”

    And to opt out of personalized ads, go to advertising data in settings, and turn off “Ads off LinkedIn” and the option that says “data sharing with our affiliates and select partners.”

    [ad_2]

    Source link

  • Android Quick Share now works with Apple’s AirDrop feature on Pixel 10 phones

    [ad_1]

    Count this as the latest unexpected detente between Apple and Google. Today, Google announced that the Pixel 10 series of phones can use Android Quick Share with the iPhone’s AirDrop feature, meaning it’ll be much easier to shoot files and photos between the two platforms. While this feature is currently limited to Pixel 10 series phones, Google says it is looking to expand the feature to other devices.

    Google dropped details on how it made this work from a privacy and security standpoint in its technical blog if you want to get into the nitty-gritty. But it certainly sounds as if Google did this on its own without any input from Apple. “We accomplished this through our own implementation,” Alex Moriconi from Google told Engadget. “Our implementation was thoroughly vetted by our own privacy and security teams, and we also engaged a third party security firm to pentest the solution.”

    But functionally, it sounds like this will work the same way Quick Share currently does. The owner of the receiving Apple device (this will work with iPads and Macs as well as iPhones) needs to set their AirDrop visibility preference to “Everyone for 10 Minutes.” This means that people outside of your contact list will be able to initiate an AirDrop or Quick Share transfer. From there, the Pixel 10 user should be able to see the receiving Apple device when they go to share things via Quick Share as normal.

    Google also notes that Android devices can receive files from Apple devices that are using AirDrop. They’ll just need to make sure their Quick Share visibility settings are similarly set to “everyone for 10 minutes” or that they’re in “receive” mode on the Quick Share page.

    It’s not clear if Apple was involved in making this new feature work or if Google did this all on its own. Apple hasn’t released a corresponding post in its own newsroom. If Apple wasn’t involved, the obvious question is whether or not it will treat this as a security breach and release a software update that undoes Google’s work. And if it does, it’s entirely possible that we’re going to head down another long road of the two companies bickering about security versus openness.

    We’ve reached out to Apple to get more details and will update this post if we learn anything.

    Update, November 20, 2025, 1:27PM ET: Added a statement from Google.

    [ad_2]

    Source link

  • Manage Android apps with the new ‘Uninstall’ button

    [ad_1]


    If you use more than one Android device with the same Google account, you know how messy things can get.

    Tracking which apps are installed on which phone or tablet can quickly become confusing. The Google Play Store already showed how many of your devices had a particular app, but uninstalling apps across multiple devices required digging through several menus.

    That’s changing now, thanks to Google’s latest Play Store update.

    Sign up for my FREE CyberGuy Report
    Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.


    A new Play Store update makes it easier to manage apps across all your Android devices. (Kurt “CyberGuy” Knutsson)

    The new uninstall button rolls out

    Google is rolling out version 48.8 of the Play Store, and it introduces a new ‘Uninstall’ button right on each app’s listing. You can now remove an app from any of your devices directly from your main phone. This eliminates the need to pick up each device and remove the app one by one. According to Android Authority and other reliable tech outlets, this feature appears beside each Android device listed under your account, making it faster to keep your devices organized and clutter-free.

    The update replaces the older process that required navigating through ‘Profile,’ then ‘Manage Apps and Devices,’ then applying a device filter before uninstalling. That long-winded method still works, but the new shortcut saves time and effort. The feature is rolling out gradually, so you might not see it right away, but it should appear soon as part of the stable update.

    Why this update matters

    For anyone juggling a phone, tablet or even a work device, this new feature makes a real difference. Over time, unused apps pile up, taking up storage space and slowing down performance. Being able to remove them remotely helps keep every device clean and efficient without switching between screens.

    The change also improves digital hygiene. Many people forget about apps on old phones that still have access to personal data or permissions. Now you can easily remove those apps before they become a privacy or security risk. The update also makes it simpler for parents managing family devices to stay in control of what’s installed on their kids’ phones.

    How to use the new uninstall button on Android 

    Settings may vary depending on your Android phone’s manufacturer. 

    • Open the Play Store app on your device.
    • Navigate to the listing of an app that you know is installed on another device signed in to your account.
    • Under the “Installed on X devices” section, you may see a new ‘Uninstall’ button next to each listed device.
    • Tap Uninstall next to the one you want to remove from your Android.

    • Then tap This Device to confirm.



    Steps to use the new uninstall button on Android. (Kurt “CyberGuy” Knutsson)

    How to uninstall Android apps when the new Play Store button isn’t showing

    Wait for the update to roll out if key features aren’t showing yet. If the button isn’t present, you can still uninstall an app with these steps:

    Settings may vary depending on your Android phone’s manufacturer.

    • Tap Profile.
    • Tap Manage apps & devices.
    • Tap Manage.
    • Use the device filter to select the target device.
    • Tap the app you want to uninstall.
    • Tap Uninstall.

    CLICK HERE TO DOWNLOAD THE FOX NEWS APP


    The new feature saves time and improves organization across a user’s Android devices. (Kurt “CyberGuy” Knutsson)

    What this means to you

    This feature saves time, improves organization and helps you keep your Android devices running smoothly. By uninstalling unused apps remotely, you free up valuable storage and reduce unnecessary background activity that can drain battery life. You also make your devices more secure by removing older apps that might not be receiving updates anymore. It’s a thoughtful update that shows how Google is paying attention to everyday usability rather than adding flashy new tools. Even if it seems like a small change, the impact adds up for people who live in a multi-device world.


    Kurt’s key takeaways

    The new ‘Uninstall’ button in Play Store version 48.8 is a quiet but powerful improvement for Android users. It makes it easier to manage your apps and maintain a cleaner digital environment across all your devices. Once this update reaches your phone, it’s worth exploring which apps you no longer need and removing them in just a few seconds.

    Do you plan to tidy up your devices using the new Google Play Store feature, or do you prefer to manage apps directly from each phone? Let us know by writing to us at Cyberguy.com.


    Copyright 2025 CyberGuy.com.  All rights reserved.

    [ad_2]

    Source link

  • Trump Takes Aim at State AI Laws in Draft Executive Order

    [ad_1]

    US President Donald Trump is considering signing an executive order that would seek to challenge state efforts to regulate artificial intelligence through lawsuits and the withholding of federal funding, WIRED has learned.

    A draft of the order viewed by WIRED directs US Attorney General Pam Bondi to create an “AI Litigation Task Force,” whose purpose is to sue states in court for passing AI regulations that allegedly violate federal laws governing things like free speech and interstate commerce.

    Trump could sign the order, which is currently titled “Eliminating State Law Obstruction of National AI Policy,” as early as this week, according to four sources familiar with the matter. A White House spokesperson told WIRED that “discussion about potential executive orders is speculation.”

    The order says that the AI Litigation Task Force will work with several White House technology advisors, including the Special Advisor for AI and Crypto David Sacks, to determine which states are violating federal laws detailed in the order. It points to state regulations that “require AI models to alter their truthful outputs” or compel AI developers to “report information in a manner that would violate the First Amendment or any other provision of the Constitution,” according to the draft.

    The order specifically cites recently enacted AI safety laws in California and Colorado that require AI developers to publish transparency reports about how they train models, among other provisions. Big Tech trade groups, including Chamber of Progress—which is backed by Andreessen Horowitz, Google, and OpenAI—have vigorously lobbied against these efforts, which they describe as a “patchwork” approach to AI regulation that hampers innovation. These groups are lobbying instead for a light-touch set of federal laws to guide AI progress.

    “If the President wants to win the AI race, the American people need to know that AI is safe and trustworthy,” says Cody Venzke, senior policy counsel at the American Civil Liberties Union. “This draft only undermines that trust.”

    The order comes as Silicon Valley has been upping the pressure on proponents of state AI regulations. For example, a super PAC funded by Andreessen Horowitz, OpenAI cofounder Greg Brockman, and Palantir cofounder Joe Lonsdale recently announced a campaign against New York Assembly member Alex Bores, the author of a state AI safety bill.

    House Republicans have also renewed their effort to pass a blanket moratorium on states introducing laws regulating AI after an earlier version of the measure failed.

    Maxwell Zeff, Makena Kelly

    Source link

  • The Google Sans Flex typeface is now available to download

    Typography nerds and Android fans, rejoice: You can now download an official version of “the next generation of Google’s brand typeface.” The company has released the Google Sans Flex font to the public for free.

    The variable sans-serif font is part of Google’s Material 3 design language, which arrived in 2023. 9to5Google notes that it’s since been integrated into many of the company’s products, including in some corners of Pixel software.

    A 2024 Google Design blog post about variable typography highlights the font’s flexibility. Casey Henry, a designer with the company, wrote that Google Sans Flex “allows the font’s letterforms to shape-shift at different scales.” OpenType Font Variations is the standard Google uses for variable fonts.

    Meanwhile, a Reddit thread about the download dove deeper into typography nerdery. “Interesting behaviour when you condense the width,” u/hbpencil102 wrote. “Instead of circles becoming ovals, they become more rectangular with rounded tops and bottoms, reminding me of DIN 1451.” Amen to that.

    You can download Google Sans Flex from Google Fonts.

    Source link

  • Gemini 3 Is Here—and Google Says It Will Make Search Smarter

    Google has introduced Gemini 3, its smartest artificial intelligence model to date, with cutting-edge reasoning, multimedia, and coding skills. As talk of an AI bubble grows, the company is keen to stress that its latest release is more than just a clever model and chatbot—it’s a way of improving Google’s existing products, including its lucrative search business, starting today.

    “We are the engine room of Google, and we’re plugging in AI everywhere now,” Demis Hassabis, CEO of Google DeepMind, an AI-focused subsidiary of Google’s parent company, Alphabet, told WIRED in an interview ahead of the announcement.

    Hassabis admits that the AI market appears inflated, with a number of unproven startups receiving multibillion-dollar valuations. Google and other AI firms are also investing billions in building out new data centers to train and run AI models, sparking fears of a potential crash.

    But even if the AI bubble bursts, Hassabis thinks Google is insulated. The company is already using AI to enhance products like Google Maps, Gmail, and Search. “In the downside scenario, we will lean more on that,” Hassabis says. “In the upside scenario, I think we’ve got the broadest portfolio and the most pioneering research.”

    Google is also using AI to build popular new tools like NotebookLM, which can auto-generate podcasts from written materials, and AI Studio, which can prototype applications with AI. It’s even exploring embedding the technology into areas like gaming and robotics, which Hassabis says could pay huge dividends in years to come, regardless of what happens in the wider market.

    Google is making Gemini 3 available today through the Gemini app and in AI Overviews, a Google Search feature that synthesizes information alongside regular search results. In demos, the company showed that some Google queries, like a request for information about the three-body problem in physics, will prompt Gemini 3 to automatically generate a custom interactive visualization on the fly.

    Robby Stein, vice president of product for Google Search, said at a briefing ahead of the launch that the company has seen “double-digit” year-over-year increases in queries phrased in natural language, which are most likely aimed at AI Overviews. The company has also seen a 70 percent spike in visual search, which relies on Gemini’s ability to analyze photos.

    Despite investing heavily in AI and making key breakthroughs, including inventing the transformer model that powers most large language models, Google was shaken by the sudden rise of ChatGPT in 2022. The chatbot not only vaulted OpenAI to center stage when it came to AI research; it also challenged Google’s core business by offering a new and potentially easier way to search the web.

    Will Knight

    Source link

  • A.I. Models Can Exhibit Human-Like Gambling Addiction Behaviors: Study

    Researchers warn that A.I. models’ irrational betting behaviors could matter as the technology moves deeper into finance. Sara Oliveira for Unsplash+

    Human gambling addiction has long been marked by behaviors like the illusion of control, the belief that a win will come after a losing streak, and attempts to recover losses by continuing to bet. Such irrational actions can also appear in A.I. models, according to a new study from researchers at South Korea’s Gwangju Institute of Science and Technology.

    The study, which has not yet been peer-reviewed, noted that large language models (LLMs) displayed high-risk gambling decisions, especially when given more autonomy. These tendencies could pose risks as the technology becomes more deeply integrated into asset management sectors, said Seungpil Lee, one of the report’s co-authors. “We’re going to use [A.I.] more and more in making decisions, especially in the financial domains,” he told Observer.

    To test A.I. gambling behavior, the authors ran four models—OpenAI’s GPT-4o-mini and GPT-4.1-mini, Google’s Gemini-2.5-Flash, and Anthropic’s Claude-3.5-Haiku—through simulated slot games. Each model started with $100 and could either continue betting or quit, while researchers tracked their choices using an irrationality index that measured factors such as betting aggressiveness, extreme betting and loss chasing.
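
    The setup described above can be sketched as a toy simulation. The snippet below is purely illustrative, not the authors’ code: the win probability, payout, betting policy, and the doubling-style loss-chasing rule are all assumed parameters, chosen only to show how a bankruptcy rate like the ones reported might be measured.

```python
import random

def run_session(start=100.0, win_prob=0.3, payout=3.0, base_bet=10.0,
                chase_factor=2.0, max_rounds=200, rng=None):
    """Simulate one slot-game session with a naive loss-chasing policy.

    The gambler raises the stake after each loss (capped at the bankroll)
    and resets to the base stake after a win; a session ends on bankruptcy
    or after max_rounds. Returns the final bankroll.
    """
    rng = rng or random.Random()
    bankroll, bet = start, base_bet
    for _ in range(max_rounds):
        bet = min(bet, bankroll)
        if bet <= 0:
            break                      # bankrupt: nothing left to stake
        bankroll -= bet
        if rng.random() < win_prob:
            bankroll += bet * payout   # win: collect the payout
            bet = base_bet             # reset to the base stake
        else:
            bet *= chase_factor        # loss chasing: raise the stake

    return bankroll

def bankruptcy_rate(n_sessions=1000, seed=42, **kwargs):
    """Fraction of sessions that end with an empty bankroll."""
    rng = random.Random(seed)
    busts = sum(run_session(rng=rng, **kwargs) <= 0 for _ in range(n_sessions))
    return busts / n_sessions
```

    With a negative expected value per spin (0.3 × 3.0 = 0.9) and an escalating stake, most simulated sessions go bust well before the round cap, which is the kind of outcome the study’s bankruptcy metric captures; flattening the loss-chasing rule (chase_factor=1.0) corresponds to a more conservative, lower-autonomy policy.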

    The results showed that all four LLMs experienced higher bankruptcy rates when given more freedom to vary their betting sizes and choose target amounts, but the degree varied by model—a divergence Lee said likely reflects differences in training data. Gemini-2.5-Flash had the highest bankruptcy rate at 48 percent, while GPT-4.1-mini had the lowest at just over 6 percent.

    The models also consistently displayed hallmarks of human gambling addiction, such as win chasing, in which gamblers keep betting because they view their winnings as “free money,” and loss chasing, in which they keep playing in an effort to recoup losses. Win chasing was especially common: across the LLMs, bet-increase rates rose from 14.5 percent to 22 percent during winning streaks, according to the study.

    Despite these parallels, Lee emphasized that important differences remain. “These kinds of results don’t actually reveal they are reasoning exactly in the manner of humans,” he said. “They have learned some traits from human reasoning, and they might affect their choices.”

    That doesn’t mean that the human-like tendencies are harmless. A.I. systems are increasingly embedded in the financial sector, from customer-experience tools to fraud detection, forecasting and earnings-report analysis. Of 250 banking executives surveyed by MIT Technology Review Insights earlier this year, 70 percent said they are using agentic A.I. in some form.

    Because gambling-like traits increase significantly when LLMs are granted more autonomy, the authors argue that this should be factored into monitoring and control mechanisms. “Instead of giving them the whole freedom to make decisions, we have to be more precise,” said Lee.

    Still, the prospect of developing completely risk-free models is unlikely, Lee added, noting that the challenge extends beyond A.I. itself. “It seems like even human beings are not able to do that.”

    Alexandra Tremayne-Pengelly

    Source link

  • Google plans to invest $40 billion towards building data centers in Texas

    Google is getting ready to spend $40 billion to increase its data center footprint in Texas. In an announcement posted on its website, Google said it’s planning to build more infrastructure for its cloud and artificial intelligence operations in the state. The plans call for three new data centers, one in Armstrong County and two in Haskell County, according to Google.

    According to a press release from Texas Governor Greg Abbott, this is Google’s largest investment in any US state. The tech giant’s investment in the Lone Star State dates back to 2019, when it built a data center in Midlothian, Texas. Google later expanded its presence in the state with the development of another data center in Red Oak, bringing the company’s total investment in Texas to $2.7 billion. According to Google, the latest $40 billion investment will be made through 2027.

    Google isn’t the only major tech company developing more AI infrastructure in the US. Earlier this year, NVIDIA announced plans to build manufacturing space for AI supercomputers in Houston and Dallas. More recently, Meta said it would invest $600 billion to build AI data centers across the US without specifying which states.

    Jackson Chen

    Source link