ReportWire

Tag: Google

  • Google to pay $68m to settle eavesdropping lawsuit – Tech Digest

    Google will pay $68 million to resolve a class-action lawsuit alleging its voice-activated assistant secretly recorded private conversations to fuel targeted advertising.

    Filed in a California federal court on Friday, the settlement addresses claims that Google Assistant frequently triggered without its “Hey Google” command, capturing sensitive discussions intended to remain private.

    The legal challenge centered on “false acceptance,” where the software mistakenly identifies background noise or ordinary speech as a wake word. Plaintiffs argued that these accidental recordings were sent to Google’s servers, analyzed and shared with third-party advertisers.

    While Google denied any wrongdoing and maintained it settled only to avoid protracted litigation, the deal marks a significant victory for privacy advocates.

    Millions of Android and Google device owners may be eligible for a payout if they owned a Google-made device dating back to May 2016. The eligible devices include the Pixel smartphone series, Nest speakers, and Google Home units.

    The settlement must still receive final approval from US District Judge Beth Labson Freeman before the funds can be distributed among the claimants.

    This case mirrors a similar settlement reached by Apple, which recently agreed to pay $95 million over claims that its Siri assistant also recorded users without authorization.

    As tech giants continue to integrate “always-on” microphones into household products, these legal outcomes are forcing a re-evaluation of how virtual assistants handle ambient audio and user consent.



    Chris Price


  • Google to pay $68 million over allegations its voice assistant eavesdropped on users

    Google has agreed to pay $68 million to settle a class-action lawsuit that alleged the technology giant’s voice assistant had illegally recorded users and then shared their private conversations with advertisers.

    The preliminary settlement, filed January 23 in federal court in San Jose, California, requires approval by U.S. District Judge Beth Labson Freeman.

    The settlement stems from a lawsuit filed by several Google device owners who claimed their conversations had been recorded without their knowledge. While Google stated that its voice assistant would only register people’s speech when consumers uttered an activation phrase, such as “Hey Google,” the consumers claimed that their devices recorded them even without using such language. 

    Some claimants alleged the Google devices recorded private conversations about financial issues, personal decisions and employment.

    If the settlement is approved, Google will place $68 million in a fund that will pay all consumer claims, as well as court-approved attorneys’ fees and other costs. 

    Alphabet-owned Google didn’t immediately respond to a request for comment from CBS News.

    Consumers will be able to submit claims for up to three Google devices, although how much individuals receive will depend on how many claims are submitted, according to the settlement.

    The agreement is similar to an Apple class-action lawsuit that alleged its Siri voice assistant had eavesdropped on private or confidential conversations. Apple device owners are this month receiving payments from the $95 million settlement, ranging from about $8 to $40 per person.


  • Malicious Google Chrome extensions hijack accounts


    Cybersecurity researchers have uncovered a serious threat hiding inside Google Chrome. 

    Several browser extensions pretend to be helpful tools. In reality, they quietly take over user accounts. These extensions impersonate popular human resources and business platforms such as Workday, NetSuite and SAP SuccessFactors. Once installed, they can steal login data and block security controls designed to protect users.

    Many people who installed them had no warning signs that anything was wrong.




    The fake Chrome extensions to watch out for

    Security researchers from Socket’s Threat Research Team identified five malicious Chrome extensions connected to this campaign. The add-ons were marketed as productivity or security tools, but were designed to hijack accounts.

    The extensions include:

    • DataByCloud Access
    • Tool Access 11
    • DataByCloud 1
    • DataByCloud 2
    • Software Access

    We reached out to Google, and a spokesperson told CyberGuy that the extensions are no longer available on the Chrome Web Store. However, some are still available on third-party software download sites, which continues to pose a risk. If you see any of these names installed in your browser, remove them immediately.

    Why malicious Chrome extensions look legitimate

    These malicious add-ons are designed to look legitimate. They use professional names, polished dashboards and business-focused descriptions. Some claim to offer faster access to workplace tools. Others say they restrict user actions to protect company accounts. Privacy policies often promise that no personal data is collected. For people juggling daily work tasks or managing business accounts, the pitch sounds helpful rather than suspicious.

    What these extensions actually do

    After installation, the extensions operate silently in the background. They steal session cookies, which are small pieces of data that tell websites you are already logged in. When attackers get these cookies, they can access accounts without a password. At the same time, some extensions block access to security pages. Users may be unable to change passwords, disable accounts or review login history. One extension even allows criminals to insert stolen login sessions into another browser. That lets them sign in instantly as the victim.
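To make the session-cookie risk concrete, here is a minimal sketch, in TypeScript, of what the Chrome "cookies" permission exposes. It is illustrative only, not code recovered from the extensions named above; the domain is a placeholder, and the manifest entries shown in comments are assumptions about how such an extension would be configured.

```typescript
// Hypothetical sketch: what an extension granted the "cookies" permission
// can read via the standard chrome.cookies API.
// manifest.json (Manifest V3) would need to declare:
//   "permissions": ["cookies"],
//   "host_permissions": ["<all_urls>"]

chrome.cookies.getAll({ domain: "example.com" }, (cookies) => {
  for (const cookie of cookies) {
    if (cookie.session) {
      // Session cookies carry the logged-in state; a malicious extension
      // would exfiltrate name/value pairs here. This sketch only logs them.
      console.log(`${cookie.name}=${cookie.value} (httpOnly=${cookie.httpOnly})`);
    }
  }
});
```

Note that the HttpOnly flag only blocks page JavaScript; an extension holding this permission can still read the cookie, which is why install-time permission warnings deserve real attention.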

    Why malicious Chrome extensions are so dangerous

    This attack goes beyond stealing credentials. It removes the ability to respond. Security teams may detect unusual activity, but cannot fix it through normal controls. Password changes fail. Account settings disappear. Two-factor authentication tools become unreachable. As a result, attackers can maintain access for long periods without being stopped.

    How to check for these extensions on your computer

    If you use Google Chrome, review your extensions now. The process only takes a few minutes.

    • Open Google Chrome
    • Click the three-dot menu in the top right corner
    • Select Extensions, then choose Manage Extensions
    • Review every extension listed

    Look for unfamiliar names, especially those claiming to offer access to HR platforms or business tools.
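For readers comfortable with browser tooling, the manual check above can also be scripted. Below is a hedged sketch, assuming it runs inside an extension that itself holds the "management" permission; the list of risky permissions is our own illustrative choice, not one from the Socket research.

```typescript
// Hypothetical audit: list installed extensions that request permissions
// commonly abused for account hijacking. Requires the "management"
// permission in this extension's own manifest.
const RISKY_PERMISSIONS = ["cookies", "webRequest", "management"];

chrome.management.getAll((extensions) => {
  for (const ext of extensions) {
    const flagged = (ext.permissions ?? []).filter((p) =>
      RISKY_PERMISSIONS.includes(p)
    );
    if (ext.enabled && flagged.length > 0) {
      console.log(`${ext.name} (${ext.id}) requests: ${flagged.join(", ")}`);
    }
  }
});
```

A flagged permission is not proof of malice, since many legitimate tools need cookie or request access, but anything unfamiliar on that list is worth investigating.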


    How to remove suspicious Chrome extensions

    If you find one of these extensions, remove it immediately.

    • Open Manage Extensions in Chrome
    • Find the suspicious extension
    • Click Remove
    • Confirm when prompted

    Restart your browser after removal to ensure the extension is fully disabled. If Chrome sync is enabled, repeat these steps on all synced devices before turning sync back on.

    What to do after removing the extension

    Removal is only the first step. Change passwords for any accounts accessed while the extension was installed. Use a different browser or device if possible.

    A password manager can help you create strong, unique passwords for each account and store them securely. This reduces the risk of reused passwords being exploited again.

    Next, see if your email has been exposed in past breaches. Our No. 1 password manager pick includes a built-in breach scanner that checks whether your email address or passwords have appeared in known leaks. If you discover a match, immediately change any reused passwords and secure those accounts with new, unique credentials.

    Check out the best expert-reviewed password managers of 2026 at Cyberguy.com.

    Finally, review account activity for unfamiliar logins, locations or devices and be sure to follow the steps below to stay safe moving forward.

    Ways to stay safe going forward

    Simple habits can significantly reduce your risk.

    1) Limit browser extensions

    Only install extensions you truly need. The fewer extensions you use, the smaller your attack surface becomes.

    2) Be cautious with add-ons

    Avoid extensions that promise premium access or special tools for enterprise platforms. Legitimate companies rarely require browser add-ons for account access.

    3) Check permissions carefully

    Be wary of extensions that request access to cookies, browsing data or account management. These permissions can be abused to hijack sessions.

    4) Review extensions regularly

    Check your browser every few months and remove tools you no longer use or recognize.


    5) Use strong antivirus software

    Strong antivirus software can help detect malicious extensions, block suspicious behavior and alert you to browser-based threats before damage occurs.

The best way to safeguard yourself from malicious links that install malware and potentially access your private information is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.

    Get my picks for the best 2026 antivirus protection winners for your Windows, Mac, Android and iOS devices at Cyberguy.com.

    6) Consider a data removal service

    If your work or personal information has been exposed, a data removal service can help reduce your digital footprint by removing your details from data broker sites. This lowers the risk of follow-up scams or identity misuse.

    While no service can guarantee the complete removal of your data from the internet, a data removal service is really a smart choice. They aren’t cheap, and neither is your privacy. These services do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you.

    Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.


    7) Avoid third-party download sites

    Do not reinstall extensions from third-party websites, even if they claim to offer the same features. These sites often host outdated or malicious versions.


    Kurt’s key takeaways

    Browser extensions can be useful, but this research shows how easily they can also be abused. These fake Chrome add-ons did not rely on flashy tricks or obvious warnings. They blended in, looked professional and quietly did their damage in the background. The good news is that you do not need to be a tech expert to protect yourself. Taking a few minutes to review your extensions, remove anything unfamiliar and lock down your accounts can make a real difference. Small habits, repeated regularly, go a long way in reducing risk. If there is one takeaway here, it is this: convenience should never come at the cost of security. A clean browser and strong account protections give you back control.

    How many browser extensions do you have installed right now that you have never looked at twice? Let us know by writing to us at Cyberguy.com.


  • Google Fast Pair flaw lets hackers hijack headphones


Google designed Fast Pair to make Bluetooth connections fast and effortless. One tap replaces menus, codes and manual pairing. That convenience now comes with serious risk. Security researchers at KU Leuven uncovered flaws in Google’s Fast Pair protocol that allow silent device takeovers. They named the attack method WhisperPair. An attacker nearby can connect to headphones, earbuds or speakers without the owner knowing. In some cases, the attacker can also track the user’s location. Even more concerning, victims do not need to use Android or own any Google products. iPhone users are also affected.



    What WhisperPair is and how it hijacks Bluetooth devices

    Fast Pair works by broadcasting a device’s identity to nearby phones and computers. That shortcut speeds up pairing. Researchers found that many devices ignore a key rule. They still accept new pairings while already connected. That opens the door to abuse.

    Within Bluetooth range, an attacker can silently pair with a device in about 10 to 15 seconds. Once connected, they can interrupt calls, inject audio or activate microphones. The attack does not require specialized hardware and can be carried out using a standard phone, laptop, or low-cost device like a Raspberry Pi. According to the researchers, the attacker effectively becomes the device owner.

    Audio brands affected by the Fast Pair vulnerability

The researchers tested 17 Fast Pair-compatible devices from major brands, including Sony, Jabra, JBL, Marshall, Xiaomi, Nothing, OnePlus, Soundcore, Logitech and Google. Most of these products passed Google certification testing. That detail raises uncomfortable questions about how security checks are performed.

    How headphones can become tracking devices

    Some affected models create an even bigger privacy issue. Certain Google and Sony devices integrate with Find Hub, which uses nearby devices to estimate location. If a headset has never been linked to a Google account, an attacker can claim it first. That allows continuous tracking of the user’s movements. If the victim later receives a tracking alert, it may appear to reference their own device. That makes the warning easy to dismiss as an error.


Attacker’s dashboard with location from the Find Hub network. (KU Leuven)

    Why many Fast Pair devices may stay vulnerable

    There is another problem most users never consider. Headphones and speakers require firmware updates. Those updates usually arrive through brand-specific apps that many people never install. If you never download the app, you never see the update. That means vulnerable devices could remain exposed for months or even years.

    The only way to fix this vulnerability is by installing a software update issued by the device manufacturer. While many companies have released patches, updates may not yet be available for every affected model. Users should check directly with the manufacturer to confirm whether a security update exists for their specific device.

    Why convenience keeps creating security gaps

    Bluetooth itself was not the problem. The flaw lives in the convenience layer built on top of it. Fast Pair prioritized speed over strict ownership enforcement. Researchers argue that pairing should require cryptographic proof of ownership. Without it, convenience features become attack surfaces. Security and ease of use do not have to conflict. But they must be designed together.

    Google responds to the Fast Pair WhisperPair security flaws

    Google says it has been working with researchers to address the WhisperPair vulnerabilities and began sending recommended patches to headphone manufacturers in early September. Google also confirmed that its own Pixel headphones are now patched.

    In a statement to CyberGuy, a Google spokesperson said, “We appreciate collaborating with security researchers through our Vulnerability Rewards Program, which helps keep our users safe. We worked with these researchers to fix these vulnerabilities, and we have not seen evidence of any exploitation outside of this report’s lab setting. As a best security practice, we recommend users check their headphones for the latest firmware updates. We are constantly evaluating and enhancing Fast Pair and Find Hub security.”

    Google says the core issue stemmed from some accessory makers not fully following the Fast Pair specification. That specification requires accessories to accept pairing requests only when a user has intentionally placed the device into pairing mode. According to Google, failures to enforce that rule contributed to the audio and microphone risks identified by the researchers.

    To reduce the risk going forward, Google says it updated its Fast Pair Validator and certification requirements to explicitly test whether devices properly enforce pairing mode checks. Google also says it provided accessory partners with fixes intended to fully resolve all related issues once applied.

    On the location tracking side, Google says it rolled out a server-side fix that prevents accessories from being silently enrolled into the Find Hub network if they have never been paired with an Android device. According to the company, this change addresses the Find Hub tracking risk in that specific scenario across all devices, including Google’s own accessories.

    Researchers, however, have raised questions about how quickly patches reach users and how much visibility Google has into real-world abuse that does not involve Google hardware. They also argue that weaknesses in certification allowed flawed implementations to reach the market at scale, suggesting broader systemic issues.

    For now, both Google and the researchers agree on one key point. Users must install manufacturer firmware updates to be protected, and availability may vary by device and brand.


Unwanted tracking notification showing the victim’s own device. (KU Leuven)

    How to reduce your risk right now

    You cannot disable Fast Pair entirely, but you can lower your exposure.

    1) Check if your device is affected

    If you use a Bluetooth accessory that supports Google Fast Pair, including wireless earbuds, headphones or speakers, you may be affected. The researchers created a public lookup tool that lets you search for your specific device model and see whether it is vulnerable. Checking your device is a simple first step before deciding what actions to take. Visit whisperpair.eu/vulnerable-devices to see if your device is on the list.

    2) Update your audio devices

    Install the official app from your headphone or speaker manufacturer. Check for firmware updates and apply them promptly.

    3) Avoid pairing in public places

    Pair new devices in private spaces. Avoid pairing in airports, cafés or gyms where strangers are nearby.

    4) Factory reset if something feels off

Unexpected audio interruptions, strange sounds or dropped connections are warning signs. A factory reset can remove unauthorized pairings, but it does not fix the underlying vulnerability. A firmware update is still required.

    5) Turn off Bluetooth when not needed

    Bluetooth only needs to be on during active use. Turning off Bluetooth when not in use limits exposure, but it does not eliminate the underlying risk if the device remains unpatched.

    6) Reset secondhand devices

    Always factory reset used headphones or speakers before pairing them. This removes hidden links and account associations.

    7) Take tracking alerts seriously

    Investigate Find Hub or Apple tracking alerts, even if they appear to reference your own device.

    8) Keep your phone updated

    Install operating system updates promptly. Platform patches can block exploit paths even when accessories lag behind.

    Kurt’s key takeaways

    WhisperPair shows how small shortcuts can lead to large privacy failures. Headphones feel harmless. Yet they contain microphones, radios and software that need care and updates. Ignoring them leaves a blind spot that attackers are happy to exploit. Staying secure now means paying attention to the devices you once took for granted.

Should companies be allowed to prioritize fast pairing over cryptographic proof of device ownership? Let us know by writing to us at Cyberguy.com.



  • Gmail is having issues with spam and misclassification | TechCrunch

    If your Gmail account doesn’t seem to be working properly today, you’re not alone.

    The official status dashboard for Google Workspace suggests that the issues began at around 5am Pacific on Saturday morning, with users experiencing both “misclassification of emails in their inbox and additional spam warnings.”

    For me, that meant my Primary inbox was filled with messages that would normally appear in the Promotions, Social, or Updates inboxes, and that spam warnings were appearing in emails from known senders.

    Other users have complained on social media that “all the spam is going directly to my inbox” and that Gmail’s filters seem “suddenly completely busted.”

    “We are actively working to resolve the issue,” Google said. “As always, we encourage users to follow standard best practices when engaging with messages from unknown senders.”

    TechCrunch has reached out to Google for additional comment.

    Anthony Ha


  • Google says it’s working to fix Gmail issue that’s led to flooded inboxes and increased spam warnings

    If your Gmail inbox is all out of whack today, you’re not alone. Gmail users have been encountering issues with the automatic filters that keep their main inbox free from the clutter of promotional emails and non-urgent updates, and some have reported seeing notices that emails have not been scanned for spam. Google confirmed to Engadget and in an update on its Workspace status dashboard that it’s aware of the problems, and is currently working on a fix.

    On social media and DownDetector, some Gmail users have also reported delays in receiving messages, leading to issues with two-factor authentication logins. Google noted that the problem has resulted in the “misclassification of emails in their inbox and additional spam warnings,” including a banner that says, “Be careful with this message. Gmail hasn’t scanned this message for spam, unverified senders, or harmful software.” In a statement to Engadget, a Google spokesperson echoed the message from its status dashboard, saying, “We are actively working to resolve the issue. As always, we encourage users to follow standard best practices when engaging with messages from unknown senders.”

    Cheyenne MacDonald


  • Former Googlers seek to captivate kids with an AI-powered learning app | TechCrunch

    Big Tech companies and upcoming startups want to use generative AI to build software and hardware for kids. A lot of those experiences are limited to text or voice, and kids might not find that captivating. Three former Google employees want to get over that hurdle with their generative AI-powered interactive app, Sparkli.

    Sparkli was founded last year by Lax Poojary, Lucie Marchand, and Myn Kang. As parents, Poojary and Kang were not able to satisfy their children’s curiosity or give engaging answers to their questions.

    “Kids, by definition, are very curious, and my son would ask me questions about how cars work or how it rains. My approach was to use ChatGPT or Gemini to explain these concepts to a six-year-old, but that is still a wall of text. What kids want is an interactive experience. This was our core process behind founding Sparkli,” Poojary told TechCrunch over a call.


    Prior to launching Sparkli, Poojary and Kang co-founded a travel aggregator called Touring Bird and a video-focused social commerce app, Shoploop, at Google’s Area 120, the company’s internal startup incubator. Poojary later went on to work at Google and YouTube on shopping. Marchand, who is the CTO of Sparkli, was also one of the co-founders of Shoploop and later worked at Google.

    “When a kid asked what Mars looks like fifty years ago, we might have shown them a picture,” said Poojary. “Ten years ago, we might have shown them a video. With Sparkli, we want kids to interact and experience what Mars is like.”

The startup said that education systems often fall behind in teaching modern concepts. Sparkli wants to teach kids skills like design, financial literacy, and entrepreneurship by creating an AI-powered learning “expedition.”

    The app lets users explore some predefined topics in different categories or ask their own questions to create a learning path. The app also highlights one new topic every day to let kids learn something new. Kids can either listen to the generated voice or read the text. Chapters under one topic include a mix of audio, video, images, quizzes, and games. The app also creates choose-as-you-go adventures that don’t create the pressure of getting questions right or wrong.


    Poojary mentioned that the startup uses generative AI to create all of its media assets on the fly. The company can create a learning experience within two minutes of a user asking a question, and it is trying to reduce this time further.

The startup mentioned that while AI assistants can help children learn certain topics, their focus is not on education. It said that to make its product effective, its first two hires were a PhD holder in educational science and AI, and a teacher. This was a conscious decision to ensure its content better serves children, keeping principles of pedagogy in mind.

    One of the key concerns around kids using AI is safety. Companies like OpenAI and Character.ai are facing lawsuits from parents who allege that these tools encouraged their children to self-harm. Sparkli said that while certain topics like sexual content are completely banned on the app, when a child asks about topics like self-harm, the app tries to teach them about emotional intelligence and encourages them to talk to their parents.

    The company is piloting its app with an institute that has a network of schools with over 100,000 students. Currently, its target audience is children aged 5-12, and it tested its product in over 20 schools last year.

    Sparkli has also built a teacher module that allows teachers to track progress and assign homework to kids. The company said that it was inspired by Duolingo to make the app engaging enough that kids can learn concepts and also feel like coming back to the app frequently. The app has streaks and rewards for kids for completing lessons regularly. It also gives kids quest cards, based on the initial avatar they have set up, for learning different topics.

    “We have seen a very positive response from our school pilots. Teachers often use Sparkli to create expeditions that kids can explore at the start of the class and lead them into a more discussion-based format. Some teachers also used it to create [homework] after they explain a topic to let kids explore further and get a measure of their understanding,” Poojary said.

    While the startup wants to primarily work with schools globally for the next few months, it wants to open up consumer access and let parents download the app by mid-2026.

    The company has raised $5 million in pre-seed funding led by Swiss venture firm Founderful. Sparkli is Founderful’s first pure-play edtech investment. The firm’s founding partner, Lukas Weder, said that the team’s technical skill and market opportunity nudged him to invest in the startup.

    “As a father of two kids who are in school now, I see them learning interesting stuff, but they don’t learn topics like financial literacy or innovation in technology. I thought from a product point of view, Sparkli gets them away from video games and lets them learn stuff in an immersive way,” Weder said.

    This post was first published on January 22, 2026.

    Ivan Mehta


  • Google Photos can now turn you into a meme

    In Big Tech’s never-ending quest to increase AI adoption, Google has unveiled a meme generator. The new Google Photos feature, Me Meme, lets you create personalized memes starring a synthetic version of you.

    Google describes Me Meme as “a simple way to explore with your photos and create content that’s ready to share with friends and family.” You can choose from a variety of templates or “upload your own funny picture” to use in their place.

The feature isn’t live for everyone yet, so you may not have access to it right away. (A Google representative told TechCrunch that the feature will roll out to Android and iOS users over the coming weeks.) But once it arrives, you can use it in the Google Photos app by tapping Create (at the bottom of the screen), then Me Meme. It will then ask you to choose a template and add a reference photo. There’s an option to regenerate the image if you don’t like the result.

    Google says Me Meme works best with well-lit, focused and front-facing portrait photos. “This feature is still experimental, so generated images may not perfectly match the original photo,” the company warns.

    Will Shanklin


  • Google Photos’ latest feature lets you meme yourself | TechCrunch

Google Photos will now let you make memes with your own images. On Thursday, Google introduced a new generative AI-powered feature called “Me Meme,” which lets you combine a meme template with a photo of yourself to generate a personalized version of the meme.

    The new feature, which will be first available to U.S.-based users, was originally spotted in development last October by the blog Android Authority. It was formally announced by Google via its Photos Community site on Thursday.

    According to Google, the feature is experimental, so generated images “may not perfectly match the original photo.” It suggests uploading well-lit, focused, and front-facing photos to get the best results.

The addition is meant to be a fun way to explore your photos and experiment with Google’s Gemini AI technology, specifically Nano Banana, Google’s popular AI image model. Nano Banana also powers other AI features in the Google Photos app, like the ability to re-create images in new styles, such as cartoons or paintings.

    Though a fairly unserious addition, all things considered, these types of features help remind users to return to the Photos app whenever they want to play around with AI tools, rather than going to a competitor’s product.

    Plus, users tend to gravitate toward features that show themselves in AI edits, as OpenAI found with its successful launch of the Sora app, which lets you make AI videos that can include yourself and your friends.

    “Me Meme” isn’t fully rolled out, so you may not see it in your updated Google Photos app just yet. When available, it will appear under the “Create” tab, Google says. A rep for Google told TechCrunch the feature will reach U.S. iOS and Android users over the “coming weeks.”


    To use the feature, you’ll select a template or upload your own, then tap “add photo” and “Generate.” Google notes that more templates are being added over time. After the AI creates the image, you can save the photo, share it on other platforms, or tap “regenerate” to have it re-imagine the image a second time.

    Sarah Perez


  • Apple taps Google Gemini to power Apple Intelligence


    Apple and Google just made one of the most important artificial intelligence (AI) announcements of the year. Under a new multi-year collaboration, Apple will base the next generation of its Apple Foundation Models on Google’s Gemini models and cloud technology.

    The companies confirmed the partnership in a joint statement, signaling a major shift in how Apple plans to deliver AI features across the iPhone, iPad and Mac. 

    The deal comes as Apple faces growing pressure to catch up in AI, especially after delaying a long-promised overhaul of Siri. 



    Why Apple chose Google’s Gemini

    Apple evaluated multiple AI options before settling on Gemini. According to the joint statement, Apple believes Google’s AI provides the strongest foundation for its own models. Gemini has quickly become one of the most capable large language model families, backed by Google’s massive cloud infrastructure. 

    For Apple, this means faster development, more reliable performance and the ability to roll out advanced features without rebuilding everything from scratch. At the same time, Apple says Apple Intelligence will still run on the device and through its Private Cloud Compute system. In other words, Apple controls how user data flows, even if the underlying models come from Google.

    The joint statement from Apple and Google

    Here is the full joint statement from the two companies:

    “Apple and Google have entered into a multi-year collaboration under which the next generation of Apple Foundation Models will be based on Google’s Gemini models and cloud technology. These models will help power future Apple Intelligence features, including a more personalized Siri coming this year.

    “After careful evaluation, Apple determined that Google’s AI technology provides the most capable foundation for Apple Foundation Models and is excited about the innovative new experiences it will unlock for Apple users. Apple Intelligence will continue to run on Apple devices and Private Cloud Compute, while maintaining Apple’s industry-leading privacy standards.”

    That last line is critical. Apple is clearly trying to reassure users that privacy remains central, even with Google’s technology involved.


    A long-delayed Siri overhaul finally moves forward

    A more personalized Siri is one of the biggest promises tied to this deal. Apple had already previewed major Siri improvements but ran into development problems. Reports described internal frustration, bugs and delays that pushed the revamped assistant further out than planned. This partnership helps explain why. By leaning on Gemini, Apple can accelerate Siri’s evolution instead of trying to solve every AI challenge internally. The result should be a smarter assistant that better understands context, handles complex requests and integrates more deeply across Apple apps.

    Behind-the-scenes pressure at Apple

    This deal did not happen in a vacuum. Apple has faced criticism for moving too slowly on AI while rivals pushed ahead. Apple had reportedly been in talks to license a custom version of Gemini for Siri and was expected to pay roughly $1 billion per year, though the official announcement did not confirm any financial terms. 

    Apple has also reshuffled its AI leadership. The company recently hired Amar Subramanya as vice president of artificial intelligence. He replaced John Giannandrea, who stepped down from the role after leading Apple’s AI strategy since 2018.

    Antitrust questions loom

    There is also a regulatory angle. Apple and Google already face scrutiny for their long-standing search agreement. That partnership came under renewed attention after U.S. District Judge Amit Mehta ruled that Google holds a monopoly in online search, while still allowing payments to Apple to keep Google as the default search engine on iPhones. This new AI collaboration could attract fresh attention from antitrust regulators who worry about powerful tech companies becoming even more intertwined.


    What this means for you

    For those of you using Apple devices, the impact is straightforward. You should see smarter Apple Intelligence features arrive faster, starting with a more capable Siri. Tasks like summarizing messages, handling complex reminders and understanding context across apps should improve. At the same time, Apple insists your data stays protected. Apple Intelligence will still rely on device processing and Private Cloud Compute, rather than funneling personal data directly into Google’s systems. In short, users get better AI without giving up Apple’s privacy stance, at least in theory.


    Kurt’s key takeaways

    Apple’s partnership with Google marks a turning point in its AI story. Instead of going it alone, Apple is betting that combining its privacy-focused platform with Google’s AI muscle is the fastest path forward. If Apple delivers on its promises, this deal could finally close the AI gap that has frustrated users and investors alike. The real test will come when those features land on your devices.

    Do you trust Apple to balance powerful AI with privacy now that Google’s technology sits under the hood? Let us know by writing to us at Cyberguy.com.



  • ChatGPT to show ads, Grandparents hooked on ‘Boomerslop’ – Tech Digest


Adverts will soon appear at the top of the AI tool ChatGPT for some users, the company OpenAI has announced. The trial will initially take place in the US, and will affect some ChatGPT users on the free service and a new subscription tier, called ChatGPT Go. This cheaper option will be available for all users worldwide, and will cost $8 a month, or the equivalent pricing in other currencies. OpenAI says during the trial, relevant ads will appear after a prompt – for example, asking ChatGPT for places to visit in Mexico could result in holiday ads appearing. BBC

    Doctors and medical experts have warned of the growing evidence of “health harms” from tech and devices on children and young people in the UK. The Academy of Medical Royal Colleges (AoMRC) said frontline clinicians have given personal testimony about “horrific cases they have treated in primary, secondary and community settings throughout the NHS and across most medical specialities”. The body, which represents 23 medical royal colleges and faculties, plans to gather evidence to establish the issues healthcare professionals and specialists are seeing repeatedly that may be attributed to tech and devices. Sky News 


“What are you even doing in 2025?” says a handsome kid in a denim jacket, somewhere just shy of 18. “Out there it looks like everyone is glued to their phones, chasing nothing.” The AI-generated teenager features in an Instagram video that has more than 600,000 likes from an account dubbed Maximal Nostalgia. The video is one of dozens singing the praises of the 1970s and 1980s. Created with AI, the videos urge viewers to relive their halcyon days. The clips have gone viral across Instagram and Facebook, part of a new type of AI content that has been dubbed “boomerslop”. Telegraph

    More than 60 Labour MPs have written to Keir Starmer urging him to back a social media ban for under-16s, with peers due to vote on the issue this week. The MPs, who include select committee chairs, former frontbenchers, and MPs from the right and left of the party, are seeking to put pressure on the Prime Minister as calls mount for the UK to follow Australia’s precedent. Starmer has said he is open to a ban but members of the House of Lords are looking to force the issue when they vote this week on an amendment to the children, wellbeing and schools bill. Guardian


Huawei has released a new update for the Watch Ultimate 2 smartwatch, adding new health features, including a heart failure risk assessment. The update comes with HarmonyOS firmware version 6.0.0.209 and is rolling out in batches. The new additions include a coronary heart disease risk assessment. Users can join a coronary heart disease research project via the Huawei Research app on their smartphone. HuaweiCentral

Google has just changed Gmail after twenty years. In among countless AI upgrades, including “personalized AI” that gives Gemini access to all your data in Gmail, Photos and more, comes a surprising decision: you can now change your primary Gmail address for the first time ever. You shouldn’t hesitate to do so. This new option is good, but it’s not perfect. And per 9to5Google, “Google also notes this can only be done once every 12 months, up to 3 times, so make this one count.” Forbes

    Chris Price


  • Google Launches Market Access Program to Help Indian AI Startups Scale Globally


    As Indian startups continue to build increasingly capable AI products, one challenge keeps coming up repeatedly: going from a strong product to a globally viable business. It’s a gap that sits somewhere between technology readiness and real market adoption, especially when startups try to sell to large enterprises outside India.

    This is where Google steps in with its newly launched Market Access Program for India, an initiative aimed at helping AI-focused startups move beyond the trial phases, unlock enterprise customers, and scale internationally. Unlike growth programs that focus heavily on product building or investment funding, Google’s approach here is far more commercial, and that’s what makes it interesting. Let’s have a look.

    What is Google’s Market Access Program?

    Google’s Market Access Program is designed in partnership with two government-backed startup agencies: MeitY Startup Hub and Startup India. It aims to specifically help AI-first Indian startups that already have a working product and are now looking to expand their reach, both within India and globally. The core idea is simple: help startups understand how global enterprises buy technology, and help them prepare for that reality.

Rather than focusing on R&D or early-stage experimentation, the program works with startups that are past the prototype phase. These companies typically have customers, use cases, and technical validation, but may struggle to scale. Typical problems include update cycles, pricing strategies, compliance expectations, or global distribution.

    Another important aspect of the program is its global outlook. While India remains a key market, the intent is to help startups position themselves for customers in regions where enterprise AI adoption is accelerating. This includes markets like North America, Europe, and Southeast Asia.

    Why This Program Matters for Indian Startups

    India’s AI startup ecosystem has matured rapidly over the past few years. There is no shortage of technical talent or innovative use cases. What’s often missing, however, is structured support around selling AI at scale, especially to large organisations that expect reliability, security, and long-term commitment.

    Google’s Market Access Program attempts to bridge this exact gap in the following ways:

    • Enterprise go-to-market support: Startups receive guidance on how to sell to large organisations, including deal structuring, long sales cycles, procurement processes, and stakeholder management.
• Access to Google’s AI infrastructure: Participating startups can leverage Google’s AI models, cloud infrastructure, and tools to build reliable and production-ready solutions.
    • Exposure to global customers: Through curated networking opportunities, startups get visibility with international CIOs, CXOs, and decision-makers who are actively exploring AI solutions.
    • Improved credibility: Being associated with Google adds a layer of trust, which can be critical when startups approach large global clients for the first time.

    Indian Companies Associated With Google’s Startup Ecosystem

    While Google has not yet announced a formal list of startups participating in the Market Access Program, the company has consistently worked with Indian startups through accelerator programs and other initiatives. Over the years, Google has supported founders via programs such as Google for Startups Accelerator and AI-first cohorts, offering a mix of technical mentorship, cloud infrastructure support, and access to its AI stack. These efforts, while not identical to the new Market Access Program, provide a clear indication of the kind of companies Google typically engages with.

    Startups backed by Google

Startups backed through these initiatives come from sectors like healthcare, enterprise SaaS, generative AI, data intelligence, and more. Some popular examples include companies like Nawgati, a platform that helps people find less crowded CNG stations.

    Another example is BigOHealth, which connects patients with top-grade medical facilities across India. Both companies have benefited from Google-led mentorship and technical support in their early days. Google has also helped India-based generative AI players like NeuralGarage, a company that builds ultra-high quality audio-visual foundation models, and Predis.ai, which helps businesses create engaging ads using AI.

    How Indian Startups Can Sign Up

Startups interested in joining the Google Market Access Program need to meet a few basic criteria. First, the company should be AI-first, with a product that is already in use or close to commercial deployment. Early-stage ideas or research-only projects are unlikely to be approved, according to details the company shared during the event.

    1. Open the registration page of the Google Market Access Program.


    2. Click on apply, and sign in using your Google or company email account.

    3. Provide the necessary details of your startup, and add attachments as required.

Note: The last date to register is January 20, 2026.

These applications are reviewed by Google’s internal teams, and selected startups are invited to join the program directly. However, the company has not disclosed the exact selection criteria or the other details involved in qualification.

    Google Expands New AI Models For Medical and Support Applications

    Alongside its Market Access Program, Google also expanded its open AI model ecosystem with two significant additions: MedGemma 1.5 and FunctionGemma. These models are part of Google’s broader Gemma family of models, which are designed for developers and help them build real-world AI solutions.

    MedGemma 1.5 is a 4-billion-parameter multimodal model specially fine-tuned for healthcare and medical AI applications. It builds on earlier MedGemma releases by enabling developers to work with complex medical imaging modalities such as CT and MRI volumes, whole-slide pathology images, longitudinal X-ray analysis, and text-based medical records. The model is positioned as a foundational tool for startups innovating in clinical workflows, diagnostic assistance, and health data interpretation, and its primary aim is to make healthcare more affordable.

Similar to this is FunctionGemma, a lightweight, function-calling-optimised variant tuned for building on-device AI. Rather than just generating text, FunctionGemma is designed to translate natural language commands into executable actions. For example, a developer could use the model to run customer-support operations locally on a user’s device, keeping the interaction in a secure, on-device environment.
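To ground that description, here is a minimal sketch of the function-calling pattern in TypeScript. Every name in it, including the modelGenerate stub, is hypothetical; this shows the general shape of mapping natural language to an allow-listed local action, not FunctionGemma’s actual interface.

```typescript
// Hypothetical sketch of on-device function calling: the model proposes a
// structured call, and the app validates and executes it locally.
type FunctionCall = { name: string; args: Record<string, string> };

// Registry of local actions the model is allowed to trigger.
const actions: Record<string, (args: Record<string, string>) => string> = {
  set_volume: (args) => `Volume set to ${args.level}%`,
  open_settings: (args) => `Opened ${args.page} settings`,
};

// Stand-in for the on-device model runtime. A real model would parse the
// utterance; this stub hard-codes one mapping for demonstration.
async function modelGenerate(utterance: string): Promise<FunctionCall> {
  void utterance;
  return { name: "set_volume", args: { level: "80" } };
}

async function handleUserCommand(utterance: string): Promise<string> {
  const call = await modelGenerate(utterance);
  const action = actions[call.name];
  // Only dispatch calls that match the registry; never execute
  // arbitrary model output.
  if (!action) return "Unsupported action";
  return action(call.args);
}

handleUserCommand("turn the volume up to 80%").then(console.log);
```

The design point is that the model never executes anything itself: it only proposes a call, and the app checks it against an allow-list before acting, which is what makes the pattern plausible for secure, local customer-support workflows.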

    FAQs

    Q. What is the difference between Google Market Access Program and Accelerator Program?

The Google Accelerator Program helps startups that are in the prototype or trial phase bring their ideas to reality, whereas the Google Market Access Program aims to grow AI startups that already have a product or service deployed at a small scale.

    Q. Are there any charges to enrol in Google Market Access Program?

Currently, there are no charges to enrol and register in the Google Market Access Program. However, Google may take an equity stake or enter another financial arrangement if your startup qualifies and is selected.

    Wrapping Up

With the Market Access Program, Google is addressing one of the most common gaps in India’s AI startup journey: the transition from capable technology to global enterprise adoption. By focusing on enterprise exposure and practical scaling support, the initiative can help startups grow faster and compete in global markets. This could be a strong push for startups that have demand in foreign markets but lack the resources to serve customers overseas.



    Chinmay Dhumal


  • What Doctors Really Think of ChatGPT Health and A.I. Medical Advice

The rush to deploy A.I. in health care raises hard questions about accuracy and trust.

    Each week, more than 230 million people globally ask ChatGPT questions about health and wellness, according to OpenAI. Seeing a vast, untapped demand, OpenAI earlier this month launched ChatGPT Health and made a swift $60 million acquisition of the health care tech startup Torch to turbocharge the effort. Anthropic soon followed suit, announcing Claude for Healthcare last week. The move from general-purpose chatbot to health care advisor is well underway.

    For a world rife with health care inequities—whether skyrocketing insurance costs in the U.S. or care deserts in remote regions around the globe—democratized information and advice about one’s health is, at least in theory, a positive development. But the intricacies of how large A.I. companies operate raise questions that health tech experts are eager to interrogate.

    “What I am worried about as a clinician is that there is still a high level of hallucinations and erroneous information that sometimes makes it out of these general-purpose LLMs to the end user,” said Saurabh Gombar, a clinical instructor at Stanford Health Care and the chief medical officer and co-founder of Atropos Health, an A.I. clinical decision support platform.

    “It’s one thing if you’re asking for a spaghetti recipe and it’s telling you to add 10 times the amount [of an ingredient] that you should. But it’s a totally different thing if it’s fundamentally missing something about the health care of the individual,” he told Observer.

    For example, a doctor might see left shoulder pain as a non-traditional sign of a heart attack in certain patients, whereas a chatbot might only suggest taking an over-the-counter pain medication. The reverse can also happen. If a patient comes to a provider convinced they have a rare disorder based on a simple symptom after chatting with A.I., it can erode trust when a human doctor seeks to rule out more common explanations first.

Google is already under fire for its AI Overviews providing inaccurate and false health information. ChatGPT, Claude and other chatbots have faced similar criticism for hallucinations and misinformation, even as they attempt to limit liability in health-related conversations by noting that they are “not intended for diagnosis or treatment.”

    Gombar argues that A.I. companies must do more to publicly emphasize how often an answer may be hallucinated and clearly flag when information is poorly grounded in evidence or entirely fabricated. This is particularly important given that extensive chatbot disclaimers serve to prevent legal recourse, whereas human health care models allow individuals to sue for malpractice.

    The primary care provider workforce in the U.S. has shrunk by 11 percent annually over the past seven years, especially in rural areas. Gombar suggests that physicians may no longer control how they fit into the global health care landscape. “If the whole world is moving away from going to physicians first, then physicians are going to be utilized more as an expert second opinion, as opposed to the primary opinion,” he said.

    The inevitable question of data privacy

    OpenAI and Anthropic have been explicit that their health tools are secure and compliant, including with the Health Insurance Portability and Accountability Act (HIPAA) in the U.S., which protects sensitive patient health information from unauthorized use and disclosure. But for Alexander Tsiaras, founder and CEO of the A.I.-driven medical record platform StoryMD, there is more to consider.

    “It’s not the protection from being hacked. It’s the protection of what they will do with [the data] after,” Tsiaras told Observer. “In the back end, their encryption algorithms are as good as anyone in HIPAA. But once you have the data, can you trust them? And that’s where I think it’s going to be a real problem, because I certainly would not trust them.”

    Tsiaras points to the persistent techno-optimism of Silicon Valley elites like OpenAI CEO Sam Altman, arguing that they live in a bubble and have “proven themselves to not care.”

    On a more tangible level, chatbots tend to be overly agreeable. xAI’s Grok recently drew criticism for agreeing to generate nearly nude photos of real women and children, though the company blocked this capability this week following public outcry. Chatbots can also reinforce delusions and harmful thought patterns in people with mental illness, triggering crises such as psychosis or even suicide.

    Andrew Crawford, senior counsel for privacy and data at the nonpartisan think tank Center for Democracy and Technology, said an A.I. company prioritizing profit through personalization over data protection can put sensitive health information at serious risk.

    “Especially as OpenAI moves to explore advertising as a business model, it’s crucial that the separation between this sort of health data and memories that ChatGPT captures from other conversations is airtight,” Crawford said in a statement to Observer.

    Then there is the question of non-protected health data that users voluntarily input. Personal wellness companies such as MyFitnessPal and Oura already pose data privacy risks. “It’s amplifying the inherent risk by making that data more available and accessible,” Gombar said.

    For people like Tsiaras, profit-driven A.I. giants have tainted the health tech space. “The trust is eroded so significantly that anyone [else] who builds a system has to go in the opposite direction of spending a lot of time proving that we’re there for you and not about abusing what we can get from you,” he said.

    Nasim Afsar, a physician, former chief health officer at Oracle and advisor to the White House and global health agencies, views ChatGPT Health as an early step toward what she calls intelligent health, but far from a complete solution.

    “A.I. can now explain data and prepare patients for visits,” Afsar said in a statement to Observer. “That’s meaningful progress. But transformation happens when intelligence drives prevention, coordinated action and measurable health outcomes, not just better answers inside a broken system.”

    What Doctors Really Think of ChatGPT Health and A.I. Medical Advice

    Rachel Curry

    Source link

  • Google is appealing the ruling from its search antitrust case to avoid sharing data with rivals

    Google has filed its appeal in the Department of Justice’s antitrust case, which ended with a federal judge ruling that the company was maintaining a monopoly with its search business. While the company goes through the appeals process, it is asking that implementation of the remedies from the case, which include a requirement that Google share search data with its competitors, be paused.

    “As we have long said, the Court’s August 2024 ruling ignored the reality that people use Google because they want to, not because they’re forced to,” Google said in a statement. “The decision failed to account for the rapid pace of innovation and intense competition we face from established players and well-funded start-ups. And it discounted compelling testimony from browser makers like Apple and Mozilla who said they choose to feature Google because it provides the highest quality search experience for their consumers.”

    The company says that the requirement that it “provide syndication services to rivals” and share search data is a privacy risk and could “discourage competitors from building their own products.” Both remedies were compromises relative to what the Justice Department originally proposed, which included forcing Google to sell its Chrome web browser.

    After a 10-week trial held in 2023, a federal judge ruled in 2024 that Google held a search monopoly because of the placement it maintained as the default search engine on multiple platforms, and the control it exerted over the ads that appear in search results. Both arguments were key points in the DOJ’s original 2020 lawsuit.

    Ian Carlos Campbell

    Source link

  • At 25, Wikipedia Navigates a Quarter-Life Crisis in the Age of A.I.

    Turning 25 amid an A.I. boom, Wikipedia is racing to protect traffic, volunteers and revenue without losing its mission. Photo illustration by Nikolas Kokovlis/NurPhoto via Getty Images

    Traffic to Wikipedia, the world’s largest online encyclopedia, naturally ebbs and flows with the rhythms of daily life—rising and falling with the school calendar, the news cycle or even the day of the week—making routine fluctuations unremarkable for a site that draws roughly 15 billion page views a month. But sustained declines tell a different story. Last October, the Wikimedia Foundation, the nonprofit that oversees Wikipedia, disclosed that human traffic to the site had fallen 8 percent in recent months as a growing number of users turned to A.I. search engines and chatbots for answers.

    “I don’t think that we’ve seen something like this happen in the last seven to eight years or so,” Marshall Miller, senior director of product at the Wikimedia Foundation, told Observer.

    Launched on Jan. 15, 2001, Wikipedia turns 25 today. This milestone comes at a pivotal point for the online encyclopedia, which is straddling a delicate line between fending off existential risks posed by A.I. and avoiding irrelevance as the technology transforms how people find and consume information.

    “It’s really this question of long-term sustainability,” Lane Becker, senior director of earned revenue at the Wikimedia Foundation, told Observer. “We’d like to make it at least another 25 years—and ideally much longer.”

    While it’s difficult to attribute Wikipedia’s recent traffic decline to any single factor, the drop clearly coincides with the emergence of A.I. search features, according to Miller. Chatbots such as ChatGPT and Perplexity often cite and link to Wikipedia, but because the information is already embedded in the A.I.-generated response, users are less likely to click through to the source, depriving the site of page views.

    Yet the spread of A.I.-generated content also underscores Wikipedia’s central role in the online information ecosystem. Wikipedia’s vast archive—more than 65 million articles across over 300 languages—plays a prominent role within A.I. tools, with the site’s data scraped by nearly all large language models (LLMs). “Yes, there is a decline in traffic to our sites, but there may well be more people getting Wikipedia knowledge than ever because of how much it’s being distributed through those platforms that are upstream of us,” said Miller.

    Surviving in the era of A.I.

    Wikipedia must find a way to stay financially and editorially viable as the internet changes. Declining page views not only mean that fewer visitors are likely to donate to the platform, threatening its main source of revenue, but also risk shrinking the community of volunteer editors who sustain it. Fewer contributors would mean slower content growth, ultimately leaving less material for LLMs to draw from.

    Metrics that track volunteer participation have already begun to slip, according to Miller. While noting that “it’s hard to parse out all the different reasons that this happens,” he conceded that the Foundation has “reason to believe that declines in page views will lead to declines in volunteer activity.”

    To maintain a steady pipeline of contributors, users must first become aware of the platform and understand its collaborative model. That makes proper attribution by A.I. tools essential, Miller said. Beyond simply linking to Wikipedia, surfacing metadata—such as when a page was last updated or how many editors contributed—could spur curiosity and encourage users to engage more deeply with the platform.

    Tech companies are becoming aware of the value of keeping Wikipedia relevant. Over the past year, Microsoft, Mistral AI, Perplexity AI, Ecosia, Pleias and ProRata have joined Wikimedia Enterprise, a commercial product that allows corporations to pay for large-scale access and distribution of Wikipedia content. Google and Amazon have long been partners of Wikimedia Enterprise, which launched in 2021.

    The basic premise is that Wikimedia Enterprise customers can access content from Wikipedia at a higher volume and speed while helping sustain the platform’s mission. “I think there’s a growing understanding on the part of these A.I. companies about the significance of the Wikipedia dataset, both as it currently exists and also its need to exist in the future,” said Becker.

    Wikipedia is hardly alone in this shift. News organizations, including CNN, the Associated Press and The New York Times, have struck licensing deals with A.I. companies to supply editorial content in exchange for payment, while infrastructure providers like Cloudflare offer tools that allow websites to charge A.I. crawlers for access. Last month, the licensing nonprofit Creative Commons announced its support of a “pay-to-crawl” approach for managing A.I. bots.

    Preparing for an uncertain future

    Wikipedia itself is also adapting to a younger generation of internet users. In an effort to make editing Wikipedia more appealing, the platform is working to enhance its mobile edit features, reflecting the fact that younger audiences are far more likely to engage on smartphones than desktop computers.

    Younger users’ preference for social video platforms such as YouTube and TikTok has also pushed Wikipedia’s Future Audiences team—a division tasked with expanding readership—to experiment with video. The effort has already paid off, producing viral clips on topics ranging from Wikipedia’s most hotly disputed edits to the courtship dance of the black-footed albatross and Sino-Roman relations. The organization is also exploring a deeper presence on gaming platforms, another major draw for younger users.

    Evolving with the times also means integrating A.I. further within the platform. Wikipedia has introduced features such as Edit Check, which offers real-time feedback on whether a proposed edit fits a page, and is developing features like Tone Check to help ensure articles adhere to a neutral point of view.

    A.I.-generated content has also begun to seep onto the platform. As of August 2024, roughly 5 percent of newly created English articles on the site were produced with the help of A.I., according to a Princeton study. Seeing this as a problem, Wikipedia introduced a “speedy deletion” policy that allows editors to quickly remove content that shows clear signs of being A.I.-generated. Still, the community remains divided over whether using A.I. for tasks such as drafting articles is inherently problematic, said Miller. “There’s this active debate.”

    From streamlining editing to distributing its content ever more widely, Wikipedia is betting that A.I. can ultimately be an ally rather than an adversary. If managed carefully, the technology could help accelerate the encyclopedia’s mission over the next 25 years—as long as it doesn’t bring down the encyclopedia first.

    “Our whole thing is knowledge dissemination to anyone that wants it, anywhere that they want it,” said Becker. “If this is how people are going to learn things—and people are learning things and gaining value from the information that our community is able to bring forward—we absolutely want to find a way to be there and support it in ways that align with our values.”

    At 25, Wikipedia Navigates a Quarter-Life Crisis in the Age of A.I.

    Alexandra Tremayne-Pengelly

    Source link

  • The good, bad, and the ugly of Apple’s AI deal with Google | Fortune

    Apple and Google’s surprise AI partnership announcement on Monday sent shockwaves across the tech industry (and lifted Google’s market cap above $4 trillion). The two tech giants’ deal to infuse Google’s AI technology into Apple’s mobile software, including in an updated version of the Siri digital assistant, has major implications in the high-stakes battle to dominate AI and to own the platform that will define the next generation of computing.

    While there are still many unanswered questions about the partnership, including the financial component and the duration of the deal, some key takeaways are already clear. Here’s why the deal is good news for Google, so-so news for Apple, and bad news for OpenAI.

    The deal is further validation that Google has got its AI mojo back

    When OpenAI debuted ChatGPT in November 2022, and throughout a good part of the next two years, many industry observers had their doubts about Google’s prospects in the changing landscape. The search giant at times appeared to be floundering as it raced to field models that could be as capable as OpenAI’s ChatGPT and Anthropic’s Claude. Google endured several embarrassing product debuts in which its Bard chatbot, and then its successor Gemini models, got facts wrong, recommended glue as a pizza topping, and generated images of historically anachronistic Black Nazis.

    But today, Google’s latest Gemini models (Gemini 3) are among the most capable on the market and gaining traction among both consumers and businesses. The company has also been attracting lots of customers to its Google Cloud, in part because of the power of its bespoke AI chips, called tensor processing units (or TPUs), which may offer cost and speed advantages over Nvidia’s graphics processing units (GPUs) for running AI models.

    Apple’s statement on Monday that “after careful consideration” it had determined that Google’s AI technology “provides the most capable foundation for Apple Foundation Models” served as Gemini’s ultimate validation—particularly given that until now, OpenAI was Apple’s preferred technology provider for “Apple Intelligence” offerings. Analysts at Bank of America said the deal reinforced “Gemini’s position as a leading LLM for mobile devices” and should also help strengthen investor confidence in the durability of Google’s search distribution and long-term monetization.

    Hamza Mudassir, who runs an AI agent startup and teaches strategy and policy at the University of Cambridge’s Judge Business School, said Apple’s decision is likely about more than just Gemini’s technical capabilities. Apple does not allow partners to train on Apple user data, and Mudassir theorized that Apple may have concluded Google’s control over its ecosystem—such as owning its own cloud—could provide data privacy and intellectual property guarantees that perhaps OpenAI or Anthropic couldn’t match.

    The deal also likely translates directly into revenue for Google. Although the financial details of the deal were not disclosed, a previous report from Bloomberg suggested Apple was paying Google about $1 billion a year for the right to use its technology.

    The bigger prize for Google may be the foot in the door the deal provides to Apple’s massive distribution channel: the approximately 1.5 billion iPhone users worldwide. With Gemini powering the new version of Siri, Google may get a share of any revenue those users generate through product discovery and purchases made through a Gemini-powered Siri. Eventually, the deal could even lead to an arrangement that sees Gemini’s chatbot app pre-installed on iPhones.

    For Apple, the implications of the deal are a bit more ambivalent

    Apple’s Tim Cook
    David Paul Morris/Bloomberg via Getty Images

    The iPhone maker will obviously benefit from giving users a much more capable Siri, as well as other AI features, at an attractive cost while guaranteeing user privacy. Dan Ives, an equity analyst who covers Apple for Wedbush, said in a note the deal provided Apple with “a stepping stone to accelerate its AI strategy into 2026 and beyond.”

    But Apple’s continuing need to rely on partners—first OpenAI and now Google—to deliver these AI features is a worrisome sign, suggesting that Apple, a champion of vertical integration, is still struggling to build its own LLM.

    It’s a problem that has dogged the company since the beginning of the generative AI era: For months last year several Apple Intelligence features were delayed, and the long-awaited debut of an updated Siri has been pushed back numerous times. These delays have taken a toll on Apple’s reputation as a tech leader and angered customers, some of whom filed a class action lawsuit against the company after the AI features promoted in ads for the iPhone 16 weren’t initially available on the device.

    When Apple CEO Tim Cook promised an updated version of Siri would be released in 2026, many assumed it would be powered by Apple’s own AI models. But apparently those models are not yet ready for prime time and the new Siri will be powered by Google instead.

    Daniel Newman, an analyst at the Futurum Group, said that 2026 is a “make-or-break year” for Apple. “We have long said the company has the user base and distribution that allows it to be more patient in chasing new trends like AI, but this is a critical year for Apple,” Newman said.

    Cook has shaken up the ranks, installing a new head of AI who previously worked at Google on Gemini. And, if the delays turn out to be related to Apple’s specific requirements around things like privacy, it may ultimately prove to have been worth the wait. Ideally, Apple would want an AI model that matches the capabilities of those from OpenAI, Anthropic, and Google but which is compact enough to run entirely on an iPhone, so that user data does not have to be transmitted to the cloud. It’s possible, said Mudassir, that Apple is grappling with technical limitations involving the amount of power these models consume and how much heat they generate. Partnering with Google buys Apple time to make breakthroughs in compression and architecture while also getting Wall Street “off its back,” he said.

    Apple defenders note that the company is rarely a first mover in new technology—it was not the first to create an MP3 player, a smartphone, wireless earphones, or a smart watch, yet it came from behind to dominate many of those product categories with a combination of design innovation and savvy marketing. And Apple has a history of learning from partners for key technology, such as chips, before ultimately bringing these efforts in-house.

    Or, in the case of internet search, Apple simply partnered with Google for the long term, using the Google engine to handle search queries in its Safari browser. The fact that Apple never developed its own search engine has not hurt its growth. Could the same principle hold true for AI?

    But the Apple-Google tie-up is almost certainly bad news for OpenAI

    OpenAI CEO Sam Altman
    Florian Gaertner/Photothek via Getty Images

    The Google partnership is not exclusive: Apple may continue to rely on OpenAI’s models for some of its Apple Intelligence features, and OpenAI still has a chance to prove its models’ worth to Cupertino. Even so, Apple’s decision to go with Google is a blow. At the very least, it solidifies the narrative that Google has not only caught up with OpenAI but has now edged past it in having the best AI models on the market.

    Deprived of built-in distribution through Apple’s customer base, OpenAI may find it harder to grow its own user base. The company currently boasts more than 800 million weekly users, but recent reports suggest that the rate of usage may be slowing. OpenAI CEO Sam Altman has noted that many people currently see ChatGPT as synonymous with AI. But that perception could fray if Apple users find delight in using Gemini through Siri and come to see Gemini as the better model.

    Altman told reporters last month that he sees Apple as his company’s primary long-term rival. OpenAI is in the process of developing a new kind of AI device, with help from Apple’s former chief designer Jony Ive, that Altman hopes will rival the phone as the primary way consumers interface with AI assistants. That device may debut this year. As long as Apple was dependent on ChatGPT to power Siri, OpenAI had a good view into the capabilities its new device would be competing against. OpenAI is unlikely to have as much insight into Apple’s AI capabilities going forward, which may make it harder for the upstart to position its new device as an iPhone killer.

    OpenAI has to hope its new device is a hit that lets it lock users into a closed ecosystem, not dissimilar to the one Apple has built around its own hardware and iOS software. This “walled garden” approach is one way to keep users from switching to rival products when they offer broadly similar capabilities. OpenAI will also have to hope its AI researchers achieve breakthroughs that give it a more decisive and long-lasting edge over Google. That might convince Apple to rely more heavily on OpenAI again in the future. Or, it could obviate the need for OpenAI to have distribution on Apple’s devices at all.

    This story was originally featured on Fortune.com

    Jeremy Kahn, Beatrice Nolan

    Source link

  • Apple partners with Google to power major Siri AI upgrade – Tech Digest

    Share


    Apple has officially joined forces with Google to use its Gemini AI models as the foundation for a massive Siri overhaul – a move that confirms the iPhone maker is looking externally to accelerate its lagging artificial intelligence strategy.

    The multi-year collaboration, which has just been announced, will see Google’s Gemini 3 technology integrated into future Apple Foundation Models.

    This partnership marks a pragmatic but significant shift for Apple, which has historically prided itself on developing every layer of its technology in-house. Reports suggest the deal is worth approximately $1 billion annually, positioning Google as the primary engine behind the “more personalized” Siri expected to debut later this year.

    The primary reason for this alliance is Apple’s need to catch up. Despite marketing “Apple Intelligence” heavily over the last two years, the company has faced significant development delays, pushing the full Siri revamp into 2026. Internal performance testing reportedly determined that Google’s Gemini offered a more capable and scalable foundation than Apple’s own early models.

    By leveraging Google’s infrastructure, Apple can quickly introduce features that its rivals, such as Samsung and Google’s own Pixel line, already offer. This includes Siri’s ability to understand on-screen content, manage complex multi-step tasks across different apps and utilize personal context from emails and messages to provide more relevant assistance.


    Performance and privacy gains

    The advantages for Apple users are expected to be substantial. The next generation of Siri will transition from a basic command-response assistant to a proactive agent capable of natural dialogue. Because Gemini’s 1.2-trillion-parameter model is far larger than anything Apple currently runs, Siri should become significantly more accurate and versatile.

    Apple has also taken steps to mitigate its biggest brand risk: privacy. To maintain its strict privacy standards, the companies confirmed that these AI features will run on Apple’s own devices and its “Private Cloud Compute” system.

    This means that while Google provides the “brains” or the underlying logic, the actual processing of sensitive user data remains within Apple-controlled environments, theoretically preventing Google from accessing personal user information.

    Market risks and regulatory hurdles

    However, the partnership carries considerable strategic and legal risks. By outsourcing the foundational layer of its AI, Apple risks becoming dependent on a direct competitor. Analysts warn that this could lead to “brand dilution,” where the iPhone’s unique edge is eroded because its core intelligence is identical to that of Android devices.

    The deal has also immediately caught the attention of global regulators. Coming on the heels of major antitrust rulings against Google’s search monopoly, this new alliance, which creates an “AI duopoly”, is being closely monitored by the UK’s Competition and Markets Authority and EU policymakers.

    Critics, including Elon Musk, have already slammed the move as an “unreasonable concentration of power” that could further stifle competition in the rapidly evolving AI landscape.


    For latest tech stories go to TechDigest.tv


    Discover more from Tech Digest

    Subscribe to get the latest posts sent to your email.

    Chris Price

    Source link

  • Google removes ‘dangerous’ AI health summaries, China’s Zeekr set to launch in the UK – Tech Digest

    Share

    Google has removed some of its artificial intelligence health summaries after a Guardian investigation found people were being put at risk of harm by false and misleading information. The company has said its AI Overviews, which use generative AI to provide snapshots of essential information about a topic or question, are “helpful” and “reliable”. But some of the summaries, which appear at the top of search results, served up inaccurate health information. The Guardian

    Malaysia on Sunday temporarily blocked access to Grok, joining a growing list of countries taking action after the generative artificial intelligence chatbot sparked a global backlash by allowing users to create and publish sexualised images. xAI, the Elon Musk-led firm behind Grok, on Thursday said it would restrict image generation and editing to paying subscribers as it addressed lapses that allowed users on X to produce sexualised content of others, often without consent. Reuters


    You’re looking at the latest car from Geely’s Scandinavian brand – not Polestar, not Volvo – the other one, Zeekr. Called the 7GT, it’s an electric shooting brake that’s been designed in Europe, with running gear and tech from the Chinese mothership. You may have heard of Zeekr before, but soon you’ll be seeing a lot more of them – CAR understands Zeekr is the next Geely brand coming to the UK. UK dates and prices aren’t confirmed yet, but we do know it’ll launch in Europe with a starting price of €45,990. Car Magazine

    Manchester United captain Bruno Fernandes’s X account has been hacked, the club said, after a post critical of the club’s co-owners appeared on Sunday. A message on Fernandes’s account, which has more than 4.5m followers, read: “let’s get rid of INEOS”. Ineos, the global chemical company, is owned by British billionaire Sir Jim Ratcliffe, who is the minority owner of United and in charge of football operations. Sky News

    Media companies expect web traffic to their sites from online searches to plummet over the next three years, as AI summaries and chatbots change the way consumers use the internet. An overwhelming majority are also planning to encourage their journalists to behave more like YouTube and TikTok content creators this year, as short-form video and audio content continues to boom. The findings are drawn from a new report from the Reuters Institute for the Study of Journalism, which found media executives around the world fear search engine referrals will fall by 43% over three years. The Guardian 

    The next time you meet a friend for a drink, should you ask if you are being secretly recorded? You might come across as paranoid – surely only spies, politicians and drug dealers worry about being bugged. But it is increasingly likely that every word you say is being recorded. In one recent encounter detailed on social media, a London venture capitalist pulled out his phone at the end of a friendly coffee meeting, inadvertently revealing that the AI note-taking app had been recording the entire conversation. Telegraph 

    Microsoft is now scrambling to reassure hundreds of millions of Office users after a viral backlash that came out of nowhere got totally out of hand. “Why is ‘Office is dead’ talk of the town again?” Windows Latest asks. The answer: viral posts on X, “particularly by Perplexity AI, claiming that Microsoft has killed the Office brand and that millions of users were now using AI overnight.” “BREAKING,” Perplexity posted last week. “Microsoft just renamed Office to ‘Microsoft 365 Copilot app.’ 400 million users just became ‘AI users’ overnight.” Forbes

    For latest tech stories go to TechDigest.tv


    Discover more from Tech Digest

    Subscribe to get the latest posts sent to your email.

    Chris Price

    Source link

  • Google’s new commerce framework cranks up the heat on ‘agentic shopping’

    To further push the limits of consumerism, Google has launched a new open standard for agentic commerce called the Universal Commerce Protocol (UCP). In brief, it’s a framework that connects AI agents with online shopping platforms to help customers buy more things.

    Thanks to the introduction of UCP, Google is offering three new online shopping features. To start, Google’s AI Mode will gain a checkout feature that allows customers to buy eligible products from certain US retailers directly within Google Search. Currently, the feature works with Google Pay, but it will soon add PayPal compatibility and incorporate more capabilities, such as related-product discovery and paying with loyalty points.

    On the merchant side, UCP also establishes the Business Agent feature, which Google said will be “a virtual sales associate that can answer product questions in a brand’s voice.” The Business Agent will launch tomorrow with early adopters including Lowe’s, Michaels, Poshmark, Reebok and more. Also for retailers, UCP underpins the new Direct Offers feature, which lets companies advertising with Google “present exclusive offers for shoppers who are ready to buy, directly in AI Mode.” The Direct Offers feature will work in tandem with the ads in AI Mode that Google is testing.

    With UCP, Google Search, retailers and payment processors are joining forces to make online shopping even easier, whether that means figuring out what product to buy, completing the purchase or offering “post-purchase support.” According to Google, UCP is compatible with existing industry protocols, like Agent2Agent, Agent Payment Protocols and Model Context Protocol. UCP was co-developed with industry giants like Shopify, Etsy and Walmart, and has been endorsed by still more companies across the commerce ecosystem, including Macy’s, Stripe and Visa.
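    Google has not published UCP’s wire format in this piece, so the following is a purely hypothetical sketch of the shape an agent-initiated checkout in a UCP-style flow might take. Every interface, field name and endpoint below is invented for illustration and is not drawn from the real UCP specification.

    ```typescript
    // Hypothetical sketch only: all names and the endpoint are invented and
    // are NOT taken from Google's actual Universal Commerce Protocol spec.

    // An agent-initiated checkout request would need, at minimum, the item,
    // a payment handle, and an idempotency key so a retried agent call
    // cannot double-charge the shopper.
    interface CheckoutRequest {
      merchantId: string;     // participating retailer
      sku: string;            // product the agent resolved from the user's query
      quantity: number;
      paymentToken: string;   // e.g. a tokenized Google Pay credential
      idempotencyKey: string; // guards against duplicate purchases on retry
    }

    interface CheckoutResponse {
      orderId: string;
      status: "confirmed" | "pending" | "declined";
    }

    // The shopping agent POSTs the request to the merchant's endpoint and
    // surfaces the confirmation back in the search or chat UI.
    async function agentCheckout(req: CheckoutRequest): Promise<CheckoutResponse> {
      const res = await fetch("https://merchant.example.com/ucp/checkout", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(req),
      });
      if (!res.ok) throw new Error(`Checkout failed: ${res.status}`);
      return (await res.json()) as CheckoutResponse;
    }
    ```

    The idempotency key reflects a standard design concern in payment APIs of any kind: an autonomous agent that retries a flaky network call should never be able to buy the same item twice.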

    Jackson Chen

    Source link

  • Google removes AI Overviews for certain medical queries | TechCrunch

    Following an investigation by the Guardian that found Google AI Overviews offering misleading information in response to certain health-related queries, the company appears to have removed the AI Overviews for some of those queries.

    For example, the Guardian initially reported that when users asked “what is the normal range for liver blood tests,” they would be presented with numbers that did not account for factors such as nationality, sex, ethnicity, or age, potentially leading them to think their results were healthy when they were not.

    Now, the Guardian says AI Overviews have been removed from the results for “what is the normal range for liver blood tests” and “what is the normal range for liver function tests.” However, it found that variations on those queries, such as “lft reference range” or “lft test reference range,” could still lead to AI-generated summaries.

    When I tried those queries this morning — several hours after the Guardian published its story — none of them surfaced an AI Overview, though Google still gave me the option to ask the same query in AI Mode. In several cases, the top result was actually the Guardian article about the removal.

    A Google spokesperson told the Guardian that the company does not “comment on individual removals within Search,” but that it works to “make broad improvements.” The spokesperson also said that an internal team of clinicians reviewed the queries highlighted by the Guardian and found “in many instances, the information was not inaccurate and was also supported by high quality websites.”

    TechCrunch has reached out to Google for additional comment. Last year, the company announced new features aimed at improving Google Search for healthcare use cases, including improved overviews and health-focused AI models.

    Vanessa Hebditch, the director of communications and policy at the British Liver Trust, told the Guardian that the removal is “excellent news,” but added, “Our bigger concern with all this is that it is nit-picking a single search result and Google can just shut off the AI Overviews for that but it’s not tackling the bigger issue of AI Overviews for health.”


    Anthony Ha

    Source link