ReportWire

Tag: Google

  • Google dismantles 9M-device Android hijack network



    Free apps are supposed to cost you nothing but storage space. But in this case, they may have cost millions of people control over their own internet connections.

    Google says it has disrupted what it believes was the world’s largest residential proxy network, one that secretly hijacked around 9 million Android devices, along with computers and smart home gadgets. Most people had no idea their devices were being used since the apps worked normally, and nothing looked broken.

    But behind the scenes, those devices were quietly routing traffic for strangers, including cybercriminals.

    Sign up for my FREE CyberGuy Report
    Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.


    Google says it disrupted a massive residential proxy network that secretly hijacked about 9 million Android and smart devices. (AaronP/Bauer-Griffin/GC Images)

    How your device became part of a proxy network

    According to Google’s Threat Intelligence Group, the network was tied to a company known as IPIDEA. Instead of spreading through obvious malware, it relied on hidden software development kits, or SDKs, that were embedded inside more than 600 apps. These apps ranged from simple utilities to VPN tools and other free downloads. When you installed one, the app performed its advertised function. But it also enrolled your device into a residential proxy network.

    That means your phone, computer or smart device could be used as a relay point for someone else’s internet traffic. That traffic might include scraping websites, launching automated login attempts or masking the identity of someone conducting shady online activity. From the outside, it looked like that activity came from your home IP address. You wouldn’t see it happening, and in many cases, you wouldn’t notice any major performance issues.

    Google says in a single seven-day period earlier this year, more than 550 separate threat groups were observed using IP addresses linked to this infrastructure. That includes cybercrime operations and state-linked actors. Residential proxy networks are attractive because they make malicious traffic look like normal consumer activity. Instead of coming from a suspicious data center, it appears to come from someone’s living room.

    What Google did to shut it down

    Google says it took legal action in a U.S. federal court to seize domains used to control the infected devices and route proxy traffic. It also worked with companies like Cloudflare and other security firms to disrupt the network’s command-and-control systems. Google claims it also updated Play Protect, the built-in Android security system, so that certified devices would automatically detect and remove apps known to include the malicious SDKs.

    However, Google also warned that many of these apps were distributed outside the official Play Store. That matters because Play Protect can only scan and block threats tied to apps installed through Google Play. Third-party app stores, unofficial downloads and uncertified Android devices carry far greater risk.

    IPIDEA has claimed its service was meant for legitimate business use, such as web research and data collection. But Google’s research suggests the network was heavily abused by criminals. Even if some users knowingly installed bandwidth-sharing apps in exchange for rewards, many did not receive clear disclosure about how their devices were being used.

    Google’s investigation also found significant overlap between different proxy brands and SDK names. What looked like separate services were often tied to the same infrastructure. That makes it harder for consumers to know which apps are safe and which are quietly monetizing their connection.


    Hidden software inside more than 600 apps allegedly turned phones and computers into internet relays for cybercriminals. (David Paul Morris/Bloomberg via Getty Images)

    7 ways you can protect yourself from Android proxy attacks

    If millions of devices can be quietly turned into internet relay points, the big question is: how do you make sure yours isn't one of them? These steps reduce the risk that your phone, TV box or smart device gets pulled into a proxy network without you realizing it.

    1) Stick to official app stores

    Only download apps from the Google Play Store or other trusted app marketplaces. Some apps hide small pieces of code that can secretly use your internet connection, and these often spread through third-party app stores or direct APK files, Android app packages installed manually instead of through the Play Store. When you sideload apps this way, you bypass Google's built-in security checks. Sticking to official stores helps keep those hidden threats off your device.

    2) Avoid “earn money by sharing bandwidth” apps

    If an app promises rewards for sharing your unused internet bandwidth, that’s a major red flag. In many cases, that is exactly how residential proxy networks recruit devices. Even if it sounds legitimate, you are effectively renting out your IP address. That can expose you to abuse, blacklisting or deeper network vulnerabilities.

    3) Review app permissions carefully

    Before installing any app, check what permissions it requests. A simple wallpaper app should not need full network control or background execution privileges. After installation, go into your phone’s settings and audit which apps have constant internet access, background activity rights or special device permissions.

    4) Install strong antivirus software

    Today’s mobile security tools can detect suspicious app behavior, unusual internet activity and hidden background services. Strong antivirus software adds an extra layer of protection beyond what’s built into your device, especially if you’ve installed apps in the past that you’re unsure about. Get my picks for the best 2026 antivirus protection winners for your Windows, Mac, Android and iOS devices at Cyberguy.com.

    5) Keep your devices updated

    Android security updates patch vulnerabilities that proxy operators may exploit. If you’re using an older phone, tablet or Android TV box that no longer receives updates, it may be time to upgrade. Unpatched devices are easier targets for hidden SDK abuse and botnet enrollment.

    6) Use a strong password manager

    If your device ever becomes part of a proxy network or is otherwise compromised, attackers often try to pivot into your accounts next. That’s why you should never reuse passwords. A password manager generates long, unique passwords for every account and stores them securely, so one breach does not unlock your email, banking or social media. Many password managers also include breach monitoring tools that alert you if your credentials appear in leaked databases, giving you a chance to act before real damage is done. Check out the best expert-reviewed password managers of 2026 at Cyberguy.com.

    7) Remove apps you don’t fully trust

    Go through your installed apps and delete or uninstall anything you don’t recognize or haven’t used in months. The fewer apps running on your device, the fewer opportunities there are for hidden SDKs to operate. If you suspect your device has been compromised, consider a full reset and reinstall only essential apps from trusted sources.



    Threat groups and state-linked actors allegedly used compromised devices to mask online activity and automate attacks. (Photo Illustration by Serene Lee/SOPA Images/LightRocket via Getty Images)

    Kurt’s key takeaway

    Residential proxy networks operate in a gray area that sounds harmless on paper but can quickly become a shield for cybercrime. In this case, millions of everyday devices were quietly enrolled into a system that attackers used to hide their tracks. Google’s takedown is a major move, but the broader market for residential proxies is still growing. That means you need to be cautious about what you install and what permissions you grant. Free apps are rarely truly free. Sometimes, the product being sold is you and your internet connection.

    Have you ever installed an app that promised rewards for sharing bandwidth, or used a free VPN without thinking twice about it? Let us know your thoughts by writing to us at Cyberguy.com.



    Copyright 2026 CyberGuy.com. All rights reserved.



  • Google Maps will finally be usable in South Korea


    Google will finally be able to provide real-time driving and walking directions in South Korea, The New York Times reported. The company has received permission from the nation’s Transport Ministry to export geographic data out of the country, which will allow it to provide GPS services as well as detailed listings for restaurants and other businesses.

    “We welcome today’s decision and look forward to our ongoing collaboration with local officials to bring a fully functioning Google Maps to Korea,” Google’s senior executive Cris Turner told the NYT in a statement. However, the approval is contingent “on the condition that strict security requirements are met,” a spokesperson from the Transport Ministry said. Those conditions reportedly restrict Google from displaying sensitive military sites and longitude and latitude coordinates.

    South Korea has generally restricted the export of 1:5,000-scale map data over national security concerns, as it is still technically at war with its neighbor North Korea. Google has not been able to provide mapping directions or business details since it arrived in the country, though it applied for export permission twice, in 2007 and 2016.

    This lack of data sharing has reportedly been a bone of contention in trade talks with the US. Google argued that it was unfairly handicapped by the restrictions that allowed local apps like Naver to thrive.

    However, critics in the nation have expressed concern that Google could now come in and monopolize the market. “If Naver and Kakao are weakened or pushed out and Google later raises prices, that becomes a monopoly. Then, even companies that rely on map services — logistics firms, for example — become dependent [on Google],” geography professor Choi Jin-mu told Reuters.


    Steve Dent


  • Google paid startup Form Energy $1B for its massive 100-hour battery | TechCrunch


    Google announced earlier this week that it was building a new data center in Minnesota that would be powered by a mix of wind, solar, and a unique battery built by startup Form Energy that is capable of discharging for days on end.

    Now we know the price tag for that feat of electrochemical engineering: about $1 billion, according to The Information.

    Form Energy’s massive iron-air battery is capable of delivering a continuous 300 megawatts of electricity over 100 hours. It works by breathing, in a sense — oxygen pumped into the cells rusts iron, which releases electrons. The battery will work to smooth the flow of electrons from 1.4 gigawatts of wind power and 200 megawatts of solar power.

    The startup has been chipping away at the technology for years, and it has built a factory in West Virginia to produce the batteries. But it hadn’t landed a major customer until this recent deal with Google.

    With a big order on the books, Form Energy CEO Mateo Jaramillo said that his company is in the process of raising a $500 million round. Form has raised $1.4 billion to date, according to PitchBook. The company plans to go public next year.


    Tim De Chant


  • 300,000 Chrome users hit by fake AI extensions



    Your web browser may feel like a safe place, especially when you install helpful tools that promise to make your life easier. But security researchers have uncovered a dangerous campaign in which more than 300,000 people installed Chrome extensions pretending to be artificial intelligence (AI) assistants. Instead of helping, these fake tools secretly collect sensitive information like your emails, passwords and browsing activity.

    They used familiar names like ChatGPT, Gemini and AI Assistant. If you use Chrome and have installed any AI-related extension, your personal information may already be exposed. Even worse, some of these malicious extensions are still available today, putting more people at risk without their knowledge.


    More than 300,000 Chrome users installed fake AI extensions that secretly harvested sensitive data. (Kurt “CyberGuy” Knutsson)

    What you need to know about fake AI extensions

    Security researchers at browser security company LayerX discovered a large campaign involving 30 malicious Chrome extensions disguised as AI-powered assistants (via BleepingComputer). Together, these extensions were installed more than 300,000 times by unsuspecting users.

    Some of the most popular extensions included names like AI Sidebar with 70,000 users, AI Assistant with 60,000 users, ChatGPT Translate with 30,000 users, and Google Gemini with 10,000 users. Another extension called Gemini AI Sidebar had 80,000 users before it was removed.

    These extensions were distributed through the official Chrome Web Store, which made them appear legitimate and trustworthy. Even more concerning, researchers found that many of these extensions were connected to the same malicious server, showing they were part of a coordinated effort.

    While some extensions have since been removed, others remain available. This means new users could still unknowingly install them and expose their personal data. Here’s the list of the affected extensions:

    • AI Assistant
    • Llama
    • Gemini AI Sidebar
    • AI Sidebar
    • ChatGPT Sidebar
    • Grok
    • Asking ChatGPT
    • ChatGBT
    • Chat Bot GPT
    • Grok Chatbot
    • Chat With Gemini
    • XAI
    • Google Gemini
    • Ask Gemini
    • AI Letter Generator
    • AI Message Generator
    • AI Translator
    • AI For Translation
    • AI Cover Letter Generator
    • AI Image Generator ChatGPT
    • Ai Wallpaper Generator
    • Ai Picture Generator
    • DeepSeek Download
    • AI Email Writer
    • Email Generator AI
    • DeepSeek Chat
    • ChatGPT Picture Generator
    • ChatGPT Translate
    • AI GPT
    • ChatGPT Translation
    • ChatGPT for Gmail



    These malicious tools were listed in the official Chrome Web Store, making them appear legitimate and trustworthy. (LayerX)

    How the fake AI Chrome extension attack works

    These fake extensions pretend to offer helpful AI features, such as translating text, summarizing emails, or acting as an AI assistant. But behind the scenes, they quietly monitor what you are doing online.

    Once installed, the extension gains permission to view and interact with the websites you visit. This allows it to read the contents of web pages, including login screens where you enter your username and password.

    In some cases, the extensions specifically targeted Gmail. They could read your email messages directly from your browser, including emails you received and even drafts you were still writing. This means attackers could access private conversations, financial information and sensitive personal details.

    The extensions then sent this information to servers controlled by the attackers. Because they loaded content from remote servers, the attackers could change the extensions' behavior at any time without publishing an update.

    Some versions could also activate voice features through your browser. This could potentially capture spoken conversations near your device and send transcripts back to the attackers.

    If you installed one of these extensions, attackers may already have access to extremely sensitive information. This includes your email content, login credentials, browsing habits and possibly even voice recordings.

    We reached out to Google for comment, and a spokesperson told CyberGuy that the company “can confirm that the extensions from this report have all been removed from the Google Web Store.”



    Once installed, the extensions could read emails, capture passwords, monitor browsing activity and send the data to attacker-controlled servers. (Bildquelle/ullstein bild via Getty Images)

    7 ways you can protect yourself from malicious Chrome extensions

    If you have ever installed an AI-related Chrome extension, taking a few simple precautions now can help protect your accounts and prevent further damage.

    1) Remove any suspicious or unused browser extensions

    On a Windows PC or Mac, open Chrome and type chrome://extensions into the address bar. Review every extension listed. If you see anything unfamiliar, especially AI assistants you don’t remember installing, click “Remove” immediately. Malicious extensions depend on going unnoticed. Removing them stops further data collection and cuts off the attacker’s access to your information.

    2) Change your passwords

    If you installed any suspicious extension, assume your passwords may be compromised. Start by changing your email password first, since email controls access to most other accounts. Then update passwords for banking, shopping and social media accounts. This prevents attackers from using stolen credentials to break into your accounts.

    3) Use a password manager to create and protect strong passwords

    A password manager generates unique, complex passwords for each account and stores them securely. This prevents attackers from accessing multiple accounts if one password is stolen. Password managers also alert you if your login credentials appear in known data breaches, helping you respond quickly and protect your identity. Check out the best expert-reviewed password managers of 2026 at Cyberguy.com.

    4) Install strong antivirus software and keep it active

    Good antivirus software can detect malicious browser extensions, spyware, and other hidden threats. It scans your system for suspicious activity and blocks harmful programs before they can steal your information. This adds an important layer of protection that works continuously in the background to keep your device safe. Get my picks for the best 2026 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com.

    5) Use an identity theft protection service

    Identity theft protection services monitor your personal data, including email addresses, financial accounts, and Social Security numbers, for signs of misuse. If criminals try to open accounts or commit fraud using your information, you receive alerts quickly. Early detection allows you to act fast and limit financial and personal damage. See my tips and best picks on how to protect yourself from identity theft at Cyberguy.com.

    6) Keep your browser and computer fully updated

    Software updates fix security vulnerabilities that attackers exploit. Enable automatic updates for Chrome and your operating system so you always have the latest protections. These updates strengthen your defenses against malicious extensions and prevent attackers from taking advantage of known weaknesses.

    7) Use a personal data removal service

    Personal data removal services scan data broker websites that collect and sell your personal information. They help remove your data from these sites, reducing what attackers can find and use against you. Less exposed information means fewer opportunities for criminals to target you with scams, identity theft or phishing attacks.

    Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.

    Kurt’s key takeaway

    Even tools designed to make your life easier can become tools for cybercriminals. Malicious extensions often hide behind trusted names and convincing features, making them difficult to spot. You can significantly reduce your risk by reviewing your browser extensions regularly, removing anything suspicious and using protective tools like password managers and strong antivirus software.

    Have you checked your browser extensions recently? Let us know your thoughts by writing to us at Cyberguy.com.



    Copyright 2026 CyberGuy.com. All rights reserved.



  • India’s AI boom pushes firms to trade near-term revenue for users | TechCrunch


    Tech giants' efforts to ramp up AI adoption in India may be about to hit a turning point, as companies end free promotions in hopes of converting the world's fourth-largest economy into a windfall of paid subscribers.

    India became the world’s largest market for generative AI app downloads in 2025, according to market intelligence firm Sensor Tower, widening its lead over the U.S. as installs jumped 207% year-over-year.

    Companies including OpenAI, Google, and Perplexity rolled out extended free premium offers to accelerate user growth in the price-sensitive market. Leading AI firms have also backed India in its push to become a global artificial intelligence hub. A major AI summit in New Delhi last week was attended by leaders including OpenAI's Sam Altman, Anthropic's Dario Amodei, and Alphabet CEO Sundar Pichai — a sign of the country's growing weight in the global AI race.

    Now, some of those early promotional pushes are winding down. Perplexity ended its bundled Pro offer with Indian telco Airtel in January, while OpenAI’s free ChatGPT Go access in India is no longer available, potentially setting the stage for a clearer test of how many newly acquired users convert to paying subscribers.

    Despite strong download growth, India still generates a disproportionately small share of AI app revenue. The country accounts for about 1% of in-app purchases even as it drives roughly 20% of global GenAI app downloads, according to Sensor Tower data shared with TechCrunch, highlighting the monetization challenge in one of the industry's fastest-growing markets.

    GenAI app adoption in India accelerated sharply through 2025, with downloads peaking in September and October at year-over-year growth rates of about 320% and 260%, respectively, according to the data. Yet the surge in usage did not fully translate into revenue gains. In November and December 2025, AI app in-app purchase revenue in India fell 22% and 18% month over month, respectively. ChatGPT’s revenue dropped even more sharply — down 33% and 32% over the same period following the November launch of free sub-$5 ChatGPT Go access — reflecting the near-term impact of aggressive promotional pushes.

    Image Credits: Sensor Tower

    ChatGPT still commands more than 60% of GenAI in-app revenue in India, meaning shifts in its pricing strategy can significantly influence overall market performance.


    Alongside promotional pushes, Sensor Tower attributed the surge in GenAI app adoption in India last year to a mix of new product launches, including the debut of platforms such as DeepSeek, Grok, and Meta AI, as well as upgrades to major chatbots like ChatGPT, Gemini, Claude, and Perplexity. Viral interest in AI-generated content also helped fuel adoption, with content creation and editing tools accounting for seven of the 20 most downloaded GenAI apps in India in 2025.

    The user surge has been equally pronounced. India accounted for about 19% of the global user base of leading AI assistant apps in 2025, ahead of the U.S. at 10%, Sensor Tower said. ChatGPT continues to dominate the Indian market by monthly active users, though rivals including Google’s Gemini and Perplexity have also seen rapid growth following promotional offers. ChatGPT was the most downloaded GenAI app in India and globally in 2025, according to earlier Sensor Tower data. Earlier this month, OpenAI’s CEO said that the chatbot now has more than 100 million weekly active users in India.

    The promotional push in India reflects a broader strategy by AI firms to reduce pricing friction in a highly value-conscious market, betting that early user adoption and engagement will translate into stronger long-term retention once free access periods expire, said Sneha Pandey, insights analyst at Sensor Tower.

    India’s appeal lies in its massive digital base. The country has more than a billion internet users and around 700 million smartphone owners, making it one of the largest potential markets for AI services globally and a critical battleground for user growth.

    Nonetheless, user engagement in India still trails more mature markets. In 2025, users of leading AI chatbot apps in the U.S. spent about 21% more time per week on the apps than their counterparts in India and logged 17% more sessions on average, per Sensor Tower.

    “AI in-app revenues will likely see meaningful but gradual improvement as users become more deeply integrated into these platforms, making sustained engagement paramount,” Pandey told TechCrunch.

    She added that pricing pressure in India is likely to remain elevated given the country’s young and value-conscious user base, making lower-cost tiers, telecom bundles, and micro-transaction models important for long-term retention.

    ChatGPT remained the clear market leader in India entering 2026, with 180 million monthly active users in January, per Sensor Tower, followed by Google’s Gemini with 118 million, Perplexity with 19 million, and Meta AI with 12 million. The figures underline both the scale of India’s AI opportunity and the growing challenge for firms to convert rapid user adoption into sustained revenue.

    Google, OpenAI, and Perplexity did not respond to requests for comment.


    Jagmeet Singh


  • Sam Altman Defends A.I. Energy Use With Human Comparison, Sparking Debate


    Sam Altman challenged critics of A.I.’s water and electricity consumption. Photo by John MacDougall/AFP via Getty Images

    Sam Altman is pushing back on mounting criticism over the environmental toll of A.I. The OpenAI chief has dismissed claims about A.I.’s water consumption as “fake” and drawn comparisons between the electricity required to power A.I. systems and the energy it takes to develop human intelligence.

    Figures suggesting that tools like ChatGPT consume multiple gallons of water per query are “totally insane” and have “no connection to reality,” Altman said in a Feb. 20 interview with The Indian Express on the sidelines of the AI Impact Summit in New Delhi. Last year, Altman claimed that ChatGPT uses 0.000085 gallons of water per query—roughly one-fifteenth of a teaspoon—though he did not explain how he calculated that figure.
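    Altman's figure is easy to sanity-check against the teaspoon comparison (assuming US customary units, where one gallon equals 768 teaspoons):

```python
# Sanity check of the claimed per-query water use against
# the "one-fifteenth of a teaspoon" comparison (US customary units).
gallons_per_query = 0.000085    # Altman's claimed figure
tsp_per_gallon = 768            # 1 US gallon = 768 teaspoons

tsp_per_query = gallons_per_query * tsp_per_gallon
# ≈ 0.065 teaspoon, close to 1/15 ≈ 0.067, so the comparison roughly holds
```

    The conversion checks out; what the figure itself is based on remains unexplained.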

    A.I.’s water footprint largely stems from the need for evaporative cooling systems used to keep data center hardware from overheating. But Altman argued that companies like OpenAI are no longer directly managing such cooling processes. Many A.I. developers, he noted, are shifting toward cooling systems that recirculate liquid rather than continually drawing fresh supplies. Meanwhile, tech giants like Microsoft, Meta, Google and Amazon have pledged to replenish more water than they withdraw by 2030.

    Even so, data centers continue to drink up water at a rapid pace. Total A.I.-related water consumption for cooling reached 23.7 cubic kilometers in 2025, a 38 percent increase over 2020, and is expected to more than triple over the next 25 years, according to a January report from Xylem. Despite the industry’s pivot to alternative methods, the report found that 56 percent of data center capacity still relies on some form of evaporative cooling.

    Altman was more measured when it came to electricity usage. “What is fair, though, is the energy consumption,” he said. “We need to move towards nuclear, wind, and solar very quickly.”

    Last April, the International Energy Agency reported that data centers accounted for roughly 1.5 percent of global electricity consumption in 2024. Their power use is rising at a rate more than four times faster than overall electricity demand and is expected to more than double by 2030.

    In response, major tech companies are pursuing data center agreements tied to alternative energy sources, including nuclear power, to ease pressure on grids. Altman, who previously led Y Combinator, has personally invested in nuclear ventures such as Oklo, which is developing small-scale nuclear plants, and Helion, which aims to commercialize nuclear fusion.

    The OpenAI CEO also argued that critics overlook the energy required to develop human intelligence. “People talk about how much energy it takes to train an A.I. model relative to how much it costs a human to do one inference query,” he said. “But it also takes a lot of energy to train a human—it takes, like, 20 years of life and all the food you eat during that time before you get started.”

    A more appropriate comparison, he suggested, would measure the energy used by a fully trained A.I. model to answer a question against that used by a human doing the same task. “Probably A.I. has already caught up on an energy efficiency basis measured that way.”

    The remarks quickly sparked debate online over whether such comparisons are appropriate. “He’s saying a really big spreadsheet and a baby are morally equivalent,” wrote Matt Stoller, research director of the American Economic Liberties Project, in a post on X. Sridhar Vembu, founder and chief scientist of software firm Zoho Corporation, also took issue with the OpenAI chief’s statements. A.I. should “quietly recede into the background” instead of dominating our lives, said the billionaire on X. “I do not want to see a world where we equate a piece of technology to a human being.”



    Alexandra Tremayne-Pengelly


  • YouTube TV billing scam emails are hitting inboxes



    An email arrived that looked like a routine billing alert for YouTube TV Premium. Near the top, it displayed “BILLING FAILED” in capital letters. Below that, the message claimed the payment was declined and urged immediate action to keep streaming. This email was sent to us by Jackie from New York, NY, who immediately knew something was wrong.

    “I’m not a YouTube TV Premium subscriber so I knew right away this was a scam. So why am I receiving these emails?”

    — Jackie from New York, NY

    That question matters. If a billing alert references a service you do not use, it is almost always a scam. Even so, the email itself looked legitimate at a glance. Billing notices like this are common, and scammers rely on that familiarity to slip past quick checks.

    Another warning sign appeared in the sender’s details. The message was routed through a domain with no connection to Google or YouTube. That mismatch confirmed what Jackie already suspected.

    Sign up for my FREE CyberGuy Report
    Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.

    TAX SEASON SCAMS 2026: FAKE IRS MESSAGES STEALING IDENTITIES

    Cybersecurity experts warn that billing emails from domains unrelated to Google or YouTube are a major red flag. (Photo by S3studio/Getty Images)

    Why this scam feels so convincing

    Scammers understand behavior. People skim emails. They react quickly when access to familiar services feels threatened. This message uses recognizable branding, clean formatting and simple language. It also assumes the recipient already subscribes. That assumption is intentional. These emails go out in bulk, knowing some recipients really do have YouTube TV and may act before verifying.

    Urgency language is meant to push for quick action

    Scam emails rely on pressure. This one uses several subtle cues.

    ‘BILLING FAILED’ draws immediate focus

    Capital letters pull attention to the problem first. It feels like a system notice, even though no real account check took place.

    ‘Fix your payment now to keep streaming’ creates momentum

    That line suggests access could stop at any moment. Scammers know interruptions feel urgent, so they push fast decisions.

    ‘Status: Payment declined’ sounds technical

    The word “status” makes the message feel automated and official. In reality, scammers use vague labels because they cannot see real billing data.

    ‘Date: Today’ adds time pressure

    Including “today” makes the issue feel current and unresolved. Legitimate companies rarely demand same-day action through email links alone.

    When urgency replaces clarity, that pressure itself becomes the warning sign.

    ROBINHOOD TEXT SCAM WARNING: DO NOT CALL THIS NUMBER


    Scam emails mimicking YouTube TV billing notices use urgent language and fake support buttons to steal login and payment details. (Robert Michael/picture alliance via Getty Images)

    Red flags hiding in plain sight

    The layout of the email matters as much as the wording.

    “Confirm billing” buttons are designed to prompt clicks

    The red CONFIRM BILLING button encourages action before verification. Real companies usually direct users to sign in normally, not through a single email button.

    “Contact support” links can be misleading

    The black CONTACT SUPPORT button looks official and helpful. In scam emails, these links often lead to fake support pages or phishing forms.

    Color and design influence behavior

    Red suggests urgency. Dark colors suggest authority. Familiar branding builds comfort. Together, they encourage quick action.

    If an email pushes any button to fix a problem, pause and verify first.

    The biggest red flag most people miss

    The message claims to be about YouTube TV. The sending infrastructure points somewhere else. Lifeheaters.com has no legitimate relationship with Google or YouTube. Billing emails should always come from official domains tied directly to the company.

    We reached out to Google, YouTube’s parent company, and a spokesperson told us, “We can confirm that this is a phishing scam and not an official communication from YouTube.”

    How to protect yourself from YouTube TV billing email scams

    If you receive a billing alert like this, pause before acting. Scammers rely on speed and stress. These steps help you stay in control.

    1) Go straight to the official website or app

    Instead of clicking links in the email, open a new browser tab. Then go directly to the official YouTube TV website or app. Real billing issues always appear inside your account dashboard.

    2) Check billing inside your account settings

    Once you are logged in, review your payment status. If there is a real problem, you will see it there. If everything looks normal, the email is fake.

    3) Inspect links before you click

    Hover your cursor over any link in the email and look closely at the destination. If the domain does not clearly match Google or YouTube, do not click it. That mismatch is a major warning sign. Installing strong antivirus software on all your devices adds a critical layer of protection: it can block malicious links, flag phishing pages and stop malware before it installs, which matters if you accidentally click the wrong thing. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.

    Get my picks for the best 2026 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com.

    4) Act fast if you already clicked

    If you clicked the link or entered information, respond quickly. Change your Google password right away, and consider using a password manager to securely store and generate complex passwords, reducing the risk of password reuse. Then review recent account activity and payment methods for anything suspicious.

    Next, see if your email has been exposed in past breaches. Our No. 1 password manager pick includes a built-in breach scanner that checks whether your email address or passwords have appeared in known leaks. If you discover a match, immediately change any reused passwords and secure those accounts with new, unique credentials.

    Check out the best expert-reviewed password managers of 2026 at Cyberguy.com.

    5) Remove your data from data broker sites

    Scammers often target people using leaked personal data. A data removal service helps reduce how much of your information is floating around online. Less exposed data means fewer targeted scam attempts.

    While no service can guarantee the complete removal of your data from the internet, a data removal service is really a smart choice. They aren’t cheap, and neither is your privacy. These services do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you.

    Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.


    6) Watch for sender domains that do not match

    Legitimate companies send billing emails from their own domains. A message about YouTube TV should never route through an unrelated site like lifeheaters.com. That disconnect alone is enough to walk away.
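    The sender-domain check described in the steps above can be sketched in a few lines of Python. This is a rough illustration, not an official filter: the allowlist and the sample addresses are hypothetical, and real phishing defenses do far more than string matching.

```python
from email.utils import parseaddr

# Hypothetical allowlist of official sender domains for this example.
OFFICIAL_DOMAINS = {"google.com", "youtube.com"}

def sender_looks_official(from_header: str) -> bool:
    """Return True only if the sender's domain is an official domain
    or a subdomain of one (e.g. payments.google.com)."""
    _, address = parseaddr(from_header)
    domain = address.rpartition("@")[2].lower()
    if not domain:
        return False
    return any(domain == d or domain.endswith("." + d) for d in OFFICIAL_DOMAINS)

# A message "about YouTube TV" routed through an unrelated domain fails the check.
print(sender_looks_official("YouTube TV <billing@lifeheaters.com>"))  # False
print(sender_looks_official("YouTube TV <no-reply@youtube.com>"))     # True
```

    Note that the display name ("YouTube TV") is ignored entirely; only the actual address after the @ sign counts, which is exactly the detail scammers hope you never look at.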

    7) Never update payment info through email links

    Scammers want your login details or credit card number. Avoid giving them either. Always update billing information directly inside your account, not through an email prompt.

    HOW TO SAFELY VIEW YOUR BANK AND RETIREMENT ACCOUNTS ONLINE


    Google confirmed a YouTube TV “billing failed” email routed through an unrelated domain was a phishing scam. (Jakub Porzycki/NurPhoto via Getty Images)

    Kurt’s key takeaways

    This email looked polished. The message felt urgent. The branding felt familiar. Yet one small detail gave it away. Billing emails should always come from official domains and verified accounts. When they do not, trust your instincts and verify independently. Pausing for ten seconds can save you weeks of cleanup.

    Have you received a billing or subscription email that looked real but turned out to be fake? What tipped you off? Let us know your thoughts by writing to us at Cyberguy.com.



    Copyright 2026 CyberGuy.com. All rights reserved.


  • Google boss says research needed on AI threats, Microsoft confirms private emails read by Copilot AI – Tech Digest



    More research on the threats of artificial intelligence (AI) “needs to be done urgently”, the boss of Google DeepMind has told BBC News. In an exclusive interview at the AI Impact Summit in Delhi, Sir Demis Hassabis said the industry wanted “smart regulation” for “the real risks” posed by the tech. Many tech leaders and politicians at the Summit have called for more global governance of AI, ahead of an expected joint statement as the event draws to a close. But the US has rejected this stance, with White House technology adviser Michael Kratsios saying: “AI adoption cannot lead to a brighter future if it is subject to bureaucracies and centralised control.” BBC

    Elon Musk is suing the EU over a landmark €120m (£105m) fine against his social media company X, accusing Brussels officials of bias. X launched an appeal against December’s fine at the EU General Court earlier this week, in what is the first legal challenge to Europe’s tough digital laws. The challenge will escalate a row between Mr Musk, Brussels and the White House after the Trump administration claimed EU policies are suppressing free speech. Telegraph 

    Mind is launching a significant inquiry into artificial intelligence and mental health after a Guardian investigation exposed how Google’s AI Overviews gave people “very dangerous” medical advice. In a year-long commission, the mental health charity, which operates in England and Wales, will examine the risks and safeguards required as AI increasingly influences the lives of millions of people affected by mental health issues worldwide. Guardian 

    An ongoing phishing campaign targets Microsoft 365 users by abusing OAuth tokens to gain long‑term access to corporate data. It focuses on business users in North America and aims to compromise Outlook, Teams and OneDrive without directly stealing passwords. Instead of attacking login pages with fake forms, the operators trick victims into completing a real sign‑in process on Microsoft’s own device login portal, which makes the attack harder for both users and basic security tools to spot. Cybersecuritynews


    Microsoft has confirmed that a bug allowed its Copilot AI to summarize customers’ confidential emails for weeks without permission. The bug, first reported by Bleeping Computer, allowed Copilot Chat to read and outline the contents of emails since January, even if customers had data loss prevention policies intended to keep their sensitive information out of Microsoft’s large language model. Copilot Chat allows paying Microsoft 365 customers to use the AI-powered chat feature in its Office software products, including Word, Excel, and PowerPoint. Techcrunch

    If you want an even better AI model, there could be reason to celebrate. Google on Thursday announced the release of Gemini 3.1 Pro, characterizing the model’s arrival as “a step forward in core reasoning.” Measured by the release cadence of machine learning models, Gemini 3.1 Pro is hard on the heels of recent model debuts from Anthropic and OpenAI. There’s barely enough time to start using new US commercial AI models before a competitive alternative surfaces. And that’s to say nothing of the AI models coming from outside the US, like Qwen3.5. TheRegister


    For latest tech stories go to TechDigest.tv



    Chris Price

  • Gemini can now generate a 30-second approximation of what real music sounds like


    Google has announced that using its newly incorporated Lyria 3 model, Gemini users will be able to generate 30-second music tracks based on a prompt, or remix an existing track to their liking. The new model builds on Gemini’s pre-existing ability to generate text, images and video, and will also be available in YouTube’s “Dream Track” feature, where it can be used to generate detailed backing tracks for Shorts.

    As with some other music generation tools, Gemini doesn’t require a very detailed prompt to produce serviceable results. Google’s example prompt is “a comical R&B slow jam about a sock finding their match,” but after playing with Lyria 3, you can definitely get more granular about individual elements of a track — changing the tempo or the style of drumming, for example — if you want to. Outside of text, Gemini can also generate music based on a photo or video, and tracks can be paired with album art created by Google’s Nano Banana image model.

    Google says that Lyria 3 improves on its previous audio generation models in its ability to create more “realistic and musically complex” tracks, give prompters more control over individual components of a song and automatically generate lyrics. Gemini’s outputs are limited to 30-second clips for now, but given how Google’s promotional video shows off the feature, it’s not hard to imagine those clips getting longer or the model getting incorporated into other apps, like Google Messages.

    Like Gemini’s other AI-generated outputs, songs made with Lyria 3 are also watermarked with Google’s SynthID, so a Gemini clip can’t as easily be passed off as a human one. Google started rolling out its SynthID Detector for identifying AI-generated content at Google I/O 2025. The sample tracks Google included alongside its announcement are convincing, but you might not need the company’s tool to notice their machine-made qualities. The instrumental parts of Gemini’s clips often sound great, but the composition of the lyrics Lyria 3 produces sounds alternately corny and strange.

    If you’re curious to try Lyria 3 for yourself, Google says you can prompt tracks in Gemini starting today, provided you’re 18 years or older and speak English, Spanish, German, French, Hindi, Japanese, Korean or Portuguese.

    Ian Carlos Campbell

  • Google I/O 2026 is set for May 19 and 20


    We’ll soon get a closer look at a bunch of features and updates Google has planned for Android and its other services. The company has confirmed that Google I/O 2026 will take place on May 19 and 20. As always, Google will stream some of the keynotes and sessions for free, including the opening keynote (during which the company makes the bulk of its major I/O announcements).

    Although I/O is primarily a conference for developers, it’s typically where we first learn about major upcoming Android changes, which of course affect tens of millions of people. Expect a lot of news about Google’s AI efforts as well, such as what’s next for Gemini.

    As has been the case for several years, Google revealed the conference’s dates for 2026 after enough folks completed a puzzle on the I/O website. This year’s puzzle has multiple “builds” to play through, all of which use Gemini.

    They start with a mini-golf game in which a virtual caddy that’s powered by Gemini offers some of the most anodyne advice imaginable. The second build is a nonogram. If you’ve ever played a Picross game, you’ll know what to do here. It’s about using logic to place tiles on a grid in order to create an image. Here, Google is using Gemini to generate “endless game boards.”

    The other three minigames are Word Wheel (which “leverages Gemini 3 to automate level design”), Super Sonicbot (which “uses Gemini to introduce microphone mechanics where noise controls the Android Bot’s altitude”) and Stretchy Cat. The latter “uses Gemini 3 as a stage designer balancing game mechanics and difficulty to create endless play.”

    Kris Holt

  • Former NPR Host Accuses Google Of Copying His Voice For AI Offering


    Podcaster David Greene is accusing Google of using his voice without permission to create one of the AI voices in the company’s research and note-taking tool NotebookLM.

    Google added Audio Overviews in the second half of 2024, allowing NotebookLM users to turn pages of notes and documents of any kind into brief podcast episodes. The AI-generated podcasts typically have one male and one female cohost. Greene now claims that the male co-host was clearly trained on hours of his recorded work, which it allegedly mimics, and he is suing the company for failing to get his permission or to offer him any compensation.

    “Without his consent, Google sought to replicate Mr. Greene’s distinctive voice—a voice made iconic over decades of decorated radio and public commentary—to create synthetic audio products that mimic his delivery, cadence, and persona,” the complaint filed in a state trial court in Santa Clara County, California claims.

    Greene was the co-host of NPR’s award-winning radio program Morning Edition for roughly a decade, and he now hosts KCRW’s Left, Right & Center podcast.

    Following the release of the AI podcasting feature in 2024, the internet praised how the podcasters sounded more human than expected. At the time, Forbes called the feature “eerily human,” while WIRED said that the cadence and vocal performance of the virtual podcasters, and the use of filler words or peculiar phrasing, made the product “stand out.”

    Google has called NotebookLM one of the company’s “breakout AI successes.”  The lawsuit claims that the company “misappropriated a beloved public radio and podcast host’s career, identity, and livelihood as raw material for a tech company’s bottom line without any compensation.”

    Greene was first alerted to the similarity by colleagues, and he then consulted an AI forensic firm to confirm his suspicions. According to the lawsuit, the tests indicated a 53-60% confidence that the voice was Greene’s, with any confidence score above 50% deemed “relatively high.” The CEO of the unnamed forensic company eventually concluded that it was their “confident opinion that the Google Podcast model was trained on David Greene’s voice,” per the lawsuit.

    “These allegations are baseless,” Google spokesperson José Castañeda told Gizmodo. “The sound of the male voice in NotebookLM’s Audio Overviews is based on a paid professional actor Google hired.”

    The use of intellectual and artistic property has been a huge issue in AI, leading to several high-profile lawsuits aimed at AI industry giants like OpenAI and Google. Models need lots of data for training, but with limited regulatory guardrails, the lines blur when it comes to proper authorization by and compensation for those who have labored to create the stuff it trains on.

    When it comes to mimicking likenesses, such as in voice or video generation, there is also the added uncanny experience of individuals having to surrender all autonomy over their own voice or image, as users can have the models do and say pretty much anything that they want. In a bit of high-profile fallout in 2024, Scarlett Johansson complained about OpenAI after the company allegedly used or replicated her voice to power a ChatGPT voice, even after the actress (who famously voiced an AI companion in the 2013 movie “Her”) declined the company’s requests for her participation.

    Ece Yildirim

  • Reddit, Meta, and Google Voluntarily Gave DHS Info of Anti-ICE Users, Report Says


    Reddit, Meta, and Google voluntarily “complied with some of the requests” for identifying details of users critical of Immigration and Customs Enforcement (ICE), according to an anonymously sourced New York Times report. The requests were part of a recent wave of administrative subpoenas the Department of Homeland Security has been sending to Big Tech over the past few months.

    Those three companies, plus Discord, have received “hundreds” of such requests that have come from DHS recently. Meta, it should be noted, is the parent company of Instagram, Facebook, and WhatsApp.

    Administrative subpoenas used for this purpose represent an escalation. This tool, which comes not from a judge but from DHS itself, was formerly reserved for situations like child abductions, according to the Times.

    The users were targeted because their posts “criticized ICE or pointed to the locations of ICE agents,” the Times says.

    A Google spokesperson replied to the Times with a statement, saying “When we receive a subpoena, our review process is designed to protect user privacy while meeting our legal obligations,” and “We inform users when their accounts have been subpoenaed, unless under legal order not to or in an exceptional circumstance. We review every legal demand and push back against those that are overbroad.”

    Gizmodo requested comment from Meta, Discord, and Reddit. We will update if we hear back.

    According to the Times, at least some of the companies involved have said that they notify users of these requests from DHS and give them a 14-day window to “fight the subpoena in court” before complying.

    Amazon has also been accused of at least some degree of participation with ICE’s ongoing mass deportation efforts. In October, Amazon-owned Ring announced a partnership with Flock that would loop the AI-powered network into the content coming from users’ doorbell cameras. According to a 404 Media investigation, that network feeds information to law enforcement agencies at the local and federal levels, allowing for reasonable concern that ICE has access to all that footage.

    Protesters have launched an effort called “Resist and Unsubscribe” targeting ten tech companies they perceive as exceptionally supportive of ICE. That list includes Meta, Google, and Amazon, but not Reddit.

    Mike Pearl

  • Homeland Security reportedly sent hundreds of subpoenas seeking to unmask anti-ICE accounts | TechCrunch


    The Department of Homeland Security has been increasing pressure on tech companies to identify the owners of social media accounts that criticize Immigration and Customs Enforcement (ICE), according to The New York Times.

    This echoes other recent reporting, with Bloomberg pointing to five cases in which Homeland Security sought to identify the owners of anonymous Instagram accounts, with the department withdrawing its subpoenas after the owners sued. And a Washington Post story described Homeland Security’s growing use of administrative subpoenas — which do not require the approval of a judge — to target Americans.

    Now the NYT says a practice that was previously used sparingly has become increasingly common in recent months, with the department sending hundreds of these subpoenas to Google, Reddit, Discord, and Meta. The subpoenas reportedly focused on accounts that did not have a real name attached and either criticized ICE or described the location of ICE agents.

    Google, Meta, and Reddit have reportedly complied in at least some cases. Echoing past comments, Google said that it informs users of these subpoenas when it can, and that it pushes back when the subpoenas are “overbroad.”

    Anthony Ha

  • Android malware hidden in fake antivirus app


    If you use an Android phone, this deserves your attention. 

    Cybersecurity researchers warn that hackers are using Hugging Face, a popular platform for sharing artificial intelligence (AI) tools, to spread dangerous Android malware. 

    At first, the threat appears harmless because it is disguised as an antivirus app. Then, once you install it, criminals gain direct access to your device. That is what makes this threat especially troubling: it combines two things people already trust — security apps and AI platforms.


    MALICIOUS GOOGLE CHROME EXTENSIONS HIJACK ACCOUNTS

    Researchers say hackers hid Android malware inside a fake antivirus app that looked legitimate at first glance.  (Kurt “CyberGuy” Knutsson)

    What Hugging Face is and why it matters

    For anyone unfamiliar, Hugging Face is an open platform where developers share AI, NLP and machine learning models. It is widely used by researchers and startups and has become a central hub for AI experimentation. That openness is also what attackers exploited. Because Hugging Face allows public repositories and supports many file types, criminals were able to host malicious code in plain sight.

    The fake antivirus app behind the attack

    The malware first appeared in an Android app called TrustBastion. On the surface, it looks like a helpful security tool. It promises virus protection, phishing defense and malware blocking. In reality, it does the opposite. 

    Once installed, TrustBastion immediately claims your phone is infected. It then pressures you to install an update. That update delivers the malicious code. This tactic is known as scareware. It relies on panic and urgency to push users into tapping before thinking.

    FAKE ERROR POPUPS ARE SPREADING MALWARE FAST


    The fake TrustBastion app mimics a legitimate Google Play update screen to trick users into installing malware.  (Bitdefender)

    How the malware spreads and adapts

    According to Bitdefender, a global cybersecurity company, the campaign centers on a fake Android security app called TrustBastion. Victims were likely shown ads or warnings claiming their device was infected and were instructed to manually install the app.

    The attackers hosted TrustBastion’s APK files directly on Hugging Face, placing them inside public datasets that appeared legitimate at first glance. Once installed, the app immediately prompted users to install a required “update,” which delivered the actual malware.

    After researchers reported the malicious repository, it was taken down. However, Bitdefender observed that nearly identical repositories quickly reappeared, with small cosmetic changes but the same malicious behavior. That rapid re-creation made the campaign harder to fully shut down.

    What this Android malware can actually do

    This Trojan is not minor or annoying. It is invasive. Bitdefender says the malware can:

    - Take screenshots of your device

    - Show fake login screens for financial services

    - Capture your lock screen PIN

    Once collected, that data is sent to a third-party server. From there, attackers can move quickly to drain accounts or lock you out of your own phone.

    What Google says about the threat

    Google says users who stick to official app stores are protected. A Google spokesperson told CyberGuy, “Based on our current detection, no apps containing this malware are found on Google Play.

    “Android users are automatically protected against known versions of this malware by Google Play Protect, which is on by default on Android devices with Google Play Services.

    “Google Play Protect can warn users or block apps known to exhibit malicious behavior, even when those apps come from sources outside of Play.”

    BROWSER EXTENSION MALWARE INFECTED 8.8M USERS IN DARKSPECTRE ATTACK


    Once installed, the malware could capture screenshots, fake login details and even your lock screen PIN. (Kurt “CyberGuy” Knutsson)

    How to stay safe from Hugging Face Android malware

    This threat is a reminder that small choices matter. Here is what you should do right now:

    1) Stick to trusted app stores

    Only download apps from reputable sources like Google Play Store or the Samsung Galaxy Store. These platforms have moderation and scanning in place.

    2) Read reviews before installing

    Look closely at ratings, download counts and recent comments. Fake security apps often have vague reviews or sudden rating spikes.

    3) Use a data removal service

    Even careful users can have personal data exposed. A data removal service helps remove your phone number, email and other details from data broker sites that criminals rely on. That reduces follow-up scams, fake security alerts and account takeover attempts.


    Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com


    4) Run Play Protect and use strong antivirus software

    Scan your device regularly with Play Protect and supplement it with strong antivirus software for added protection. Google Play Protect, Android’s built-in malware protection, automatically removes known malware. However, Play Protect may not be enough on its own; historically, it hasn’t been 100% effective at removing all known malware from Android devices.

    The best way to protect yourself against malicious links that install malware and potentially access your private information is to have strong antivirus software installed on all your devices. This protection can also help you detect phishing emails and ransomware, keeping your personal information and digital assets safe.

    Get my picks for the best 2026 antivirus protection winners for your Windows, Mac, Android and iOS devices at Cyberguy.com

    5) Avoid sideloading APK files

    Avoid installing apps from websites outside the app store. These apps bypass security checks, so always verify the publisher name and URL.

    6) Lock down your Google account

    Your phone security depends on it. Enable two-step verification (2FA) first, then use a strong, unique password stored in a password manager to prevent account takeovers.

    Next, see if your email has been exposed in past breaches. Our No. 1 password manager pick (see Cyberguy.com) includes a built-in breach scanner that checks whether your email address or passwords have appeared in known leaks. If you discover a match, immediately change any reused passwords and secure those accounts with new, unique credentials.

    Check out the best expert-reviewed password managers of 2026 at Cyberguy.com

    7) Be cautious with permissions

    Be cautious with accessibility permissions. Malware often abuses them to take control of your device.

    8) Watch app updates closely

    Malware can hide inside fake updates. Be cautious of urgent fixes that push you outside the app store.

    Kurt’s key takeaways

    This attack shows how quickly trust can be weaponized. A platform designed to advance AI research was repurposed as a delivery system for malware. A fake antivirus app became the threat it claimed to stop. Staying safe no longer means avoiding sketchy-looking apps. It means questioning even those apps that appear helpful and professional.

    Have you seen something on your phone that made you question its security? Let us know your thoughts by writing to us at Cyberguy.com

    CLICK HERE TO DOWNLOAD THE FOX NEWS APP

    Sign up for my FREE CyberGuy Report 

    Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter. 

    Copyright 2026 CyberGuy.com.  All rights reserved.

    [ad_2]

    Source link

  • Elon Musk Loses Half of xAI’s Founding Team—Where They’ve Gone Next

    [ad_1]

    Elon Musk’s xAI has lost half of its 12-person founding team. BRENDAN SMIALOWSKI/AFP via Getty Images

    Just days after Elon Musk merged his A.I. startup, xAI, with SpaceX in preparation for a widely anticipated trillion-dollar IPO later this year, two of xAI’s founding employees—Yuhuai (Tony) Wu and Jimmy Ba—announced their resignations. That means half of xAI’s founding team has now left the company barely three years after its launch. Musk framed the staff exodus as growing pains. “As a company grows, especially as quickly as xAI, the structure must evolve just like any living organism. This unfortunately required parting ways with some people. We wish them well in future endeavors,” he wrote on X yesterday (Feb. 11).

    Wu and Ba’s exits appeared amicable. But lower-level employees have been more candid about internal tensions at the Musk-run startup. Several members of xAI’s technical staff have also left in recent weeks, according to their posts on X and LinkedIn.

    “All A.I. labs are building the exact same thing, and it’s boring,” said Vahid Kazemi, who worked on xAI’s audio models, in a post on X. “I think there’s room for more creativity. So, I’m starting something new.”

    In an interview with NBC News, Kazemi also criticized the company’s working culture, saying he regularly worked 12-hour days, including holidays and weekends.

    Launched in March 2023 with a roster of industry veterans from companies like OpenAI, Google, Microsoft, and Tesla, xAI will now operate as a wholly owned subsidiary of SpaceX. The new iteration of SpaceX faces no shortage of challenges: Grok continues to face legal scrutiny, while Musk’s leadership style remains a point of contention.

    Here are the co-founders and notable leaders who have left xAI so far—and where they are now.

    Jimmy Ba

    Jimmy Ba, who led A.I. safety at xAI, announced his exit on Feb. 10. A professor at the University of Toronto who studied under A.I. pioneer Geoffrey Hinton, Ba’s research played a key role in shaping Grok’s development.

    “So proud of what the xAI team has done and will continue to stay close as a friend of the team,” Ba wrote on X. He hasn’t announced his next move, but added that “2026 is gonna be insane and likely the busiest (and most consequential) year for the future of our species.”

    Despite Ba’s departure, Dan Hendrycks, executive director of the nonprofit Center for AI Safety, remains a safety advisor for xAI.

    Yuhuai (Tony) Wu

    Tony Wu, a former research scientist at Google and postdoctoral researcher at Stanford University, announced his departure from xAI on Feb. 9.

    Wu led xAI’s reasoning team. “It’s time for my next chapter…It is an era with full possibilities: a small team armed with AIs can move mountains and redefine what’s possible,” he wrote on X.

    Wu has not disclosed his next role. Co-founders Guodong Zhang and Manuel Kroiss remain at xAI and are helping lead the company’s reorganization.

    Mike Liberatore

    While not a founding member, Mike Liberatore joined xAI as chief financial officer in April 2025, just one month after xAI acquired X in a deal that valued the combined company at $113 billion.

    Liberatore, formerly a finance executive at Airbnb and SquareTrade, left after only three months. He now works as a business finance officer at OpenAI, according to LinkedIn.

    Musk replaced Liberatore with ex-Morgan Stanley banker Anthony Armstrong. Armstrong advised Musk on his Twitter (now X) acquisition in 2022 and later served as a senior advisor at the Office of Personnel Management during Musk’s controversial tenure at the Department of Government Efficiency (DOGE).

    Greg Yang

    Greg Yang spent nearly six years as a researcher at Microsoft before joining xAI’s founding team. He left the company in January due to health complications from Lyme disease.

    “Likely I contracted Lyme a long time ago, but until I pushed myself hard building xAI and weakened my immune system, the symptoms weren’t noticeable,” Yang wrote on X. He continues to advise xAI in an informal capacity.

    Igor Babuschkin

    Igor Babuschkin, a former research engineer at OpenAI and Google DeepMind, was a co-founder and key engineering lead at xAI. Widely known as the primary developer behind Grok, Babuschkin left in July 2025 to start his own venture capital firm, Babuschkin Ventures, focused on A.I. research and startups.

    Christian Szegedy

    Christian Szegedy spent 12 years at Google before joining xAI as a founding research scientist. He left xAI in February 2025 to become chief scientist at superintelligence cloud company Morph Labs.

    More than a year later, he departed that role to found mathematical A.I. startup Math Inc. in September, according to his LinkedIn.

    “I left xAI in the last week of February and I am on good terms with the team. IMO, xAI has a bright future,” Szegedy wrote on X.

    Other senior engineers and scientists who have left xAI include Yasemin Yesiltepe, Zhuoyi (Zoey) Huang and Yao Fu.

    Kyle Kosic

    Kyle Kosic left OpenAI in early 2023 after two years to co-found xAI, where he served as engineering infrastructure lead. He departed about a year later, in April 2024, to return to OpenAI as a technical staff member.

    Kosic was the first co-founder to leave xAI and did not issue a public statement. It is unclear who now leads xAI’s engineering infrastructure, though another co-founder, Ross Nordeen, remains the company’s technical program manager after previously holding the same role at Tesla.


    [ad_2]

    Rachel Curry

    Source link

  • EU reportedly opens another probe into Google’s ads pricing

    [ad_1]

    The European Commission has opened a new probe into Google, this time focused on the company’s massive online advertising business, Bloomberg reports. European Union regulators have already fined Google billions for violating the Digital Markets Act, and being found guilty of anticompetitive behavior in online advertising could add to that total.

    While the Commission has yet to announce a formal investigation, Bloomberg writes that it has started contacting Google’s customers and competitors for information about its dominance across multiple online advertising markets. Regulators are particularly concerned that Google could be “artificially increasing the clearing price” of ad auctions “to the detriment of advertisers.” If the company is found to be violating the EU’s competition rules, Google could be fined 10 percent of its global annual sales.

    Google’s approach to advertising to minors was reportedly already under investigation by the EU as of December 2024, and besides fines, regulators have ordered the company to open up Android to competing AI assistants and share search data with rivals. In the US, there’s also precedent for finding Google’s approach to online advertising anticompetitive.

    A US federal judge found that Google is a monopolist in online advertising in April 2025, the conclusion of a legal battle that started with a Department of Justice lawsuit accusing the company of dominating the ad market and using its control to charge more and keep a larger portion of ad sales. The DOJ ultimately wants Google to sell its ad tech business, but a final decision hasn’t been reached as to how the company’s anticompetitive behavior should be remedied.

    [ad_2]

    Ian Carlos Campbell

    Source link

  • Social media companies accused of

    [ad_1]

    The world’s biggest social media companies face several landmark trials this year that seek to hold them responsible for harms to children who use their platforms. Opening statements in one such trial in Los Angeles County Superior Court began on Monday.

    Instagram’s parent company Meta and Google’s YouTube face claims that their platforms deliberately addict and harm children. TikTok and Snap, which were originally named in the lawsuit, settled for undisclosed sums.

    Jurors got their first glimpse into what will be a lengthy trial characterized by dueling narratives from the plaintiffs and the two remaining social media companies named as defendants.

    Mark Lanier delivered the opening statement for the plaintiffs first, in a lively display where he said the case is as “easy as ABC,” which he said stands for “addicting the brains of children.” He called Meta and Google “two of the richest corporations in history” that have “engineered addiction in children’s brains.”

    He presented jurors with a slew of internal emails, documents and studies conducted by Meta and YouTube, as well as YouTube’s parent company, Google. He emphasized the findings of a study Meta conducted called “Project Myst,” in which the company surveyed 1,000 teens and their parents about their social media use. The two major findings, Lanier said, were that the company knew children who experienced “adverse events” like trauma and stress were particularly vulnerable to addiction, and that parental supervision and controls made little impact.

    Internal company documents

    He also showed internal Google documents that likened YouTube to a casino, and internal communication between Meta employees in which one person said Instagram is “like a drug” and that employees are “basically pushers.”

    At the core of the Los Angeles case is a 20-year-old identified only by the initials “KGM,” whose case could determine how thousands of other, similar lawsuits against social media companies will play out. She and two other plaintiffs have been selected for bellwether trials — essentially test cases for both sides to see how their arguments play out before a jury.

    KGM made a brief appearance after a break during Lanier’s statement and she will return to testify later in the trial. Lanier spent time speaking about her childhood, and particularly focused on what her personality was like before she began using social media, saying her mother called her a “creative spark” as a child. She started using YouTube at age 6 and Instagram at age 9, Lanier said. Before she graduated elementary school, she had posted 284 videos on YouTube.

    The outcome of the trial could have profound effects on the companies’ businesses and how they will handle children using their platforms.

    Lanier said the companies’ lawyers will “try to blame the little girl and her parents for the trap they built,” referencing the plaintiff. She was a minor when she said she became addicted to social media platforms, which she claims had a detrimental impact on her mental health.

    Lanier said that despite the public position of Meta and YouTube being that they work to protect children and implement safeguards for their use of the platforms, their internal documents show an entirely different position, with explicit references to young children being listed as their target audiences.

    Lanier also drew comparisons between the social media companies and tobacco firms, citing internal communication between Meta employees who were concerned about the company’s lack of proactive action about the potential harm their platforms can have on children and teens.

    “For a teenager, social validation is survival,” Lanier said. The defendants “engineered a feature that caters to a minor’s craving for social validation,” he added, speaking about “like” buttons and similar features.

    “This was only the first case — there are hundreds of parents and school districts in the social media addiction trials that start today, and sadly, new families every day who are speaking out and bringing Big Tech to court for its deliberately harmful products,” said Sacha Haworth, executive director of the nonprofit Tech Oversight Project.

    Jurors are not being asked to stop using Facebook, Instagram, YouTube or any other forms of social media throughout the course of the trial — which is expected to last about eight weeks — but Judge Carolyn B. Kuhl emphasized that they should not make any changes to the way they interact with the platforms, including changing their settings or creating new accounts.

    Kuhl said that jurors should decide the liability of Meta and YouTube independently when they deliberate.

    A separate trial in New Mexico, meanwhile, also kicked off with opening statements on Monday.

    KGM claims that her use of social media from an early age addicted her to the technology and exacerbated depression and suicidal thoughts. Importantly, the lawsuit claims that this was done through deliberate design choices made by companies that sought to make their platforms more addictive to children to boost profits. This argument, if successful, could sidestep the companies’ First Amendment shield and Section 230, which protects tech companies from liability for material posted on their platforms.

    “Borrowing heavily from the behavioral and neurobiological techniques used by slot machines and exploited by the cigarette industry, Defendants deliberately embedded in their products an array of design features aimed at maximizing youth engagement to drive advertising revenue,” the lawsuit says.

    Mark Zuckerberg expected to testify

    Executives, including Meta CEO Mark Zuckerberg, are expected to testify at the trial, which will last six to eight weeks. Experts have drawn similarities to the Big Tobacco trials that led to a 1998 settlement requiring cigarette companies to pay billions in health care costs and restrict marketing targeting minors.

    The tech companies dispute the claims that their products deliberately harm children, citing a bevy of safeguards they have added over the years and arguing that they are not liable for content posted on their sites by third parties.

    A Meta spokesperson said in a recent statement that the company strongly disagrees with the allegations outlined in the lawsuit and that it’s “confident the evidence will show our longstanding commitment to supporting young people.”

    José Castañeda, a Google spokesperson, said that the allegations against YouTube are “simply not true.” In a statement, he said, “Providing young people with a safer, healthier experience has always been core to our work.”

    The case will be the first in a slew of cases beginning this year that seek to hold social media companies responsible for harming children’s mental well-being.

    In New Mexico, opening statements began Monday for trial on allegations that Meta and its social media platforms have failed to protect young users from sexual exploitation, following an undercover online investigation. Attorney General Raúl Torrez in late 2023 sued Meta and Zuckerberg, who was later dropped from the suit.

    A federal bellwether trial beginning in June in Oakland, California, will be the first to represent school districts that have sued social media platforms over harms to children.

    In addition, more than 40 state attorneys general have filed lawsuits against Meta, claiming it is harming young people and contributing to the youth mental health crisis by deliberately designing features on Instagram and Facebook that addict children to its platforms. Most of those states filed in federal court, but some sued in their own state courts.

    TikTok also faces similar lawsuits in more than a dozen states.

    Other countries, meanwhile, are enacting new laws to limit social media for children. In January, French lawmakers approved a bill banning social media for children under 15, paving the way for the measure to enter into force at the start of the next school year in September, as the idea of setting a minimum age for use of the platforms gains momentum across Europe.

    In Australia, social media companies have revoked access to about 4.7 million accounts identified as belonging to children since the country banned use of the platforms by those under 16, officials said. The law provoked fraught debates in Australia about technology use, privacy, child safety and mental health and has prompted other countries to consider similar measures.

    The British government also said last month it will consider banning young teenagers from social media as it tightens laws designed to protect children from harmful content and excessive screen time.

    [ad_2]

    Source link

  • Sneak peek: Kiss of Death and the Google Exec

    [ad_1]


    The mysterious death of a Google executive and his last night with an exotic beauty captured on video — now a court decides her fate. “48 Hours” correspondent Maureen Maher reports Saturday, August 30 at 9/8c on CBS and streaming on Paramount+.

    [ad_2]
    Source link

  • Apple will reportedly allow third-party AI assistants in CarPlay

    [ad_1]

    Apple plans to allow third-party voice-controlled AI apps in CarPlay, Bloomberg reports. Siri is the default voice assistant for things like controlling music and looking up directions, but future AI apps in CarPlay could handle the complicated, open-ended requests Siri can’t answer.

    The expanded support would let developers like OpenAI or Google offer versions of their ChatGPT and Gemini apps for CarPlay. Similar functionality is possible just by connecting a smartphone to a car over Bluetooth and using an AI app’s voice mode, but CarPlay support would presumably make the process a little more seamless.

    Not so seamless that it replaces Siri, however. Bloomberg writes that these third-party apps won’t be able to replace the Siri button in the CarPlay interface or use their own wake words (“Hey Google,” etc.). Instead, anyone who wants to spend a long drive talking to Gemini will have to open the app first. That could cut down on the utility of using one of these apps, but Apple presumably wants to get Siri to a place where CarPlay users prefer it as their in-car assistant anyway.

    Apple and Google recently announced that Gemini would power future versions of Siri and Apple Foundation Models, the AI models underpinning Apple Intelligence. The delayed, updated version of Siri Apple introduced alongside Apple Intelligence in 2024 is supposed to be able to take actions on users’ behalf, work across apps and understand the context of what’s on screen, all things Gemini can currently do. Reports suggest Apple wants to eventually use Google’s Gemini models to transform Siri into a proper conversational chatbot, too. That future version of the voice assistant could be right at home in CarPlay.

    [ad_2]

    Ian Carlos Campbell

    Source link

  • Google and Microsoft-backed Terradot acquires carbon removal competitor | TechCrunch

    [ad_1]

    Carbon removal startup Terradot is acquiring competitor Eion, the two companies announced today. The sale was driven largely by big investors like sovereign wealth funds, which want to work with companies that can handle large contracts. Eion was simply too small, Eion CEO Anastasia Pavlovic Hans told The Wall Street Journal.

    Both companies spread pulverized rocks on farm fields to absorb carbon dioxide from the atmosphere. Known as enhanced rock weathering (ERW), the approach speeds up a natural process and has the potential to be a low-cost way to remove carbon, but it requires large and distributed operations. The spread between what ERW companies would like to charge and what buyers would like to pay remains wide, according to a survey by CDR.fyi. 

    California-based Terradot’s operations are centered on Brazil, where the company works with basalt as its mineral of choice, while Eion works in the U.S. and uses olivine. Terradot’s investor list includes Gigascale Capital, Google, Kleiner Perkins, and Microsoft, while Eion’s investors include AgFunder, Mercator Partners, and Overture.

    [ad_2]

    Tim De Chant

    Source link