ReportWire

Tag: Privacy

  • New Evite phishing scam uses emotional event invitations to target victims



    I recently got an email from a friend with the subject “Special Celebration of Life.” It looked like a genuine Evite invitation. But when I clicked the “View Invitation” button, my antivirus software blocked the site, flagging it as a phishing attempt.

    It was one of the most convincing scam emails I’ve seen lately, complete with Evite branding, realistic design, and a personal touch. If I didn’t have strong antivirus protection, I might have walked right into it.

    Sign up for my FREE CyberGuy Report
    Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM/NEWSLETTER 


    Phishing email appears to be a legitimate Evite invitation titled “Special Celebration Of Life.” (Kurt “CyberGuy” Knutsson)

    How this Evite phishing scam works

    Scammers send fake Evite messages with emotionally charged subjects, such as a “Special Celebration of Life,” to lure you into clicking. These emails mimic Evite’s design so they appear to come from someone you know, lowering your guard.

    Scammers are sending fake Evite invitations that look personal and trustworthy. One click can expose a user’s personal data or install malware. (Kurt “CyberGuy” Knutsson)

    Clicking the malicious link can:

    • Steal your personal information
    • Capture your login credentials
    • Install malware on your device

    Because these invitations feel personal and urgent, they can bypass skepticism. Always verify sender details before opening event links, especially for sensitive occasions. 

    Always hover over links and check sender details before clicking, especially on invitations or urgent messages from unfamiliar sources. (Kurt “CyberGuy” Knutsson)

    Steps to protect yourself from fake Evite phishing scams

    Even the most convincing invitation can be a trap, as the fake Evite email I received proved. By following these steps, you can lower your chances of falling for similar scams and keep your personal information safe.


    1) Use strong antivirus software for real-time protection

    Strong antivirus software can stop you from landing on dangerous sites. In my case, the antivirus software blocked the fake Evite link and flagged it as phishing before any damage was done. Choose strong antivirus software with phishing detection and automatic blocking to protect against threats you might not spot yourself.

    The best way to safeguard yourself from malicious links that install malware, potentially accessing your private information, is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.

    Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android & iOS devices at CyberGuy.com/LockUpYourTech 

    2) Check the sender’s email address carefully

    Scammers often use email addresses that look almost identical to legitimate ones, but with tiny changes, like an extra letter, a missing character, or a different domain extension. In my fake Evite example, the branding looked perfect, but the sender’s address didn’t match Evite’s official domain. Always double-check before trusting an email.


    3) Hover over links before clicking

    Before you click “You’re Invited!”, “View Invitation,” or “RSVP Now,” hover your mouse over the link. Your email client will usually display the destination URL. In the phishing email I received, the link pointed to a suspicious domain, not Evite.com; look closely and you’ll see it was misspelled as “envtte.” If the address looks odd or unfamiliar, don’t click.
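
    For readers who want to automate this kind of check, here is a minimal sketch. The trusted-domain list, the similarity threshold, and the example URLs are all illustrative assumptions, not Evite's or any vendor's actual tooling; the idea is simply that a host that is almost, but not exactly, a domain you trust is the classic lookalike red flag.

```python
from difflib import SequenceMatcher
from urllib.parse import urlparse

# Domains you actually trust (illustrative; extend as needed).
TRUSTED = {"evite.com"}

def link_looks_suspicious(url: str, threshold: float = 0.75) -> bool:
    """Flag a URL whose host is a near-miss of a trusted domain (e.g. 'envtte.com')."""
    host = (urlparse(url).hostname or "").lower().removeprefix("www.")
    # An exact match or a legitimate subdomain of a trusted domain is fine.
    if host in TRUSTED or any(host.endswith("." + t) for t in TRUSTED):
        return False
    # Similar-but-not-equal to a trusted domain is the lookalike pattern.
    return any(SequenceMatcher(None, host, t).ratio() >= threshold for t in TRUSTED)
```

    With these assumptions, a link like https://envtte.com/view-invitation gets flagged, while the real evite.com (including www. and subdomains) passes through.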

    A closer look reveals the fake link in this email that leads to a suspicious domain, not Evite.com. (Kurt “CyberGuy” Knutsson)

    4) Use a personal data removal service to limit your exposure

    The less personal information scammers can find about you online, the harder it is for them to target you. A personal data removal service can scrub your personal details, such as your phone number, home address, and email, from public databases. This reduces the risk of scammers crafting convincing, personalized phishing attempts like the fake Evite email I received.

    Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com/Delete

    Get a free scan to find out if your personal information is already out on the web: Cyberguy.com/FreeScan


    5) Verify with the sender directly before clicking

    If an invitation appears to come from a friend, don’t assume it’s real. Scammers often spoof the names of people you know. Send a quick text or make a phone call to confirm they actually sent the invite. In many cases, they’ll be just as surprised as you are to hear about it.

    What this means for you

    Phishing scams are evolving to look more authentic than ever. Even if the message seems to come from someone you trust, one careless click can put your personal data at risk. Having strong cybersecurity tools in place and knowing how to spot a scam is your best defense.


    Kurt’s key takeaways

    I was lucky my antivirus software blocked this attack before any damage was done. But not everyone has that safety net. The next time an unexpected invitation or urgent message lands in your inbox, take a few extra seconds to verify before you click.

    Have you ever almost fallen for a fake event invite? What happened? Let us know by writing to us at Cyberguy.com/Contact


    Copyright 2025 CyberGuy.com. All rights reserved.  


    Source link

  • This Chrome VPN extension secretly spies on you



    Browser extensions promise convenience, but some take far more than they give. A new report from Koi Security says that FreeVPN.One, a Chrome extension with more than 100,000 installs and even a “Featured” badge, has been secretly taking screenshots of users’ browsing sessions.

    Sign up for my FREE CyberGuy Report
    Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM/NEWSLETTER


    Google Chrome extension FreeVPN.One has allegedly taken screenshots of users’ sensitive information. (Kurt “CyberGuy” Knutsson)

    How FreeVPN.One secretly captured your browsing

    Once installed, FreeVPN.One didn’t just handle VPN traffic. It silently captured screenshots of every website you visited, including bank logins, private photos, and sensitive documents, and sent them to servers controlled by the developer.

    Even worse, the extension added permissions step by step, disguising its activity as “AI Threat Detection.” What looked like a useful feature became a tool for constant background surveillance.

    Why this Chrome extension threat is so dangerous

    People install VPNs to protect their privacy. Instead, this extension flipped that expectation on its head. By abusing Chrome’s broad host-access and scripting permissions, FreeVPN.One gained access to every page you opened.

    Koi Security researchers tested the extension and confirmed it captured screenshots even on trusted sites like Google Photos and Google Sheets. The developer claimed these images were not stored, but offered no proof.


    The screenshots were allegedly sent to the extension’s developer. (Kurt “CyberGuy” Knutsson)

    Warning signs of unsafe free VPN extensions

    There were red flags all along:

    • Awkward grammar and poorly written descriptions.
    • A generic Wix page as the only developer “contact.”
    • A promise of unlimited, free VPN service with no clear business model.

    While some free VPNs may work responsibly, most need a way to profit. If it isn’t by charging you, it may be by selling your data.

    FreeVPN.One developer’s response and Google’s removal

    When Koi Security published its findings, the developer behind FreeVPN.One offered a partial explanation. He claimed the automatic screenshot captures were part of a “Background Scanning” feature, intended only for suspicious domains. He also said the images weren’t stored, only briefly analyzed for threats.

    But researchers observed screenshots taken on trusted sites like Google Photos and Google Sheets, which don’t fit that explanation. When asked to provide proof of legitimacy, such as a company profile, GitHub repository, or professional contact, the developer stopped responding. The only public link tied to the extension led to a basic Wix starter page.

    FreeVPN.One has been removed from the Chrome Web Store. Attempts to visit its page now return the message: “This item is not available.”

    While the removal reduces the risk of new downloads, it also highlights a troubling gap: the extension exhibited spyware behavior for months while carrying a verified label, raising questions about how thoroughly Chrome reviews updates to featured extensions.


    FreeVPN.One is not available in the Microsoft Edge store. (Koi Security)

    Steps to protect yourself from VPN extension spyware

    If you’ve installed FreeVPN.One or any suspicious Chrome VPN extension, take these steps to protect yourself:

    1) Uninstall immediately

    Open Chrome, type chrome://extensions into the address bar (or open the Extensions menu), find the extension, and click Remove.

    2) Use a trusted VPN

    Stick to reliable VPN providers that have proven track records, audited policies, and transparent operations. By choosing a legitimate VPN, you take control of your privacy instead of handing it over to an anonymous developer. A reliable VPN is also essential for protecting your online privacy and ensuring a secure, high-speed connection.

    For the best VPN software, see my expert review of the best VPNs for browsing the web privately on your Windows, Mac, Android & iOS devices at Cyberguy.com/VPN 

    3) Scan your device with strong antivirus software

    Run a trusted antivirus tool to check for hidden malware. The best way to safeguard yourself from malicious links that install malware, potentially accessing your private information, is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.

    Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com/LockUpYourTech 

    4) Change your passwords

    Assume anything typed or viewed could have been logged. Consider using a password manager, which securely stores and generates complex passwords, reducing the risk of password reuse. 

    Next, see if your passwords have been exposed in past breaches. Our #1 password manager (see Cyberguy.com/Passwords) pick includes a built-in breach scanner that checks whether your passwords have appeared in known leaks. If you discover a match, immediately change any reused passwords and secure those accounts with new, unique credentials. 

    Check out the best expert-reviewed password managers of 2025 at Cyberguy.com/Passwords

    5) Use a personal data removal service

    Extensions like FreeVPN.One show how easily your private details can be collected and exploited. Even after uninstalling spyware, your personal information may already be circulating on data broker sites that sell your identity to marketers, scammers, and even cybercriminals. A personal data removal service can scan for your information across hundreds of broker sites and automatically request its removal. This limits how much of your data can be weaponized if it’s ever exposed through an extension like this.

    Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com/Delete

    Get a free scan to find out if your personal information is already out on the web: Cyberguy.com/FreeScan

    6) Check permissions

    Before adding any extension, review what it requests. If a VPN wants access to “all websites,” that’s a red flag.
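
    For the technically inclined, you can inspect an extension’s manifest before installing it. The sketch below uses a made-up manifest (the extension name and the exact grants are illustrative, though the keys follow Chrome’s Manifest V3 format) and flags permissions worth questioning:

```python
import json

# A hypothetical manifest, modeled on the over-broad grants a spyware
# extension might request (keys follow Chrome's Manifest V3 schema).
manifest_json = """
{
  "name": "SomeFreeVPN",
  "manifest_version": 3,
  "permissions": ["scripting", "tabs", "storage"],
  "host_permissions": ["<all_urls>"]
}
"""

# Permissions that warrant a second look before you click "Add extension".
RISKY = {"<all_urls>", "scripting", "tabs", "webRequest", "cookies", "history"}

def risky_grants(manifest: dict) -> set:
    """Return the subset of requested permissions worth questioning."""
    requested = set(manifest.get("permissions", [])) | set(manifest.get("host_permissions", []))
    return requested & RISKY

grants = risky_grants(json.loads(manifest_json))
print(sorted(grants))  # → ['<all_urls>', 'scripting', 'tabs']
```

    A legitimate VPN needs to route traffic, but access to every site plus script injection is exactly the combination that made background screenshotting possible.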


    Kurt’s key takeaways

    FreeVPN.One is a reminder that “free” often comes at a hidden cost: your data. Don’t assume an extension is safe just because it looks popular or carries a badge. Be critical, vet carefully, and use privacy tools backed by real companies.

    Would you trade your browsing privacy for a free tool, or is it time to rethink the cost of “free”?  Let us know by writing to us at Cyberguy.com/Contact


    Copyright 2025 CyberGuy.com.  All rights reserved. 


    Source link

  • California Finalizes 2025 CCPA Rules on Data & AI Oversight


    The flags fly in front of Sacramento’s Capitol Building
    Credit: Christopher Boswell via Adobe Stock

    If you’ve ever been rejected for a job by an algorithm, denied an apartment by a software program, or had your health coverage questioned by an automated system, California just voted to change the rules of the game. On July 24, 2025, the California Privacy Protection Agency (CPPA) voted to finalize one of the most consequential privacy rulemakings in U.S. history. The new regulations—covering cybersecurity audits, risk assessments, and automated decision-making technology (ADMT)—are the product of nearly a year of public comment, political pressure, and industry lobbying. 

    They represent the most ambitious expansion of U.S. privacy regulation since voters approved the California Privacy Rights Act (CPRA) in 2020 and its provisions took effect in 2023, adding for the first time binding obligations around automated decision-making, cybersecurity audits, and ongoing risk assessments.

    How We Got Here: A Contentious Rulemaking

    The CPPA formally launched the rulemaking process in November 2024. At stake was how California would regulate technologies often grouped under the “AI” umbrella term. The CPPA opted to focus narrowly on automated decision-making technology (ADMT), rather than attempting to define AI in general. This move generated both relief and frustration among stakeholders. The groups weighing in ranged from Silicon Valley giants to labor unions and gig workers, reflecting the many corners of the economy that automated decision-making touches.

    Early drafts had explicitly mentioned “artificial intelligence” and “behavioral advertising.” By the time the final rules were adopted, those references were stripped out. Regulators said they wanted to avoid ambiguity and keep the rules from sweeping in too many technologies. Critics said the changes weakened the rules.

    The comment period drew over 575 pages of submissions from more than 70 organizations and individuals, including tech companies, civil society groups, labor advocates, and government officials. Gig workers described being arbitrarily deactivated by opaque algorithms. Labor unions argued the rules should have gone further to protect employees from automated monitoring. On the other side, banks, insurers, and tech firms warned that the regulations created duplicative obligations and legal uncertainty.

    The CPPA staff defended the final draft as one that “strikes an appropriate balance,” while acknowledging the need to revisit these rules as technology and business practices evolve. After the July 24 vote, the agency formally submitted the package to the Office of Administrative Law, which has 30 business days to review it for procedural compliance before the rules take effect.

    Automated Decision-Making Technology (ADMT): Redefining AI Oversight

    The centerpiece of the regulations is the framework for ADMT. The rules define ADMT as “any technology that processes personal information and uses computation to replace human decisionmaking, or substantially replace human decisionmaking.”

    The CPPA applies these standards to what it calls “significant decisions”: choices that determine whether someone gets a job or contract, qualifies for a loan, secures housing, is admitted to a school, or receives healthcare. In practice, that means résumé-screening algorithms, tenant-screening apps, loan-approval software, and healthcare eligibility tools all fall within the law’s scope.

    Companies deploying ADMT for significant decisions will face several new obligations. They must provide plain-language pre-use notices so consumers understand when and how automated systems are being applied. Individuals must also be given the right to opt out or, at minimum, appeal outcomes to a qualified human reviewer with real authority to reverse the decision. Businesses are further required to conduct detailed risk assessments, documenting the data inputs, system logic, safeguards, and potential impacts. In short, if an algorithm decides whether you get hired, approved for a loan, or accepted into housing, the company has to tell you up front, offer a meaningful appeal, and prove that the system isn’t doing more harm than good. Liability also cannot be outsourced: responsibility stays with the business itself, so firms remain on the hook even when they rely on third-party vendors.

    Some tools are excluded—like firewalls, anti-malware, calculators, and spreadsheets—unless they are actually used to make the decision. Additionally, the CPPA tightened what counts as “meaningful human review.” Reviewers must be able to interpret the system’s output, weigh other relevant information, and have genuine authority to overturn the result.

    Compliance begins on January 1, 2027.

    Cybersecurity Audits: Scaling Expectations

    Another pillar of the new rules is the requirement for annual cybersecurity audits. For the first time under state law, companies must undergo independent assessments of their security controls.

    The audit requirement applies broadly to larger data-driven businesses. It covers companies with annual gross revenue exceeding $26.6 million that process the personal information of more than 250,000 Californians, as well as firms that derive half or more of their revenue from selling or sharing personal data.

    Audits must be conducted by independent professionals who cannot report to a Chief Information Security Officer (CISO) or other executives directly responsible for cybersecurity to ensure objectivity.

    The audits cover a comprehensive list of controls, from encryption and multifactor authentication to patch management and employee training, and must be certified annually to the CPPA or Attorney General if requested.

    Deadlines are staggered:

    • April 1, 2028: $100M+ businesses
    • April 1, 2029: $50–100M businesses
    • April 1, 2030: <$50M businesses
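
    As a rough sketch, the thresholds and staggered schedule described above can be expressed in code. The boundary handling (for example, a business sitting at exactly $50 million or $100 million) is an assumption here, since the regulations’ precise cutoffs aren’t quoted in this article:

```python
from datetime import date

def audit_required(revenue_usd: float, ca_consumers: int, data_sale_revenue_share: float) -> bool:
    """Does the annual cybersecurity-audit obligation apply, per the criteria above?"""
    # Over $26.6M revenue AND data on more than 250,000 Californians,
    # OR half or more of revenue from selling/sharing personal data.
    return (revenue_usd > 26_600_000 and ca_consumers > 250_000) or data_sale_revenue_share >= 0.5

def first_audit_deadline(revenue_usd: float) -> date:
    """First certification deadline under the staggered schedule (boundaries assumed)."""
    if revenue_usd > 100_000_000:
        return date(2028, 4, 1)
    if revenue_usd >= 50_000_000:
        return date(2029, 4, 1)
    return date(2030, 4, 1)
```

    For example, a $60 million firm holding records on 300,000 Californians would owe its first certification on April 1, 2029.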

    By embedding these requirements into law, California is effectively setting a de facto national cybersecurity baseline, one that may exceed federal NIST standards. For businesses, these audits won’t just be about checking boxes: they could become the new cost of entry for doing business in California. And because companies can’t wall off California users from the rest of their customer base, the standards are likely to spread nationally through vendor contracts and compliance frameworks.

    Privacy Risk Assessments: Accountability in High-Risk Processing

    The regulations also introduce mandatory privacy risk assessments, required annually for companies engaged in high-risk processing.

    Triggering activities include:

    • Selling or sharing personal information
    • Processing sensitive personal data (including neural data, newly classified as sensitive)
    • Deploying ADMT for significant decisions
    • Profiling workers or students
    • Training ADMT on personal data 

    Each assessment must document the categories of personal information processed, explain the purpose and benefits, identify potential harms and safeguards, and be submitted annually to the CPPA starting April 21, 2028, with attestations signed under penalty of perjury. That clause is designed to prevent “paper compliance”: unlike voluntary risk assessments, it ties accountability directly to the personal liability of the signatories, making leaders answerable if their systems mishandle sensitive data.

    Other Notable Provisions

    Beyond these headline rules, the CPPA also addressed sector-specific issues and tied in earlier reforms. For the insurance industry, the regulations clarify how the CCPA applies to companies that routinely handle sensitive personal and health data—an area where compliance expectations were often unclear. The rules also fold in California’s Delete Act, which takes effect on August 1, 2026. That law will give consumers a single, one-step mechanism to request deletion of their personal information across all registered data brokers, closing a major loophole in the data marketplace and complementing the broader CCPA framework. Together, these measures reinforce California’s role as a privacy trendsetter, creating tools that other states are likely to copy as consumers demand similar rights.

    Implications for California

    California has long served as the nation’s privacy laboratory, pioneering protections that often ripple across the country. This framework places California among the first U.S. jurisdictions to regulate algorithmic governance. With these rules, the state positions itself alongside the EU AI Act and the Colorado AI Act, creating one of the world’s most demanding compliance regimes.

    However, the rules also set up potential conflict with the federal government. The America’s AI Action Plan, issued earlier this year, emphasizes innovation over regulation and warns that restrictive state-level rules could jeopardize federal AI funding decisions. This tension may play out in future policy disputes.

    For California businesses, the impact is immediate. Companies must begin preparing governance frameworks, reviewing vendor contracts, and updating consumer-facing disclosures now. These compliance efforts build on earlier developments in California privacy law, including the creation of a dedicated Privacy Law Specialization for attorneys. This specialization will certify legal experts equipped to navigate the state’s intricate web of statutes and regulations, from ADMT disclosures to phased cybersecurity audits. Compliance will be expensive, but it will also drive demand for new privacy officers, auditors, and legal specialists. Mid-sized firms may struggle, while larger companies may gain an edge by showing early compliance. For businesses outside California, the ripple effects may be unavoidable because national companies will have to standardize around the state’s higher bar.

    The CPPA’s finalized regulations mark a structural turning point in U.S. privacy and AI governance. Obligations begin as early as 2026 and accelerate through 2027–2030, giving businesses a narrow window to adapt. For consumers, the rules promise greater transparency and the right to challenge opaque algorithms. For businesses, they establish California as the toughest compliance environment in the country, forcing firms to rethink how they handle sensitive data, automate decisions, and manage cybersecurity. California is once again setting the tone for global debates on privacy, cybersecurity, and AI. Companies that fail to keep pace will not only face regulatory risk but could also lose consumer trust in the world’s fifth-largest economy. Just as California’s auto emissions standards reshaped national car design, its privacy rules are likely to shape national policy on data and AI. Other states will borrow from California, and Washington will eventually have to decide whether to match it or rein it in.

    What starts in Sacramento rarely stays there. From Los Angeles to Silicon Valley, California just set the blueprint for America’s data and AI future.


    Hillah Greenberg

    Source link

  • Here’s the tech powering ICE’s deportation crackdown  | TechCrunch


    President Donald Trump made countering immigration one of his flagship issues during last year’s presidential campaign, promising an unprecedented number of deportations. 

    In his first eight months in office, that promise turned into around 350,000 deportations, a figure that includes deportations by Immigration and Customs Enforcement (around 200,000), Customs and Border Protection (more than 132,000), and almost 18,000 self-deportations, according to CNN.  

    ICE has taken center stage in Trump’s mass deportation campaign, raiding homes, workplaces, and public parks in search of undocumented immigrants. To aid its efforts, ICE has at its disposal several technologies capable of identifying and surveilling individuals and communities.

    Here is a recap of some of the technology that ICE has in its digital arsenal. 

    Clearview AI facial recognition

    Clearview AI is perhaps the most well-known facial recognition company today. For years, the company promised to be able to identify any face by searching through a large database of photos it had scraped from the internet. 

    On Monday, 404 Media reported that ICE has signed a contract with the company to support its law enforcement arm Homeland Security Investigations (HSI), “with capabilities of identifying victims and offenders in child sexual exploitation cases and assaults against law enforcement officers.” 

    According to a government procurement database, the contract signed last week is worth $3.75 million. 

    ICE has had other contracts with Clearview AI in the last couple of years. In September 2024, the agency purchased “forensic software” from the company, a deal worth $1.1 million. The year before, ICE paid Clearview AI nearly $800,000 for “facial recognition enterprise licenses.”

    Clearview AI did not respond to a request for comment. 

    Contact Us

    Do you have more information about ICE and the technology it uses? We would love to learn how this affects you. From a non-work device, you can contact Lorenzo Franceschi-Bicchierai securely on Signal at +1 917 257 1382, or via Telegram and Keybase @lorenzofb, or email. You also can contact TechCrunch via SecureDrop.

    Paragon phone spyware

    In September 2024, ICE signed a contract worth $2 million with Israeli spyware maker Paragon Solutions. Almost immediately, the Biden administration issued a “stop work order,” putting the contract under review to make sure it complied with an executive order on the government’s use of commercial spyware. 

    Because of that order, for nearly a year, the contract remained in limbo. Then, last week, the Trump administration lifted the stop work order, effectively reactivating the contract.

    At this point, it’s unclear what the status of Paragon’s relationship with ICE is in practice.

    The records entry from last week said that the contract with Paragon is for “a fully configured proprietary solution including license, hardware, warranty, maintenance, and training.” Practically speaking, unless the hardware installation and training were done last year, it may take some time for ICE to have Paragon’s system up and running.

    It’s also unclear if the spyware will be used by ICE or HSI, an agency whose investigations are not limited to immigration, but also cover online child sexual exploitation, human trafficking, financial fraud, and more.

    Paragon has long tried to portray itself as an “ethical” and responsible spyware maker, and now has to decide if it’s ethical to work with Trump’s ICE. A lot has happened to Paragon in the last year. In December, American private equity giant AE Industrial purchased Paragon, with a plan to merge it with cybersecurity company RedLattice, according to Israeli tech news site Calcalist.

    In a sign that the merger may have taken place, when TechCrunch reached out to Paragon for comment on the reactivation of the ICE contract last week, we were referred to RedLattice’s new vice president of marketing and communications Jennifer Iras. 

    RedLattice’s Iras did not respond to a request for comment for this article, nor for last week’s article.

    In the last few months, Paragon has been ensnared in a spyware scandal in Italy, where the government has been accused of spying on journalists and immigration activists. In response, Paragon cut ties with Italy’s intelligence agencies. 

    Data broker LexisNexis

    For years, ICE has used the legal research and public records data broker LexisNexis to support its investigations.

    In 2022, two non-profits obtained documents via Freedom of Information Act requests, which revealed that ICE performed more than 1.2 million searches over seven months using a tool called Accurint Virtual Crime Center. ICE used the tool to check the background information of migrants.   

    A year later, The Intercept revealed that ICE was using LexisNexis to detect suspicious activity and investigate migrants before they even committed a crime, a program that a critic said enabled “mass surveillance.”

    According to public records, LexisNexis currently provides ICE “with a law enforcement investigative database subscription (LEIDS) which allows access to public records and commercial data to support criminal investigations.” 

    This year, ICE has paid $4.7 million to subscribe to the service. 

    LexisNexis spokesperson Jennifer Richman told TechCrunch that ICE has used the company’s “data and analytics solutions for decades, across several administrations.”

    “Our commitment is to support the responsible and ethical use of data, in full compliance with laws and regulations, and for the protection of all residents of the United States,” said Richman, who added that LexisNexis “partners with more than 7,500 federal, state, local, tribal, and territorial agencies across the United States to advance public safety and security.” 

    Surveillance giant Palantir

    Data analytics and surveillance technology giant Palantir has signed several contracts with ICE in the last year. The biggest contract, worth $18.5 million from September 2024, is for a database system called “Investigative Case Management,” or ICM.

    The contract for ICM goes back to 2022, when ICE signed a $95.9 million deal with Palantir. The Peter Thiel-founded company’s relationship with ICE dates back to the early 2010s. 

    Earlier this year, 404 Media, which has reported extensively on the technology powering Trump’s deportation efforts, and particularly Palantir’s relationship with ICE, revealed details of how the ICM database works. The tech news site reported that it saw a recent version of the database, which allows ICE to filter people based on their immigration status, physical characteristics, criminal affiliation, location data, and more. 

    404 Media cited “a source familiar with the database,” who said it is made up of “tables upon tables” of data and that it can “build reports that show, for example, people who are on a specific type of visa who came into the country at a specific port of entry, who came from a specific country, and who have a specific hair color (or any number of hundreds of data points).” 

    The tool, and Palantir’s relationship with ICE, was controversial enough that sources within the company leaked to 404 Media an internal wiki where Palantir justifies working with Trump’s ICE. 

    Palantir is also developing a tool called “ImmigrationOS,” according to a contract worth $30 million revealed by Business Insider.
    ImmigrationOS is said to be designed to streamline the “selection and apprehension operations of illegal aliens,” give “near real-time visibility” into self-deportations, and track people overstaying their visas, according to a document first reported on by Wired.


    Lorenzo Franceschi-Bicchierai


  • Hacker exploits AI chatbot in cybercrime spree


    A hacker has pulled off one of the most alarming AI-powered cyberattacks ever documented. According to Anthropic, the company behind Claude, a hacker used its artificial intelligence chatbot to research, hack, and extort at least 17 organizations. This marks the first public case where a leading AI system automated nearly every stage of a cybercrime campaign, an evolution that experts now call “vibe hacking.”

    Sign up for my FREE CyberGuy Report
    Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM/NEWSLETTER

    HOW AI CHATBOTS ARE HELPING HACKERS TARGET YOUR BANKING ACCOUNTS

    Simulated ransom guidance created by Anthropic’s threat intelligence team for research and demonstration purposes. (Anthropic)

    How a hacker used an AI chatbot to strike 17 targets

    Anthropic’s investigation revealed how the attacker convinced Claude Code, a coding-focused AI agent, to identify vulnerable companies. Once inside, the hacker:

    • Built malware to steal sensitive files.
    • Extracted and organized stolen data to find high-value information.
    • Calculated ransom demands based on victims’ finances.
    • Generated tailored extortion notes and emails.

    Targets included a defense contractor, a financial institution and multiple healthcare providers. The stolen data included Social Security numbers, financial records and government-regulated defense files. Ransom demands ranged from $75,000 to over $500,000.

    Why AI cybercrime is more dangerous than ever

    Cyber extortion is not new. But this case shows how AI transforms it. Instead of acting as an assistant, Claude became an active operator, scanning networks, crafting malware and even analyzing stolen data. AI lowers the barrier to entry. In the past, such operations required years of training. Now, a single hacker with limited skills can launch attacks that once took a full criminal team. This is the frightening power of agentic AI systems.

    HOW AI IS NOW HELPING HACKERS FOOL YOUR BROWSER’S SECURITY TOOLS

    Webpage of AI generated ransom note

    A simulated ransom note template that hackers could use to scam victims. (Anthropic)

    What vibe hacking reveals about AI-powered threats

    Security researchers refer to this approach as vibe hacking. It describes how hackers embed AI into every phase of an operation.

    • Reconnaissance: Claude scanned thousands of systems and identified weak points.
    • Credential theft: It extracted login details and escalated privileges.
    • Malware development: Claude generated new code and disguised it as trusted software.
    • Data analysis: It sorted stolen information to identify the most damaging details.
    • Extortion: Claude created alarming ransom notes with victim-specific threats.

    This systematic use of AI marks a shift in cybercrime tactics. Attackers no longer just ask AI for tips; they use it as a full-fledged partner.

    GOOGLE AI EMAIL SUMMARIES CAN BE HACKED TO HIDE PHISHING ATTACKS

    A dark web page selling ransomware services

    A cybercriminal’s initial sales offering on the dark web seen in January 2025. (Anthropic)

    How Anthropic is responding to AI abuse

    Anthropic says it has banned the accounts linked to this campaign and developed new detection methods. Its threat intelligence team continues to investigate misuse cases and share findings with industry and government partners. The company admits, however, that determined actors can still bypass safeguards. And experts warn that these patterns are not unique to Claude; similar risks exist across all advanced AI models.

    How to protect yourself from AI cyberattacks

    Here’s how to defend against hackers now using AI tools to their advantage:

    1. Use strong, unique passwords everywhere

    Hackers who break into one account often attempt to use the same password across your other logins. This tactic becomes even more dangerous when AI is involved because a chatbot can quickly test stolen credentials across hundreds of sites. The best defense is to create long, unique passwords for every account you have. Treat your passwords like digital keys and never reuse the same one in more than one lock.

    Next, see if your email has been exposed in past breaches. Our No. 1 password manager pick (see Cyberguy.com/Passwords) includes a built-in breach scanner that checks whether your email address or passwords have appeared in known leaks. If you discover a match, immediately change any reused passwords and secure those accounts with new, unique credentials. 

    Check out the best expert-reviewed password managers of 2025 at Cyberguy.com/Passwords
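    As an illustration of the "long, unique password" advice above, here is a minimal Python sketch (my own example, not tied to any password manager product) that generates a strong random password using the standard library's `secrets` module, which draws from a cryptographically secure random source:

    ```python
    import secrets
    import string

    def generate_password(length=20):
        """Build a random password from letters, digits, and a few symbols.

        secrets.choice uses a cryptographically secure random source,
        unlike the plain random module, which is predictable.
        """
        alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
        return "".join(secrets.choice(alphabet) for _ in range(length))

    # Generate a distinct password for each account; never reuse one.
    print(generate_password())
    ```

    Twenty random characters from this 74-symbol alphabet is far beyond realistic guessing attacks; a password manager then remembers each one so you never have to.
    
    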

    2. Protect your identity and use a data removal service

    The hacker who abused Claude didn’t just steal files; they organized and analyzed them to find the most damaging details. That illustrates the value of your personal information in the wrong hands. The less data criminals can find about you online, the safer you are. Review your digital footprint, lock down privacy settings, and reduce what’s available on public databases and broker sites.

    While no service can guarantee the complete removal of your data from the internet, a data removal service is a smart choice. These services aren’t cheap, but neither is your privacy. They do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you.

    Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com/Delete

    Get a free scan to find out if your personal information is already out on the web: Cyberguy.com/FreeScan


    3. Turn on two-factor authentication (2FA)

    Even if a hacker obtains your password, 2FA can stop them in their tracks. AI tools now help criminals generate highly realistic phishing attempts designed to trick you into handing over logins. By enabling 2FA, you add an extra layer of protection that they cannot easily bypass. Choose app-based codes or a physical key whenever possible, as these are more secure than text messages, which are easier for attackers to intercept.
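    For context on why app-based codes are recommended above: authenticator apps typically implement time-based one-time passwords (TOTP, standardized in RFC 6238). The Python sketch below is a simplified illustration of the core derivation, showing how a code is computed from a shared secret and the current time; real apps add secret provisioning, clock-drift tolerance and secure storage on top of this:

    ```python
    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32, now=None, timestep=30, digits=6):
        """Derive a time-based one-time password per RFC 6238 (HMAC-SHA1)."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int((time.time() if now is None else now) // timestep)
        msg = struct.pack(">Q", counter)          # 8-byte big-endian counter
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # RFC 6238 test secret ("12345678901234567890" in base32) at t = 59 s
    print(totp("GEZDGNBVGEZDGNBVGEZDGNBVGEZDGNBV", now=59))  # → 287082
    ```

    Because the code changes every 30 seconds and never travels over SMS, a phished password alone is not enough to log in.
    
    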

    4. Keep devices and software updated

    AI-driven attacks often exploit the most basic weaknesses, such as outdated software. Once a hacker knows which companies or individuals are running old systems, they can use automated scripts to break in within minutes. Regular updates close those gaps before they can be targeted. Setting your devices and apps to update automatically removes one of the easiest entry points that criminals rely on.

    5. Be suspicious of urgent messages

    One of the most alarming details in the Anthropic report was how the hacker used AI to craft convincing extortion notes. The same tactics are being applied to phishing emails and texts sent to everyday users. If you receive a message demanding immediate action, such as clicking a link, transferring money or downloading a file, treat it with suspicion. Stop, check the source and verify before you act.

    6. Use strong antivirus software

    The hacker in this case built custom malware with the help of AI. That means malicious software is getting smarter, faster and harder to detect. Strong antivirus software that constantly scans for suspicious activity provides a critical safety net. It can identify phishing emails and detect ransomware before it spreads, which is vital now that AI tools make these attacks more adaptive and persistent.

    Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com/LockUpYourTech

    Hacker typing code.

    Over 40,000 Americans were previously exposed in a massive OnTrac security breach, leaking sensitive medical and financial records. (Jakub Porzycki/NurPhoto via Getty Images)

    7. Stay private online with a VPN

    AI isn’t only being used to break into companies; it’s also being used to analyze patterns of behavior and track individuals. A VPN encrypts your online activity, making it much harder for criminals to connect your browsing to your identity. By keeping your internet traffic private, you add another layer of protection against hackers trying to gather information they can later exploit.

    For the best VPN software, see my expert review of the best VPNs for browsing the web privately on your Windows, Mac, Android & iOS devices at Cyberguy.com/VPN


    Kurt’s key takeaways

    AI isn’t just powering helpful tools; it’s also arming hackers. This case proves that cybercriminals can now automate attacks in ways once thought impossible. The good news is, you can take practical steps today to reduce your risk. By making smart moves, such as enabling two-factor authentication (2FA), updating devices, and using protective tools, you can stay one step ahead.

    Do you think AI chatbots should be more tightly regulated to prevent abuse? Let us know by writing to us at Cyberguy.com/Contact


    Copyright 2025 CyberGuy.com. All rights reserved.


  • How China’s Propaganda and Surveillance Systems Really Operate


    A trove of internal documents leaked from a little-known Chinese company has pulled back the curtain on how digital censorship tools are being marketed and exported globally. Geedge Networks sells what amounts to a commercialized “Great Firewall” to at least four countries, including Kazakhstan, Pakistan, Ethiopia, and Myanmar. The groundbreaking leak shows in granular detail the capabilities this company has to monitor, intercept, and hack internet traffic. Researchers who examined the files described it as “digital authoritarianism as a service.”

    But I want to focus on another thing the documents demonstrate: While people often look at China’s Great Firewall as a single, all-powerful government system unique to China, the actual process of developing and maintaining it works the same way as surveillance technology in the West. Geedge collaborates with academic institutions on research and development, adapts its business strategy to fit different clients’ needs, and even repurposes leftover infrastructure from its competitors. In Pakistan, for example, Geedge landed a contract to work with and later replace gear made by the Canadian company Sandvine, the leaked files show.

    Coincidentally, another leak from a different Chinese company published this week reinforces the same point. On Monday, researchers at Vanderbilt University made public a 399-page document from GoLaxy, a Chinese company that uses AI to analyze social media and generate propaganda materials. The leaked documents, which include internal pitch decks, business goals, and meeting notes, may have come from a disgruntled former employee—the last two pages accuse GoLaxy of mistreating workers by underpaying them and mandating long hours. The document had been sitting on the open internet for months before another researcher flagged it to Brett Goldstein, a research professor in the School of Engineering at Vanderbilt.

    GoLaxy’s main business is different from Geedge’s: It collects open source information from social media, maps relationships among political figures and news organizations, and pushes targeted narratives online through synthetic social media profiles. In the leaked document, GoLaxy claims to be the “number one brand in intelligence big data analysis” in China, servicing three main customers: the Chinese Communist Party, the Chinese government, and the Chinese military. The included technology demos focus heavily on geopolitical issues like Taiwan, Hong Kong, and US elections. And unlike Geedge, GoLaxy seems to be targeting only domestic government entities as clients.

    But there are also quite a few things that make the two companies comparable, particularly in terms of how their businesses function. Both Geedge and GoLaxy maintain close relationships with the Chinese Academy of Sciences (CAS), the top government-affiliated research institution in the world, according to the Nature Index. And they both market their services to Chinese provincial-level government agencies, who have localized issues they want to monitor and budgets to spend on surveillance and propaganda tools.

    GoLaxy didn’t immediately respond to a request for comment from WIRED. In a previous response to The New York Times, the company denied collecting data targeting US officials and called the outlet’s reporting misinformation. Vanderbilt researchers say they witnessed the company remove pages from its website after the initial reporting.

    Closer Than They Seem

    In the West, when academic scholars see opportunities to commercialize their cutting-edge research, they often become startup founders or start side businesses. GoLaxy seems to be no exception. Many key researchers at the company, according to the leaked document, still occupy spots at CAS.

    But there’s no guarantee that CAS researchers will get government grants—just like a public university professor in the US can’t bet on their startup winning federal contracts. Instead, they need to go after government agencies like any private company would go after clients. One document in the leak shows that GoLaxy assigned sales targets to five employees and was aiming to secure 42 million RMB (about $5.9 million) in contracts with Chinese government agencies in 2020. Another spreadsheet from around 2021 lists the company’s current clients, which include branches of the Chinese military, state security, and provincial police departments, as well as other potential customers it was targeting.


    Zeyi Yang, Louise Matsakis


  • Right-Wing Activists Are Targeting People for Allegedly Celebrating Charlie Kirk’s Death


    Far-right influencers and violent extremists are posting identifying details about people they view as celebrating or glorifying the murder of right-wing activist Charlie Kirk. The campaign has been swift and widespread and has already led to at least one person losing their job and others receiving death threats.

    The people posting the identifying information include Chaya Raichik, who runs the hugely influential, hate-filled LibsofTikTok account on X, Trump-whisperer Laura Loomer, and former Proud Boy leader Enrique Tarrio.

    A central hub of this activity is a website called Charlie’s Murderers, which was registered in the early evening on the day Kirk was shot and is revealing certain personal information, such as social media usernames and email addresses, of individuals the operators believe were celebrating the horrific murder.

    One of the first names listed on the site was Rachel Gilmore, an independent journalist at Bubble Pop Media who wrote on X that she was “terrified to think of how far-right fans of Kirk, aching for more violence, could very well turn this into an even more radicalizing moment. Will they now believe their fears have been proven right and feel they have a right to ‘retaliate,’ regardless of who actually was behind the initial shooting?”

    As WIRED reported, this is exactly how much of the far right—along with Republican lawmakers including President Donald Trump—did respond to the news, even though no suspect had been arrested and no motive had been revealed.

    For Gilmore, the impact of her inclusion on the website was instant and terrifying.

    “This website has me genuinely afraid for my safety,” Gilmore tells WIRED. “I feel awful for anyone whose name is on it. It’s clear that the purpose of the website is to do exactly what the post that landed me on there warned Kirk’s supporters might do: retaliate.”

    Gilmore has received multiple death and rape threats since the site went live on Wednesday evening. (WIRED reviewed screenshots of emails and direct messages Gilmore has received to verify the threats.) She has not reported the threats to the police yet, she says.

    “I’ve gotten emails and DMs promising to find out where I live,” Gilmore says. “I have folks claiming my information is all over 4chan telling me in the same breath that they hope I get ‘raped and killed’ and telling me to ‘have fun walking the streets of’ my city, which they name.”

    At the time of publication, two dozen people were listed on the site, with many entries including full names, employment details, location, and social media accounts. The site’s operators, who are anonymous, claim to have received “thousands” of submissions. “All of them will be reviewed and uploaded shortly,” a note on the website reads. “This is a permanent archive and will soon contain a search feature.”

    “Most likely, we’d be happy to answer your questions,” the people controlling the website told WIRED in an email. Subsequent emails, though, went unanswered.

    The website asks people to submit a potential target’s full name, location, and employer information, as well as screenshots of incriminating social media posts, via email. An About section on the website, added on Thursday morning, says: “This is not a doxxing website. This website is a lawful data aggregator of publicly-available information. It has been created for the purposes of public education.”


    David Gilbert


  • Cindy Cohn Is Leaving the EFF, but Not the Fight for Digital Rights


    After a quarter century defending digital rights, Cindy Cohn announced on Tuesday that she is stepping down as executive director of the Electronic Frontier Foundation. Cohn, who has led the San Francisco–based nonprofit since 2015, says she will leave the role later this year, concluding a chapter that helped define the modern fight over online freedom.

    Cohn first rose to prominence as lead counsel in Bernstein v. Department of Justice, the 1990s case that overturned federal restrictions on publishing encryption code. As EFF’s legal director and later executive director, she guided the group through legal challenges to government surveillance, reforms to computer crime laws, and efforts to hold corporations accountable for data collection. Over the past decade, EFF has expanded its influence, becoming a central force in shaping the debate over privacy, security, and digital freedom.

    In an interview with WIRED, Cohn reflected on EFF’s foundational encryption victories, its unfinished battles against National Security Agency (NSA) surveillance, and the organization’s work protecting independent security researchers. She spoke about the shifting balance of power between corporations and governments, the push for stronger state-level privacy laws, and the growing risks posed by artificial intelligence.

    Though stepping down from leadership, Cohn tells WIRED she plans to remain active in the fight against mass surveillance and government secrecy. Describing herself as “more of a warrior than a manager,” she says her intent is to return to frontline advocacy. She is also at work on a forthcoming book, Privacy’s Defender, due out next spring, which she hopes will inspire a new generation of digital rights advocates.

    This interview has been edited for length and clarity.

    WIRED: Tell us about the fights you won, and the ones that still feel unfinished after 25 years.

    CINDY COHN: The early fight that we made to free up encryption from government regulation still stands out as setting the stage for a potentially secure internet. We’re still working on turning that promise into a reality, but we’re in such a different place than we would’ve been in had we lost that fight. Encryption protects anybody who buys anything online, anyone who uses Signal as a whistleblower or journalist, or just regular people who want privacy and use WhatsApp or Signal. Even the backend-certificate authorities provided by Let’s Encrypt—that make sure that when you think you’re going to your bank, you’re actually going to your bank website—are all made possible because of encryption. These are all things that would’ve been at risk if we hadn’t won that fight. I think that win was foundational, even though the fights aren’t over.

    The fights that we’ve had around the NSA and national security, those are still works in progress. We were not successful with our big challenge to the NSA spying in Jewel v. NSA, although over the long arc of that case and the accompanying legislative fights, we managed to claw back quite a bit of what the NSA started doing after 9/11.


    Dell Cameron


  • FBI warns of QR code scam disguised in mystery packages


    QR codes, once seen as a convenient shortcut for checking menus or paying bills, have increasingly been turned into weapons. Fake delivery texts, counterfeit payment links and malicious codes pasted over legitimate ones have all become part of the modern fraud playbook.

    The latest warning from federal authorities shows just how far these tactics have gone. Criminals are now mailing out packages that people never ordered. Inside these boxes is a QR code that, when scanned, can lead to stolen personal details, drained bank accounts or malware running silently in the background of a phone.

    Sign up for my FREE CyberGuy Report
    Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.

    QR CODE SCAMS RISE AS 73% OF AMERICANS SCAN WITHOUT CHECKING

    What you need to know about the QR code scam

    The scheme is a twist on what is known as a brushing scam. Traditionally, brushing scams involved online sellers sending products to strangers and then using the recipient’s details to post fake reviews. It was more of a nuisance than a serious crime.

    An Amazon package with a QR code. (Lindsey Nicholson/UCG/Universal Images Group via Getty Images)

    Now the practice has shifted from harmless free items to deliberate fraud. Instead of receiving a product, many victims find only a printed QR code. Once scanned, the code redirects them to fraudulent websites that ask for sensitive personal information, such as banking information, credit card numbers or login credentials. Some codes go a step further and install malicious software designed to track activity and steal data directly from the device.

    “The FBI warns the public about a scam variation in which criminals send unsolicited packages containing a QR code that prompts the recipient to provide personal and financial information or unwittingly download malicious software that steals data from their phone,” the agency said in a public notice. “To encourage the victim to scan the QR code, the criminals often ship the packages without sender information to entice the victim to scan the QR code.”

    WHATSAPP BANS 6.8M SCAM ACCOUNTS, LAUNCHES SAFETY TOOL

    Why QR codes appeal to scammers

    QR codes have become common in everyday life. They are used in restaurants, stores and airports, and on poles and payment systems, and most people scan them without a second thought. Unlike suspicious links that can be spotted, a QR code reveals nothing until it is scanned.

    That makes it a perfect disguise for a scam. The setup is simple: a package arrives with no sender information and no explanation. The mystery sparks curiosity, and many people scan the code to figure out who sent it. That moment of curiosity is what the scammers rely on.

    A woman scanning a QR code on a building

    Taylor Swift fans gather outside a building where a mural featuring a large QR code was being painted to promote Swift’s latest album, “The Tortured Poets Department,” on April 17, 2024, in Chicago. (Scott Olson/Getty Images)

    The consequences can be serious. Fake websites may harvest names, addresses and financial details. Malware may silently monitor accounts, log keystrokes or even target cryptocurrency wallets. Victims often do not notice until they see unauthorized charges or suspicious withdrawals. By then, their information may already be in the hands of criminals.

    DON’T FALL FOR THIS BANK PHISHING SCAM TRICK

    7 ways to stay safe from QR code scams

    Scammers rely on curiosity and convenience to trick people into scanning malicious QR codes. A few simple habits can help you avoid becoming a target. Here are seven ways to stay safe from QR code scams.

    1) Be cautious with unsolicited QR codes and use strong antivirus software

    Avoid scanning QR codes from mystery deliveries, random flyers or stickers on public signs. A QR code is just a disguised link, and until you know where it leads, it shouldn’t be trusted. To stay safe even if you accidentally scan a risky code, keep strong antivirus software on your phone. Mobile security apps can block fraudulent sites, warn you before downloads and protect against malicious QR code attacks.

    Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android and iOS devices at Cyberguy.com.

    2) Stick to trusted sources

    Only scan QR codes from businesses and organizations you already trust. Examples include your bank’s mobile app, an airline boarding pass or a known retailer’s checkout page. If you wouldn’t click a random link in a text message, don’t scan a random QR code either.

    3) Preview links before opening

    Most phones let you press and hold a QR code link to preview where it goes. If the URL looks suspicious, with misspellings, random numbers or shortened links, don’t open it. Taking a second to check can save you from a phishing trap. 
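    To illustrate the kinds of warning signs described above, here is a small Python sketch using only the standard library. The heuristics and shortener list are my own illustrative examples, not an official checker; real phishing detection is far more sophisticated:

    ```python
    import re
    from urllib.parse import urlparse

    # Illustrative list only; real link shorteners number in the hundreds.
    SHORTENERS = {"bit.ly", "tinyurl.com", "t.co", "goo.gl", "ow.ly"}

    def url_red_flags(url):
        """Return a list of human-readable warning signs found in a URL."""
        flags = []
        parsed = urlparse(url)
        host = (parsed.hostname or "").lower()
        if parsed.scheme != "https":
            flags.append("not served over HTTPS")
        if host in SHORTENERS:
            flags.append("shortened link hides the real destination")
        if re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", host):
            flags.append("raw IP address instead of a domain name")
        elif re.search(r"\d", host.split(".")[0]):
            flags.append("digits in the domain name")
        return flags

    print(url_red_flags("http://192.168.0.1/track"))
    print(url_red_flags("https://www.evite.com/invite"))  # → []
    ```

    None of these checks is proof of a scam on its own, but if a previewed link trips one or more of them, that is a good reason to stop rather than open it.
    
    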

    A man holding a package

    Scammers are sending fake packages with QR codes that trick recipients into scanning and giving criminals access to their personal data. (Kurt “CyberGuy” Knutsson)

    4) Limit your digital footprint

    The less personal data available about you online, the harder it is for scammers to target you with convincing fraud attempts. Consider using data removal services that scrub your information from people-search sites and marketing databases. This reduces the chances that your address or phone number ends up in the wrong hands and is connected to a scam package.

    While no service promises to remove all your data from the internet, a removal service is worthwhile if you want to continuously monitor and automate the removal of your information from hundreds of sites over time.

    Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.


    5) Enable two-factor authentication (2FA)

    Even if your login details are stolen, 2FA makes it harder for criminals to access your accounts. By requiring a secondary code sent to your phone or generated through an authenticator app, 2FA helps prevent unauthorized logins to your banking, email and trading accounts.

    6) Keep your device updated

    Software updates often contain fixes for security vulnerabilities that scammers try to exploit. Running the latest version of your phone’s operating system, as well as updating apps regularly, gives you stronger protection against malware that can be delivered through a malicious QR code.

    7) Report suspicious activity

    If an unexpected package arrives at your door with a QR code inside, do not simply throw it away. Report it to local authorities and consider filing a complaint with the FBI’s Internet Crime Complaint Center. Reporting not only helps protect you, but it also gives law enforcement more information to track how these scams are spreading.


    Kurt’s key takeaway

    This scam might not be everywhere yet, but it shows just how quickly criminals adapt to new technology. QR codes were meant to make life easier, and most of the time they do, but that same convenience can turn into a weakness when people let curiosity override caution. The lesson here is that a mystery package with a QR code is not some fun puzzle to figure out. It is a red flag. The safest move is to step back, resist the urge to scan, and if something feels off, report it instead of interacting with it.

    Have you ever scanned a QR code without thinking twice about where it might lead? Let us know by writing to us at Cyberguy.com.


    Copyright 2025 CyberGuy.com.  All rights reserved.  


  • AI meeting notes are recording your private conversations


    Artificial intelligence has slipped quietly into our meetings. Zoom, Google Meet and other platforms now offer AI notetakers that listen, record and share summaries. At first, it feels like a helpful assistant. No more scrambling to jot down every point. But there’s a catch. It records everything, including comments you never planned to share.

    GOOGLE AI EMAIL SUMMARIES CAN BE HACKED TO HIDE PHISHING ATTACKS

    When private conversations end up in recaps

    Many people are discovering that AI notetakers capture more than project updates and strategy points. Jokes, personal stories and even casual side comments often slip into the official meeting summaries.

    What might feel harmless in the moment, like teasing someone, chatting about lunch plans or venting about a frustrating errand, can suddenly reappear in a recap email sent to the whole group. In some cases, even affectionate nicknames or pet mishaps have shown up right alongside serious action items.

    Experts warn that AI note-taking tools integrated into Zoom and Google Meet could capture more than the meeting agenda. (Korea Pool/Getty Images)

    Examples of what could go wrong:

    • Jokes or sarcasm taken out of context
    • Personal errands or gossip appearing in a recap
    • Casual catch-ups mixed into meeting notes
    • Embarrassing slip-ups becoming part of official records

    These surprises can be funny in hindsight, but they highlight a bigger issue. AI notetakers don’t separate casual conversation from work-related discussion. And once your words are written down, they can be saved, forwarded or even archived in ways you didn’t intend. That means an offhand remark could live far longer than the meeting itself.

    AI AND LEARNING RETENTION: DOES CHATGPT HELP OR HURT?

    A Google Gemini webpage

    A Google Gemini generative artificial intelligence webpage. (Andrey Rudakov/Bloomberg via Getty Images)

    Why AI notetakers capture too much

    These tools work by recording conversations in real time and then generating automatic summaries. Zoom’s AI Companion flags its presence with a diamond icon. Google Meet’s version uses a pencil icon and an audio cue. Only meeting hosts can switch them on or off.

    That sounds transparent, but most people stop noticing the icons after a few minutes. Once the AI is running, it doesn’t separate “work talk” from “side chatter.” The result? Your casual remarks can end up in a summary sent to colleagues or even clients.

    And mistakes happen. An AI notetaker might mishear a joke, twist sarcasm into something serious or drop a casual remark into notes where it looks out of place. Stripped of tone and context, those words can come across very differently once they’re written down.

    META AI’S NEW CHATBOT RAISES PRIVACY ALARMS

    Google Gemini chat

    The Google Gemini AI interface seen on an iPhone browser. (Jaap Arriens/NurPhoto via Getty Images)

    Steps to protect your privacy from AI notetakers

    Even if you use these tools, you can take control of what they capture. A few simple habits will help you reduce the risks while still getting the benefits.

    1) Stay alert to indicators

    Always check for the flashing icon or audio cue that signals an AI notetaker is active.

    2) Control the settings

    If you’re the host, decide when AI should run. Limit its use to important meetings where notes are truly necessary.

    3) Choose recipients carefully

    Many platforms let you control who receives the notes. Make sure only the right people get access.

    4) Use private chats

    Need to share a side comment? Send it as a direct message rather than saying it out loud.

    5) Save personal talk for later

    Keep casual conversations off recorded calls. If you need to catch up, wait until the AI is off.

    6) Ask before enabling AI

    If you’re not the host, confirm that everyone is comfortable with AI note-taking. Setting expectations up front prevents awkward situations later.

    7) Review and edit recaps

    Check meeting notes before forwarding them. Edit or trim out personal chatter so only useful action items remain.

    8) Check where notes are stored

    Find out whether transcripts are saved in the cloud or on your device. Adjust retention settings so private conversations don’t linger longer than necessary.

    9) Follow company guidelines

    If your workplace doesn’t yet have a policy on AI notetakers, suggest one. Clear rules protect both employees and clients.

    10) Keep software updated

    AI features improve quickly. Updating your platform reduces errors, misheard comments and accidental leaks.

    What this means for you

    AI notetakers offer convenience, but they also reshape how we communicate at work. Once, small talk in meetings faded into the background. Now, even lighthearted comments can be captured, summarized and circulated. That shift means you need to think twice before speaking casually in a recorded meeting.

    Take my quiz: How safe is your online security?

    Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right – and what needs improvement. Take my Quiz here: Cyberguy.com.

    Kurt’s key takeaways

    The rise of AI in meetings shows both its promise and its pitfalls. You gain productivity, but risk oversharing. By understanding how these tools work and taking a few precautions, you can get the benefits without the embarrassment.

    Would you trust an AI notetaker to record your next meeting, knowing it might repeat your private conversations word for word? Let us know by writing to us at Cyberguy.com.

    Copyright 2025 CyberGuy.com. All rights reserved. 

  • Massive Leak Shows How a Chinese Company Is Exporting the Great Firewall to the World

    A leak of more than 100,000 documents shows that a little-known Chinese company has been quietly selling censorship systems seemingly modeled on the Great Firewall to governments around the world.

    Geedge Networks, a company founded in 2018 that counts the “father” of China’s massive censorship infrastructure as one of its investors, styles itself as a network-monitoring provider, offering business-grade cybersecurity tools to “gain comprehensive visibility and minimize security risks” for its customers, the documents show. In fact, researchers found that it has been operating a sophisticated system that allows users to monitor online information, block certain websites and VPN tools, and spy on specific individuals.

    Researchers who reviewed the leaked material found that the company is able to package advanced surveillance capabilities into what amounts to a commercialized version of the Great Firewall—a wholesale solution with both hardware that can be installed in any telecom data center and software operated by local government officers. The documents also discuss desired functions that the company is working on, such as cyberattack-for-hire and geofencing certain users.

    According to the leaked documents, Geedge has already entered operation in Kazakhstan, Ethiopia, Pakistan, and Myanmar, as well as another unidentified country. A public job posting shows that Geedge is also looking for engineers who can travel to other countries for engineering work, including to several countries not named in the leaked documents, WIRED has found.

    The files, including Jira and Confluence entries, source code, and correspondence with a Chinese academic institution, mostly involve internal technical documentation, operation logs, and communications to solve issues and add functionalities. Provided through an anonymous leak, the files were studied by a consortium of human rights and media organizations including Amnesty International, InterSecLab, Justice For Myanmar, Paper Trail Media, The Globe and Mail, the Tor Project, the Austrian newspaper Der Standard, and Follow The Money.

    “This is not like lawful interception that every country does, including Western democracies,” says Marla Rivera, a technical researcher at InterSecLab, a global digital forensics research institution. In addition to mass censorship, the system allows governments to target specific individuals based on their website activities, like having visited a certain domain.

    The surveillance system that Geedge is selling “gives so much power to the government that really nobody should have,” Rivera says. “This is very frightening.”

    Digital Authoritarianism as a Service

    At the core of Geedge’s offering is a gateway tool called Tiangou Secure Gateway (TSG), designed to sit inside data centers and scale to process the internet traffic of an entire country, documents reveal. According to researchers, every packet of internet traffic runs through it, where it can be scanned, filtered, or stopped outright. Besides monitoring all traffic, documents show that the system also allows operators to set up additional rules for specific users it deems suspicious and to collect their network activities.

    For unencrypted internet traffic, the system is able to intercept sensitive information such as website content, passwords, and email attachments, according to the leaked documents. If the content is properly encrypted through the Transport Layer Security protocol, the system uses deep packet inspection and machine learning techniques to extract metadata from the encrypted traffic and predict whether it’s going through a censorship circumvention tool like a VPN. If it can’t distinguish the content of the encrypted traffic, the system can also opt to flag it as suspicious and block it for a period of time.
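    The leaked documents do not spell out Geedge’s actual features, models or thresholds, but metadata-based classification of encrypted traffic generally follows the same pattern: extract coarse per-flow statistics such as packet sizes and timing, then apply a heuristic or trained model. A deliberately toy sketch of that idea, with an invented threshold (VPN tunnels often pad packets to uniform sizes, so unusually low size variance is treated here as suspicious):

```python
from statistics import mean, pstdev

def flow_features(packet_sizes, timestamps):
    """Coarse per-flow metadata: derived from packet sizes and timing only,
    with no payload inspection required."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return {
        "mean_size": mean(packet_sizes),
        "size_stdev": pstdev(packet_sizes),
        "mean_gap": mean(gaps) if gaps else 0.0,
    }

def looks_like_tunnel(features, size_stdev_max=20.0):
    """Toy heuristic (invented threshold): flag flows whose packet sizes
    are suspiciously uniform, as padded tunnel traffic often is."""
    return features["size_stdev"] < size_stdev_max
```

A real deployment would use far more features and a trained classifier rather than a single hand-picked threshold; the sketch only illustrates why such systems can flag encrypted traffic without ever decrypting it.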


    Zeyi Yang

  • How to protect your privacy at hotels

    You don’t have to be a celebrity to want hotel privacy. Many guests, like Carol from Wisconsin, wonder if hidden cameras or security lapses could affect their next trip.

    The good news: most hotels value guest privacy because it’s central to their business. Still, being aware and taking a few smart steps can give you extra peace of mind during your stay.

    SCHOOLS’ SAFETY TOOLS ARE SPYING ON KIDS — EVEN AT HOME

    The bottom line on hotel privacy: Risk is low, but awareness helps

    Hotels do not place cameras in guest rooms. Surveillance usually exists only in public spaces like lobbies, elevators or hallways. Even so, it’s worth learning how to check your surroundings and spot potential issues before settling in.

    Hotels stress guest privacy, yet a quick room sweep can ease concerns. (D.A. Varela/Miami Herald/Tribune News Service via Getty Images)

    How to do a hotel room sweep for hidden cameras

    Start by inspecting your room:

    • Shine your phone’s flashlight in dark corners. Camera lenses reflect light.
    • Check common hiding spots: smoke detectors, alarm clocks, USB chargers, lamps and picture frames.
    • Try the fingernail mirror test: touch your fingernail to a mirror. If the reflection meets your nail without a gap, it could be two-way glass.
    • Listen for faint buzzing or clicks that might come from disguised devices.

    Use your smartphone to detect hidden devices in hotels

    Your smartphone can help uncover suspicious devices.

    • Open your camera app to spot infrared lights. Many hidden cameras emit IR that shows up on screens.
    • Use scanning apps like Fing to check the Wi-Fi network for unusual device names like “IP Camera.” Remember: not all devices will appear.
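    The hostname check described above boils down to simple string matching against each device’s advertised name. A minimal sketch of that idea, with a made-up keyword list and device data (real scanners also fingerprint manufacturers via MAC addresses, which this sketch omits):

```python
# Hypothetical keyword list for illustration only.
SUSPICIOUS_KEYWORDS = ("camera", "ipcam", "cam", "spy", "dvr")

def flag_suspicious_devices(devices):
    """devices: iterable of (ip, hostname) pairs from a LAN scan.
    Returns the pairs whose hostname contains a suspicious keyword."""
    flagged = []
    for ip, name in devices:
        lowered = (name or "").lower()  # some devices report no name at all
        if any(keyword in lowered for keyword in SUSPICIOUS_KEYWORDS):
            flagged.append((ip, name))
    return flagged
```

As the article notes, this only catches devices that join the Wi-Fi network and advertise a telltale name; a hidden camera on its own cellular connection or with a renamed hostname would slip past.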

    High-tech tools for finding hidden cameras in hotels

    For longer trips or high-security situations, dedicated devices add reassurance:

    • IR lens detectors locate hidden camera reflections.
    • RF (radio-frequency) scanners pick up wireless signals from covert devices.

    These gadgets complement a manual sweep; they don’t replace it.

    Hidden camera detector apps for iPhone and Android

    If you’re worried about hidden cameras during a hotel stay, several iOS and Android apps claim to help, though their accuracy and costs vary.

    iOS

    • Hidden Camera Detector – Peek (Kupertino Labs): Has a clean interface and a high user rating, but some users say it only scans the Wi‑Fi network and may prompt a subscription to see results.
    • Spy Camera Scanner (AI APPS SRL): Promises IR signal detection and Bluetooth scanning. Simple, but again, the full functionality is gated behind a subscription.

    Android

    • Hidden Camera Detector (FutureApps): Uses your phone’s magnetic sensor to alert you near electronics and also scans for infrared light. Reviews suggest the free version can feel gimmicky, especially for magnetic detection.
    • Camera Detector: Hidden Spy (AppVillage Global): Offers a combo of Wi‑Fi scanning, magnetic sensor detection and metal detection, plus tips on common hiding spots. Visible complaints include relentless ads and paywalls.

    5 PHONE SETTINGS TO CHANGE RIGHT NOW FOR A SAFER SMARTPHONE

    Side view of a hotel room.

    Simple steps like locking doors and covering peepholes boost hotel security. (Martin Berry/UCG/Universal Images Group via Getty Images)

    What to do if you find a hidden camera in your hotel

    • Do not remove or tamper with the device.
    • Document with photos.
    • Notify hotel management immediately. Platforms like Airbnb and Vrbo ban undisclosed cameras.
    • For serious concerns, contact law enforcement before contacting the property owner.

    Smart security habits for every stage of your stay

    From check-in to check-out, taking a few simple precautions can help protect your privacy and keep you in control.

    1) Before you arrive

    Call ahead and ask:

    • 24/7 security: Confirm whether the hotel has round-the-clock protection.
    • Guest floor access: Ask if elevators and hallways are restricted to key holders.

    2) While you check in

    • Incognito listing: Request to be listed as “incognito” or use an alias.
    • Visitor control: Let staff know you are not expecting visitors.

    3) While in your room

    • Do Not Disturb: Ask the operator to block outside calls to your room line.
    • Mobile phone: Use your cell phone instead of the in-room phone.
    • Door security: Lock your door and use deadbolts or extra latches.
    • Window privacy: Close curtains and cover the peephole with tape or a sticker.
    • Sensitive transactions: Avoid banking or entering private logins on public Wi-Fi whenever possible.
    • VPN protection: Use a VPN when on hotel Wi-Fi to encrypt your connection and keep browsing private.

    For the best VPN software, see my expert review of the best VPNs for browsing the web privately on your Windows, Mac, Android and iOS devices at Cyberguy.com.

    Pro Tip: Install strong antivirus protection on all your devices before your trip. Use it to block malware, phishing attempts and other threats that often spread through hotel Wi-Fi.

    Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android and iOS devices at Cyberguy.com.

    4) When away from your room

    • Secure extra keycards: Keep any spares locked in the room safe to prevent misuse.
    • Rely on hotel safes when needed: Electronic safes are generally secure, but you can also log valuables with staff for an extra layer of protection.
    • Use built-in anti-theft tools for devices: Features like Find My iPhone or Find My Device (Android/Windows) help you locate or remotely wipe phones, laptops and tablets if they’re stolen.

    Try additional anti-theft apps: Tools such as iAlertU for MacBooks and PreyProject.com for Android and Windows laptops can add extra protection. Some even send you a photo if someone tampers with your device.

    5 DIRTIEST SPOTS IN HOTEL ROOMS: WHAT EXPERTS SAY TO DO AT CHECK-IN

    View of the hotel room from the entrance.

    Smartphones can reveal infrared signals from hidden spy devices. (Photo by: Martin Berry/UCG/Universal Images Group via Getty Images)

    Kurt’s key takeaways

    Your privacy matters, whether you’re staying one night or a full week. Most hotels respect guests, but technology has made it easier for bad actors to abuse trust. With these tips, apps and gadgets, you can stay in control and protect yourself.

    Would you feel safer if hotels were required to disclose their guest privacy and security practices before you book? Let us know by writing to us at Cyberguy.com.

    Copyright 2025 CyberGuy.com.  All rights reserved. 

  • I Hate My Friend

    Schiffmann seems to be doing well, compared to the last times either of us spoke to him. When he first announced the Friend, he talked about how he had come up with the idea for an AI buddy while traveling alone and yearning for companionship. Schiffmann presents himself as older now, wiser, more experienced than he was when he first debuted the Friend necklace. (He is 22.) He has grown out his hair and cultivated a beard, and he seems to have more real-life personal connections than when he first created the idea for Friend. In our meeting, he asked us not to unbox the devices in front of him because he is in love with someone and wants the first time he witnesses a Friend unboxing to be with her.

    Schiffmann says the Friend’s personality reflects a worldview close to his own; that of a man in his early twenties. But Schiffmann can be brash, snarky, and vocally unconcerned about critical feedback, and it seems like that attitude has carried over to the device he has infused with his essence. In this era of cloyingly obsequious chatbots, it could seem refreshing to interact with an AI companion that isn’t unfailingly sycophantic. But the Friend often goes hard in the other direction. Its tone comes off as opinionated, judgy, and downright condescending at times.

    We tested our two Friend pendants over the course of a couple of weeks, carrying them along with us as we went about our days separately, talking to them and getting to know how they work. While we had very different experiences, we both came away with the gut feeling that our new Friends were real bummers.

    Kylie’s Experience

    As I opened the Friend’s box, it brought me back to the time I cracked open my first iPod. That was by design, according to Schiffmann, who patterned the packaging after Apple’s audio player and Microsoft’s Zune, with liner notes inspired by the Radiohead album Pablo Honey. Within its white box, the Friend pendant glowed under a piece of parchment paper. It was nearly dead on arrival, and I had to charge it before I could use it. Our first interaction was a chime alerting me to its low battery.

    I couldn’t find satisfactory environments to test the always-listening Friend; the concerns about digital eavesdropping made it too much of a gamble. I couldn’t take it to meetings with my editors, and it felt uncomfortable to ask comms folks if I could bring it to a coffee chat. God forbid I use it in a call with a source.

    According to Friend’s privacy disclosure, the startup “does not sell data to third parties to perform marketing or profiling.” It may, however, use that data for research, personalization, or “to comply with legal obligations, including those under the GDPR, CCPA, and any other relevant privacy laws.” In other words, there’s a whole litany of ways the private conversations I have with people could make their way out into the ether.

    Boone Ashworth, Kylie Robison

  • Columbia University data breach hits 870,000 people

    Columbia University recently confirmed a major cyberattack that compromised personal, financial and health-related information tied to current and former students, applicants and employees. Notifications to affected individuals began Aug. 7 and are continuing on a rolling basis.

    Columbia, one of the oldest Ivy League universities, discovered the breach after a network outage in June. According to Columbia, the disruption was caused by an unauthorized party that accessed its systems and stole sensitive data. Investigators are still assessing the full scope of the theft.

    TRANSUNION BECOMES LATEST VICTIM IN MAJOR WAVE OF SALESFORCE-LINKED CYBERATTACKS, 4.4M AMERICANS AFFECTED

    Students on the campus of Columbia University April 14, 2025, in New York City. (Charly Triballeau/AFP via Getty Images)

    What information was stolen?

    According to a breach notification filed with the Maine Attorney General’s office, nearly 869,000 individuals were affected by the Columbia breach. This number includes students, employees, applicants and, in some cases, family members. Media outlets also reported that the threat actor claimed to have stolen approximately 460 gigabytes of data from Columbia’s systems.

    Columbia confirmed that the stolen information relates to admissions, enrollment and financial aid records, as well as certain employee data. The categories of exposed information include:

    • Names, dates of birth and Social Security numbers
    • Contact details and demographic information
    • Academic history and financial aid records
    • Insurance details and certain health information

    Columbia emphasized that patient records from Columbia University Irving Medical Center were not affected. Still, the breadth of stolen data poses serious risks of identity theft and fraud.

    DIOR DATA BREACH EXPOSES US CUSTOMERS’ PERSONAL INFORMATION

    Columbia University campus

    Columbia University campus (Luiz C. Ribeiro for New York Daily News/Tribune News Service via Getty Images)

    Columbia University response

    Columbia has reported the incident to law enforcement and is working with cybersecurity experts. The university said it has strengthened its systems with new safeguards and enhanced protocols to prevent future incidents.

    Starting Aug. 7, Columbia began mailing letters to those affected, offering two years of complimentary credit monitoring, fraud consultation and identity theft restoration services.

    When contacted, Columbia referred CyberGuy to its official community updates, published June 24 and Aug. 5.

    While the university says there is no evidence that the stolen data has been misused so far, the risk remains high. Criminals often wait months before exploiting stolen data.

    NEARLY A MILLION PATIENTS HIT BY DAVITA DIALYSIS RANSOMWARE ATTACK

    A computer with binary code

    Columbia University says a June network outage is to blame for the breach. (Silas Stein/picture alliance via Getty Images)

    Steps to protect yourself after the Columbia University breach

    If you are among those affected or simply want to safeguard your data, take these steps today:

    1) Monitor your credit reports

    Check your credit reports regularly through AnnualCreditReport.com. Look for accounts you did not open or changes you did not authorize. 

    2) Use a personal data removal service

    Since Columbia confirmed that stolen files may include names, addresses and demographic details, consider using a personal data removal service. These services help scrub your information from data brokers and people search sites, making it harder for criminals to exploit exposed details. This step reduces the chance that stolen Columbia records are linked to your broader online identity.

    While no service can guarantee the complete removal of your data from the internet, a data removal service is a smart choice. These services aren’t cheap, but neither is your privacy. They do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you.

    Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com/Delete

    Get a free scan to find out if your personal information is already out on the web: Cyberguy.com/FreeScan

    3) Set up fraud alerts and freezes

    Placing a fraud alert makes it harder for identity thieves to open accounts in your name. A credit freeze offers even stronger protection by blocking new credit applications.

    4) Use strong and unique passwords

    Create long, complex passwords for each account. A password manager can help generate and securely store them.
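    Under the hood, password generation is just sampling from a large character set with a cryptographically secure random source. A minimal sketch using Python’s standard secrets module (the length and character set below are arbitrary choices for illustration):

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a random password from letters, digits and punctuation,
    using the cryptographically secure `secrets` generator rather than
    the predictable `random` module."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

The point of the "unique per account" advice is that a generator like this never reuses a memorable pattern, so one breached site can’t unlock any other.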

    Next, see if your email has been exposed in past breaches. Our No. 1 password manager pick (see Cyberguy.com/Passwords) includes a built-in breach scanner that checks whether your email address or passwords have appeared in known leaks. If you discover a match, immediately change any reused passwords and secure those accounts with new, unique credentials.

    Check out the best expert-reviewed password managers of 2025 at Cyberguy.com/Passwords

    5) Enable two-factor authentication

    Turn on two-factor authentication (2FA) wherever possible. This extra layer of security helps protect your accounts even if a password is stolen.
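    The six-digit codes from authenticator apps are typically time-based one-time passwords (TOTP, defined in RFC 6238): an HMAC of the current 30-second time window, truncated to a few digits, so a stolen code expires within seconds. A compact sketch of the algorithm:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, digits: int = 6, step: int = 30) -> str:
    """Compute an RFC 6238 time-based one-time password (SHA-1 variant)."""
    counter = struct.pack(">Q", unix_time // step)   # 64-bit time-step counter
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

With the RFC’s published test secret (the ASCII string "12345678901234567890") at time 59, the eight-digit variant produces 94287082, matching the spec’s test vector.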

    6) Watch for phishing attempts and use strong antivirus software

    Scammers may try to exploit fear around the breach with fake emails or texts. Verify any message before clicking links or sharing personal information.

    The best way to safeguard yourself from malicious links that install malware, potentially accessing your private information, is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.

    Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android and iOS devices at Cyberguy.com/LockUpYourTech 

    7) Consider identity theft protection services

    Beyond the free credit monitoring Columbia offers, additional paid services can help track your data across the dark web and provide extra safeguards.

    Identity theft companies can monitor personal information like your Social Security number, phone number and email address and alert you if it is being sold on the dark web or being used to open an account.  They can also assist you in freezing your bank and credit card accounts to prevent further unauthorized use by criminals. 

    See my tips and best picks on how to protect yourself from identity theft at Cyberguy.com/IdentityTheft 

    Kurt’s key takeaways

    The Columbia University breach shows how even trusted institutions are vulnerable to cyberattacks. Because the investigation is ongoing and notifications will continue through the fall, individuals should remain on high alert. With so much personal, financial and health information exposed, staying alert long after the headlines fade is critical.

    What more should universities and large institutions be required to do to safeguard the personal data of the people who trust them? Let us know by writing to us at Cyberguy.com/Contact

    Copyright 2025 CyberGuy.com.  All rights reserved. 


  • AI company Anthropic to pay authors $1.5 billion over pirated books used to train chatbots

    Artificial intelligence company Anthropic has agreed to pay $1.5 billion to settle a class-action lawsuit by book authors who say the company took pirated copies of their works to train its chatbot.

    The landmark settlement, if approved by a judge as soon as Monday, could mark a turning point in legal battles between AI companies and the writers, visual artists and other creative professionals who accuse them of copyright infringement. The company has agreed to pay authors or publishers about $3,000 for each of an estimated 500,000 books covered by the settlement.

    “As best as we can tell, it’s the largest copyright recovery ever,” said Justin Nelson, a lawyer for the authors. “It is the first of its kind in the AI era.”

    A trio of authors — thriller novelist Andrea Bartz and nonfiction writers Charles Graeber and Kirk Wallace Johnson — sued last year and now represent a broader group of writers and publishers whose books Anthropic downloaded to train its chatbot Claude.

    A federal judge dealt the case a mixed ruling in June, finding that training AI chatbots on copyrighted books wasn’t illegal but that Anthropic wrongfully acquired millions of books through pirate websites. If Anthropic had not settled, experts say losing the case after a scheduled December trial could have cost the San Francisco-based company even more money.

    “We were looking at a strong possibility of multiple billions of dollars, enough to potentially cripple or even put Anthropic out of business,” said William Long, a legal analyst for Wolters Kluwer.

    U.S. District Judge William Alsup of San Francisco has scheduled a Monday hearing to review the settlement terms. Anthropic said in a statement Friday that the settlement, if approved, “will resolve the plaintiffs’ remaining legacy claims.”

    “We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems,” said Aparna Sridhar, the company’s deputy general counsel.

    As part of the settlement, the company has also agreed to destroy the original book files it downloaded.

    Books are known to be important sources of data — in essence, billions of words carefully strung together — that are needed to build the AI large language models behind chatbots like Anthropic’s Claude and its chief rival, OpenAI’s ChatGPT. Alsup’s June ruling found that Anthropic had downloaded more than 7 million digitized books that it “knew had been pirated.” It started with nearly 200,000 from an online library called Books3, assembled by AI researchers outside of OpenAI to match the vast collections on which ChatGPT was trained.

    Debut thriller novel “The Lost Night” by Bartz, a lead plaintiff in the case, was among those found in the dataset. Anthropic later took at least 5 million copies from the pirate website Library Genesis, or LibGen, and at least 2 million copies from the Pirate Library Mirror, Alsup wrote.

    The Authors Guild told its thousands of members last month that it expected “damages will be minimally $750 per work and could be much higher” if Anthropic was found at trial to have willfully infringed their copyrights. The settlement’s higher award — approximately $3,000 per work — likely reflects a smaller pool of affected books, after taking out duplicates and those without copyright.

    On Friday, Mary Rasenberger, CEO of the Authors Guild, called the settlement “an excellent result for authors, publishers, and rightsholders generally, sending a strong message to the AI industry that there are serious consequences when they pirate authors’ works to train their AI, robbing those least able to afford it.”

    The Danish Rights Alliance, which successfully fought to take down one of those shadow libraries, said Friday that the settlement would be of little help to European writers and publishers whose works aren’t registered with the U.S. Copyright Office. “On the one hand, it’s comforting to see that compiling AI training datasets by downloading millions of books from known illegal file-sharing sites comes at a price,” said Thomas Heldrup, the group’s head of content protection and enforcement.

    On the other hand, Heldrup said it fits a tech industry playbook to grow a business first and later pay a relatively small fine, compared to the size of the business, for breaking the rules. “It is my understanding that these companies see a settlement like the Anthropic one as a price of conducting business in a fiercely competitive space,” Heldrup said.

    The privately held Anthropic, founded by ex-OpenAI leaders in 2021, earlier this week put its value at $183 billion after raising another $13 billion in investments. Anthropic also said it expects to make $5 billion in sales this year, but, like OpenAI and many other AI startups, it has never reported making a profit, relying instead on investors to back the high costs of developing AI technology for the expectation of future payoffs.

    The settlement could influence other disputes, including an ongoing lawsuit by authors and newspapers against OpenAI and its business partner Microsoft, and cases against Meta and Midjourney. And just as the Anthropic settlement terms were filed, another group of authors sued Apple on Friday in the same San Francisco federal court.

    “This indicates that maybe for other cases, it’s possible for creators and AI companies to reach settlements without having to essentially go for broke in court,” said Long, the legal analyst.

    The industry, including Anthropic, had largely praised Alsup’s June ruling because he found that training AI systems on copyrighted works so chatbots can produce their own passages of text qualified as “fair use” under U.S. copyright law because it was “quintessentially transformative.” Comparing the AI model to “any reader aspiring to be a writer,” Alsup wrote that Anthropic “trained upon works not to race ahead and replicate or supplant them — but to turn a hard corner and create something different.”

    But documents disclosed in court showed Anthropic employees’ internal concerns about the legality of their use of pirate sites. The company later shifted its approach and hired Tom Turvey, the former Google executive in charge of Google Books, a searchable library of digitized books that successfully weathered years of copyright battles. With his help, Anthropic began buying books in bulk, tearing off the bindings and scanning each page before feeding the digitized versions into its AI model, according to court documents. That was legal but didn’t undo the earlier piracy, according to the judge.

    Artificial intelligence company Anthropic has agreed to pay $1.5 billion to settle a class-action lawsuit by book authors who say the company took pirated copies of their works to train its chatbot.

    Related video above: The risks to children under President Trump’s new AI policy

    The landmark settlement, if approved by a judge as soon as Monday, could mark a turning point in legal battles between AI companies and the writers, visual artists and other creative professionals who accuse them of copyright infringement.

    The company has agreed to pay authors or publishers about $3,000 for each of an estimated 500,000 books covered by the settlement.
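
    The reported figures are internally consistent; as a quick sanity check, the approximate per-book award times the estimated number of covered books gives the headline settlement amount:

```python
# Settlement arithmetic using the figures reported above.
per_book = 3_000       # approximate award per book
books = 500_000        # estimated books covered by the settlement
total = per_book * books
print(total)  # 1500000000, i.e. the $1.5 billion settlement
```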

    “As best as we can tell, it’s the largest copyright recovery ever,” said Justin Nelson, a lawyer for the authors. “It is the first of its kind in the AI era.”

    A trio of authors — thriller novelist Andrea Bartz and nonfiction writers Charles Graeber and Kirk Wallace Johnson — sued last year and now represent a broader group of writers and publishers whose books Anthropic downloaded to train its chatbot Claude.

    A federal judge dealt the case a mixed ruling in June, finding that training AI chatbots on copyrighted books wasn’t illegal but that Anthropic wrongfully acquired millions of books through pirate websites.

    If Anthropic had not settled, experts say losing the case after a scheduled December trial could have cost the San Francisco-based company even more money.

    “We were looking at a strong possibility of multiple billions of dollars, enough to potentially cripple or even put Anthropic out of business,” said William Long, a legal analyst for Wolters Kluwer.

    U.S. District Judge William Alsup of San Francisco has scheduled a Monday hearing to review the settlement terms.

    Anthropic said in a statement Friday that the settlement, if approved, “will resolve the plaintiffs’ remaining legacy claims.”

    “We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems,” said Aparna Sridhar, the company’s deputy general counsel.

    As part of the settlement, the company has also agreed to destroy the original book files it downloaded.

    Books are an important source of data — in essence, billions of words carefully strung together — for building the large language models behind chatbots like Anthropic’s Claude and its chief rival, OpenAI’s ChatGPT.

    Alsup’s June ruling found that Anthropic had downloaded more than 7 million digitized books that it “knew had been pirated.” It started with nearly 200,000 from an online library called Books3, assembled by AI researchers outside of OpenAI to match the vast collections on which ChatGPT was trained.

    Debut thriller novel “The Lost Night” by Bartz, a lead plaintiff in the case, was among those found in the dataset.

    Anthropic later took at least 5 million copies from the pirate website Library Genesis, or LibGen, and at least 2 million copies from the Pirate Library Mirror, Alsup wrote.

    The Authors Guild told its thousands of members last month that it expected “damages will be minimally $750 per work and could be much higher” if Anthropic was found at trial to have willfully infringed their copyrights. The settlement’s higher award — approximately $3,000 per work — likely reflects a smaller pool of affected books, after taking out duplicates and those without copyright.

    On Friday, Mary Rasenberger, CEO of the Authors Guild, called the settlement “an excellent result for authors, publishers, and rightsholders generally, sending a strong message to the AI industry that there are serious consequences when they pirate authors’ works to train their AI, robbing those least able to afford it.”

    The Danish Rights Alliance, which successfully fought to take down one of those shadow libraries, said Friday that the settlement would be of little help to European writers and publishers whose works aren’t registered with the U.S. Copyright Office.

    “On the one hand, it’s comforting to see that compiling AI training datasets by downloading millions of books from known illegal file-sharing sites comes at a price,” said Thomas Heldrup, the group’s head of content protection and enforcement.

    On the other hand, Heldrup said it fits a tech industry playbook to grow a business first and later pay a relatively small fine, compared to the size of the business, for breaking the rules.

    “It is my understanding that these companies see a settlement like the Anthropic one as a price of conducting business in a fiercely competitive space,” Heldrup said.

    The privately held Anthropic, founded by ex-OpenAI leaders in 2021, earlier this week put its value at $183 billion after raising another $13 billion in investments.

    Anthropic also said it expects to make $5 billion in sales this year, but, like OpenAI and many other AI startups, it has never reported a profit, relying instead on investors to cover the high costs of developing AI technology in the expectation of future payoffs.

    The settlement could influence other disputes, including an ongoing lawsuit by authors and newspapers against OpenAI and its business partner Microsoft, and cases against Meta and Midjourney. And just as the Anthropic settlement terms were filed, another group of authors sued Apple on Friday in the same San Francisco federal court.

    “This indicates that maybe for other cases, it’s possible for creators and AI companies to reach settlements without having to essentially go for broke in court,” said Long, the legal analyst.

    The industry, including Anthropic, had largely praised Alsup’s June ruling because he found that training AI systems on copyrighted works so chatbots can produce their own passages of text qualified as “fair use” under U.S. copyright law because it was “quintessentially transformative.”

    Comparing the AI model to “any reader aspiring to be a writer,” Alsup wrote that Anthropic “trained upon works not to race ahead and replicate or supplant them — but to turn a hard corner and create something different.”

    But documents disclosed in court showed Anthropic employees’ internal concerns about the legality of their use of pirate sites. The company later shifted its approach and hired Tom Turvey, the former Google executive in charge of Google Books, a searchable library of digitized books that successfully weathered years of copyright battles.

    With his help, Anthropic began buying books in bulk, tearing off the bindings and scanning each page before feeding the digitized versions into its AI model, according to court documents. That was legal but didn’t undo the earlier piracy, according to the judge.


  • Delivery giant’s data breach exposes 40,000 personal records



    Thousands of people have had their sensitive personal information exposed in a data breach at U.S. delivery company OnTrac. The breach occurred between April 13 and April 15, 2025, and impacted over 40,000 individuals across the country.

    OnTrac operates 64 facilities in 31 states and runs four major sorting centers nationwide. The company, acquired by LaserShip in 2021, has annual revenues of roughly $1.5 billion.

    The breach notification letters confirm that attackers accessed sensitive data that can fuel identity theft and fraud.

    Sign up for my FREE CyberGuy Report
    Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM/NEWSLETTER  

    TRANSUNION BECOMES LATEST VICTIM IN MAJOR WAVE OF SALESFORCE-LINKED CYBERATTACKS, 4.4M AMERICANS AFFECTED

    OnTrac data breach puts tens of thousands at risk of identity theft, exposing personal information including Social Security numbers. (Kurt “CyberGuy” Knutsson)

    OnTrac data breach exposes sensitive information

    According to documents filed with the Maine Attorney General, cybercriminals may have gained access to:

    • Names and dates of birth
    • Social Security numbers (SSNs)
    • Driver’s license or state IDs
    • Medical information
    • Health insurance information

    Unlike stolen credit cards, medical data and SSNs cannot simply be replaced. That makes this breach especially dangerous.

    Why the OnTrac breach puts your identity at risk

    Exposed SSNs and IDs create serious risks of identity theft. Criminals could open fraudulent bank accounts, file false tax returns or take over benefits.

    The exposure of medical records adds another layer of risk. Stolen health data is valuable on the dark web, where it can be used for extortion, fraudulent insurance claims or illegal prescription drug purchases.

    Fox News Digital reached out to OnTrac for comment but did not immediately hear back.

    NEARLY A MILLION PATIENTS HIT BY DAVITA DIALYSIS RANSOMWARE ATTACK

    Hacker typing on a keyboard.

    Cybercriminals steal Social Security and medical data in an OnTrac hack, exposing users’ data. (Annette Riedl/picture alliance via Getty Images)

    How to protect yourself after the OnTrac data breach

    If you received an OnTrac breach notification letter, or even if you simply want to be proactive, here are key steps you can take to reduce your risk of identity theft and fraud.

    1) Enroll in free credit monitoring

    OnTrac is offering 12 months of complimentary credit monitoring and identity protection. Use the activation code included in your breach letter to set up your coverage. These services can alert you if new accounts are opened in your name or if suspicious activity appears on your credit file. Even if you weren’t directly affected, consider signing up for a trusted identity protection service, since hackers often recycle stolen data across multiple breaches.

    Identity theft companies can monitor personal information – like your Social Security number (SSN), phone number and email address – and alert you if it is being sold on the dark web or being used to open an account. They can also assist you in freezing your bank and credit card accounts to prevent further unauthorized use by criminals.

    See my tips and best picks on how to protect yourself from identity theft at Cyberguy.com/IdentityTheft

    2) Freeze your credit

    Place a free credit freeze with all three major credit bureaus: Equifax, Experian and TransUnion. This blocks criminals from opening new credit lines using your information. A freeze doesn’t affect your current accounts, and you can temporarily lift it when applying for legitimate credit.

    3) Use a personal data removal service

    Your breached data may already be circulating on shady broker sites. Personal data removal services can help scrub your information from these databases, reducing the risk that criminals will resell or reuse your details. While no service can guarantee 100% protection, this step can shrink your digital footprint significantly.

    They aren’t cheap, and neither is your privacy. These services do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you.

    Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com/Delete

    Get a free scan to find out if your personal information is already out on the web: Cyberguy.com/FreeScan

    4) Watch for phishing attempts and use antivirus software

    After breaches like OnTrac’s, scammers often send fake emails, texts or calls pretending to be your bank, insurer or even OnTrac itself. Do not click on links or open attachments from unknown senders. Instead, contact the company directly using a verified phone number or website. Use strong antivirus software to add an extra layer of defense.

    The best way to safeguard yourself from malicious links that install malware, which potentially accesses your private information, is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.
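
    One heuristic that mail filters apply, sketched here as a minimal illustration (the `suspicious_links` helper and the sample message are hypothetical, not part of any real product), is flagging links whose visible text shows one domain while the underlying href points somewhere else:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    """Collect (href, visible text) pairs from an HTML email body."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

def suspicious_links(html_body):
    """Flag anchors whose visible text names one domain but whose
    href actually points somewhere else -- a classic phishing tell."""
    auditor = LinkAuditor()
    auditor.feed(html_body)
    flagged = []
    for href, text in auditor.links:
        shown = urlparse(text).netloc or (text if "." in text else "")
        actual = urlparse(href).netloc
        if shown and actual and not actual.endswith(shown.split("/")[0]):
            flagged.append((href, text))
    return flagged

# Hypothetical message: the text shows ontrac.com, but the link goes elsewhere.
email = '<a href="http://evil.example.net/x">https://www.ontrac.com</a>'
print(suspicious_links(email))  # the mismatched link is flagged
```

    A link whose displayed text and actual target agree passes quietly; a mismatch is exactly what "hover before you click" is meant to catch.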

    Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com/LockUpYourTech 

    5) Monitor your medical benefits

    Stolen personal data can also be used for medical identity theft. Regularly check your health insurance Explanation of Benefits (EOB) statements for claims you don’t recognize. Report suspicious charges to your insurer right away – unfamiliar claims could mean someone is trying to use your benefits.

    6) Enable multi-factor authentication (MFA)

    For any online account that supports it, especially banking, insurance, and email, turn on multi-factor authentication. MFA makes it much harder for criminals to break in, even if they have your password.
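
    Authenticator-app codes, one common second factor, are typically time-based one-time passwords (TOTP, RFC 6238). A minimal sketch of how a code is derived from a shared secret and the current time:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, period=30):
    """Derive a time-based one-time password (RFC 6238, SHA-1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if t is None else t) // period)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: the ASCII secret "12345678901234567890" at t = 59 seconds.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59, digits=8))  # 94287082
```

    Because the code depends on both the secret and the clock, a stolen password alone is not enough to log in.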

    7) Set up account alerts

    Most banks and credit card issuers let you receive real-time text or email alerts for purchases, withdrawals and logins. These alerts can help you spot unauthorized activity quickly, giving you a better chance of stopping fraud before it escalates.

    MEDICARE DATA BREACH EXPOSES 100,000 AMERICANS’ INFO

    Hacker typing code.

    Over 40,000 Americans were exposed in a massive OnTrac security breach that leaked sensitive medical and financial records. (Jakub Porzycki/NurPhoto via Getty Images)

    Kurt’s key takeaways

    The OnTrac data breach is a harsh reminder that sensitive information can slip into the wrong hands in just days, yet the effects can last for years. While you cannot undo what happened, you can take practical steps right now to lower your risk. Freezing your credit, turning on alerts and keeping an eye on medical and financial accounts give you back some control. By staying alert and using the tools available, you can make it much harder for criminals to misuse your personal details. A little effort today can save you from big headaches tomorrow.


    Do you think companies should face tougher penalties when they fail to protect sensitive personal and medical data? Let us know by writing to us at Cyberguy.com/Contact


    Copyright 2025 CyberGuy.com. All rights reserved.


  • ICE Has Spyware Now


    The Biden administration considered spyware used to hack phones controversial enough that it was tightly restricted for US government use in an executive order signed in March 2023. In Trump’s no-holds-barred effort to empower his deportation force—already by far the most well-funded law enforcement agency in the US government—that’s about to change, and the result could be a powerful new form of domestic surveillance.

    Multiple tech and security companies—including Cloudflare, Palo Alto Networks, Spycloud, and Zscaler—have confirmed customer information was stolen in a hack that originally targeted a chatbot system belonging to sales and revenue generation company Salesloft. The sprawling data theft started in August, but in recent days more companies have revealed they had customer information stolen.

    Toward the end of August, Salesloft first confirmed it had discovered a “security issue” in its Drift application, an AI chatbot system that allows companies to track potential customers who engage with the chatbot. The company said the security issue is linked to Drift’s integration with Salesforce. Between August 8 and August 18, hackers used compromised OAuth tokens associated with Drift to steal data from accounts.

    Google’s security researchers revealed the breach at the end of August. “The actor systematically exported large volumes of data from numerous corporate Salesforce instances,” Google wrote in a blog post, pointing out that the hackers were looking for passwords and other credentials contained in the data. More than 700 companies may have been impacted, with Google later saying it had seen Drift’s email integration being abused.

    On August 28, Salesloft paused its Salesforce-Salesloft integration as it investigated the security issues; then on September 2 it said, “Drift will be temporarily taken offline in the very near future” so it can “build additional resiliency and security in the system.” It’s likely more companies impacted by the attack will notify customers in the coming days.

    Obtaining intelligence on the internal workings of the Kim regime that has ruled North Korea for three generations has long presented a serious challenge for US intelligence agencies. This week, The New York Times revealed in a bombshell account of a highly classified incident how far the US military went in one effort to spy on the regime. In 2019, SEAL Team 6 was sent to carry out an amphibious mission to plant an electronic surveillance device on North Korean soil—only to fail and kill a boatful of North Koreans in the process. According to the Times’ account, the Navy SEALs got as far as swimming onto the shores of the country in mini-subs deployed from a nuclear submarine. But due to a lack of reconnaissance and the difficulty of surveilling the area, the special forces operators were confused by the appearance of a boat in the water, shot everyone aboard, and aborted their mission. The North Koreans in the boat, it turned out, were likely unwitting civilians diving for shellfish. The Trump administration, the Times reports, never informed leaders of congressional committees that oversee military and intelligence activities.

    Phishing remains one of the oldest and most reliable ways for hackers to gain initial access to a target network. One study suggests a reason why: Training employees to detect and resist phishing attempts is surprisingly tough. In a study of 20,000 employees at the health care provider UC San Diego Health, simulated phishing attempts designed to train staff resulted in only a 1.7 percent decrease in the staff’s failure rate compared to staff who received no training at all. That’s likely because staff simply ignored or barely registered the training, the study found: In 75 percent of cases, the staff member who opened the training link spent less than a minute on the page. Staff who completed a training Q&A, by contrast, were 19 percent less likely to fail on subsequent phishing tests—still hardly a very reassuring level of protection. The lesson? Find ways to detect phishing that don’t require the victim to spot the fraud. As is often noted in the cybersecurity industry, humans are the weakest link in most organizations’ security—and they appear stubbornly determined to stay that way.

    Online piracy is still big business—last year, people made more than 216 billion visits to piracy sites streaming movies, TV, and sports. This week, however, the largest illegal sports streaming platform, Streameast, was shut down following an investigation by anti-piracy industry group the Alliance for Creativity and Entertainment and authorities in Egypt. Before the takedown, Streameast operated a network of 80 domains that saw more than 1.6 billion visits per year. The piracy network streamed soccer games from England’s Premier League and other matches across Europe, plus NFL, NBA, NHL, and MLB games. According to The Athletic, two men in Egypt were reportedly arrested on copyright infringement charges, and authorities found links to a shell company allegedly used to launder around $6.2 million in advertising revenue over the past 15 years.


    Matt Burgess, Andy Greenberg, Lily Hay Newman


  • Notorious people search site returns after massive breach



    Over a year ago, National Public Data (NPD) made headlines for one of the largest breaches in history. The people search site exposed the personal information of 3 billion individuals. After disappearing from the internet, the site has returned under new ownership, sparking fresh concerns about privacy.

    Sign up for my FREE CyberGuy Report
    Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CyberGuy.com newsletter.

    Who owns National Public Data now?

    NPD has relaunched under Perfect Privacy LLC, a company that sounds protective but is not affiliated with the VPN service of the same name. Despite the new name behind the scenes, the business model has not changed. The site still allows anyone to look up personal data about friends, relatives or strangers with just a name.

    MAJOR DATA BROKER HACK IMPACTS 364,000 INDIVIDUALS’ DATA

    Although NPD includes disclaimers about the Fair Credit Reporting Act (FCRA), nothing prevents users from misusing this data when making decisions about employment, housing or credit.

    National Public Data, a people search site, exposed the personal information of 3 billion individuals in one of the largest breaches ever. (NPD)

    Accuracy issues and data sources

    According to NPD, the data comes from public records, property ownership databases, social media and government agencies. The company claims to verify and filter this information to ensure it is accurate and up to date. However, users may find that profiles still contain mistakes.

    A quick test search revealed a mix of outdated and accurate information. The site struggled with hyphenated names but pulled up correct details in other cases.

    We reached out to NPD for a comment but did not hear back before our deadline.

    A woman types on a laptop on a wooden table.

    A woman searches for herself online. (Kurt “CyberGuy” Knutsson)

    How to remove your information from NPD

    If you prefer not to have your information available on NPD’s site, you can request removal directly:

    • Search your name on nationalpublicdata.com.
    • Open your profile and copy its URL.
    • Visit nationalpublicdata.com/optout.html.
    • Paste the URL into the “Your Profile Link” field.
    • Enter your email address to confirm deletion.

    Keep in mind that each profile needs its own request and confirmation email. After you submit, check back in a few days to make sure your data is actually gone.
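
    Because each profile needs its own request and a follow-up check, it can help to track your submissions. A small sketch (the `OptOutRequest` class is illustrative, not an NPD tool, and the profile URL is a placeholder):

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class OptOutRequest:
    """Track one per-profile opt-out submission and its follow-up."""
    profile_url: str
    submitted: date
    confirmed: bool = False

    def due_for_recheck(self, today=None, wait_days=3):
        # Per the steps above: check back a few days after submitting
        # to make sure the profile is actually gone.
        today = today or date.today()
        return not self.confirmed and today >= self.submitted + timedelta(days=wait_days)

req = OptOutRequest("https://example.com/profile/123", date(2025, 9, 1))
print(req.due_for_recheck(today=date(2025, 9, 5)))  # True: time to verify removal
```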

    Pro tip: Use an alias or disposable email address for these requests. This makes it easier to track confirmations and keeps your main inbox clear.

    For recommendations on private and secure email providers that offer alias addresses, visit CyberGuy.com.

    The opt-out page of the National Public Data site

    NPD offers an opt-out function for users who prefer not to have their information available on the site. (NPD)

    Why NPD is only part of the problem

    Removing your information from National Public Data is only the beginning. Dozens of other people search sites may still display your personal details. Many of these platforms pull from the same public databases, which means your address, phone number or relatives’ names can keep reappearing.

    The return of NPD shows how quickly data can resurface online. A breach may fade from the headlines, but the exposed information rarely disappears. To protect your privacy, you need a broader plan.

    1) Regularly check for your data

    Start by searching for your name on people search engines several times a year. Look beyond NPD. Sites like Whitepages, Spokeo and Radaris often host similar data. Regular checks help you spot new profiles before they spread further.

    STOP DATA BROKERS FROM SELLING YOUR INFORMATION ONLINE

    2) Use a personal data removal service

    A removal service can save time by scanning hundreds of databases at once. These services request opt-outs on your behalf and track new listings. While no service can guarantee the complete removal of your data from the internet, a data removal service is a smart choice. They aren’t cheap, and neither is your privacy. These services do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you.

    Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting CyberGuy.com.


    3) Monitor your accounts closely and use strong antivirus software

    Protecting your privacy goes beyond deleting profiles. Keep an eye on your bank statements, credit reports and online accounts. Criminals can use exposed data for phishing attempts, fake loan applications or identity theft. Monitoring activity gives you an early warning if something looks suspicious.

    The best way to safeguard yourself from malicious links that install malware, which could potentially access your private information, is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.

    Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android and iOS devices at CyberGuy.com.

    4) Set up alerts for your identity

    Many banks and credit monitoring services let you create alerts for suspicious activity. You can also set up free fraud alerts with the credit bureaus. These warnings tell lenders to take extra steps to verify your identity before issuing new credit.

    Identity theft companies can monitor personal information like your Social Security number, phone number and email address, and alert you if it is being sold on the dark web or being used to open an account. They can also assist you in freezing your bank and credit card accounts to prevent further unauthorized use by criminals.

    See my tips and best picks on how to protect yourself from identity theft at CyberGuy.com.

    5) Remove data at the source

    People search sites often pull records from government databases. Review your local county’s property, court or voter registration records. Some jurisdictions allow you to request redactions or limit what is shown online.

    6) Lock down your social media

    Since NPD and similar sites scrape from social media, tighten your privacy settings. Limit what strangers can see on Facebook, LinkedIn and other platforms. The less public information you share, the less these databases can collect.


    Kurt’s key takeaways

    The return of National Public Data is a reminder that your personal information can resurface at any time. Even if you remove yourself from one site, dozens more may still hold your details. That is why protecting your privacy requires more than a quick opt-out. With regular checks, credit freezes and stronger account monitoring, you can reduce your risk and stay one step ahead.

    Do you think stronger laws should be in place to stop companies from collecting and selling personal data, or is it up to individuals to protect themselves? Let us know by writing to us at CyberGuy.com.




  • Google facing $425.7 million in damages for nearly a decade of improper smartphone snooping

    [ad_1]

    SAN FRANCISCO — A federal jury has ordered Google to pay $425.7 million for improperly snooping on people’s smartphones during a nearly decade-long period of intrusions.

The verdict reached Wednesday in San Francisco federal court followed a trial of more than two weeks in a class-action case covering about 98 million smartphones operating in the United States from July 1, 2016, through Sept. 23, 2024. That means the total damages awarded in the five-year-old case work out to about $4 per device.
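The per-device figure quoted in the coverage can be sanity-checked with quick arithmetic; a minimal sketch in Python:

```python
# Sanity-check the per-device damages figure reported in the verdict coverage.
total_damages = 425_700_000   # $425.7 million awarded by the jury
devices = 98_000_000          # ~98 million smartphones covered by the class

per_device = total_damages / devices
print(f"${per_device:.2f} per device")
```

$425.7 million spread across roughly 98 million devices comes to about $4.34 each, which the coverage rounds to $4.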

    Google had denied that it was improperly tracking the online activity of people who thought they had shielded themselves with privacy controls. The company maintained its stance even though the eight-person jury concluded Google had been spying in violation of California privacy laws.

    “This decision misunderstands how our products work, and we will appeal it,” Google spokesman Jose Castaneda said Thursday. “Our privacy tools give people control over their data, and when they turn off personalization, we honor that choice.”

The lawyers who filed the case argued that Google had used data collected from smartphones without users’ permission to help sell ads tailored to their individual interests, a strategy that brought the company billions of dollars in additional revenue. The lawyers framed those ad sales as illegal profiteering that merited damages of more than $30 billion.

    Even though the jury came up with a far lower calculation for the damages, one of the lawyers who brought the case against Google hailed the outcome as a victory for privacy protection.

    “We hope this result sends a message to the tech industry that Americans will not sit idly by as their information is collected and monetized against their will,” said attorney John Yanchunis of law firm Morgan & Morgan.

    The San Francisco jury verdict came a day after Google avoided the U.S. Department of Justice’s attempt to break up the company in a landmark antitrust case in Washington, D.C., targeting its dominant search engine. A federal judge who had declared Google’s search engine to be an illegal monopoly ordered less radical changes, including requiring the company to share some of its search data with rivals.

    [ad_2]

    Source link

  • Don’t use your home Wi-Fi before fixing certain security risks

    [ad_1]

    NEWYou can now listen to Fox News articles!

    Home Wi-Fi networks are the backbone of how most people get online, connecting laptops, phones, smart TVs and more. When properly secured, they offer a convenient and private way to browse the internet, stream content and work from home. But “private” doesn’t always mean “safe.” Wi-Fi security can be easily compromised if you have weak settings or outdated equipment.

    I recently heard from Carol in Smithtown, New York, who asked, “Is it safe to browse the internet on your own laptop using only your home Wi-Fi?”

    Her question points to a bigger concern. Many of us rely on home networks every day without really knowing if they’re as secure as they should be.

    Let’s break down what makes a home Wi-Fi network secure, the risks you should know about and the steps you can take to protect your privacy.

    Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CyberGuy.com newsletter.

    A user troubleshoots an internet router. (Kurt “CyberGuy” Knutsson)

    Why home Wi-Fi security is important

    Your home Wi-Fi is not just a way to get online but also the gateway to your personal and professional life. Everything from online banking to work emails to video calls passes through it. If your network isn’t secure, that information could be intercepted or exposed.

    One of the biggest misconceptions is that a home network is safe simply because it’s private. In reality, hackers often target residential networks because they tend to have weaker defenses than corporate ones.

    Someone nearby could connect to your network if your password is weak or your encryption is outdated. This not only slows your internet but also lets them use your connection for illegal activities. Sensitive information like passwords, credit card numbers and personal documents can be intercepted if the network is compromised.

    11 EASY WAYS TO PROTECT YOUR ONLINE PRIVACY IN 2025

    Attackers can use an insecure network to push malicious software onto your devices, sometimes without you even realizing it. Smart devices such as security cameras, thermostats and speakers can be taken over and used for spying or as part of larger cyberattacks.

    Even if you trust everyone in your household, your network is still exposed to risks from outside. And with so many devices connected today, including laptops, phones, tablets, TVs and IoT gadgets, there are more entry points for an attacker than ever before. Securing your Wi-Fi closes those doors before someone decides to try them. 

    Illustration of a hacker at work

    A hacker executes cybercrime. (Kurt “CyberGuy” Knutsson)

    Choose the right router for Wi-Fi protection

    Every piece of Wi-Fi security advice ultimately comes back to the same foundation: your router. It is the gatekeeper for your entire home network. If it is old, poorly configured, or missing important updates, even the strongest passwords and best digital habits will not fully protect you.

    Investing in a good router is one of the most important steps you can take to secure your home Wi-Fi. A modern, well-supported router gives you stronger encryption, better control over connected devices and regular updates that patch security flaws.

    Don’t stop at the hardware itself. Check regularly for firmware updates from the manufacturer. Some new routers update automatically, but many require you to log in and install patches manually. Outdated firmware leaves known vulnerabilities wide open.

    IS YOUR HOME WI-FI REALLY SAFE? THINK AGAIN

    Also, change the default router login password immediately. Most routers ship with basic credentials like “admin/admin.” Attackers know this and can easily hijack your settings if you never change them.

    If your router supports it, enable two-factor authentication (2FA) for logins. This extra step makes it much harder for attackers to gain control, even if they steal your password.
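Router vendors implement 2FA in different ways, and no particular mechanism is specified here. Purely as an illustration, the rotating one-time codes used by many authenticator apps follow the HOTP/TOTP scheme (RFC 4226/6238), which can be sketched with Python’s standard library:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 one-time code: HMAC-SHA1 over the counter, then truncate."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    """RFC 6238: the counter is the current 30-second time window."""
    return hotp(secret, int(time.time() // step), digits)

# RFC 4226 test vectors for the ASCII secret "12345678901234567890"
secret = b"12345678901234567890"
print(hotp(secret, 0))  # 755224
print(hotp(secret, 1))  # 287082
```

Because the code changes every 30 seconds, a stolen password alone is not enough to log in, which is exactly the protection described above.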

If you are not sure where to begin, see my list of some of the best and most secure routers you can buy right now at CyberGuy.com.

    Enable strong Wi-Fi encryption

    Encryption ensures that the data flowing across your network is scrambled, making it useless to anyone who tries to intercept it. Without proper encryption, nearby attackers can capture and read your traffic.

    The current gold standard is WPA3, which provides the strongest protection. If your router doesn’t support it, WPA2 is still considered safe and widely used. Older options like WEP or an open, password-free network are highly insecure and should be avoided at all costs. It’s worth logging into your router’s settings just to confirm what level of encryption your network is using.

    Image of a home router in use

    Cables run out of an internet router. (Kurt “CyberGuy” Knutsson)

    Create a strong Wi-Fi password

Your Wi-Fi password is the digital equivalent of the key to your home. A short or predictable password is like leaving a spare key under the doormat; anyone determined enough can find a way in. Instead, create a long passphrase that combines upper and lowercase letters, numbers and symbols. Aim for at least 12 to 16 characters.
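A sketch of what such a passphrase generator could look like, using Python’s cryptographically secure `secrets` module (the helper name and symbol set are illustrative choices, not anything prescribed here):

```python
import secrets
import string

def make_wifi_password(length: int = 16) -> str:
    """Generate a random Wi-Fi passphrase with a cryptographic RNG."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(make_wifi_password())  # different every run, e.g. 'q7T!xK2-fWm$9pZa'
```

Unlike `random`, the `secrets` module is designed for security-sensitive values, so the result is suitable for a network key.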

    Consider using a password manager to generate and store complex passwords.

Check out the best expert-reviewed password managers of 2025 at CyberGuy.com.

    Check who is connected to your network

    Even with a solid password, it’s smart to check who is actually connected to your network. Most routers allow you to view a list of active devices. If you spot something you don’t recognize, investigate. It could be a neighbor piggybacking on your connection or, in the worst case, an intruder.
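Your router’s admin page is the authoritative device list, but on many systems you can cross-check it against what your own computer has seen on the local network via `arp -a`. The sketch below parses sample output in that common format; the addresses are made up for illustration:

```python
import re

# Sample output in the common `arp -a` format. On a real machine you would
# capture this with subprocess.run(["arp", "-a"], capture_output=True, text=True).
sample = """\
router.lan (192.168.1.1) at a4:2b:b0:11:22:33 on en0 ifscope [ethernet]
? (192.168.1.23) at 08:00:27:aa:bb:cc on en0 ifscope [ethernet]
? (192.168.1.42) at de:ad:be:ef:00:01 on en0 ifscope [ethernet]
"""

def parse_arp(output: str) -> list[tuple[str, str]]:
    """Return (ip, mac) pairs from `arp -a`-style output."""
    pattern = re.compile(r"\((\d+\.\d+\.\d+\.\d+)\) at ([0-9a-f:]{17})")
    return pattern.findall(output)

for ip, mac in parse_arp(sample):
    print(ip, mac)
```

Any MAC address that doesn’t match one of your own devices is worth investigating in the router’s admin interface.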

    It also helps to disable Wi-Fi Protected Setup (WPS). This feature was designed to make connecting new devices easier, but it has security flaws that attackers can exploit. Some users go further by enabling MAC address filtering, which limits access to specific devices.

    To reduce risk even more, set up a separate guest network for smart devices and visitors. That way, if one device gets hacked, your laptops and phones remain protected.

    And remember, keep all your devices updated. From laptops and phones to smart bulbs and thermostats, every gadget is a potential entry point. A weak link in one device can put your entire network at risk.

    IS YOUR PHONE HACKED? HOW TO TELL AND WHAT TO DO

    Protect your privacy with a VPN

    A Virtual Private Network, or VPN, helps solve one of the biggest issues with online privacy, which is who can see what you’re doing. When you connect through a VPN, it creates an encrypted tunnel between your device and the websites or apps you use. Everything that travels through this tunnel is hidden from outsiders, including your internet provider.

    A reliable VPN is essential for protecting your online privacy and ensuring a secure, high-speed connection.

See my expert review of the best VPNs for browsing the web privately on your Windows, Mac, Android and iOS devices at CyberGuy.com.

    Don’t overlook antivirus protection

While your Wi-Fi settings form the first line of defense, you should also protect the devices connected to your network. Install strong antivirus software on all your devices to block malware that could spread through downloads, emails or malicious links. This extra step ensures that even if a threat slips past your router’s defenses, your devices stay protected.

Strong antivirus software can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.

Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android and iOS devices at CyberGuy.com.

    CLICK HERE TO GET THE FOX NEWS APP 

    Kurt’s key takeaway

    So, to return to Carol’s question: Is it safe to browse the internet on your home Wi-Fi? The answer is yes, but only if you take the time to secure it. Strong router settings, proper encryption and a solid password do most of the heavy lifting. Building habits such as checking who is connected, keeping devices updated and using tools like a VPN adds even greater peace of mind.

When was the last time you checked your router settings or updated its firmware? Let us know by writing to us at CyberGuy.com.

    Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CyberGuy.com newsletter.

Copyright 2025 CyberGuy.com. All rights reserved.

    [ad_2]

    Source link