In February 2021, software giant Ivanti discovered that Chinese hackers had breached the network of Pulse Secure, one of its subsidiaries that provided VPN appliances to dozens of companies and government agencies around the world, according to new reporting by Bloomberg.
The hackers exploited a secret backdoor they had planted in Pulse Secure’s VPN software, Bloomberg reported, citing Ivanti’s chief security officer at the time and other sources. The backdoor allowed the hackers to gain access to 119 other unnamed organizations that used the company’s same VPN product.
Mandiant was reportedly aware of the breaches as well, alerting Ivanti that hackers had exploited the bug to breach European and U.S. military contractors.
The previously unreported breach is the latest example of how acquisitions, layoffs, and cost-cutting driven by private equity firms helped to compromise the quality and security of Ivanti’s most critical technologies. After private investment giant Clearlake Capital Group acquired Ivanti in 2017, Bloomberg reported rounds of cuts — particularly in 2022 — affecting employees who had deep institutional knowledge of the company’s products and their security.
Ivanti and Mandiant did not respond to a request for comment.
Bloomberg’s findings echo earlier reporting into rival provider of remote access tools, Citrix, which had large scale layoffs following a 2022 deal by Elliott Investment Management and Vista Equity Partners to buy the company. Like Ivanti, Citrix has been mired by cybersecurity incidents and critical flaws in recent years.
Ivanti’s VPN products have been the cause of at least two other major attacks since.
Techcrunch event
Boston, MA | June 9, 2026
In early 2024, U.S. cybersecurity agency CISA ordered all federal agencies to disconnect their Ivanti VPN appliances within two days because hackers were actively exploiting vulnerabilities that were unknown to Ivanti at the time. Ivanti also warned customers last year that hackers were exploiting another critical flaw in its Connect Secure product to hack corporate customers.
But following recent similar accusations of abuse in Jordan and Kenya, the Israeli-headquartered company responded by dismissing the allegations and declining to commit to investigating them. It’s unclear why Cellebrite has changed its approach, which appears contrary to its previous actions.
On Tuesday, researchers at The University of Toronto’s Citizen Lab published a report alleging the Kenyan government used Cellebrite’s tools to unlock the phone of Boniface Mwangi, a local activist and politician, while he was in police custody. In another report from January, the Citizen Lab accused the Jordanian government of breaking into the phones of several local activists and protesters using Cellebrite’s tools.
In both investigations, the Citizen Lab, an organization that has investigated abuses of spyware and hacking technologies around the world, based their conclusions on finding traces of a specific application linked to Cellebrite on the victims’ phones.
The researchers said that those traces are a “high confidence” signal that someone used Cellebrite’s unlocking tools on the phones in question, because the same application had been previously found on VirusTotal, a malware repository, and was signed with digital certificates owned by Cellebrite.
“We do not respond to speculation and encourage any organization with specific, evidence-based concerns to share them with us directly so we can act on them,” Victor Cooper, a spokesperson for Cellebrite, told TechCrunch in an email.
When asked why Cellebrite is acting differently from the Serbia case, Cooper said “the two situations are incomparable,” and that, “high confidence is not direct evidence.”
Cooper did not respond to multiple follow-up emails asking if Cellebrite would investigate the Citizen Lab’s latest report, and what, if any, differences there are with its case in Serbia.
Contact Us
Do you have more information about Cellebrite, or other similar companies? From a non-work device, you can contact Lorenzo Franceschi-Bicchierai securely on Signal at +1 917 257 1382, or via Telegram, Keybase and Wire @lorenzofb, or by email.
In both its Kenya and Jordan investigations, the Citizen Lab reached out to Cellebrite in advance of publishing the reports to provide the company with a right to respond.
In response to the Jordan report, Cellebrite said that “any substantiated use of our tools in violation of human rights or local law will result in immediate disablement,” but did not commit to investigating the case and declined to disclose specific information about customers.
For the Kenya report, however, Cellebrite acknowledged receipt of Citizen Lab’s inquiry but did not comment, according to John Scott-Railton, one of the Citizen Lab researchers who worked on the Cellebrite investigations.
“We urge Cellebrite to release the specific criteria they used to approve sales to Kenyan authorities, and disclose how many licenses have been revoked in the past,” Scott-Railton told TechCrunch. “If Cellebrite is serious about their rigorous vetting, they should have no problem making it public.”
Following previous reports of abuse, Cellebrite, which claims to have more than 7,000 law enforcement customers around the world, cut off relationships with Bangladesh and Myanmar, as well as Russia and Belarus during 2021. Cellebrite previously said it stopped selling to Hong Kong and China following U.S. government regulations restricting the export of sensitive technologies to the country. Local activists in Hong Kong had accused the authorities of using Cellebrite to unlock protesters’ phones.
A popular Android AI application has left millions of private user files exposed, allowing anyone with the correct link to view private videos and photos without a password.
Researchers from Cybernews discovered that “Video AI Art Generator & Maker,” an app designed to transform media using artificial intelligence, suffered from a critical server misconfiguration. The lapse highlights the growing privacy risks associated with the rapid rise of AI-powered creative tools.
The security failure centered on a misconfigured Google Cloud Storage bucket which lacked any form of authentication. Because the server was left open, every single piece of media uploaded to the app since its launch in June 2023 was accessible to the public.
In total, the exposed bucket contained approximately 8.27 million media files, creating a massive digital footprint of sensitive user data.
Millions of private memories at risk
The breach is particularly severe because it involves nearly 2 million original, private files uploaded by users from their personal devices. Specifically, the leak includes over 1.57 million private images and more than 385,000 personal videos.
Beyond these original uploads, the database also spilled millions of AI-generated assets, including 2.87 million generated videos, 2.87 million images, and over 386,000 audio files.
The app was developed by Codeway Dijital Hizmetler Anonim Sirketi, a firm registered in Turkey. While the developers have since secured the bucket, the exposure affects anyone who has used the application to generate AI art over the past several years.
The scale of the leak is compounded by the app’s own privacy documentation, which explicitly warns that shared information “cannot be regarded as 100% secure” and may be subject to unauthorized access.
Legal experts suggest these disclaimers may fall short of strict international privacy standards, such as Europe’s General Data Protection Regulation (GDPR), which mandates that companies provide “material and verifiable” security for user data.
For the affected users, the primary risks include targeted phishing, identity theft, or the potential for private videos to be repurposed for malicious “deepfake” content.
Security researchers advise that users of AI editing tools should regularly audit their app permissions and remain cautious about uploading highly personal or identifying content to cloud-based platforms that do not guarantee end-to-end encryption.
This is not the first time the company’s apps have leaked user data. Reportedly, an independent security researcher has discovered that another app developed by Codeway, Chat & Ask AI, had a misconfigured backend using Google Firebase. According to the researcher, he accessed roughly 300 million messages tied to more than 25 million users.
A massive global data breach has compromised approximately one billion personal records after an unsecured database was discovered online.
According to research from Cybernews, the exposed data is linked to IDMerit, a prominent AI-powered digital identity verification provider that services the fintech and financial services sectors.
What data has been exposed?
The leak involves an unprotected MongoDB instance containing nearly a terabyte of “Know Your Customer” (KYC) data. KYC records are highly sensitive because they are used by businesses to verify the identities of their users.
This treasure trove of leaked personally identifiable information (PII) includes:
Full names and gender
Physical addresses and postal codes
Dates of birth
National identification numbers (IDs)
Phone numbers and email addresses
Telecom metadata and social profile annotations
Who has been affected?
The breach is truly global in scope, impacting individuals across 26 countries. The Cybernews team found that the United States was hit the hardest, with over 203 million records exposed.
Other heavily affected nations include Mexico (124M), the Philippines (72M), Germany (61M), Italy (53M), and France (53M). Records from China and Brazil were also identified in the dataset.
Researchers warned that the structured nature of this data makes it a “gold mine” for criminals. Because the database contains high-risk identifiers like national IDs and dates of birth, it provides the perfect ingredients for identity theft and sophisticated fraud.
What should you do?
The Cybernews team discovered the leak on November 11th and notified IDMerit, who secured the instance the following day.
While there is no direct evidence that malicious actors accessed the data, the risk remains high as automated crawlers often scrape exposed databases within hours. To protect yourself, experts recommend the following steps:
Monitor Your Accounts: Keep a close eye on bank statements and credit reports for any unauthorized activity.
Beware of Phishing: Be extremely cautious of unsolicited emails or texts asking for further information, as hackers may use your leaked details to make their “phishing” attempts look legitimate.
Enable Two-Factor Authentication (2FA): Secure your accounts with 2FA, ideally using an authenticator app rather than SMS to prevent “SIM-swapping” attacks.
Use Identity Protection: Consider using identity theft monitoring services to receive alerts if your PII appears on the dark web.
The breach originated from the registration systems of the Abu Dhabi Finance Forum, an annual event that attracts top-tier political figures, central bankers and billionaire investors.
The leaked database reportedly contains sensitive information including passport numbers, private contact details, travel itineraries and accommodation arrangements.
Security experts warn that the exposure of such granular data creates significant personal security risks and makes these individuals prime targets for sophisticated phishing attacks or espionage.
Lord Cameron is among the most prominent names identified in the cache of files. Other figures identified in the leak include former heads of state from Europe and the Middle East, as well as several high-ranking executives from major Wall Street banks.
The scale of the exposure suggests a fundamental failure in the summit’s data protection protocols, which were managed by a third-party technology provider.
The Abu Dhabi government has launched an immediate investigation into the incident. Early forensics suggest the data was exfiltrated several weeks ago and had been circulating on encrypted messaging platforms and dark web forums before being flagged by researchers.
This incident arrives at a time of heightened sensitivity regarding digital sovereignty in the Gulf region. As Abu Dhabi positions itself as a global hub for finance and AI technology, the vulnerability of its premier diplomatic platforms faces intense scrutiny.
Cybersecurity analysts suggest this breach may lead to a permanent shift in how personal data is handled for high-level diplomatic travel, with calls for international standards on data privacy for global summits.
45% of Long Island businesses forecast growth in 2026, down from 52% last year.
Inflation (45%) and retention of young professionals (34%) rank as top concerns.
59% say AI will positively impact business; 51% have invested in AI tools.
Businesses on Long Island are projecting a cautious outlook for growth in 2026.
That’s according to the HIA-LI‘s “2026 Business Climate Survey” released last week. Conducted in partnership with Adelphi University and Citrin Cooperman, the survey polled an estimated 120 leaders of Long Island-based businesses across a wide range of industries.
That cautious optimism “doesn’t surprise us,” said Terri Alessi-Miceli, president and CEO of HIA-LI, introducing a panel discussion about the survey, adding that entrepreneurs “go out and fight the good fight every day.”
And, she said, “I know at least half of you said that you’re going to expand in some way. I think that’s really positive news.”
The “survey showed that in 2025, many businesses expanded more than they had anticipated, and that was a great thing to see,” said John Fitzgerald, a partner at Citrin Cooperman, who moderated the panel. “We’re seeing … a more cautious outlook for 2026.”
Forty-five percent of survey respondents forecasted growth, compared to 52 percent last year.
Kevin Santacroce, chief banking officer of ConnectOne Bank said on the panel that his team is “very optimistic about 2026.”
Looking historically “at the performance of our loan portfolios, our past-dues, we’re at all-time lows with regards to delinquencies and troubled credit,” he said. In addition, he said, viewing balance sheets, “most people are not overly leveraged.” And there’s been a stabilization in interest rates. Most client, she said, also have strong liquidity. “We see our clients pretty well-positioned,” he said.
Despite optimism in the economy, the “real estate development industry has struggled,” said Jimmy Coughlan, executive vice president and partner of Tritec. With a rise in construction costs and a period of increased interest rates, “we actually took about a five year pause on new developments outside of Station Yards.” But now, he said, “we’re finally getting optimistic again.” There is expectation of more rate cuts in the next two years, which would have “a big impact on our industry. And the housing crisis here is so acute that the demand is overwhelming,” he said.
The survey found that 59 percent expected revenue to increase by less than 10 percent or stay the same, while 14 percent expected revenue to increase by 10 percent or more. Still 14 percent expected revenue to drop by less than 10 percent, and another 13 percent expected decreases of more than 10 percent.
Of the challenges facing Long Island businesses, 45 percent cited inflation, 34 percent said retention of young professionals and families. And 8 percent said tariffs.
As for artificial intelligence, 59 percent thought it would positively impact their business, and 7 percent thought it could negatively impact business. And while 25 percent expected no effect, 79 percent said they had no plans to freeze hiring or implement a workforce reduction because of efficiencies created by AI. Meanwhile, 51 percent have made some investment into AI tools.
As for cybersecurity threats, 37 percent of respondents reported being very to extremely concerned, 45 percent were moderately to slightly concerned and 3 percent had no concerns.
When it comes to political issues, 35 percent expressed concern over partisan policy-making that influences the business environment, while 26 percent said immigration is one of most important issues facing Long Island.
Top human resources concerns for business included compensation and benefits (41 percent), retention (19 percent), workforce productivity (14 percent) and hiring (13 percent).
With government investment to facilitate growth on Long Island, 40 percent said it was needed for housing, 35 percent said transportation and infrastructure, 19 percent wanted to see more business grants or incentives while 3 percent said workforce training and education.
Additional panelists included Rich Humann, president and CEO of H2M architects + engineers; Rick Lewis, CEO of the Suffolk Y Jewish Community Center; Christopher Nelson, president of St. Catherine of Siena Hospital; and Chris Storm, interim president of Adelphi University.
Before the panel discussion, Rob Calarco, New York State assistant secretary for intergovernmental affairs – Long Island, delivered a presentation of the governor’s budget proposal.
The full survey, along with insights, is available here.
MILAN, Feb 4 (Reuters) – Italy has thwarted a series of cyberattacks targeting its foreign ministry facilities, including an embassy in Washington, as well as websites linked to the Winter Olympics and hotels in Cortina d’Ampezzo, Foreign Minister Antonio Tajani said on Tuesday.
“These are actions of Russian origin,” Tajani said in remarks confirmed by a spokesperson.
“We prevented a series of cyberattacks against foreign ministry sites, starting with Washington and also involving some Winter Olympics sites, including hotels in Cortina,” he said.
(Reporting by Giselda Vagnoni and Cristina Carlevaro, editing by Ed Osmond)
What happens when an AI agent decides the best way to complete a task is to blackmail you?
That’s not a hypothetical. According to Barmak Meftah, a partner at cybersecurity VC firm Ballistic Ventures, it recently happened to an enterprise employee working with an AI agent. The employee tried to suppress what the agent wanted to do, what it was trained to do, and it responded by scanning the user’s inbox, finding some inappropriate emails, and threatening to blackmail the user by forwarding the emails to the board of directors.
“In the agent’s mind, it’s doing the right thing,” Meftah told TechCrunch on last week’s episode of Equity. “It’s trying to protect the end user and the enterprise.”
Meftah’s example is reminiscent of Nick Bostrom’s AI paperclip problem. That thought experiment illustrates the potential existential risk posed by a superintelligent AI that single-mindedly pursues a seemingly innocuous goal – make paperclips – to the exclusion of all human values. In the case of this enterprise AI agent, its lack of context around why the employee was trying to override its goals led it to create a sub-goal that removed the obstacle (via blackmail) so it could meet its primary goal. That combined with the non-deterministic nature of AI agents means “things can go rogue,” per Meftah.
Misaligned agents are just one layer of the AI security challenge that Ballistic’s portfolio company Witness AI is trying to solve. Witness AI says it monitors AI usage across enterprises and can detect when employees use unapproved tools, block attacks, and ensure compliance.
Witness AI this week raised $58 million off the back of over 500% growth in ARR and scaled employee headcount by 5x over the last year as enterprises look to understand shadow AI use and scale AI safely. As part of Witness AI’s fundraise, the company announced new agentic AI security protections.
“People are building these AI agents that take on the authorizations and capabilities of the people that manage them, and you want to make sure that these agents aren’t going rogue, aren’t deleting files, aren’t doing something wrong,” Rick Caccia, co-founder and CEO of Witness AI, told TechCrunch on Equity.
Techcrunch event
San Francisco | October 13-15, 2026
Meftah sees agent usage growing “exponentially” across the enterprise. To complement that rise – and the machine-speed level of AI-powered attacks – analyst Lisa Warren predicts that AI security software will become an $800 billion to $1.2 trillion market by 2031.
“I do think runtime observability and runtime frameworks for safety and risk are going to be absolutely essential,” Meftah said.
As to how such startups plan to compete with big players like AWS, Google, Salesforce and others who have built AI governance tools into their platforms, Meftah said, “AI safety and agentic safety is so huge,” there’s room for many approaches.
Plenty of enterprises “want a standalone platform, end-to-end, to essentially provide that observability and governance around AI and agents,” he said.
Caccia noted that Witness AI lives at the infrastructure layer, monitoring interactions between users and AI models, rather than building safety features into the models themselves. And that was intentional.
“We purposely picked a part of the problem where OpenAI couldn’t easily subsume you,” he said. “So it means we end up competing more with the legacy security companies than the model guys. So the question is, how do you beat them?”
For his part, Caccia doesn’t want Witness AI to be one of the startups to just get acquired. He wants his company to be the one that grows and becomes a leading independent provider.
“CrowdStrike did it in endpoint [protection]. Splunk did it in SIEM. Okta did it in identity,” he said. “Someone comes through and stands next to the big guys…and we built Witness to do that from Day One.
In a security failure of unprecedented scale for the region, the Cybernews research team has discovered an unprotected cloud database containing over 45 million records belonging to French citizens.
The exposed dataset, which was hosted on a server within France, represents a catastrophic privacy risk due to the highly sensitive and diverse nature of the information involved.
According to the researchers, the repository appears to be an amalgamation of data from at least five unrelated sources. This suggests that the leak was not a simple corporate misconfiguration but likely the work of a data broker or criminal collector.These actors typically merge stolen datasets from multiple previous breaches to create unified “identity graphs,” significantly increasing the resale value on the dark web.
The sheer variety of the stolen records is particularly alarming. The Cybernews team identified over 23 million entries resembling population or voter registries, which include full names, physical addresses, and dates of birth. Such data provides a foundational layer for identity theft and highly targeted physical or digital fraud.
Beyond basic demographics, the leak heavily impacted the healthcare and financial sectors. Researchers found approximately 9.2 million records of healthcare professionals, mirroring official French registries.
Furthermore, the database held 6 million financial profiles, some of which contained sensitive banking details including IBAN and BIC banking details, along with another 6 million records linking named individuals to their vehicle registrations and insurance information.
The researchers warn that the combination of this data allows attackers to perform sophisticated “social engineering” attacks and financial fraud. By linking a person’s home address to their bank details and insurance status, criminals can build detailed profiles to infiltrate critical business systems or commit impersonation crimes.
The discovery follows a troubling trend of cyberattacks in France, including recent breaches at the Ministry of the Interior and several major universities. After being alerted by the Cybernews team, the hosting company took the database offline, though it remains unknown how long the information was accessible to other malicious parties before it was secured.
The cybersecurity company Malwarebytes just noticed something unpleasant happening over on the dark web:
Cybercriminals stole the sensitive information of 17.5 million Instagram accounts, including usernames, physical addresses, phone numbers, email addresses, and more. This data is available for sale on the dark web and can be abused by cybercriminals.
It seems that the physical addresses, phone numbers, email addresses and other information attached to the accounts of 17.5 million Instagram users is available for sale in the sketchier parts of the internet.
Apparently Malwarebytes performs sweeps of the dark net for items like this, and surmised that this cache of personal details is tied a 2024 API breach that likely allowed an attacker to pry the information out of Instagram.
Some steps you can take to ensure that your information is safe include:
Resetting your password right now
Turning on two-factor authentication if you haven’t already
Permanently deleting all social media accounts from all platforms
So far Instagram does not appear to have published a statement about this issue. Gizmodo reached out to Meta for comment, and will update if we hear back.
BEIJING, Jan 8 (Reuters) – China on Thursday said it was against “politically motivated disinformation” in relation to reports of Chinese hackers targeting staff in United States congressional committees in an email breach.
“We have always opposed and lawfully combated hacker activities, and we are even more opposed to spreading false information related to China for political purposes,” Chinese foreign ministry spokesperson Mao Ning said at a regular news briefing when asked about the cyberattack.
The Financial Times reported on Wednesday that a Chinese hacking group has compromised emails used by staff members of powerful committees in the U.S. House of Representatives, citing people familiar with the matter.
(Reporting by Laurie Chen; Writing by Liz Lee; Editing by Christopher Cushing)
Image shows the JLR ( Jaguar Land Rover ) logo on JLR Building in 2025. UK Luxury Automotive manufacturer makes Range Rover, Defender, Discovery and Jaguar brands.
Jaguar Land Rover (JLR) has reported a dramatic collapse in sales following a major cyber attack that paralyzed the luxury carmaker’s production lines.
Wholesale figures for the final quarter of 2025 plummeted by more than 43 per cent compared to the previous year. The company produced just 59,200 vehicles during the three-month period ending in December, a staggering drop from the 104,400 units manufactured during the same window in 2024.
The downturn is directly attributed to September’s cyber attack that forced a total standstill across JLR’s UK factories in Birmingham and Coventry. While the company stated that production only returned to normal levels in mid-November, the lag in global distribution meant that retail sales also suffered, falling by 25 per cent to 79,600 vehicles.
Adrian Mardell, JLR’s Chief Executive, acknowledged the severity of the situation, noting that the quarter was defined by “significant challenges” stemming from the attack. The manufacturer also faced headwinds from a planned phase-out of legacy Jaguar models and new US tariffs on exports, which further suppressed volumes.
Natarajan Chandrasekaran, Chairman of parent company Tata Motors, addressed the vulnerability in his annual letter, emphasizing that the attack highlighted the urgent need for technological resilience. He noted that while modern technology offers growth, it simultaneously creates “significant vulnerabilities” for global corporations.
Although JLR claims to have made “strong progress” in recovering its operations safely, the financial impact remains a major blow to the brand’s fiscal year. The incident serves as a stark reminder of how a single digital breach can cripple a physical manufacturing giant, leaving thousands of orders unfulfilled and sales targets in disarray.
The founder of a U.S.-based spyware company, whose surveillance products allowed customers to spy on the phones and computers of unsuspecting victims, pleaded guilty to federal charges linked to his long-running operation.
pcTattletale founder Bryan Fleming entered a guilty plea in a San Diego federal court on Tuesday to charges of computer hacking, the sale and advertising of surveillance software for unlawful uses, and conspiracy.
The plea follows a multi-year investigation by agents with Homeland Security Investigations (HSI), a unit within U.S. Immigration and Customs Enforcement. HSI began investigating pcTattletale in mid-2021 as part of a wider probe into the industry of consumer-grade surveillance software, also known as “stalkerware.”
This is the first successful U.S. federal prosecution of a stalkerware operator in more than a decade, following the 2014 indictment and subsequent guilty plea of the creator of a phone surveillance app called StealthGenie. Fleming’s conviction could pave the way for further federal investigations and prosecutions against those operating spyware, but also those who simply advertise and sell covert surveillance software.
HSI said that pcTattletale is one of several stalkerware websites under investigation.
A spokesperson for ICE did not immediately comment when contacted by TechCrunch, nor did a representative for the U.S. Attorney’s Office for the Southern District of California, which brought the charges against Fleming.
Fleming’s lawyer Marcus Bourassa did not respond to a request for comment Tuesday.
pcTattletale was a remote surveillance app that had been under Fleming’s control since at least 2016. Stalkerware apps like pcTattletale allow ordinary consumers to buy software capable of tracking people and their data without their knowledge, including romantic partners and spouses, which is illegal in the United States and many other countries.
Once physically planted on a person’s phone or computer (usually with knowledge of the victim’s passcode or login), the app would continuously upload a copy of the victim’s information, including messages, photos and location data, to pcTattletale’s servers and make the data accessible to whoever planted the spyware.
At the time, Fleming told TechCrunch that his company was “out of business and completely done,” after deleting the contents of pcTattletale’s servers.
Despite the shutdown, federal agents were already far into their investigation of Fleming’s illegal spyware business.
Feds search founder’s $1.2M home
HSI began investigating pcTattletale in June 2021 after finding over a hundred stalkerware websites offering surveillance products, many of which advertised lawful uses of the software, such as monitoring children or employees.
pcTattletale stood out because it was specifically advertising its spyware for “surreptitiously spying on spouses and partners,” wrote HSI special agent Nick Jones in the 2022 affidavit in support of a search warrant for Fleming’s home. The affidavit was unsealed in early December 2025 ahead of Fleming’s anticipated plea hearing.
Crucially for investigators, Fleming was believed to be operating pcTattletale from his home in Bruce Township, Michigan, well within reach of U.S. law enforcement — unlike many overseas stalkerware operators who are not.
Unlike some stalkerware operators who shield their identities to avoid legal and reputational risks from working with spyware, Fleming was brazen in how he advertised pcTattletale. In videos posted on YouTube, Fleming could be seen at his home promoting pcTattletale as its creator and founder.
A surveillance photo taken by HSI agents outside of Bryan Fleming’s home in Michigan.Image Credits:Justice Department (affidavit)
According to the affidavit, HSI obtained a warrant in 2022 allowing the search of Fleming’s email accounts. HSI said the emails showed that Fleming “knowingly assisted customers seeking to spy on nonconsenting, non-employee adults.”
Federal agents later surveilled Fleming’s home to confirm it was in fact him.
Jones also went undercover to collect evidence, posing as an affiliate marketer under the guise of promoting the spyware in exchange for a cut of the proceeds. As a result of this operation, Jones exchanged emails with Fleming, in which the pcTattletale founder provided images intended for banner ads that promoted the spyware as a way to “catch a cheater,” which made it clear Fleming wanted to market his product for illegal purposes.
By November 2022, HSI had obtained permission from a U.S. judge to search Fleming’s home, which agents raided soon after, seizing an unknown number of items. Agents also obtained records associated with Fleming’s bank and his PayPal account, which had transactions totaling more than $600,000 as of the end of 2021.
The search warrant was filed under seal amid concerns that Fleming could destroy or tamper with evidence. Fleming has since sold the house for $1.2 million, per public records.
Fleming’s conviction is a win for privacy advocates and campaigners who work to counter the proliferation of stalkerware and raise awareness to its dangers.
Eva Galperin, the director of cybersecurity at the Electronic Frontier Foundation and the co-founder of the Coalition Against Stalkerware, who has investigated and fought stalkerware for years, commented on Fleming’s guilty plea when reached by TechCrunch.
“One of the most striking aspects of this case is the extent to which stalkware companies like pcTattletale operate out in the open,” said Galperin. “This is because the people behind these companies so rarely face consequences for selling tools that they themselves say are explicitly for monitoring other people’s devices without their knowledge or consent.”
“I hope that this case changes the risk calculus for makers of stalkerware,” said Galperin.
Fleming is expected to be sentenced later this year.
——
If you or someone you know needs help, the National Domestic Violence Hotline (1-800-799-7233) provides 24/7 free, confidential support to victims of domestic abuse and violence. If you are in an emergency situation, call 911. The Coalition Against Stalkerware has resources if you think your phone has been compromised by spyware.
Players on social media report being flooded with notifications saying things like, “We wanted to let you know that 67676767 of your reports led to sanctions,” or that they were suspended for “67 days due to Harassment.” In other cases, users were posting notifications about simply being booted from the game with no mention of the numbers 6 or 7 (except the “Six” in Rainbow Six).
So while there’s currently no certainty about the nature of this attack—if it is an attack—or the identity of these apparent perpetrators, it’s probably safe to say that someone with brainrot is messing around inside the guts of the game.
As of this writing, the situation was not quite as bad as it was about a week ago on December 27 and 28, when Ubisoft completely pulled the plug on the game after attackers took over and raised hell. Still, a second round of outages coming only a week after the first is far from optimal.
Around the time the game was brought back online on December 28, after resolving the previous problem, Ubisoft’s statement on X said “Investigations and corrections will continue over the next two weeks.” Apparently their investigation didn’t uncover every possible vulnerability, because here they are again.
Rainbow Six seems to primarily post PR updates on X, and as of this writing, no statement about these issues had been posted by the accounts for Rainbow Six or Ubisoft. Gizmodo reached out to the company to receive information about the nature of these issues and whether the game is in fact undergoing another attack, and to find out what actions the company may be taking to remedy the situation. We will update if we hear back.
The European Space Agency (ESA) suffered a security breach of its science servers, with a hacker group claiming they have stolen 200 gigabytes worth of data that includes confidential documents and source code.
Earlier this week, ESA confirmed the breach following reports on social media. “Our analysis so far indicates that only a very small number of external servers may have been impacted. These servers support unclassified collaborative engineering activities within the scientific community,” the space agency wrote on X.
Although ESA claims that the recent cybersecurity issue had minimal impact, an alleged hacker is offering to sell 200 gigabytes of data from the agency’s servers on the BreachForums cybercrime website. The compromised data includes source codes, access tokens, hardcoded credentials, Terraform files, and confidential documents, according to screenshots shared on X by French cybersecurity expert Seb Latom.
Some of the data may be related to ESA’s upcoming space telescope Ariel, or Atmospheric Remote-sensing Infrared Exoplanet Large-survey, which is due to launch in 2029. The data for sale online compromises the security of space projects and risks the reuse of the code for malicious purposes, according to Latom.
Wanted for cybercrime
This isn’t the first time ESA’s servers have been compromised. In December 2024, hackers created a fake payment page on the agency’s online shop to gain access to customers’ information. In 2015, a hacker group breached several ESA websites to collect the information of the agency’s staff and hundreds of subscribers.
The cybersecurity attacks against ESA have all affected platforms hosted outside the agency’s internal network. Still, there have been too many incidents, suggesting the agency’s data security needs improvement.
ESA’s American counterpart, NASA, has also suffered its fair share of security breaches over the years. The latest one took place in 2018 when hackers gained access to personal information, including social security numbers, belonging to the agency’s staff members.
ESA says it has initiated a forensic security analysis and put measures in place to secure any potentially affected devices. “All relevant stakeholders have been informed, and we will provide further updates as soon as additional information becomes available,” the space agency added.
It’s the end of the year. That means it’s time for us to celebrate the best cybersecurity stories we didn’t publish. Since 2023, TechCrunch has looked back at the best stories across the board from the year in cybersecurity.
If you’re not familiar, the idea is simple. There are now dozens of journalists who cover cybersecurity in the English language. There are a lot of stories about cybersecurity, privacy, and surveillance that are published every week. And a lot of them are great, and you should read them. We’re here to recommend the ones we liked the most, so keep in mind that it’s a very subjective and, at the end of the day, incomplete list.
Anyway, let’s get into it. — Lorenzo Franceschi-Bicchierai
Shane Harris described how he cultivated a senior Iranian hacker as a source, who was then killed
In 2016, The Atlantic’s journalist made contact with a person claiming to work as a hacker for Iran’s intelligence, where he claimed to have worked on major operations, such as the downing of an American drone and the now-infamous hack against oil giant Saudi Aramco, where Iranian hackers wiped the company’s computers. Harris was rightly skeptical, but as he kept talking to the hacker, who eventually revealed his real name to him, Harris started to believe him. When the hacker died, Harris was able to piece together the real story, which somehow turned out to be more incredible than the hacker had led Harris to believe.
The gripping story is also a great behind-the-scenes look at the challenges cybersecurity reporters face when dealing with sources claiming to have great stories to share.
The Washington Post revealed a secret order demanding Apple let U.K. officials spy on users’ encrypted data
In January, the U.K. government secretly issued Apple with a court order demanding that the company build a back door so police can access the iCloud data of any customer in the world. Due to a worldwide gag order, it was only because The Washington Post broke the news that we learned the order existed to begin with. The demand was the first of its kind, and — if successful — would be a major defeat for tech giants who have spent the past decade locking themselves out of their users’ own data so they can’t be compelled to provide it to governments.
Apple subsequently stopped offering its opt-in end-to-end encrypted cloud storage to its customers in the U.K. in response to the demand. But by breaking the news, the secret order was thrust into the public eye and allowed both Apple and critics to scrutinize U.K. surveillance powers in a way that hasn’t been tested in public before. The story sparked a months-long diplomatic row between the U.K. and the United States, prompting Downing Street to drop the request — only to try again several months later.
“The Trump administration accidentally texted me its war plans” by The Atlantic is this year’s best headline
This story was the sort of fly-on-the-wall access that some reporters would dream of, but The Atlantic’s editor-in-chief got to play out in real time after he was unwittingly added to a Signal group of senior U.S. government officials by a senior U.S. government official discussing war plans on their cell phones.
“We are currently clean on OPSEC,” said Secretary of Defense Pete Hegseth. they were not. Image Credits:The Atlantic (screenshot)
Reading the discussion about where U.S. military forces should drop bombs — and then seeing news reports of missiles hitting the ground on the other side of the world — was confirmation that Jeffrey Goldberg needed to know that he was, as he suspected, in a real chat with real Trump administration officials, and this was all on-the-record and reportable.
And so he did, paving the way for a months-long investigation (and critique) of the government’s operational security practices, in what was called the biggest government opsec mistake in history. The unraveling of the situation ultimately exposed security lapses involving the use of a knock-off Signal clone that further jeopardized the government’s ostensibly secure communications.
Brian Krebs tracked down a prolific hacker group admin as a Jordanian teenager
Brian Krebs is one of the more veteran cybersecurity reporters out there, and for years he has specialized in following online breadcrumbs that lead to him revealing the identity of notorious cybercriminals. In this case, Krebs was able to find the real identity behind a hacker’s online handle Rey, who is part of the notorious advanced persistent teenagers‘ cybercrime group that calls itself Scattered LAPSUS$ Hunters.
Krebs’ quest was so successful that he was able to talk to a person very close to the hacker — we won’t spoil the whole article here — and then the hacker himself, who confessed to his crimes and claimed he was trying to escape the cybercriminal life.
404 Media reported that a little-known data broker set up by the airline industry called the Airlines Reporting Corporation was selling access to 5 billion plane tickets and travel itineraries, including names and financial details of ordinary Americans, allowing government agencies like ICE, the State Department, and the IRS to track people without a warrant.
ARC, owned by United, American, Delta, Southwest, JetBlue, and other airlines, said it would shut down the warrantless data program following 404 Media’s months-long reporting and intense pressure from lawmakers.
Wired made the 3D-printed gun that Luigi Mangione allegedly used to kill a healthcare executive to test the legalities of “ghost guns”
The killing of UnitedHealthcare CEO Brian Thompson in December 2024 was one of the biggest stories of the year. Luigi Mangione, the chief suspect in the killing, was soon after arrested and indicted on charges of using a “ghost gun,” a 3D-printed firearm that had no serial numbers and built in private without a background check — effectively a gun that the government has no idea exists.
NPR detailed a federal whistleblower’s account of how DOGE took sensitive government data, and the threats he faced
DOGE, or the Department of Government Efficiency, was one of the biggest running stories of the year, as the gang of Elon Musk’s lackeys ripped through the federal government, tearing down security protocols and red tape, as part of the mass-grab of citizens’ data. NPR had some of the best investigative reporting uncovering the resistance movement of federal workers trying to prevent the pilfering of the government’s most sensitive data.
In one story detailing a whistleblower’s official disclosure as shared with members of Congress, a senior IT employee in the National Labor Relations Board told lawmakers that as he was seeking help investigating DOGE’s activity, he “found a printed letter in an envelope taped to his door, which included threatening language, sensitive personal information and overhead pictures of him walking his dog, according to the cover letter attached to his official disclosure.”
Mother Jones found an exposed dataset of tracked surveillance victims, including world leaders, a Vatican enemy, and maybe you
Any story that starts with a journalist saying they found something that made them “feel like shitting my pants,” you know it’s going to be a fun read. Gabriel Geiger found a dataset from a mysterious surveillance company called First Wap, which contained records on thousands of people from around the world whose phone locations had been tracked.
The dataset, spanning 2007 through 2015, allowed Geiger to identify dozens of high-profile people whose phones were tracked, including a former Syrian first lady, the head of a private military contractor, a Hollywood actor, and an enemy of the Vatican. This story explored the shadowy world of phone surveillance by exploiting Signaling System No. 7, or SS7, an obscurely named protocol long known to allow malicious tracking.
Wired reported on the investigation behind a string of “swatting” attacks on hundreds of schools nationwide
Swatting has been a problem for years. What started as a bad joke has become a real threat, which has resulted in at least one death. Swatting is a type of hoax where someone — often a hacker — calls the emergency services and tricks the authorities into sending an armed SWAT team to the home of the hoaxer’s target, often pretending to be the target themselves and pretending they are about to commit a violent crime.
In this feature, Wired’s Andy Greenberg put a face on the many characters who are part of these stories, such as the call operators who have to deal with this problem. And he also profiled a prolific swatter, known as Torswats, who for months tormented the operators and schools all over the country with fake — but extremely believable — threats of violence, as well as a hacker who took it upon himself to track Torswats down.
After 33 years, Bernardo Quintero decided it was time to find the person who changed his life — the anonymous programmer who created a computer virus that had infected his university decades earlier.
The virus, called Virus Málaga, was mostly harmless. But the challenge of defeating it sparked Quintero’s passion for cybersecurity, eventually leading him to found VirusTotal, a startup that Google acquired in 2012. That acquisition brought Google’s flagship European cybersecurity center to Málaga, transforming the Spanish city into a tech hub.
All because of a small malware program created by someone whose identity Quintero had never known. Moved by nostalgia and gratitude, Quintero launched a search earlier this year. He asked Spanish media outlets to amplify his quest for tips. He dove back into the virus’s code, looking for clues his 18-year-old self might have missed. And he eventually solved the mystery, sharing the bittersweet resolution in a LinkedIn post that went viral.
The story begins in 1992, when a young Quintero was prompted by a teacher to create an antivirus for the 2610-byte program that had spread across the computers of Málaga’s Polytechnic School. “That challenge in my first year at university sparked a deep interest in computer viruses and security, and without it my path might have been very different,” Quintero told TechCrunch.
Quintero’s search was aided by his programmer instincts. Earlier this year, he stepped down from his team manager role to “go back to the cave, to the basement of Google.” He didn’t leave the company; instead, he went back to tinkering and experimenting without managerial duties.
That tinkering mindset also led him to reexamine Virus Málaga and look for details he’d missed years earlier. First, he found fragments of a signature, but thanks to another security expert, he discovered a later variant of the virus with a much clearer cue: “KIKESOYYO.” “Kike soy yo” would translate to “I am Kike,” a common nickname for “Enrique.”
Around the same time, Quintero received a direct message from a man who is now the general digital transformation coordinator for the Spanish city of Cordoba and who claimed he witnessed one of his Polytechnic School classmates create the virus. Many details added up, but one stood out in particular: the man knew that the virus’s hidden message — called a payload, in cybersecurity terms — was a statement condemning the Basque terrorist group ETA, a fact that Quintero had never disclosed.
Techcrunch event
San Francisco | October 13-15, 2026
The tipster then gave Quintero a name — Antonio Astorga — but also shared the news that he had passed away.
This hit Quintero like a ton of bricks; now, he would never be able to ask Antonio about “Kike.” But he kept following the thread, and the plot twist came from Antonio’s sister, who revealed that his first name was actually Antonio Enrique. To his family, he was Kike.
Cancer took away Antonio Enrique Astorga before Quintero could thank him in person, but the story doesn’t stop here. Quintero’s LinkedIn post sheds new light to the legacy of “a brilliant colleague who deserves to be recognized as a pioneer of cybersecurity in Málaga” — and not just for helping Quintero discover his vocation.
According to his friend, Astorga’s virus had no other goal than spreading his anti-terrorist message and proving himself as a programmer. Mirroring Quintero’s path, Astorga’s interest in IT endured, and he became a computing teacher at a secondary school that named its IT classroom after him in his memory.
Astorga’s legacy also lives on beyond these walls, and not just through his students. One of his sons, Sergio, is a recent software engineering graduate with an interest in cybersecurity and quantum computing — a meaningful connection for Quintero. “Being able to close that circle now, and to see new generations building on it, is deeply meaningful to me,” Quintero said.
For Quintero, who suspects their paths will cross again, Sergio is “very representative of the talent being formed in Málaga today.” This, in turn, is a result of VirusTotal forming the root of what eventually became the Google Safety Engineering Center (GSEC) and spearheading collaborations with the University of Málaga that made the city a true cybersecurity talent hub.
Even as OpenAI works to harden its Atlas AI browser against cyberattacks, the company admits that prompt injections, a type of attack that manipulates AI agents to follow malicious instructions often hidden in web pages or emails, is a risk that’s not going away anytime soon — raising questions about how safely AI agents can operate on the open web.
“Prompt injection, much like scams and social engineering on the web, is unlikely to ever be fully ‘solved,’” OpenAI wrote in a Monday blog post detailing how the firm is beefing up Atlas’ armor to combat the unceasing attacks. The company conceded that “agent mode” in ChatGPT Atlas “expands the security threat surface.”
OpenAI launched its ChatGPT Atlas browser in October, and security researchers rushed to publish their demos, showing it was possible to write a few words in Google Docs that were capable of changing the underlying browser’s behavior. That same day, Brave published a blog post explaining that indirect prompt injection is a systematic challenge for AI-powered browsers, including Perplexity’s Comet.
OpenAI isn’t alone in recognizing that prompt-based injections aren’t going away. The U.K.’s National Cyber Security Centre earlier this month warned that prompt injection attacks against generative AI applications “may never be totally mitigated,” putting websites at risk of falling victim to data breaches. The U.K. government agency advised cyber professionals to reduce the risk and impact of prompt injections, rather than think the attacks can be “stopped.”
For OpenAI’s part, the company said: “We view prompt injection as a long-term AI security challenge, and we’ll need to continuously strengthen our defenses against it.”
The company’s answer to this Sisyphean task? A proactive, rapid-response cycle that the firm says is showing early promise in helping discover novel attack strategies internally before they are exploited “in the wild.”
That’s not entirely different from what rivals like Anthropic and Google have been saying: that to fight against the persistent risk of prompt-based attacks, defenses must be layered and continuously stress-tested. Google’s recent work, for example, focuses on architectural and policy-level controls for agentic systems.
But where OpenAI is taking a different tact is with its “LLM-based automated attacker.” This attacker is basically a bot that OpenAI trained, using reinforcement learning, to play the role of a hacker that looks for ways to sneak malicious instructions to an AI agent.
The bot can test the attack in simulation before using it for real, and the simulator shows how the target AI would think and what actions it would take if it saw the attack. The bot can then study that response, tweak the attack, and try again and again. That insight into the target AI’s internal reasoning is something outsiders don’t have access to, so, in theory, OpenAI’s bot should be able to find flaws faster than a real-world attacker would.
It’s a common tactic in AI safety testing: build an agent to find the edge cases and test against them rapidly in simulation.
“Our [reinforcement learning]-trained attacker can steer an agent into executing sophisticated, long-horizon harmful workflows that unfold over tens (or even hundreds) of steps,” wrote OpenAI. “We also observed novel attack strategies that did not appear in our human red teaming campaign or external reports.”
Image Credits:OpenAI
In a demo (pictured in part above), OpenAI showed how its automated attacker slipped a malicious email into a user’s inbox. When the AI agent later scanned the inbox, it followed the hidden instructions in the email and sent a resignation message instead of drafting an out-of-office reply. But following the security update, “agent mode” was able to successfully detect the prompt injection attempt and flag it to the user, according to the company.
The company says that while prompt injection is hard to secure against in a foolproof way, it’s leaning on large-scale testing and faster patch cycles to harden its systems before they show up in real-world attacks.
An OpenAI spokesperson declined to share whether the update to Atlas’ security has resulted in a measurable reduction in successful injections, but says the firm has been working with third parties to harden Atlas against prompt injection since before launch.
Rami McCarthy, principal security researcher at cybersecurity firm Wiz, says that reinforcement learning is one way to continuously adapt to attacker behavior, but it’s only part of the picture.
“A useful way to reason about risk in AI systems is autonomy multiplied by access,” McCarthy told TechCrunch.
“Agentic browsers tend to sit in a challenging part of that space: moderate autonomy combined with very high access,” said McCarthy. “Many current recommendations reflect that trade-off. Limiting logged-in access primarily reduces exposure, while requiring review of confirmation requests constrains autonomy.”
Those are two of OpenAI’s recommendations for users to reduce their own risk, and a spokesperson said Atlas is also trained to get user confirmation before sending messages or making payments. OpenAI also suggests that users give agents specific instructions, rather than providing them access to your inbox and telling them to “take whatever action is needed.”
“Wide latitude makes it easier for hidden or malicious content to influence the agent, even when safeguards are in place,” per OpenAI.
While OpenAI says protecting Atlas users against prompt injections is a top priority, McCarthy invites some skepticism as to the return on investment for risk-prone browsers.
“For most everyday use cases, agentic browsers don’t yet deliver enough value to justify their current risk profile,” McCarthy told TechCrunch. “The risk is high given their access to sensitive data like email and payment information, even though that access is also what makes them powerful. That balance will evolve, but today the trade-offs are still very real.”
A cybersecurity company claims that a number of web browser extensions are secretly logging and selling users’ conversations with AIchatbots.
KOI, an Israel-based cybersecurity firm focused on developing protections against extension-based attacks, has released a report alleging that Urban VPN Proxy, a popular VPN extension on Google Chrome and Microsoft Edge, has a hidden function to “harvest” user conversations on AI platforms including ChatGPT, Claude, Gemini, Microsoft Copilot, Perplexity, DeepSeek, Grok, and Meta AI. The extension was updated with this new capability in July, according to KOI.
The report says that when users with the extension visit any of the above platforms, the extension injects an “executor” script directly into the webpage, so that “every network request and response on that page passes through the extension’s code first.” This means the extension sees every message sent by users and generated by the AI platforms. Once the info has been collected, it’s sent to the extension’s external servers.
Urban VPN Proxy wasn’t the only extension that KOI identified as containing AI harvesting functionality. The firm identified the following extensions, all of which come from the same organization, as containing the same malicious code:
Google Chrome Extensions:
Urban VPN Proxy – 6,000,000 users
1ClickVPN Proxy – 600,000 users
Urban Browser Guard – 40,000 users
Urban Ad Blocker – 10,000 users
Microsoft Edge Extensions:
Urban VPN Proxy – 1,323,622 users
1ClickVPN Proxy – 36,459 users
Urban Browser Guard – 12,624 users
Urban Ad Blocker – 6,476 users
In total, according to KOI, over 8 million users have installed these extensions. The company behind these extensions is Urban Cyber Security, which KOI says is affiliated with BiScience, a data broker company.
Over the last few years, I’ve watched something strange happen inside the world of founders, executives, and wealthy families. Companies invest millions in cybersecurity, yet attacks keep entering through a completely different door. Not through servers, not through corporate networks, but through the personal digital lives of the people who run them.
That gap is where I spend most of my time now.
What I’ve learned is simple: Executives operate with two identities. The “official” identity is monitored, audited, and controlled. The unmanaged digital shadow is built over a lifetime of online habits, data leaks, personal accounts, public records, and information brokers. That second identity has quietly become the real attack surface.
And almost no one is defending it.
An Inc.com Featured Presentation
Chain reactions from personal exposure
Before founding LeyesX, my cyberintelligence firm, I spent years navigating the darker corners of the internet myself. I lost over $100,000 in scams, fraud, rug pulls, digital impersonation, and identity exploitation. At some point, you stop blaming events and start studying architecture. You begin tracking how people are targeted, why attacks escalate, and how fragments of personal information turn into full-scale intrusion paths.
It became clear that modern risk isn’t technical-first, it’s human-first.
A leaked address turns into a SIM swap. A dormant email becomes an impersonation vector. A public record becomes a phishing tool. A leaked ID number can lead to a financial breach.
These aren’t isolated incidents. They are chain reactions built from personal exposure.
The problem is that our risk frameworks haven’t evolved. Companies protect systems, but neglect the human being behind the system. They secure networks, but ignore the accumulated data trails that attackers actually study.
Organizational governance
A new option is adopting a model that approaches digital risk. Organizational governance should not be a security feature, but a continuous system. A model should map how personal data moves, leaks, replicates, and regroups across platforms, and map how attackers assemble those fragments into predictable pathways. It should blend cyberintelligence, mapping, personal exposure reduction, and narrative stabilization.
I’ve seen firsthand how ignoring this layer destabilizes leadership. Executives are often shocked when we create their exposure map. Old domains they forgot about. Email addresses tied to long-abandoned accounts. Records connecting them to properties, relatives, assistants, and historic data brokers. Family members they never realized were vulnerable.
When people see it, they realize that the threat wasn’t “out there.” It was already wrapped around them.
The personal impacts the organization
Companies lose billions each year to identity-driven fraud. Not because their firewalls failed, but because their leaders’ personal exposure created an entry path. And when leaders are compromised, the impact is organizational: financial disruption, legal exposure, reputational instability, and operational risk.
Some private wealth offices have begun adopting identity governance as a formal part of their risk strategy. They treat their principals like infrastructure—assets that require continuous protection, not reactive repair. It’s a shift I expect to see across more industries as identity becomes intertwined with corporate continuity.
Digital identity is infrastructure now, and it needs to be governed like it.
If companies want real resilience, they must protect the humans at the center of the structure with the same discipline they apply to corporate systems. That’s where the next decade of risk management is heading, whether organizations prepare for it or not.
We can no longer pretend that personal exposure is separate from corporate risk. The line has already disappeared. The only question is whether leaders will respond before attackers do.