Just like you probably don’t grow and grind wheat to make flour for your bread, most software developers don’t write every line of code in a new project from scratch. Doing so would be extremely slow and could create more security issues than it solves. So developers draw on existing libraries—often open source projects—to get various basic software components in place.
While this approach is efficient, it can create exposure and blind spots in the resulting software. Increasingly, AI-generated code—the product of so-called vibe coding—is being used in a similar way, allowing developers to quickly spin up code they can adapt rather than write from scratch. Security researchers warn, though, that this new genre of plug-and-play code is making software-supply-chain security even more complicated—and dangerous.
“We’re hitting the point right now where AI is about to lose its grace period on security,” says Alex Zenla, chief technology officer of the cloud security firm Edera. “And AI is its own worst enemy in terms of generating code that’s insecure. If AI is being trained in part on old, vulnerable, or low-quality software that’s available out there, then all the vulnerabilities that have existed can reoccur and be introduced again, not to mention new issues.”
In addition to sucking up potentially insecure training data, the reality of vibe coding is that it produces a rough draft of code that may not fully take into account all of the specific context and considerations around a given product or service. In other words, even if a company trains a local model on a project’s source code and a natural language description of goals, the production process is still relying on human reviewers’ ability to spot any and every possible flaw or incongruity in code originally generated by AI.
“Engineering groups need to think about the development lifecycle in the era of vibe coding,” says Eran Kinsbruner, a researcher at the application security firm Checkmarx. “If you ask the exact same LLM model to write for your specific source code, every single time it will have a slightly different output. One developer within the team will generate one output and the other developer is going to get a different output. So that introduces an additional complication beyond open source.”
In a Checkmarx survey of chief information security officers, application security managers, and heads of development, a third of respondents said that more than 60 percent of their organization’s code was generated by AI in 2024. But only 18 percent of respondents said that their organization has a list of approved tools for vibe coding. Checkmarx polled thousands of professionals and published the findings in August—emphasizing, too, that AI development is making it harder to trace “ownership” of code.
South Korea is world-famous for its blazing-fast internet, near-universal broadband coverage, and leadership in digital innovation, and it hosts global tech brands like Hyundai, LG, and Samsung. But this very success has made the country a prime target for hackers and exposed how fragile its cybersecurity defenses remain.
The country is reeling from a string of high-profile hacks, affecting credit card companies, telecoms, tech startups, and government agencies, impacting vast swathes of the South Korean population. In each case, ministries and regulators appeared to scramble in parallel, sometimes deferring to one another rather than moving in unison.
Critics argue that South Korea’s cyber defenses are hindered by a fragmented system of government ministries and agencies, often resulting in slow and uncoordinated responses, per local media reports.
“The government’s approach to cybersecurity remains largely reactive, treating it as a crisis management issue rather than as critical national infrastructure,” Brian Pak, the chief executive of Seoul-based cybersecurity firm Theori, told TechCrunch.
Pak, who also serves as an advisor to SK Telecom’s parent company’s special committee on cybersecurity innovations, told TechCrunch that because government agencies tasked with cybersecurity work in silos, developing digital defenses and training skilled workers often get overlooked.
The country is also facing a severe shortage of skilled cybersecurity experts.
“[That’s] mainly because the current approach has held back workforce development. This lack of talent creates a vicious cycle. Without enough expertise, it’s impossible to build and maintain the proactive defenses needed to stay ahead of threats,” Pak continued.
Political deadlock has fostered a habit of seeking obvious “quick fixes” after each crisis, said Pak, while the more challenging, long-term work of building digital resilience remains sidelined.
This year alone, there has been a major cybersecurity incident in South Korea almost every month, deepening concerns over the resilience of the country’s digital infrastructure.
January 2025
GS Retail, the operator of convenience stores and grocery markets across South Korea, confirmed a data breach that exposed the personal details of about 90,000 customers after its website was attacked between December 27 and January 4. The stolen information included names, birth dates, contact details, addresses, and email addresses.
April and May 2025
South Korea’s part-time job platform Albamon was hit by a hacking attack on April 30. The breach exposed the resumes of more than 20,000 users, including names, phone numbers, and email addresses.
In April, South Korea’s telecom giant SK Telecom was hit by a major cyberattack. Hackers stole the personal data of about 23 million customers — nearly half the country’s population. The fallout stretched through May, during which millions of customers were offered replacement SIM cards.
June 2025
Yes24, South Korea’s online ticketing and retail platform, was hit by a ransomware attack on June 9, which knocked its services offline. The disruption lasted for about four days, with the company back online by mid-June.
July 2025
A North Korea-backed hacking group, Kimsuky, used AI-generated deepfake images in a July spear-phishing attempt against a South Korean military organization, according to Genians Security Center. The group has also targeted other South Korean institutions.
Seoul Guarantee Insurance (SGI), a Korean financial institution, was hit by a ransomware attack around July 14, which disrupted its core systems. The incident knocked key services offline, including the issuing and verification of guarantees, leaving customers in limbo.
Hackers broke into South Korean financial services company Lotte Card, which issues credit and debit cards, between July 22 and August. The breach exposed around 200GB of data and is believed to have affected roughly 3 million customers. The breach remained unnoticed for approximately 17 days, until the company discovered it on August 31.
August 2025
Welrix F&I, a lending arm of Welcome Financial Group, was hit by a ransomware attack in August. A Russian-linked hacking group claimed it stole over a terabyte of internal files, including sensitive customer data, and even leaked samples on the dark web.
North Korea-linked hackers, believed to be the Kimsuky group, have been spying on foreign embassies in South Korea for months by disguising their attacks as routine diplomatic emails. According to Trellix, the campaign has been active since March and has targeted at least 19 embassies and foreign ministries in South Korea.
September 2025
KT, one of South Korea’s biggest telecom operators, has reported a cyber breach that exposed subscriber data from more than 5,500 customers. The attack was linked to illegal “fake base stations” that tapped into KT’s network, enabling hackers to intercept mobile traffic, steal information like IMSI, IMEI, and phone numbers, and even make unauthorized micro-payments.
In September 2025, the National Security Office announced that it would implement “comprehensive” cyber measures through an interagency plan, led by the South Korean president’s office. Regulators also signaled a legal change giving the government power to launch probes at the first sign of hacking — even if companies haven’t filed a report. Both steps aim to address the lack of a first responder that has long hindered South Korea’s cyber defenses.
But while South Korea’s fragmented system leaves accountability weak, placing all authority in a presidential “control tower” could risk “politicization” and overreach, according to Pak.
A better path may be balance: a central body to set strategy and coordinate crises, paired with independent oversight to keep power in check. In a hybrid model, expert agencies like KISA would still handle the technical work — just with more straightforward rules and accountability, Pak told TechCrunch.
When reached for comment, a spokesperson for South Korea’s Ministry of Science and ICT said the ministry, with KISA and other relevant agencies, is “committed to addressing increasingly sophisticated and advanced cyber threats.”
“We continue to work diligently to minimize potential harm to Korean businesses and the general public,” the spokesperson added.
This article was originally published on September 30.
Social event planning app Partiful, which calls itself “Facebook events for hot people,” has firmly replaced Facebook as the go-to platform for sending party invitations. But what Partiful also has in common with Facebook is that it’s collecting a tsunami of user data, and Partiful could have done better at keeping that data secure.
On Partiful, hosts can create online invitations with a retro, maximalist vibe, allowing guests to RSVP to events with the ease of ordering a salad on a touch-screen. Partiful aims to be user-friendly and trendy, propelling the app to #9 on the iOS App Store’s Lifestyle charts. Google called Partiful the “best app” of 2024.
Now, Partiful has evolved into a powerful Facebook-like social graph, easily mapping who your friends are and who your friends’ friends are, what you do, where you go, and all of your phone numbers.
As Partiful grew more popular, some users became skeptical of the company’s origins. One New York City promoter announced that it was boycotting Partiful because its founders and some staff are former employees of Palantir, Peter Thiel’s data mining company, which produces the software that powers ICE’s master database for the Trump administration’s deportation crackdown.
Given some of the speculation around the app, TechCrunch set up a new account and tested Partiful. We soon found that the app was not stripping the location data of user-uploaded images, including public profile photos.
TechCrunch found it was possible for anyone, using only the developer tools in a web browser, to access raw user profile photos stored in Partiful’s backend database hosted on Google Firebase. If the user’s photo contained the precise real-world location of where it was taken, anyone else could have also viewed the precise coordinates of where that photo was taken.
Almost all digital files, like the pictures you take on a smartphone, contain metadata, which includes information like the file size, when it was created, and by whom. In the case of photos and videos, metadata can include information about the kind of camera used and its settings, as well as the precise latitude and longitude coordinates of where the image was captured.
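The GPS tags in photo metadata store latitude and longitude as degree/minute/second values plus a hemisphere reference; turning them into the decimal coordinates a mapping service accepts is simple arithmetic. A minimal sketch in Python (the coordinates below are illustrative, not from any real photo):

```python
def dms_to_decimal(degrees: float, minutes: float, seconds: float, ref: str) -> float:
    """Convert EXIF-style degree/minute/second values to decimal degrees.
    `ref` is the hemisphere tag: 'N'/'E' are positive, 'S'/'W' negative."""
    decimal = degrees + minutes / 60 + seconds / 3600
    return -decimal if ref in ("S", "W") else decimal

# Illustrative example: 37°47'02.4" N, 122°24'14.4" W
lat = dms_to_decimal(37, 47, 2.4, "N")    # 37.784
lon = dms_to_decimal(122, 24, 14.4, "W")  # -122.404
```

Seconds precise to a tenth, as above, pin a point down to a few meters, which is why leftover GPS tags in a profile photo can identify a specific building.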
The security flaw is problematic because it meant anyone could learn the location where a Partiful user’s profile photo was snapped. Some Partiful user profile photos contained highly granular location data that could be used to identify the person’s home or workplace, particularly in rural areas where individual homes are easier to distinguish on a map.
It’s common practice for companies that host user images and videos to automatically remove metadata upon upload to prevent privacy lapses like this.
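Stripping is straightforward because metadata lives in its own section of the file. In a JPEG, for instance, EXIF data (including any GPS tags) sits in an APP1 segment that can be dropped wholesale. A byte-level sketch in Python (illustrative only; production services typically re-encode images or use an imaging library rather than splicing segments by hand):

```python
def strip_jpeg_metadata(data: bytes) -> bytes:
    """Return a copy of a JPEG byte stream with APP1 (EXIF, where GPS
    coordinates live) and COM (comment) segments removed."""
    if data[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG")
    out = bytearray(b"\xff\xd8")  # keep the Start Of Image marker
    i = 2
    while i + 4 <= len(data) and data[i] == 0xFF:
        marker = data[i + 1]
        if marker == 0xDA:    # Start Of Scan: compressed image data follows
            out += data[i:]   # copy everything through the end of the file
            break
        length = int.from_bytes(data[i + 2 : i + 4], "big")  # includes itself
        if marker not in (0xE1, 0xFE):  # drop EXIF (APP1) and comments (COM)
            out += data[i : i + 2 + length]
        i += 2 + length
    return bytes(out)
```

The image pixels are untouched; only the descriptive segments disappear, which is why services can do this on every upload with no visible effect.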
TechCrunch verified the bug ourselves by uploading a new Partiful profile photo that we had previously captured from outside of the Moscone West Convention Center in San Francisco, which contained the photo’s precise location. When we checked the metadata of the photo stored on Partiful’s server, it still contained the exact coordinates of where the image was taken down to a few feet.
TechCrunch’s profile photo containing GPS coordinates uploaded to Partiful. Image Credits: TechCrunch
The location where our Partiful profile photo was taken, shown on a Google Map. Image Credits: TechCrunch
After discovering the security flaw, TechCrunch alerted Partiful co-founders Shreya Murthy and Joy Tao by email, as Partiful does not have a public means for reporting security flaws. TechCrunch shared a link to a Partiful user’s raw profile photo containing that user’s real-world location at the time the photo was taken, a residential address in Manhattan.
Tao told TechCrunch on Friday that the vulnerability was “already on our team’s radar, and was recently prioritized as an upcoming fix.”
Partiful initially provided a timeline to fix the flaw by “next week,” but given the sensitivity of the data involved, Partiful fixed the bug by Saturday at TechCrunch’s request.
TechCrunch confirmed Saturday that metadata was removed from existing user-uploaded photos. The profile photo that we uploaded with our real-world location also had the metadata removed.
Partiful disclosed the security lapse in a tweet shortly before the publishing of this story.
When asked by TechCrunch if Partiful has the technical means, such as logs, to determine if there was any direct or bulk access to user profile photos stored in its database, Partiful spokesperson Jess Eames said this was “still under investigation but we have found no evidence of this yet.”
Eames said the company “regularly perform[s] security reviews with experts in the field, not just as a one-time action but as part of our ongoing processes.” Partiful did not provide TechCrunch with the names of those experts when asked.
Partiful has raised over $27 million from investors since its founding in 2022, including a $20 million Series A funding round led by Andreessen Horowitz. TechCrunch asked Partiful’s co-founders if they had commissioned a security review of their product before launch, but they would not say.
Plus: China sentences scam bosses to death, Europe is ramping up its plans to build a “drone wall” to protect against Russian airspace violations, and more.
There’s a long list of reasons US stability is now teetering between “Fyre Festival” and “Charlie Sheen’s ‘Tiger Blood’ era.” Now you can add cybersecurity to the tally. A crucial cyber defense law, the Cybersecurity Information Sharing Act of 2015 (CISA 2015), has lapsed. With the government out of commission, the nation’s computer networks are more exposed for… who knows how long. Welcome to 2025, baby.
CISA 2015 promotes the sharing of cyber threat information between the private and public sectors. It includes legal protections for companies that might otherwise hesitate to share that data. The law promotes “cyber threat information sharing with industry and government partners within a secure policy and legal framework,” a coalition of industry groups wrote in a letter to Congress last week.
As Cybersecurity Dive explains, CISA 2015 shields companies from antitrust liability, regulatory enforcement, private lawsuits, and FOIA disclosures. Without it, sharing gets more complicated. “There will just be many more lawyers involved, and it will all go slower, particularly new sharing agreements,” Ari Schwartz, cybersecurity director at the law firm Venable, told the publication. That could make it easier for adversaries like Russia and China to conduct cyberattacks.
Senator Rand Paul (R-KY). (Kevin Dietsch via Getty Images)
Before the shutdown, there was support for renewal from the private sector, the Trump administration and bipartisan members of Congress. One of the biggest roadblocks was Sen. Rand Paul (R-KY), chairman of the Senate Homeland Security Committee. He objected to reauthorizing the law without changes to some of his pet issues. Notably, he wanted to add language that would neuter the ability to combat misinformation and disinformation. He canceled his planned revision of the bill after a backlash from his peers. The committee then failed to approve any version before the expiration date.
Meanwhile, House Republicans included a short-term CISA 2015 renewal in their government funding bill. But Democrats, whose support the GOP needs, wouldn’t support the Continuing Resolution for other reasons. They want Affordable Care Act premium tax credits extended beyond their scheduled expiration at the end of the year. Without an extension, Americans’ already spiking health insurance premiums will continue to skyrocket.
In its letter to Congress last week, the industry coalition warned that the expiration of CISA 2015 would lead to “a more complex and dangerous” security landscape. “Sharing information about cyber threats and incidents makes it harder for attackers because defenders learn what to watch for and prioritize,” the group wrote. “As a result, attackers must invest more in new tools or target different victims.”
A cyberattack on the UK-based automaker Jaguar Land Rover has been causing a supply chain meltdown, halting vehicle production, costing JLR tens of millions of dollars, and forcing its parts suppliers to lay off workers. The beleaguered company will have to shoulder the full cost of the attack because of inadequate insurance coverage, prompting talks of possible UK government assistance.
And there’s more. Each week, we round up the security and privacy news we didn’t cover in depth ourselves. Click the headlines to read the full stories. And stay safe out there.
An app used to out those who spoke ill of the murdered right-wing activist Charlie Kirk was found to be leaking its users’ personal information, doxing the very people it had invited to dox its targets.
The app Cancel the Hate, founded in the wake of Kirk’s September 10 assassination, suspended its services this week after it was revealed that security flaws in the website where the app was hosted exposed users’ email addresses and phone numbers. That site had asked its users to collect and share employment and other personal information of critics of Kirk and others “supporting political violence.” But a security researcher who identified themselves only as BobDaHacker demonstrated to news outlet Straight Arrow News that privacy settings on the site didn’t work as advertised, publicly leaking users’ information even when it was set to private. The hacker also reportedly had the ability to delete users’ accounts at will.
Cancel the Hate, which displayed a photo of Kirk on its homepage and was founded by a Kirk supporter who cited his death as the motivation for creating the site, has since taken down its reporting features. It now displays a message on its homepage that it’s moving to a “new service provider.” The page that allows visitors to buy a $23 T-shirt remains online.
Ransomware groups continued to plumb the depths of abject immorality this week with a new tactic: extorting preschools by stealing toddlers’ personal information and threatening their parents. The BBC reports that a hacker group says it has stolen the names, addresses, and photos of around 8,000 children from the preschool chain Kido, which has sites largely around London but also in the US and India. The hackers are threatening to leak the data if a ransom isn’t paid, going so far as to contact some of the children’s parents to reinforce their threat. The group has also posted sample information and photos of 10 children on their dark-web site.
In August, The Guardian, Israeli-Palestinian publication +972 Magazine, and Hebrew-language publication Local Call revealed how Israeli signals intelligence agency Unit 8200 had built a comprehensive surveillance system to intercept and store Palestinian phone calls. More than “a million calls an hour” could be collected by the system, which reportedly amassed around 8,000 terabytes of call data and stored it in Microsoft’s Azure cloud service in the Netherlands, the publications reported.
This week, following an external investigation commissioned by Microsoft, the company pulled some of the Israeli military’s access to its technology. In a statement, Microsoft president Brad Smith said the firm has taken the decision to “cease and disable” some “specific cloud storage and AI services and technologies” that it was providing to Israeli forces. Microsoft’s action—its investigation is still ongoing—follows a wave of staff protests at its ties to Israel and its ongoing war in Gaza. “We do not provide technology to facilitate mass surveillance of civilians. We have applied this principle in every country around the world, and we have insisted on it repeatedly for more than two decades,” Smith wrote in a statement.
A data spill from an unsecured cloud server has exposed hundreds of thousands of sensitive bank transfer documents in India, revealing account numbers, transaction figures, and individuals’ contact details.
Researchers at cybersecurity firm UpGuard discovered in late August a publicly accessible Amazon-hosted storage server containing 273,000 PDF documents relating to bank transfers of Indian customers.
The exposed files contained completed transaction forms intended for processing via the National Automated Clearing House, or NACH, a centralized system used by banks in India to facilitate high-volume recurring transactions, such as salaries, loan repayments, and utility payments.
The data was linked to at least 38 different banks and financial institutions, the researchers told TechCrunch.
It’s not clear why the data was left publicly exposed and accessible to the internet, though security lapses of this nature are not uncommon due to misconfigurations and human error.
But it remains unclear who caused the data spill, who secured it, and who is ultimately responsible for alerting those whose personal data was exposed.
Data secured, but nobody accepts blame
In its blog post detailing its findings, the UpGuard researchers said that out of a sample of 55,000 documents, more than half of the files mentioned the name of Indian lender Aye Finance, which had filed for a $171 million IPO last year. The Indian state-owned State Bank of India was the next institution to appear by frequency in the sample documents, according to the researchers.
After discovering the exposed data, UpGuard’s researchers notified Aye Finance through its corporate, customer care, and grievance redressal email addresses. The researchers also alerted the National Payments Corporation of India, or NPCI, the government body responsible for managing NACH.
By early September, the researchers said the data was still exposed and that thousands of files were being added to the exposed server daily.
UpGuard said it then alerted India’s computer emergency response team, CERT-In. Shortly afterward, the exposed data was secured, the researchers told TechCrunch.
But nobody seems to want to take responsibility for the security lapse.
When reached for comment, NPCI spokesperson Ankur Dahiya told TechCrunch that the exposed data did not come from its systems.
“A detailed verification and review have confirmed that no data related to NACH mandate information/records from NPCI systems have been exposed/compromised,” the spokesperson said in an email sent to TechCrunch.
Aye Finance co-founder and CEO Sanjay Sharma did not respond to a request for comment from TechCrunch. The State Bank of India also did not respond to a request for comment.
A new app offering to record your phone calls and pay you for the audio so it can sell the data to AI companies is, unbelievably, the No. 2 app in Apple’s U.S. App Store’s Social Networking section.
The app, Neon Mobile, pitches itself as a money-making tool offering “hundreds or even thousands of dollars per year” for access to your audio conversations.
Neon’s website says the company pays 30¢ per minute when you call other Neon users, and up to $30 per day for making calls to anyone else. The app also pays for referrals. The app first ranked No. 476 in the Social Networking category of the U.S. App Store on September 18, but jumped to No. 10 by the end of the day yesterday, according to data from app intelligence firm Appfigures.
On Wednesday, Neon was spotted in the No. 2 position on the iPhone’s top free charts for social apps.
Earlier on Wednesday morning, Neon also became the No. 7 top overall app or game, and later rose to the No. 6 spot among apps.
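At the advertised rates, the payout math is easy to check: the daily cap, not the per-minute rate, dominates for anyone making more than a couple hours of calls. A rough sketch (per-minute rate and $30/day maximum as stated on Neon’s site; the exact cap mechanics are our assumption):

```python
def daily_payout(minutes_called: float, rate_per_min: float = 0.30,
                 daily_cap: float = 30.0) -> float:
    """Estimated one-day earnings at Neon's advertised 30 cents per minute,
    clamped to the stated $30/day maximum (cap behavior is assumed)."""
    return min(minutes_called * rate_per_min, daily_cap)

# 100 minutes of calls already hits the cap: 100 * $0.30 = $30
print(daily_payout(100))  # 30.0
print(daily_payout(45))   # 13.5
```

In other words, no amount of calling earns more than roughly $900 a month under these terms, which puts the “thousands of dollars per year” pitch at its upper bound.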
According to Neon’s terms of service, the company’s mobile app can capture users’ inbound and outbound phone calls. However, Neon’s marketing claims to only record your side of the call unless it’s with another Neon user.
That data is being sold to “AI companies,” the company’s terms of service state, “for the purpose of developing, training, testing, and improving machine learning models, artificial intelligence tools and systems, and related technologies.”
Image Credits: Neon Mobile
The fact that such an app exists and is permitted on the app stores is an indication of how far AI has encroached into users’ lives and areas once thought of as private. Its high ranking within the Apple App Store, meanwhile, is proof that there is now some subsection of the market seemingly willing to exchange their privacy for pennies, regardless of the larger cost to themselves or society.
Despite what Neon’s privacy policy says, its terms include a very broad license to its user data, where Neon grants itself a:
“…worldwide, exclusive, irrevocable, transferable, royalty-free, fully paid right and license (with the right to sublicense through multiple tiers) to sell, use, host, store, transfer, publicly display, publicly perform (including by means of a digital audio transmission), communicate to the public, reproduce, modify for the purpose of formatting for display, create derivative works as authorized in these Terms, and distribute your Recordings, in whole or in part, in any media formats and through any media channels, in each instance whether now known or hereafter developed.”
That leaves plenty of wiggle room for Neon to do more with users’ data than it claims.
The terms also include an extensive section on beta features, which have no warranty and may have all sorts of issues and bugs.
Though Neon’s app raises many red flags, it may be technically legal.
“Recording only one side of the phone call is aimed at avoiding wiretap laws,” Jennifer Daniels, a partner at the law firm Blank Rome’s Privacy, Security & Data Protection Group, tells TechCrunch.
“Under [the] laws of many states, you have to have consent from both parties to a conversation in order to record it… It’s an interesting approach,” says Daniels.
Peter Jackson, cybersecurity and privacy attorney at Greenberg Glusker, agreed — and tells TechCrunch that the language around “one-sided transcripts” sounds like it could be a backdoor way of saying that Neon records users’ calls in their entirety, but may just remove what the other party said from the final transcript.
In addition, the legal experts pointed to concerns about how anonymized the data may really be.
Neon claims it removes users’ names, emails, and phone numbers before selling data to AI companies. But the company doesn’t say how AI partners or others it sells to could use that data. Voice data could be used to make fake calls that sound like they’re coming from you, or AI companies could use your voice to make their own AI voices.
“Once your voice is over there, it can be used for fraud,” says Jackson. “Now, this company has your phone number and essentially enough information — they have recordings of your voice, which could be used to create an impersonation of you and do all sorts of fraud.”
Even if the company itself is trustworthy, Neon doesn’t disclose who its trusted partners are or what those entities are allowed to do with users’ data further down the road. Neon is also subject to potential data breaches, as any company with valuable data may be.
Image Credits: Neon Mobile
In a brief test by TechCrunch, Neon did not offer any indication that it was recording the user’s call, nor did it warn the call recipient. The app worked like any other voice-over-IP app, and the Caller ID displayed the inbound phone number, as usual. (We’ll leave it to security researchers to attempt to verify the app’s other claims.)
Kiam, who is identified only as “Alex” on the company website, operates Neon from a New York apartment, a business filing shows.
A LinkedIn post indicates Kiam raised money from Upfront Ventures a few months ago for his startup, but the investor didn’t respond to an inquiry from TechCrunch as of the time of writing.
Has AI desensitized users to privacy concerns?
There was a time when companies looking to profit from data collection through mobile apps handled this type of thing on the sly.
Now, AI agents regularly join meetings to take notes, and always-on AI devices are on the market. But at least in those cases, everyone is consenting to a recording, Daniels tells TechCrunch.
In light of this widespread usage and sale of personal data, there are likely now those cynical enough to think that if their data is being sold anyway, they may as well profit from it.
Unfortunately, they may be sharing more information than they realize and putting others’ privacy at risk when they do.
“There is a tremendous desire on the part of, certainly, knowledge workers — and frankly, everybody — to make it as easy as possible to do your job,” says Jackson. “And some of these productivity tools do that at the expense of, obviously, your privacy, but also, increasingly, the privacy of those with whom you are interacting on a day-to-day basis.”
The phenomenon of SIM farms, even at the scale found in this instance around New York, is far from new. Cybercriminals have long used the massive collections of centrally operated SIM cards for everything from spam to swatting to fake account creation and fraudulent engagement with social media or advertising campaigns. The SIM cards are typically housed in so-called SIM boxes that can control more than a hundred cards at a time, which are in turn connected to servers that can then control thousands of SIMs each.
SIM farms allow “bulk messaging at a speed and volume that would be impossible for an individual user,” one telecoms industry source, who asked not to be named due to the sensitivity of the Secret Service’s investigation, told WIRED. “The technology behind these farms makes them highly flexible—SIMs can be rotated to bypass detection systems, traffic can be geographically masked, and accounts can be made to look like they’re coming from genuine users.”
The telecom industry source adds that the images of SIM servers and boxes published by the Secret Service indicate a “really organized” criminal operation may have been behind the setup. “This means that there is great intelligence and significant resources behind it,” the person added.
The SIM farm found by the Secret Service, Unit 221b’s Coon says, isn’t the biggest operation he’s learned of in the US. But it’s the most concentrated in such a small single geographic area. SIM boxes, he notes, are illegal in the US, and the hundreds of them found in the Secret Service’s investigation must have been smuggled into the US. In one case he was involved in, Coon says, the boxes were imported from China, disguised as audio amplifiers.
The “clean, tidy racks” of equipment in a well-lit room show that the operation may be well organized and professional, says Cathal Mc Daid, VP of technology at telecommunications and cybersecurity firm Enea. Photos released by the Secret Service show multiple racks of telecom equipment neatly set up, with individual pieces of tech numbered and labeled, plus cables on the floor covered and protected with tape. Each SIM box, Mc Daid says, appears to include around 256 ports and associated modems. “This looks more professional than many of the SIM farms you see,” says Mc Daid.
Mc Daid notes, however, that he’s tracked similar operations discovered in Ukraine—some of which have been as large or even larger than the one revealed on Tuesday by the Secret Service. Over the course of the last few years, law enforcement officials in Ukraine have discovered tens of thousands of SIM cards being used in SIM farms allegedly set up by Russian actors. In one case in 2023, around 150,000 SIM cards were reportedly found. These SIM farms have been used to operate fake social media profiles that spread disinformation and propaganda.
Additional equipment found in the New York–area SIM farm sites.
Building vehicles is a hugely complex process. Hundreds of different companies provide parts, materials, electronics, and more to vehicle manufacturers, and these expansive supply chain networks often rely upon “just-in-time” manufacturing. That means they order parts and services to be delivered in the specific quantities that are needed and exactly when they need them—large stockpiles of parts are unlikely to be held by automakers.
“The supplier networks that are supplying into these manufacturing plants, they’re all set up for efficiency—economic efficiency, and also logistic efficiency,” says Siraj Ahmed Shaikh, a professor in systems security at Swansea University. “There’s a very carefully orchestrated supply chain,” Shaikh adds, speaking about automotive manufacturing generally. “There’s a critical dependency for those suppliers supplying into this kind of an operation. As soon as there is a disruption at this kind of facility, then all the suppliers get affected.”
One company that makes glass sun roofs has started laying off workers, according to a report in the Telegraph. Meanwhile, another firm told the BBC it has laid off around 40 people so far. French automotive company OPmobility, which employs 38,000 people across 150 sites, told WIRED it is making some changes and monitoring the events. “OPmobility is reconfiguring its production at certain sites as a consequence of the shutdown of its production by one of its customers based in the United Kingdom and depending on the evolution of the situation,” a spokesperson for the firm says.
While it is unclear which specific JLR systems have been impacted by the hackers and what systems JLR took offline proactively, many were likely taken offline to stop the attack from getting worse. “It’s very challenging to ensure containment while you still have connections between various systems,” says Orla Cox, head of EMEA cybersecurity communications at FTI Consulting, which responds to cyberattacks and works on investigations. “Oftentimes as well, there will be dependencies on different systems: You take one down, then it means that it has a knock-on effect on another.”
Whenever there’s a hack in any part of a supply chain—whether that is a manufacturer at the top of the pyramid or a firm further down the pipeline—digital connections between companies may be severed to stop attackers from spreading from one network to the next. Connections via VPNs or APIs may be stopped, Cox says. “Some may even take stronger measures such as blocking domains and IP addresses. Then things like email are no longer usable between the two organizations.”
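Containment of this kind often boils down to consulting a blocklist before allowing cross-organization traffic. A minimal sketch in Python of how such a check might look, using the standard-library ipaddress module (the CIDR ranges and their labels are invented for illustration, drawn from documentation-only address space):

```python
import ipaddress

# Hypothetical containment blocklist: ranges belonging to a compromised
# partner network (addresses are reserved documentation ranges, not real).
BLOCKED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),   # partner VPN egress range
    ipaddress.ip_network("198.51.100.0/25"),  # partner API gateway range
]

def is_blocked(addr: str) -> bool:
    """True if the address falls inside any blocked partner range."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in BLOCKED_NETWORKS)
```

In practice such rules would live in firewalls, mail gateways, and DNS resolvers rather than application code, but the membership test is the same idea.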
The complexity of digital and physical supply chains, spanning dozens of businesses and just-in-time production systems, means bringing everything back online and up to full working speed may take time. MacColl, the RUSI researcher, says cybersecurity issues often fail to be debated at the highest level of British politics—but adds this time could be different due to the scale of the disruption. “This incident has the potential to cut through because of the job losses and the fact that MPs in constituencies affected by this will be getting calls,” he says. That breakthrough has already begun.
Russia conducted conspicuous military exercises testing hypersonic missiles near NATO borders, stoking tensions in the region after the Kremlin had already recently flown drones into Polish and Romanian airspace. Scammers have a new tool for sending spam texts, known as “SMS blasters,” that can send up to 100,000 texts per hour while evading telecom company anti-spam measures. Scammers deploy rogue cell towers that trick people’s phones into connecting to the malicious devices so they can send the texts directly and bypass filters. And a pair of flaws in Microsoft’s Entra ID identity and access management system, which have been patched, could have been exploited to access virtually all Azure customer accounts—a potentially catastrophic disaster.
But wait, there’s more! Each week, we round up the security and privacy news we didn’t cover in depth ourselves. Click the headlines to read the full stories. And stay safe out there.
The cybersecurity world has seen, to its growing dismay, plenty of software supply-chain attacks, in which hackers hide their code in a legitimate piece of software so that it’s silently seeded out to every system that uses that code around the world. In recent years, hackers have even tried linking one software supply-chain attack to another, finding a second software developer target among their victims to compromise yet another piece of software and launch a new round of infections. This week saw a new and troubling evolution of those tactics: a full-blown self-replicating supply-chain attack worm.
The malware, which has been dubbed Shai-Hulud after the Fremen name for the monstrous sandworms in the sci-fi novel Dune (and the name of the GitHub page where the malware published stolen credentials of its victims), has compromised hundreds of open source software packages on NPM, the Node Package Manager repository used by JavaScript developers. The Shai-Hulud worm is designed to infect a system that uses one of those software packages, then hunt for more NPM credentials on that system so that it can corrupt another software package and continue its spread.
By one count, the worm has spread to more than 180 software packages, including 25 used by the cybersecurity firm CrowdStrike, though CrowdStrike has since had them removed from the NPM repository. Cybersecurity firm ReversingLabs puts the figure far higher, at more than 700 affected code packages. That makes Shai-Hulud one of the biggest supply-chain attacks in history, though the intent of its mass credential-stealing remains far from clear.
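Defenders responding to an attack like this typically check their lockfiles against published lists of compromised package versions. A minimal sketch of that check in Python, assuming npm’s package-lock.json v2/v3 layout (its "packages" map keyed by node_modules paths); the compromised-package entries here are hypothetical placeholders, not real advisories:

```python
import json

# Hypothetical (name, version) pairs; in practice this set would come from
# published advisories listing the packages the worm compromised.
COMPROMISED = {
    ("some-package", "1.2.3"),
    ("another-package", "4.5.6"),
}

def find_compromised(lockfile_text: str) -> list[tuple[str, str]]:
    """Return (name, version) pairs from a package-lock.json v2/v3
    'packages' map that appear in the compromised set."""
    lock = json.loads(lockfile_text)
    hits = []
    for path, meta in lock.get("packages", {}).items():
        # Keys look like "node_modules/<name>"; the root package key is "".
        name = path.split("node_modules/")[-1] if path else lock.get("name", "")
        version = meta.get("version", "")
        if (name, version) in COMPROMISED:
            hits.append((name, version))
    return hits
```

Pinning exact versions in lockfiles, rather than floating ranges, is also what limits how quickly a worm like this can propagate into downstream builds.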
Western privacy advocates have long pointed to China’s surveillance systems as the potential dystopia awaiting countries like the United States if tech industry and government data collection goes unchecked. But a sprawling Associated Press investigation highlights how China’s surveillance systems have reportedly been largely built on US technologies. The AP’s reporters found evidence that China’s surveillance network—from the “Golden Shield” policing system that Beijing officials have used to censor the internet and crack down on alleged terrorists to the tools used to target, track, and often detain Uyghurs in the country’s Xinjiang region—appears to have been built with the help of American companies, including IBM, Dell, Cisco, Intel, Nvidia, Oracle, Microsoft, Thermo Fisher, Motorola, Amazon Web Services, Western Digital, and HP. In many cases, the AP found Chinese-language marketing materials in which the Western companies specifically offer surveillance applications and tools to Chinese police and domestic intelligence services.
Scattered Spider, a rare hacking and extortion cybercriminal gang based largely in Western countries, has for years unleashed a trail of chaos across the internet, hitting targets from MGM Resorts and Caesar’s Palace to the Marks & Spencer grocery chain in the United Kingdom. Now two alleged members of that notorious group have been arrested in the UK: 19-year-old Thalha Jubair and 18-year-old Owen Flowers, both charged with hacking the Transport for London transit system—reportedly inflicting more than $50 million in damage—among many other targets. Jubair alone is accused of intrusions targeting 47 organizations. The arrests are just the latest in a string of busts targeting Scattered Spider, which has nonetheless continued a nearly uninterrupted string of breaches. Noah Urban, who was convicted on charges related to Scattered Spider activity, spoke from jail to Bloomberg Businessweek for a long profile of his cybercriminal career. Urban, 21, has been sentenced to a decade in prison.
Opinions expressed by Entrepreneur contributors are their own.
Modern supply chains are a complex web of interconnected, intertwined digital ecosystems, each supporting the other. Look around you, and everything from how your workstations perform to how your data is being managed consists of several different suppliers and vendors, beyond what might be evident to you on first glance.
You may have bought your web domain from an American company, but your hosting servers are in Europe. You probably bought your cloud infrastructure from AWS or Google, but your data is being stored in a remote village in Norway.
Beyond what is visible lies a plethora of vendors and suppliers that work together like clockwork to make sure your business infrastructure remains up and running.
However, this is where the problem begins. A single outage, data breach or fault with one of these vendors can have a devastating ripple effect on your business operations.
Your direct vendor might not even be responsible: their service might depend on a third-party provider with whom you have no connection, and yet your business takes the full brunt of the situation.
Therefore, in today’s world, companies don’t just have to prepare for internal data risks but also think about the data risks posed to their suppliers and vendors.
In 2021, millions of websites across the world suddenly went offline, including business websites, banks, ecommerce portals and even government agencies. The outage took out a major chunk of European, and mostly French, websites.
After a couple of hours, the cause was found: a fire had destroyed one of the four data centers owned by the company OVHcloud.
While the data centers supposedly had backups, the resulting damage in terms of data breaches and lost business cost tens of millions of dollars.
Even some of the largest companies in the world are regularly attacked and are susceptible to data leaks.
Orange Belgium’s data breach exposed information of 850,000 customers. Allianz Life’s data breach exposed personal information of more than a million customers, and a Qantas cyberattack leaked information on over six million airline customers!
More recently, a ransomware attack on the UK’s NHS (National Health Service) disrupted blood tests across several London hospitals, eventually leading to the death of at least one patient. The software provider for the NHS, Advanced Computer Systems, was eventually fined £3 million, but only after an innocent life had already been lost.
While these large organizations cannot be solely blamed, it is clear that even if you have the most robust IT and security infrastructure within your organization, you are never immune to the vulnerabilities of your vendors.
Common mistakes that lead to weak data management
As the OVHcloud example shows, many vendors simply lack a robust backup system to ensure operations run smoothly — this is where the problem starts. Compounding the poor backups, they often have an insufficient disaster recovery plan in case of a ransomware attack. That is how a fire in just one of four data centers brought down millions of customers’ websites.
Another example might be the NHS’s software. They probably had data integrity checks built into their security, but they were insufficient, making it easy for an attack to take place across a number of locations. Overall, a reliance on manual recovery efforts and weak cybersecurity practices creates vulnerabilities that can have devastating consequences.
Any data breaches or attacks on your vendors will have a direct impact on your business. It can directly result in operational downtime, which can include workflows that completely stop working, supply chain disruptions, invoicing issues and much more.
In the short run, it can lead to lost sales, SLA breaches and even penalties, while in the long run, the financial impact due to reputational damage can be even worse. If customers can’t trust you to deliver on time or protect their data, they might never return.
It’s important to safeguard your business against such scenarios, and there are a couple of steps that can help you mitigate these.
How to mitigate a vendor data crisis
Before signing a contract with a vendor, it’s important to do your due diligence and assess their data and security infrastructure. This might seem obvious, but it is one of the most important first steps you can take to protect your business and data against vulnerabilities.
It is also important to carry out regular audits and ensure SLAs are met and that they are up-to-date with industry standards.
Overall, there needs to be a plan for diversification so that no single vendor can impact a critical workflow.
Why it’s important to have robust data recovery tools
Despite all the due diligence and backups, no system is 100% fail-proof. This is why your business must have reliable recovery tools that can help recover damaged files, important emails and even complete databases, making sure your organization can be back on its feet as soon as possible.
A company’s data can be worth tens of thousands of dollars for a small business and much more for a larger organization. Using such software is the perfect safety net when prevention fails.
As businesses around the world have shifted their digital infrastructure over the last decade from self-hosted servers to the cloud, they’ve benefitted from the standardized, built-in security features of major cloud providers like Microsoft. But with so much riding on these systems, there can be potentially disastrous consequences at a massive scale if something goes wrong. Case in point: Security researcher Dirk-jan Mollema recently stumbled upon a pair of vulnerabilities in Microsoft Azure’s identity and access management platform that could have been exploited for a potentially cataclysmic takeover of all Azure customer accounts.
Known as Entra ID, the system stores each Azure cloud customer’s user identities, sign-in access controls, applications, and subscription management tools. Mollema has studied Entra ID security in depth and published multiple studies about weaknesses in the system, which was formerly known as Azure Active Directory. But while preparing to present at the Black Hat security conference in Las Vegas in July, Mollema discovered two vulnerabilities that he realized could be used to gain global administrator privileges—essentially god mode—and compromise every Entra ID directory, or what is known as a “tenant.” Mollema says that this would have exposed nearly every Entra ID tenant in the world other than, perhaps, government cloud infrastructure.
“I was just staring at my screen. I was like, ‘No, this shouldn’t really happen,’” says Mollema, who runs the Dutch cybersecurity company Outsider Security and specializes in cloud security. “It was quite bad. As bad as it gets, I would say.”
“From my own tenants—my test tenant or even a trial tenant—you could request these tokens and you could impersonate basically anybody else in anybody else’s tenant,” Mollema adds. “That means you could modify other people’s configuration, create new and admin users in that tenant, and do anything you would like.”
Given the seriousness of the vulnerability, Mollema disclosed his findings to the Microsoft Security Response Center on July 14, the same day that he discovered the flaws. Microsoft started investigating the findings that day and issued a fix globally on July 17. The company confirmed to Mollema that the issue was fixed by July 23 and implemented extra measures in August. Microsoft issued a CVE for the vulnerability on September 4.
“We mitigated the newly identified issue quickly, and accelerated the remediation work underway to decommission this legacy protocol usage, as part of our Secure Future Initiative,” Tom Gallagher, Microsoft’s Security Response Center vice president of engineering, told WIRED in a statement. “We implemented a code change within the vulnerable validation logic, tested the fix, and applied it across our cloud ecosystem.”
Gallagher says that Microsoft found “no evidence of abuse” of the vulnerability during its investigation.
Both vulnerabilities relate to legacy systems still functioning within Entra ID. The first involves a type of Azure authentication token Mollema discovered known as Actor Tokens that are issued by an obscure Azure mechanism called the “Access Control Service.” Actor Tokens have some special system properties that Mollema realized could be useful to an attacker when combined with another vulnerability. The other bug was a major flaw in a historic Azure Active Directory application programming interface known as “Graph” that was used to facilitate access to data stored in Microsoft 365. Microsoft is in the process of retiring Azure Active Directory Graph and transitioning users to its successor, Microsoft Graph, which is designed for Entra ID. The flaw was related to a failure by Azure AD Graph to properly validate which Azure tenant was making an access request, which could be manipulated so the API would accept an Actor Token from a different tenant that should have been rejected.
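The tenant-validation failure can be illustrated with a toy model (all names and fields here are hypothetical; this is not Microsoft’s actual code): a token records which tenant minted it, and a correct API rejects the token when that tenant differs from the one whose resources are being requested.

```python
from dataclasses import dataclass

@dataclass
class ActorToken:
    issuing_tenant: str   # tenant that minted the token
    subject: str          # identity the token claims to act as

def validate_request(token: ActorToken, target_tenant: str,
                     enforce_tenant_check: bool = True) -> bool:
    """Accept the request only if the token was minted by the tenant
    whose resources are being accessed."""
    if enforce_tenant_check and token.issuing_tenant != target_tenant:
        return False  # cross-tenant token: reject
    return True

# A token from the attacker's own tenant must not work against another tenant.
attacker_token = ActorToken(issuing_tenant="attacker-tenant", subject="admin")
assert validate_request(attacker_token, "victim-tenant") is False

# With the check effectively skipped (analogous to the reported bug), the
# same attacker-minted token is accepted against the victim's tenant.
assert validate_request(attacker_token, "victim-tenant",
                        enforce_tenant_check=False) is True
```

The reported flaw amounted to the legacy Graph API behaving like the second case: it failed to verify which tenant had issued the Actor Token, so a token minted in an attacker-controlled tenant was honored elsewhere.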
Private Internet Access (PIA) has a long history in the VPN space, and it’s maintained a track record of defending user privacy—even in the face of actual criminal activity. In 2016, a criminal complaint was filed in Florida against Preston Alexander McWaters for threats made online. McWaters was eventually convicted and sentenced to 42 months in prison. Investigators traced the online threats back to PIA’s servers and subpoenaed the company. As the complaint reads, “A subpoena was sent to [Private Internet Access] and the only information they could provide is that the cluster of IP addresses being used was from the east coast of the United States.” McWaters engaged in several other identifying activities, according to the complaint, but PIA wasn’t among them. Despite such a clear view of a VPN provider upholding its no-logging policy, PIA didn’t impress me during my tests. It’s slightly more expensive than a lot of our top picks, and it delivered the worst speeds out of any VPN I tested, with more than a 50 percent drop on the closest US server. (Windscribe, for context, only dropped 15.6 percent of my speed.)
MysteriumVPN is the go-to dVPN, or decentralized VPN, as far as I can tell. The concept of a decentralized VPN has existed for a while, but it’s really gained traction over the last couple of years. The idea is a network made up of residential IP addresses, routing your traffic through normal-looking IPs to get around the increasingly common block lists for VPN servers. Mysterium builds this network with MystNodes: crypto nodes that people run to earn cryptocurrency, and which are put into the Mysterium network. That’s not inherently bad, but routing your traffic through a single residential IP is a little worrisome. Even setting the decentralized angle aside, Mysterium was slow, and it doesn’t maintain any sort of privacy materials, be it a third-party audit, warrant canary, or transparency report.
PrivadoVPN is one of the popular options to recommend as a free VPN. It offers a decent free service, with a handful of full-speed servers and 10 GB of data per month. You’ll have to suffer through four—yes, four—redirects begging you to pay for a subscription before signing up, but the free plan works. The problem is how new PrivadoVPN is. There’s no transparency report or audit available, and although the speeds are decent, they aren’t as good as Proton, Windscribe, or Surfshark. PrivadoVPN isn’t bad, but it’s hard to recommend when Proton and Windscribe exist with free plans that are just as good.
How We Test VPNs
Functionally, a VPN should do two things: keep your internet speed reasonably fast, and actually protect your browsing data. That’s where I focused my testing. Extra features, a comfy UI, and customization settings are great, but they don’t matter if the core service is broken.
Speed testing requires spot-checking, as the time of day, the network you’re connected to, and the specific VPN server you’re using can all influence speeds. Because of that, I always set a baseline speed on my unprotected connection directly before recording results, and I ran the test three times across both US and UK servers. With those baseline drops, I spot-checked at different times of the day over the course of a week to see if the speed decrease was similar.
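The percentage-drop arithmetic behind these comparisons is simple; a small helper sketch in Python, with a median across repeated runs to smooth out time-of-day noise (an illustration of the calculation, not the exact methodology used in the testing described above):

```python
def percent_drop(baseline_mbps: float, vpn_mbps: float) -> float:
    """Percentage of baseline throughput lost while the VPN is connected."""
    if baseline_mbps <= 0:
        raise ValueError("baseline must be positive")
    return (baseline_mbps - vpn_mbps) / baseline_mbps * 100

def median_drop(baseline_mbps: float, samples_mbps: list[float]) -> float:
    """Median drop across repeated runs against one baseline measurement."""
    drops = sorted(percent_drop(baseline_mbps, v) for v in samples_mbps)
    mid = len(drops) // 2
    return drops[mid] if len(drops) % 2 else (drops[mid - 1] + drops[mid]) / 2
```

For instance, a hypothetical 500 Mbps baseline that falls to 422 Mbps with the VPN connected is a 15.6 percent drop. Re-measuring the baseline immediately before each run matters because the unprotected speed itself shifts through the day.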
Security is a bit more involved. For starters, I checked for DNS, WebRTC, and IP leaks every time I connected to a server using Browser Leaks. I also ran brief tests sniffing my connection with Wireshark to ensure all of the packets being sent were secured with the VPN protocol in use.
On the privacy front, the top-recommended services included on this list have been independently audited, and they all maintain some sort of transparency report. In most cases, there’s a proper report, but in others, such as Windscribe, that transparency is exposed through legal proceedings.
Disclosure: Our goal is to feature products and services that we think you’ll find interesting and useful. If you purchase them, Entrepreneur may get a small share of the revenue from the sale from our commerce partners.
If you’re serious about getting into (or advancing your knowledge of) cybersecurity then access to quality, up-to-date training isn’t optional. It’s essential. InfoSec4TC’s Platinum Membership makes that access easy, with a comprehensive lifetime subscription to over 90 expert-led certification courses and continuously updated training material.
Through October 5, you’ll pay just a one-time payment of $52.97 (MSRP $280) to unlock lifetime, self-paced access to preparation for top IT security certifications: CISSP, GSEC, CISM, CISA, Ethical Hacking, and more. The membership also includes exam question updates, extra course resources, and future course additions — all at no additional cost.
Courses are designed for professionals at all levels. Whether you’re aiming to earn your first credential, shift careers into cybersecurity, or expand your current skill-set, the structured curriculum and clear learning paths make the journey approachable. You’ll also receive an attendance certificate with CPEs, access to private study groups, and one free session of career consulting and planning.
And InfoSec4TC doesn’t stop at instruction. Their mentorship approach helps you stay accountable and on track, no matter if your goal is certification, a job title upgrade, or a full career transition into infosec. With more than 90 courses and growing, your training evolves as the industry does.
Buried in an ocean of flashy novelties announced by Apple this week, the tech giant also revealed new security technology for its latest iPhone 17 and iPhone Air devices. This new security technology was made specifically to fight against surveillance vendors and the types of vulnerabilities they rely on the most, according to Apple.
The feature is called Memory Integrity Enforcement (MIE) and is designed to help stop memory corruption bugs, which are some of the most common vulnerabilities exploited by spyware developers and makers of phone forensic devices used by law enforcement.
“Known mercenary spyware chains used against iOS share a common denominator with those targeting Windows and Android: they exploit memory safety vulnerabilities, which are interchangeable, powerful, and exist throughout the industry,” Apple wrote in its blog post.
Cybersecurity experts, including people who make hacking tools and exploits for iPhones, tell TechCrunch that this new security technology could make Apple’s newest iPhones some of the most secure devices on the planet. The result is likely to make life harder for the companies that make spyware and zero-day exploits for planting spyware on a target’s phone or extracting data from them.
“The iPhone 17 is probably now the most secure computing environment on the planet that is still connected to the internet,” a security researcher, who has worked on developing and selling zero-days and other cyber capabilities to the U.S. government for years, told TechCrunch.
The researcher told TechCrunch that MIE will raise the cost and time to develop their exploits for the latest iPhones, and consequently up their prices for paying customers.
“This is a huge deal,” said the researcher, who asked to remain anonymous to discuss sensitive matters. “It’s not hack proof. But it’s the closest thing we have to hack proof. None of this will ever be 100% perfect. But it raises the stakes the most.”
Jiska Classen, a professor and researcher who studies iOS at the Hasso Plattner Institute in Germany, agreed that MIE will raise the cost of developing surveillance technologies.
Classen said this is because some of the bugs and exploits that spyware companies and researchers have that currently work will stop working once the new iPhones are out and MIE is implemented.
“I could also imagine that for a certain time window some mercenary spyware vendors don’t have working exploits for the iPhone 17,” said Classen.
“This will make their life arguably infinitely more difficult,” said Patrick Wardle, a researcher who runs a startup that makes cybersecurity products specifically for Apple devices. “Of course that is said with the caveat that it’s always a cat-and-mouse game.”
Wardle said people who are worried about getting hacked with spyware should upgrade to the new iPhones.
The experts TechCrunch spoke to said MIE will reduce the efficacy of both remote hacks, such as those launched with spyware like NSO Group’s Pegasus and Paragon’s Graphite, and physical device hacks, such as those performed with phone-unlocking hardware like Cellebrite or Graykey.
Taking on the “majority of exploits”
Most modern devices, including the majority of iPhones today, run software written in programming languages that are prone to memory-related bugs, often called memory overflow or corruption bugs. When triggered, a memory bug can cause the contents of memory from one app to spill into other areas of a user’s device where it shouldn’t go.
Memory-related bugs can allow malicious hackers to access and control parts of a device’s memory that they shouldn’t be permitted to. The access can be used to plant malicious code that’s capable of gaining broader access to a person’s data stored in the phone’s memory, and exfiltrating it over the phone’s internet connection.
MIE aims to defend against these kinds of broad memory attacks by vastly reducing the attack surface in which memory vulnerabilities can be exploited.
According to Halvar Flake, an expert in offensive cybersecurity, memory corruptions “are the vast majority of exploits.”
MIE is built on a technology called Memory Tagging Extension (MTE), originally developed by chipmaker Arm. In its blog post, Apple said that over the past five years it worked with Arm to expand and improve MTE’s memory safety features into a version called Enhanced Memory Tagging Extension (EMTE).
MIE is Apple’s implementation of this new security technology, which takes advantage of Apple having complete control of its technology stack, from software to hardware, unlike many of its phone-making competitors.
Google offers MTE for some Android devices; the security-focused GrapheneOS, a custom version of Android, also offers MTE.
But other experts say Apple’s MIE goes a step further. Flake said the Pixel 8 and GrapheneOS are “almost comparable,” but the new iPhones will be “the most secure mainstream” devices.
MIE works by assigning each piece of a newer iPhone’s memory a secret tag, effectively its own unique password. This means only apps holding that secret tag can access that physical memory in the future. If the secret doesn’t match, the security protections kick in, the offending app crashes, and the event is logged.
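The tag-check idea can be sketched as a toy allocator (greatly simplified and not Apple’s implementation; real MTE tags live in hardware pointer bits and memory metadata, not in a Python dictionary):

```python
import secrets

class TagMismatch(Exception):
    """Raised when a pointer's tag no longer matches its allocation."""

class TaggedHeap:
    def __init__(self):
        self._mem = {}    # address -> (tag, value)
        self._next = 0

    def alloc(self, value):
        # Each allocation gets a random tag; MTE uses 4-bit tags.
        addr, tag = self._next, secrets.randbits(4)
        self._mem[addr] = (tag, value)
        self._next += 1
        return (addr, tag)  # a "pointer" is the address plus its tag

    def load(self, ptr):
        addr, tag = ptr
        real_tag, value = self._mem[addr]
        if tag != real_tag:
            # The hardware analogue: block the access, crash, and log.
            raise TagMismatch(f"tag check failed at address {addr}")
        return value

# Valid access: the pointer's tag matches the allocation's tag.
heap = TaggedHeap()
p = heap.alloc("contact list")
assert heap.load(p) == "contact list"

# A corrupted or forged pointer carries the wrong tag and is refused.
addr, tag = p
try:
    heap.load((addr, (tag + 1) % 16))
except TagMismatch:
    pass  # access blocked and, on a real device, logged for defenders
```

The crash-and-log behavior in the last branch is why researchers expect failed exploit attempts to leave recoverable artifacts, as described below.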
That crash and log is particularly significant since it’s more likely for spyware and zero-days to trigger a crash, making it easier for Apple and security researchers investigating attacks to spot them.
“A wrong step would lead to a crash and a potentially recoverable artifact for a defender,” said Matthias Frielingsdorf, the vice president of research at iVerify, a company that makes an app to protect smartphones from spyware. “Attackers already had an incentive to avoid memory corruption.”
Apple did not respond to a request for comment.
MIE will be on by default system wide, which means it will protect apps like Safari and iMessage, which can be entry points for spyware. But third-party apps will have to implement MIE on their own to improve protections for their users. Apple released a version of EMTE for developers to do that.
In other words, MIE is a huge step in the right direction, but it will take some time to see its impact, depending on how many developers implement it and how many people buy new iPhones.
Some attackers will inevitably still find a way.
“MIE is a good thing and it might even be a big deal. It could significantly raise the cost for attackers and even force some of them out of the market,” said Frielingsdorf. “But there are going to be plenty of bad actors that can still find success and sustain their business.”
“As long as there are buyers, there will be sellers,” he added.
Opinions expressed by Entrepreneur contributors are their own.
Our lives have migrated to a virtual world to the point where our emails have become an entry point to our identity. Medical records, employment history, education, world views and all that comes to mind, which pertains to who we are as people, likely have some form of digital footprint that can be traced back to us. While this can translate to seamless convenience, whether personalized recommendations or quick product deliveries, there remains a risk of exposure that threat actors constantly exploit.
The tech titans who handle our data and boast a robust security infrastructure are the same ones who lost control of our data. With 16 billion Apple, Facebook, Google and other passwords leaked, a large question mark looms over the reliability of traditional security systems. The centralized databases and login processes of yesteryear are simply unable to keep up with today’s increasingly sophisticated cyber threats. Our passwords and two-factor authentication fall short in securing our digital identities.
Digitization has become deeply entrenched in the fabric of how we operate as a society on a global scale, with 5.56 billion people online today and 402.74 million terabytes of data generated on a daily basis. The dizzying numbers demonstrate the breakneck speed with which every aspect of our lives has taken a virtual shape, and with it, the proliferation of the conversation about how we secure the digital world we have created.
With the current security measures in use, cybercrime is expected to cost over $639 billion in the United States this year, with costs projected to balloon to as much as $1.82 trillion by 2028. In light of such figures, building a secure infrastructure is a priority that demands immediate attention; disregarding it puts digital identity at risk.
Decentralize to prevent compromise
The centralized databases of tech titans mean there is one location, one source of truth; if it is compromised, everything it contains is exposed, as happened with the leaked passwords. If not a leak, then a ransomware attack can disrupt the systems on which our digital lives operate. This kind of disruption can cascade into fundamental services such as healthcare, as when a recent ransomware attack caused a system-wide tech outage at a large network of medical centers in Ohio, cancelling inpatient and outpatient procedures.
Centralization’s single point of failure calls for a shift in how to operate tech infrastructures — a shift to decentralized data storage. Unlike centralized systems, blockchain networks distribute data across a large multitude of nodes that are in constant verification of one another through cryptographic consensus. To verify the data, the majority of nodes must be in agreement, a majority that rejects tampered “blocks” or compromised nodes. This means that there is no single repository that can be compromised, as attackers would need to compromise the majority of the nodes, a task immensely more challenging than the common compromise of a centralized server.
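The majority-verification idea can be sketched in a few lines of Python. This is a deliberately simplified illustration of the principle described above, not any real blockchain protocol; the function names and the strict-majority rule are assumptions for the sake of the example.

```python
import hashlib

def block_hash(data: str) -> str:
    # Content-addressed identity for a block: any tampering changes the hash
    return hashlib.sha256(data.encode()).hexdigest()

def majority_accepts(node_hashes: list[str], candidate: str) -> bool:
    """Toy consensus check: a candidate block is accepted only if a strict
    majority of nodes independently hold the same hash for it. An attacker
    must therefore compromise most nodes, not just one server."""
    votes = sum(1 for h in node_hashes if h == block_hash(candidate))
    return votes > len(node_hashes) // 2
```

With five nodes, three of which hold the hash of the honest ledger and two of which have been tampered with, the honest block is accepted and the tampered one rejected: corrupting a minority of nodes accomplishes nothing, unlike breaching a single central database.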
The beauty of blockchain technology is its ownership element. As everything is secured by cryptography, the only way to decrypt the data and access it is through your own private keys. However, if a threat actor gains access to your private keys, they also gain access to your data and funds, a threat that calls into question how secure the shift from centralized to decentralized storage really is.
If a private key is proof of one’s identity, then its loss equates to the loss of one’s digital identity, a compromise that can only be secured by undeniable proof that the owner of the keys is indeed who they claim to be. This is where biometric authentication becomes the final piece in the puzzle of securing one’s digital identity in a decentralized infrastructure.
Using one’s fingerprint in an offline environment for identity verification not only ensures ownership of data and its security but also prevents the exposure of biometric data to a server where it could be breached. This creates a new paradigm that deems passwords and two-factor authentication obsolete. Building on such a methodology opens pathways for a secure digital identity and KYC verification on a decentralized infrastructure, leaving no room for threat actors to compromise digital identities.
The conversation on digital security is the result of an absolute necessity in the face of increasingly sophisticated cyber attacks. However, adding uppercase letters, symbols and numbers to your password will not be enough. The added layer of two-factor authentication will not be enough either. More steps do not equate to more security. The future of security lies in an infrastructure shift from the centralized to the decentralized, protected by a layer of biometric authentication that ensures that one’s digital identity is secured.
Financial institutions are navigating a growing cybersecurity minefield, with data breaches doubling since 2023 and increasingly affecting a company’s market confidence or regulatory standing.
According to a report from AInvest, third-party breaches in the financial sector have doubled since 2023. The report also found that the average breach costs $4.8 million, while insider-related incidents cost $17.4 million per organization.
With cyberattacks via third-party vendors and insiders rising, investors are beginning to scrutinize fintech and banking stocks for cyber resiliency as intensely as for earnings per share.
Hacks of this type often take around 80 days to contain, illustrating how experts still struggle to thwart real-time risks.
Hacks are growing in size and impact
The consequences also go beyond balance sheets: Santander’s 2025 cross-border data breach, for instance, dented its market standing even before regulatory fines were levied.
In that attack, 30 million customers from Spain, Uruguay and Chile, along with some Santander employees, had their data stolen, including personal details such as social security numbers. In October 2024, the bank was fined €50,000 by the Spanish data protection agency (AEPD) for failing to report the breach and violating the General Data Protection Regulation (GDPR).
“Following an investigation, we have now confirmed that certain information relating to customers of Santander Chile, Spain and Uruguay, as well as all current and some former Santander employees of the group had been accessed,” it said in a statement posted at the time.
“No transactional data, nor any credentials that would allow transactions to take place on accounts are contained in the database, including online banking details and passwords.”
A rising tide of threats
These trends align with research from the International Monetary Fund, which found that the growing scale and sophistication of cyberattacks on financial infrastructure are now large enough to threaten economic stability.
The full cost of a breach, once it has been noticed, identified, disclosed to customers and penalized by regulators, has soared to $2.5 billion, accounting for reputational, regulatory, and remediation impacts.
Investors are also seeing a shift in the political and regulatory landscape. The European Union’s Digital Operational Resilience Act (DORA) and the UK’s Cyber Resilience Bill are ushering in higher standards for third-party risk and digital continuity in financial services.
Meanwhile, the Reserve Bank of India is demanding that banks deploy “AI-aware” defenses under a zero-trust framework, citing systemic risks tied to vendor lock-ins. For investors and regulators, cybersecurity is no longer just an IT concern; it’s a board-level strategic imperative.
The real-world cost of cyber vulnerability
In the UK, institutions like HSBC and Santander continue logging dozens of service outages each year, despite investments in cybersecurity and modernization. Barclays alone reported 33 outages between 2023 and 2025, an alarming reminder of the fragility of complex, dated infrastructure.
Similarly, a surge in phishing and third-party breaches is forcing firms to redirect resources toward building resilience-based infrastructure. New findings show that 45% of employees at large financial institutions remain susceptible to clicking malicious links, making human error a critical line of attack even with technical safeguards.
Thinking of investing in bank stocks?
For investors, the key takeaway is clear: cybersecurity maturity must factor into valuation and stock selection, especially within the fintech and banking sectors.
Companies investing in zero-trust architecture (which requires strict verification of every user, device, and application before granting access to resources) and AI-based anomaly detection are likely to be better protected, and safer bets for investors wanting to avoid hacks.
Additionally, companies that have rigorous quarterly audits of their third-party cybersecurity plans see much more confidence from the capital markets.
Operational resilience is another critical factor, with institutions that participate in cyber war games and incident response exercises, organized by entities like the Federal Reserve and FS-ISAC, being viewed more favorably.
Another sign banks take security seriously? Financial institution leaders who prioritize employee cybersecurity training are recognized for effectively closing the most dangerous gaps in the defense chain, enhancing overall human risk management.
Security as a competitive edge
The confluence of regulatory pressure, rising financial fallout, and geopolitical cyber threats means investors can no longer afford to overlook cybersecurity metrics. Firms that treat defense as a cost center may ultimately come off worse than those that regard it as a strategic asset.
Financial institutions that embrace robust cyber hygiene, anticipate evolving threats—including AI and quantum risks—and align with regulatory expectations, could well distinguish themselves as proven leaders rather than potential liabilities. The security of tomorrow’s balance sheet may well depend on the strength of today’s defenses.
The Biden administration considered spyware used to hack phones controversial enough that it was tightly restricted for US government use in an executive order signed in March 2023. In Trump’s no-holds-barred effort to empower his deportation force—already by far the most well-funded law enforcement agency in the US government—that’s about to change, and the result could be a powerful new form of domestic surveillance.
Multiple tech and security companies—including Cloudflare, Palo Alto Networks, Spycloud, and Zscaler—have confirmed customer information was stolen in a hack that originally targeted a chatbot system belonging to sales and revenue generation company Salesloft. The sprawling data theft started in August, but in recent days more companies have revealed they had customer information stolen.
Toward the end of August, Salesloft first confirmed it had discovered a “security issue” in its Drift application, an AI chatbot system that allows companies to track potential customers who engage with the chatbot. The company said the security issue is linked to Drift’s integration with Salesforce. Between August 8 and August 18, hackers used compromised OAuth tokens associated with Drift to steal data from accounts.
Google’s security researchers revealed the breach at the end of August. “The actor systematically exported large volumes of data from numerous corporate Salesforce instances,” Google wrote in a blog post, pointing out that the hackers were looking for passwords and other credentials contained in the data. More than 700 companies may have been impacted, with Google later saying it had seen Drift’s email integration being abused.
On August 28, Salesloft paused its Salesforce-Salesloft integration as it investigated the security issues; then on September 2 it said, “Drift will be temporarily taken offline in the very near future” so it can “build additional resiliency and security in the system.” It’s likely more companies impacted by the attack will notify customers in the coming days.
Obtaining intelligence on the internal workings of the Kim regime that has ruled North Korea for three generations has long presented a serious challenge for US intelligence agencies. This week, The New York Times revealed in a bombshell account of a highly classified incident how far the US military went in one effort to spy on the regime. In 2019, SEAL Team 6 was sent to carry out an amphibious mission to plant an electronic surveillance device on North Korean soil—only to fail and kill a boatful of North Koreans in the process. According to the Times’ account, the Navy SEALs got as far as swimming onto the shores of the country in mini-subs deployed from a nuclear submarine. But due to a lack of reconnaissance and the difficulty of surveilling the area, the special forces operators were confused by the appearance of a boat in the water, shot everyone aboard, and aborted their mission. The North Koreans in the boat, it turned out, were likely unwitting civilians diving for shellfish. The Trump administration, the Times reports, never informed leaders of congressional committees that oversee military and intelligence activities.
Phishing remains one of the oldest and most reliable ways for hackers to gain initial access to a target network. One study suggests a reason why: Training employees to detect and resist phishing attempts is surprisingly tough. In a study of 20,000 employees at the health care provider UC San Diego Health, simulated phishing attempts designed to train staff resulted in only a 1.7 percent decrease in the staff’s failure rate compared to staff who received no training at all. That’s likely because staff simply ignored or barely registered the training, the study found: In 75 percent of cases, the staff member who opened the training link spent less than a minute on the page. Staff who completed a training Q&A, by contrast, were 19 percent less likely to fail on subsequent phishing tests—still hardly a very reassuring level of protection. The lesson? Find ways to detect phishing that don’t require the victim to spot the fraud. As is often noted in the cybersecurity industry, humans are the weakest link in most organizations’ security—and they appear stubbornly determined to stay that way.
Online piracy is still big business—last year, people made more than 216 billion visits to piracy sites streaming movies, TV, and sports. This week, however, the largest illegal sports streaming platform, Streameast, was shut down following an investigation by anti-piracy industry group the Alliance for Creativity and Entertainment and authorities in Egypt. Before the takedown, Streameast operated a network of 80 domains that saw more than 1.6 billion visits per year. The piracy network streamed soccer games from England’s Premier League and other matches across Europe, plus NFL, NBA, NHL, and MLB games. According to The Athletic, two men in Egypt were allegedly arrested on copyright infringement charges, and authorities found links to a shell company allegedly used to launder around $6.2 million in advertising revenue over the past 15 years.
Sextortion-based hacking, which hijacks a victim’s webcam or blackmails them with nudes they’re tricked or coerced into sharing, has long represented one of the most disturbing forms of cybercrime. Now one specimen of widely available spyware has turned that relatively manual crime into an automated feature, detecting when the user is browsing pornography on their PC, screenshotting it, and taking a candid photo of the victim through their webcam.
On Wednesday, researchers at security firm Proofpoint published their analysis of an open-source variant of “infostealer” malware known as Stealerium that the company has seen used in multiple cybercriminal campaigns since May of this year. The malware, like all infostealers, is designed to infect a target’s computer and automatically send a hacker a wide variety of stolen sensitive data, including banking information, usernames and passwords, and keys to victims’ crypto wallets. Stealerium, however, adds another, more humiliating form of espionage: It also monitors the victim’s browser for web addresses that include certain NSFW keywords, screenshots browser tabs that include those words, photographs the victim via their webcam while they’re watching those porn pages, and sends all the images to a hacker—who can then blackmail the victim with the threat of releasing them.
“When it comes to infostealers, they typically are looking for whatever they can grab,” says Selena Larson, one of the Proofpoint researchers who worked on the company’s analysis. “This adds another layer of privacy invasion and sensitive information that you definitely wouldn’t want in the hands of a particular hacker.”
“It’s gross,” Larson adds. “I hate it.”
Proofpoint dug into the features of Stealerium after finding the malware in tens of thousands of emails sent by two different hacker groups it tracks (both relatively small-scale cybercriminal operations), as well as a number of other email-based hacking campaigns. Stealerium, strangely, is distributed as a free, open source tool available on GitHub. The malware’s developer, who goes by the name witchfindertr and describes themselves as a “malware analyst” based in London, notes on the page that the program is for “educational purposes only.”
“How you use this program is your responsibility,” the page reads. “I will not be held accountable for any illegal activities. Nor do i give a shit how u use it.”
In the hacking campaigns Proofpoint analyzed, cybercriminals attempted to trick users into downloading and installing Stealerium as an attachment or a web link, luring victims with typical bait like a fake payment or invoice. The emails targeted victims inside companies in the hospitality industry, as well as in education and finance, though Proofpoint notes that users outside of companies were also likely targeted but wouldn’t be seen by its monitoring tools.
Once it’s installed, Stealerium is designed to steal a wide variety of data and send it to the hacker via services like Telegram, Discord, or the SMTP protocol in some variants of the spyware, all of which is relatively standard in infostealers. The researchers were more surprised to see the automated sextortion feature, which monitors browser URLs for a list of pornography-related terms such as “sex” and “porn”—a list the hacker can customize—and triggers simultaneous image captures from the user’s webcam and browser. Proofpoint notes that it hasn’t identified any specific victims of that sextortion function, but the existence of the feature suggests it was likely used.