In today’s rapidly evolving threat landscape, security leaders are being asked to do more with less. Shrinking budgets, hiring freezes, and reduced access to critical tools are the new reality for CISOs and their teams. Yet the expectations have never been higher: business resilience, regulatory compliance, and innovation must all move forward, often simultaneously.
That’s why I sought out Microsoft’s top security minds during Security Summit Days. My goal was to surface the questions that matter most to CISOs and to share actionable insights for navigating uncertainty, driving transformation, and building a future-ready security strategy.
The silo problem: Why integration is non-negotiable
I started by asking: What’s the biggest challenge facing security leaders today? The answer was unanimous.
“The biggest challenge for leaders is that a lot of products work in silos… We need to focus more on the ecosystem versus these siloed products.” — Emmanuel Taiwo, Microsoft Senior AI Security Solution Engineer Leader
This resonates with what I’m hearing across the industry. CISOs are expected to manage everything from risk assessments and compliance to incident response and board-level strategy—often with fewer resources and less support. Integration isn’t optional; it’s the only way to do more with less.
From reactive to proactive: The AI advantage
I pressed the team on how organizations can shift from a reactive to a proactive security posture. The consensus? AI is a game-changer.
“Leaders have moved from a reactive to a more proactive approach… They want to focus more on a proactive approach to know about a vulnerability and threat before it could happen.” — Kriti Arora, Microsoft Senior Security & Compliance Solution Engineer
With budgets tight, CISOs are prioritizing high-impact areas like identity management and zero-trust architecture over broader awareness programs. AI-driven tools like Microsoft 365 Copilot, Defender, and Sentinel help organizations anticipate threats, automate responses, and visualize their entire attack surface—across cloud, hybrid, and on-premises environments.
Data at the center: Know what you’re protecting
With so much data, how do you know what to protect? I challenged the group, and the answer was refreshingly practical:
“First, you need to understand what is the data that is important for your organization. If you don’t have the knowledge, it is very hard to put controls on it.” — Liliane Scarpari, Microsoft Security Solution Engineer
For CISOs, this means investing in data classification, governance, and compliance, especially as new AI regulations emerge globally. When resources are limited, knowing your “crown jewels” is the only way to focus your defenses where they matter most.
Security is everyone’s job: Building a security-first culture
Who owns security in a modern enterprise? The answer: Everyone.
“I don’t think we could just look at this as an IT professional, a security professional… We have to think about everyone being part of this transformation.” — Michael Billy, Microsoft Security General Manager
Training, awareness, and inclusive practices are essential. But with CISOs stretched thin, it is more important than ever to empower every employee to play their part.
Real-world impact: What success looks like
I wanted specifics. What does success look like when organizations get this right?
“When you bring [in] Sentinel and you’re able to bring these third party applications into that platform, you have cross correlation across everything—that’s immediate response data. In my experience in industry, that’s unheard of. Usually you’re having to pull this data set, pull that data set, and trying to bring them together. It just doesn’t work. With Sentinel and XDR, you’re getting a full picture of your estate quickly and more effectively. Overall, it’s going to take you a lot less time.” — Mike Taylor, Microsoft Senior Security Solution Engineer Leader
The bottom line: Integrated, AI-powered security delivers measurable business value—speed, efficiency, and resilience—even when budgets are tight.
Responsible AI and continuous improvement
How do we keep improving? I closed by asking about the future.
“Go back to the core fundamentals, know your estate, know what data you’re trying to protect. Ultimately, as you prepare for AI, you have to ensure that you have those identities. Make sure you have the data classifications established so you’ll be able to quickly move and pivot.” — Mike Taylor, Microsoft Senior Security Solution Engineer Leader
Continuous learning, responsible AI, and transparent governance are non-negotiable for leaders who want to stay ahead.
My takeaways for CISOs, BDMs, and SDMs
If you are leading security, here is what I would tell you after these conversations:
Break down silos. Integration is your best defense.
Invest in AI. Use it to anticipate, not just react.
Know your data. You cannot protect what you do not understand.
Empower your people. Security is everyone’s job.
Never stop learning. The threat landscape—and the technology—will keep evolving.
Continue your security leadership journey
The journey to future-proofing security does not end here. Each interview in the Security in the Age of AI: A Microsoft Leadership Series offers actionable insights and proven strategies from Microsoft’s security leadership—designed to help you lead with confidence in an evolving threat landscape.
Explore the full interview series and actionable knowledge directly from Microsoft’s security leaders on the topics that matter most:
A company that makes photo booths is exposing pictures and videos of its customers online thanks to a simple flaw in its website where the files are stored, according to a security researcher.
The researcher, who goes by Zeacer, alerted TechCrunch to the security issue in late November after reporting the vulnerability in October to Hama Film, the photo booth maker that has a franchise presence in Australia, the United Arab Emirates, and the United States, but did not hear back.
Zeacer shared with TechCrunch a sample of pictures taken from Hama Film’s servers, which showed groups of clearly young people posing in photo booths. Hama Film’s booths not only print out the photos like a typical photo booth, but also upload the customers’ photos to the company’s servers.
Vibecast, which owns Hama Film, has yet to respond to Zeacer’s messages alerting the company to the issues. Vibecast also hasn’t responded to several requests for comment from TechCrunch, nor did Vibecast’s co-founder Joel Park respond to a message we sent via LinkedIn.
As of Friday, the researcher said the company has still not fully resolved the security flaw and continues to expose customers’ data. As such, TechCrunch is withholding specific details of the vulnerability from publication.
When Zeacer first found this flaw, he noted that it appeared that photos were deleted from the photo booth maker’s servers every two to three weeks.
Now, he said, the pictures stored on the servers appear to get deleted after 24 hours, which limits the number of pictures exposed at any given time. But a hacker could still exploit the vulnerability he discovered each day and download the contents of every photo and video on the server.
Before this week, Zeacer said at one point he saw more than 1,000 pictures online for the Hama Film booths in Melbourne.
This incident is the latest example of a company that, at least for a time, was not implementing certain basic and widely accepted security practices, such as rate-limiting. Last month, TechCrunch reported that government contractor giant Tyler Technologies was not rate-limiting its websites used for allowing courts to manage their jurors’ personal information. This meant anyone could break into any juror’s profile by running a computer script capable of mass-guessing their date of birth and their easy-to-guess numerical identifier.
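The arithmetic behind that kind of attack is worth spelling out. The numbers below are illustrative assumptions, not figures from the report, but they sketch why a site that doesn’t rate-limit login attempts is effectively unprotected:

```python
# Illustrative sketch: why missing rate limits make brute force practical.
# Assume an attacker walks through easy-to-guess sequential identifiers and,
# for each profile, mass-guesses the juror's date of birth.
candidate_dobs = 366 * 80        # upper bound on birthdays across 80 birth years
requests_per_second = 50         # a modest pace for a simple, unthrottled script
minutes = candidate_dobs / requests_per_second / 60
print(f"{candidate_dobs:,} guesses, about {minutes:.0f} minutes per profile")
```

Even at a modest 50 requests per second, every date of birth in an 80-year range can be tried in about ten minutes per profile; a rate limit of, say, five attempts per hour would push the same search out to roughly eight months.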
ITHACA, N.Y., December 9, 2025 (Newswire.com)
– GrammaTech, a long-trusted provider of cybersecurity services and tools that improve and accelerate software development, today announced it has achieved “Awardable” status with its Dykondo (DYnamic KONtainer Debloater/Optimizer): Debloating container images for reduced attack surface and optimized edge deployments offering through the Platform One (P1) Solutions Marketplace. This designation allows government buyers to easily acquire GrammaTech’s container debloating capability for mission-critical efforts. GrammaTech’s solution, developed with support from the Office of Naval Research, is designed to provide an automated solution for container debloating.
Container images frequently include unnecessary software, libraries, and files that are irrelevant to specific deployments, resulting in bloated images that increase attack surfaces, trigger false positives in static container scans, and impose unnecessary storage and bandwidth burdens. While current best practices rely on manual Dockerfile optimization, this is time-consuming and limited in scope. Dykondo automates container debloating by removing superfluous components from recognized application and file types, returning a lean, secure image. The result: smaller images, fewer false vulnerability reports, reduced deployment overhead, and a minimized attack surface, without the need for complex Dockerfile tuning.
“This recognition from Platform One highlights the innovation and impact of our ONR-supported research, translating into deployable cybersecurity capabilities for national defense,” said Dan Goodwin, CEO of GrammaTech.
The P1 Solutions Marketplace is a digital repository of post-competition, readily awardable solutions, presented in five-minute videos, which address the Government’s greatest requirements in hardware, software, and service solutions. “With Dykondo now available through the P1 Solutions Marketplace, government teams can easily integrate advanced container hardening into their software pipelines, increasingly a priority for securing edge environments,” said Dr. Lucja Kot, Vice President of Research at GrammaTech.
GrammaTech was recognized among a competitive field of applicants to the P1 Solutions Marketplace whose solutions demonstrated innovation, scalability, and potential impact on DoD missions. Government customers interested in viewing a solution video can create a P1 Solutions Marketplace account at https://p1-marketplace.com/.
This material is based upon work supported by ONR under Contract No. N00014-21-C-1032. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of ONR.
About GrammaTech: GrammaTech is a provider of advanced cybersecurity services and leading developer of software-assurance solutions. Our origin story began in the computer science department at Cornell University and now traverses a thirty-five-year company history of delivering cutting-edge cyber capability in support of government, intelligence and mission-critical infrastructure. GrammaTech technology is used by software developers and system defenders alike, everywhere reliability and security are paramount. It covers threat detection and mitigation, malware analysis, machine learning and automation, migration to memory safe languages, attack surface area reduction, and software supply chain integrity.
Contact Information Sarah Riggins Project Manager, GrammaTech sriggins@grammatech.com 301-530-2900
Expand your mind, man. Opsec is really all about time travel—taking small, protective steps now before you have a disaster on your hands later. If you’re not on auto-delete, then an explosive, emotional text exchange with the person you’re currently dating—or, ahem, photos you sent to each other—will hang around forever. It’s normal for things to change and for relationships of all types to come and go. You may trust someone and be close to them now but grow apart in a year or two.
If you imagine an even more extreme scenario where you’re being investigated by the police, they could obtain warrants to search your digital accounts or devices. People have to go to great lengths to maintain their opsec if they’re trying to hide activity from law enforcement. To be clear, this guide is definitely not encouraging you to do crimes. Don’t do crimes! The goal is just to understand the value of keeping basic opsec principles in mind, because if some of your digital information is revealed haphazardly or out of context, it could, theoretically, appear incriminating.
You probably intuitively understand a lot of this. (Don’t give your password to friends, duh.) So this guide is going to largely skip the obvious and emphasize more subtle, unintended consequences of failing to practice good opsec.
Memorable Opsec Fails
“Signalgate,” 2025: US officials discussed war plans in a group chat on the mainstream, secure messaging app Signal. Then they accidentally added a journalist to the chat. Subsequently, US defense secretary Pete Hegseth famously (embarrassingly) messaged the chat, “we are currently clean on OPSEC.” At least some members of the chat were also potentially using a modified, insecure version of Signal. All extremely not clean on opsec.
Gmail Drafts Exposed, 2012: Then-CIA director David Petraeus and his paramour shared a Gmail account to hide their communications by leaving them for each other to see as draft messages. Kind of ingenious given that this was before most texting or messaging apps offered timed disappearing/ephemeral messages, but the FBI figured out the strategy.
Identities
Opsec is all about compartmentalizing, and that’s the hardest part. Failure to compartmentalize is often how criminals get caught or how information that was meant to stay secret gets exposed. Think of your online life like rooms in a house. Each room has a separate key. If someone breaks into one room, they can grab everything there, but you don’t want them to be able to run wild beyond that room.
You can have multiple identities online and compartmentalize the activities of each, but it takes forethought to maintain the separation. There’s the real you, who uses your main Gmail or Apple ID for personal and family stuff and social accounts where you use your real name, plus school and maybe work. Another compartment is your school email and school file storage. Then there are your more adaptable online personas, which may have semi-anonymous handles, like jnd03 for Jane Doe. Friends know that these accounts are yours, and classmates can probably guess them. Finally, there may be a pseudonymous you: alt accounts with no obvious link to the real you—like Jane Doe using the handles “_aksdi0_0” or “peter_mayfield01.”
Rules of Separation
You have accounts under your real name, but you probably also need pseudonymous accounts. Tight compartmentalization will prevent people from doxing your pseudonymous accounts. But that’s easier said than done.
Obviously, don’t recycle usernames across platforms. If JaneD03 is your Instagram handle, don’t use it or a similar name for your anonymous Reddit account. Don’t even reuse passwords—but especially don’t reuse passwords between real and pseudonymous accounts. To prevent a compromised pseudonymous account from revealing your name, don’t use your main email address; instead, use a unique, pseudonymous one. Gmail “dot tricks” (jane.doe@, j.ane.doe@) don’t count, because they all equally reveal your master account.
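Why the dot trick buys you nothing can be shown concretely. This sketch (with hypothetical addresses) mirrors how Gmail is generally understood to normalize addresses: dots in the local part are ignored and anything after a “+” tag is dropped, so every variant collapses to the same mailbox:

```python
def canonical_gmail(address: str) -> str:
    """Normalize a Gmail address: dots in the local part are ignored,
    and any '+tag' suffix is dropped, so variants map to one mailbox."""
    local, _, domain = address.lower().partition("@")
    if domain in ("gmail.com", "googlemail.com"):
        local = local.split("+", 1)[0].replace(".", "")
    return f"{local}@{domain}"

# All of these hypothetical variants resolve to the same account:
for addr in ("jane.doe@gmail.com", "j.ane.doe@gmail.com", "janedoe+alt@gmail.com"):
    print(canonical_gmail(addr))   # janedoe@gmail.com each time
```

That is exactly why a dot variant offers zero separation: anyone who sees j.ane.doe@ has also learned jane.doe@.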
AI is undeniably useful for certain simple tasks, and more and more people are using it when searching for information, but not every company allows or encourages AI tool use in the office. That’s not stopping workers from using AI anyway, according to a new report. In fact, a staggering number of people may be guilty of using “shadow AI,” including executives and cybersecurity experts.
The report comes from California-based cybersecurity outfit UpGuard, which surveyed 1,500 workers in the U.S., U.K. and other nations. Its most eye-popping result is that over eight in ten workers are guilty of using unapproved AI tools at work. Half of the respondents admitted they did this regularly. More embarrassingly, 90 percent of cybersecurity professionals surveyed by UpGuard do this too, despite the fact that they really should know better.
The report notes “regardless of company size, geography, industry, employee function or seniority, a sizable majority of workers use AI tools at work that they know are not approved.” The data show that regular use of “shadow AI” may be more common in smaller firms rather than larger corporations. Workers in financial firms, the information industry and manufacturing were also more likely to regularly use unapproved AI tools than people in healthcare, education and retail.
Why are workers doing this? It’s probably because their company either lacks any kind of AI use guidelines, has approved only a limited range of tools that workers may not find useful, or has banned AI use, tempting users who can see AI’s value from trying to lower their workplace burden by using the tools anyway.
An Inc.com Featured Presentation
This may be driven by surprisingly high levels of trust in AI. The UpGuard report notes that about a quarter of workers surveyed said the AI tools they used were their “most trusted source of information,” putting AI almost level with their managers and higher than their colleagues. UpGuard links this trust with greater AI use, noting that “employees who view AI tools as their most trusted source of information are far more likely to use shadow AI tools as part of their regular workflow,” news site HRDive reported.
Shadow AI use also isn’t confined to frontline workers: midlevel managers were as likely to use unapproved AI as low-level workers, and UpGuard found that executives reported the highest use of unapproved AI tools, underlining once again the wide division between executives and their workforce.
Using unapproved AI tools is risky because it typically means sending data to an external third-party service, where user inputs may even be used to train later AI models. So if someone uploads sensitive company data, it may leak to other users at a later date, or security lapses by the third-party supplier may expose it in other ways.
UpGuard’s survey looked into this and found that despite widespread awareness of these risks, shadow AI users felt they could manage the situation safely. Meanwhile, fewer than half of the respondents said they understood their company’s AI use guidelines, and fully 70 percent said they knew that workers had shared sensitive data with AI models. This points to a training issue in companies rolling out AI — a problem previously reported on — where having the risks explained to workers isn’t enough to deter them from exposing the company to risk anyway.
The big takeaway from this data for your company is clear: If you don’t have an AI use policy, it’s definitely time to get one. If you already have one, it’s time to retrain your workers on why it’s important to use only the approved AI tools, and to be very careful about the data they share with them. Just chatting with your workers about why they’re using unsanctioned AI systems may also be useful, since it will show you whether you’ve made the wrong choice in “official” AI tools compared with the actual frontline tasks your employees are using shadow AI to tackle.
Cybersecurity experts are warning about scammers using QR codes to take advantage of unsuspecting victims.
What is a QR code?
Short for “quick response” code, the small barcodes are ubiquitous, getting scanned using a phone’s camera to link to restaurant menus, online payment systems or any other digital task.
“We’re all familiar with these things,” said Jean-Paul Bergeaux, the federal chief technology officer for GuidePoint Security. “The concept is, ‘How do we give people a way to get to a link, where they don’t have to enter it, to simplify our life,’ which it does very well.”
How does ‘Quishing’ work?
Unfortunately, like most things, scammers are finding ways to use QR codes as part of a scam that’s known as “Quishing.”
“You’re just scanning it and hitting it and saying go, and so you can go anywhere, and the bad guys can send you anywhere,” Bergeaux said. “That’s the hook for them, right? It’s not only just ubiquitous and everywhere, but there’s a bit of (anonymity) to it.”
Bergeaux said scammers most commonly use QR codes to send you to a dummy website to get your information — and money.
“They’re going to steal things from your phone. They’re going to steal your information. They’re going to steal your accounts just by scanning it,” Bergeaux said.
It happened in Baltimore
So, say you’re trying to park your car. A fake QR code posted on a parking meter could take you to a website that looks like you’re paying for parking, when in reality, a scammer just stole your credit card information.
It’s a problem that was reported in Baltimore earlier this year, when the Parking Authority of Baltimore City warned drivers not to scan QR codes on its meters.
How do I avoid QR code scams?
There are ways to protect yourself. For one, check whether the QR code has been tampered with, or whether it’s just covered up by a sticker or piece of paper. You could also avoid using them altogether.
“You can simply just go to the app store, download the ParkMobile app — or whatever the app is —yourself, and don’t take the risk of being sent somewhere you potentially don’t want to go,” Bergeaux said.
It’s also good practice to not reuse passwords and to turn on multi-factor authentication to keep information safe in your most important accounts.
What do I do if I fell victim to a QR code scam?
If hackers do access your phone, don’t wait to act.
“The first thing is, what did they get? If they got an account, reset the account, retake control over that account, whatever that account is,” Bergeaux said.
Bergeaux warned that even if a QR code is made with good intentions, the websites used to create the QR codes can sometimes do so maliciously. So, if something seems off or doesn’t seem right, check with the company or organization before scanning a QR code or clicking any links.
As entrepreneurs, we obsess over funnels, pixels, and creative. But here’s the quiet leak many businesses miss: your links. In a world of AI-generated phishing and domain spoofing, a sketchy-looking URL isn’t just an aesthetic issue; it’s a trust issue. Consumers reported $12.5 billion in fraud losses last year, and attackers are weaponizing AI to make scams nearly impossible to spot.
Trust is fragile. DigiCert’s State of Digital Trust research found that 47 percent of consumers stopped doing business with a company after losing trust in its digital security, and 57 percent say they’d likely switch if they felt that trust erode. You don’t need a breach to trigger that. Looking “phishy” is often enough.
The pattern you normalize is the behavior you get
Generic shorteners and off-brand domains may be cheap and convenient, but they also look exactly like the links your customers are trained to avoid. CISA flags untrusted shortened URLs as a phishing red flag, and universities and security vendors warn that shortened links are a common obfuscation technique. When brands normalize those patterns, they’re teaching audiences to click them, and attackers will thank you.
Meanwhile, these bad actors are scaling. KnowBe4’s 2025 benchmarking data found that 82 percent of phishing emails currently use AI. Microsoft recently exposed an AI-aided phishing campaign that generated fake sign-in pages with convincing realism. This isn’t the future; it’s today’s inbox.
Branded domains aren’t cosmetic, they’re conversion and security
When the domain matches your brand, hesitation drops. Across channels (QR, SMS, email, social) an on-brand URL helps customers decide in milliseconds: Is this really you? According to a Journal of Advertising Research study, users show higher click propensity when the search includes a brand name. More importantly, branded links harden your ecosystem: Clean, governed link patterns are harder to spoof and easier to audit.
You don’t need to build new infrastructure to do it. Most reputable SaaS platforms now let you connect your own branded domain and SSL certificate in minutes. You keep the ease of third-party tools, while retaining ownership of your customer-facing identity.
The QR and SMS moment: Trust at first glance
QR codes have become the bridge between offline and online (packaging, TV, live events), but they also create a new moment of truth. When someone scans a QR code, their phone usually shows a preview of the destination address before opening it. If that preview shows a third-party domain, you’ve already lost a little trust, and potentially leaked a little data. Using your own branded domain for every QR code ensures that the link preview itself reassures customers and keeps click analytics within your control.
The same principle applies to SMS messages, one of today’s most common phishing channels. Texts that include unfamiliar or third-party domains look indistinguishable from scam attempts. Whether it’s a delivery notification, a password reset, or a limited-time offer, the link should always live under your brand’s domain. That single design choice protects the most vulnerable users (our parents, kids, and grandparents), while reinforcing that every authentic message from your company looks the same.
Own your click front door. Every campaign, QR code, or text link should live under your domain, even if powered by outside tools.
Enforce HTTPS. Security signals and trust signals are now the same thing.
Audit quarterly. Look at the domains your customers actually see. If they aren’t yours, fix them.
The simplest marketing policy you’ll ever write might also be the most powerful: “If a link doesn’t come from our domain, don’t click it, and don’t forward it.”
That one rule aligns teams, protects customers, and quietly trains your entire ecosystem in URL literacy. It’s the digital equivalent of locking your front door.
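That policy is simple enough to enforce in code. As a minimal sketch (assuming a hypothetical brand domain of example.com), a link checker only has to verify two things: the scheme is HTTPS and the host sits under a domain you own:

```python
from urllib.parse import urlparse

BRAND_DOMAINS = {"example.com"}   # hypothetical: replace with your real domains

def is_trusted_link(url: str) -> bool:
    """True only for HTTPS links whose host is a brand domain or subdomain."""
    parts = urlparse(url)
    host = (parts.hostname or "").lower()
    on_brand = any(host == d or host.endswith("." + d) for d in BRAND_DOMAINS)
    return parts.scheme == "https" and on_brand

print(is_trusted_link("https://go.example.com/offer"))  # True: branded subdomain
print(is_trusted_link("http://example.com/offer"))      # False: not HTTPS
print(is_trusted_link("https://bit.ly/abc123"))         # False: third-party shortener
```

Run against every outbound campaign link before it ships, a check like this catches third-party shorteners and lookalike domains in one pass.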
Trust drives engagement
From startups to global enterprises, trust drives engagement. During “attack seasons” such as the holidays, elections, or major events, phishing spikes and customer attention wanes. Strong link practices cut through the noise to reduce bounce and abandonment, while boosting clicks.
Once trust is in place, the fun begins. The next step is making links smarter, so they understand what someone wants and where they are in their journey. Call it AI for links. But intelligence only works when the foundation is trusted. In an AI-driven threat landscape, trust is your conversion rate. Own trust at the link level.
Cryptocurrency investor and entrepreneur Michael Terpin discovered his phone number had been moved to a new SIM in 2018, when attackers used that access to reset passwords and steal millions in cryptocurrency. The incident, an archetypal SIM swap, and the litigation that followed made clear how a single exposed phone number can turn public visibility into catastrophic loss. That is the exact risk wealthy individuals and high-profile companies now pay boutique privacy teams to manage.
Why visibility becomes a liability
Public profiles, data-broker records, legacy social handles, and archived accounts create an attack surface attackers can monetize in hours. A scraped home address, an old email, or a leaked phone number lets imposters stitch together enough detail to impersonate a trusted contact or convince support staff to hand over access. Law enforcement reporting shows that account takeover remains widespread; the FBI IC3 report and contemporary journalism documenting a recent SIM-swap surge reinforce that point.
Technical controls like passwords and standard two-factor authentication help, but attackers often exploit human processes at service providers, flawed verification scripts, split responsibilities, and rushed help-desk interactions. That is why defensive plans must change the processes attackers target as much as the tools they use.
What boutique privacy teams actually do
These specialized teams combine monitoring, removal, telecom hardening, and reputation repair. They sweep open web sources and underground forums for mentions of a client, file legal and platform takedown requests when sensitive data appears, and publish authoritative content so search engines surface verified information rather than rumors. For modern threat behavior and why rapid escalation matters, the CISA and FBI advisory on social engineering explains how attackers use human tricks to bypass technical controls.
A valuable provider runs three tracks at once: technical mitigation, legal escalation, and public messaging. When those tracks operate in parallel, a takedown, a forensic report, and a careful public statement reinforce one another so an exposure remains an incident rather than a reputation-destroying saga.
That is why clients buy standing retainers. Paying in advance guarantees immediate, coordinated action across platforms, courts, and media—calling platforms, filing emergency court papers, and placing verified content as one synchronized response. In practice, a pre-authorized retainer is materially cheaper and faster than an emergency scramble, which typically multiplies costs, drags out recovery, and damages trust.
Why phone companies matter
Mobile phone companies, such as AT&T, Verizon, and T-Mobile, manage the number itself and the porting procedures attackers exploit, which can defeat text message multi-factor authentication if a number is transferred without strong controls. To reduce that risk, carriers now offer features like account locks and port-out safeguards; AT&T’s Account Lock is one example of a mitigation providers request when protecting a client.
Buying silence is not about hiding financial accomplishments. It is about reducing the attack surface so achievements cannot be weaponized. Insist on audit trails and documented response times from any provider you hire, require carrier protections for critical lines, and plan for visibility as an operational expense rather than a one-off fix. Quiet bought in advance is resilience; quiet bought in panic is costly.
Digital invisibility has become purchasable because it delivers three things wealthy buyers highly value: scarcity, measurability, and predictable cost. Firms bundle monitoring, telecom hardening, legal escalation, and reputation engineering into subscription retainers that scale with a client’s public footprint. That subscription model creates scarcity by design—these services are labor- and relationship-intensive, not commodity software—and the price signals that scarcity. For many family offices, a monthly retainer is now a normal line item, the same way concierge medicine or a private jet is: expensive, but predictable and insurable.
When visibility generates revenue for you or your company, silence is risk management. Buying it in advance converts vulnerability into a manageable operational asset.
The Douglas County Sheriff’s Office has stopped using its CodeRED system to alert residents of orders to evacuate or shelter in place or of other emergencies after learning of a cyberattack on the network and a data breach.
Sheriff’s Deputy Daniel Carlin said Monday that the county stopped using CodeRED Nov. 21 when it learned of the data breach. Two weeks before that, the sheriff’s office started getting notifications that the system was down, but couldn’t get confirmation.
Carlin said CodeRED, accessed through an app, lost a lot of customers’ information. “We don’t trust continuing to use them.”
Although the data haven’t been published online, the sheriff’s office is encouraging all CodeRED users to contact credit bureaus to ensure their personal information has not been compromised. The sheriff’s office was among hundreds of agencies affected by the nationwide cybersecurity attack.
Douglas County is talking to representatives of similar alert systems and hopes to have a new network locked in within the next week or two, Carlin said. Until then, the sheriff’s department will go door-to-door in cases of a need to evacuate or shelter in place and use social media and other means to alert people, he added.
Douglas County is one of several counties that use CodeRED to alert residents of evacuation orders and other emergencies. Weld County also is looking for a new alert provider since CodeRED went down. The Park County Sheriff’s Office decommissioned the platform as well.
It’s unclear how many other Colorado counties use CodeRED. A message left with the company seeking more information went unreturned as of 5 p.m.
Some counties also use the state-run Integrated Public Alert and Warning System, or IPAWS, to notify people of wildfires and other emergencies.
“CodeRED was a great system for us to alert the public very fast,” Carlin said. “Easy access is of concern, but we 100% believe we can mitigate it via door-to-door knocks and social media posts.”
He said that residents will likely have to sign up for the system because their information won’t automatically be transferred.
Reuters first reported this weekend that DOGE had broken up, ending the months-long effort by Musk and his associates — many recruited from his various private-sector companies — to reduce alleged fraud and waste and cut employees across the federal government. DOGE was created by an executive order signed by President Trump in January. The initiative was expected to run for close to two years.
As of early November, DOGE “doesn’t exist,” according to Scott Kupor, the director of the U.S. Office of Personnel Management, which serves as the federal government’s human resources department.
In a tweet on Sunday, Kupor said that DOGE “may not have centralized leadership” anymore under the U.S. Digital Service, but “the principles of DOGE remain alive and well: de-regulation; eliminating fraud, waste and abuse; re-shaping the federal workforce; making efficiency a first-class citizen.”
While active, DOGE claimed to have saved the federal government billions of dollars in wasted taxpayer dollars. But critics, including lawmakers, say DOGE dismantled federal programs and government departments with little to show in terms of quantifiable savings.
According to Politico, several DOGE staffers are said to be fearful that they could face future federal charges without protections from Musk, who might have been able to secure presidential pardons for them if necessary.
Several DOGE staffers are now working for other U.S. federal government departments, according to Reuters, while other prominent DOGE staffers have said they no longer work for the government. Edward Coristine, whose nickname “Big Balls” went viral, said in a post on X in June that he is “officially out” of DOGE.
As generative AI pushes the speed of software development, it is also enhancing the ability of digital attackers to carry out financially motivated or state-backed hacks. This means that security teams at tech companies have more code than ever to review while dealing with even more pressure from bad actors. On Monday, Amazon will publish details for the first time of an internal system known as Autonomous Threat Analysis (ATA), which the company has been using to help its security teams proactively identify weaknesses in its platforms, perform variant analysis to quickly search for other, similar flaws, and then develop remediations and detection capabilities to plug holes before attackers find them.
ATA was born out of an internal Amazon hackathon in August 2024, and security team members say that it has grown into a crucial tool since then. The key concept underlying ATA is that it isn’t a single AI agent developed to comprehensively conduct security testing and threat analysis. Instead, Amazon developed multiple specialized AI agents that compete against each other in two teams to rapidly investigate real attack techniques and different ways they could be used against Amazon’s systems—and then propose security controls for human review.
“The initial concept was aimed to address a critical limitation in security testing—limited coverage and the challenge of keeping detection capabilities current in a rapidly evolving threat landscape,” Steve Schmidt, Amazon’s chief security officer, tells WIRED. “Limited coverage means you can’t get through all of the software or you can’t get to all of the applications because you just don’t have enough humans. And then it’s great to do an analysis of a set of software, but if you don’t keep the detection systems themselves up to date with the changes in the threat landscape, you’re missing half of the picture.”
As part of scaling its use of ATA, Amazon developed special “high-fidelity” testing environments that are deeply realistic reflections of Amazon’s production systems, so ATA can both ingest and produce real telemetry for analysis.
The company’s security teams also made a point to design ATA so every technique it employs, and detection capability it produces, is validated with real, automatic testing and system data. Red team agents that are working on finding attacks that could be used against Amazon’s systems execute actual commands in ATA’s special test environments that produce verifiable logs. Blue team, or defense-focused agents, use real telemetry to confirm whether the protections they are proposing are effective. And anytime an agent develops a novel technique, it also pulls time-stamped logs to prove that its claims are accurate.
This verifiability reduces false positives, Schmidt says, and acts as “hallucination management.” Because the system is built to demand certain standards of observable evidence, Schmidt claims that “hallucinations are architecturally impossible.”
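Amazon has not published ATA’s internals, but the evidence-gating idea Schmidt describes can be sketched in a few lines. In this purely illustrative example, every name (`AgentClaim`, `validate_claim`, the log format) is hypothetical: a claim from an agent is accepted only if every log entry it cites actually exists in the telemetry store and corroborates the claimed technique.

```python
from dataclasses import dataclass, field

@dataclass
class AgentClaim:
    """A technique or detection an agent says it verified (hypothetical schema)."""
    agent: str
    technique: str
    evidence_log_ids: list = field(default_factory=list)

def validate_claim(claim, telemetry_store):
    """Accept a claim only if every cited log entry exists in real telemetry
    and mentions the claimed technique; otherwise reject it as unverified."""
    if not claim.evidence_log_ids:
        return False  # no observable evidence -> treated as a hallucination
    for log_id in claim.evidence_log_ids:
        entry = telemetry_store.get(log_id)
        if entry is None or claim.technique not in entry:
            return False
    return True

# Toy telemetry from a test environment (fabricated for illustration).
telemetry = {
    "log-001": "2024-08-12T10:00:01Z exec via role-chaining in test env",
    "log-002": "2024-08-12T10:00:02Z alert fired: role-chaining detected",
}

good = AgentClaim("red-1", "role-chaining", ["log-001"])
bogus = AgentClaim("red-2", "kernel-exploit", ["log-999"])  # cites a log that doesn't exist
print(validate_claim(good, telemetry))   # True
print(validate_claim(bogus, telemetry))  # False
```

The point of the pattern, as Schmidt frames it, is that an agent’s assertion carries no weight on its own; only claims that survive a check against real, time-stamped system data make it through.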
Eight years after a researcher warned WhatsApp that it was possible to extract user phone numbers en masse from the Meta-owned app, another team of researchers found that they could still do exactly that using a similar technique. The issue stems from WhatsApp’s discovery feature, which allows someone to enter a person’s phone number to see if they’re on the app. By doing this billions of times—which WhatsApp did not prevent—researchers from the University of Vienna uncovered what they’re calling “the most extensive exposure of phone numbers” ever.
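The attack worked because nothing capped how fast lookups could be made. A standard server-side defense against this kind of enumeration is per-client rate limiting; the token-bucket sketch below is a generic illustration of that control, with made-up limits rather than anything WhatsApp actually deploys.

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter; the parameters are illustrative."""
    def __init__(self, rate_per_sec, burst):
        self.rate = rate_per_sec          # tokens refilled per second
        self.capacity = burst             # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self):
        # Refill tokens based on elapsed time, then spend one if available.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Hypothetical policy: a client may make 5 discovery lookups/sec, burst of 10.
bucket = TokenBucket(rate_per_sec=5, burst=10)
allowed = sum(bucket.allow() for _ in range(1000))
print(allowed)  # roughly the burst size, since all 1000 requests arrive at once
```

With a limiter like this in front of contact discovery, querying billions of numbers would take years per client instead of hours, which is exactly why researchers flagged its absence.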
Vaping is a major problem in US high schools. But is the solution to spy on students in the bathroom? An investigation by The 74, copublished with WIRED, found that schools around the country are turning to vape detectors in an effort to crack down on nicotine and cannabis consumption on school grounds. Some of the vape detectors go far beyond detecting vapor by including microphones that are surprisingly accurate and revealing. While few defend addiction and drug use, even non-vapers say the added surveillance and the punishments that result go too far.
If you’ve ever attended a conference, you’ve probably worried about getting sick in the germ-ridden cesspool that is a conference center. But one hacker conference in New Zealand, Kawaiicon, invented a novel way to keep attendees a little bit safer. By tracking the CO2 levels in each conference room, Kawaiicon’s organizers were able to create a real-time air-quality monitoring system, which would tell people which rooms were safe and which seemed … gross. The project brings new meaning to antivirus monitoring.
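Kawaiicon hasn’t published its exact cutoffs, but the core of such a system is just mapping CO2 readings to a room status. The thresholds below are assumptions drawn from commonly cited ventilation guidance, not the conference’s actual numbers.

```python
def room_status(co2_ppm):
    """Classify a room by CO2 reading. The thresholds are assumed values
    based on common ventilation guidance, not Kawaiicon's real cutoffs."""
    if co2_ppm < 800:
        return "good"     # close to well-ventilated outdoor-ish air
    if co2_ppm < 1200:
        return "caution"  # air is getting stale; ventilation lagging
    return "gross"        # crowded, poorly ventilated room

# Hypothetical sensor readings per room, in parts per million.
readings = {"Main hall": 650, "Track 2": 980, "Workshop room": 1650}
for room, ppm in readings.items():
    print(f"{room}: {ppm} ppm -> {room_status(ppm)}")
```

Since exhaled breath is the dominant indoor CO2 source, a high reading is a reasonable proxy for how much other people’s air you’re rebreathing, which is what makes this a useful infection-risk signal.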
And that’s not all. Each week, we round up the security and privacy news we didn’t cover in depth ourselves. Click the headlines to read the full stories. And stay safe out there.
The US Border Patrol is operating a predictive-intelligence program that monitors millions of American drivers far beyond the border, according to a detailed investigation by the Associated Press. A network of covert license-plate readers—often hidden inside traffic cones, barrels, and roadside equipment—feeds data into an algorithm that flags “suspicious” routes, quick turnarounds, and travel to and from border regions. Local police are then alerted, resulting in traffic stops for minor infractions like window-tint violations, air fresheners, or marginal speeding. AP reviewed police records showing that drivers were questioned, searched, and sometimes arrested despite no contraband being found.
Internal group chats obtained through public-records requests show Border Patrol agents and Texas deputies sharing hotel records, rental car status, home addresses, and social media details of US citizens in real time while coordinating what officers call “whisper stops” to obscure federal involvement. The AP identified plate-reader sites more than 120 miles from the Mexican border in the Phoenix area, as well as locations in metropolitan Detroit and near the Michigan-Indiana line that capture traffic headed toward Chicago and Gary. Border Patrol also taps DEA plate-reader networks and has, at various times, accessed systems run by Rekor, Vigilant Solutions, and Flock Safety.
CBP says the program is governed by “stringent” policies and constitutional safeguards, but legal experts told AP that its scale raises new Fourth Amendment concerns. A UC Law San Francisco official said the system amounts to a “dragnet” tracking Americans’ movements, associations, and daily routines.
Microsoft claims to have mitigated the largest distributed denial-of-service (DDoS) attack ever recorded in a cloud environment—a 15.72 Tbps, 3.64-billion-pps barrage launched on October 24 against a single Azure endpoint in Australia. Microsoft says the attack “originated from the Aisuru botnet,” a Turbo-Mirai–class IoT network of compromised home routers, cameras, and other consumer devices. More than 500,000 IP addresses are said to have participated, generating a massive DDoS attack with little spoofing. Microsoft says its global Azure DDoS Protection network absorbed the traffic without service disruption. Microsoft described the attack as “the largest DDoS ever observed in the cloud,” emphasizing the single endpoint; however, Cloudflare also recently reported a 22.2 Tbps flood, naming it the largest DDoS attack ever seen.
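A quick sanity check on the reported figures: dividing throughput by packet rate gives the implied average packet size, roughly 540 bytes, consistent with a volumetric flood rather than a tiny-packet attack.

```python
# Average packet size implied by Microsoft's reported figures.
bits_per_sec = 15.72e12       # 15.72 Tbps
packets_per_sec = 3.64e9      # 3.64 billion packets per second
avg_bytes = bits_per_sec / packets_per_sec / 8
print(round(avg_bytes))       # ~540 bytes per packet
```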
Researchers note that Aisuru has recently launched multiple attacks exceeding 20 Tbps and is expanding its capabilities to include credential stuffing, AI-driven scraping, and HTTPS floods via residential proxies.
The US Securities and Exchange Commission has dropped its remaining claims against SolarWinds and its CISO, Tim Brown, ending a long-running case over the company’s 2020 supply-chain hack, in which Russian SVR operatives allegedly compromised SolarWinds’ Orion software and triggered widespread breaches across government and industry. The agency’s lawsuit—filed in 2023 and centered on alleged fraud and internal-control failures—had already been mostly dismantled by a federal judge in 2024. SolarWinds called the full dismissal a vindication of its argument that its disclosures and conduct were appropriate and said it hopes the outcome eases concerns among CISOs about the case’s potential chilling effect.
Law enforcement records show that the FBI accessed messages from a private Signal group used by New York immigration court-watch activists—a network that coordinates volunteers monitoring public hearings at three federal immigration courts. According to a two-page FBI/NYPD “joint situational information report” dated August 28, 2025, agents quoted chat messages, labeled the nonviolent court watchers as “anarchist violent extremist actors,” and circulated the assessment nationwide. The report did not explain how the FBI penetrated an encrypted Signal group, but it claimed the information came from a “sensitive source with excellent access.”
The documents, first reported by the Guardian, were originally obtained by the government-transparency group Property of the People. They describe activists discussing how to enter courtrooms, film officers, and gather identifying details of federal personnel, but provide no evidence to support the FBI’s allegation that a member previously advocated violence. A separate set of records—also obtained by the group—shows the bureau framed ordinary observation of public immigration hearings as a potential threat, even as Immigration and Customs Enforcement has escalated courthouse arrests and set what advocates call “deportation traps.” Civil liberties experts told the paper that the surveillance mirrors earlier FBI campaigns targeting lawful dissent and risks chilling protected political activity.
The Federal Communications Commission voted 2-1 along party lines on Thursday to scrap rules that required U.S. phone and internet giants to meet certain minimum cybersecurity requirements.
The FCC’s two Trump-appointed commissioners, chairman Brendan Carr and his Republican colleague Olivia Trusty, voted to withdraw the rules that require telecommunications carriers to “secure their networks from unlawful access or interception of communications.” The Biden administration had adopted these rules prior to leaving office earlier this year.
The FCC’s sole Democratic commissioner, Anna Gomez, dissented. In a statement following the vote, Gomez called the now-overturned rules the “only meaningful effort this agency has advanced” since the discovery of a sweeping campaign by a China-backed hacking group called Salt Typhoon that involved hacking into a raft of U.S. phone and internet companies.
The hackers broke into more than 200 telcos, including AT&T, Verizon and Lumen, during the years-long campaign to conduct broad-scale surveillance of American officials. In some cases, the hackers targeted wiretap systems that the U.S. government previously required telcos to install for law enforcement access.
The FCC’s move to change the rules sparked rebuke from senior lawmakers, including Sen. Gary Peters (D-MI), the ranking member of the Senate Homeland Security Committee. Peters said he was “disturbed” by the FCC’s effort to roll back “basic cybersecurity safeguards” and warned that doing so will “leave the American people exposed.”
Sen. Mark Warner (D-VA), the ranking member of the Senate Intelligence Committee, said in a statement that the rule change “leaves us without a credible plan” to address the basic security gaps exploited by Salt Typhoon and others.
For its part, the NCTA, which represents the telecommunications industry, praised the scrapping of the rules, calling them “prescriptive and counterproductive regulations.”
But Gomez warned that while collaboration with the telecommunications industry is valuable for cybersecurity, it is insufficient without enforcement.
“Handshake agreements without teeth will not stop state-sponsored hackers in their quest to infiltrate our networks,” said Gomez. “They won’t prevent the next breach. They do not ensure that the weakest link in the chain is strengthened. If voluntary cooperation were enough, we would not be sitting here today in the wake of Salt Typhoon.”
(Reuters) - The United States, Australia and the United Kingdom announced coordinated sanctions on Wednesday against Russia-based bulletproof hosting service provider Media Land for its role in supporting ransomware operations.
U.S. Treasury’s Office of Foreign Assets Control (OFAC) also designated three members of the Russian company’s leadership team and three of its sister companies, the Department of Treasury said in a statement.
“These so-called bulletproof hosting service providers like Media Land provide cybercriminals essential services to aid them in attacking businesses in the United States and in allied countries,” said John Hurley, Under Secretary of the Treasury for Terrorism and Financial Intelligence.
DoorDash disclosed a data breach that exposed the personal information of an unspecified number of users, which included names, email addresses, phone numbers, and physical addresses.
Despite the fact that hackers stole phone numbers and physical addresses, DoorDash said that “no sensitive information was accessed by the unauthorized third party and we have no indication the data has been misused for fraud or identity theft at this time.”
DoorDash said in the post that the breach impacted a mix of customers, delivery workers, and merchants. The company did not respond to a request for comment, which included a question on exactly how many users were victims of the breach.
The breach originated from an employee falling for a social engineering attack. When the company identified the breach, it shut down the hackers’ access to its systems, started an investigation, and reported the incident to law enforcement, according to a post published last week by the company.
DoorDash said no “Social Security numbers, other government-issued identification numbers, driver’s license information, or bank or payment card information” were stolen as part of the breach.
The United States issued a seizure warrant to Starlink this week related to satellite internet infrastructure used in a scam compound in Myanmar. The action is part of a larger US law enforcement interagency initiative announced this week called the District of Columbia Scam Center Strike Force.
Meanwhile, Google moved this week to sue 25 people that it alleges are behind a “staggering” and “relentless” scam text operation that uses a notorious phishing-as-a-service platform called Lighthouse.
And there’s more. Each week, we round up the security and privacy news we didn’t cover in depth ourselves. Click the headlines to read the full stories. And stay safe out there.
China’s massive intelligence apparatus has never quite had its Edward Snowden moment. So any peek inside its surveillance and hacking capabilities represents a rare find. One such glimpse has now arrived in the form of about 12,000 documents leaked from the Chinese hacking contractor firm KnownSec, first revealed on the Chinese-language blog Mrxn.net and then picked up by Western news outlets this week. The leak includes hacking tools such as remote-access Trojans, as well as data extraction and analysis programs. More interesting, perhaps, is a target list of more than 80 organizations from which the hackers claim to have stolen information. The listed stolen data, according to Mrxn, includes 95 GB of Indian immigration data, 3 TB of call records from South Korean telecom operator LG U Plus, and 459 GB of road-planning data obtained from Taiwan, for instance. If there were any doubts as to whom KnownSec was carrying out this hacking for, the leak also reportedly includes details of its contracts with the Chinese government.
The cybersecurity community has been warning for years that state-sponsored hackers would soon start using AI tools to supercharge their intrusion campaigns. Now the first known AI-run hacking campaign has surfaced, according to Anthropic, which says it discovered a group of China-backed hackers using its Claude tool set extensively in every step of the hacking spree. According to Anthropic, the hackers used Claude to write malware and extract and analyze stolen data with “minimal human interaction.” Although the hackers bypassed Claude’s guardrails by couching the malicious use of its tools in terms of defensive and whitehat hacking, Anthropic says it nonetheless detected and stopped them. By that time, however, the spy campaign had successfully breached four organizations.
Even so, fully AI-based hacking still isn’t necessarily ready for prime time, points out Ars Technica. The hackers had a relatively low intrusion rate, given that they targeted 30 organizations, according to Anthropic. The AI startup also notes that the tools hallucinated some stolen data that didn’t exist. For now, state-sponsored spies still have some job security.
The North Koreans raising money for the regime of Kim Jong Un by getting jobs as remote IT workers with false identities aren’t working alone. Four Americans pleaded guilty this week to letting North Koreans pay to use their identities, as well as receiving and setting up corporate laptops for the North Korean workers to remotely control. Another man, Ukrainian national Oleksandr Didenko, pleaded guilty to stealing the identities of 40 Americans to sell to North Koreans for use in setting up IT worker profiles.
A report from 404 Media shows that a Customs and Border Protection app that uses face recognition to identify immigrants is being hosted by Google. The app can be used by local law enforcement to determine whether a person is of potential interest to Immigration and Customs Enforcement. Even as it hosts the CBP app, Google has recently taken down some apps in the Google Play Store used for community discussion about ICE activity and ICE agent sightings. Google justified the takedowns as necessary under its terms of service, because the company says that ICE agents are a “vulnerable group.”
From ransomware to quantum disruption, Canada must take urgent steps to defend its institutions and build long-term cyber capacity.
This Q&A is part of Observer’s Expert Insights series, where industry leaders, innovators and strategists distill years of experience into direct, practical takeaways and deliver clarity on the issues shaping their industries. At a moment when cyber threats are escalating alongside geopolitical tensions, Canada finds itself at a crossroads: how to defend its digital infrastructure, protect its economy and maintain global competitiveness while preserving the values of an open, democratic society.
Judith Borts, senior director of the Rogers Cybersecure Catalyst at Toronto Metropolitan University, sits at the intersection of policy, security and economic strategy. With a career spanning provincial economic development, national innovation policy and cross-sector collaboration, Borts has become one of Canada’s most vocal advocates for treating cybersecurity not as a niche technical specialty but as a shared societal responsibility—one that will determine the country’s digital sovereignty in the years ahead.
Her work at the Catalyst focuses on building the talent, partnerships and operational capacity Canada needs to withstand increasingly sophisticated attacks. But it’s her policy background that gives her a panoramic view of what’s at stake. Canada, she argues, can no longer afford a reactive approach to cyber risk. Nation-state adversaries, criminal networks and A.I.-accelerated threats are moving faster than traditional governance models can respond, and the downstream costs to Canadians are already enormous.
Borts outlines where Canada is falling behind global peers, what a truly unified national cyber strategy would require and why talent development may ultimately matter more than any single technological breakthrough. She also offers a candid look at the sectors most vulnerable today, the policies needed to strengthen resilience and how emerging technologies like A.I. and quantum computing will reshape the country’s digital future. Canada’s prosperity increasingly depends on something once viewed as purely defensive: a secure and trusted digital ecosystem.
With global alliances shifting and the U.S. pulling back from international cooperation, how are these geopolitical tensions directly reshaping Canada’s cybersecurity priorities and its role in intelligence-sharing networks?
Even as global alliances shift, intelligence sharing through networks like the Five Eyes, G7 and NATO remains strong. That’s not really where Canada’s biggest challenge is. What we really need to zero in on is building our own sovereign defence and resilience—including in the cyber and digital domains—so we can protect ourselves, respond quickly when threats come up and recover safely and securely.
Cyberattacks today can come from anywhere (foreign governments, organized groups or even individuals), and they pose real risks to Canadian institutions, businesses and citizens. Our national security and defence strategies need to reflect that reality. We need to invest more in homegrown talent and innovation, from cybersecurity research to advances in A.I. and quantum technologies, so that Canada can stay ahead of the curve. It’s not about losing trust in our allies; it’s about maintaining our strong relationships while also making sure we have the strength and resilience to stand on our own when it matters most.
Which Canadian sectors are most exposed to cyber risk, and how prepared are they to defend against the sophisticated attacks we’re seeing today?
Every sector in Canada, as well as around the world, is exposed to cyber risk. Healthcare continues to face some of the most visible and alarming threats. Ransomware attacks have forced hospitals to cancel surgeries and even shut down emergency systems, putting patient safety directly at risk. The energy sector is another major target. And what used to be mainly about stealing data has now shifted to attempts to interfere with the systems that keep our power grid running. As our digital and physical infrastructure becomes more connected, those risks multiply and even a single successful attack can throw essential services across the country into chaos.
Canada’s economy is powered by small and medium-sized businesses, which make up about 99 percent of all companies in the country and account for more than half of the country’s GDP. These companies are increasingly being targeted but often lack the specialized staff, training and resources to respond effectively. Plus, the impacts of a ransomware attack on an SMB’s bottom line can be massive.
We’re seeing progress in some areas, but these are still isolated efforts. Real national cybersecurity and resilience mean a coordinated approach, one that brings strong security standards together with real investment in education, innovation and long-term capacity building. That’s how we keep Canada’s economy secure and competitive in the years ahead.
What specific policy mechanisms are needed to create a unified national cyber strategy that also respects Canada’s diverse regional priorities?
A top-down approach alone won’t keep up with how fast threats evolve or be able to address the practical needs of all regions. Real resilience comes from bringing federal, provincial and local efforts together so we can build safe and secure communities, share information faster, respond in real time and build trust across sectors.
We also need to make it easier for Canadian businesses to operate securely, both at home and abroad. That means creating a more harmonized and less fragmented set of cyber standards and compliance requirements, so companies aren’t forced to navigate a maze of conflicting rules across jurisdictions. Taking a more unified approach that integrates leading global approaches and consistent standards would help Canada stay internationally competitive while keeping our digital ecosystem strong and secure.
In a nutshell, the federal government should set the national vision and provide the framework and tools while empowering local governments, organizations and innovators to adapt that framework to their realities. When everyone works from the same playbook, security can become part of how we do business—not a barrier to it.
As cyber threats evolve, is Canada keeping pace with peers like the U.S. and the E.U. in building defensive capabilities, or are governance gaps holding it back?
It’s an exciting time for cybersecurity in Canada, but the truth is we’re not yet keeping pace with our peers. The United States invests close to $800 billion or 3.5 percent of GDP annually in research and development, while Canada spends less than 2 percent of ours, and only a fraction of that goes toward cyber and defense innovation. That gap matters. The European Union, meanwhile, approaches cybersecurity not just as a security issue but as a pillar of economic resilience, seeing digital protection and competitiveness as two sides of the same coin.
Canada has world-leading talent in cybersecurity, A.I. and quantum. We are also building a strong foundation with proposed legislation like the Critical Cyber Systems Protection Act (Bill C-8) and a growing base of innovation, but we need to move faster—connecting our federal, provincial and municipal strategies, strengthening our talent pipeline and investing in homegrown technology. If we treat cybersecurity as both national defence and economic opportunity, we can close the gap and position Canada as a real leader in the digital future.
What are the most critical lessons from recent high-profile cyberattacks, and how should they guide efforts to build systemic resilience?
If there’s one thing recent cyberattacks have taught us, it’s that we need to wake up. No one is really paying attention to how serious this has become. We’re seeing massive fraud and data theft happening quietly, every day, and too often the response is weak at best. The impacts are not only felt at the victim’s level; the burden of the costs to Canadians is enormous, and we’re all paying for this.
And still, people aren’t changing their passwords, companies still skip basic protections like multi-factor authentication, and we’ve normalized the idea that our data will be stolen eventually. That has to change.
There’s a common mantra in the cyber community when it comes to cyber threats: ‘it’s not if, but when.’ But the lesson isn’t that attacks are inevitable. It’s that we need to take preventative action and prepare for potential threats. Complacency is our biggest weakness.
We can’t treat cybersecurity as background noise while we rush to adopt new technologies like A.I. A.I. can make systems smarter, but it also makes cyber threats faster, more targeted and harder to detect. At the same time, many organizations are adopting A.I. without fully addressing the very real risks that come with it. Every organization embracing A.I. should be asking: Are we doing this in a way that keeps us secure and our clients/customers safe?
True resilience isn’t about specific actions by a cyber team; it’s about how fast and effectively we respond and how seriously we take the responsibility to protect ourselves in the first place.
What role should partnerships between universities, public institutions, government, private industry and Canadian tech companies play in building national cyber resilience?
No single group can solve Canada’s cybersecurity challenges on its own—the threats are too complex, the digital infrastructure is too vast and diverse and the stakes are too high. True resilience depends on everyone working together: universities driving research and developing talent; government providing intelligence, guidance and coordination; industry building secure systems and helping to generate specialized talent; and Canadian tech companies pushing innovation forward.
But collaboration can’t just happen in boardrooms or policy papers: we also have to meet Canadians where they are. Digital resilience and cyber awareness are no longer specialized skills; they are now basic workplace essentials. Everyone, regardless of their role, needs to understand how to protect information, manage digital tools responsibly, and remain vigilant to evolving threats. If we’re going to reach everyone, it means finding more creative and practical ways to weave cyber awareness and digital resilience into everyday life, whether that’s through local community programs, small business training or more accessible education.
When universities, public institutions, government, and industry connect directly with Canadians, cybersecurity stops being an abstract concept and becomes something everyone can take part in.
That whole-of-society approach is no longer optional. It is the foundation of our national resilience.
How does developing a skilled and diverse cybersecurity workforce contribute to Canada’s digital sovereignty and long-term competitiveness?
When we talk about securing Canada’s digital future, the real advantage isn’t just in technology; it’s in people. We need Canadians to protect what matters to Canada and build a robust digital infrastructure that we can rely on to keep our economy and country growing in the face of mounting threats. This requires a trustworthy and capable workforce. At the Catalyst, we have no illusions about the impacts of A.I. on cybersecurity work. The key question is: what does a skilled cybersecurity workforce look like in the age of A.I.?
We are hyper-focused not only on developing skilled cybersecurity professionals, but also on helping those in other organizational roles across different sectors better understand the cybersecurity challenges they face, while maintaining a keen eye on emerging technologies such as A.I. and quantum computing. Through our programs, we’re building job-ready professionals who can address the human, organizational and technical issues of cybersecurity.
But in an era where A.I. can automate certain technical functions, the real challenge—and opportunity—is in ensuring that we have an agile workforce and that we educate and support individuals in exercising judgment, creativity, critical thinking, contextual understanding and ethical reasoning that machines can’t replicate.
It’s like asking how you maintain a community of great writers when A.I. can draft a paragraph for you: the value shifts to insight, empathy, strategy and human perspective.
How can Canada’s cyber strategy link security, innovation and economic growth?
For too long, we’ve talked about cybersecurity as a purely defensive measure. Many still view it as just the cost of doing business. The truth is, in the modern economy, cybersecurity is an investment, and resilience is one of our biggest competitive advantages. It’s the bedrock of national prosperity and our ticket to maintaining our position as a serious player on the global stage.
Think about it: when we create an environment built on digital trust, with infrastructure that is both robust and secure, everything else follows. It’s what gives international partners the confidence to invest here, and it’s what gives our own innovators in critical sectors like finance, healthcare and technology the secure launchpad they need to bring their best ideas to life.
So, the critical question is, how do you intentionally build that kind of environment? It doesn’t happen by accident, and it can’t rest solely on a policy or a plan. It only comes about through action.
By combining smart government policies and strong intellectual property and patent protections with real incentives for our businesses, we stop treating cybersecurity as a problem to be solved and start seeing it for what it is: a massive opportunity to build our next generation of tech leaders and secure Canada’s role as an innovator.
How will emerging technologies such as A.I. and quantum computing reshape Canada’s cybersecurity landscape, and what must be done now to ensure a secure, sovereign, and competitive digital ecosystem by 2030?
A.I. is rewriting the cybersecurity landscape, and quantum computing won’t be far behind. Each one presents both huge opportunities and serious threats. As these technologies start to converge, we will see incredible new possibilities and potential, but also significant power to cause real damage if we’re not prepared.
A.I. is now an arms race. For every advanced risk detection model we create, our adversaries are using A.I. to launch attacks. And quantum computing is on the horizon, threatening most of the common encryption used today.
This new reality demands a strategic change, including what the industry calls the “shift-left approach.” Traditionally, security testing happened at the end of a project, just before the software was released. Shift-left flips that model by pushing security earlier in the development cycle—essentially “shifting” it to the left on the project timeline.
For example, instead of waiting until a new system is fully built to check for vulnerabilities, developers should build security into the design on day one, and then test for risks at each step. This approach comes from modern software engineering, but it’s now essential for cybersecurity: if emerging technologies like A.I. aren’t built with security-by-design, we’re already behind.
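The shift-left idea described above can be sketched as an automated check that runs on every commit rather than as a one-time audit before release. The policy rules and configuration keys below are hypothetical illustrations, not any specific product’s checklist:

```python
# Minimal sketch of a "shift-left" security gate: a config audit that runs
# in CI on every change, so insecure settings are caught at design time
# rather than just before release. The rule set here is illustrative only.

INSECURE_DEFAULTS = {
    "debug": True,             # debug endpoints exposed in production
    "mfa_required": False,     # multi-factor authentication disabled
    "tls_min_version": "1.0",  # deprecated TLS version allowed
}

def audit_config(config: dict) -> list[str]:
    """Return a finding for each setting that matches a known-insecure value."""
    findings = []
    for key, bad_value in INSECURE_DEFAULTS.items():
        if config.get(key) == bad_value:
            findings.append(f"insecure setting: {key}={bad_value!r}")
    return findings

# Example: a deployment config with two problems flagged before merge.
sample = {"debug": True, "mfa_required": True, "tls_min_version": "1.0"}
for finding in audit_config(sample):
    print(finding)
```

In practice a check like this would be one step in a pipeline alongside dependency scanning and code analysis; the point is that it fails the build early, when fixes are cheapest.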
Ultimately, by investing in talent, funding world-class R&D and building an innovative ecosystem, Canada can make sure we’re not just reacting to technological change but leading it.
CPA Australia has cautioned that the accelerating use of AI by businesses may bring new cybersecurity risks.
The warning comes after findings from CPA Australia’s latest Business Technology Report 2025 highlighted that 18% of Australian businesses reported experiencing loss of time or money because of cyber incidents over the past year.
The report, which surveyed 1,117 accounting and finance professionals throughout the Asia-Pacific region, points to a particular vulnerability among smaller businesses.
CPA Australia business investment and international lead Gavan Ord said: “As AI tools become more integrated into financial systems and workflows, they also create new cybersecurity vulnerabilities that businesses must proactively manage to avoid substantial financial and reputational damage.”
The survey found that 14% of Australian companies take an inconsistent or mostly reactive approach to cybersecurity policy.
Results indicate that Australian companies are marginally better prepared than the Asia-Pacific average.
Ord said that businesses should not become complacent, particularly those that are beginning to incorporate AI tools into their operations.
He added: “Australian small businesses generally lag in technology adoption compared to Asian markets, but the good news is that investment in AI is now accelerating. However, it is vital this is matched by investment in cybersecurity.”
According to the report, 71% of Australian businesses plan to further integrate AI into their activities by 2026.
While this trend is expected to benefit productivity, Ord expressed concern that the reliance on digital technology over the next couple of years could leave more companies exposed to online attacks.
He said: “Used correctly, AI will help boost business productivity and inspire growth, but there are also questions about its vulnerability to emerging online threats. The last thing a business needs is a major investment in technology opening the door to criminals.
“While AI can be game-changing for businesses, it is also arming cybercriminals with even more sophisticated tools. It is enhancing their existing tactics and creating new avenues for online scams and attacks.”
Even larger organisations with more established systems in place were more likely to report losses due to cyber incidents, the report said, indicating that no business is immune to these challenges.
In response to these risks, CPA Australia is advising businesses to revisit and update their cybersecurity protocols; ensure basic protections such as firewalls and multi-factor authentication are in place; frequently educate staff about good cyber practices and bring awareness about phishing attempts; and consult with qualified professionals for guidance.
The association also recommended that companies make use of resources such as the Australian Cyber Security Centre’s Essential Eight framework and government-supported training run by CyberWardens.
“CPA Australia flags cybersecurity risks as AI use expands in business” was originally created and published by The Accountant, a GlobalData owned brand.
Amid a government shutdown that has dragged on for more than five weeks, the United States Congressional Budget Office said on Thursday that it recently suffered a hack and moved to contain the breach. CBO provides nonpartisan financial and economic data to lawmakers, and The Washington Post reported that the agency was infiltrated by a “suspected foreign actor.”
CBO spokesperson Caitlin Emma told WIRED in a statement that it has “implemented additional monitoring and new security controls to further protect the agency’s systems” and that “CBO occasionally faces threats to its network and continually monitors to address those threats.” Emma did not address questions from WIRED about whether the government shutdown has impacted technical personnel or cybersecurity-related work at CBO.
With increasing instability in the Supplemental Nutrition Assistance Program (SNAP) leaving Americans hungry, air traffic control personnel shortages disrupting flights, financial devastation for federal workers, and mounting operational shortages at the Social Security Administration, the shutdown is increasingly impacting every corner of the US. But researchers, former and current government workers, and federal technology experts warn that gaps in foundational activities during the shutdown—things like system patching, activity monitoring, and device management—could have real effects on federal defenses, both now and for years to come.
“A lot of federal digital systems are still just running in the cloud throughout the shutdown, even if the office is empty,” says Safi Mojidi, a longtime cybersecurity researcher who previously worked for NASA and as a federal security contractor. “If everything was set up properly, then the cloud offers an important baseline of security, but it’s hard to rest easy during a shutdown knowing that even in the best of times there are problems getting security right.”
Even before the shutdown, federal cybersecurity workers were being impacted by reductions in force at agencies like the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency—potentially hindering digital defense guidance and coordination across the government. And CISA has continued cutting staff during the shutdown as well.
In a statement, spokesperson Marci McCarthy said “CISA continues to execute on its mission” but did not answer WIRED’s specific questions about how its work and digital defenses at other agencies have been impacted by the government shutdown, which she blamed on Democrats.
The government’s transition to the cloud over the last decade, as well as increased attention to cybersecurity in recent years, does provide an important backstop for a disruption like a shutdown. Experts emphasize, though, that the federal landscape is not homogenous, and some agencies have made more progress and are better equipped than others. Additionally, missed and overlooked digital security work that accumulates during the shutdown will create a backlog when workers return that could be difficult to surmount.
The Washington Post has said that it was one of the victims of a hacking campaign tied to Oracle’s suite of corporate software apps.
Reuters first reported the news on Friday, citing a statement from the newspaper that said it was affected “by the breach of the Oracle E-Business Suite platform.”
A spokesperson for the Post did not immediately respond to TechCrunch’s request for comment.
When reached by email, Oracle spokesperson Michael Egbert referred TechCrunch to the two advisories it previously posted, and did not answer our questions.
Last month, Google said that the ransomware gang Clop was targeting companies after exploiting multiple vulnerabilities in Oracle’s E-Business Suite software, which companies use for their business operations, storing their human resources files, and other sensitive data.
The exploits allowed the hackers to steal Oracle customers’ business data and employee records from more than 100 companies, per Google.
The hackers’ campaign began in late September when corporate executives reported receiving extortion messages sent from email addresses previously associated with the Clop gang, claiming that the hackers had stolen large amounts of sensitive internal business data and employees’ personal information from hacked Oracle systems.
Anti-ransomware firm Halcyon told TechCrunch at the time that the hackers demanded a $50 million ransom from one executive at an affected company.
On Thursday, Clop claimed on its website that it had hacked The Washington Post, saying the company “ignored their security,” language the gang typically uses when a victim does not pay.
It’s not uncommon for ransomware or extortion gangs like Clop to publicize the names and stolen files of their victims as a pressure tactic, which can suggest that the victim has not negotiated a payment with the gang or that negotiations broke down.