

“May you live in interesting times”

Having the blessing and the curse of working in the field of cybersecurity, I often get asked about my thoughts on how that intersects with another popular topic — artificial intelligence (AI). Given the latest headline-grabbing developments in generative AI tools, such as OpenAI’s ChatGPT, Microsoft’s Sydney, and image generation tools like Dall-E and Midjourney, it is no surprise that AI has catapulted into the public’s awareness.

As is often the case with new and exciting technologies, the perceived short-term impact of the latest news-making developments is probably overestimated. At least, that’s my view of the immediate impact within the narrow domain of application security. Conversely, the long-term impact of AI on security is huge and probably underappreciated, even by many of us in the field.

Fantastic accomplishments; tragic failures

Stepping back for a moment, machine learning (ML) has a long and deeply storied history. It may have first captured the public’s attention with chess-playing software 50 years ago, advancing over time through IBM Watson winning a Jeopardy championship to today’s chatbots that come close to passing the fabled Turing test.


What strikes me is how each of these milestones was a fantastic accomplishment at one level and a tragic failure at another. On the one hand, AI researchers were able to build systems that came close to, and often surpassed, the best humans in the world on a specific problem.

On the other hand, those same successes laid bare how much difference remained between an AI and a human. Typically, the AI success stories excelled not by outreasoning a human or being more creative but by doing something more basic orders of magnitude faster or at exponentially larger scale.

Augmenting and accelerating humans

So, when I’m asked, “How do you think AI, or ML, will affect cybersecurity going forward?” my answer is that the biggest impact in the short term will come not from replacing humans, but from augmenting and accelerating them.

Calculators and computers are one good analogy — neither replaced humans, but instead, they allowed specific tasks — arithmetic, numeric simulations, document searches — to be offloaded and performed more efficiently.

The use of these tools provided a quantum leap in quantitative performance, allowing these tasks to be performed far more pervasively. That, in turn, enabled entirely new ways of working, such as the new modes of analysis that spreadsheets like VisiCalc, and later Excel, made possible, to the benefit of humans and society at large. A similar story played out with computer chess, where the best chess in the world is now played when humans and computers collaborate, each contributing where it is strongest.

The most immediate impacts of the latest “new kid on the block” generative AI chatbots on cybersecurity are already visible. One predictable example, a pattern that recurs any time a trendy internet-exposed service becomes available (whether ChatGPT or Taylor Swift tickets), is the plethora of phony ChatGPT websites set up by criminals to fraudulently collect sensitive information from consumers.

Naturally, the corporate world is also quick to embrace the benefits. For example, software engineers are increasing development efficiency by using AI-based code creation accelerators such as Copilot. Of course, these same tools can also accelerate software development for cyber-attackers, reducing the amount of time required from discovering a vulnerability until code exists that exploits it.

As is almost always the case, society is quicker to embrace a new technology than it is to consider the implications. Continuing with the Copilot example, the use of AI code generation tools opens up new threats.

One such threat is data leakage — key intellectual property of a developer’s company may be revealed as the AI “learns” from the code the developer writes and shares it with the other developers it assists. In fact, we already have examples of passwords being leaked via Copilot.
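
To make the mitigation side concrete, here is a minimal sketch, in Python, of the kind of guardrail teams are starting to add: scanning an AI-generated suggestion for hard-coded credentials before it is accepted. The patterns, function name and sample snippet below are illustrative inventions, not the rules of any particular product.

```python
import re

# Illustrative patterns only; real secret scanners use far larger rule sets
# (entropy checks, provider-specific token formats, and so on).
SECRET_PATTERNS = {
    "hard-coded password": re.compile(r"""password\s*=\s*['"][^'"]+['"]""", re.IGNORECASE),
    "AWS-style access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic API key": re.compile(r"""api[_-]?key\s*=\s*['"][^'"]+['"]""", re.IGNORECASE),
}

def scan_suggestion(code: str) -> list[tuple[int, str]]:
    """Return (line_number, finding) pairs for an AI-generated code suggestion."""
    findings = []
    for lineno, line in enumerate(code.splitlines(), start=1):
        for label, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, label))
    return findings

# A suggestion that embeds a credential the model may have memorized from training data.
suggestion = 'db_password = "hunter2"\nconnect_to_database(host, db_password)'
for lineno, label in scan_suggestion(suggestion):
    print(f"line {lineno}: possible {label} -- review before accepting this suggestion")
```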

Another threat is unwarranted trust in generated code that may not have had sufficient oversight from experienced humans, which runs the risk of vulnerable code being deployed and opening new security holes. In fact, a recent NYU study found that roughly 40% of a representative set of Copilot-generated code contained common vulnerabilities.
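
The NYU finding is easier to appreciate with a concrete case. The sketch below contrasts the kind of string-built SQL an assistant can happily emit, which is vulnerable to injection, with the parameterized form an experienced reviewer would insist on; the table, column and function names are made up for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

def find_user_unsafe(name: str):
    # The pattern generated code often contains: untrusted input concatenated
    # straight into the query string (SQL injection).
    query = f"SELECT email FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(name: str):
    # Parameterized query: the driver treats the input as data, not SQL.
    return conn.execute("SELECT email FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # returns every row: the injection succeeds
print(find_user_safe(payload))    # returns []: no user has that literal name
```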

More sophisticated chatbots

Looking slightly, though not too much, further forward, I expect bad actors will co-opt the latest AI technology to do what AI has done best: allow humans, including criminals, to scale exponentially. Specifically, the latest generation of AI chatbots can impersonate humans at scale and at high quality.

This is a great windfall (from the cybercriminals’ perspective), because in the past, they were forced to choose to either go “broad and shallow” or “narrow and deep” in their selection of targets. That is, they could either target many potential victims, but in a generic and easy-to-discern manner (phishing), or they could do a much better, much harder to detect job of impersonation to target just a few, or even just one, potential victim (spearphishing).

With the latest AI chatbots, a lone attacker can more closely and easily impersonate humans — whether in chat or in a personalized email — at a much-increased attack scale. Protection countermeasures will, of course, react to this move and evolve, likely using other forms of AI, such as deep learning classifiers. In fact, we already have AI-powered detectors of faked images. The ongoing cat-and-mouse game will continue, just with AI-powered tools on both sides.
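
On the defensive side, a minimal sketch of such a classifier is below, assuming scikit-learn is available: TF-IDF features feeding a logistic regression that scores how phishing-like a message is. Real detectors train on vastly larger corpora and typically use deep models; the handful of messages here are invented purely to make the example runnable.

```python
# A toy phishing-message classifier: TF-IDF features plus logistic regression.
# Production detectors train on far larger datasets and richer (often deep) models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "Your account is locked, verify your password immediately at this link",
    "Urgent: confirm your banking details to avoid suspension",
    "You have won a prize, click here to claim your reward now",
    "Team lunch is moved to 1pm on Thursday, same place as last time",
    "Attached are the meeting notes from this morning's design review",
    "Can you review my pull request when you get a chance?",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = phishing-style, 0 = benign

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, labels)

suspect = "Please verify your password now to keep your account from being suspended"
# Estimated probability that the suspect message is phishing-style.
print(model.predict_proba([suspect])[0][1])
```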

AI as a cybersecurity force multiplier

Looking a bit deeper into the crystal ball, AI will be increasingly used as a force multiplier for security services and the professionals who use them. Again, AI enables quantum leaps in scale — by virtue of accelerating what humans already do routinely but slowly.

I expect AI-powered tools to greatly increase the effectiveness of security solutions, just as calculators hugely sped up accounting. One real-world example that has already put this thinking into practice is in the security domain of DDoS mitigation. In legacy solutions, when an application was subjected to a DDoS attack, the human network engineers first had to reject the vast majority of incoming traffic, both valid and invalid, just to prevent cascading failures downstream.

Then, having bought some time, the humans could engage in a more intensive process of analyzing the traffic patterns to identify particular attributes of the malicious traffic so it could be selectively blocked. This process would take minutes to hours, even with the best and most skilled humans. Today, however, AI is being used to continuously analyze the incoming traffic, automatically generate the signature of invalid traffic, and even automatically apply the signature-based filter if the application’s health is threatened — all in a matter of seconds. This, too, is an example of the core value proposition of AI: performing routine tasks immensely faster.
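
A hedged sketch of that loop is below, using only invented field names and thresholds: baseline the per-source request rate in a traffic window, and emit a block list only when the application’s health is actually degraded. Production mitigations weigh many more attributes (paths, headers, TLS fingerprints) and act on streaming data rather than a single window.

```python
from collections import Counter
from statistics import mean, pstdev

def build_block_list(requests, app_healthy, sigma=3.0):
    """requests: iterable of (source_ip, path) pairs observed in one window.
    Returns the set of source IPs to block, but only when the app is under strain."""
    if app_healthy:
        return set()  # nothing to do while the application is keeping up

    per_source = Counter(ip for ip, _path in requests)
    rates = list(per_source.values())
    baseline, spread = mean(rates), pstdev(rates)

    # The "signature" here is simply: sources whose request volume is far above
    # the window's baseline. Real systems combine many more traffic attributes.
    threshold = baseline + sigma * spread
    return {ip for ip, count in per_source.items() if count > threshold}

window = [(f"192.0.2.{i}", "/home") for i in range(1, 21) for _ in range(5)]  # 20 normal clients
window += [("203.0.113.9", "/login")] * 900                                   # one flooding source
print(build_block_list(window, app_healthy=False))  # {'203.0.113.9'}
```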

AI in cybersecurity: Advancing fraud detection

This same pattern of using AI to accelerate humans can be, and is being, adopted for other next-generation cybersecurity solutions such as fraud detection. When a real-time response is required, and especially in cases where trust in the AI’s evaluation is high, the AI is being empowered to react immediately.

That said, AI systems still do not out-reason humans or understand nuance or context. In cases where the likelihood or business impact of false positives is too great, the AI can still be used in an assistive mode, flagging and prioritizing the security events of most interest for the human.
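
A minimal sketch of that division of labor, with invented thresholds and event fields: events the model scores with very high confidence (and low business impact if wrong) are blocked automatically, while everything else is queued for an analyst in priority order.

```python
from dataclasses import dataclass

@dataclass
class Event:
    description: str
    risk_score: float   # model's estimate that the event is fraudulent, 0..1
    impact: float       # business cost of blocking a legitimate action, 0..1

AUTO_BLOCK = 0.95  # act autonomously only when model confidence is very high

def triage(events):
    blocked, review_queue = [], []
    for e in events:
        # High-impact actions go to a human even at high scores, because the
        # cost of a false positive outweighs the benefit of instant blocking.
        if e.risk_score >= AUTO_BLOCK and e.impact < 0.5:
            blocked.append(e)
        else:
            review_queue.append(e)
    # Analysts see the riskiest remaining events first.
    review_queue.sort(key=lambda e: e.risk_score, reverse=True)
    return blocked, review_queue

events = [
    Event("card-testing burst from a new device", 0.98, 0.1),
    Event("large wire transfer at an unusual hour", 0.97, 0.9),
    Event("login from a new country", 0.62, 0.3),
]
blocked, queue = triage(events)
print("auto-blocked:", [e.description for e in blocked])
print("for review:  ", [e.description for e in queue])
```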

The net result is a collaboration between humans and AIs, each doing what they are best at, improving efficiency and efficacy over what either could do independently, again rhyming with the analogy of computer chess.

I have a great deal of faith in the progression thus far. Peering yet deeper into the crystal ball, I feel the adage “history rarely repeats, but it often rhymes” is apt. The longer-term impact of human-AI collaboration, that is, the results of AI being a force multiplier for humans, is as hard for me to predict as it might have been for the designer of the electronic calculator to predict the spreadsheet.

In general, I imagine it will allow humans to further specify the intent, priorities and guardrails for the security policy, with AI assisting and dynamically mapping that intent onto the next level of detailed actions.
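
One way to picture that, with everything below invented for illustration: the human declares intent, priorities and guardrails as data, and a policy engine, today rule-based and tomorrow AI-assisted, expands that intent into concrete controls.

```python
# Purely illustrative: a human-declared security intent that a policy engine
# (today rule-based, tomorrow AI-assisted) expands into concrete controls.
intent = {
    "protect": ["customer-pii", "payment-api"],
    "priority": "availability-over-strictness",
    "guardrails": {
        "never_block": ["health-check probes"],
        "max_auto_block_minutes": 15,
    },
}

def expand_intent(intent: dict) -> list[str]:
    rules = []
    for asset in intent["protect"]:
        rules.append(f"enable anomaly detection on {asset}")
        rules.append(f"require MFA for administrative access to {asset}")
    if intent["priority"] == "availability-over-strictness":
        rules.append("prefer rate-limiting over hard blocks when confidence is moderate")
    for exempt in intent["guardrails"]["never_block"]:
        rules.append(f"exempt {exempt} from automated blocking")
    rules.append(
        f"expire automated blocks after {intent['guardrails']['max_auto_block_minutes']} minutes"
    )
    return rules

for rule in expand_intent(intent):
    print(rule)
```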

Ken Arora is a distinguished engineer at F5.

