Fuzzing is a term that can be hard to take seriously, but it deserves serious attention in light of today’s attack landscape. Fuzzing has traditionally been a sophisticated technique used by professional threat researchers to discover vulnerabilities in hardware and software interfaces and applications. They do this by injecting invalid, unexpected, or semi-random data into an interface or program and then monitoring for events such as crashes, undocumented jumps to debug routines, failing code assertions, and potential memory leaks. This process helps developers and researchers find bugs and zero-day vulnerabilities that would be nearly impossible to discover otherwise.
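As a rough illustration of that core loop, the sketch below mutates a seed input with semi-random byte flips, feeds each case to a target program over stdin, and saves any input that makes the process crash. The target binary name, the seed file, and the mutation rate are hypothetical placeholders, not a reference to any particular fuzzing tool.

```python
import random
import subprocess

# Hypothetical placeholders: substitute a real target binary that reads stdin
# and a real seed input file.
TARGET = "./target_parser"
with open("sample.input", "rb") as f:
    SEED = f.read()

def mutate(data: bytes, rate: float = 0.01) -> bytes:
    """Flip a small fraction of bytes to semi-random values."""
    out = bytearray(data)
    for i in range(len(out)):
        if random.random() < rate:
            out[i] = random.randrange(256)
    return bytes(out)

for trial in range(10_000):
    case = mutate(SEED)
    try:
        proc = subprocess.run([TARGET], input=case, capture_output=True, timeout=5)
    except subprocess.TimeoutExpired:
        continue  # hangs can also be interesting, but this sketch skips them
    # On POSIX, a negative return code means the process was killed by a signal
    # (e.g. SIGSEGV); that is the kind of event a fuzzer saves for triage.
    if proc.returncode < 0:
        with open(f"crash_{trial}.bin", "wb") as out_file:
            out_file.write(case)
```

Real fuzzers add coverage feedback, corpus management, and crash deduplication on top of this loop, but the inject-and-monitor cycle is the same.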
Fortunately, cybercriminals don’t tend to use fuzzing to uncover vulnerabilities because it is very hard to do and requires a great deal of custom development. Because they operate on a for-profit basis, it is difficult to justify that overhead when simpler attack methods remain so profitable. Only a tiny group of people has the expertise needed to develop and run effective fuzzing tools, which is also why, in the rare instances when criminals do resort to fuzzing, they tend to limit its use to simple things like DDoS attacks.
However, there is likely a vast quantity of vulnerabilities that could be discovered and exploited in commercially available software and operating systems right now using fuzzing technologies. The value of owning an unknown vulnerability to target with a zero-day exploit is high, but because there simply haven’t been enough purpose-built fuzzing tools or skilled developers available to discover them, the cost of finding such vulnerabilities has been higher still. Given the right economic model or opportunity, however, all of that could quickly change.
AI Makes the Difficult Possible
As machine learning models begin to be applied to the fuzzing process, the technique is expected not only to become more efficient and better tailored to helping developers and researchers, but also to become available for the first time to a wider range of less-technical individuals.
As cybercriminals begin to leverage automated fuzzing programs augmented by machine learning, they will be able to accelerate the discovery of zero-day vulnerabilities. This will lead to an increase in zero-day attacks targeting different programs and platforms. Because of the difficulty of anticipating and defending against completely unknown attacks, this development could be a significant game changer for cybersecurity.
This approach is called Artificial Intelligence Fuzzing (AIF). Bad actors will be able to develop and train fuzzing programs to automate and accelerate the discovery of zero-day vulnerabilities. Then, by simply pointing an AIF application at a target, they could begin to automatically mine it for zero-day exploits.
The two machine learning phases of AIF would be Discovery and Exploitation. In the Discovery phase, the AIF tool would learn about the functionality and requirements of a new target, including the patterns it uses for structured data. Then, in the Exploitation phase, the tool would begin to inject intentionally crafted, structured data into that software or interface, monitor the outcome, use machine learning to refine the attack, and eventually force the target to break. This amounts to discovering a vulnerability and an exploit at the same time.
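One way to picture those two phases is the minimal sketch below. Everything in it is a stand-in: FIELDS, run_target(), and the per-field scoring table are hypothetical placeholders for the target’s input structure, its feedback signals (such as code coverage), and the machine-learning component a real AIF tool would use. The point is the generate, observe, refine loop, not the placeholder logic.

```python
import random

FIELDS = ["header", "length", "payload"]  # hypothetical input structure

def run_target(case: dict) -> tuple[int, bool]:
    """Placeholder for running an instrumented target: returns a feedback
    score (standing in for, e.g., code coverage) and a crash flag."""
    blob = "|".join(case[f] for f in FIELDS)
    score = hash(blob) % 100      # dummy feedback signal
    return score, score == 99     # dummy crash condition

# --- Discovery phase ---
# The "model" here is just a per-field table of values weighted by observed
# feedback; a real AIF tool would train a generative model on valid inputs.
model = {f: {} for f in FIELDS}

def generate(model: dict) -> dict:
    case = {}
    for f in FIELDS:
        pool = model[f]
        if pool and random.random() < 0.8:
            # exploit: reuse the value that earned the best feedback so far
            case[f] = max(pool, key=pool.get)
        else:
            # explore: semi-random new value
            case[f] = "".join(chr(random.randrange(32, 127)) for _ in range(8))
    return case

# --- Exploitation phase: generate, observe, refine, repeat ---
for _ in range(1000):
    case = generate(model)
    score, crashed = run_target(case)
    for f, v in case.items():
        model[f][v] = max(model[f].get(v, 0), score)
    if crashed:
        print("candidate vulnerability-triggering input:", case)
        break
```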
This machine learning approach can easily be supervised by a trained cybercriminal, and it can be repeated, allowing a criminal to discover and exploit zero-day vulnerabilities and then run continuous combinations of attacks against a victim. In a landscape where a vast supply of zero-day attacks is available, even advanced tools designed to detect unknown threats, such as sandboxing, would quickly be overwhelmed.
How AIF Will Affect the Cybercrime Economy
For many criminal organizations, attack techniques are evaluated not only in terms of their effectiveness but also in terms of the overhead required to develop, modify, and implement them. As a result, many attack strategies can be disrupted by undermining the economic model cybercriminals employ rather than by countering each attack directly. Strategic changes to people, processes, and technologies can force some cybercriminal groups to rethink the financial value of using certain attacks.
One way that organizations are disrupting attackers is by adopting new technologies and strategies, such as machine learning and automation, to take on tedious and time-consuming security activities that normally require a high degree of human supervision and intervention. These newer defensive strategies are likely to affect how cybercriminals operate, causing bad actors to change attack methods and accelerate their own development efforts.
By using AIF, however, attackers will be able to increase the number and variety of vulnerabilities and exploits at their command, including the ability to quickly produce zero-day exploits and even offer for-hire services on the Dark Web, such as Zero-Day Mining-as-a-Service. The availability of such a service would require a complete change in how organizations approach security, because there is no way to foresee where these zero-days will appear, nor how to properly defend against them. This is particularly true for the kinds of isolated, legacy security tools most organizations have deployed in their networks today.
Fighting Back
Once the purview of defense researchers, advanced fuzzing is poised to fall into the hands of the criminal community. AI-powered fuzzing will change the game for both attacker and target. One effective countermeasure is to go after the underlying economic strategies of criminal organizations. Instead of getting caught up in a perpetual arms race, organizations need to leverage automation, machine learning, and AI for themselves to anticipate threats and change strategies so that it is no longer economically viable for adversaries to attack.
(Derek Manky is Chief of Security Insights and Global Threat Alliances at Fortinet. He brings more than 15 years of cyber security experience to his work helping customers formulate security strategy.)