InfoSec Insider

4 Innovative Ways Cyberattackers Hunt for Security Bugs

David “moose” Wolpoff, co-founder and CTO at Randori, talks lesser-known hacking paths, including unresolved “fixme” flags in developer support groups.

Blue teamers are in constant battle against hackers — faceless adversaries whose persistence can seem unending. But these actors have processes just like corporate operations, even if theirs are bootlegged.

Attackers seek the path of least resistance: Gain access with as little effort as possible; make as little noise as possible; and use the fewest possible exploits.

Once they’ve identified a tempting asset to exploit, attackers employ techniques to find a vulnerability. Some can give attackers a win quickly; others take more time. Finding and exploiting a bug can take anywhere from a couple of hours to several months, or longer. Some attackers use tried-and-true methods, but the most creative of the group find ways to exploit systems through unexpected vectors. In-house security teams must understand which parts of their attack surface are most tempting to adversaries in order to develop effective defense strategies.

An attacker’s perspective on bug hunting can help inform how defenders protect valuable assets, which begins with four common methods.

Finding CVE Doppelgangers

Much like security teams facing alert fatigue, attackers face a firehose of vulnerability information, only some of which matters for their purposes. Attackers may cross-check known vulnerabilities against their targets as a starting point, but high-severity CVEs aren’t always fruitful: They’re publicly known and will likely be well-monitored by security teams. Known CVEs are, however, excellent starting points for discovering similar bugs hiding in code. Think about the software-development cycle: Code deployed in your organization gets reused and recycled, so a vulnerable snippet can quietly proliferate through your environment. If you patch a vulnerability in the version of the code that’s currently in development, but not in the other versions where that code lives, you’ve left a variant of that bug unpatched. For attackers, auditing open-source code is an easy way to find those variants and a relatively unguarded path into a network.
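To make the "unpatched variant" idea concrete, here is a minimal sketch of the kind of audit an attacker (or defender) might run: an inventory of components in an environment checked against the first version that contains a published fix. The component names, version numbers and CVE-to-fix mapping are all invented for illustration; real tooling is far richer, but the logic is the same.

```python
# Hypothetical mapping of components to the first version that
# contains the fix for a known CVE. These names and versions are
# invented purely for illustration.
KNOWN_FIXES = {
    "imagelib": (2, 4, 1),
    "parsekit": (1, 9, 0),
}

def parse_version(version_string):
    """Turn a dotted version string like '2.3.9' into a comparable tuple."""
    return tuple(int(part) for part in version_string.split("."))

def unpatched_variants(inventory):
    """Return (component, version) pairs still older than the known fix.

    `inventory` maps component names to the version strings actually
    deployed -- the recycled copies of code that may have missed a patch.
    """
    findings = []
    for component, version in inventory.items():
        fixed_at = KNOWN_FIXES.get(component)
        if fixed_at and parse_version(version) < fixed_at:
            findings.append((component, version))
    return findings
```

Run against a hypothetical inventory, a deployment of `imagelib 2.3.9` would be flagged as a variant still carrying the patched bug, while `parsekit 1.9.2` would pass.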

Unresolved Developer Notes

Reading source code can be a little like unearthing a treasure map for attackers. One place I often find low-hanging fruit is in the notes that developers leave for each other during the software-development cycle. While building software, developers go through code and mark known buggy areas. But developers move fast, and can leave these notes unresolved. I know I’ve struck gold when I find tags in code that say “FIXME” or “RBF” (remove before flight). Tags like this put a bullseye on potentially exploitable, unpatched vulnerabilities. I once found a bug in a function labeled “FIXME: buffer overflow possible here. DO NOT SHIP AS IS.” It was, in fact, shipped as is, and we exploited that flaw with ease.

SOS Flags in Support Forums

Once, while searching for a place to exploit on a target’s perimeter, my team noticed that the company was testing a new appliance, and its IT team had posted several questions in a generic support forum using their corporate email addresses. The asset looked easy to break. After a quick Google search, we determined the appliance was an expensive product from a well-known manufacturer of telephony equipment. We dug around support forums and found part of a firmware update posted online, which contained three bugs.

In this instance, one bug was located in the URL path-parsing function and let us bypass authentication. Another let us reach code paths without being a system administrator, giving us the ability to upload and download files. The last was an arbitrary file-leak bug that let us read every file in the application’s file system. At every step, these exploits relied on publicly available information, each one holding the key to the next. Attackers love to follow the footsteps of your team members outside the walls of your network to find traces of information that could lead to an exploit.

Fuzzing for Crashes

A more time-consuming and less satisfying tactic for finding bugs is fuzzing. I was once tasked with breaking into a company, so I started at a relatively simple place: its employee login page. I began blindly prodding, entering ‘a’ as the username, and was denied access. I typed two a’s… access denied again. Then I tried typing 1,000 a’s, and the portal stopped talking to me. A minute later, the system came back online and I immediately tried again. As soon as the login portal went offline a second time, I knew I’d found a bug.

Fuzzing may seem like an easy path to finding every exploit on a network, but for attackers, it’s a tactic that rarely works on its own. And if an attacker fuzzes against a live system, they’ll almost certainly tip off a system admin. I prefer what I call spear-fuzzing: Supplementing the process with a human research element. Using real-world knowledge to narrow the attack surface and identify where to dig saves a good deal of time.
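The length-escalation probing described above can be sketched as a tiny fuzz loop. The target here is a stand-in function with an invented internal limit, standing in for a live login endpoint (which a real run would hammer, with all the noise that implies):

```python
def fragile_login(username):
    """Stand-in for a login handler with a hidden length limit.

    Raises once the input exceeds its internal buffer -- the 'crash'
    an attacker is probing for. The limit is invented for illustration.
    """
    BUFFER_SIZE = 256  # hypothetical internal limit
    if len(username) > BUFFER_SIZE:
        raise RuntimeError("service crashed")
    return "access denied"

def length_fuzz(target, max_len=4096):
    """Feed ever-longer inputs until the target falls over.

    Escalates input length like the manual probe above ('a', 'aa',
    then a flood of a's) and returns the first crashing length,
    or None if the target survives every attempt.
    """
    length = 1
    while length <= max_len:
        try:
            target("a" * length)
        except Exception:
            return length
        length *= 2  # double each round: 1, 2, 4, 8, ...
    return None
```

A spear-fuzzing run would narrow `target` to a field that prior research suggests is parsed by fragile code, rather than spraying every input on the perimeter.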

Defenders are constantly focused on making intrusion more difficult for attackers, but hackers simply don’t think like defenders. Hackers are bound by the personal cost of time and effort, not by corporate policy or tooling. For businesses, adapting to hacker logic and understanding what makes a target tempting is the first step in offensive defense. Begin by weighing the potential impact of a compromised asset against the likelihood of a successful hack; that narrows the attack surface to the parts most critical to defend, and lets defenders focus on the failsafes in place and the CVEs that could actually matter. Understanding the hacker perspective lets businesses build resiliency beyond traditional best practices, layering defenses to keep persistent hackers at bay.

David “moose” Wolpoff is co-founder and CTO at Randori.

