Memory Corruption Mitigations Doing Their Job

At the Security Analyst Summit, Mark Dowd described how memory corruption mitigations are successfully driving up exploit development costs.

SINT MAARTEN—Memory corruption mitigations that have been integrated into major desktop and mobile operating systems are driving up the cost of client-side exploit development and making viable vulnerabilities scarcer than they were a decade ago.

Mark Dowd, whose career has been intimately linked to vulnerability research and exploit development, said today during his keynote at the Kaspersky Lab Security Analyst Summit that mitigations have put up significant barriers to attackers, forcing them to spend more time finding and chaining bugs to run code on compromised machines.

“Useful bugs are harder to find,” said Dowd, founder of Azimuth Security. “Bypassing mitigations is not trivial. Now we’re talking about exploit chains where you first have to compromise a process and then develop a sandbox escape.”

Dowd explained that an old, early-2000s Internet Explorer 6 or 7 browser exploit, for example, used to be a three-stage operation that involved triggering a vulnerability, running code, and reading data and maintaining persistence. An attacker could have their desired effect within a week, he estimated. But by the time IE 11 for Windows 8 was introduced and included memory protections such as ASLR, DEP, Control Flow Guard and heap mitigations, attackers were forced to retool and add costly additional steps such as the use of data corruption techniques to force information leaks and the development of sandbox escapes. This requires, he said, additional study of a program’s internal state and an understanding of user-supplied data in memory in order to build a kernel-level attack that bypasses sandbox protections.

Dowd said that mitigations are not the only factor driving up costs for attackers; aggressive patching routines are proving effective as well.

“Sandboxes are having the biggest effect on development time,” Dowd said. “If you’re considering [cost] calculations for attackers, you have to consider the time it takes to discover a vulnerability and the time to develop an exploit. You often have to find multiple bugs and develop and put together an exploit chain.”

Dowd estimates that development times for browser exploits now run two to four weeks.

“This is a large burden to exploit development,” Dowd said.

Dowd added that memory corruption mitigations have all but wiped out Microsoft server exploits and played a large part in preventing successful Stagefright worms from surfacing against the Android platform.

Moving forward, Dowd expects the trend of rising exploit development costs to continue, with mitigations eventually focusing on limiting later exploit stages and preventing code execution. Apple, he said, has led the way with its code-signing requirements in macOS and iOS, which allow only verified code to execute. Microsoft is said to be following suit with its upcoming Creators Update.

“Exploitable memory corruption vulnerabilities in ubiquitous software will be increasingly rare,” Dowd said, pointing to the increased use of type-safe languages in software development, improved static analysis and IDE tools, and Google and Microsoft spending significant dollars on fuzzing and triaging of bugs in development. Dowd also expects attackers to, in turn, focus on softer targets such as connected, embedded IoT devices that have none of these ingrained protections.

“Next-generation memory exploits will likely be data-only attacks,” Dowd said. “And future mitigations will focus on data structure integrity.”
