Dino Dai Zovi has gained a reputation as one of the top Apple security researchers in the industry and is the author of a new book on Apple security, “The Mac Hacker’s Handbook.” In this interview, he talks about the state of Apple security, why the company hasn’t implemented better memory protections and his ‘no more free bugs’ meme.
You gave a couple of talks at SOURCE Boston and CanSecWest where you talked about the ease of exploiting OS X, saying it was like going back to 1999. What did you mean by that?
One of the points I made was that current releases of OS X are a little behind the curve with regard to exploit mitigations. This includes address space layout randomization, non-executable memory, and stack and heap memory protections. These are defenses found in other modern operating systems. While Apple has partial implementations of these, they don’t provide as much protection as users would be led to believe. I wanted to show realistically how effective they were. In reality, they’re not that effective at all. So part of my point was that there are a number of defenses Apple’s competitors have implemented that Apple has not. It’s much easier to hack a Mac than it is to hack other operating systems. And so I made the joke that Apple’s heap implementation is so 1999, because that’s when it was implemented and it hasn’t been touched since.
Why haven’t they gotten to the point that Microsoft has gotten to where they see the need for these memory protections?
Honestly, Apple just hasn’t put forth the engineering effort to implement this level of security in their systems, largely because they haven’t seen the level of attacks that other systems have. Windows, notably, has seen it on both the server and client side, and Linux has seen it on the server side. Some of these defenses only get implemented once a system comes under attack, and for Mac OS X the only attacks we’ve seen in the wild are what I call social engineering malware attacks, where users are tricked into downloading a malicious application.
Does some of this have to do with the fact that Apple doesn’t have a lot of adoption in the enterprise?
Quite likely. If the customer demands it, the company will provide it. I think what’s evident is that customers, whether consumers or enterprises, haven’t demanded it to the point that Apple has really made it a priority. However, it remains to be seen what they’ll do with Snow Leopard, which is coming out this summer. Hopefully that will address a lot of their deficiencies in this area, and hopefully users will apply it.
That’s exactly what it took for Microsoft to come around: its large enterprise customers saying, look, we can’t have these worms destroying our networks every six weeks.
And Microsoft took a very risky step in stopping all development for months and putting all engineers through retraining and implementing that massive effort, the Security Development Lifecycle. One can only imagine how loud and clear their customers’ message to them was.
Microsoft also started reaching out to researchers and asking, how can we make things better?
That was exactly when Microsoft changed their relationship with the external research community. They started having an increased presence at security conferences and being more receptive to vulnerability reports, along with other community outreach like Blue Hat events, where they bring in external researchers to talk to executives at Microsoft. That really has done a lot to change external perspectives on Microsoft.
To your knowledge is Apple doing any of that kind of stuff at this point?
As far as I know, no. They seem to have far less involvement with the security community, and they don’t seem to want to be overt about it. I know several members of the product security team personally, so I see them at security conferences. They are sending their people there, but the company as a whole hasn’t been public about it.
And they aren’t calling you or Charlie Miller and saying, can you come in and talk to our engineers?
Exactly. The culture at Apple is far more inward-looking than even Microsoft’s, so they are much less likely to engage with external researchers in that way. A lot of my concern is that I don’t want to see Mac users getting hit with malware attacks.
Where did the idea for the “no more free bugs” campaign come from?
It’s something we’d been kicking around for a while. It all came together when Charlie was visiting New York City and Alex was there, and we all went out for drinks and just kind of happened on this meme. It has a nice ring to it and really resonated with our opinions on how the security industry deals with researchers.
Which is not in the most equitable way.
I try not to focus too much on the equity thing, because a lot of people have misinterpreted the comments and characterized this as a cash grab or as advocating blackmail. It’s more to bring attention to the fact that vendors aren’t doing much to get vulnerabilities in their shipped software fixed. They will receive external reports of vulnerabilities, but that’s a purely passive stance; they’re not taking any active steps to find and fix vulnerabilities in their shipped software. Apple is a notable exception: if you look closely through their product security updates, you’ll see that a number of the vulnerabilities fixed are credited to Apple. When it comes to security, software vendors are in damage-control mode, largely because security is not a selling point for their software.
So you’re saying, you spend all this time finding a bug and making it work and then you hand it over to the vendor and they go away and fix it and say thank you very much.
Largely I’m saying the vendors aren’t doing anything to make it worth your while, so one shouldn’t spend their time volunteering on this, especially if there’s no legal agreement that protects them. There’s no researcher amnesty agreement, and there’s little to no compensation from the vendors directly. As a benchmark, use the prices they’ve offered for Pwn2Own: they’ve set the price for a critical browser vulnerability at $5,000.
The easy counter argument to this is, what’s going to stop you from blackmailing the vendor?
What’s going to stop anyone from doing that now? This actually creates a disincentive, because there is a stated price for it, so researchers know before looking for a vulnerability what it’s worth. In no way am I advocating blackmailing vendors with vulnerabilities. I’m saying, just don’t look. The second point is that, in my opinion, our industry is overly focused on vulnerabilities. It distracts us from making more secure software.
This is an edited version of a podcast interview with Dai Zovi.