Market Fail: Regulations May Be Only Hope For Securing Critical Infrastructure

Threatpost’s exclusive interview with Ralph Langner continues, as our conversation shifts from the legacy of the Stuxnet worm to larger issues facing the critical infrastructure sector, including mounting attacks, tensions between vendors and security researchers over responsible disclosure, and what’s needed to secure critical infrastructure and industrial control systems.

 Ralph Langner at the S4 Conference

Paul Roberts: Does the fact that more people are going to know about (vulnerabilities in control systems) as a result of (Project Basecamp) change things?

Ralph Langner: That’s something we’ll have to see. But I would like to point out one thing. I believe that over the last couple of years, this community has been obsessed with thinking about irresponsible disclosure. My opinion is that the vendors succeeded to some extent in creating the impression that anybody who would disclose vulnerabilities in an uncontrolled manner would be, per se, irresponsible. And, actually, this is no longer valid. The vendors and (DHS) ICS-CERT have come up with a theory that these are not vulnerabilities because they are design features, so we are not disclosing vulnerabilities any longer.

Certainly, there are real problems associated with these vulnerabilities, no matter whether you call them vulnerabilities or not. And there was one thing I wanted to point out in my talk: an attacker really doesn’t care if this specific problem or issue is due to a programming error or if it is just due to, let’s say, a bad design decision. What counts for the attacker is the ability to exploit any feature of the target, and they really wouldn’t care if it’s about a bug or a feature. Probably, they would even say, “Wait a second. It’s much more interesting if it’s not a bug, because in the case of a bug, I have to take into account that at some point in time, probably tomorrow or next week, the asset owner will apply a security patch and the vulnerability is gone.” So your sophisticated attacker will definitely try to take advantage of design flaws, where the chance is much higher that they are going to stay. So even in a year, or even in five years, the exploit will still work.

Paul Roberts: If you look at these PLCs (programmable logic controllers), and particularly if you look at the GE device (the D20 programmable logic controller), which has been out there for 15 or 20 years, there really haven’t been incremental improvements, either to the firmware or to the hardware, that would even enable security. So, in the case of the GE PLC, you’re stuck with a device that you can’t even really secure. And, in essence, that’s what GE is saying: “We can’t fix this.”

Ralph Langner: Yes. That’s correct. And I think one issue brought up in that discussion around the GE device that’s very, very important is that it’s not all about securing the installed legacy base. That’s one problem, and it’s a difficult problem. But we were also talking about new installations where the same vulnerable design is installed, let’s say, for example, in a power plant tomorrow. And this same outdated, vulnerable design is going to stay for another 15 years. This is something that we should focus on: just getting rid of all the obsolete and insecure stuff for new installations. So this would be very high on my prioritized list; it’d be one of the issues that should be worked on immediately.

So anytime that vendors or even asset owners say, “Well, that’s such a problem because we can only patch during downtime, etcetera,” that’s all true. But, again, we’re not just talking about the installed base. New systems are installed every day. And as you said, you will also find these GE devices in new installations, which probably is not such a good idea.

Paul Roberts: In your experience, are customers buying that? Obviously, I would assume if you went to the customer and said, “Look, here’s what you’re getting for your $15,000: a 25-year-old processor, a brittle operating system, and back doors,” they would say, “That doesn’t cut it.” So do they buy it because, well, it’s GE and there’s kind of the imprimatur of a major company behind it?

Ralph Langner: Well, I think there are a couple of reasons why these devices are still installed. One reason, certainly, is that only a small percentage of the customer base is aware of the security issues. The majority is not aware of any security issues, because if they come across something that you write, or something that I say, or something that Dave says, they might think about it in a way like, “Well, this is just another security guy making up this stuff to raise attention, etcetera.” And the few who consider, “Could this be real? Is that true?” decide, “Let’s just ask the vendor.” And you know what the vendor says? “No, no, no, no, no. That’s not a problem. We’ve got that under control. Ralph is exaggerating.” So there are a couple of reasons why we don’t have the awareness that we would actually need.

Besides that, there are other, more pressing issues. So, for example, think about the larger installations. Think about a power plant, for example. And in a sense, you purchase the PLCs for such a power plant from one big vendor, maybe GE, maybe Siemens. And here is what happens: if you go to any of these vendors and say, “You know what? I really have a problem with your XYZ component, because I read a Threatpost article that says it’s insecure. I really don’t like this product. Instead, I would like something else, let’s say an ABC so-and-so product from a different vendor.” And here’s what happens. The integrator, be it GE, or Siemens, or Rockwell, or whoever, says, “Well, if you really want to pursue this, we are afraid that we can’t keep up our guarantee of the overall reliability of your plant, because this is not our product and so we can’t assume any responsibility.” So the customer just doesn’t do it.

This is one example of vendors trying to put some decent pressure on their customers in order to keep it all in a proprietary solution that they control. And it has happened more than once that the customer finally gave in, because he’s not in a position to say, “Okay. So you don’t accept the responsibility. That’s fine. But we get rid of the damn GE D20s.” That just doesn’t happen, because, and let’s use again this example of a power plant, this specific controller is a minor item. It’s small potatoes. So you really don’t wanna make a big fuss because of this tiny component.

Paul Roberts: Right. And my sense is that in this world, in the ICS world, these systems are not necessarily swappable or interchangeable in the same way that different components on a traditional LAN would be.

Ralph Langner: Yes.

Paul Roberts: In other words, not every PLC is going to integrate with other PLCs? So, in essence, there is a brittleness to this. It’s not as if everything’s talking via a common protocol and a common interface, right?

Ralph Langner: Correct. Yes. What I have just described was based on the case, which we have seen quite often, where technically you could use a different product from a different vendor. So, technically, that was possible. But the vendor managed to get out of it just by making it a legal case, so to speak.

Paul Roberts: The types of things that we heard about today in the Project Basecamp presentation – about nmap vulnerability scans knocking PLCs offline – is that unusual in your experience?

Ralph Langner: No. That’s just the average, because, for some funny reason, there are very few vendors on the planet who sell controls with network interfaces who really bother to include test procedures in their quality management. This is something that, still today, I can’t really go along with and can’t get comfortable with, because in a way you could say this is just not tolerable. You’re selling a product. You say it’s network compatible, but you never bothered to test what’s going to happen if you do a simple nmap scan? So it’s quite obvious that there are a bunch of products on the market where the vendor obviously never did one simple scan, because they would have recognized immediately, “Oh, wait a second. Let’s go back into the lab, do some further research, and get this problem fixed.”
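
What such a basic robustness check might look like is sketched below. The device address and port are hypothetical placeholders for a lab unit, and the snippet illustrates the kind of simple pre-release test Langner says vendors skip; it is not any vendor’s actual procedure, and it should only ever be pointed at test equipment.

```python
# Minimal sketch of the robustness check described above: run a plain nmap
# scan against a *lab* controller and verify it still answers afterwards.
# The IP address and port are hypothetical placeholders for a test device;
# never point this at production gear.
import socket
import subprocess

LAB_PLC_IP = "192.168.100.50"   # hypothetical lab device, not a real target
CHECK_PORT = 502                # Modbus/TCP, a common PLC service port

def plc_responds(ip: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if the device still accepts a TCP connection."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

def run_basic_scan(ip: str) -> None:
    """Run a default nmap TCP scan, the 'simple scan' mentioned above."""
    subprocess.run(["nmap", "-Pn", ip], check=True)

if __name__ == "__main__":
    assert plc_responds(LAB_PLC_IP, CHECK_PORT), "device not reachable before test"
    run_basic_scan(LAB_PLC_IP)
    if plc_responds(LAB_PLC_IP, CHECK_PORT):
        print("Device survived the scan.")
    else:
        print("Device stopped responding -- the failure mode described above.")
```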

Paul Roberts: What’s really amazing to me is the lack of evolution of these products over decades, in some cases. The idea that if you were selling a PLC with a contemporary processor in 1990, in 2011 you would still be selling the same device with the same processor. Certainly in the software world and the consumer hardware world, that’s anathema. You should be taking advantage of advances in memory and processors as they come along.

Ralph Langner: Well, there are several reasons for this, and the most prominent reason is the long life cycle of the installation as such. So, for example, when you look at a nuclear power plant, it has a lifespan of 60 years. And actually, once you have all of the ICS in place, from a functionality point of view, during these 60 years you don’t intend to change anything. You just want the damn plant to run as it did during commissioning. That’s your best case. And this is certainly extremely different from the IT world, where we want to take advantage of product improvements, new services, etcetera. But in most industrial environments, this just doesn’t happen, because the plant as such is static for quite a long time. Certainly nuclear power plants are a little bit extreme here, but even if you look at an assembly line in an automotive factory, you are talking about, let’s say, ten years. And, again, you really don’t want to see any changes.

So there are no improvements that you would like to take advantage of. You have no desire to put in a faster processor or more memory. What for? Let’s say your welding robot is doing exactly the same work over a decade. Why change anything? You couldn’t take advantage of faster processing or more memory. It’s just not needed. It’s not required. And let’s assume here for a minute, and this is a real problem, that at some point in time, let’s say five years after commissioning, this specific PLC breaks and you have to replace it. In the best case for the customer, you can replace it with an identical PLC. They would definitely love to put in the very same obsolete stuff. Most planners, most maintenance engineers are extremely conservative. Once they think they’ve arrived at a good, proven solution, they will use it over and over again until the product is just no longer available.

This leads to some very bizarre situations. For example, there are cases in nuclear, in new power plants, where we actually see completely obsolete controllers being installed, such as the Siemens S5 controllers, which haven’t even been sold on the open market for several years. But there are still some new (nuclear) installations where these products are actually used and are expected to be around for another 30 years or so.

Paul Roberts: Right. But that’s assuming that the threat environment is the same.

Ralph Langner: Great point. Great point, Paul. So that’s completely ignoring that the environment has changed a little bit. Because if you look at a modern installation, there is such a tight coupling of process control, safety, and office applications, such as quality management (and) optimization routines. And this certainly wasn’t the case some 10 or 20 years ago, when most of the stuff was just analogue.

Paul Roberts: Okay. So let me just, as an imaginative exercise, make you head of the Department of Homeland Security, ICS-CERT, or the head of some organizing body that could actually force changes upon the industry. Where would you direct your attention and energies?

Ralph Langner: So first of all, I have to say, as sad as it might sound, I do believe we need regulation, because the market is just not willing to do this voluntarily. And talking about regulation, certainly, it’s a touchy issue, because we have seen more than once that regulators can come up with some completely bizarre solutions and that they might take years to arrive at anything.

But there is one positive example, and that’s the regulation for cyber security that we see in the nuclear industry. What the NRC did is probably the best you can think of, and it certainly should be examined by other industries as some kind of a best practice. So, in a nutshell, here is what the NRC did. There is a core regulatory document, 73.54, which contains their regulation on cyber security. It’s two pages long when you print it out. And that’s a great starting point, because nobody wants to read 500 pages. They basically just set the frame. They were saying, “Well, this is an issue that needs to be addressed, and you as the operator have some liberties here. We won’t tell you exactly what to do. We just say you have to prove that you’re secure against cyber attacks.”

In addition to this, the NRC has issued a regulatory guideline to educate utilities, which probably don’t have a clue and never thought about cyber security, on how this could or should be implemented. But they don’t enforce it. They just say, “Well, if you have no clue, this is how you could do it, and if you do it this way, you will certainly get our approval.” If you look at Reg Guide 5.71, as it is called, which you can find with a quick search on the Internet, well, that’s pretty good stuff. You would find all the ingredients that somebody like me, or any other security professional in the area, would point to and say, “Yes, this is definitely what we need if you want to arrive at any educated assessment of whether this facility is secure or not.” So it’s not all that bad. There are very good examples out there. And I think that the big insight here is that we probably really do need regulation.

Paul Roberts: And are there countries that do it better than others? I mean, you mentioned the NRC, you’re German, obviously, and here in the United States, we kind of think that Germans are excellent and efficient at all things, but when it comes to this, are there countries that have shown themselves to really be on top of this?

Ralph Langner: No. Well, actually, the US is leading the pack here.

Paul Roberts: Yes, which is interesting.

Ralph Langner: Well, I don’t think it’s surprising, because the major reason seems to be the 9/11 attacks, which just put the focus on the terrorists. And there have been many smart people who realized quickly that terrorists, at some point in time, would not necessarily try to fly passenger airplanes into skyscrapers. They could also use other means of attack. And this certainly includes cyber.

Paul Roberts: Your book is called Robust Control System Networks. Give us a brief synopsis. What are you arguing?

Ralph Langner: Well, I think for your readers it might seem funny, because this book actually is not about security. It’s not about cyber security. What I’m doing is addressing problems that have been discussed under the rubric of cyber security for about a decade. My theory is that we professionals in the community probably should accept responsibility for what Dale (Digital Bond CEO Dale Peterson) calls the lost decade. That means we really haven’t been able to improve the security of critical infrastructure to a great extent. And it has bothered me over the last couple of years that if we follow this path, which is mostly derived from IT security, mostly using the concept of risk analysis and risk mitigation, and if we just continue to do this, probably even increase these efforts, it might lead us nowhere. So I considered whether there could be a different paradigm that we can use, hoping that a fresh approach might arrive at better results.

What I try to do is really focus on these people in the trenches, because if we don’t get the message over to them, things are lost. Nothing’s going to happen. So I reframed most of the stuff that we have been discussing over the last decade in terms that an engineer would understand.

It turns out that most of the issues that we have, the lack of cyber security, can be viewed as quality issues. So what’s lacking here is quality management. And as everybody knows, when you really take quality management seriously, you can’t do it as a project. It has to be a long-term process. Once you stop your quality controls, quality is going to degrade. And this is one example of how I try to attach what, in my opinion, needs to be done to concepts and methods that are not new. They’ve been around for a couple of decades, and most corporations are following these procedures. So what I’m trying to do is convince them to just apply the very same procedures and methods to what I call cyber robustness and cyber fragility, as one sustainable, long-term process. Basically, my message is, “Don’t worry about risk. Don’t worry about attackers, about external forces, aliens that have landed and that, at some point in time, may mess with your systems and you can’t see them. Just forget about this.”

Paul Roberts: So, forget about Stuxnet?

Ralph Langner: Yes. As an example. Just forget about this and just consider that the process you’re running now depends on systems with properties that you have to control, because these properties influence the outcome of your process: for example, the quality of your product, your reliability, your availability, even your safety when you have safety-critical processes. And this is something that obviously is much more appealing to engineers, because many readers realized, ‘Yes, that is just one big fuzzy cloud that we have installed here in our company, which has a direct influence on the reliability of the process, and which we really don’t understand and can’t control to the fullest extent.’ This has got to change.

Paul Roberts: That said, one of the things that keeps coming up here at S4 is the lack of tools, security tools, that either ICS vendors or ICS customers have at their disposal to sort of provide that visibility. Is that something that needs to change? And if we’re going to have an ICS security suite, so to speak, is it going to be different from the PC security suite?

Ralph Langner: Well, it could be. But I would like to make another point here. I think that experience has shown that the idea that I can just buy security as a product has failed. This is just not possible. If anybody promises you that your facility will be secure after you have installed Product XYZ, then he is either lying or he doesn’t really know what he’s talking about.

A more common example would be that you lack system understanding. You lack system documentation. You encounter this in almost every case. I can’t remember even one project where we said, “Hey, wait a second. That’s great documentation. That’s everything we need.” This is, in my point of view, a much more basic finding than discovering a system that hasn’t been patched for ten years. That’s a reality. But another question would be: is it really an option to run a power plant or a large automotive facility when you really don’t know, you really don’t understand, why this particular computer, for example, is talking to that other controller and to that robot? But this is what happens almost every time. When we do assessments, we have to figure it out, and the system specialist says, “I don’t know.” And we say, “Okay. So let’s go to the documentation.” Guess what? It doesn’t exist.

I’m not joking here. Most of the time, we start our security assessments by reverse engineering the systems under consideration just to arrive at a system understanding. And I think you just can’t operate this way, because unless I have a good understanding and documentation of a system, the system cannot be secure. That’s ridiculous. So, no, you don’t even need to bother about anti-virus and (intrusion detection) if this is just one freaking cloud or black box.
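
As an illustration of the kind of reverse engineering Langner describes, the sketch below passively builds a basic “who talks to whom” inventory from a packet capture. The capture file name is a hypothetical placeholder, and scapy is just one library that can parse pcaps; this is an assumed example, not Langner’s actual tooling.

```python
# Rough sketch of a first documentation step: list which hosts talk to
# which, and over what protocol/port, from a recorded packet capture.
# "plant_segment.pcap" is a hypothetical placeholder file name.
from collections import Counter

from scapy.all import IP, TCP, UDP, rdpcap  # pip install scapy

def conversation_inventory(pcap_path: str) -> Counter:
    """Count (src, dst, proto, dst_port) conversation tuples in a capture."""
    pairs = Counter()
    for pkt in rdpcap(pcap_path):
        if IP not in pkt:
            continue
        if TCP in pkt:
            proto, dport = "tcp", pkt[TCP].dport
        elif UDP in pkt:
            proto, dport = "udp", pkt[UDP].dport
        else:
            continue
        pairs[(pkt[IP].src, pkt[IP].dst, proto, dport)] += 1
    return pairs

if __name__ == "__main__":
    # Print the 20 busiest conversations as a starting point for documentation.
    for (src, dst, proto, dport), count in conversation_inventory(
        "plant_segment.pcap"
    ).most_common(20):
        print(f"{src} -> {dst} {proto}/{dport}: {count} packets")
```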

Paul Roberts: Ralph, thank you for taking the time to speak with Threatpost!

Ralph Langner: Thank you!
