Researchers Blame ‘Monolithic’ Linux Code Base for Critical Vulnerabilities

Researchers contend almost all Linux OS flaws could be mitigated to less-than-critical severity with an OS design based on a verified microkernel.

In an exhaustive study of critical Linux vulnerabilities, a team of academic and government-backed researchers claims to have proven that almost all flaws could be mitigated to less-than-critical severity – and that 40 percent could be eliminated entirely – with an OS design based on a verified microkernel.

“The security benefits of keeping a system’s trusted computing base (TCB) small has long been accepted as a truism, as has the use of internal protection boundaries for limiting the damage caused by exploits,” wrote researchers from Data61, the Australian government’s Commonwealth Scientific and Industrial Research Organisation (CSIRO) and the University of New South Wales in Sydney, in a paper to be presented next week at APSys ’18. “Applied to the operating system, this argues for a small microkernel as the core of the TCB, with OS services separated into mutually-protected components (servers) – in contrast to ‘monolithic’ designs.”

The Linux operating system, which has 26 million source lines of code (MSLOC), contains most services within its kernel, which is the part of the system that executes in the privileged mode of the hardware.

This results in “an explosive growth in the amount of privileged code,” the team said. “Any code executing in privileged mode can bypass security, and is therefore inherently part of a system’s trusted computing base (TCB). As almost all code is buggy, and the number of bugs grows with the size of the code base, this TCB growth is bound to lead to a growth of the number of vulnerabilities.”

Microkernel design, in contrast, is built to keep the TCB small, with the ability to encapsulate untrusted components.

A kernel’s core security function is to provide isolation between unrelated applications that are executing concurrently: One malicious application in theory shouldn’t be able to compromise the confidentiality, integrity or availability of another application. Flaws in the kernel allow exploits to violate this isolation – but a microkernel reduces the impact of any compromise, the researchers argued.

“With a microkernel design, most components are isolated from one another and run with reduced security privileges so that a vulnerability doesn’t lead to the compromise of the whole system,” Linux developer Andrew Ayer told Threatpost.

This in turn provides fine-grained control over access rights in the system, and enables a true least-privilege design.

“In a monolithic OS, compromising one (kernel-provided) service compromises the whole system, therefore the whole multi-million-SLOC kernel is in every application’s TCB,” the researchers wrote. “In contrast, in a microkernel-based system, the TCB of an application becomes highly dependent on the system services it uses.”

It should be noted that while the paper only addressed Linux, the idea is not restricted to the open-source pioneer: Windows and macOS also have monolithic architectures and sprawling code bases, as the researchers pointed out: “The Windows kernel, while not growing as quickly, is even bigger, with a recent version said to be 60 to 65 MSLOC.”

Defanging Flaws

To see if such a microkernel architecture would truly be superior from a security perspective, the researchers analyzed every critical security bug in the Linux kernel that was listed in the Common Vulnerabilities and Exposures (CVE) repository in 2017, looking to see if a microkernel-based approach would make a difference. They found that 96 percent of critical Linux compromises would no longer be critical with a microkernel-based design, 40 percent would be completely eliminated by an OS based on a verified microkernel and 29 percent would be gone even with an unverified microkernel.

For instance, CVE-2015-4001 describes an integer signedness error in the OZWPAN driver. This is a USB host controller device driver used to communicate with a wireless peripheral over Wi-Fi. The integer signedness error can lead to the result of a subtraction becoming negative, causing a memcpy operation to interpret the value as an intention to copy large amounts of network-supplied data into a heap buffer.

“An attacker can insert a payload into a crafted packet to trigger the error and inject data,” the researchers explained. “Since Linux loads the driver into the kernel, it could cause a denial of service by crashing the kernel, or could possibly execute arbitrary code with kernel privileges.”

But in a microkernel-based system, the driver would run as a user-level server in its own address space.

“As such, [it] could not overwrite kernel memory and cause a system crash, information leakage or corruption,” the team found. “Furthermore, any code injection would only execute with the minimal privileges required by the driver. In a well-designed microkernel-based system, this driver would only have the ability to communicate with a Wi-Fi user-level server to interact with the device and with applications using it, but little more. Therefore, this exploit would not affect the security of our hypothetical application.”

Some flaws were only eliminated with formal verification. For instance, CVE-2014-9803 describes a flaw where the Linux kernel on some Nexus devices mishandled execute-only pages, which allowed a crafted application to gain kernel privileges.

“As this operation must be performed in kernel mode, it could equally occur in a microkernel,” the researchers said. “However, in a formally verified microkernel, such as seL4, this bug could not occur.”

The positive results were much the same for most of the 115 flaws the team examined.

“The results are a stark confirmation of the arguments in favor of a small TCB,” they said.

Ayer told Threatpost that the results confirm what has been theorized for decades – and ironically refute Linux’s own creator. “The theoretical benefits of microkernels have been known for a long time, and the topic was the subject of a famous debate in 1992 between [Linux founder] Linus Torvalds and operating systems researcher Andrew Tanenbaum,” he said. “However, until this paper, no one had tried to quantify how much more secure microkernels could be. The results provide confirmation.”

Not a Silver Bullet

While anything that shrinks the attack surface of a computing resource is a good thing, whether it’s at the operating system or application level, security researchers noted that embracing microkernels as a panacea doesn’t take into account other challenges.

“The usage of limited function OS and/or distributed component OS is fine, but this introduces new risks and expands the attack surface that complicates security monitoring and association of various moving parts that become unmanageable for humans,” Joseph Kucic, chief security officer at Cavirin, told Threatpost. “While artificial intelligence/machine learning can aid in this security validation process, I expect hackers will have the advantage for 12 to 24 months if the paradigm is changed for OSes. Existing security AI models do not account for these microservices orientation while the applications themselves are not prepared for this prototype.”

He also pointed out that new hacks are focusing on runtime and memory exploits – and that these will see limited benefit from the proposed changes.

Rick Moy, CMO at Acalvio, also pointed out the complexity challenge.

“There are so many different Linux distributions that IT staff trying to determine which one is right for them are faced with the twofold challenge,” he told us. “This is, one, picking the right base distribution (yes there are smaller ones out there); and then two, customizing it with the packages and services needed. This same customization challenge is part of the micro-services route being proposed. While one may reduce attack surface of a monolithic OS, one increases complexity and security validation requirements for additional micro-services.”

Nick Bilogorskiy, cybersecurity strategist at Juniper Networks, told Threatpost that while a monolithic architecture should be avoided in cases where security and reliability are of the highest priority, there is a speed trade-off to consider.

“A monolithic kernel is faster because the kernel resides in a single address space and all of the features can communicate in the fastest way possible without resorting to any type of message passing,” he explained.

All of that said, re-engineering Linux – or any other major, established OS – is a bit of a hypothetical.

“Unfortunately, it’s unlikely that Linux will change as a result of this paper,” Ayer said. “We’re more likely to see the benefits of microkernels in brand new operating systems like Google’s Fuchsia, which is already using a microkernel.”


Comments


  • Anonymous on

    The major problem is keeping performance. The systems often cited here (NT and macOS) are not actually microkernels – for speed reasons.
  • John Moser on

    Rewriting Minix 3 to run on L4 and porting the specific Linux APIs supporting things like Docker, systemd and dbus is relatively trivial. A full from-scratch overhaul brings images of the full past 30 years of development; yet even then we're only looking at the latest implementation, not the rework and replacement done again and again. Such a minimal effort—substantial, but not nearly what most would imagine—would immediately allow something like Ubuntu or ChromeOS to run directly on this new kernel, with all of the same interfaces and system calls exposed. It is then a matter of porting drivers—which can be a larger matter, as any point of interfacing with the rest of the monolith requires modification. The quick-and-dirty driver approach is basically paravirtualization, but let's not go there.
  • Andrew Wolfe on

    Confusing entry to the story. The pull quote is about microkernels, and an uninformed reader might think that Linux uses a microkernel architecture.
    • Tara Seals on

      Thanks for the feedback and I agree! I've updated it. :-)
  • ZolaIII on

    Sure, microkernel infrastructure in its ideal form (which this discussion is all about) sounds better: smaller, faster, safer. But we don't live in an ideal world. Let me remind you that security flaws are mostly introduced by bad-quality code being added, and that any control mechanism for adding code would become nonexistent once the kernel shatters into a forest of microkernels. Let me also remind you how few good maintainers Linus fully trusts there are for the Linux kernel. Even with his iron fist, Linus is forced (or just likes to say it that way) to use the f-word every couple of merge cycles over nonsense somebody wants added. So you see, it's not all roses in reality, and every futuristic talk about microkernels should be taken as an assault, not only on the freedom of software but also on individual freedoms.
  • ZolaIII on

    The most advanced operating systems involve microkernels, but as an add-on to the monolithic one (for exactly the problems described) – a hybrid kernel. My favourite is BSD's DragonFly. It's rather secure, since none of those guys, including those Australian academics, use it. And if you think you don't use microkernels with your everyday OS, think again. They are there; you just don't have any control over them (nor, in most cases, knowledge of them). QC modems (DSPs) run on a proprietary RTOS version, and in the end microcode updates (so popular these days) can be considered microkernel updates... Just my quarter of a mile on this topic.
