Over the past few years the embedded-systems industry has been moving toward the use of memory protection, and operating systems that support it. With this comes the increasing popularity of commodity operating systems, particularly embedded versions of Linux and Windows. Such a system, even if stripped to a bare minimum for embedded use, has a kernel (defined as the code executing in the hardware's privileged mode) of perhaps 200,000 lines of code (LOC), which is a lower bound on the size of the trusted computing base (TCB). In practice, the TCB is larger than just the kernel; in a Linux system, for example, every root daemon is part of the TCB. Hence the TCB will, at an optimistic estimate, still contain hundreds if not thousands of bugs, far too many for comfort.
If we want a secure system, we need a secure, trustworthy TCB, which really means one free of bugs. Is this possible? Methods for guaranteeing the correctness of code (exhaustive testing and mathematical proof, a.k.a. formal methods) scale very poorly: they are typically limited to hundreds or, at best, a few thousand lines of code. Can the TCB be made that small? Maybe not, but maybe it does not have to be.
Modularity is a proven way of dealing with complexity, as it allows one to separate a problem into more tractable parts. However, with respect to trustworthiness, modularizing the kernel does not help, as there is no protection against kernel code violating module boundaries. As far as assurance goes, the kernel is atomic.
The situation is better for non-kernel code. If it is modularized, then individual modules (or components) can be encapsulated in their own address spaces, which means that the module boundaries are enforced by hardware mechanisms mediated by the kernel. If the kernel is trustworthy, then the trustworthiness of such a component can be established independently of other components.
In this way, the TCB as a whole can be made trustworthy even if it is larger than what is tractable by exhaustive testing or formal methods.