Apparently, there is some confusion about whether sandboxing is necessary, sufficient, and/or affordable. (Here is an example from Security Week, although this is not the only instance.)
As the lead of Chrome’s sandboxing team and as co-lead of Chrome’s memory safety efforts, perhaps I can clarify a little.
As I said in my Enigma presentation (slide 7), “good sandboxing is table stakes.” I reiterated this point in my previous post (“if [...] your application is not making use of process sandboxing, consider exploring that first before starting a rewrite effort”).
Contrary to what the Security Week article and some Twitter discourse suggest, sandboxing and memory safety are complementary techniques, and both are necessary.
- Sandboxing reduces the severity of bugs.
Sandboxing isolates code away from system resources and application resources, reducing the damage that a compromise can do. (Sandboxing also has certain efficiency advantages, as well as some disadvantages.)
However, a certain amount of attack surface will always be available from within a sandbox, and memory unsafety (and other bugs) can enable an attacker to get at it.
So you still need to get rid of as many bugs inside the sandbox as possible.
- Memory safety reduces the number of bugs.
As discussed at Enigma and in my previous post, very many bugs, including an overwhelming majority of the vulnerabilities we know about right now, are due to memory unsafety. It helps to get rid of as many of those as possible.
However, memory safety can’t constrain access to system resources, including the file system, system calls, etc.
So you still need sandboxing.
There are 2 key ways that Chromium (specifically) is nearing the limits of how much sandboxing we can do right now:
- Our unit of isolation, the process, is expensive in time and space on some (not all) platforms.
- Some operating systems do not provide sufficiently fine-grained mechanisms to allow us to maximally constrain sandboxed processes. Things are improving, but it’s an unavoidably slow process.
I also tried to raise awareness that not all the applications that need sandboxing are making use of it. I know of at least 1 organization that was compromised because their server application did not sandbox a file format parser (written in C), yet allowed anyone on the internet to send input to it. So, more developers need to do more sandboxing — as an industry, we are nowhere near the limits yet.
We are still pursuing additional sandboxing in Chromium. It’s just that we can see a limit to what’s possible at the moment. If OS developers give us more of the primitives we want, we’ll jump right on them — as we always have.
Finally, nobody knowledgeable, that I know of, has claimed or would claim that eliminating 100% of memory unsafety bugs would also get rid of all vulnerabilities. The claim — based on repeated real-world experience and evidence — is that memory unsafety accounts for a large majority of vulnerabilities. There will still be bugs. Our goal is to marginalize memory unsafety bugs, because they are currently our worst observed problem.