Native Client Sandbox – Sandboxing the Sandbox

For those of you who don’t know, Google’s Native Client is a way for browsers to run native code within the browser. In other words, I can write a C/C++ program (or anything else LLVM supports) and run it within the browser – pretty cool! The benefits are all over the place but, basically, ChromeOS has been largely criticized for being a ‘limited’ operating system, with apps that aren’t very powerful, and NaCl provides a way for developers to create secure and powerful applications.

But NaCl isn’t the first project to try to do this. The infamous ActiveX tried beforehand and, as we all know, totally sucked in terms of security. Will NaCl be a massive hole in an otherwise secure browser? Nope. Because Google poured on the security goodness here once more. Seriously, I realize most people don’t have the monetary capabilities of Google, but they do a hell of a lot when it comes to securing products these days.

We all know by now (if you don’t, read more of my posts!) that Chrome runs in a pretty cool sandbox. On Windows the available sandboxing primitives are limited and, while Chrome does an excellent job there, Linux provides more tools for sandboxing that address critical issues. On Linux, even conservatively, the sandbox is very impressive. Your renderer process, the most exposed code, runs with no rights – it can barely interact with the kernel, it has no file access, it basically gets fonts and that’s it. It’s locked into a tight sandbox. Yet Google decided that, for NaCl, they’re going to add *yet another sandbox*, which means that all NaCl code runs within both the Chrome sandbox and the NaCl sandbox. In short, the Native Client process is a PPAPI process that runs in the Chrome renderer process, so it is limited in the same ways.

That’s pretty cool. What’s cooler is how the NaCl sandbox works (without getting into PPAPI and the proxy I’m not doing it full justice, but I’m writing this spontaneously at 3am, so oh well!).

On x86, NaCl uses a processor-specific feature called segmentation. Segmentation – something I’ve also seen used by PaX, the project that invented security techniques such as ASLR – is a way for the CPU to control which areas of the address space a program can access, and with what rights. Unfortunately, segmentation isn’t available on the other architectures NaCl supports, ARM and x86_64. Just like PaX found a workaround, so did Google – the implementation differs between ARM and x86_64 but the goals are the same. (The presenter in the NaCl video I watched also skims over it – anyone know of better documentation? It seems like for 64-bit they just use guard pages to separate the data/code ‘segments’.)
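
I don’t know the details of NaCl’s 64-bit scheme beyond that, but here’s a minimal C sketch of the general guard-page mechanism – this assumes nothing about NaCl’s actual code, it just shows how revoking all access to a page turns a stray access into an immediate fault:

```c
/* Minimal guard-page illustration - NOT NaCl's implementation, just the
 * general mechanism: reserve memory, then make a page in the middle
 * inaccessible so anything that touches it faults immediately. */
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    long page = sysconf(_SC_PAGESIZE);

    /* Reserve three pages: usable | guard | usable. */
    unsigned char *base = mmap(NULL, 3 * page, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (base == MAP_FAILED) { perror("mmap"); return 1; }

    /* Revoke all access to the middle page - it becomes the guard. */
    if (mprotect(base + page, page, PROT_NONE) != 0) {
        perror("mprotect");
        return 1;
    }

    memset(base, 0xAA, page);            /* fine: first page is writable   */
    printf("wrote to the first page\n");

    base[page] = 0xAA;                   /* SIGSEGV: touches the guard page */
    return 0;
}
```

Scaled up to huge unmapped regions around the untrusted code and data, the same trick turns any stray access into an immediate crash instead of silent corruption.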

NaCl executables are built with a toolchain that does a couple of pretty interesting things. NaCl executables are compiled without specific instructions – those are blacklisted and simply not allowed into the code. Interestingly, ret is banned… so instead of returning, you pop the return address and jmp to it. There’s also a toolchain feature that has to do with instruction alignment; rather than get into the details, the point is that you can’t jump into the middle of a run of instructions, you have to jump to the beginning. The assembly the toolchain produces gives you a safer and saner memory model that invalidates the ability to exploit specific types of vulnerabilities.

NaCl also performs instruction validation. If it sees any blacklisted instructions it kills the process, naturally. It basically does a check, before the code runs, to ensure the file isn’t trying to perform actions that shouldn’t be allowed (though if you use the toolchain these should never be built in anyway).
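
To make the idea concrete, here’s a toy sketch in C of what a validator conceptually does. This is not NaCl’s real validator – the instruction struct and opcode names are made up – it just shows the rules: no blacklisted instructions, nothing straddling a 32-byte bundle boundary, and direct jumps only to the start of a real instruction:

```c
/* Toy validator sketch over a pre-decoded instruction list.
 * Hypothetical types and opcodes; not NaCl's real validator. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define BUNDLE_SIZE 32

enum opcode { OP_MOV, OP_ADD, OP_JMP_DIRECT, OP_RET, OP_INT, OP_SYSCALL, OP_OTHER };

struct insn {
    uint32_t    offset;   /* offset within the text segment            */
    uint32_t    length;   /* encoded length in bytes                   */
    enum opcode op;
    uint32_t    target;   /* jump target offset, if op is a direct jump */
};

static bool is_blacklisted(enum opcode op) {
    /* ret, int and raw syscall instructions are simply never allowed. */
    return op == OP_RET || op == OP_INT || op == OP_SYSCALL;
}

static bool is_insn_start(const struct insn *code, size_t n, uint32_t off) {
    for (size_t i = 0; i < n; i++)
        if (code[i].offset == off)
            return true;
    return false;
}

bool validate(const struct insn *code, size_t n) {
    for (size_t i = 0; i < n; i++) {
        const struct insn *in = &code[i];

        if (is_blacklisted(in->op))
            return false;

        /* No instruction may straddle a 32-byte bundle boundary, so you
         * can never land in the middle of one by jumping to an aligned
         * address. */
        if (in->offset / BUNDLE_SIZE !=
            (in->offset + in->length - 1) / BUNDLE_SIZE)
            return false;

        /* Direct jumps must land on the start of a real instruction. */
        if (in->op == OP_JMP_DIRECT && !is_insn_start(code, n, in->target))
            return false;
    }
    return true;
}
```

A real validator obviously works on raw machine code with a decoder, and NaCl additionally forces indirect jump targets to be masked to bundle boundaries, but the rules are the same in spirit.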

Again, all of the visible attack surface from a NaCl executable is also sandboxed. That means that even if I get out of the NaCl sandbox through the proxy interface or through the renderer I’m still stuck in what are essentially the strongest sandboxes currently implemented on consumer systems and I still need to leverage another attack to get out.

I’d love to take each specific area of the sandbox (like the ret removal) and break down exactly how it works and how effective it is, but this was a post born of boredom and an inability to sleep. The sandbox itself is very complex, but pretty cool. I’m not quite sure how I feel about it right now, but, as an extra layer, I think it’s somewhat ideal in its goals at least. We’ll see how it works out; I’m looking forward to the next Pwnium when we’ve got NaCl built in. I’d also love to see Google add a 20,000 dollar bug bounty reward for NaCl sandbox bypasses like they’ve done for broker sandbox bypasses.

I probably missed a lot of stuff, most of what I’ve read was a while ago, but I’m hoping that we get more documentation soon.

Honestly, I just wish every company had the resources to do what Google does with security. NaCl was some experimental little project hack they made, and they are able to pour massive resources into fuzzing and all sorts of stuff. Really cool.

There are a few great resources on the NaCl sandbox. I’ve read as much as I can about it, but this video is pretty great: https://www.youtube.com/watch?v=5bcyuKh3__0

 

Explaining Chrome’s Linux Sandbox

Note: The documentation for Chrome’s Linux sandbox is lacking. This is my attempt to make sense of it and clarify how it works for users who may not want to sift through multiple docs on the subject. If I have misinterpreted something, let me know – some of the docs are out of date and I may have missed changes.

Chrome is well known for its sandbox, which has held up incredibly well over the years – not a single in-the-wild attack against it. But on the Linux side of things it’s even more impressive: Chrome’s sandbox there is immensely more powerful than on Windows. Though the architecture is similar, the mechanism is fairly different.

Chrome’s architecture is made up of multiple parts – on Linux there is a broker, your SetUID Sandbox process, and your tabs, renderer, plugins, and extensions (the Zygote processes).

The Chrome-Sandbox SUID Helper Binary launches when Chrome does, and sets up the sandbox environment. The sandbox environment is meant to be restrictive to the file system and other processes, attempting to isolate various Chrome parts (such as the renderer) from the system.

A sandboxed process is put inside a chroot, a sort of virtual file system (chroot = change root – it gets a new root). It basically gets its own file system to work with and, in this case, it’s not given any write access to the system. The limitations imposed on the process prevent it from escaping the chroot.
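
As a rough illustration (not Chrome’s actual code – the directory path here is a made-up placeholder), this is what the chroot-then-drop-privileges dance looks like in C:

```c
/* Sketch of the chroot step, run with root privilege (which is exactly
 * what the SUID helper has). The empty-dir path is hypothetical. */
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <unistd.h>

int main(void) {
    if (chroot("/path/to/empty-dir") != 0) { perror("chroot"); return 1; }

    /* Always chdir("/") afterwards, or the old cwd still points outside
     * the new root. */
    if (chdir("/") != 0) { perror("chdir"); return 1; }

    /* Drop root so the process can't simply chroot() its way back out. */
    if (setgid(getgid()) != 0 || setuid(getuid()) != 0) {
        perror("drop privileges");
        return 1;
    }

    /* The real filesystem is now out of reach. */
    if (fopen("/etc/passwd", "r") == NULL)
        printf("no /etc/passwd in here: only the empty root is visible\n");
    return 0;
}
```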

The sandboxed process is also given its own PID namespace (a way for a process to look like it’s standalone on the machine, or sees only a subset of processes), denying it the ability to ptrace() or kill() other processes. ptrace() in particular is dangerous, as it allows a process to read and manipulate memory in other processes. Sandboxed processes are unable to ptrace() each other as well (they’re set undumpable).

A network namespace is used as well in order to prevent sandboxed processes from connecting out – not much documentation on this.
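
Here’s a rough sketch of those two pieces together – again, illustrative C rather than Chrome’s code. Creating the namespaces requires privilege, which is exactly why the SUID helper exists, and the child marks itself undumpable so its siblings can’t ptrace() it:

```c
/* Sketch: put a child into fresh PID and network namespaces and mark it
 * undumpable. Requires root / CAP_SYS_ADMIN. Not Chrome's actual code. */
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <sys/prctl.h>
#include <sys/wait.h>
#include <unistd.h>

static char child_stack[1024 * 1024];

static int child_main(void *arg) {
    (void)arg;
    /* Refuse ptrace() and core dumps from other unprivileged processes. */
    prctl(PR_SET_DUMPABLE, 0);

    /* In a new PID namespace this process believes it is PID 1 and cannot
     * see (let alone kill() or ptrace()) processes outside it. The new
     * network namespace has no usable interfaces, so no connecting out. */
    printf("my pid in here: %d\n", getpid());
    return 0;
}

int main(void) {
    pid_t pid = clone(child_main, child_stack + sizeof(child_stack),
                      CLONE_NEWPID | CLONE_NEWNET | SIGCHLD, NULL);
    if (pid < 0) { perror("clone"); return 1; }
    waitpid(pid, NULL, 0);
    return 0;
}
```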

The broker process, which is not confined by the SUID sandbox, is what handles decisions about downloading files, writing to the disk, etc. It handles the dangerous stuff and is unrestricted, but it is separated from the areas of the program that are most open to attack. Using an AppArmor profile allows you to restrict even the broker process; otherwise it remains confined purely by DAC.

The next layer of restriction is provided by the Seccomp-BPF sandbox. Seccomp filters are something I’ve written about before. Their goal isn’t to protect the system from damage, like the SUID sandbox does, but to protect the system from further exploitation.

Seccomp-BPF works by restricting the system calls that programs can make. The implications of this are covered in this post. A quick summary is that a sandbox, or any form of access control, is only as powerful as the kernel. It is very often the case that, rather than trying to find issues with the sandbox itself, an attacker can simply go after the big buggy kernel running underneath it. An attack on the kernel allows for a full bypass of the sandbox.*

Seccomp works by restricting access to the kernel by filtering the ‘calls’ that can be made to it. The fewer calls a program can make, the fewer ways it can exploit the system. Suddenly the kernel isn’t this massive glob of attack surface; it’s a much smaller area, with monitored interaction between it and the program.

Chrome on Windows had its sandbox broken at Pwn2Own by MWRLabs. It was, in fact, a local kernel vulnerability that allowed them to bypass the sandbox once they’d gained access to the renderer. Such an attack would be far more difficult on a Linux system with seccomp enabled.

Overall the sandbox works by reducing the potential for damage and reducing the potential for local exploitation. Google is, as always, pouring work into Chrome’s security. The sandbox is very impressive, and I would love to see some research into breaking it.

There was a ‘partial reward’ for PinkiePie exploiting ChromeOS, but it was unreliable. No details have been released yet, quite unfortunately.

*Plug for Grsecurity here. See my guide for setting up a hardened Grsec kernel. Seccomp limits kernel attack surface, Grsecurity makes the entire kernel more difficult to exploit.

Sandboxing Popularity Will Do Two Things

Sandboxes are getting more popular, with Chrome, Internet Explorer, Adobe Reader and Flash Player all implementing some version of a sandbox. So far these sandboxes have been very effective – attacks against these programs either don’t exist or have shrunk to fairly rare events. While attackers have momentarily shifted their focus to unsandboxed programs, such as older versions of these programs, or other plugins like Java, there will likely come a point where they’re faced with actually dealing with the sandboxes. So how will they fare?

There are two ways hackers will monetize a situation in which a sandbox is involved. Given a scenario where the vast majority of users are only running sandboxed processes (no more Silverlight or Java), an attacker will be forced to either:

1) Break out of the sandbox

or

2) Monetize from within the sandbox

Breaking Sandboxes

In my last post I briefly wrote about Chrome getting hacked at Pwn2Own. The bypass of Chrome’s notoriously strong sandbox took place on Windows, and it made use of a kernel vulnerability to get privilege escalation.

While breaking Chrome’s sandbox through design issues is very difficult, and the code for the broker process is relatively small, the Windows Kernel is large and complicated, ripe for exploitation.

The kernel isn’t the only vulnerable piece of software viable for sandbox escalation. Security software is constantly poking holes in sandboxes – you can get a full bypass of Chrome’s sandbox just by attacking the AV that injects into it.

Hackers are very likely to make use of local privilege escalation attacks, especially for high value targets, in order to monetize systems that use sandboxes.

In-Sandbox Attacks

Attackers do one thing really well, and it’s pretty much universal – they make money. It doesn’t matter if they’re just getting your emails, some paypal info, or whatever else they can get their hands on, they will usually be able to find a way to sell or use that information to their advantage.

Just because an attacker is stuck in a sandbox does not mean they can’t make money. Depending on sandbox architecture they can potentially have more than enough information, just by compromising your browser, to steal bank info, credit card credentials, email passwords, and more.

Conclusion

There is one thing we can be sure of – attackers won’t just give up. Maybe they’ll accept losses, maybe they’ll change their focus, but hackers aren’t going anywhere. There is still far too much money to be made.

Whether they break the sandboxes or learn to work within them, attacks are still going to happen.

0-Day Exploit Bypasses Adobe Reader Sandbox

A YouTube video demonstrates an attack against Adobe’s PDF Reader – something that used to be completely mainstream, boring. But what makes this one interesting is that it also bypasses the Adobe Reader sandbox, which is based on the sandbox used by Google Chrome, and the exploit doesn’t rely on JavaScript.

Adobe Reader implemented a sandbox of similar architecture to Google Chrome, using a low integrity process to handle untrusted code and a broker process to make security decisions. This attack bypasses the Adobe Reader sandbox entirely and, unlike most Adobe Reader exploits, doesn’t require JavaScript to work.

Attacks like this are likely to become more common. As programs make use of sandboxes it becomes necessary for attackers to break out of those sandboxes to further monetize the system.

Adobe Reader has always been a popular program to exploit due to the nature of PDF and the popularity of the software. It seems attackers aren’t giving up just because of a sandbox, though it’s clear that the Adobe Reader Sandbox has reduced attacks in the wild.

The exploit, which is being sold on the black market for 30,000-50,000 dollars, has already been incorporated into the popular Blackhole Exploit Kit. Blackhole is a very common way for attackers to distribute malware such as Zeus (a widespread piece of malware that steals banking info), so it’s best to be wary while opening PDFs until a patch is out.

For protection against this exploit I suggest setting up EMET. Click here to read how.

 

Update: Adobe is now in contact with Group-IB and hopefully there will be a fix out soon.

Newly Discovered Java Vulnerability Affects All Versions 5/6/7

A newly discovered Java vulnerability allows an attacker to bypass the Java sandbox and achieve remote code execution with unsigned content. An attacker exploiting this vulnerability would be able to target Java 5, 6, and the latest version, Java 7.

Due to the nature of Java as a cross-platform language, all Java users, whether on Linux, OSX, or Windows, are vulnerable. It’s this ‘write once, exploit everywhere’ quality that makes Java such a tempting target. With over 1 billion devices running Java it’s plain to see why an attacker would look for exploits there.

The exploit is also confirmed to work on all browsers on Windows 7 32bit, though it should work on all browsers on all Java capable platforms.

On top of the tempting nature of Java there’s Oracle’s poor history with Java security. Patches tend to arrive late, long after attacks are underway, and the Java Runtime Environment has no particular security-oriented hardening (despite seeming like it could if Oracle only tried).

It wasn’t long ago that another vulnerability in the JRE was found. That one affected Java 7 only, and everyone was surprised that Oracle was able to patch it in about 4 days. Or at least they were surprised until they found out Oracle had been notified of the vulnerabilities months earlier.

The short story is that Java is always going to be a target. On Windows you can rely on third-party software to secure it, and on Linux you can AppArmor it.

Pwnium Two – Google Chrome To Hold Another Hacking Contest

Google had so much fun with the Pwnium competition the first time that they’ve decided to hold another one. This should be interesting, as we’ll get to see if Chrome exploits are really worth 60,000 dollars or if attackers are more willing to sell to higher bidders.

The rewards are similar, though now instead of a 1 million dollar limit there’s a 2 million dollar limit. This is largely irrelevant, as it is very unlikely there will be that many exploits.

The competition essentially lets a bunch of people come together and see how far they can break Chrome. Last time around we had three exploits bypass Chrome’s sandbox – one by Pinkie Pie, one by Vupen, and one by Sergey Glazunov.

The Vupen exploit was pretty lame and used the Flash plugin. The Flash plugin for Chrome is now PPAPI and far stronger than it used to be so Vupen’s going to have to find another way to get out of the sandbox.

The Vupen exploit was not revealed but the others were. They made use of 6 and 12 bugs respectively and were really brilliant.

Chrome’s sandbox has improved since the last competition – the renderer now runs at Untrusted integrity, as does Flash – so it will be fun to see how people break out this time.

Chrome Seccomp-BPF Sandbox

chrome://sandbox has gotten an update reflecting the newly implemented mode 2 seccomp filters, built on the Berkeley Packet Filter (BPF). To learn more about syscall and seccomp filtering you can read this post, which also covers how Chrome’s new sandbox on Linux works.

Chrome’s seccomp sandbox is a powerful restriction on how Chrome can interact with the system’s kernel. This limitation is an effective way to prevent kernel exploitation, which makes it a wonderful reinforcement of Chrome’s SUID sandbox. The seccomp sandbox is ideal for a program like Chrome – a program that already implements some form of sandboxing. The best way to escape from a sandbox, outside of a sandbox design issue, is to exploit the kernel – doing so allows you to bypass almost any security measure in place – and the seccomp sandbox attempts to mitigate this threat.

Check to make sure that you’re adequately sandboxed by going to chrome://sandbox.

[Screenshot of the chrome://sandbox status page.]

Comparing PPAPI Flash To Firefox Flash Sandbox

Chrome 21 hit Windows a few days ago and I’ve been meaning to write a post letting users know that they should now be running the PPAPI Flash plugin by default (check chrome://plugins), which enables a far more restrictive sandbox.

How much stronger is this sandbox? IBM has written a wonderful whitepaper about it comparing the Firefox NPAPI Flash Sandbox, Chrome NPAPI Flash Sandbox, and the newly rolled out Chrome PPAPI Flash Sandbox.

The PPAPI Sandbox adds a number of restrictions. Here’s a few screenshots of the presentation that highlight some important areas.

[Screenshots from the IBM whitepaper highlighting the differences between the sandboxes.]

 

As you can see, Pepper Flash is significantly more secure than the NPAPI plugin that preceded it (and that is still in use elsewhere). It has considerably reduced read access, write access, and registry access, as well as stronger, more restrictive job and token settings.

The first sandbox bypass for Chrome by Vupen used the Flash plugin because the sandbox for it was the weak link. Firefox’s sandbox has improved somewhat over the old Chrome sandbox but the latest iteration, Pepper, is much stronger than either of the two.

Flash exploits in the wild are going to drop significantly, just as they did with Adobe Reader.

Chrome 21 Brings PPAPI Flash To Windows

Chrome 21 is in beta right now and it won’t be long before Chrome users are all benefiting from a much more powerful PPAPI sandbox. The sandbox covers the Adobe Flash plugin, which has been commonly exploited in the past. Of the vulnerabilities used in the Blackhole Exploit Kit about 20% are Flash (65% Java, the rest PDF).

Chrome had previously sandboxed Flash player but it built the sandbox around Flash, leading to holes and looser restrictions. This time Flash has been built to work in the sandbox – the way it should be. This allows for a stronger sandbox.

The first public exploitation of Google Chrome was by Vupen in 2011. They broke through a “default installation of Chrome”, which includes Flash. It was confirmed later that it did in fact use the Flash plugin. Why did Vupen choose the Flash plugin? It’s the easy target – or it was.

Vupen’s exploit is ‘proof’ that the Flash sandbox was the easier target. It’s nice to see that Google is still taking steps to harden their sandbox even though it’s never been targeted in the wild.

Explaining Seccomp Filters

The seccomp filtering implemented in the 3.5 kernel (and in Ubuntu’s kernel) is really cool and I’m bored, so I want to write about it (hooray for having a blog). I’m going to explain what seccomp filters actually do at as low a level as I feel comfortable. I’ll leave some stuff out and gloss over a few other things because either 1) I personally don’t know it well enough or 2) it would take forever to explain. I want to make this as accessible as possible for those readers who aren’t necessarily familiar with all of this terminology.

Seccomp filters are a whitelist, written by the developer and installed by the program itself (typically at startup), of which system calls that program is allowed to make. If a system call that hasn’t been whitelisted is made, the program is killed (or the call is denied, depending on how the filter is written).

What Is A System Call?

A system call is basically how a program speaks to the kernel. Programs are basically (or literally, I guess) instructions; they want to get something done. Oftentimes they have to (for performance or ease-of-use reasons) outsource that action to the kernel. They do this through a system call, something like write(). The parentheses hold your arguments – this is a simplified example, since a real write() actually takes a file descriptor, a buffer, and a length – but you might have write(“hello world”), your program passes that to the kernel, the kernel sees “the syscall is ‘write’ and the argument is ‘hello world'”, does what it needs to do, and you end up writing “hello world” somewhere.
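
For the curious, here’s what that toy example looks like as real C – write() takes a file descriptor, a buffer, and a byte count, and the libc wrapper hands those straight to the kernel:

```c
/* The real write() system call: fd 1 is stdout, the kernel does the
 * actual writing. */
#include <unistd.h>

int main(void) {
    const char msg[] = "hello world\n";
    write(1, msg, sizeof(msg) - 1);
    return 0;
}
```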

What’s The Issue?

There are a few issues with this. The first is that the kernel is the highest privilege level that software can reach (ring 0, in CPU terms). This means exploits in the kernel are also going to be at the highest level, and at that point they can do practically anything, including directly interacting with your hardware. Following this, it’s only possible to exploit code that you can interact with either directly or indirectly. A system call is a way for programs at any level to interact with the kernel, therefore it’s a way for any program to escalate to kernel level via an exploit.

The other issue is that there are a lot of system calls, and new ones are created over time as new kernel features appear. That means new kernel attack surface, and it also means new capabilities for programs. What if I don’t want my program to be able to write? Well, it has access to write(), so I would have to find some other way to stop that, like an LSM – and there are a lot of other syscalls not so easily stopped. By whitelisting syscalls we implement absolute least privilege, meaning that programs can only use the syscalls they really need.

The short answer is that abusing syscalls allows for new and unforeseen behaviors as well as the potential for privilege escalation. Filtering syscalls directly limits kernel attack surface and what programs can do.
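
To make this concrete, here’s a minimal seccomp-BPF whitelist in C. This is a sketch of the mechanism, not anything Chrome or any other project actually ships, and it’s x86-64 Linux only as written (syscall numbers differ per architecture, and a production filter would also check the arch field of seccomp_data):

```c
/* Minimal seccomp-BPF whitelist sketch: after the prctl() below, the
 * process may only make the listed syscalls; anything else kills it. */
#define _GNU_SOURCE
#include <stdio.h>
#include <stddef.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <linux/filter.h>
#include <linux/seccomp.h>
#include <unistd.h>

/* If the syscall number matches, allow it; otherwise fall through. */
#define ALLOW(nr) \
    BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, (nr), 0, 1), \
    BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW)

int main(void) {
    struct sock_filter filter[] = {
        /* Load the syscall number from the seccomp_data struct. */
        BPF_STMT(BPF_LD | BPF_W | BPF_ABS, offsetof(struct seccomp_data, nr)),
        ALLOW(SYS_write),
        ALLOW(SYS_exit_group),
        ALLOW(SYS_exit),
        /* Everything else: kill the process. */
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL),
    };
    struct sock_fprog prog = {
        .len = sizeof(filter) / sizeof(filter[0]),
        .filter = filter,
    };

    /* Required so an unprivileged process may install a filter. */
    prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
    if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog) != 0) {
        perror("prctl(SECCOMP)");
        return 1;
    }

    write(1, "still alive: write() is whitelisted\n", 36);
    syscall(SYS_getpid);   /* not on the list -> the kernel kills us here */
    return 0;
}
```

Run it and the write() goes through, while the very next syscall gets the process killed by the kernel.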

Where Filters Really Help

To understand where these filters really help I think I should explain the concept of least privilege. Least privilege means implementing a program so that it only has access to what it needs and nothing more. This means that if there are files A-Z on a system and the program only ever uses A, B, and C, then it won’t have access to D-Z. It may also not need inter-process communication with various programs, so the IPC may be restricted too. Maybe it shouldn’t be able to execute specific files – again, limit it. The idea is to make it so that it can do only what it needs to function and nothing else.

This is one of the more important concepts in computer security. What this means is that if the aforementioned program gets exploited and my critical file is at E, the hacker can’t get to E; they’re stuck with some useless config files at A-C. And maybe there’s a way to exploit program F but, again, they can’t access F, so the visible attack surface is reduced.

The simplest way out of a good sandbox (one not full of holes or, in our case, letters) is usually privilege escalation, and a kernel exploit is great for that. So if the above program is exploited and the attacker can still hand the kernel something like write(exploit code), breaking out gets a lot simpler.

This is where seccomp filters are best used. Reinforcing least privilege. They directly reduce visible kernel attack surface thereby reinforcing any strong sandbox.

And Hopefully…

Right now Chrome, OpenSSH, and a few other programs have implemented these filters. It’s not too difficult to implement them and I’d really like to see it in more applications, especially running services. In an ideal world everything would have seccomp filters, since least privilege should be applied universally, but I’d settle for having a few services like cupsd running with one. The biggest issue is that third-party libraries can have compatibility issues.

What I Left Out

I didn’t go into libraries and APIs, I just kinda combined the ideas into the system calls themselves. For those interested in programming you already know what an API is and you probably know what a library is.

If I got anything wrong let me know. I’m a crap programmer and I extrapolate a lot. If you notice a gaping hole in what I’m saying point it out (be gentle) and I’ll be happy to learn something and will correct it asap.