Sandboxing: Conclusion

In total I’ve covered five methods for sandboxing code. These are certainly not the only methods, but they’re mostly simple to use, and they’re what I’ve personally used.

A large part of this sandboxing was only possible because I built the code to work this way. I split everything into privileged and unprivileged groups, and I determined my attack surface. By placing the sandboxing after the privileged code runs and before the attack surface is exposed, I minimized the risk of exploitation. Considering security before you write any code makes a very big difference.

One caveat here is that SyslogParse can no longer write files. What if, after creating rules for iptables and apparmor, I want to write them to files? It seems like I have to undo all of my sandboxing. But I don’t – there is a simple way to do this. All I need is to have SyslogParse spawned by another privileged process, and have that process get the output from SyslogParse, validate it, and then write that to a file.

One benefit of this “broker” process architecture is that you can actually move all of the privileged code out of SyslogParse. You can launch it as another user, inside a chroot environment, and pass it a file descriptor or buffer from the privileged parent.
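
Here’s a rough sketch of that broker pattern in C – a toy, not the actual SyslogParse code: the output path is a placeholder, the validation step is only a comment, and the worker’s own privileged setup is elided.

#include <err.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fds[2];
    if (pipe(fds) != 0)
        err(1, "pipe failed");

    pid_t pid = fork();
    if (pid == -1)
        err(1, "fork failed");

    if (pid == 0) {                      /* child: the sandboxed worker */
        close(fds[0]);
        dup2(fds[1], STDOUT_FILENO);     /* worker's stdout feeds the pipe */
        close(fds[1]);
        /* drop capabilities, chroot, setuid, seccomp... then: */
        execl("/usr/bin/syslogparse", "syslogparse", (char *)NULL);
        err(1, "execl failed");
    }

    /* parent: the privileged broker; it never parses anything itself */
    close(fds[1]);
    FILE *out = fopen("/etc/iptables.rules", "w");   /* placeholder path */
    if (out == NULL)
        err(1, "fopen failed");

    char buf[4096];
    ssize_t n;
    while ((n = read(fds[0], buf, sizeof(buf))) > 0) {
        /* a real broker would validate buf here before trusting it */
        fwrite(buf, 1, (size_t)n, out);
    }
    fclose(out);
    close(fds[0]);
    return waitpid(pid, NULL, 0) == -1;
}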

The downside is that the parent must remain root the entire time, and flaws in the parent could lead to it being exploited – but attacks like this should be difficult, as the broker code would be very small.

Hopefully others can read these articles and apply the techniques to their own programs. If you build a program with what I’ve written in mind, it’s very easy to write sandboxed software, especially with a broker architecture. You’ll make an attacker miserable if you can make use of all of this – their only real course of action is to attack the kernel, and thanks to seccomp you’ve made that a pain too.

Before you write your next project, think about how you can lock it down before you start writing code.

If you have anything to add to what I’ve written – suggestions, corrections, random thoughts – I’d be happy to read comments about it and update the articles.

Here’s a link to all of the articles:

Seccomp Filters: http://www.insanitybit.com/2014/09/08/3719/

Linux Capabilities: http://www.insanitybit.com/2014/09/08/sandboxing-linux-capabilities/

Chroot Sandbox: http://www.insanitybit.com/2014/09/08/sandboxing-chroot-sandbox/

Apparmor: http://www.insanitybit.com/2014/09/08/sandboxing-apparmor/

And here’s a link to the GitHub for SyslogParse:

https://github.com/insanitybit/SyslogParser

Sandboxing: Apparmor

This is the fifth installment in a series on various sandboxing techniques that I’ve used in my own code to restrict an application’s capabilities. You can find a shorter overview of these techniques here. This article will be discussing sandboxing with Apparmor.

Mandatory Access Control:

Mandatory Access Control (MAC), like Discretionary Access Control (DAC), is meant to define permissions for a program. Users and groups are DAC. But what if you want to confine a program that runs with full root? As discussed, root with full capabilities is quite dangerous – and in the case of SyslogParse quite a few of those capabilities are necessary.

Apparmor is a form of Mandatory Access Control implemented through the Linux Security Module hooks in the Linux kernel. MAC is “administrator” defined policy, and can confine even root applications.

Apparmor is a *bit* out of scope for this series, as it doesn’t actually involve any code, but it’s still relevant.

The Code:

While Apparmor itself doesn’t add any code to SyslogParse, here’s the profile for the program.


# Last Modified: Wed Aug 13 18:57:15 2014
#include <tunables/global>

/usr/bin/syslogparse {

/usr/bin/syslogparse mr,
/var/log/* mr,

/etc/ld.so.cache mr,

/sys/devices/system/cpu/online r,

/lib/@{multiarch}/libgcc_s.so* mr,
/lib/@{multiarch}/libc-*.so mr,
/lib/@{multiarch}/libm-*.so mr,
/lib/@{multiarch}/libpthread*.so mr,

/usr/lib/@{multiarch}/libseccomp.so* mr,
/usr/lib/@{multiarch}/libcap-ng*.so* mr,

/usr/lib/@{multiarch}/libstdc*.so* mr,

}

Apparmor profiles are incredibly straightforward: there is a path, followed by one or more letters, and each letter stands for a permission.

r = read
m = map
w = write

All of this is pretty straightforward. SyslogParse gets the number of CPU cores from /sys/devices/system/cpu/online, so it needs “r” access.

It needs to read some libraries in order to function.

And that’s it. Sort of… apparmor on my system is, unfortunately, quite broken. The tools for setting profiles to enforce or complain mode crash on me (I have a lot of weird profiles that I experiment with), which is actually why I started building SyslogParse. So this profile is a bit incomplete: it still needs some capabilities defined for chroot and setuid/setgid, and possibly more file access.

Conclusion:

When enabled, an Apparmor profile begins enforcing policy as soon as the process starts. That means that, even when running as root, an attacker is always confined to the files defined in the profile. Apparmor is quite powerful, and combined with the other sandboxing techniques used it’s a very nice reinforcement – writing to the chroot, for example, is now denied throughout the process’s lifetime by both DAC and MAC.

Apparmor is, unfortunately, only available on certain distributions.

Next Up: Final Conclusion

Sandboxing: Limited Users

This is the fourth installment in a series on various sandboxing techniques that I’ve used in my own code to restrict an application’s capabilities. You can find a shorter overview of these techniques here. This article will be discussing sandboxing a program using Limited Users.

Users and Groups:

Linux Discretionary Access Control works by separating and grouping applications into ‘users’ and ‘groups’. A process running as user A is, in terms of DAC, isolated from a process running as user B.

There’s also user 0, the root user, which is a privileged user account.

Only a program with root, or with CAP_SETUID / CAP_SETGID, can change its own UID/GID. In the case of SyslogParse we have root, and we definitely want to lose it when we can.

So, after getting the file handles we need, here’s the code for dropping to a limited user account (if you’ve read the previous articles this happens right after the chroot).


if (setgid(65534) != 0)
    err(0, "setgid failed.");
if (setuid(65534) != 0)
    err(0, "setuid failed.");

Very simple. Here’s the breakdown.


if (setgid(65534) != 0)
    err(0, "setgid failed.");

setgid(65534) sets the GID to 65534. This is the “nobody” group on my system. Nobody is an unprivileged user often used by programs wanting to drop privileges. If 65534 doesn’t exist, all the better – dropping to a GID that doesn’t exist is great.


if (setuid(65534) != 0)
    err(0, "setuid failed.");

setuid(65534) is changing the user to 65534, which, as above, is the nobody user. Same as before, if the user doesn’t exist, that’s dandy.
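
One extra check worth adding (my addition, not in the original code): after dropping, verify that root can’t be regained. If the process can still setuid back to 0, the drop didn’t actually take.

#include <err.h>      /* errx */
#include <unistd.h>   /* setuid, getuid, getgid */

/* after setgid(65534) and setuid(65534): */
if (setuid(0) != -1)
    errx(1, "privilege drop failed: setuid(0) still works");
if (getuid() != 65534 || getgid() != 65534)
    errx(1, "privilege drop failed: unexpected uid/gid");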

Conclusion:

Dropping privileges is a hugely beneficial thing to do. By structuring the code as “privileged stuff done all at once, then never again,” you can drop privileges before doing anything dangerous, and there goes an attacker’s ability to escalate.

Dropping root privileges is incredibly important. The attack surface and amount of post-exploitation work an attacker can do shrinks drastically.

In the case of SyslogParse, any attack would be aimed at local privilege escalation (it does no networking), and an attacker coming from a typical compromised process would probably lose privileges by exploiting it. At this stage they are in a chroot they can neither read nor write, running as an unprivileged user and group with no capabilities; they have access to only 22 system calls, with some very-nice-to-have calls such as read() denied; and their only chance of regaining a few capabilities is exploiting the few lines of code that involve opening a file.

I was going to have the next section be on rlimit, but it’s really not important, and it’s also not viable unless you’ve built the application from the bottom up to never write to a file – which will typically involve a brokered architecture.

Next Up: Apparmor

Sandboxing: Linux Capabilities

This is the second installment in a series on various sandboxing techniques that I’ve used in my own code to restrict an application’s capabilities. You can find a shorter overview of these techniques here. This article will be discussing Linux Capabilities.

Intro To Linux Capabilities:

On Linux you’re likely familiar with the root user. Root is the ‘admin’ account of the system; it has privileges that other processes don’t. But what you may not have known is that those privileges given to root are actually enumerable and defined. For example, root has the capability CAP_SYS_CHROOT, which is what allows it to call chroot().
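
Because they’re enumerable, you can ask which capabilities your process currently holds. Here’s a small sketch of my own using cap-ng (the same library used below – compile with -lcap-ng):

#include <cap-ng.h>
#include <linux/capability.h>
#include <stdio.h>

int main(void) {
    if (capng_get_caps_process() != 0) {  /* read our current capability sets */
        fprintf(stderr, "failed to read capabilities\n");
        return 1;
    }
    if (capng_have_capability(CAPNG_EFFECTIVE, CAP_SYS_CHROOT))
        printf("we hold CAP_SYS_CHROOT - chroot() will work\n");
    else
        printf("no CAP_SYS_CHROOT - chroot() would fail with EPERM\n");
    return 0;
}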

Let’s say a program needs root, but only because it calls chroot() at some point. Instead of granting it every root privilege, you could grant it only CAP_SYS_CHROOT and withhold the rest.

So, if your program has to run as root (as mine does), you can actually drop some of your root privileges while maintaining others. How effective is this? Jump down to the conclusion below to see – hint: it can range from great to awful.

The Code:

#include <linux/capability.h>
#include <cap-ng.h>

capng_clear(CAPNG_SELECT_BOTH);
capng_updatev(CAPNG_ADD, (capng_type_t)(CAPNG_EFFECTIVE | CAPNG_PERMITTED), CAP_SETUID, CAP_SETGID, CAP_SYS_CHROOT, CAP_DAC_READ_SEARCH, -1);
capng_apply(CAPNG_SELECT_BOTH);

Let’s break this down line by line.


#include <linux/capability.h>
#include <cap-ng.h>

This includes the headers for Linux capabilities (so that you can refer to them by name) and cap-ng, the library we’ll be using to actually drop privileges.


capng_clear(CAPNG_SELECT_BOTH);

This line will clear all privileges. If you were to apply this, you’d have essentially dropped all root privileges.


capng_updatev(CAPNG_ADD, (capng_type_t)(CAPNG_EFFECTIVE | CAPNG_PERMITTED), CAP_SETUID, CAP_SETGID, CAP_SYS_CHROOT, CAP_DAC_READ_SEARCH, -1);

This line is where we state which capabilities we’d like. After the capng_clear() we have none, but the program does need a few.

The first parameter says we’re adding rules, and the second says they apply to the effective and permitted capability sets.

The next four parameters are the capabilities to allow.

The last parameter is a -1, which lets capng know that your list of capabilities is terminated.


capng_apply(CAPNG_SELECT_BOTH);

And now, with this call, the rules are applied. Only these capabilities are given to the program… “only”.

Conclusion:
This was really easy to do. Three lines of code and a large number of capabilities are gone. But, what’s left?

CAP_SETUID / CAP_SETGID: Quite dangerous, as it means you can interact with processes of other UIDs/GIDs by simply making your UID/GID the same as theirs.

CAP_SYS_CHROOT: Not as scary – you can chroot, and if you retain the ability to chroot you can then break out of that chroot.

CAP_DAC_READ_SEARCH: You can read all files that root can read. Password files, sensitive files, whatever – all yours to read.

So in a lot of ways you’re just dropping from root… to root. Even after dropping to just these capabilities you’re still quite powerful and dangerous; it’s not a very large barrier. An attacker who gains the above privileges still gains quite a lot. But, in the case of SyslogParse, all capabilities are eventually dropped.

The nice thing about capabilities is that you can drop them as soon as the program starts. After you’ve gotten your file descriptors and all that, you can go ahead and start the real sandboxing, then do the actual dangerous stuff. In this case, I had to keep a lot of scary permissions. But for someone else, maybe all they need is to bind port 80 – in that case you keep just CAP_NET_BIND_SERVICE, drop everything else, and that’s pretty nice (see the sketch below).
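
As a sketch of that scenario (not SyslogParse code – the function name is mine), keeping only CAP_NET_BIND_SERVICE with the same cap-ng calls might look like this:

#include <cap-ng.h>
#include <err.h>
#include <linux/capability.h>

/* keep only the ability to bind low ports; drop every other root privilege */
static void keep_only_net_bind(void) {
    capng_clear(CAPNG_SELECT_BOTH);
    capng_update(CAPNG_ADD,
                 (capng_type_t)(CAPNG_EFFECTIVE | CAPNG_PERMITTED),
                 CAP_NET_BIND_SERVICE);
    if (capng_apply(CAPNG_SELECT_BOTH) != 0)
        errx(1, "capng_apply failed");
}

A web server would call this early, bind its socket to port 80, and could then drop CAP_NET_BIND_SERVICE as well.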

It honestly feels like a “Well, it’s better than giving it full root” in this case, which is bittersweet. It still pretty much feels like full root. But uh, hey, it’s better than giving it full root.

Sandboxing: Seccomp Filters

This is the first installment in a series on various sandboxing techniques that I’ve used in my own code to restrict an application’s capabilities. You can find a shorter overview of these techniques here. This article will be discussing seccomp filters.

What is Seccomp? An Introduction:

System calls are your way of asking the kernel to do something for you. You send a message saying “Hey, open a file for me” and it’ll probably do it for you, barring permission errors or some other issue. But, if you can talk to the kernel, you can exploit the kernel. Many vulnerabilities are found in kernel system calls, leading to full root privileges – bypassing sandboxing techniques like SELinux, Apparmor, namespaces, chroots, you name it. So, how do we deal with this without patching the kernel, as a developer? Seccomp filters.

Seccomp is a way for a program to register a set of rules with the kernel. These rules deal with the system calls a program can make, and which parameters it can send with them.

When you create your rules you get a nice overview of your kernel attack surface – those calls are the ways your attacker can attack the kernel. On top of that, you’ve just reduced that attack surface: if an attacker requires system call A and you’ve only allowed system calls B through D, they can’t attack with system call A.

Another nice benefit is the ability to restrict what the program can do. If your program never writes a file, don’t give it access to the write() system call. Now you’ve reduced the kernel attack surface, and you’ve also stopped the program from writing files.

The Code:

Seccomp code is fairly simple to use, though I haven’t found any really good documentation. Here is the seccomp code used in my program, SyslogParse, to restrict its system calls.


scmp_filter_ctx ctx;
ctx = seccomp_init(SCMP_ACT_KILL);

seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(rt_sigreturn), 0);
seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(exit), 0);
seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(exit_group), 0);
seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(tgkill), 0);

seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(access), 0);

seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(write), 2,
SCMP_A0(SCMP_CMP_GE, 1),
SCMP_A0(SCMP_CMP_LE, 2)
);

seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(fstat), 0);
seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(open), 0);
seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(close), 0);

seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(brk), 0);

seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(mprotect), 1,
SCMP_A2(SCMP_CMP_NE, PROT_EXEC)
);

seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(mmap), 2,
SCMP_A0(SCMP_CMP_EQ, NULL),
SCMP_A5(SCMP_CMP_EQ, 0)
);

seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(munmap), 2,
SCMP_A0(SCMP_CMP_NE, NULL),
SCMP_A1(SCMP_CMP_GE, 0)
);

seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(madvise), 0);
seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(futex), 0);
seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(execve), 0);
seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(clone), 0);

seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(getrlimit), 0);
seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(rt_sigaction), 0);
seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(rt_sigprocmask), 0);
seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(set_robust_list), 0);
seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(set_tid_address), 0);

if(seccomp_load(ctx) != 0) //activate filter
err(0, "seccomp_load failed");

I’ll go through this bit by bit.


scmp_filter_ctx ctx;
ctx = seccomp_init(SCMP_ACT_KILL);

This should be fairly simple to understand if you’ve written basically any code. This instantiates the seccomp filter, “ctx”, and then initializes it to kill on rule violations. Simple.


seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(futex), 0);

This line is a rule for the “futex” system call. The first parameter, “ctx”, is our instantiated filter. “SCMP_ACT_ALLOW”, the second parameter, says to allow the call when the rule matches. The third is a macro for the futex() system call, as that’s the call we want to allow through the filter. The last parameter, “0”, is how many argument comparisons we want to attach to this rule – here, none.

Simple. So this rule will allow any futex system call regardless of parameters.
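
For a rule that does constrain parameters, look back at the write() rule from the listing above. The “2” says two argument comparisons follow, and each SCMP_A0 compares argument zero – the file descriptor:

seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(write), 2,
    SCMP_A0(SCMP_CMP_GE, 1),  /* fd >= 1 (stdout) */
    SCMP_A0(SCMP_CMP_LE, 2)   /* fd <= 2 (stderr) */
);

Both comparisons must match, so write() is only allowed on file descriptors 1 and 2 – stdout and stderr.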

I chose futex in this example to demonstrate that seccomp cannot protect you from every attack. Despite the heavy amount of sandboxing I’ve done in this program, this filter will do nothing to stop attacks that use the futex system call. Recently, one vulnerability was found that could do just that – a call to futex would lead to control over the kernel. Seccomp just isn’t all-powerful, but it’s a big improvement.

Note: I found all of these syscalls by repeatedly running strace on SyslogParse with different parameters. Strace will list all of the system calls as well as their arguments and makes creating rules very easy.


if(seccomp_load(ctx) != 0) //activate filter
err(0, "seccomp_load failed");

seccomp_load(ctx) will load up the filter and from this point on it is enforced. In this case I’ve wrapped it to ensure that it either loads properly or the program won’t run.

And that’s it. That’s all the code it takes. If the program makes a call to any other system call it crashes with “Bad System Call”.
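
If you want to see that kill behavior in isolation, here’s a minimal toy example of my own (not SyslogParse; link with -lseccomp): it allows only write() and exit_group(), then makes a raw getpid() syscall, which should kill the process with “Bad system call”.

#include <seccomp.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void) {
    scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_KILL);  /* kill on violations */
    if (ctx == NULL)
        return 1;

    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(write), 0);
    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(exit_group), 0);

    if (seccomp_load(ctx) != 0)
        return 1;

    write(STDOUT_FILENO, "still alive\n", 12);
    syscall(SYS_getpid);                   /* not allowed: killed right here */
    write(STDOUT_FILENO, "unreachable\n", 12);
    return 0;
}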

Seccomp is quite easy to use and is the first thing I’d make use of if you are considering sandboxing. All sandboxing relies on a strong kernel, but as a developer you can only change your own program, and seccomp is a good way to reduce kernel attack surface and make all other sandboxes more effective.

Linux has a few hundred system calls (the exact number depends on architecture and kernel version – on x86_64 it’s over 300), and SyslogParse has dropped that down to about 22. That’s a nice drop in privileges and attack surface.

Next Up: Linux Capabilities

Writing Sandboxed Software

I wrote a program recently, SyslogParse, to display apparmor and iptables rules based on violations found in my system log. I did this because my apparmor-utils packages always break / are quite slow when going through my profiles, and digging through iptables rules in syslog was a bit of a pain too.

I decided this would be a fun project to sort of “lock down” against theoretical attacks, and I’d like to blog that experience to demonstrate how to use these different sandboxing mechanisms, as well as how they make the program more secure.

What takes place below is after the process of designing the application from a functional point of view – “what do I need this thing to do?”.

Step One: Threat Modeling

This step was a little less important for SyslogParse, as I was going to secure it regardless of real-world threats, but I’ll explain how I went about threat modeling.

The first thing I did was figure out what permissions SyslogParse needs. I know the application, by design, must read from /var/log/syslog – a file that you need root permissions to read from. So I’ll be running this as root in order to do work.

To make things easier for users who don’t log to syslog, I’ll take in a path parameter, which means someone running this program can specify an arbitrary input file. That is the attack surface – one file being taken in.

An attacker who can control content in that file can potentially escalate to root privileges.

Step Two: Seccomp Mode 2 Filters

I’ve discussed seccomp filters on my blog beforehand, but to give a short recap, seccomp filters are developer-defined rules that will dictate which system calls can be made, and do light validation on the parameters.

Seccomp filters are very simple to use, and they’re the first thing I implemented.

Here I declare the seccomp filter.

scmp_filter_ctx ctx;

Here I initialize it to kill the process when rules are violated.

ctx = seccomp_init(SCMP_ACT_KILL);

And here is an example of a seccomp rule being created.

seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(futex), 0);

In the above rule I’ve said to allow the futex system call, a call used when a program uses threads and has to set mutexes. The “0” means I have no additional verification of the system call’s arguments. In an ideal world I’d validate arguments to all of these calls, but it’s not always possible.

In the end I had about 22 calls, 3 of which I validate parameters on.

The thing about seccomp is that there’s little point in doing any other sandboxing before setting it up: without it the kernel will always be an easy target, and no matter how many sandboxes I layer on, I can’t change that from within my SyslogParse code – except with seccomp.

22 calls is quite a lot (though considerably less than it could be), and I chose futex as an example to show that, despite limiting the calls, the recent futex requeue exploit would bypass this seccomp sandboxing and all other sandboxing this program uses. There’s only so much we can do from within the context of this program.

What is nice, however, is that I now know my kernel’s attack surface. Barring flaws in the seccomp enforcement, I know how my attacker can interact with the kernel, and that in itself is quite valuable.

Step Three: Chroot

By design SyslogParse must run as root in order to read root-owned files, so that means I’ve got the chroot capability. May as well make use of it.

There’s a misconception that chroots are really poor security boundaries. This isn’t entirely false, but it’s not the whole story.

With one call I can set up a chroot environment that’s not so easy to break out of – at least it won’t be by the end of this article.

mkdir("/tmp/syslogparse/", 400);

That creates the folder /tmp/syslogparse/ with mode 0400 (note the octal notation – a decimal 400 would give entirely different permissions), meaning only root can read from it. Right now we’re root, so we can read from it, but that won’t last too much longer (about two more steps).

chroot("/tmp/syslogparse/");

The file system, as SyslogParse now knows it, is an empty wasteland that only root can read and no one can write to. A regular user would have no ability to read or write to it, which is nice because Inter-Process Communication (IPC) would require at least write access, and ideally read and write access.
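
One gotcha worth flagging here (a well-known chroot caveat, not something specific to SyslogParse): chroot() does not change the current working directory, and a cwd left outside the new root is a classic escape vector, so the call should be paired with chdir("/"). With error checking, the sequence looks something like:

#include <err.h>
#include <errno.h>
#include <sys/stat.h>
#include <unistd.h>

/* requires root (CAP_SYS_CHROOT) */
if (mkdir("/tmp/syslogparse/", 0400) != 0 && errno != EEXIST)
    err(1, "mkdir failed");
if (chroot("/tmp/syslogparse/") != 0)
    err(1, "chroot failed");
if (chdir("/") != 0)                /* don't leave the cwd outside the jail */
    err(1, "chdir failed");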

Step Four: RLimit

For SyslogParse this is a bit unnecessary, but I went with it anyways.

setrlimit() is a system call that will irreversibly limit the process in some way (an unprivileged process can lower a limit, but never raise it back). In this case, because I want to limit IPC, and because SyslogParse only ever writes to stdout, which is already open, I’m going to tell it that it cannot write anything out to a file.

struct rlimit rlp;
rlp.rlim_cur = 0;
rlp.rlim_max = 0; /* set the hard limit too – leaving it uninitialized is a bug */

setrlimit(RLIMIT_FSIZE, &rlp);

In a more literal way, I’ve told the system that my process cannot cause any file to grow beyond 0 bytes.
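
To be precise about what that means: an open() that creates a file can still succeed (an empty file is 0 bytes), but any write that would grow a file delivers SIGXFSZ, which kills the process by default (if the signal is ignored, the write fails with EFBIG instead). A toy demonstration of my own, with a made-up path:

#include <err.h>
#include <fcntl.h>
#include <sys/resource.h>
#include <unistd.h>

struct rlimit rlp = { .rlim_cur = 0, .rlim_max = 0 };
if (setrlimit(RLIMIT_FSIZE, &rlp) != 0)
    err(1, "setrlimit failed");

int fd = open("/tmp/demo", O_WRONLY | O_CREAT, 0600); /* may still succeed */
if (fd != -1)
    write(fd, "x", 1);  /* would grow the file: SIGXFSZ kills us here */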

Step Five: Dropping Privileges

The last significant step in this sandbox is to lose root. In this case, dropping to user 65534, which, at least on my system, is the ‘nobody’ user. A more ideal situation would have SyslogParse drop to a completely nonexistent user (to avoid sharing a user with another process) but I’m going with this for now.

setgid(65534);

setuid(65534);

That’s all it takes – SyslogParse is now running as the nobody user/group. No more root, and the process is within a chroot environment that it has no permissions to read or write to.

Step Six: Apparmor

I’m on elementary OS, which has apparmor. So, in my makefile I’ve put an ‘mv’ command that puts my profile into the system’s apparmor directory.

For a small program like this the apparmor profile is very simple.

# Last Modified: Wed Aug 13 18:57:15 2014
#include <tunables/global>

/usr/bin/syslogparse flags=(complain) {

/usr/bin/syslogparse mr,
/var/log/* mr,

/etc/ld.so.cache mr,

/sys/devices/system/cpu/online r,
/lib/@{multiarch}/libgcc_s.so* mr,
/lib/@{multiarch}/libc-*.so mr,
/lib/@{multiarch}/libm-*.so mr,
/lib/@{multiarch}/libpthread*.so mr,

/usr/lib/@{multiarch}/libseccomp.so* mr,
/usr/lib/@{multiarch}/libstdc*.so* mr,

}


A few library files, read access to /var/log/* (for arbitrary log files), and, because I threaded the process, read access to /sys/devices/system/cpu/online.

The real benefit of this apparmor profile is that it takes effect before any of my code runs. The rest of the sandboxing happens right after I open /var/log/syslog – there is very little code before that point, but there is some, and a compromise there would lead to full root control of the process. With the apparmor profile, the worst-case scenario is that the attacker has access only to what is listed in the profile.

Conclusion

Overall, I think that’s a fairly robust sandbox. It was mostly for fun, but it was all fairly simple to implement.

If an attacker did break into this program, the above would make things a bit annoying, though the obvious path is to simply attack one of the allowed system calls – I only validate parameters on 3 of them, so there is clearly attack surface still left.

This isn’t bulletproof, and it’s not an excuse not to test your code. I fed SyslogParse garbage files and unexpected input to make sure it failed gracefully, erroring out immediately when it came across something it didn’t know how to deal with.

Lots of fun to write, and hopefully others can make use of this to make their programs a little bit stronger.


Windows XP Support Has Ended

A long time ago I posted an article entitled Windows XP – Abandon Ship. That was nearly one year ago today. And just a few days ago XP officially stopped getting support and patches from Microsoft.

I’d like to clear up some misconceptions that people still seem to have.

You cannot be secure on Windows XP. In truth, it’s been a lost cause for quite some time, but Microsoft had been pretty good at dealing with threats through an active approach. Shatter attacks devastated XP machines due to poor privilege separation, but Microsoft addressed the issue decently with a few patches and by lowering service permissions.

Patches are not coming anymore. Support is gone. Do not expect the next big attack to be swiftly put down.

But what does that mean for you, XP user?

It could mean nothing – attackers may not care. We’ve never had such a widely used piece of software go out of support while so many people are still on it; as far as I know this is unprecedented. Predictions are meaningless – I cannot tell you what attackers will do, only what they can do.

So, as always, if you’re using XP or any unsecured system you will be playing a game of chance and not skill. It becomes ‘any attacker who wants to’ as opposed to ‘any attacker who can’ when it comes to getting into your system.

Is that a system you want to rely on?

I’ll also take this time to say that no one should be extending support for XP. Notably, Google Chrome will continue to patch XP. To me this is nothing but a false sense of security. Google Chrome relies heavily on its sandbox to protect its users, but any sandbox on Windows relies entirely on a secure operating system. So the sandbox is very clearly not a huge barrier, because the unpatched XP kernel and services can be easily leveraged for a full sandbox escape.

No one should be encouraged to use XP now. Take no pride in it – you’re gambling, that’s it.

“But I run EMET! You said EMET is great!”

EMET is awesome – and largely useless on XP. While it’s a cute way to push patch time back a little, it is by no means a significant barrier when basic memory-corruption mitigations are not even supported by the operating system.

“But I run NoScript”

I love NoScript – great piece of software. But what will you do when a kernel vulnerability in text parsing is being used in the wild? You’ll get infected.

I really have very little to say here. XP is not securable. It wasn’t a year ago, and now, more than ever, it is not.

I’m not saying you’ll get infected. I’m not saying that every XP machine will be linked to a botnet in a year. I’m saying that you are not secure, and anyone who wants to take advantage of that will not have a hard time.

Native Client Sandbox – Sandboxing Sandboxing

For those of you who don’t know, Google’s Native Client is a way for browsers to run native code within the browser. In other words, I can write a C/C++ program (or a program in any other LLVM-supported language) and run it within the browser – pretty cool! The benefits are all over the place but, basically, ChromeOS has been largely criticized for being a ‘limited’ operating system, with apps that aren’t very powerful, and NaCl provides a way for developers to create secure and powerful applications.

But NaCl isn’t the first project to try to do this. The infamous ActiveX tried beforehand and, as we all know, totally sucked in terms of security. Will NaCl be a massive hole in an otherwise secure browser? Nope. Because Google poured on the security goodness here once more. Seriously, I realize most people don’t have the monetary capabilities of Google, but they do a hell of a lot when it comes to securing products these days.

We all know by now (if you don’t, read more of my posts!) that Chrome runs in a pretty cool sandbox. On Windows sandboxing is limited and, while Chrome does an excellent job, Linux provides more tools for sandboxing that address critical issues. On Linux, even conservatively, the sandbox is very impressive. Your renderer process, the most exposed codebase, is running with no rights – it can’t interact with the kernel, it has no file access, it basically gets fonts and that’s it. It’s locked into a tight sandbox. Yet Google decided that, for NaCl, they’re going to add *yet another sandbox*, which means that all NaCl code runs within the Chrome sandbox and the NaCl sandbox. In short, the Native Client process is a PPAPI process that runs in the Chrome Renderer process, so it is limited in the same ways.

That’s pretty cool. What’s cooler is how the NaCl sandbox works (without getting into PPAPI and the proxy it’s kind of not doing it all justice, but I’m writing this spontaneously at 3am oh well!).

On x86 NaCl uses a processor-specific feature called segmentation. Segmentation – something I’ve seen used in PaX, the group that invented security techniques such as ASLR – is a method for the CPU to change which areas of address space are accessible to programs, and with what rights. Unfortunately, segmentation is not supported on other architectures, and NaCl supports ARM and 64-bit as well as x86. Just like PaX found a workaround, so did Google – the implementation differs between ARM and x86_64 but the goals are all the same. (Upon watching a video on NaCl he also skims over it – anyone know more on the documentation? It seems like for 64-bit they just use guard pages to separate the data/code ‘segments’.)

NaCl executables are built with a toolchain that does a couple of pretty interesting things. They are compiled without specific instructions – those are blacklisted and will simply not be allowed into the codebase. Interestingly, ret is banned… so instead of returning, you jmp, push, and pop. There’s also a toolchain feature that has to do with alignment of instructions; rather than get into the details, the point is that you can’t jump into the middle of an instruction sequence – you have to jump to the beginning. When the toolchain returns your assembly it has imposed a safer and saner memory model that removes the ability to exploit specific types of vulnerabilities.

NaCl also performs instruction validation. If it sees any blacklisted instructions it kills the process, naturally. It basically does a check, before runtime, on the file to ensure that it’s not trying to perform actions that shouldn’t be allowed (though if you use the toolchain these should never be built in anyways).

Again, all of the visible attack surface from a NaCl executable is also sandboxed. That means that even if I get out of the NaCl sandbox through the proxy interface or through the renderer I’m still stuck in what are essentially the strongest sandboxes currently implemented on consumer systems and I still need to leverage another attack to get out.

I’d love to take each specific area of the sandbox (like the ret removal) and just break down exactly how it works and how effective it is, but this was a post of boredom and inability to sleep. The sandbox itself is very complex, but pretty cool. I’m not quite sure how I feel about it right now, but, as an extra layer, I think it’s somewhat ideal in its goals at least. We’ll see how it works out – I’m looking forward to the next Pwnium when we’ve got NaCl built in. I’d also love to see Google add a $20,000 bug bounty reward for NaCl sandbox bypasses, like they’ve done for broker sandbox bypasses.

I probably missed a lot of stuff, most of what I’ve read was a while ago, but I’m hoping that we get more documentation soon.

Honestly, I just wish every company had the resources to do what Google does with security. NaCl was some experimental little project hack they made, and they are able to pour massive resources into fuzzing and all sorts of stuff. Really cool.

There are a few great resources on the NaCl sandbox. I’ve read as much as I can about it, but this video is pretty great: https://www.youtube.com/watch?v=5bcyuKh3__0


Browser Exploitation Expanded – NoScript, Sandboxie

So I got a message asking me to expand on my previous post on browser exploitation. The user wanted to know about how security software such as NoScript and Sandboxie would deal with a browser exploit. I’m going to just go through each one on their own and explain what an attacker would be dealing with in each case.

The scenario is that you’re running Firefox with NoScript and Firefox with Sandboxie (separately, for simplicity) and you’ve visited a malicious website where the attacker controls the entire page of content. The attacker’s goal is to exploit the browser and monetize the system.

NoScript

NoScript works in a few ways. For the purposes of this post I’ll be focusing on the scripting whitelist aspect of it, as things like HSTS/XSS won’t make a difference in our scenario.

As an attacker I’m incredibly limited by NoScript. Most exploits are going to be in the Javascript renderer or through some plugin. With NoScript I have none of that attack surface. Instead I have to resort to exploiting some other component, like a font renderer, or find a flaw in NoScript that will allow a bypass.

This limitation is significant. I can’t even start my attack unless it’s a very specific (and less common) type. So NoScript is incredibly effective here.

If, however, I trick the user into whitelisting the site (or I have hacked an already whitelisted site) my options are much better. Now I can run Javascript, and now my exploit should work just about perfectly, as long as it doesn’t rely on XSS/CSRF.

On a whitelisted site the user is partially protected, specifically against XSS/CSRF attacks, but if I control the entire site and it is whitelisted I have enough power to exploit the browser as if it weren’t there.

Sandboxie

Sandboxie is a program designed to create a copy-on-write sandbox for programs. It emulates system services and attempts to isolate the browser as best it can. As an attacker Sandboxie doesn’t come into play until I’ve actually taken over the browser.

So, I get you to click a website, I break into your browser (see the other post on browser exploitation), and now I’m in a somewhat confined environment. Anything on the system is readable by default, giving me a massive amount of valuable information, like what programs are installed, security policies, personal documents, passwords, databases, etc. Post-exploitation becomes much easier when read access is granted so gratuitously.

As an attacker I can probably already make serious money off of this user. I have their browser info, potentially passwords or hashes, I can get personal documents, I can keylog, I can read work documents, etc. But what if I want to get persistence? What if I want this to be part of my new botnet? I have to get out of the sandbox.

Now I have to get out of the sandbox if I want enough rights to hook this machine up to my botnet. How do I go about doing this? Well, thanks to the read access I’ve been given, I have a ton of info on the system, which makes local exploitation much easier. I can exploit the kernel from within the sandbox (reducing kernel attack surface on Windows is ridiculously difficult – read: not a viable approach) and break right out. Once I’m at kernel level I simply unhook Sandboxie and I own the computer – I can do whatever I want.

Depending on the sandbox configuration things can be much, much easier, or potentially more difficult (I see more weak policies than strong policies in my experience).

Conclusion

And there you have it. Two security programs that a few people have been asking me to discuss for some time. I’m avoiding talking about the programs themselves and their own attack surface, but if you read my posts you’ll be able to extrapolate.

I would say that NoScript adds a very significant layer of security and should be in every Firefox user’s browser. Sandboxie is a good choice if you’re willing to set up powerful policies and start denying read access – a default install is OK though.

Ubuntu 13.10 And mprotect() Restrictions

For a while I’ve had to keep the Restrict mprotect() option in PaX disabled because it wasn’t compatible with certain programs. It was kind of a huge pain to deal with for that reason. But I’ve finally taken the 30 seconds to just deal with it and I’ll post how.

The program that has the biggest issue with the restrictions is Unity, the program that handles your user interface on Ubuntu. So, we need to kill Unity so that we can use the paxctl program to disable mprotect restrictions.

Keep in mind that you need to enable CONFIG_PAX_PT_PAX_FLAGS in your kernel config for this.

1) Download paxctl

A simple ‘apt-get install paxctl’ is enough here.

2) Kill Unity and Xorg

This is the annoying part. Xorg just restarts every time it’s killed. So you have to run the following command:

service lightdm stop

And then hit ctrl + alt + F4.

You should now have a terminal.

3) Apply flags

Run the following (-c adds a PT_PAX header to the binary, and -m turns off the mprotect() restrictions for it):
paxctl -c /usr/bin/unity
paxctl -m /usr/bin/unity

Now you can reboot and your UI should work. You’ll have to do this for a few programs (like Chrome) as well.
From the Grsecurity wiki on mprotect() restrictions:

Enabling this option will prevent programs from
– changing the executable status of memory pages that were
not originally created as executable,
– making read-only executable pages writable again,
– creating executable pages from anonymous memory,
– making read-only-after-relocations (RELRO) data pages writable again.

You should say Y here to complete the protection provided by
the enforcement of non-executable pages.
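
As a concrete illustration of the first restriction on that list, here’s a small test program of my own (not from the wiki): it creates an anonymous writable mapping and then asks for it to become executable. On a stock kernel the mprotect() call succeeds; with PaX MPROTECT enabled it is expected to fail.

#include <stdio.h>
#include <sys/mman.h>

int main(void) {
    /* anonymous read/write mapping – allowed everywhere */
    void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    /* flipping a non-executable page to executable is what MPROTECT forbids */
    if (mprotect(p, 4096, PROT_READ | PROT_EXEC) != 0)
        perror("mprotect (denied under PaX MPROTECT)");
    else
        printf("mprotect succeeded – no MPROTECT restriction here\n");
    return 0;
}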