Chrome Sandbox Change On Windows

Up until recently the renderer for Chrome tabs ran at ‘Low Integrity’, meaning it could only read/write low-integrity files and folders. Perhaps coincidentally (though I doubt it), after the Pwnium exploits broke out of the Chrome sandbox, Chrome now runs the renderer at ‘Untrusted Integrity’, meaning it can only access untrusted-integrity files and folders.

By default there actually aren’t any areas of the Windows OS that have Untrusted integrity, so this pretty much means the Chrome renderer no longer has any disk access.
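The integrity mechanic described above boils down to a “no write up” rule. Here is a toy sketch of that rule — purely illustrative, the real checks happen inside the Windows kernel, and the level constants are just names I’ve given them:

```python
# Toy model of Windows mandatory integrity levels (illustrative only; the
# real enforcement happens in the kernel's access-control path, not here).
# The core rule is "no write up": a process may only write to objects at
# or below its own integrity level.
UNTRUSTED, LOW, MEDIUM, HIGH, SYSTEM = range(5)

def can_write(process_level, object_level):
    return process_level >= object_level

# A Low-integrity renderer can still scribble on Low-integrity files:
assert can_write(LOW, LOW)
# An Untrusted renderer can only write to Untrusted objects -- and since
# no part of the filesystem is labeled Untrusted by default, that means
# effectively nowhere:
assert not can_write(UNTRUSTED, LOW)
```

The move from Low to Untrusted doesn’t change the rule, it just drops the renderer to a level that nothing on disk is labeled with.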

Google hasn’t said anything about this, which I find odd, so perhaps it’s a bug in Process Explorer? It doesn’t seem to be, but who knows.

Either way, it’s a significant restriction if they’ve managed to do this.

More On Common Sense

Imagine that a user goes looking around for a new browser. They’ve downloaded Firefox and Chrome but they’re just not satisfied. So they come across a website advertising a “cool new browser” and download it. The website says “Because the browser is new and makes lots of connections to the internet your antivirus may pick it up. Don’t worry, this is simply a false positive; we’re fully accredited and you can see that we’ve signed the installer.”

The user runs the .exe, a little “This software is signed but we don’t recognize the cert” prompt comes up and asks for admin. Makes sense; most programs ask for admin when installing.

They run the installer and a browser installs (let’s say a reskinned Firefox), but so does a malicious payload that embeds itself into the system.

No exploits were used, purely social engineering.

Most people would blame the user here. They should have known better, they should have double checked, they should have kept an AV up to date, blah blah blah.

This is stupid. Users are not capable of ‘knowing better’, nor should they be required to in order to use a system in a secure manner. We create advanced heuristics, which analyze malware at the code level and correlate it with past malware, and we still only ever catch maybe 50% of the malware without unruly false positives. Stop treating humans like they can analyze code better than an advanced heuristics engine.

Security necessarily has to be handled at the lowest possible level, i.e. hardware or kernel. There is no getting around that. You can have superfluous layers and exercise your common sense, but that’s easily bypassed (click here to find out why everyone is vulnerable) and in the end security absolutely has to come from the OS.

In this case Windows should have either detected the payload reliably or prevented the rootkit payload from installing. It should have done something.

Thankfully Microsoft has implemented things like PatchGuard and SecureBoot that limit malware without truly limiting the user, so had this user installed it on a 64-bit UEFI system the malware would have been limited to admin rights and couldn’t have bypassed too many security systems.

No, I am not advocating a walled garden. That approach doesn’t work; it limits the user, not the software. Limiting the user isn’t good because we always find a way around it and we simply won’t use the product.

To reiterate: nearly everyone gets the question of “who is to blame?” wrong. I’ve seen so few people ‘get it’ and they’ve all been (perhaps coincidentally) security researchers. The answer is always “the operating system” or “The OS and the AV” or whatever but the user should never be blamed and anyone who resorts to what amounts to victim blaming probably just doesn’t understand what security is about.

PatchGuard Should Mimic SecureBoot

PatchGuard is Microsoft’s implementation of Kernel Patch Protection, available on 64-bit Vista/7/8 systems. The idea is to prevent any code from patching the kernel, i.e. third-party code cannot modify kernel code.

This has had an immense effect on the security of Windows 64bit systems. Rootkits are far more limited in what they can do and how they can hide.

The problem here is that no one’s allowed to patch it. That means we’re locked into the Microsoft security model, which until Windows 8 has been pretty awful (integrity levels). So while a security company on 32-bit can implement its own security model and have it act at the kernel level (you need equal or higher privileges to properly intercept system calls and the like), on 64-bit they’re now just as limited as the rootkits, having to resort to other means to limit malware.

SecureBoot is another feature, this one aimed at preventing bootkits. Where SecureBoot differs is that it maintains a whitelist, so signed software can actually ‘bypass’ it (not truly a bypass, as this is the design and strength of the feature).

It would have been cool if PatchGuard had done something similar. Had Microsoft implemented a vetting system for PatchGuard certs, we’d still see security products on Windows capable of performing on par with their 32-bit counterparts.

Why I Don’t Like Anti-Executables

A somewhat new breed of security system is the anti-executable. The idea is that you only allow the programs that you trust to run and, by default, everything else cannot run. There are a few problems with this.
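The core idea can be sketched as a default-deny hash whitelist. This is only the decision logic — real products enforce it by hooking process creation in the kernel — and the file contents here are made up:

```python
import hashlib

# The anti-executable policy in miniature: execution is default-deny, and
# only binaries whose hash the user has explicitly approved may run.
approved = set()

def approve(binary: bytes) -> None:
    approved.add(hashlib.sha256(binary).hexdigest())

def may_execute(binary: bytes) -> bool:
    return hashlib.sha256(binary).hexdigest() in approved

browser = b"known-good browser build"
installer = b"cool new browser installer"   # freshly downloaded, unknown

approve(browser)
assert may_execute(browser)
assert not may_execute(installer)   # blocked -- until the user clicks Allow
```

Note where the decision point sits: the only way `installer` ever runs is if the user approves it, which is exactly the social engineering problem.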

The Principle

I believe that security cannot be ‘noisy’ or ‘interactive’, i.e. the user can’t be burdened. Anti-executable is a tool designed around user interaction. A strong policy restricts software; a weak one restricts users. Anti-executable does both.

Usability Is Nonexistent

As I’ve said above, an anti-executable program will burden the user. Users have to directly interact with it, and therefore it’s a major pain. Any time you try to run new software it will be blocked. The average user would hate it.

This, of course, means that it’s wide open to social engineering. Users don’t trust the anti-executable to make decisions, since it makes no attempt to do so; therefore an attacker really only needs to trick the user. If a user has already downloaded the file, they obviously trust it.

Of course, that still leaves drive-by downloads…

It Just Doesn’t Work

Even for drive-by downloads the anti-executable is only effective in a world where hackers don’t pay attention to it. This is not like a sandbox, where a hacker has to either rethink their entire attack method or come up with multiple new exploits for escalation – an anti-executable means pretty much the same game plan with a few slight hurdles.

Programs create new code in their virtual address space; it’s how they work. Your browser couldn’t open new tabs without this, Chrome couldn’t create a new process, etc. Any attempt to stop a malicious payload from hanging out in a process’s address space would completely break every program. What approach anti-executables actually take, I don’t know, but I suspect they intercept a few API calls – perhaps open calls or potentially VirtualAlloc.
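As a rough illustration of that kind of interception, here is a userland analogue: wrap a sensitive call so a policy check runs before the real function does. The function name and policy here are invented for illustration; kernel-side products hook real APIs rather than Python functions:

```python
# Userland analogue of API hooking: a wrapper vets every call before the
# original function runs. The "allocation API" below is a stand-in.
def make_executable(region: str) -> str:
    return f"RWX:{region}"

def hooked(fn, policy):
    def wrapper(region):
        if not policy(region):
            raise PermissionError(f"blocked: {region}")
        return fn(region)
    return wrapper

# Install the hook: everything except the obviously-bad region is allowed.
make_executable = hooked(make_executable, policy=lambda r: r != "shellcode")

assert make_executable("jit-page") == "RWX:jit-page"   # legitimate JIT still works
blocked = False
try:
    make_executable("shellcode")
except PermissionError:
    blocked = True
assert blocked
```

The weakness is also visible here: the policy has to tell good allocations from bad ones, and an exploit that already runs inside the process can often just avoid the hooked call.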

All exploits start out in the virtual address space anyways so the hacker doesn’t have to take any extraordinary steps here. They’ve just overflowed some buffer or whatever, they’ve created their ROP chain, and now it’s just a matter of either stealing what they need and getting out or, if they want persistence, they can hop to other processes (apparently quite easy to do) or use some other technique to write to the disk.

Conclusion

An anti-executable is, at best, a half-decent layer of security. It sorta kinda attempts to prevent persistent malware, but I can’t see it being too effective. Current anti-executables have already been bypassed multiple times in PoCs, though I’ve only seen one malicious exploit that stays in RAM – using reflective DLL injection, if I recall correctly.

If you combine something like a sandbox and other strong policies with an anti-executable I suppose it could be worthwhile, but at the moment the only reason it’s so effective (at preventing drive-bys) is purely that no one uses it and hackers don’t really care about it.

I mean, yeah, it’s better than nothing. But if you have to say “better than nothing” about something it says a lot.

P.S. I hate outbound firewalls for this reason too. Process A can create a thread in process B (lol windows) and I really don’t know how a firewall that blocks A but allows B is going to do a thing about that. That, and user interaction is poor security. Outbound firewalls are also a half-decent layer, but if the malware is already on your system, potentially with root, you’re quite likely screwed.

The Importance Of Detection

I received a comment on one of my articles recently about antiviruses being useless and I’d like to talk a bit about that. I personally do not run any antivirus software – not on Ubuntu 12.04 and not on my Windows 8 Release Preview, despite the fact that Windows 8 ships with antivirus built in (Windows Defender, essentially Microsoft Security Essentials).

Antiviruses are often considered a staple for security. The average user has an antivirus installed and that’s pretty much the central piece of security for them. It’s simply the most widely used method for security. But a lot of people, especially those with some knowledge about computer security, will tell you that antiviruses are not enough or even, as n=n+1 stated, entirely useless.

Why I Don’t Use Antivirus

I’m one of many users who doesn’t use antivirus software, and not just because I’m on Linux. The fact is that current antiviruses are stupid. The entire basis of their model is “If I don’t know it’s bad, I assume it’s good”, which isn’t inherently wrong, but you should never really assume anything is good. It should be “If I don’t know it’s bad, I assume it’s bad and take precautions when running it.” Basically, if the AV doesn’t flag the software, the software has full access to my /user/ or /home/ folders and can potentially escalate.

Antiviruses are also a bit heavy. New on-access AVs are better about this but compared to other solutions that simply hook specific APIs and otherwise use virtually no resources it’s a lot. Disk and file access goes up and I just like to keep things shaved down.

Every antivirus relies on updates. If your AV isn’t up to date you’re vulnerable; it’s like trying to stay patched, except attackers are creating new malware a thousand times an hour. And heuristics aren’t an answer with the current model: tuned too low they’re useless, tuned too high they’re bothering the user every five seconds with false positives.

Speaking of false positives, they all have them, and as soon as a user gets one single false positive the entire antivirus becomes virtually useless at protecting against social engineering. Social engineering is all about trust. If a user downloaded the file, they already trust the file; the antivirus’s job is to be trusted more, and every false positive seriously degrades that trust.
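The update treadmill has a simple mechanical cause: an exact-match signature misses any trivially modified build. A sketch — real engines match byte patterns and heuristics rather than whole-file hashes, and the samples here are made up — but the treadmill is the same:

```python
import hashlib

# Why signature databases decay: an exact-match signature only catches the
# exact build it was made from, so the database has to chase every repack.
signature_db = {hashlib.sha256(b"EVIL-PAYLOAD-build-1").hexdigest()}

def av_flags(sample: bytes) -> bool:
    # default-allow: anything the database doesn't recognize is "good"
    return hashlib.sha256(sample).hexdigest() in signature_db

assert av_flags(b"EVIL-PAYLOAD-build-1")        # yesterday's sample: caught
assert not av_flags(b"EVIL-PAYLOAD-build-2")    # this hour's repack: runs freely
```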

Why I Like The Idea

The idea of an antivirus is noble, and I believe inherent to a proper security policy (which doesn’t currently exist). Antiviruses attempt to make decisions about things that users are incapable of judging. As I said above, if a user has downloaded a file, that means they trust it. An antivirus tries to get the user to stop trusting it. It’s a good thing – just a horrible, horrible implementation that hasn’t gotten better despite years of issues.

Heuristics are necessary for true security. Decision making is inherent to all security because everything comes down to a user’s decisions – visit the website or not, download the file or not, run the file or not, admin rights or not, etc. Users are not (and never will be, no matter how much education) capable of making these decisions. Heuristics act on a level that we cannot; they can perform code analysis and behavioral analysis and correlate trends in malware with what they see. Our brains are amazing learning machines, but they’re tuned for survival and reproduction – leave file analysis to the experts.

So while I absolutely think that heuristics are not just important but necessary, I wouldn’t touch an AV with a ten-foot pole right now. They’re useless against a targeted attack, not all that useful even against automated attacks, and generally a pain in the ass.

That said, I also wouldn’t ever tell an average user to turn their AV off. Not on Windows at least.

Why I Sandbox Chrome With AppArmor

Google Chrome is a browser designed with least privilege in mind. The Chrome multiprocess architecture sandboxes each tab, the renderer, the GPU, and extensions, and has them use IPC to talk to the ‘browser’ process, which runs with higher rights. The idea is that all untrusted code (websites) is dealt with at the lowest possible level (the renderer has virtually no rights) and the renderer then deals with the trusted browser process. It’s very effective – there hasn’t been a single Chrome exploit in the wild.
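The broker/renderer split can be sketched in a few lines. This models only the shape of the design — Chrome’s real IPC layer and policy are far richer, and the paths here are invented:

```python
# Toy broker model: the sandboxed renderer holds no resource access of its
# own and must ask the privileged browser process, which enforces policy.
RENDERER_MAY_READ = {"cache/tile.png", "fonts/sans.ttf"}

def browser_process(request: str) -> str:
    if request not in RENDERER_MAY_READ:
        raise PermissionError(request)      # policy lives on the trusted side
    return f"<contents of {request}>"

def renderer(path: str) -> str:
    return browser_process(path)            # never touches the disk itself

assert renderer("cache/tile.png") == "<contents of cache/tile.png>"
denied = False
try:
    renderer("/home/user/.ssh/id_rsa")      # compromised renderer tries its luck
except PermissionError:
    denied = True
assert denied
```

The point of the design is that even a fully compromised renderer is stuck asking a process that says no.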

On Linux the Chrome sandbox makes use of a chroot, seccomp mode 2 filters, a SUID helper, and a few other techniques. On the surface this seems really secure; the problem is that the documentation is outdated and not nearly as clear as the Windows documentation.

To use chroot you need root, so for the browser process to chroot the other processes it needs root. Chrome seems to find a way around this using a SUID helper that runs as root under a separate name – I don’t really know; again, the documentation doesn’t cover this at all.

Basically, it sounds really strong but if I don’t understand something I can’t consider it secure.

That’s why I AppArmor Chrome. I know how AppArmor works, I know its track record, and I know what my profile allows and what it doesn’t. And I know that even if Chrome is running as root, my AppArmor profile will limit it.

I would post my AppArmor profile for Chrome up here, but it’s fairly specific to my needs. For those of you looking to sandbox Chrome, make sure you use separate profiles for the sandbox, Chrome itself, and the Native Client bootstrap.

One Final Post About SecureBoot?

I did a post highlighting the positive side of things and then a very negative M$$$-bashy type post. I want something I can point to that at least makes an attempt at being fair and balanced, with enough information for the reader to make a decision – so here it is.

What Is SecureBoot?

SecureBoot is a UEFI protocol that blocks anything that isn’t digitally signed from running before the operating system starts. Essentially, untrusted code cannot start before trusted code. This directly addresses an entire class of malware and attacks that we already see on systems in the real world. On a SecureBoot system, such malware could not start up because it is not digitally signed.

Windows 8 (currently in Release Preview) uses SecureBoot by default on systems that have “Windows 8 Approved” hardware. This means that, by default, these systems will only boot code that’s been digitally signed.

You can disable SecureBoot on x86 devices, but not on ARM.

So How Does Linux Fit Into This?

Linux, in a SecureBoot environment, is considered untrusted code. It isn’t signed, therefore it can’t boot. Thankfully Microsoft has mandated that all x86 devices must allow the user to disable SecureBoot, and users will also be able to sign software with their own keys. You can also purchase a Microsoft signature.

The problem is that, while Linux is not entirely locked out, it’s still discouraging. As a user you have two options:

1) Disable a security feature (potentially difficult to do)

2) Go through the procedure to sign your software with your own key (almost definitely very difficult to do)

And as a developer your options are:

1) Tell anyone who wants to use your OS to disable a major security feature (discouraging)

2) Pay 99 dollars to VeriSign for a Microsoft signature.

These options aren’t good. Microsoft has not locked Linux out but it’s now more difficult for small Linux distros to gain members and it’s more difficult for users to make choices about which distro to use.

And, to reiterate, Linux is entirely locked out of Windows 8 ARM devices.

It’s worth noting that other distros cannot simply use Fedora’s bootloader. The entire chain of trusted software must be signed, including kernels and modules. This is what complicates things for distros. I personally run my own kernel, so this complicates things a ton for me, as I’d now have to go through the process of signing my own kernel and modules and blah blah blah every damn time (well, not really, I don’t have UEFI, but I would).
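The chain-of-trust requirement can be modeled in a few lines. This is a toy: real UEFI SecureBoot uses X.509/Authenticode signatures and key databases, not a shared HMAC key, but the “one unsigned stage halts the boot” logic is the point:

```python
import hashlib, hmac

# Toy model of a SecureBoot-style chain of trust: every stage in boot
# order must carry a valid signature or the boot fails. The key material
# here is made up; real firmware verifies certificate signatures instead.
PLATFORM_KEY = b"platform-key"

def sign(blob: bytes) -> bytes:
    return hmac.new(PLATFORM_KEY, blob, hashlib.sha256).digest()

def boot(stages) -> bool:               # [(blob, signature), ...] in order
    return all(hmac.compare_digest(sign(blob), sig) for blob, sig in stages)

shim, kernel, module = b"bootloader", b"distro kernel", b"kernel module"
assert boot([(shim, sign(shim)), (kernel, sign(kernel)), (module, sign(module))])
# A self-built kernel with no recognized signature breaks the whole chain:
assert not boot([(shim, sign(shim)), (kernel, b"\x00" * 32)])
```

That second assertion is exactly the custom-kernel problem: a signed bootloader buys you nothing if the thing it loads isn’t signed too.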

So Is There A Bright Side?

There is, and in the spirit of fair and balanced I will delve into it.

SecureBoot is actually a really awesome feature. It prevents cold boot attacks* on disk encryption, it seriously restricts malware, and it’s actually implemented in a not-totally-horrible way (we can sign things! Way better than PatchGuard!)

*by preventing immediate loading of a livecd/usb for it. It also prevents bootkits, which bypass encryption.

Microsoft is actually subsidizing VeriSign keys so that they’re only 99 dollars (SSL certs can be 200-300 dollars and only last 1-3 years) so that’s pretty nice I guess…

And Linux distros are in fact already working on implementing SecureBoot, which will make transitioning to Linux (well, to some distros of Linux) as smooth as ever while still providing a really fantastic security feature. Fedora has already confirmed it’s working on it and Canonical is likely to announce the same soon.

SecureBoot is actually one of the better security protocols to come about. It’s not some silly little thing to block out mere theoretical attacks, it’s legitimately a strong layer of security.

How Should I Be Feeling?

I can’t really tell you how to feel about this situation. Some people are just happy for the security and are fine with using a big name distro and others are outright pissed at Microsoft and calling for their heads on a plate. But it’s my blog so I’m going to tell you how I feel…

Honestly, I’m really into security, so part of me is happy to see it happen… but it feels very forced. I would have preferred to see this come about naturally. If SSL had come about naturally we probably wouldn’t have all of the problems we see today, with CAs just ‘tacked on’ as a last-resort, “couldn’t think of anything better, had to rush it” type deal. If the community had openly discussed how to do this in a way where everyone benefits, I think things could not only have gone smoother but we would also have ended up with a more secure product. SecureBoot as an idea is amazing – one of the best ideas for security in the last few years, really – but this is not the proper process for implementing it.

My 2 cents, I think this covers everything.

LiveCD’s Are Not Security

I see many guides on the internet advocating a LiveCD for security – not specific distros, not a LiveUSB, just “Use a LiveCD for your online banking to protect yourself.” I’m going to highlight exactly why they aren’t just useless for security but actually detrimental in some situations.

Most LiveCDs Are Not Built For Security

Most distros provide a LiveCD as a way to test out the system. They are not designed for security, nor do they make any attempt to be more secure than a default installation – in fact they explicitly make no such attempt, because they want users to get exactly the same experience as a default installation.

Just because you’re running from a CD does not mean an attacker is limited to that CD. Most LiveCDs will give full rights to the hard drive and all devices.

On top of that, most LiveCDs run either as root by default, or with a default root password, or with no root password at all, meaning an attacker can gain root without even trying.

LiveCDs Necessitate Dangerous Sessions

If you’re using a LiveCD for security, it’s probably for banking or some such thing – a sensitive session. So while the argument for a LiveCD is that persistence isn’t possible (except on most it is; but this applies to all LiveCDs), persistence is entirely unimportant here. If a hacker gains access to your LiveCD session, they’re gaining access to everything they need. Even if you shut down right after the session and the hacker wasn’t able to install to the drive, you’re still screwed, because they don’t care about persistence.

LiveCDs don’t make this more dangerous; it’s just a false sense of security, because persistence is not the only goal, and in the case of a LiveCD session it’s pretty unimportant.

LiveCDs Can Not Update

If I burned my LiveCD a month ago, there’s a month of vulnerabilities on it. My only option is to burn a new CD every time a patch comes out, which is costly and ineffective. A LiveUSB solves this issue to a large extent, though.

False Sense Of Security

Because people think that persistence matters, that Linux is unhackable, and that running from a CD cuts off access to devices, they put faith in a broken idea. A false sense of security is going to do serious damage, because a user will think they can go onto an insecure network with a LiveCD, or stop worrying about other issues.

So if you really want security, a LiveCD is not the way to go. LiveUSBs solve pretty much all of these issues when used with the right distro, so I suggest you look into that. Leave LiveCDs for testing distros and saving Windows.

Most of this also applies to VMs actually but it gets more complicated with them.

The Definitive Guide For Securing Chrome

This is Part 2 in a series where I’ll be detailing various settings for specific programs and operating systems. For Part 1 (Firefox) click here. I won’t get to do the Ubuntu/ Windows guides today as both of those will probably take days on their own – don’t expect them before Monday.

Chrome

Google Chrome is based on the open source Chromium project. It differs in that it includes Adobe Flash Player, a PDF viewer, an auto-updater, and support for closed source codecs. Chrome makes use of a sandbox based on OS-provided mandatory access control: on Linux a SUID helper, PID namespaces, and a chroot sandbox with mode 2 seccomp filters, and on Windows various levels of Integrity Access Control.

Chrome is the browser that I consider to be most secure and in this guide I’ll be showing how to lock it down further.

I am choosing Chrome over Chromium because it includes Flash and handles updates automatically.

Privacy Settings

Chrome enables certain features that users may feel pose a privacy concern. You can enable and disable these features in Chrome -> Settings -> Advanced Settings.

[screenshot]

Those are my specific settings but you can enable/disable as you please. See this link by Matt Cutts to understand Chrome’s communications with Google.

To make Chrome more private click on the Content Settings.

Chrome allows for a fair level of control over what websites can and cannot do. You can disable third-party cookies entirely and you can blacklist/whitelist sites from setting cookies at all.

[screenshot]

Next you can type about:flags into the URL bar.

Go enable the feature labeled:

Disable sending hyperlink auditing pings.

Enabling this flag disables hyperlink auditing pings, which can be used to track users.

LastPass

Chrome does not include a master password feature so you’ll have to use LastPass for something similar. I’ve posted a guide to setting up LastPass here.

Adblock Plus

As Chrome does not yet implement a Do Not Track feature, if you’d like that functionality you need to install Adblock Plus, which will block ads and tracking.

I also suggest you use this filter to block tracking.

UPDATE: Chrome now supports Do Not Track in the Privacy settings.

Security Settings

Credit to m00nbl00d here.

We can set Chrome to block JavaScript globally and then allow it by top-level domain (e.g. .com, .org). This means we can block JavaScript on many sites without it bothering us. By blocking JavaScript on domains like .ru and .cn we actually block a fair number of pages that could otherwise be used against us.
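The decision logic of the trick, sketched. Chrome actually expresses this with content-settings patterns like [*.]com; this just models the lookup, with a hypothetical whitelist:

```python
# m00nbl00d's trick as policy logic: JavaScript is blocked globally, then
# allowed only for whitelisted top-level domains. No prompts, no user
# decisions at page-load time.
ALLOWED_TLDS = {"com", "org", "net"}

def javascript_allowed(host: str) -> bool:
    return host.rsplit(".", 1)[-1] in ALLOWED_TLDS

assert javascript_allowed("example.com")
assert javascript_allowed("news.example.org")
assert not javascript_allowed("malicious.ru")   # blocked by default, silently
```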

[screenshot]

Notice that I’ve done the same thing with plugins. Something I personally like to do is set Click To Play, and not whitelist any sites. This is a wonderful way to prevent attacks. My recommendation is Click To Play and no whitelist.

[screenshot]

HTTPS-Everywhere

HTTPS-Everywhere is an extension developed by the EFF (Electronic Frontier Foundation) that aims to force HTTPS on all sites that make it available.

Many sites, like WordPress, offer HTTPS but don’t default to it. HTTPS-Everywhere will block and redirect requests so that you end up on the HTTPS version.
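The rewriting idea, sketched. The real extension ships per-site rulesets with regex rewrites; this hypothetical version only swaps the scheme for a hard-coded list of hosts:

```python
from urllib.parse import urlsplit, urlunsplit

# Upgrade-before-send: requests to hosts known to support TLS are rewritten
# to https before they ever leave the browser. The host list is made up.
HTTPS_CAPABLE = {"wordpress.com", "example.org"}

def upgrade(url: str) -> str:
    parts = urlsplit(url)
    if parts.scheme == "http" and parts.hostname in HTTPS_CAPABLE:
        return urlunsplit(("https",) + parts[1:])
    return url

assert upgrade("http://wordpress.com/read") == "https://wordpress.com/read"
assert upgrade("http://no-tls.example/") == "http://no-tls.example/"  # left alone
```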

HTTPS means that the traffic between you and the server is encrypted. That means that no one besides you and the server gets to read or manipulate the data.

This prevents MITM attacks that can be used to sniff passwords or even compromise the machine by redirecting your request to an exploit page.

HTTPSwitchBoard

HTTPSwitchBoard is another Chrome extension aimed at providing a more private and secure browser. The extension allows you to limit requests that the browser makes for a wide variety of content – you can allow a website to load its CSS/images and nothing else, or add in scripting, plugins, video tags, etc on a per-request basis.

It’s quite easy to use, maintains a great blacklist that makes whitelisting safe and easy, and is much faster than conventional content blockers.

https://github.com/gorhill/httpswitchboard


AppArmor (Linux Only)

Chrome does not have an AppArmor profile by default on any distro that I know of. You’ll have to make one, so have a look at this guide.

Chrome already makes use of a powerful sandbox on Linux, but adding AppArmor is a good idea. There isn’t a ton of up-to-date documentation on the Linux sandbox, so while we can gather that it’s pretty strong, we shouldn’t blindly trust it; therefore AppArmor is a very good idea. What we do know is that the Chrome sandbox makes use of chroot, a call that requires root privilege, so I’m not sure how they accomplish this (I think they use a separate UID for this and then drop from root), but either way I don’t want anything that can chroot and chmod having access to more of my system than it needs.

Seccomp (Linux Only)

Chrome now uses Seccomp filters for plugins. Read about seccomp here.
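To see seccomp bite, here’s a minimal demonstration of the older strict mode (mode 1), which is much simpler than the BPF filters (mode 2) Chrome uses: after the prctl call, any syscall besides read, write, _exit, and sigreturn gets the process killed. Linux only, and run it in a throwaway process:

```python
import ctypes, os, signal

# seccomp strict mode via prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT):
# once set, the kernel SIGKILLs the process on any disallowed syscall.
PR_SET_SECCOMP, SECCOMP_MODE_STRICT = 22, 1

pid = os.fork()
if pid == 0:                                  # sandboxed child
    libc = ctypes.CDLL(None, use_errno=True)
    if libc.prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT, 0, 0, 0) != 0:
        os._exit(111)                         # kernel/config doesn't allow it here
    os.open("/etc/hostname", os.O_RDONLY)     # forbidden syscall: child is killed
    os._exit(0)                               # never reached under seccomp
else:
    _, status = os.waitpid(pid, 0)
    killed = os.WIFSIGNALED(status) and os.WTERMSIG(status) == signal.SIGKILL
    assert killed or (os.WIFEXITED(status) and os.WEXITSTATUS(status) == 111)
```

Mode 2 keeps the same kill-on-violation enforcement but lets the process install its own BPF filter over which syscalls (and arguments) are allowed, which is what makes it usable for something as complex as a renderer.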

PPAPI Flash Player

UPDATE: Chrome now uses the PPAPI Flash Player by default, which runs in a very powerful sandbox. Make sure Flash is using only PPAPI in chrome://plugins.

Remember

Chrome doesn’t update anything other than itself and Flash so make sure to keep your Java, Silverlight, or any other plugins up to date as well as the underlying operating system. And make sure to set your plugins to Click To Play.

And Of Course…

If I’ve missed anything, let me know. I don’t think I’ve missed anything worth putting in. I’ve purposefully left ScriptNo (now SafeScript) out, as I can’t attest to it actually working correctly 100% of the time and it lacks many important features built into NoScript. I think that m00n’s JavaScript trick works fine.

Patching Really Is Necessary

There are certain things in the tech world that go from Myth A to Myth B. The “GHz” myth is one of these: a CPU’s clock speed is measured in GHz, and people used to use it as the go-to benchmark for performance, ignoring everything else. Now people go around saying that GHz doesn’t matter at all, which is equally stupid.

I see this with patching. Patching used to be the go-to practice for keeping an application secure. A program that was quick to patch was more secure and that was a way to measure security. Now people pretend that patching doesn’t matter – that if you use techniques like ASLR/DEP and you sandbox your applications you don’t need to worry. I see this all over.

This is incorrect. Patching is an invaluable layer in any security setup and I think the latest Chrome exploit shows why.

Google Chrome makes use of ASLR (very strong ASLR), DEP, and SEHOP. It has a fairly finely grained sandbox for each process on Windows. It’s a nice mixture of policy and technology.

And yet it’s still hackable. No matter how much policy you have, it will have flaws. No matter how many memory-protection techniques you implement, there will be ways around them. Do those methods make things way more secure? Absolutely – there’s never been a single exploit in the wild that bypasses Chrome’s sandbox, even its relatively weak Flash sandbox.

But if you’re looking for defense in depth you’d better patch, because if you’re running Chrome 14 there have been a thousand holes since then, and it’s simply a matter of chaining the right ones together.

And this applies to everything. On Linux I’m running Chrome, which implements an incredibly secure sandbox, highly reinforced by the patches I make to my kernel. But if I’m running a super old unpatched version of Chrome, all an attacker has to do is google some exploits and chain them together.

The cost of attacking a user is drastically lower when the exploit code is already available and there’s documentation on the vulnerability. By patching you force the attacker to find a new vulnerability, and in the case of a program like Chrome you actually end up forcing them to come up with a dozen vulnerabilities.

There is one simple reason why the entire threat landscape would have to change if Linux were suddenly the most popular OS. It’s not some magic memory technique or sandbox, it’s patching. All of my applications are always up to date on Linux, on Windows they aren’t. And hackers take huge advantage of that.

So do yourself a favor. Keep your system up to date.