Against User Education – In Depth

This post is meant to be a comprehensive overview of the costs and benefits of user education in an enterprise environment (though the same applies everywhere). I have talked time and time again about users not being educated and why that’s the case; a very significant portion of my posts have been about this subject. It also seems to have gotten a lot more attention in the last few weeks, so I’m going to pull the evidence together into one definitive post.

 

Costs

The costs of a security technique are always to be considered. Sometimes the cost is performance overhead, downtime, annoying popups, money, whatever. Sometimes the costs are worth it, sometimes they aren’t, but they’re always going to be there.

So what are the costs of user education?

1) Time: Your IT staff is going to have to take time to meet with users, or at the very least write something up for them. As a single session is unlikely to be effective (more on this later) it’s more useful to have multiple sessions, and thus a significant amount of time is spent on user education.

2) Money: Your IT staff is getting paid either way. But what about the employees? They’ve got work to do – billable hours. An hour of IT is an hour you’re paying them to learn, and another hour taken away from their work. Is that a significant cost? Maybe, maybe not.

These are really the two costs of user training, as it’s not a software technique and really just involves taking time to talk to someone. The issue is that it’s not just the IT staff, who are paid for security, but your everyday employees.

 

Benefits

Simple Policies On Deaf Ears

The real meat here is the potential and perceived benefits of this training. This is less clear cut – it’s not a list of things users will or won’t do, but instead I’ll look at how likely your training was to be effective.

Let’s look at one of the simplest policies, probably a policy every company will try to enforce, and something we constantly try to teach users: use a strong password.

Do you know what we’ve learned from password dumps? It’s 2012 and these are still the most common passwords:

[Charts: most common passwords from recent password dumps]

Yes, the top password from the Yahoo password dump is 123456, and the next one is “password”, followed by “welcome”. The advice to “use a strong password” is probably the most pervasive and consistent advice in the security community. Honestly, I think it’s the number one thing people will tell you to do, and that goes triple for a corporate environment, where passwords are critical.
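Those rankings come from nothing more sophisticated than a frequency count over a leaked plaintext list. A minimal sketch of how such a chart is produced (the toy dump below is invented for illustration; real leaks contain hundreds of thousands of entries):

```python
from collections import Counter

# Hypothetical excerpt from a leaked plaintext password list.
# Real dumps (e.g. the 2012 Yahoo Voices leak) are vastly larger.
dump = [
    "123456", "password", "welcome", "123456", "password",
    "ninja", "123456", "abc123", "password", "123456",
]

# Tally and rank: exactly how "most common password" charts are built.
top = Counter(dump).most_common(3)
print(top)  # -> [('123456', 4), ('password', 3), ('welcome', 1)]
```

The depressing part is that the real top entries look just like the toy ones.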

And yet passwords have not improved. And I’m quite positive that if you look at a corporate environment you will find very similar results, but with the corporate policies ‘smushed’ on: password12!! instead of password, because someone decided to force them to use a number and symbol.
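The “smushed on” problem is easy to demonstrate. Here is a hypothetical but representative complexity checker of the “must contain a number and a symbol” variety; it happily accepts a dictionary word with the mandatory decorations bolted on:

```python
import re

def meets_policy(pw: str) -> bool:
    """A typical corporate rule: at least 8 characters,
    at least one digit, at least one symbol."""
    return (len(pw) >= 8
            and re.search(r"\d", pw) is not None
            and re.search(r"[^A-Za-z0-9]", pw) is not None)

# The policy accepts exactly what users produce in practice:
print(meets_policy("password12!!"))  # True
print(meets_policy("password"))      # False
```

Any cracking wordlist will try “password12!!” almost as early as “password”, because append-a-digit-and-symbol mutations are standard rules in cracking tools.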

Users Don’t Care, And It’s Not Irrational

A Microsoft research paper (“So Long, And No Thanks for the Externalities”) explores why exactly users keep failing to follow policies. Why is it that time and time again they ignore company advice, or so-called ‘common sense’? The answer is really very simple, and would surprise most people: they’re actually making entirely rational decisions.

Everyone performs rudimentary cost-benefit analysis every day, for any task that requires a choice leading to a consequence. Should you have some ice cream? Go for a run? Study? Play video games? In our heads we make simple assumptions like “well, I can study, and it won’t be very fun, but it’ll pay off later” and come to a conclusion.

Users in a corporate environment are no different. You tell them to come up with a strong password and they ask themselves: “I can use a strong password, and maybe it prevents something that was never going to happen anyway, but I’ll absolutely have a hard time remembering it and it’ll be a pain to type.”

The key point here is that there are definitive and predictable costs and only theoretical consequences. A user is going to be annoyed having to retype their password 5 times. A strong password might prevent an attack.
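That asymmetry between certain costs and theoretical benefits can be made concrete with a back-of-the-envelope model. Every number below is invented purely for illustration:

```python
# Back-of-the-envelope model of the user's choice.
# All figures are assumptions, measured in "minutes of hassle per year".

# Strong password: say an extra 10 seconds of typing/retyping per day.
strong_daily_cost = 10 / 60                   # minutes per working day
strong_yearly_cost = strong_daily_cost * 250  # ~250 working days/year

# Theoretical benefit: assumed probability that a password attack hits
# *this* user this year, times the personal time a breach would cost.
p_attack = 0.01            # assumed
breach_cost = 8 * 60       # a lost work day, assumed
expected_benefit = p_attack * breach_cost

print(f"certain cost:   {strong_yearly_cost:.0f} min/yr")
print(f"expected gain:  {expected_benefit:.1f} min/yr")
```

Under these (made-up) assumptions the user weighs roughly 42 certain minutes of annoyance against about 5 expected minutes of benefit, so refusing the advice really is the “rational” choice from their seat.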

So you have to convince your users that an attack is imminent and likely, and that the pressure is on them… to which I would imagine they’d ask why it’s their job and not yours.

It Wouldn’t Matter If They Cared

Even if your users could manage to care more about the security of their systems than about how annoying long passwords are, it very likely wouldn’t matter. That’s for two reasons:

1) You can’t get them all to care. If one user is exploited an attacker is on the network. Is the game lost? No. But if you expect user education to save you after this point, good luck.

2) They don’t know anything about computer security. Even if they did care about it, they’re incompetent in the subject. We already know they have zero clue about creating strong passwords, even when policies are enforced, so what makes you think they’ll do anything else better? They are very unlikely to know how to keep every single program up to date, how to generate strong passwords, how to verify a site is using TLS, or how to differentiate between a malicious email and a legitimate one. Humans just aren’t good at that stuff, and you’re not going to be able to teach it in a reasonable amount of time (again, assuming they care enough to learn, which they don’t).

Conclusion

So I’ve been meaning to write this for a while, and I write about this stuff a ton anyways, but then Schneier put something out and it got all this attention, and I thought “Oh, look, people actually care.”

So there’s my two cents on the matter. You can see that passwords haven’t changed, you can understand why nothing has changed, and you can consider the potentially very significant costs of implementing user training, and ask yourself if you can’t find a better use for that time and money.

 

Sources:

http://arstechnica.com/information-technology/2012/11/born-to-be-breached-the-worst-passwords-are-still-the-most-common/

http://www.insanitybit.com/2012/07/13/has-anyone-learned-anything-6/

https://research.microsoft.com/en-us/um/people/cormac/papers/2009/SoLongAndNoThanks.pdf

More On Common Sense

Imagine that a user goes looking around for a new browser. They’ve downloaded Firefox and Chrome but they’re just not satisfied. So they come across a website advertising a “cool new browser” and download it. The website says: “Because the browser is new and makes lots of connections to the internet, your antivirus may pick it up. Don’t worry, this is simply a false positive. We’re fully accredited, and you can see that we’ve signed the installer.”

The user runs the .exe; a little “This software is signed but we don’t recognize the certificate” prompt comes up and asks for admin. Makes sense, most programs ask for admin when installing.

They install it. A browser does get installed (let’s say a reskinned Firefox), but so does a malicious payload that embeds itself into the system.

No exploits were used, purely social engineering.

Most people would blame the user here. They should have known better, they should have double checked, they should have kept an AV up to date, blah blah blah.

This is stupid. Users are not capable of ‘knowing better’, nor should they be required to in order to use a system in a secure manner. We create advanced heuristics that analyze malware at the code level and correlate it with past malware, and we still only catch something like 50% of it without unruly false positives. Stop treating humans like they can analyze code better than an advanced heuristics engine.

Security necessarily has to be handled at the lowest possible level, i.e. hardware or the kernel. There is no getting around that. You can add superfluous layers and exercise all the common sense you like, but it’s easily bypassed, and in the end security absolutely has to come from the OS.

In this case Windows should have either detected the payload reliably or prevented the rootkit payload from installing. It should have done something.

Thankfully, Microsoft has implemented things like PatchGuard and Secure Boot that limit malware without truly limiting the user, so had this user installed it on a 64-bit UEFI system, the malware would have been limited to admin rights and couldn’t have bypassed too many security systems.

No, I am not advocating a walled garden. That approach doesn’t work: it limits the user, not the software. Limiting the user isn’t good, because we always find a way around it, and we simply won’t use the product.

To reiterate: nearly everyone gets the question of “who is to blame?” wrong. I’ve seen so few people ‘get it’, and they’ve all been (perhaps coincidentally) security researchers. The answer is always “the operating system”, or “the OS and the AV”, or whatever, but the user should never be blamed, and anyone who resorts to what amounts to victim blaming probably just doesn’t understand what security is about.

Microsoft Gives Advice To IT Professionals About Social Engineering

In a new security article on social engineering, Microsoft highlights what measures can be taken to both prevent and remediate socially engineered attacks.

Some key highlights are:

  •  Limit attack surface
  •  Limit user accounts and strictly monitor high privilege accounts
  •  Maintain a proper incident response team
  •  Risk analysis and weighting
  •  Proper training

You can read the full article for details but I think those are the tips that stand out. The go-to policy for many companies is “enforce periodic password changes, don’t hand out smartphones to just anyone, tell users to be secure.” That’s my (limited) experience at least. This article should prove useful to anyone willing to put the work into maintaining a secure environment.

Common Sense Is Anything But

The one piece of advice I see more than all others, more than “keep updated”, more than “run an antivirus”, more than anything else at all is “exercise common sense.” Common sense is a misnomer.

To put it simply, if common sense by IT standards were in fact common we wouldn’t have to keep telling people to use it.

What seems like a no brainer for someone who works with computers and has experience is something that the average user might never think to do. And, honestly, that’s fine.

I also think that this “common sense” thing is a great way to blame users. “Oh, it was social engineering? Well, that’s their fault, you can’t defend against that.” “Oh, the vulnerability had a patch out? It’s their fault for not updating.” “It’s their fault” is the common subtext for so much of what I see.

The simple truth is that blaming users for not knowing as much as you is stupid. It’s defeatist, it’s lazy, and it doesn’t solve anything. Users should share responsibility, never blame.

I’ll post much more about common sense and the relationship between users and security in the future.