10 Immutable Laws of an Assumed Breach
A few years back, Microsoft published a set of 10 Immutable Laws of Security. These are tried and true and should be a foundation of any security posture. I have been developing some material around the Assumed Breach model of security, and you can read about it in a series of blog posts I will be publishing after the holidays. In that series, I will dive into why these laws matter, what they mean to a business owner, and what we can do about them. Here, I am going to take a stab at forking the laws Microsoft published and defining 10 Immutable Laws of an Assumed Breach. I am looking for input from the InfoSec community to help me fine-tune these and add to them. We don't have to limit this to 10; initially I simply took Microsoft's original laws and worked from there.
Immutable Laws of an Assumed Breach
Law #1: If you can check email or browse the internet, assume that a bad guy is running his or her program on the device
Instead of focusing our efforts on trying to stop users from doing something they shouldn't, or estimating the probability that they will do it, let's call a spade a spade and say that they are going to do it. No one is immune to a well-targeted phish, not even the best InfoSec guru. Assume that if a device can access the internet at large or check email, then a bad guy is running their program on it. This isn't to say we shouldn't put preventative measures in place; the purpose is to change how we view those systems. If a user machine can access the internet, then even with preventative controls in place, it should have a lower trust level than almost any other system. We should also use this mindset to educate executives on the real risk reductions provided by specific tools compared to the aggregate impact of things like a flat network and local admins.
Law #2: If a user can access a device with a user account that can check email or browse the internet somewhere else, assume that a bad guy has access to the device
Lateral movement is key to gaining a foothold in a network. Since we are assuming Law #1 is true, we have to assume that any account on the device that violated Law #1 is compromised; therefore, any device where that account has access is fair game for the attacker.
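To make the transitive nature of this law concrete, here is a minimal sketch (the hostnames and account/access mappings are entirely hypothetical) that propagates an "assume compromised" label across an access graph with a breadth-first search:

```python
from collections import deque

# Hypothetical access graph: which accounts log on to each device,
# and which devices each account can reach.
accounts_on_device = {
    "kiosk-01": {"jdoe"},            # violates Law #1 (email/web browsing)
    "file-srv": {"jdoe", "svc-bak"},
    "db-srv":   {"svc-bak"},
}
devices_for_account = {
    "jdoe":    {"kiosk-01", "file-srv"},
    "svc-bak": {"file-srv", "db-srv"},
}

def assume_compromised(seed_devices):
    """Propagate compromise: a device taints its accounts (Law #1),
    and a tainted account taints every device it can access (Law #2)."""
    tainted_devices, tainted_accounts = set(), set()
    queue = deque(seed_devices)
    while queue:
        device = queue.popleft()
        if device in tainted_devices:
            continue
        tainted_devices.add(device)
        for account in accounts_on_device.get(device, set()):
            if account not in tainted_accounts:
                tainted_accounts.add(account)
                queue.extend(devices_for_account.get(account, set()))
    return tainted_devices, tainted_accounts

devices, accounts = assume_compromised({"kiosk-01"})
print(sorted(devices))  # the blast radius of one internet-facing device
```

Running this against the toy graph shows how a single internet-facing kiosk taints a shared service account and, through it, the database server two hops away.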
Law #3: If the device isn’t behind reasonable physical security, then assume that a bad guy has access to the device
This is part of creating trust zones. If you have a device sitting in a lobby, then you can’t trust it.
Law #4: If your website is public facing, then unless a threat model shows that a risk is reasonably mitigated, assume any threat is a risk and increase the probability to the highest rating
We cannot assume that a threat or vulnerability doesn't exist just because we don't know about it. This law was a little more difficult for me to define. Essentially, unless we can show a high probability of mitigation, we need to assume that a threat or vulnerability exists for a public-facing application. This is one of the most common ways into a network.
Law #5: Assume that something you know can be guessed
We can discuss how 2FA can be bypassed or hijacked, but the fact is (get ready for a completely made-up statistic) 9 times out of 10 an attacker is going to move on if 2FA is in place. If they don't, then the rest of the Assumed Breach Laws should make their job much more difficult. Humans are too predictable, and without a second factor, at least one person is going to use something that is easily guessable or publicly available. Therefore, we have to assume that anything you know can be guessed.
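As a small illustration of that predictability, here is a sketch (the company names and facts are hypothetical) showing how an attacker can mechanically generate the kinds of passwords humans actually pick from publicly available information:

```python
# Hypothetical public facts an attacker could gather about a target
# from its website or social media.
public_facts = {"company": "Contoso", "mascot": "Eagles", "founded": "1987"}

def candidate_guesses(facts, years=("2023", "2024")):
    """Generate the predictable pattern humans tend to pick:
    a meaningful word, a recent year, and maybe a special character."""
    for word in facts.values():
        for year in years:
            for suffix in ("", "!", "#"):
                yield f"{word}{year}{suffix}"

guesses = set(candidate_guesses(public_facts))
print(len(guesses), "candidates generated from three public facts")
print("Contoso2024!" in guesses)
```

A real wordlist generator would go much further (leetspeak substitutions, breach-corpus reuse), but even this toy version lands on the sort of password that passes most complexity rules.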
Law #6a: If you can’t detect the who, what, when and where of an administrator’s actions, assume that they are malicious
It’s not that we don’t trust administrators, but we shouldn’t trust administrator accounts without proper detection and access controls.
Law #6b: If an administrator can access a system that violates any other law, then assume that any system that administrator can access also violates those same laws
This is almost redundant with Law #2, but I thought it was worth calling out specifically because of administrative credential reuse. Administrators should never use the same credentials across systems of greater or lesser trust.
Law #7: If you don’t control the private key to the data, and unless a threat model shows that a risk is reasonably mitigated, assume the data is publicly accessible
This most often applies to cloud services, but without compensating controls, if you don’t control encryption keys, then you don’t control the data.
Law #8: If whole disk encryption is the only encryption, then assume the data is only protected from physical theft of a disk
I hear this from so many people: “the whole disk is encrypted, so I’m good.” If the application can see the data in plain text, or it is stored in plain text, then you are only protecting the data from physical theft.
Law #9: Even if AV is running and up to date, just assume it’s not there
Antivirus is too often used as a crutch. Whether it is behavioral analytics, hash-based AV, or ‘next-gen’ AV, it would have to follow Law #10. And since we all know that malware is ever-changing and there is no way to detect ALL of it, we can’t ensure that AV will stop everything. Therefore, we have to assume it won’t detect something. In the assumed breach model, we cannot rely on AV.
Law #10: If you put some tool or process in place, assume it cannot do what it is supposed to unless you can prove otherwise
This is not saying don’t use tools. It is saying test them. Understand their capabilities, and only rely on what you know to be actionable and tested.
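For the AV case from Law #9, one concrete way to test rather than trust is the EICAR test file: a harmless, standardized string that mainstream AV engines are designed to flag. A minimal sketch (the file name and wait time are arbitrary; results depend entirely on your environment):

```python
import os
import tempfile
import time

# The standard 68-character EICAR test string: harmless by design,
# but flagged by mainstream AV engines.
EICAR = r"X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"

def av_reacts(directory, wait_seconds=2):
    """Drop the EICAR file and see whether something removes it.
    Returns True if the AV reacted (blocked the write or deleted
    the file), False if the file sat there untouched."""
    path = os.path.join(directory, "eicar-test.com")
    try:
        with open(path, "w") as f:
            f.write(EICAR)
    except OSError:
        return True  # blocking the write also counts as a reaction
    time.sleep(wait_seconds)
    reacted = not os.path.exists(path)
    if not reacted:
        os.remove(path)  # clean up; nothing noticed the file
    return reacted

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as tmp:
        print("AV reacted:", av_reacts(tmp))
```

This only exercises one narrow capability (on-access scanning of a known signature), which is exactly the point of this law: each claimed capability of a tool deserves its own explicit test before you rely on it.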
Update 12/28/17: Tweaked the wording of #8, removed 9 (redundant) and reordered a couple.
Update 12/29/17: Per a suggestion from Alan Jones, added some additional text to Law #1 description.