The Assumed Breach Model – A Practical Approach Part 2

In Part 1, I gave a brief overview of the Assumed Breach model.  In this part, I will begin to dive a little deeper into some of the areas where the assumed breach model can focus.  I am going to cover three areas:

  • Network Segmentation
  • Tiered Accounts and Access Control
  • Log Management and Threat Hunting

The idea is not simply to prevent attacks (though that is still an integral part of the strategy), or even to plan for all possible attacks. The goal is to funnel attackers into your defensive positions and ensure that the proper controls are in place to detect and prevent based on risk and probability.  This will be different for each industry and each organization.

Network Segmentation

One might think this is pretty obvious, but it is surprising how many organizations I have worked with have a completely flat network.  I am not going to get into the 1s and 0s of doing this, but will instead focus on the general idea.

At the highest level, networks should be segmented by trust level and by the sensitivity of the systems within them.  An additional consideration is the operational resilience needed within each network, which is one reason security and business continuity should be very closely linked.  In a hospital setting, for example, it would be very beneficial to be able to drop a proverbial grenade on the user networks (where email and web browsing take place) without affecting patient care.  This requires systems like virtual desktops, medical devices, printers, and servers to be separate from the user networks.  Segments should also reflect the organization's tolerance for loss.  In other words, if there are 10,000 devices and only a small department is infected, is it better to be able to easily isolate just that department, or all user networks?  In the hospital example, if the networks where users do administrative work (email, web browsing, etc.) were segmented from the thin clients for VDI and the medical devices for patient care, then patient care could continue during an outbreak.  Of course there are many scenarios, but in a critical threat situation, the user segments could be quarantined while the VDI and medical devices still function.  That wouldn't provide full functionality, but it would allow many processes to continue without invoking full downtime procedures.
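To make the idea concrete, here is a minimal sketch of a segmentation policy expressed as data. The segment names, trust levels, and services are my own hypothetical examples (not from any particular product): flows between segments are deny-by-default, and quarantining a segment simply drops all of its flows, which is how the hospital example keeps patient care running while the user networks are cut off.

```python
# Hypothetical segmentation model; segment and service names are illustrative.
SEGMENTS = {
    "user": 1,        # email / web browsing - lowest trust
    "vdi": 2,         # thin clients / virtual desktops
    "medical": 3,     # patient-care devices
    "server": 3,      # application servers
}

# Explicit allow list: (source segment, destination segment, service).
# Anything not listed here is denied.
ALLOWED_FLOWS = {
    ("user", "vdi", "rdp"),        # users reach their virtual desktops
    ("vdi", "server", "https"),    # VDI sessions reach applications
    ("medical", "server", "hl7"),  # device data to clinical systems
}

def flow_permitted(src: str, dst: str, service: str, quarantined=frozenset()) -> bool:
    """Deny by default; quarantining a segment drops all of its flows."""
    if src in quarantined or dst in quarantined:
        return False
    return (src, dst, service) in ALLOWED_FLOWS

# Dropping the "grenade" on the user segment leaves patient care working:
print(flow_permitted("user", "vdi", "rdp", quarantined={"user"}))        # False
print(flow_permitted("medical", "server", "hl7", quarantined={"user"}))  # True
```

This mirrors how a default-deny firewall policy between segments behaves: isolation is a data change (quarantine the segment), not an emergency redesign.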

Let’s break this scenario down a bit.  Notice that I mention browsing the internet and checking email as the indicator of a user network.  If you look at my post on the 10 immutable laws of assuming breach, the first law is: if a device can browse the internet and check email, assume that an attacker can run their programs on it.  So instead of worrying about whether Adobe is patched (though it should be), or IE is patched (though it should be), or Java is up to date (though it should be), we just assume they are not, and that the device has been breached.  With that approach, it’s a lot easier to justify the need for 2FA to administer servers, tightly managed administrative access to higher-trust networks, more clarity on what data is actually reachable from those devices, and so on.  It reduces the burden on the security team to prevent every possible combination, maintains the status quo for IT teams to patch as they should, and shifts the security team’s focus to detecting anomalies.  To reiterate, I am not saying don’t patch.  Patching should absolutely be done; I’m saying that very few organizations are 100% patched all the time, and as such there is always a risk of missing something.  If we assume breach, then instead of spending great effort chasing 100% patch coverage, it makes sense to consider the threats that can take advantage of unpatched systems and make sure we can isolate and/or detect them.

So we have some network segmentation now, what next?  This essentially shifts the landscape of your network to a layout similar to a major city with highways, interstates and toll roads.  Traffic needs to have a purpose as it traverses the network.  Now we need to implement the systems to control access.  Keeping to the city analogy, it doesn’t do any good to have a toll road without any toll cards.

Tiered Accounts and Access Control

Assumed Breach Law #6 covers the assumptions about administrative access.  This leads us to tiered accounts.  If an account is only as trusted as the least trusted device it has access to, we need to have separate accounts for different trust models.  Anyone ever heard of Mimikatz?  Right, so if I use the same account to log on to a workstation that can check email and browse the internet that I use to manage a domain controller, what do I have to assume?  First, we assume that if a device can check email and access the internet, then someone can run Mimikatz (or any other program).  If they can run that, then we have to assume they can see the credentials of any account that was used on that device.  If we used the same account to manage that device as we did to manage a domain controller, then we have to assume that the credentials to manage a domain controller have been compromised.

The solution that breaks this chain is simple: don’t use the same accounts.  Maintain separate accounts for managing devices in each trust zone.  There are other controls that should be reviewed as well, but this is a start.  The number of trust zones created, based on the organization’s risk tolerance, determines how many sets of credentials there should be.  This is a bit like toll road payment cards: a certain access level is required to move through that portion of the network.  I will dig into this more in Part 3.
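As a sketch of the tiering rule, here is the logic in a few lines of Python. The tier numbers loosely follow the common Tier 0/1/2 convention, and the account and device names are hypothetical; the point is that an account only ever authenticates within its own tier, so a domain admin credential never touches a workstation where Mimikatz could harvest it.

```python
# Hypothetical tier model, loosely following the common Tier 0/1/2 convention:
# Tier 0 = identity infrastructure, Tier 1 = servers, Tier 2 = user workstations.
ACCOUNT_TIER = {
    "jsmith":     2,  # everyday account: email, web browsing
    "jsmith-srv": 1,  # server administration
    "jsmith-da":  0,  # domain admin
}

DEVICE_TIER = {
    "WKS-042": 2,  # workstation that checks email / browses the internet
    "APP-01":  1,  # application server
    "DC-01":   0,  # domain controller
}

def logon_allowed(account: str, device: str) -> bool:
    """Strict model: an account may only log on within its own tier.
    If a credential is only as trusted as the least trusted device it
    touches, a cross-tier logon silently downgrades that credential."""
    return ACCOUNT_TIER[account] == DEVICE_TIER[device]

print(logon_allowed("jsmith-da", "DC-01"))    # True
print(logon_allowed("jsmith-da", "WKS-042"))  # False - would expose DA creds to credential theft
```

In practice this policy is enforced with controls like logon restrictions and authentication policies rather than application code, but the rule itself is exactly this simple.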

Log Management and Threat Hunting

So now we have our toll roads, and our toll cards have been passed out; next we need to be able to know when someone goes through the toll booth without a card.  Let’s stay with the toll road analogy.  Notice that toll roads don’t usually have a gate stopping people, or even slowing them down; at that traffic volume, the road would be unusable.  We want users to still be able to function, but we need to know when they violate our policies.  In the IT world this breaks down a bit, of course, because we need to be able to prevent access when certain policies are violated, but again, this depends on the threat model.  The gate to a military base has steel beams that can stop a semi, the gate to a parking deck is a wooden board, and many toll booths have no gate at all.  It depends on the circumstances, but in all of these cases we also need detective controls.

We need to be able to detect when someone from a lower tier of trust tries to access a higher tier, and vice versa.  We need to know when someone who browses the internet and checks email runs commands that they should almost never run.  We need to know if something changes in the rules that separate our networks.  None of this requires a really expensive SIEM to find, though one sometimes makes it easier.  We need logs, but very specific logs.  If we collect all the logs and then go looking for things, it’s going to be hard to find anything.  Take the needle-in-a-haystack analogy: if we gather every log and start looking for threats, we are searching the haystack for a needle.  What if instead we started with the needles?  Say we know we need EventID 4688 (process creation), so we can determine when someone runs ‘whoami’ on a device.  While this won’t happen often, we know that if it does, it should be all hands on deck.  Rule #10 says that we should assume our tool can’t do what it is supposed to do unless we can prove otherwise.  With this approach we have something that will produce next to no false positives and that we can test.  We can create a list of needles that we know exist, and we know that if we find them we need to act.  This reduces churn in the SOC, allows more time for proactive work, and provides meaningful metrics (not just executive fluff).  Once this is in place, we can then focus on building the haystack in a systematic fashion.
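As a sketch of this needle-first approach, here is what the ‘whoami’ detection could look like. The event field names and sample records below are hypothetical (a real pipeline would parse 4688 events from Windows Event Forwarding or a log collector): instead of mining everything, we scan only process-creation events for a short, tested list of commands a normal user should almost never run.

```python
# Hypothetical needle list for Windows EventID 4688 (process creation).
# Field names in the events below are illustrative, not an actual schema.
NEEDLES = {"whoami.exe", "nltest.exe", "quser.exe"}  # recon commands rarely typed by real users

def find_needles(events):
    """Yield only high-signal hits from a stream of parsed 4688 events."""
    for ev in events:
        if ev.get("event_id") != 4688:
            continue  # not a process-creation event
        # Take just the executable name from the full process path.
        exe = ev.get("new_process_name", "").lower().rsplit("\\", 1)[-1]
        if exe in NEEDLES:
            yield ev

sample = [
    {"event_id": 4688, "host": "WKS-042", "user": "jsmith",
     "new_process_name": r"C:\Windows\System32\whoami.exe"},
    {"event_id": 4688, "host": "WKS-042", "user": "jsmith",
     "new_process_name": r"C:\Program Files\App\app.exe"},
    {"event_id": 4624, "host": "WKS-042", "user": "jsmith"},  # a logon event, ignored
]

hits = list(find_needles(sample))
print(len(hits))  # 1 - the whoami execution on a user workstation: all hands on deck
```

Because each needle is explicit, the rule can be tested end to end (run ‘whoami’ on a lab workstation, confirm the alert fires), which is exactly what Rule #10 demands of our tooling.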

This is not a new idea.  In the military, it’s called a choke point strategy and access control.  In the medical field, it’s called triage and quarantine.  It’s been discussed in IT and security for years, but I have yet to see it fully embraced by an organization.  Understand trust zones, control access, and detect anomalies that are known and actionable.  In Part 3, I will lay out some specific suggestions for first steps.  A lot of actions can be gleaned from this post, but a concise plan of action will help.
