The Assumed Breach Model – A Practical Approach Part 3

In Part 1 of this series I gave a brief overview of the assumed breach model of security.  In Part 2, I dove into some details about the major components of implementing the assumed breach model.  In Part 3, I am going to provide some concise, real-world steps for moving toward this mindset within an organization.  I'll use the same three categories from Part 2.

This will have to be socialized with executives, because they are going to lose a lot of their pretty graphs, and the new numbers they see will be apples and oranges compared to what they are used to.  It will also have to be socialized with IT, because on one hand it's going to sound like you don't care about patching or firewalls or passwords, while on the other hand it's going to change the fundamental way administrators access systems.  And finally, it will have to be socialized with infosec teams.  This is a mindset shift.  It's no longer about setting up a wall and deflecting everything that comes at you.  We have to shift our focus from preventing everything an attacker does to limiting their abilities and detecting their actions; we have to make assumptions; we have to model threats; and we have to understand realized risk.

Network Segmentation

First, we need to identify our trust zones.  These will be different for every organization, and I suggest starting small.  Start with four zones: user networks; specialized systems (ATMs, medical devices, PCI, etc.); servers or datacenter; and identity management.

User networks are generally any network where users do user things like check email.  Specialized systems include radiology modalities, bank ATMs, point-of-sale devices and other similar device types.  Servers should all be in a datacenter behind a stateful firewall.  As this mindset matures, segmentation within the trust zones will grow as well.  And finally, identity management should be segmented all on its own.  This includes domain controllers, identity management servers, system management tools and anything that manages the identity of a user or an asset.  Microsoft has a good walkthrough of doing this within an Active Directory environment.
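To make the four starter zones concrete, here is a minimal sketch of mapping subnets to trust zones and defaulting to deny across zone boundaries. The zone names and subnets are entirely hypothetical, invented for illustration; any real deployment would pull these from its own IPAM or firewall policy.

```python
# Hypothetical trust-zone map: subnets and zone names are illustrative only.
from ipaddress import ip_address, ip_network

ZONES = {
    "user": [ip_network("10.10.0.0/16")],
    "specialized": [ip_network("10.20.0.0/16")],   # ATMs, medical devices, PCI
    "datacenter": [ip_network("10.30.0.0/16")],    # servers behind a stateful firewall
    "identity": [ip_network("10.40.0.0/24")],      # DCs, identity management, system tools
}

def zone_of(addr: str) -> str:
    """Return the trust zone containing this address, or 'unknown'."""
    ip = ip_address(addr)
    for zone, nets in ZONES.items():
        if any(ip in net for net in nets):
            return zone
    return "unknown"

def default_allow(src: str, dst: str) -> bool:
    """Deny by default across zones; intra-zone traffic passes here,
    though micro-segmentation inside a zone can tighten this later."""
    src_zone, dst_zone = zone_of(src), zone_of(dst)
    return src_zone == dst_zone and src_zone != "unknown"
```

The point of the sketch is the default: two addresses in different zones get nothing unless a rule is explicitly added, which is what creates the toll roads.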

This lays the foundation for the toll roads mentioned in the previous post.  It can be done in numerous ways, and that is not the purpose of this blog, but as you do it, you will start to see different micro levels of trust within each trust zone.

Tiered Accounts and Access Control

First, if we take the high-level trust zones mentioned in the last section, we should have administrative accounts for each zone.  Each account should be unable to even access systems in any other zone, up or down; the Microsoft article explains this really well.  This isolates the credentials and requires additional hops to reach each zone.  Second, based on the trust zones identified, administrative access to each zone should be limited to specific source networks.  Again, there are numerous ways of doing this, but this is a good starting point.

As an example, you can go so far as to say that access to domain controllers can only come from specific user accounts AND from a specific network AND from a specific device.  For step one, limiting just the user account may be a start, and then, based on an organization's risk, a decision can be made on whether the other conditions should be added as well.
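The account-AND-network-AND-device check above can be sketched as a single conjunctive policy. Everything here is a made-up example, the account name, jump network, and workstation name are placeholders, not a real product's configuration:

```python
# Hedged sketch of the three-condition domain controller access check.
# All policy values are hypothetical placeholders.
from ipaddress import ip_address, ip_network

DC_POLICY = {
    "accounts": {"id-admin-alice"},                # tier-0 admin accounts only
    "source_net": ip_network("10.40.0.0/24"),      # dedicated admin network
    "devices": {"PAW-01"},                         # privileged access workstations
}

def allow_dc_access(account: str, source_ip: str, device: str) -> bool:
    """All three conditions must hold; failing any one denies access."""
    return (
        account in DC_POLICY["accounts"]
        and ip_address(source_ip) in DC_POLICY["source_net"]
        and device in DC_POLICY["devices"]
    )
```

Starting with only the `accounts` condition and adding `source_net` and `devices` later mirrors the step-one approach described above.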

Additionally, no one should be in the Microsoft privileged groups such as Domain Admins, Enterprise Admins or Schema Admins.  Access to manage those groups can be delegated, and users can be added if and when they are needed.  No one is adding domain controllers, adding domains or making schema changes on a regular basis.  I have a couple of blog posts here and here that talk about securing accounts and services.

Log Management

I recently gave a brief presentation on rethinking log management at our local Central Alabama ISSA chapter.  There are three main reasons to collect logs: forensics, detection and operations.  We are not going to discuss operations.  For many organizations forensics is essential, and that log information must be collected.  However, there is often significantly more data needed for forensics than for threat detection.  Much of it overlaps, but forensics leaves a much bigger haystack to dig through.  For an organization just shifting its mindset to assumed breach, it may be better to start over on its detection rule sets: build a stack of needles before adding the haystack.

So what I am saying is that for assuming breach, we definitely need to worry about forensics, but we are going to focus on detection for now.  Organizations should ask their security teams today: can they detect if a user runs 'whoami' on a computer?  If they can't, they should.  If it runs, it should be all hands on deck to find out why, because a regular user has no legitimate need to run this program.

Look at the alerts and reports that are used today.  How much data has to be sifted through and run down before finding something that actually matters?  Most teams spend all of their time figuring out whether they need to respond to something instead of actually responding.  Let's take a step back and start with only the things that will almost always require action.  Here are some examples (most definitely not exhaustive):

SOC teams should be detecting these things, either in an alert or in a report that is reviewed regularly:

See my blog post on Windows Event Forwarding references for how to collect logs from all Windows endpoints.

For Windows:

  • EventID 1102 – no one should be clearing the security logs except in very specific situations
  • EventID 4688 – this is very noisy, but you should know if any of the following commands are run in the user networks:
    • whoami
    • sc
    • psexec
    • net use
  • The following are good to know about and will usually be something that should not happen, but may have a higher false positive rate than the first list:
    • PowerShell on a user workstation
    • cmd on a user workstation
  • EventID 47xx, 46xx
    • Did an account from one trust zone attempt to log in to another trust zone?
    • Was an account added to or removed from an administrative group?
    • Did any user in any of the Microsoft privileged or special groups (Domain Admins, Enterprise Admins, Schema Admins and others) try to log in to anything?
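As a needle-first illustration, the first two items above can be sketched as simple matching rules over process-creation and log-clear events. The event dicts here are a simplified stand-in, not a real SIEM or Windows Event Forwarding pipeline, and the `Zone` field is a hypothetical tag assumed to be added at collection time; `net use` is approximated by matching `net.exe`.

```python
# Minimal sketch: flag EventID 1102 (log cleared) everywhere and
# EventID 4688 (process creation) for suspicious commands in user networks.
# Event shapes are simplified dicts for illustration only.
SUSPICIOUS_COMMANDS = {"whoami.exe", "sc.exe", "psexec.exe", "net.exe"}

def alerts_for(event: dict) -> list[str]:
    alerts = []
    if event.get("EventID") == 1102:
        # Security log cleared: almost always all-hands-on-deck.
        alerts.append(f"security log cleared on {event.get('Computer')}")
    if event.get("EventID") == 4688:
        # Take just the executable name from the full process path.
        proc = event.get("NewProcessName", "").lower().rsplit("\\", 1)[-1]
        if proc in SUSPICIOUS_COMMANDS and event.get("Zone") == "user":
            alerts.append(f"suspicious process {proc} on {event.get('Computer')}")
    return alerts
```

The same events in the datacenter zone produce nothing here, which is the point of starting with a small, high-signal rule set and widening it deliberately.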

These are a good starting point, and I could go on and on, but if an organization can't detect these things, it may need to stop what it is doing and review.  Action should be taken on almost all of these events.  If you can detect them, you are off to a good start, and it only gets better from here.  Another resource worth mentioning is Florian Roth's repository of generic IOC signatures, written in a format called Sigma, which can be ported to any alerting system.  One other reference to keep handy is the MITRE ATT&CK matrix, for testing your SIEM and other alerting to make sure you can detect the things attackers are actually doing.

Hopefully this is helpful to someone.  It has been fun putting into writing my thoughts and the things I have been pushing for years.  I'm hoping to present some of this at upcoming conferences (BSides Huntsville).  Twitter is a great way to reach me with questions.
