Seeing Red: Improving Blue Teams through Red Teaming

Dave Hull
October 26, 2016

This talk is largely based around almost four years of continuous red / blue team engagements while I was the technical lead for security incident response in Microsoft's Office 365.

Below are slide notes that were written during the creation of this deck. The deck has been modified a few times since the notes were written, so they don't align perfectly, but I think they are still useful.

Title slide:
This is Seeing Red: Improving blue teams through red teaming.

Slide two:
This is a slide I shared here two years ago. It shows the first persistence mechanism I ever found during a red team / blue team engagement while working at Microsoft. This is a collection of ListDLLs output from several thousand machines, grouped by process name, in this case spoolsv.exe. For each spoolsv.exe process we see the DLLs loaded into that process. Halfway down on the right-hand side of the screen a black line sticks out like a sore thumb. I’ve circled it in red. That sore thumb was a unique DLL that, luckily for me, had a long name. That DLL was a persistence mechanism the red team had loaded into a spoolsv.exe process on a single machine in this particular environment.

When our red team saw this during the postmortem walkthrough, they assigned themselves a bug to change tactics and they never made it this easy for us to find them again. They improved their processes and as a result, the blue team had to improve their processes and that’s largely the subject of this talk.

Slide three:
Who am I? I’ve been doing IT-related work for over 20 years. Just over half of that time has been focused on security incident response and digital forensics. I came up during the heyday of wormable attacks, while working at a research university with 25K+ devices sitting on routable IPv4 addresses, no NAT, no firewalls, no patching requirements, no mandatory AV. It was the wild, wild west. I learned a great deal.

Slide four:
In an effort to control this contagion and limit the damage, I worked on a team that developed a custom Network Access Control solution. During that work, we discovered we could use the configuration options that DHCP clients request when they come online as a means of passively and remotely fingerprinting those clients’ operating systems.
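
To make the idea concrete, here’s a minimal sketch of the fingerprinting logic, assuming you already have a way to pull the option 55 parameter request list out of captured DHCPDISCOVER packets. The fingerprint strings and the Get-DhcpOsGuess function are illustrative placeholders, not the NAC solution we actually built.

```powershell
# Minimal sketch: guess a DHCP client's OS from its option 55 parameter
# request list. Fingerprint values are illustrative placeholders, not a
# vetted signature database.
$fingerprints = @{
    '1,15,3,6,44,46,47,31,33,121,249,43' = 'Windows (illustrative)'
    '1,121,33,3,6,12,15,28,51,58,59,119' = 'Linux dhclient (illustrative)'
}

function Get-DhcpOsGuess {
    param(
        # Comma-separated option 55 values pulled from a captured DHCPDISCOVER
        [Parameter(Mandatory)]
        [string]$ParameterRequestList
    )
    if ($fingerprints.ContainsKey($ParameterRequestList)) {
        $fingerprints[$ParameterRequestList]
    } else {
        'Unknown client OS'
    }
}

# Example: feed in a parameter request list extracted by your capture tooling.
Get-DhcpOsGuess -ParameterRequestList '1,15,3,6,44,46,47,31,33,121,249,43'
```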

Our lawyers didn’t think this was novel enough to warrant intellectual property protection. After I wrote an article on the technique for SysAdmin Magazine, some folks from Harvard asked if they could present a talk on the subject. They spoke at Black Hat the following year.

Slide five:
About two years later, Infoblox Inc. applied for a patent on a very similar concept.

Slide six:
From 2007 to 2012 I was an instructor in the DFIR track.

Slide seven:
I managed and was a leading contributor to the award-winning SANS DFIR blog.

Slide eight:
I’ve authored and contributed to a number of open source and commercial DFIR projects.

Slide nine:
I was the senior technical lead for security incident response in Microsoft’s Office 365.

Slide ten:
While working at Microsoft, I put my own time into creating an open source PowerShell framework for doing security incident response work.

Slide 11:
I currently work at Tanium as a product engineer. I joined Tanium because, after writing my own incident response framework and using it in large-scale environments for several years, where it worked better than a commercial solution companies pay millions of dollars for, I saw that Tanium’s communications platform had solved the problem of scalability. What I could do with Kansa across a few thousand machines in hours, Tanium could do across half a million machines in seconds.

Slide 12:
But that’s a digression, I’m really here to share with you my thoughts on why you should be red teaming; what red teaming is; some highlights and lessons learned from my own experiences over several years; who should be doing it and when and some practical considerations.

Slide 13:
Why red teaming? Because it delivers a security incident.

Slide 14:
Contrast this with a penetration test, which may deliver a nice report.

Slide 15:
Or in too many cases, may just deliver a report.

Slide 16:
Why red team? Because as coaches everywhere like to say, you will play like you practice.

Slide 17:
Let’s watch the final 4.7 seconds of last year’s NCAA Men’s Basketball Div. 1 National Championship game between Villanova and North Carolina. In this clip, Kris Jenkins inbounds the ball to Ryan Arcidiacono (Archie-diacahnoh), who dribbles down the sideline, crosses towards the middle, and at the top of the key dishes the ball back to Jenkins, who puts up a three-point shot that swishes through the net at the buzzer, giving Villanova the national title.

Why am I showing you this? Not simply because I love seeing North Carolina lose, but because

Slide 18:
This is a play that Villanova runs through at the end of every single practice. Think about that. At the end of every single practice, when the players are tired from running drills, when they are exhausted from scrimmaging, when their minds are wandering to a nice shower, when they are thinking about what’s for dinner and ready to be done with practice, they run through this play. Because their coach knows, they will play like they practice.

Slide 19:
Your organization will respond to security incidents like you practice them, especially when under the stress of a real security incident.

Slide 20:
So that is the why of red teaming. But what is red teaming? How do I define it? It may be easiest to talk about what it isn’t.

Slide 21:
Red teaming is not threat modeling, though it may include threat modeling, the process of trying to enumerate threats to your organization or some facet of your organization and how you will mitigate those threats.

Slide 22:
Red teaming is not vulnerability assessment, though it may include vulnerability assessment, the process of trying to enumerate all of the vulnerabilities currently in your organization, so you can work through the backlog of addressing them.

Slide 23:
It is not penetration testing.

Slide 24:
Red teaming may include all of these things, but it is fundamentally different.

Slide 25:
Recall that I said penetration tests deliver reports. Red teams deliver security incidents that your organization must respond to just as they would a real security incident.

Slide 26:
Red teams have mission objectives.

Slide 27:
Maybe the objective is for the red team to acquire enterprise or domain admin level access. This is generally not the end goal, however, but rather a means to achieving some other objective.

Slide 28:
That objective may be a “customer pivot.” If you run an online service like O365, Amazon AWS, or GitHub, and one customer can find a way to access the data of another customer without authorization, that’s a customer pivot.

Slide 29:
An objective may be the theft of intellectual property.

Slide 30:
The objective may be to “burn it all down” in a Saudi Aramco or Sony Pictures style attack.

Slide 31:
Ultimately, the red team’s objective should be to test the incident response capabilities and procedures of the organization, not just those of the IR team, but the entire organization. In a real security incident, you will likely have subject matter experts from outside the IR team engaged. You will likely have legal and communications teams engaged. Red teaming gives the organization an opportunity to see how these processes work, where they fail and where they need improvement.

Slide 32:
We now have the why and the what. Let me share some highlights and some lessons learned from my own experiences. I’ll be sprinkling lessons learned throughout the rest of the talk.

Slide 33:
Outliers may be leads. Frequency analysis in incident response is a simple concept based on an assumption that attackers want to stay under the radar and, as a side effect of that, their actions and artifacts will be less frequently occurring than normal actions and artifacts. I put the emphasis on “may” in the slide because there are places where this breaks.

In very large environments, despite Dan Geer’s warnings about the dangers of monocultures, you will likely find that there are large numbers of benign one-offs in the long tail of artifacts. I am a bit of a Dan Geer fanboy, but as an incident responder, I’d like to run an investigation in his mythical monoculture where outliers don’t exist and my team and I don’t have to spend countless hours running them to ground.

And the converse is true. Just because something is frequently occurring doesn’t mean it is benign. In one red team engagement I heard about from a colleague, management wanted the team to try to demonstrate that the organization could be hit with a Saudi Aramco or Sony Pictures style attack. The red team was able to compromise the enterprise management system and the build system and drop an implant on nearly every host. So there are long tails of benign outliers and there may be frequently occurring malicious implants. Outliers may be leads. Or they may not.
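
For what it’s worth, here’s a minimal sketch of the stacking idea, assuming loaded-module listings from many hosts have already been collected into a CSV. The ComputerName, ProcessName and ModulePath column names are assumptions for illustration, not the exact ListDLLs or Kansa output schema.

```powershell
# Minimal stacking sketch: count how often each loaded module appears across
# all hosts for a given process, then look at the rare ones first.
# Assumes a CSV with ComputerName, ProcessName, ModulePath columns -- an
# illustrative schema, not the exact ListDLLs/Kansa output format.
$modules = Import-Csv -Path .\loaded_modules.csv

$modules |
    Where-Object { $_.ProcessName -eq 'spoolsv.exe' } |
    Group-Object -Property ModulePath |
    Sort-Object -Property Count |
    Select-Object Count, Name -First 20 |
    Format-Table -AutoSize
```

The least frequently occurring modules bubble to the top of the list; in the slide two example, the red team’s uniquely named DLL would have shown up with a count of one.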

Slide 34:
It should go without saying, but automation is essential to so much of what we do. During one red team engagement in a large environment, I was working with a very large data set, trying to slice it up for analysis to find evidence of the red team’s activities. I spent six hours manipulating one day’s worth of data and was faced with the prospect of having to do it all again the next day. That evening, I spent 45 minutes writing a script that could do all of the data manipulation and analysis for me, returning the results I wanted. The script was able to run over the entire data set in less than one second. Automate whatever you can; there’s always more interesting work to be done.
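
As a hedged illustration of the kind of script I mean, here’s a minimal sketch that parameterizes the date and lets the pipeline do the slicing, so the next day’s analysis is just another function call. The file naming convention and the EventId column are assumptions for illustration, not the actual data set from that engagement.

```powershell
# Minimal sketch of automating a daily slice-and-dice. The file layout and
# column names (e.g., EventId) are illustrative assumptions, not the real
# data set from the engagement described above.
function Get-DailySummary {
    param(
        [Parameter(Mandatory)]
        [datetime]$Date,
        [string]$DataPath = '.\data'
    )
    # Assumes one or more CSVs per day named yyyy-MM-dd*.csv
    $pattern = '{0:yyyy-MM-dd}*.csv' -f $Date
    Get-ChildItem -Path $DataPath -Filter $pattern |
        ForEach-Object { Import-Csv -Path $_.FullName } |
        Group-Object -Property EventId |
        Sort-Object -Property Count -Descending |
        Select-Object Count, Name
}

# Re-running tomorrow is just a new -Date argument.
Get-DailySummary -Date (Get-Date) | Format-Table -AutoSize
```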

Slide 35:
Remediation -- the ultimate goal of incident response teams. Remediation may include determining how the adversary entered the organization, probably via phishing, possibly via a 0day, and fixing those vulnerabilities. I don’t like our chances on the phishing front. Remediation isn’t easy. Geer says information security may be the most intellectually challenging profession of our time, and if it is, I offer that remediation may be the most intellectually challenging aspect of the field.

This is my favorite screenshot from my time working at Microsoft and it is one I presented here two years ago. I like it because it shows the blue team’s attempt to evict the red team from one particular facet of O365. What you see on this slide is the command and control GUI for the O365 red team’s botnet. Each line at the top represents a compromised asset. Lines in burnt orange are bots that have stopped reporting to the red team’s C2. Our remediation was done via Kansa, which uses PowerShell remoting over Windows Remote Management. Unfortunately for us, the WinRM stack on the one asset in gray above was corrupted and not responding to us, so we were unable to remediate that one asset via Kansa.
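
For those less familiar with remoting-based remediation, here’s a minimal sketch of the general approach over WinRM. The host list and implant path are illustrative placeholders, and this is not Kansa’s actual remediation module; the error handling is there precisely because, as the gray asset above shows, some hosts’ WinRM stacks won’t answer.

```powershell
# Minimal sketch of coordinated remediation over PowerShell remoting (WinRM).
# The host list and implant path are illustrative placeholders; this is not
# Kansa's actual remediation module.
$targets     = Get-Content -Path .\compromised_hosts.txt
$implantPath = 'C:\Windows\Temp\implant.dll'

$results = Invoke-Command -ComputerName $targets -ArgumentList $implantPath `
    -ErrorAction SilentlyContinue -ErrorVariable unreachable -ScriptBlock {
    param($Path)
    if (Test-Path -Path $Path) {
        Remove-Item -Path $Path -Force
        [pscustomobject]@{ Computer = $env:COMPUTERNAME; ImplantRemoved = $true }
    } else {
        [pscustomobject]@{ Computer = $env:COMPUTERNAME; ImplantRemoved = $false }
    }
}

$results | Format-Table -AutoSize
# Hosts whose WinRM stack didn't answer land in $unreachable and need
# hands-on follow-up -- exactly the gray-box problem described above.
$unreachable
```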

Slide 36:
Which leads to another lesson learned. Lather, rinse and repeat. Or, in the case of incident response:

Slide 37:
Investigate, remediate and repeat.

Slide 38:
Revisiting our agenda, let’s talk about who should be red teaming.

Slide 39:
I think any organization that may have a security incident should be red teaming.

Slide 40:
Any organization with something worth protecting, should be red teaming.

Slide 41:
But an organization should have some foundational things in place first. If the organization effectively has no monitoring, no defenses and no IR capabilities, then they should work on building those things first. Don’t wait for them to be perfect as red teaming can certainly inform the strategies around creating those things, but you should have some capabilities in these areas before you engage in red teaming activities or you’ll be in for a very bad time. As it is, you’re probably just in for a bad time.

Slide 42:
Who should be red teaming? Probably an internal team, but not just the security team. If you don’t have offensive folks on staff, hire consultants, but it’s going to be expensive and probably less effective since they likely won’t know and understand your environments the way an internal team should.

Slide 43:
And speaking of knowing and understanding your environments, this is a good opportunity to point out another lesson learned. You will likely find during red team engagements that your documentation is wrong, the comments in your source code are wrong and your assumptions are wrong. I worked multiple engagements where I was burned because I took the word of a subject matter expert as gospel. That network segment that could not be reached from the Internet could be. That enterprise administration suite that required 2FA had some exceptions that people had forgotten about. Red teams are excellent at finding these things.

Slide 44:
When should you be red teaming?

Slide 45:
As often as you can. Remember, Villanova worked on that play at the end of every practice. You’ll want to run through red team engagements as often as you can, but beware of burnout; your team is going to need time to work on the capability gaps that are found via the red team process. I think three to four engagements a year is probably reasonable.

Slide 46:
Avoid concurrent incidents. At one point in time in O365, we had three or four concurrent red team engagements and our IR team was not a big team. It was too much. It was frustrating and bad for morale. Avoid concurrent red team incidents. And don’t get cute with your incidents. I have a friend who worked for a large social network in Silicon Valley. He told me management once scheduled a red team exercise with a Valentine’s Day theme. They thought this would be clever, but it sucked. Some engineers had made reservations at hard to get into restaurants months in advance and ended up having to cancel their plans for a red team engagement. Keep it professional.

Slide 47:
Practicalities of red teaming

Slide 48:
Have documented rules of engagement that are mutually agreed upon and subject to regular review by all parties -- the red team, the blue team, management and stakeholders.

Slide 49:
Get approval from management and legal. The red team may want to do some interesting things -- things real attackers would do, like install keyloggers or access employees’ email or IMs. You should have an agreed-upon process for this: mailboxes and IMs could be searched for specific terms -- things like password, credential, username, etc. -- in the presence of someone from HR or legal, and only messages matching those terms could be reviewed after being screened by HR or legal.

Slide 50:
No accessing or tampering with customer data.

Slide 51:
No accessing or tampering with real customer data. Your red team may create bogus customer accounts in an effort to see if they can do a customer pivot.

Slide 52:
No outages. No DDoS. No weakening of the organization’s security controls.

Slide 53:
Give the red team access. If your organization is like all organizations, and it probably is, you can and will be successfully phished. Don’t make the red team prove they can gain access to your corporate network. Assume breach. If you must, have the red team run a phishing exercise annually just to demonstrate that you do have people who will click a link or enter credentials into a legitimate-looking but bogus web site.

Slide 54:
Give the red team source code.

Slide 55:
Give the red team architecture diagrams.

Slide 56:
Keep the blue team in the dark… or at least in a dimly lit room.

Slide 57:
Real incidents trump red team incidents.

Slide 58:
Red team incidents should be core hours only.
Some of you may ask: how can you guarantee core hours only? If monitoring triggers an incident at 1700 on Friday or at 0200 on Saturday and the blue team is called on to respond, that’s outside core hours...

Slide 59:
Red team incidents should be core hours only, plus a little. In my experience, red teams will do things that lend themselves to attribution. They may spend weeks building backdoors that they won’t want to recreate for every engagement. Because of this, the blue team should develop the ability to quickly identify red team artifacts and TTPs with a very high level of fidelity. It may take a couple hours of analysis to make the determination, and once it’s made, the blue team can stand down and resume the exercise during core hours.
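
One hedged way to make that determination quickly is to keep a small library of known red team artifacts from earlier engagements and sweep the alerting host for them before deciding whether to stand down. The file paths and service name below are placeholders, not real indicators.

```powershell
# Minimal sketch of a quick red-team attribution check. The file paths and
# service name are placeholders; real indicators would come from artifacts
# catalogued during earlier engagements' postmortems.
$knownRedTeamIndicators = @{
    Files    = @('C:\Windows\Temp\rt_loader.dll', 'C:\ProgramData\rt_agent.exe')
    Services = @('RTUpdateSvc')
}

$hits = @()
foreach ($file in $knownRedTeamIndicators.Files) {
    if (Test-Path -Path $file) { $hits += "File: $file" }
}
foreach ($svc in $knownRedTeamIndicators.Services) {
    if (Get-Service -Name $svc -ErrorAction SilentlyContinue) { $hits += "Service: $svc" }
}

if ($hits) {
    Write-Output 'Known red team indicators found -- stand down until core hours:'
    $hits
} else {
    Write-Output 'No known red team indicators found -- treat as a real incident.'
}
```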

Slide 60:
Practice how you want to play.

Slide 61:
And by this I mean, get cross team collaboration going for the incident, just as you would for a security incident. You’re going to want developers, operators, legal, communications and management teams involved at some level. Even if it’s just for regular status reporting, it’s important to flex those muscles and make sure the teams cooperate, understand their roles and that the processes operate smoothly.

Slide 62:
Establish a situation room or at the very least a phone bridge.

Slide 63:
Designate incident and investigative leads. The incident lead PMs the incident and interfaces with management and other teams, while the investigative lead PMs the investigation.

Slide 64:
Delegate and PM. Manage the investigation like you would any significant project.

Slide 65:
Investigate.

Slide 66:
Document.

Slide 67:
Report your findings.

Slide 68:
Few people love writing reports, but as I said a few years ago on Twitter:
Analyze for show, report for dough.

Slide 69:
As you run your investigation, keep in mind the end goal is remediation. You should be planning for that eventuality and briefing management on that plan.

Slide 70:
Execute remediation.

Slide 71:
Post-remediation monitoring. Recall lather, rinse and repeat. You may have missed something; go actively looking for any adversary kit still in the environment.

Slide 72:
Postmortems, probably the most important aspect of the red team / blue team process.

Slide 73:
Postmortem attendees should include stakeholders, blue team, red team.

Slide 74:
Postmortems should not be about assigning blame.

Slide 75:
But do hold yourself accountable for any failures and have a plan for how you will address those shortcomings. The entire exercise is about finding gaps and filling them in.

Slide 76:
Blue team presents first. They present what they think happened.

Slide 77:
All cards on the table. If you give away your most successful investigative techniques and the red team learns from it, so be it. This is about driving improvements. Recall the ListDLLs output I showed previously. When we showed that to the red team, they changed their tactics, switching to reflectively loading DLLs into memory without ever writing them to disk. This made our lives considerably more difficult and improved the IR team’s capabilities.

Slide 78:
Red team presents second and tells what actually happened.

Slide 79:
All teams will likely get bugs and feature requests as a result of the postmortem meeting. Blue teamers will discover investigative capability gaps, stakeholders may have vulnerabilities or misconfigurations to fix, and the red team may have improvements to make based on the blue team’s investigation.

Slide 80:
And finally, some additional lessons learned and then the conclusion.

Slide 81:
No one should run as admin.

Slide 82:
Rather than having standing administrator privileges, use Just-in-Time administration procedures. If someone needs domain, enterprise, or Exchange admin rights, they should have to go through an approval process that creates a separate account with those privileges and a computer-generated password, set to expire within a matter of hours.
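
Here’s a minimal sketch of that shape using the ActiveDirectory module. The naming convention, the group and the four-hour lifetime are illustrative choices on my part, and a real JIT system wraps all of this in an approval workflow and audit logging.

```powershell
# Minimal JIT-administration sketch using the ActiveDirectory module. The
# naming convention, group and lifetime are illustrative; a real system sits
# behind an approval workflow and audit logging.
Import-Module ActiveDirectory

$requestor = 'dhull'
$jitName   = 'jit-{0}-{1:MMddHHmm}' -f $requestor, (Get-Date)

# Computer-generated password -- humans are terrible sources of entropy.
Add-Type -AssemblyName System.Web
$password = [System.Web.Security.Membership]::GeneratePassword(32, 8) |
    ConvertTo-SecureString -AsPlainText -Force

# Separate, short-lived elevated account; the requestor's normal account
# never holds standing admin rights.
New-ADUser -Name $jitName -SamAccountName $jitName -AccountPassword $password -Enabled $true
Add-ADGroupMember -Identity 'Domain Admins' -Members $jitName

# The elevated account expires on its own a few hours later.
Set-ADAccountExpiration -Identity $jitName -DateTime (Get-Date).AddHours(4)
```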

Slide 83:
Segment the network.

Slide 84:
But also segment the accounts.

Slide 85:
Use dedicated workstations for administrative functions that can’t be used to read email or access the internet.

Slide 86:
Zero human-generated passwords. Humans are terrible sources of entropy. Use password managers, strong password policies and computer-generated passwords.

Slide 87:
2FA everywhere.

Slide 88:
Don’t trust. Verify. I touched on this briefly earlier when I said your documentation is wrong, your code comments are wrong and your assumptions are wrong. Don’t trust what the subject matter experts tell you; verify that what they are saying is correct.

Slide 89:
And in conclusion…

Slide 90:
Red teaming is hard.

Slide 91:
Real incidents may be harder.

Slide 92:
Practice how you want to play.

Slide 93:
Thank you. Questions?
