Slide 1

Slide 1 text

Some necessary jargon LIS 510

Slide 2

Slide 2 text

What is “jargon”? ✦ Vocabulary specialized to/for/by a particular community that isn’t broadly understood outside that community ✦ Could be a professional community (as with infosec), also called a “community of practice” ✦ Could be a community delimited by a demographic commonality ✦ Could be a community of choice, such as a hobby community ✦ Jargon is not bad! ✦ Can speed up communication, make it more precise ✦ … But jargon can be a barrier to entry to its community. ✦ (Sometimes, it must be said, intentionally.)

Slide 3

Slide 3 text

Compromises and pwning ✦ I’ve used these words already without defining them in this context. Sorry about that. ✦ In infosec, compromises have nothing to do with negotiating or agreeing on anything! ✦ COMPROMISE (verb and noun): A successful attack on someone/something’s security ✦ “Eve compromised Alice’s email” = “Eve attacked Alice’s email account successfully [and read Alice’s email when she shouldn’t have].” ✦ “There are thousands of compromised systems” = “We know thousands of systems have been successfully attacked.” ✦ PWN (verb): To compromise someone/something ✦ PWNING (noun): A compromise. “What a terrible pwning!” ✦ Comes from gaming; to “own” someone is to thoroughly defeat them.

Slide 4

Slide 4 text

“Incident:” sounds innocent but isn’t ✦ “Incident” is info-security-ese for “we got pwned.” ✦ Not all incidents are all-hands-on-deck crises. ✦ Most of the time, something happens, someone notices, it’s not major, it gets fixed, any closable holes get closed, and that’s that. ✦ Something like that won’t get a full incident report, or incident reporting would be all infosec pros ever do! ✦ If there’s a full incident report, the incident must have been major. ✦ Usually, this means either something really bad resulted, or the incident pointed to a serious, re-pwnable security problem. Or both! ✦ I’ll be using the Equifax breach as our example this module.

Slide 5

Slide 5 text

Bugs, patches, vulnerabilities, exploits ✦ BUG: a mistake in software code/hardware behavior ✦ VULNERABILITY: a bug that is a security problem for a given piece of software or hardware ✦ Not all bugs are security problems! Bugs can certainly be non-security- related. So all vulnerabilities are bugs, but not all bugs are vulnerabilities. ✦ PATCH: a programmed fix for a bug ✦ SECURITY PATCH: a programmed fix for one or more vulnerabilities ✦ Most times, patches can be applied to software without redownloading or reinstalling the whole software package. Hardware is harder to patch. ✦ EXPLOIT: A technological security attack that leverages a specific vulnerability ✦ ZERO-DAY [EXPLOIT]: An exploit that is so new there is no patch for the vulnerability it leverages. Very dangerous!
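
To make the bug/vulnerability/patch distinction concrete, here’s a minimal illustrative sketch in Python (not from the slides; the table and function names are invented). The “buggy” function builds an SQL query by pasting user input into it, which is both a bug and a vulnerability (SQL injection); the “patched” function closes the hole with a parameterized query.

    import sqlite3

    conn = sqlite3.connect(":memory:")   # toy database just for the example
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.com')")

    def find_user_buggy(username):
        # BUG that is also a VULNERABILITY: user input is pasted straight into the SQL,
        # so input like "nobody' OR '1'='1" changes the query's meaning (SQL injection).
        query = "SELECT id, email FROM users WHERE name = '" + username + "'"
        return conn.execute(query).fetchall()

    def find_user_patched(username):
        # PATCH: a parameterized query treats the input strictly as data, never as SQL code.
        return conn.execute("SELECT id, email FROM users WHERE name = ?", (username,)).fetchall()

    print(find_user_buggy("nobody' OR '1'='1"))    # returns every row: the exploit works
    print(find_user_patched("nobody' OR '1'='1"))  # returns nothing: the hole is closed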

Slide 6

Slide 6 text

“Attack surface” ✦ How much opportunity you are giving attackers to compromise you. ✦ A function of: ✦ How many different systems / software / platforms you’re using (more systems, more problems!) ✦ How exposed to the open Internet you and your systems are ✦ How sensible your (physical, digital/online, and human) security practices are ✦ Whether your systems / software / platforms are common attack targets ✦ Whether YOU are a particularly desirable or common attack target

Slide 7

Slide 7 text

Defenses ✦ FIREWALL: Like the wall around a castle, a defense against exploits originating from outside. ✦ DEFENSE IN DEPTH: Don’t just have a firewall! ✦ When you only have a firewall, if it gets pwned (or an attack comes from inside) you’re in bad trouble. Have more defenses! Such as… ✦ INTRUSION DETECTION/PREVENTION SYSTEMS (IDS, IPS): Pretty much what they sound like ✦ An IDS tries to notice attempted exploits, based on rules for what they might look like. When it sees one, it raises an alarm for human beings to evaluate. ✦ An IPS goes one step further: when it sees an attempted exploit, it stops it. (Which can be a problem if the IPS is incorrect!)
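
Here’s a minimal sketch of the IDS idea in Python, under the assumption that detection rules are just patterns matched against traffic or log records (the two rules below are invented examples; real IDSes such as Snort or Suricata ship with thousands of far more sophisticated ones). An IPS would differ only in taking a blocking action instead of just printing an alert.

    import re

    # Hypothetical example rules; real rule sets are much larger and more precise.
    RULES = [
        ("possible path traversal", re.compile(r"\.\./\.\./")),
        ("possible SQL injection", re.compile(r"(?i)union\s+select")),
    ]

    def inspect(record):
        """IDS-style check: flag suspicious records for a human to evaluate."""
        return [name for name, pattern in RULES if pattern.search(record)]

    for line in ["GET /index.html", "GET /download?file=../../etc/passwd"]:
        hits = inspect(line)
        if hits:
            print("ALERT:", ", ".join(hits), "in:", line)   # an IPS would drop/block here instead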

Slide 8

Slide 8 text

Questions? Ask them! This lecture is copyright 2020 by Dorothea Salo. It is available under a Creative Commons Attribution 4.0 International license.

Slide 9

Slide 9 text

Attack stages: the Cyber Kill Chain LIS 510

Slide 10

Slide 10 text

There are patterns to how attackers proceed. ✦ The more you know about them, the better your defenses and incident response will be. ✦ There are a couple of systematizations of attack steps that I think you should know. ✦ (Before these, there definitely was a shared understanding of how attacks work among infosec pros—it was just implicit.) ✦ One is the Cyber Kill Chain (this lecture); the other is the MITRE ATT&CK framework (different lecture). ✦ I am also going to critique the communication around MITRE ATT&CK, because it is extremely bad and I want you to do better.

Slide 11

Slide 11 text

Cyber Kill Chain ✦ (There is a lot of hypermasculinized, hyperaggressive, militaristic rhetoric in infosec. It’s honestly really gross and can be outright scary. Please don’t contribute to it.) ✦ By military contractor Lockheed Martin ✦ Attacks proceed in defined “stages:” ✦ Reconnaissance (“recon”) ✦ Intrusion ✦ Exploitation ✦ Privilege escalation ✦ Lateral movement (also called “migration”) ✦ Obfuscation / anti-forensics ✦ Denial of service ✦ Exfiltration ✦ Let’s walk through these in order.

Slide 12

Slide 12 text

Reconnaissance ✦ RECONNAISSANCE = gathering “intel[ligence]” ✦ Assess the target’s systems and Internet infrastructure. What software are they using? What software are they running on the open Internet (where it’s directly attackable)? What data have they left hanging in the breeze on the open Internet (especially in Amazon Web Services S3 buckets!)? ✦ OSINT (open-source intelligence): Discover as many people as you can who are currently part of (or interacting with) the target. Learn as much about as many of them as possible (org website, social media, GitHub, etc.). Bonus: IT employees, management, disgruntled/bribeable insiders. ✦ Tools exist for the above two steps! Skilled attackers go beyond them. ✦ Assess the target’s physical security. Possible to infiltrate a server room? Trashpick exploitable information (e.g. passwords, procedures)? ✦ Look at the target’s supply chain and contractors. Attackable?

Slide 13

Slide 13 text

Defending against reconnaissance ✦ It’s hard. Often impossible. ✦ Are you seriously going to tell your people “get off LinkedIn, it’s a social-engineering risk”? If you do, they’ll only ignore you! ✦ Basic prophylactics (i.e. “don’t make it too easy”) ✦ Org website: contact forms that don’t disclose email addresses instead of, um, disclosing email addresses (or phone numbers) ✦ Any web-facing tools (content-management systems etc): disguise the tool—by default it probably announces itself. (Check the favicon!) ✦ Hide domain-name registration information. All registrars can. ✦ Buy (or make) the org a private Git(like) server! Don’t make them put your org’s code in free public GitHub repos! (This goes for other leak-prone services too, e.g. project-management tools.) ✦ Recon your own organization. Fix unnecessary disclosures you find.

Slide 14

Slide 14 text

Recon in the Equifax case ✦ We don’t know much. ✦ Partly because OSINT techniques aren’t detectable by the target! ✦ Partly because Equifax’s network-traffic analyzer broke (we’ll talk about “SSL certificates” elsewhere in the course) and nobody noticed it was broken until the hack happened. ✦ This means that Equifax didn’t notice attackers poking around in its network and network-attached systems. ✦ What is clear is that the attackers were looking for the Apache Struts vulnerability they used to get in. What isn’t clear is whether Equifax was a target of choice or opportunity. ✦ That is, whether the attackers were thinking “I wonder if Equifax is running Struts?” or just “I wonder who-all is running Struts?”

Slide 15

Slide 15 text

Network recon: scanning ✦ Once the attacker understands more about the target’s systems, it’s time to look for holes. ✦ Typical network attackers try a PORT SCAN first. ✦ Goal: Find likely possibilities for successfully getting access to a system. ✦ PORT: a dedicated “lane” assigned to a certain kind of Internet traffic—like bikes in a bike lane. For example, ssh traffic usually uses port 22, outgoing email port 25, unencrypted web traffic port 80, and so on. ✦ Software (“nmap” is common for this) can scan some or all of a system’s ports to see which ones are OPEN (that is, something is listening there and will accept traffic). ✦ Open ports may offer additional information about the software that’s using them to listen for traffic. Attacker gold! ✦ Port scans are common enough not to set off security alarms, but fast/comprehensive scans might.
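
If you’re curious what a port scan looks like under the hood, here’s a minimal Python sketch of a TCP “connect” scan of a few common ports on your own machine. It’s nowhere near what nmap does, and the usual warning applies: only scan machines you own or have explicit permission to scan.

    import socket

    COMMON_PORTS = {22: "ssh", 25: "smtp", 80: "http", 443: "https"}

    def scan(host, ports, timeout=0.5):
        """Try a TCP connection to each port; success means something is listening there."""
        open_ports = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((host, port)) == 0:   # 0 means the connection succeeded
                    open_ports.append(port)
        return open_ports

    # Scanning localhost only -- never scan systems you aren't authorized to scan.
    for p in scan("127.0.0.1", COMMON_PORTS):
        print(f"port {p} ({COMMON_PORTS[p]}) is open")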

Slide 16

Slide 16 text

Defending against network scanning ✦ Firewalls ✦ LINUX SERVER USERS: Linux firewalls are disabled by default. This is hideously dangerous! Configure and turn on iptables right away! (If you’re using a shared web host, don’t worry; they’ve done this for you.) ✦ Firewalls are necessary… but not sufficient, for reasons we’ve discussed. ✦ Intrusion-detection systems (IDSes). ✦ Port-scan your own organization! Inside and out! ✦ Look for: ports you didn’t know were open, software you didn’t know anybody was running, software that is advertising itself too openly on open ports ✦ “Port-forward” services away from obvious ports. ✦ For example, set up ssh to run on port 10672 instead of the usual port 22. ✦ This won’t stop a determined attacker, but it can stop automated attacks and unskilled attackers.

Slide 17

Slide 17 text

Software recon: Shodan ✦ shodan.io is a search engine for web-available software and services. ✦ “Who’s running Drupal?” ✦ “Who’s running versions of Apache Struts that are vulnerable to the exploit that just got publicized?” (EQUIFAX) ✦ “Who’s got those easily-pwnable smart light bulbs? The ones with the default admin/admin password?” ✦ “What’s my target running that might be insecure?” ✦ Heavily used by infosec professionals ✦ Lots of infosec tools leverage Shodan’s APIs ✦ Limited free functionality… but if you’re serious about this work, it’s worth buying access.
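
As an example of tools leveraging Shodan’s APIs: a short sketch using Shodan’s official Python package (pip install shodan), assuming you’ve signed up and have an API key. The query string is just an example, and as always, check the current documentation rather than trusting my memory of the library’s interface.

    import shodan

    API_KEY = "YOUR_SHODAN_API_KEY"   # requires a Shodan account; paid tiers unlock more
    api = shodan.Shodan(API_KEY)

    try:
        # Example query: hosts whose banners identify them as Apache.
        results = api.search("apache")
        print("Total results:", results["total"])
        for match in results["matches"][:5]:
            print(match["ip_str"], match.get("port"), match.get("org"))
    except shodan.APIError as e:
        print("Shodan error:", e)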

Slide 18

Slide 18 text

Scanning in the Equifax case ✦ Again, we’re not completely sure how all this went down. ✦ Some things we do know: ✦ The Countermeasures team added a rule to its network-traffic analyzer looking for attempts to exploit the Struts vulnerability. They saw (and blocked) some. A lot, actually. This suggests attackers were scanning Equifax! ✦ (Which is not a surprise. EVERYBODY’S GETTING SCANNED, ALL THE TIME.) ✦ They didn’t see the successful attack because of the SSL certificate problem that prevented their network analyzer from seeing some traffic.

Slide 19

Slide 19 text

Intrusion ✦ This is the step most people think of first when they think about hacking a system. ✦ I hope I’ve convinced you that it’s NOT the first step in an attack! ✦ As you write your incident reports, look for signs that the attacker performed reconnaissance! (Scanning is practically never reported on; I don’t expect you’ll see it, but if you do, go ahead and add it to your report.) ✦ Attacker tries to gain unauthorized access to a system belonging to the target. ✦ This may be through social engineering, malware, direct exploit use, physical access, or some combination of these. ✦ It usually takes the attacker more than one try to be successful. If the attacker doesn’t disguise their attempts carefully enough, they may be caught and stopped (e.g. by an IDS) at this stage. ✦ Once the attacker finds the right hole, however, access happens FAST. Within minutes—or seconds!

Slide 20

Slide 20 text

Defending against intrusion ✦ Competent policies, procedures, and communication ✦ Competent systems administrators ✦ If your org is too small to hire one, outsource your IT. I mean it. ✦ Minimize “ATTACK SURFACE:” as little software as possible! ✦ The less software running (especially web-based software), the fewer attack routes available to attackers. ✦ Patch those vulnerabilities! ✦ IDSes/IPSes, again ✦ Logs and log monitors ✦ A log won’t stop an unauthorized access—but when one happens, logs help defenders figure out the when/where/why/how/what. ✦ Log-monitoring software can issue “hey, what’s this?” alerts for unusual access patterns that may signal an attacker with access.
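
Here’s a minimal sketch of the log-monitoring idea: count failed-login lines per source address and raise a “hey, what’s this?” alert past a threshold. The log format is loosely modeled on OpenSSH’s “Failed password” lines, and the pattern and threshold are assumptions you’d adapt to your own systems.

    import re
    from collections import Counter

    # Loosely modeled on OpenSSH auth-log lines; adjust the pattern for your own logs.
    FAILED = re.compile(r"Failed password for .+ from (\d+\.\d+\.\d+\.\d+)")
    THRESHOLD = 5   # arbitrary example value

    def review(log_lines):
        failures = Counter()
        for line in log_lines:
            m = FAILED.search(line)
            if m:
                failures[m.group(1)] += 1
        for ip, count in failures.items():
            if count >= THRESHOLD:
                print(f"ALERT: {count} failed logins from {ip} -- worth a human look")

    sample = ["May 13 03:14:15 web1 sshd[822]: Failed password for root from 203.0.113.9 port 50000 ssh2"] * 6
    review(sample)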

Slide 21

Slide 21 text

Privilege escalation ✦ Gain more power over the machine(s) an attacker has access to. ✦ A normal system restricts dangerous system power and confidential data to only those who absolutely need access to them: this is sometimes called the PRINCIPLE OF LEAST PRIVILEGE. ✦ If you see “got root/administrator access,” that’s privilege escalation. This type of privilege escalation is often accomplished via technological exploits. ✦ Pwning the credentials or account(s) of someone who has extra system privileges also counts. (Imagine how much personal and financial information an attacker who pwns the head of HR has access to!) This can be done technologically or via social engineering.

Slide 22

Slide 22 text

Equifax and privilege escalation ✦ It doesn’t look like attackers had to do much escalation—the Struts vulnerability was severe enough to confer elevated privileges without extra work. ✦ Sometimes a vulnerability is just that bad!!!! ✦ Vulnerability warnings will almost always tell you whether the vulnerability allows privilege escalation. ✦ Privilege escalation is also a major ingredient in raising a vulnerability’s severity rating in those warnings. (If the vulnerability allows this, it’s extra-severe!)

Slide 23

Slide 23 text

Lateral movement ✦ also known as “migration” ✦ Now that the attacker is in, they likely want to hop over to a different computer or system inside the organization. ✦ Partly this is to cover their tracks—the initial unauthorized access is fairly likely to be noticed, so attackers won’t stay on that machine. ✦ Partly this is because different computers/systems inside the target have different vulnerabilities and different loot. ✦ (For example, if the goal is grabbing customer data, the ultimate target is probably the database server, which shouldn’t be directly Internet-accessible. The way there might be to attack the web server, then move laterally.)

Slide 24

Slide 24 text

Equifax and lateral movement ✦ The attackers initially got into Equifax’s web server. ✦ This happened through a known Apache Struts vulnerability that Equifax didn’t patch. ✦ They then moved laterally through a whole lot of Equifax’s network (!), eventually landing at the server with all the credit-report data. Jackpot! ✦ The Senate report notes that network segmentation (which we discuss separately) could have prevented this.

Slide 25

Slide 25 text

Defending against lateral movement and escalation ✦ Patch, patch, patch those vulnerabilities! ✦ Pretty much all lateral movement/escalation attacks work from exploits. ✦ Don’t regularly run as root/administrator, and don’t let anybody else do it either! ✦ An attacker who migrates onto a desktop/laptop whose user is normally logged on with administrator privileges is THRILLED. So much they can now do! Installing malware is the least of it! ✦ Yeah, typing good passwords a lot is annoying. Too bad. This time, the security is absolutely worth the added friction. ✦ PRINCIPLE OF LEAST PRIVILEGE: only let people have the power over computers that they MUST have ✦ Network segmentation (discussed elsewhere in course)
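
A tiny cross-platform sketch of the “are you running privileged?” check, the kind of thing worth dropping into scripts so they can warn (or refuse to run) when they have more power than the task needs. The Unix branch checks for effective user id 0; the Windows branch uses shell32’s IsUserAnAdmin call; the warning text is just an example.

    import os

    def running_privileged():
        """True if this process has root/administrator rights."""
        if os.name == "nt":
            import ctypes
            return ctypes.windll.shell32.IsUserAnAdmin() != 0
        return os.geteuid() == 0   # on Unix-likes, effective user id 0 means root

    if running_privileged():
        print("Warning: you're running with admin/root privileges. "
              "Principle of least privilege: don't, unless this task truly requires it.")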

Slide 26

Slide 26 text

Obfuscation / anti-forensics ✦ OBFUSCATION: hiding the attacker’s presence ✦ In addition to lateral movement, this could involve deleting or altering logs, or pretending to be an existing legitimate user or system. ✦ ANTI-FORENSICS: defending against known ways defenders try to figure out what happened to their systems during an incident ✦ Example: “Fileless malware,” because a lot of forensics tools rely on detecting certain kinds of files or changes to files ✦ Defenses: mostly the same as for lateral movement and privilege escalation

Slide 27

Slide 27 text

Equifax and obfuscation ✦ It doesn’t look like the Equifax attackers had to do anything particularly special here (though they might well have). ✦ Equifax plain old didn’t see their attack or their subsequent data exfiltration! ✦ Attackers stayed in Equifax’s systems undetected for ONE HUNDRED FORTY-SEVEN DAYS. Horrifying. ✦ (“How long it typically takes to kick attackers out” is a metric that infosec analysts pay attention to. The numbers are pretty bad—a whole month isn’t uncommon at all.)

Slide 28

Slide 28 text

Denial of service / exfiltration ✦ This is the step where the Bad Stuff (from the target’s point of view) happens. Commonly: ✦ stealing data (“EXFILTRATION”) ✦ deleting data ✦ altering data (“My account balance is really $BIG_NUMBER”) ✦ DENIAL OF SERVICE (for Cyber Kill Chain purposes): damaging or disabling systems (e.g. with ransomware) ✦ defacing public-facing systems

Slide 29

Slide 29 text

Incorporating this into your incident reports ✦ Make a chart of the Cyber Kill Chain steps. As you read, fill it in as completely as you can. ✦ One bullet point per thing the attacker did ✦ As you surface new information, check the chart and see if it fits anywhere. ✦ Insofar as possible, add date/time information to each bullet point ✦ Ideally, this will turn into a “timeline” in your final incident report. ✦ Don’t feel bad if you can’t surface much timing information. Publicly-available accounts of attacks are often not detailed enough for this! ✦ You may at least be able to get a sense of the order of the attacker’s actions. That’s still a useful timeline!
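
If it helps, the chart can be as simple as a little data structure you fill in while you read; here’s a sketch in Python using the Cyber Kill Chain stage names from earlier in this lecture. The two entries are invented placeholders, just to show the shape.

    STAGES = ["Reconnaissance", "Intrusion", "Exploitation", "Privilege escalation",
              "Lateral movement", "Obfuscation / anti-forensics",
              "Denial of service", "Exfiltration"]

    chart = {stage: [] for stage in STAGES}

    def add(stage, what, when="unknown"):
        """One bullet point per thing the attacker did, with date/time if the source gives one."""
        chart[stage].append((when, what))

    # Invented placeholder entries:
    add("Intrusion", "Attackers exploited an unpatched web-facing application")
    add("Lateral movement", "Attackers moved from the web server toward internal databases")

    for stage in STAGES:
        print(stage)
        for when, what in sorted(chart[stage]):
            print(f"  - {what} [{when}]")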

Slide 30

Slide 30 text

Questions? Ask them! This lecture is copyright 2018 by Dorothea Salo. It is available under a Creative Commons Attribution 4.0 International license.

Slide 31

Slide 31 text

MITRE ATT&CK LIS 510

Slide 32

Slide 32 text

Problematic assumptions in the Cyber Kill Chain ✦ Attacks always proceed tidily through the steps. ✦ They don’t! This is known! Attackers may backtrack and/or skip steps they don’t need! ✦ (If you can get to the loot without lateral movement, for example, why would you bother? Get the loot and get out!) ✦ If you prevent one early step, you’re safe. ✦ Nope! Again, attackers often skip steps! ✦ Attackers are always outsiders. ✦ Argh. We’ve talked about this. Insiders definitely get to skip steps! ✦ The goal is always data exfiltration. ✦ Nope! Denial of service may be the goal! Surveillance may be the goal! Or leveraging a system for a completely different attack!

Slide 33

Slide 33 text

Another problem: HOW? ✦ The Cyber Kill Chain doesn’t make clear which “TACTICS, TECHNIQUES, AND PROCEDURES” (TTP) attackers use at which stage of attacks. ✦ This makes it hard to use to plan for and structure defenses and incident response. ✦ It also makes evaluating a just-announced vulnerability harder than it needs to be. ✦ As vulnerabilities and TTPs have evolved, the Cyber Kill Chain… hasn’t. It’s a bit gappy by now. ✦ Enter the MITRE ATT&CK knowledge base. ✦ attack.mitre.org and who puts an ampersand in the name of anything ever?! (You can’t use an ampersand in a domain/subdomain name.) Confusing choice, MITRE.

Slide 34

Slide 34 text

Did that website freak you out? IT’S NOT YOU. Jargon! (and notice how the word “cybersecurity” doesn’t appear until the END of the first paragraph?)

Slide 35

Slide 35 text

Get started! Or don’t! Because there is no actual information on this page!

Slide 36

Slide 36 text

I won’t take you through all the stuff linked to from the Getting Started page. I’ll just tell you: IT’S UNHELPFUL GARBAGE. All of it. Every last word. And the “101 Blog Post” is the worst!

Slide 37

Slide 37 text

WTF?!?!?!?!?!?!

Slide 38

Slide 38 text

that matrix, fit on a slide *screaming in librarian* How is anybody supposed to FIND ANYTHING in this incredible morass? Search? Site-search only. No tailored search.

Slide 39

Slide 39 text

This is bad communication. MITRE should be ashamed. ✦ It’s an intimidating turnoff to anyone without deep information-security knowledge. ✦ This means, for example, that infosec pros can’t take ATT&CK to their non-technical C-suite to say “hey, this is useful and we should use it!” There’s nothing there the C-suite can understand! ✦ It inspires despair. So many attacks! ✦ It’s impossible to teach. (Ahem.) ✦ MITRE does want new infosec people to be able to use it, right? ✦ If you’re mid-incident, how in the world is that ginormous matrix even slightly useful?!

Slide 40

Slide 40 text

Lessons for communicating ✦ Think about your AUDIENCE(s). ✦ What do they want to know? Why do they want to know it? ✦ What do they already know? What do they not know? ✦ What do you most need them to know, so they do what you want them to with the information you’re giving them? ✦ With multiple audiences, your home page needs to give INTRODUCTORY INFORMATION, and useful direction to audience-specific resources. ✦ Write a dang primer (101 document, FAQ, whatever). A clear one. That doesn’t rely on jargon. And minimizes necessary prior knowledge. ✦ Do better than MITRE, please!

Slide 41

Slide 41 text

In this class… ✦ I am not your audience. ✦ I mean, I am. But you shouldn’t design your communications for me. ✦ My communication assignments try to be clear about the audience(s) you’re communicating to. ✦ Where they’re not (and I am human and may have missed something), ask about it! ✦ Take that seriously. Consider your audience(s) carefully. Do not bore, lose, blame-and-shame, or intimidate them. ✦ Word to the wise: infodumps are very intimidating! Also boring! ✦ (I get the temptation: you’re showing your work to me. But I am not your audience!)

Slide 42

Slide 42 text

Okay, done ranting now.

Slide 43

Slide 43 text

What does ATT&CK do? ✦ Make a ginormous list of known TTPs (technological only, nothing purely social!) and goals/loot. ✦ Categorize them, without assumptions about where in an attack any category of techniques happens. ✦ ATT&CK has no consistent name for its categories, nor does it explain them anywhere. *screaming even louder in librarian, also in educator* ✦ More recently, because that matrix is SO unwieldy, they’ve started breaking their list of TTPs into “sub-techniques.” ✦ Their current (2020) website visualization of sub-techniques is even harder to follow than the current matrix. Can I hear a wahoo. (I swear Crowley designed this.) ✦ Dear MITRE: Stop trying to make a single-page visualization happen!

Slide 44

Slide 44 text

ATT&CK categories we’ve already seen in CKC ✦ remember, MITRE intentionally doesn’t impose order on these! ✦ Discovery (≈ tech-only reconnaissance) ✦ Initial access (= intrusion) ✦ Privilege escalation ✦ Defense evasion (= obfuscation) ✦ Lateral movement ✦ Exfiltration

Slide 45

Slide 45 text

New-to-us ATT&CK categories ✦ EXECUTION ✦ Once an attacker is in, what do they actually DO in the system? How? ✦ PERSISTENCE ✦ Because some attacks take a lot of time and experimentation, the attacker (who doesn’t work 24/7/365!) wants to make sure they can get back into the target’s systems when they want to. ✦ This often involves installing malware and/or backdoors onto the target’s systems, or stealing user credentials (usernames and passwords). ✦ Rapid attacks likely skip this step. (Which in ATT&CK is allowed!) ✦ CREDENTIAL ACCESS ✦ What it sounds like! Getting those sweet juicy usernames, passwords, and keys.

Slide 46

Slide 46 text

More new categories ✦ COLLECTION ✦ What user surveillance can be done on/with compromised systems? ✦ Includes keylogging, surreptitiously turning on mics/cams, etc. ✦ COMMAND AND CONTROL (often “C2”) ✦ Once you’re in, what other machines can you control, and how do you make them do things? ✦ Usually this involves forcing an inside machine to ask for, receive, and accept commands from a machine on the outside. ✦ IMPACT ✦ Goals/loot, beyond data exfiltration and collection (above)

Slide 47

Slide 47 text

Okay, so why is this useful? ✦ If you’re defending, it’s a compendium of what you’re defending against. ✦ Including specific things you can ask a security vendor about! ✦ Including things you can test your systems for, then lock down! ✦ If you’re responding to an incident and you’re stuck, it’s a list of hints to what an attacker might have done. ✦ If there’s a new (or just new-to-you) vulnerability that’s been categorized in ATT&CK, that can help you understand and mitigate it fast.

Slide 48

Slide 48 text

There. I hope and believe you understand MITRE ATT&CK better than MITRE does now.

Slide 49

Slide 49 text

Questions? Ask them! This lecture is copyright 2020 by Dorothea Salo. It is available under a Creative Commons Attribution 4.0 International license.

Slide 50

Slide 50 text

An incident! Uh-oh! What now?!

Slide 51

Slide 51 text

If you don’t know “what now” it’s already too late. ✦ By which I mean: have crisis processes in place! Well in advance! Step 1 of incident response: PREPARATION. ✦ Incidents gonna incident. No security is perfect! ✦ Practice the processes. By definition, crises are rare. You must practice for them! ✦ Also make it easy for people to communicate problems to you… before they become crises. ✦ Don’t be the people/orgs who ignore vulnerability reports! ✦ Reports may come from inside or outside your org. Make sure both are feasible. ✦ Got a report? ACTUALLY DEAL WITH IT. ✦ And once more, this means having a process for dealing with reports! ✦ You’d think all these steps would be obvious. Yet here we (and Troy Hunt, and Equifax) are.

Slide 52

Slide 52 text

Incident-response teams ✦ Get named about a billion different things. ✦ We discuss why when we talk about the Infosec Org Chart Wars. ✦ Pragmatically: your incident-response plan should make clear who’s responsible for what during an incident. If that’s in place, the name of the team doesn’t matter. ✦ Response is not just technical! ✦ Somebody needs to be responsible for internal and external communication about the incident. (In many cases, you won’t want that somebody to be from your infosec team, or even your IT team. Infosec and IT come by their reputation for poor communication at least somewhat honestly.) ✦ Lawyers and DPOs (data protection officers) need to own the legal and compliance pieces. (There pretty much always are some.) ✦ Somebody reliable and calm from management needs to be involved (for many reasons, but “signing off on overtime” is often one).

Slide 53

Slide 53 text

Noticing != understanding ✦ When you first discover an incident, you won’t know everything about what’s going on. ✦ You can’t necessarily wait for a full understanding, either. Delay gives the attacker more time to do Bad Stuff! ✦ Step 2, IDENTIFICATION: As quickly as possible, try to figure out what’s going on and how to shut the attack and attacker down. ✦ Sometimes it will be obvious and clear-cut, in which case this step is (appropriately) very short-duration. ✦ Often… it won’t. In this case, learn what you can quickly, and keep researching as you move through additional steps.

Slide 54

Slide 54 text

What you want to know ✦ What systems and people are involved? ✦ Because of lateral movement and obfuscation, this is not always quite as straightforward as it might appear. ✦ You also want to know this so that you can preserve evidence—both for your incident report and for any legal proceedings afterwards. ✦ How extensive and effective is the attack, and how many systems are involved? (“SEVERITY LEVEL”) ✦ Did they get at the crown jewels and we have to stop them now?! (“PRIORITY LEVEL”) ✦ Severity level: the nastiness, extent, and continuing risk (if any) of the ATTACK ✦ Priority level: the extent and importance of the INFORMATION and SYSTEMS involved

Slide 55

Slide 55 text

Step 3: CONTAINMENT ✦ Limit the damage as best you can, while disrupting normality as little as possible ✦ Contradictory goals? Yes! Welcome to information security. ✦ More helpfully: Severity and priority govern how much disruption you can get away with causing. ✦ Taking all systems down because one laptop is dealing with intrusive browser popups? Almost certainly overkill. Taking (nearly) all systems down because of a huge spreading ransomware attack? Possibly okay! ✦ This step is intended to be fast-and-dirty. You will likely miss things. That’s okay. ✦ Communication is part of this step! You’re also “containing” the reputation damage.

Slide 56

Slide 56 text

Step 4: ERADICATION ✦ This step starts when the urgency level of the prior steps has gone down, at least a little. ✦ This is the methodical, thorough examination of systems, logs, gathered evidence, etc. and performance of additional info-gathering and security actions as needed. ✦ Goal 1: ensuring the attacker is fully gone (and can’t come back) and the attack is 100% over ✦ Goal 2: gathering the fullest account possible of the incident, how it happened, who did it, etc. ✦ Ideally, the security team has been documenting everything it did all along. If not, they need to document now, before they forget!

Slide 57

Slide 57 text

Step 5: RECOVERY ✦ Returning to normal ✦ Service broken? Unbreak it. ✦ Data messed up or gone? Restore from backup. ✦ … you do have backups, right? HAVE BACKUPS. ✦ Something had to be disabled? Turn it back on. ✦ (After you’ve done any needed patching, security-tool installation, or other remediation, of course.) ✦ Communicate, communicate, communicate. ✦ Deal with public relations. ✦ Deal with The Law as needed. ✦ Deal with internal communication. You want truth out there, or the gossip/rumor mills go wild.

Slide 58

Slide 58 text

Step 6: REVIEW ✦ This is the step where the incident report gets written and delivered! ✦ Goal: everyone understands what happened and why, and agrees on measures to prevent similar attacks ✦ Some of these measures may be technological. ✦ All of them? Almost certainly not. Don’t forget to work out how to deal with the human aspects of the incident! ✦ (Even “technological” fixes involve installation, configuration, and monitoring work from human beings!) ✦ Beware, beware, beware of “solutions” that only address the exact attack employed, rather than taking a broad view! This will leave your organization with a rickety pile of dubious fixes with lots of gaps in it!

Slide 59

Slide 59 text

It’s never this tidy. ✦ In wide-ranging or especially serious incidents, steps 1 through 3 will often be hopelessly tangled up in one another. ✦ Step 5 (“Recovery”) will often happen in (untidy) stages. ✦ All the while, people will be yelling at you. ✦ Keep working your procedures. Communicate. Remember to breathe.

Slide 60

Slide 60 text

Questions? Ask them! This lecture is copyright 2018 by Dorothea Salo. It is available under a Creative Commons Attribution 4.0 International license.

Slide 61

Slide 61 text

Individual-level incident response: an example LIS 510

Slide 62

Slide 62 text

Let’s walk through the incident-response process with an individual-level situation.

Slide 63

Slide 63 text

“My laptop keeps opening up these windows telling me my computer has been hacked! When I close them, they just pop up again! I can’t do any work like this! Argh!”

Slide 64

Slide 64 text

Step 1: Preparation ✦ This is actually what we’re doing RIGHT NOW: constructing a plan for when Something Bad happens to your computer or other device. ✦ BACK UP YOUR STUFF. OFTEN. REGULARLY. ✦ This should be part of any plan you make! ✦ I like set-and-forget cloud services for this. I use SpiderOak, myself. ✦ Research and bookmark reliable sources of help. Sure, search engines, but do you really want to sort through a million pages when you’re stressed and worried? ✦ This is highly dependent on your operating system and the settings and software on your computer, so I don’t have any suggestions that will work for everyone. Please share good resources you know about!
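
If you want a non-cloud illustration of the backup habit, here’s a short Python sketch that zips a folder into a dated archive. The paths are assumptions to change, and do move the archive somewhere other than the machine it came from, or it won’t survive the very incidents you’re preparing for.

    import shutil
    from datetime import date
    from pathlib import Path

    source = Path.home() / "Documents"        # assumption: the folder you care about
    dest_dir = Path.home() / "Backups"        # better: an external drive or synced folder
    dest_dir.mkdir(exist_ok=True)

    archive = dest_dir / f"documents-{date.today().isoformat()}"
    shutil.make_archive(str(archive), "zip", root_dir=source)
    print("Backup written to", archive.with_suffix(".zip"))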

Slide 65

Slide 65 text

Step 2: Identification ✦ Can you figure out what software is opening the popup windows? ✦ It’s likely to be a web browser, but that’s not 100% certain. ✦ Windows: Click on one of the popup windows (carefully! try not to hit a link or anything!), then hit the alt key to bring up a software menu. If that doesn’t work, start up the “Task Manager” program. ✦ Mac: Click on a popup window (carefully!), then look at the menu bar for the software name. Can’t find it? Start up the program “Activity Monitor.” ✦ In Task Manager or Activity Monitor, look for software you didn’t know you were running, and/or software that’s eating CPU time for lunch. ✦ No joy? DON’T SPEND MUCH TIME. Move on to contain the problem. ✦ Seriously, minutes count here. No more than two or three on this!
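
If you’re comfortable with Python, the same “what’s eating CPU?” check can be scripted with the third-party psutil package (pip install psutil). This is a sketch, assuming psutil is installed; Task Manager and Activity Monitor do the same job with no code at all.

    import time
    import psutil   # third-party package: pip install psutil

    # First pass primes the per-process CPU counters (cpu_percent needs two readings).
    procs = list(psutil.process_iter(attrs=["pid", "name"]))
    for p in procs:
        try:
            p.cpu_percent(None)
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            pass

    time.sleep(1)

    readings = []
    for p in procs:
        try:
            readings.append((p.cpu_percent(None), p.info["pid"], p.info["name"] or "?"))
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            pass

    # Highest CPU users first; unfamiliar names near the top deserve a closer look.
    for cpu, pid, name in sorted(readings, reverse=True)[:10]:
        print(f"{cpu:5.1f}%  pid {pid}  {name}")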

Slide 66

Slide 66 text

Step 3: Containment ✦ This looks like malware, so immediately stop your computer from communicating with the Internet. ✦ Turn off wifi, mobile hotspots, Bluetooth, everything! ✦ If you have a wired connection (I know, I know, old-school!), remove the cable. ✦ Reason: Modern malware will often 1) spread to other machines via your Internet connection, 2) make your computer do Bad Things over the Internet as part of a botnet, and/or 3) connect to a command- and-control server to get more malware and/or instructions. ✦ Can you (force-)quit the offending software? ✦ Windows: Task Manager’s “End Task” button. ✦ Mac: Apple menu, “Force Quit,” or (last resort) the “x” button in Activity Monitor

Slide 67

Slide 67 text

About the containment step ✦ You are NOT DONE troubleshooting just because you fixed the immediate symptoms! ✦ You have to actually get rid of the malware. ✦ Ideally you also figure out how the malware got to you so you don’t pick it up again.

Slide 68

Slide 68 text

Step 4: Eradication ✦ From another device, research the problem and how to fix it. Get help if you can. ✦ Helpful search terms include the name of the software (if you discovered that) and text of any messages it sent you. ✦ Try to find recent results. (DuckDuckGo: !date puts recent results first.) Exception: If you’re on an older OS, add its name to your search. ✦ Follow instructions you find, and think are reliable. ✦ Chances are good that an individual software program is infected, so you’ll likely have to erase/reinstall it. ✦ Last resort: completely erase the machine and reinstall everything, then restore backup data. ✦ DO NOT let the machine back on the Internet until you’re sure the malware is gone.

Slide 69

Slide 69 text

Steps 5 and 6: Recovery and review ✦ Get things back in order. ✦ If you don’t already know, research likely ways this malware got to your computer. ✦ Did you make a security error? How do you not do that again? ✦ (This is not about beating yourself up! This is about learning from this incident.) ✦ Research prevention techniques you can employ. Employ them! ✦ Research how to improve your computer’s security generally, while you’re at it.

Slide 70

Slide 70 text

Questions? Ask them! This lecture is copyright 2018 by Dorothea Salo. It is available under a Creative Commons Attribution 4.0 International license.

Slide 71

Slide 71 text

Incident reports and attribution

Slide 72

Slide 72 text

Incident report examples? ✦ Rarer than I wish they were, but that’s understandable. ✦ These are almost always org-internal documents, not for sharing outside the org. (Not everybody gets hauled up in front of Congress the way Equifax did!) ✦ Partly this is liability avoidance: getting sued only adds problems to an existing incident, and the more that’s known externally about what went wrong, the more reasons a lawyer can find to sue. ✦ Partly it’s not wanting to give attackers any more hints about your systems and processes. Remember, the first attack may not be the last from a given attacker! ✦ The we-got-phished report I gave you to read a little while ago is a decent example, except for being too informal in tone. (You wouldn’t hand that to a CEO.)

Slide 73

Slide 73 text

NIST on questions to answer: ✦ Exactly what happened, and at what times? [“Timeline”] ✦ How well did staff and management perform in dealing with the incident? ✦ Were the documented procedures followed? Were they adequate? ✦ What information was needed sooner? ✦ Were any steps or actions taken that might have inhibited the recovery? ✦ What would the staff and management do differently the next time a similar incident occurs? ✦ How could information sharing with other organizations have been improved? ✦ What corrective actions can prevent similar incidents in the future? ✦ What precursors or indicators should be watched for in the future to detect similar incidents? ✦ What additional tools or resources are needed to detect, analyze, and mitigate future incidents?

Slide 74

Slide 74 text

Your incident reports ✦ Which questions/guidelines/templates to use? ✦ Whichever one the knowledge you’re gathering fits into most neatly. ✦ Use NIST where it’s useful! Don’t where it isn’t. ✦ I am not grading you for your adherence to any “how to write an incident report” guidelines. ✦ If you’re communicating completely and clearly, you win! ✦ As you read about incidents, keep a (mental or physical) list of example screwups. ✦ Did the organization you’re studying do something similarly unwise? ✦ (Equifax is kind of an infosec bonanza here. Anything they could possibly have screwed up, they screwed up.)

Slide 75

Slide 75 text

“The attacker”? Who’s that? ✦ ATTRIBUTION: the “whodunit” in a mystery novel ✦ More than that, really: also the what, to whom, how, and why. ✦ Unlike mystery novels, you may never know whodunit. ✦ If the attacker is good at obfuscation, they won’t leave enough clues for you to track the attack back to them. ✦ Many techniques exist to camouflage or fake the origin of an attack coming from the Internet. DDoSes were practically invented to do this, after all! ✦ (Possibly the easiest camouflage technique for you to try yourself: “MAC address spoofing.” Look it up!) ✦ The actual person typing on the keyboard to attack you may be the least of it! Hold that thought; I’ll get back to it.

Slide 76

Slide 76 text

No attribution = problem ✦ One of the theories behind many legal systems (including in the US) is “deterrence by (threat of) punishment.” ✦ This falls all the way apart if the system can’t punish because it can’t find attackers. ✦ Or if the attacker and target are across national borders from one another, as is often the case. ✦ Or if the attacker is a group too big or diffuse to tackle. ✦ “Anonymous” and Wikileaks and others ✦ And competing legal regimes over computer-based crime don’t help at all. ✦ The attack type, system attacked, and information attacked may all cause the attack to fall under different sets of laws.

Slide 77

Slide 77 text

A word on “ADVANCED PERSISTENT THREATS” ✦ Giant buzzword. Beware of buzzwords. ✦ Originally: targeted threats from well-resourced, well-trained actors—like, “entire countries.” ✦ The kind of threat where they just never stop—if one attack doesn’t work, they try another until something does. ✦ Some organized hacking/data-exfiltration groups, like Anonymous or Wikileaks, have been considered APTs. Also terrorists, corporations. ✦ Now: broadened to “threats too tough for ordinary prevention efforts to stop.” ✦ Expert obfuscators: can hide from detection systems, logs, etc. ✦ Leverage a small breach to compromise systems further, stay in longer ✦ Definitely not doing it “for the lulz:” have specific target(s), goal(s)

Slide 78

Slide 78 text

Was the attack on Equifax due to APTs? ✦ Yes… and also no? ✦ Yes, because the current attribution theory is “China did it,” and China is well-known for APT-level attacking. ✦ Personally, I’m not 100% convinced this attribution is correct, but I don’t completely DISbelieve it either. ✦ Plenty of people have access to (and understanding of) evidence that I don’t. Trust them, not me. ✦ No, because the attackers didn’t need an APT level of skill or persistence to get in, partly because the Struts vulnerability was so severe and exploited so quickly.

Slide 79

Slide 79 text

Questions? Ask them! This lecture is copyright 2018 by Dorothea Salo. It is available under a Creative Commons Attribution 4.0 International license.