Organizational behavior [in information security]

Dorothea Salo

March 23, 2022

  1. Organizational behavior:


    incentives


  2. Truths about the mechanics
    of security and privacy
    ✦ It’s difficult-to-impossible to write secure code.


    ✦ It’s expensive to write secure code, and even more
    expensive to thoroughly audit code for security.


    ✦ We’ve seen that privacy is a messy concept.
    Distilling it into code… is often near-impossible?


    ✦ Code is not good at respecting human contexts.


    ✦ Too often, neither are coders.


    ✦ Safety is an even messier concept, and it
    absolutely requires people to see beyond code.


    ✦ At minimum, you must understand what makes people feel (un)safe!
    Let’s just say tech industries do not have a great track record on that.


  3. Interdependency
    ✦ We depend on a lot of people and organizations
    to keep us private, safe, and secure.


    ✦ Including our family and friends — not just software and services!


    ✦ Key question: what INCENTIVES do they have to
    lend us a hand with that?


    ✦ Or can they just dump the problem on us? “EXTERNALITY”: a problem
    one party creates and then dumps on other parties.


    ✦ This is part of the reason I am so adamantly opposed to “they were
    stupid!” analyses. It’s easy to use this to cover up externalities.


    ✦ Spoiler answer: Not many, unfortunately. And
    those incentives compete with other, often more
    compelling, incentives.


  4. Sources of incentive


    (positive and negative)
    ✦ Law and regulation


    ✦ Market advantage


    ✦ Money (cuts both ways: cost and revenue)


    ✦ Reputation


    ✦ Professional and/or personal ethics


    ✦ or, bluntly, lack of same


    ✦ Concern for (specific or general) others who are
    somehow implicated in the business at hand


    ✦ So let’s take these one at a time, and see how they
    play out.


  5. Law and regulation…
    ✦ … we’ve talked about some.


    ✦ I believe I’ve also mentioned legal COMPLIANCE and its costs, while noting it’s
    practically never a perfect fix.


    ✦ It’s frequently the Only Incentive That Works.


    ✦ 2022: Big Exploitative Tech is pushing toothless
    privacy law in various US states, hoping that it will
    inform a similarly toothless federal law.


    ✦ REGULATORY CAPTURE: when the industry needing to be regulated pwns the
    regulators into not actually regulating very much


    ✦ 15 March 2022: Law mandating 72-hour breach
    reporting passed/signed in the US


    ✦ Incidents must “threaten national security interests, foreign relations, or the
    economy… or the public confidence, civil liberties, or public health and safety
    of the people” before they’re reportable. Actual rulemaking to come.


  6. Market advantage
    ✦ Privacy and security don’t confer any market
    advantage (that is, raise product sales and revenue).


    ✦ Why not?


    ✦ LEMON MARKET: it’s impossible for regular folks to tell whether hardware,
    software, and services are private or secure! Insecure, unprivate lemons
    sell for the same price as secure and private stuff!

    ✦ And there’s no analogue to Underwriters Laboratories (which certifies
    electrical appliances as safe) for online security or privacy.

    ✦ “INFORMATION ASYMMETRY:” customers know very little about the privacy
    and security of their software/services, because companies don’t disclose.

    ✦ A lot of insecure stuff is a one-time purchase, so by the time the buyer
    knows it’s a lemon, there’s nothing they can really do. (Internet of Things!)

    ✦ Because of other considerations (e.g. network effects, usability), even
    software/services PROVEN unprivate/insecure may do just fine. (Zoom!)


  7. Exceptions?
    ✦ Search engines: DuckDuckGo seems to be doing
    okay! Privacy is its main market differentiator. Neeva
    is a new market entrant.


    ✦ Google’s focus on EVER MORE ADS ON RESULTS PAGES isn’t helping it,
    granted.


    ✦ Web browsers? Brave, Firefox (with add-ons)


    ✦ N.b. Firefox’s market share is… the smallest of the major browsers.


    ✦ Why anybody trusts Chrome I can’t fathom… but it’s the top browser.


    ✦ Mobile? Apple certainly trumpets iOS privacy (not
    always wholly truthfully).


    ✦ Conclusion: It’s possible to move a market in privacy/
    security-protecting directions. Not easy.


    ✦ And because of lemon markets, it can feel like swimming upstream.


  8. Money
    ✦ Secure software costs. A lot.


    ✦ Insecurity may cost little to nothing.


    ✦ (and can even be insured against… up to a point, hold that thought)


    ✦ Privacy, at least in the pre-GDPR age, amounts to
    leaving money on the table.


    ✦ Not selling data to data brokers? Not using data to lure advertisers with
    microtargeting? Well, how are you even making money online, then?


    ✦ Except in a very few industries, secure and/or
    private software won’t make its developers any
    more money.


    ✦ It’s not a “feature” customers use to differentiate between products.


    ✦ Monetary incentives here do not favor privacy and
    security!


  9. “Cyberinsurance” and its issues
    ✦ Early cyberinsurers priced risk much too low… and
    got slammed financially because of it.


    ✦ Orgs with crappy infosec were like “we have insurance, we don’t have to
    care about infosec!” Then they got pwned and insurers had to pay.


    ✦ Ransomware particularly occasioned big payouts. So did careless data
    leaks and breaches.


    ✦ Quite a few insurers left the business because they ran out of money.


    ✦ Underwriting is quite a bit more cautious today,
    with more carveouts.


    ✦ If you don’t take “reasonable precautions,” you may not be able to get
    insurance, and if you do get insurance but you aren’t doing infosec
    more or less right, insurers won’t pay for incidents.


    ✦ What “reasonable precautions” are is in flux, but think “basic best
    practices” and “periodic security audits.” I think this will tighten up.


  10. Reputation
    ✦ Some amoral jerks in various IT-related industries
    straight-up don’t care about this!


    ✦ I can’t explain Mark Zuckerberg, Elizabeth Holmes, Jack Dorsey, or
    Elon Musk any other way. They. Just. Do. Not. Care.


    ✦ (Or, put another way, security and privacy failures don’t harm their
    reputations enough for them to notice, much less care.)


    ✦ Few organizations stake much on their
    reputation for privacy or security.


    ✦ There is also a phenomenon called “PRIVACY SALIENCE:” talking up
    privacy causes people to be distrustful. Argh.


    ✦ Apple is one that commits, to some extent. (Not that they don’t screw
    up—they do. But they take public stances other IT firms won’t.)


    ✦ Zero-knowledge storage providers seek out the tech-savvy.


  11. Public claims about
    security and privacy…
    ✦ … can be like waving red flags in front of bulls.


    ✦ The infosec community not-uncommonly WILL try to falsify such
    public claims, even when they’re clearly true.


    ✦ That said, such claims can and do overreach.


    ✦ A lot. A LOT. Hype really brings the infosec folks down on your head!


    ✦ And sometimes the only cure for clueless or wrongheaded hype is a
    good old-fashioned public shaming.


    ✦ Still… this contributes to the lemon-market
    problem, because the makers of thoughtfully secured
    products worry about drawing the wrong attention!


  12. Bad PR is a thing.
    ✦ It’s possible for a bad security or privacy screwup
    to scuttle a product or service, given enough
    mainstream media coverage.


    ✦ Quite a few Internet of Things things have gone under.


    ✦ Mostly this hits startups, though. If you’re
    Facebook or Google, you’re more or less immune,
    no matter how much you screw up.


    ✦ And both of them have. A lot.


    ✦ Coverage of screwups is unsystematic and gappy.
    A lot of offenders squeak by unnoticed and
    unscathed.


  13. Ethics
    ✦ Not as much of an incentive toward privacy and
    security as I honestly wish it were.


    ✦ I return to Zuckerberg, Holmes, Dorsey, Musk… if any of them
    pretends to personal ethics, I have seen little to no evidence of it.


    ✦ Even in my own profession, librarianship, which
    has had privacy as a major ethical tenet for
    nearly a century.


    ✦ “Assessment” and “learning analytics” and “personalization” are
    eroding librarianship’s privacy commitments before my eyes.


    ✦ I am absolutely furious—apoplectic—about this erosion and actively
    working to stop it… but I’m in a distinct minority in the field just now.


    ✦ It’s an active, vocal minority. I have hope! But this is, and will
    continue to be, a slog.


  14. Concern and care
    ✦ Where concern for others, and their privacy and
    security, exists in technology ecosystems and
    industries…


    ✦ … it’s usually grassroots, and not powerful.


    ✦ It can also be pretty clueless about how power and oppression work.
    (Though it isn’t always, to be fair.)


    ✦ This is… let’s go with “unfortunate.”


    ✦ “Unconscionable” and “callous” and “abusive” and “vile” also spring to
    mind, but I’m extra like that.


    ✦ Example: Google activists, 2016ish-2022


    ✦ Working to (among other things) get Google to stop helping ICE


    ✦ Google fired them. They sued, and eventually settled.


    ✦ Google hasn’t stopped working with garbage humans.


  15. But IT isn’t alone in this.
    ✦ When I explain to my classes that “personalization”
    means surveillance…


    ✦ … I get a shrug from most.


    ✦ “I don’t have anything to hide. I find personalized ads useful. Why should I
    care about the associated privacy loss?”


    ✦ The whiter, cis-er, male-r, and wealthier the student, the more likely this is
    their response. This is not coincidental. More societal power = less worry
    for self, less concern for others.


    ✦ Security and privacy are communal and
    interdependent: if some of us give them away, others
    lose them too—and they may need them more.


    ✦ We can also lose both as a society. Arguably that’s in progress right now!


    ✦ Please care. Even if you’re not personally worried.


  16. So that’s bleak.
    ✦ Yeah. I’d soften it if I could. I can’t.


    ✦ But we are not helpless. We have options for
    creating incentives where there are none.


    ✦ Money talks. Do not buy Internet of Things things. Ever. Not for
    yourself, and for pity’s sake, not for others!


    ✦ Because of the GDPR, regulation is within our grasp (for the first
    time, really). Activism is an option!


    ✦ Including local activism on local concerns. (Students have
    successfully pushed back on exam proctoring, for example.)


    ✦ Explain, teach, help. “Each one teach one” works.


    ✦ Be vocal! Demand better!


  17. Questions? Ask them!
    This lecture is copyright 2018 by Dorothea Salo.


    It is available under a Creative Commons Attribution
    4.0 International license.


  18. Organizational behavior:


    organizations


  19. Your boss comes to you and says,


    “We want to expand into the


    European Union.


    We hear this GDPR thing


    is Serious Business.


    Tell me what to do about it!”


    What do you say?


    And what does your boss say


    when you say it?


  20. Ugh, workplace politics
    ✦ Yeah, I know. Not a fan (or an expert) either.


    ✦ But if you want to be effective—get things done
    —you have to understand how workplaces work
    and how to work within that.

    ✦ Giant, GIANT caveat: this is to some extent culturally-determined,
    and I am not an expert in international organizational behavior!

    ✦ Take everything I say as US-based. Definitely don’t assume it applies
    anywhere else.


    ✦ Lecture thesis: US workplace cultures behave in
    predictable ways that (for now) are often not
    positive for security.


  21. New thing = Do Not Want
    ✦ Digital information security is a new thing for most
    organizations.


    ✦ 2018: GDPR created mad scrambles in many (most?) EU organizations.


    ✦ Organizations do not embrace new things with glad
    cries. (I am the voice of experience on this!)


    ✦ One common reaction: “What’s the least we can get away with here?” (Your
    readings mention “free-riding.” For most orgs, that’s ideal for New Things!)


    ✦ Another: “Put the new thing (and people associated with it) in a corner
    where it won’t get in the way.”


    ✦ Another: Organizational turf wars (for power, people, resources), especially
    when the new thing displaces old things somehow


    ✦ Result: Infosec underresourced, understaffed, under-authoritied


  22. New thing = confusion
    ✦ Common. Possibly unavoidable?


    ✦ Security: the Organizational Structure Wars


    ✦ Where does the “CHIEF INFORMATION SECURITY OFFICER” (CISO) fit
    within the organization?

    ✦ Where there even is a CISO. Where there isn’t, who’s responsible for
    security? If something goes wrong, where does the buck stop?

    ✦ Security is hard. It conflicts with other incentives, especially for IT
    employees. If IT is also responsible for security… guess what loses
    when there’s a conflict between security and something else?

    ✦ Other departments that might become responsible for security suffer
    similar conflicts of interest.

    ✦ When there’s an incident, ask yourself if security
    staff could realistically do their jobs.


  23. Micro-confusion: BYOD
    and Shadow IT
    ✦ BRING-YOUR-OWN-DEVICE (BYOD)


    ✦ Super-common with mobile (phones/tablets) in many workplaces


    ✦ These then become security endpoints… that security staff have
    limited visibility into and limited means to secure

    ✦ SHADOW IT

    ✦ Another clash between security and usability/convenience

    ✦ IT won’t do the thing you (think you) need? Maybe because security
    staff told them (or you) no?

    ✦ Time to do an end run around them! It’s easy. Software-as-a-service!

    ✦ (I do this with DigitalOcean regularly, because putting up a server,
    even just for class practice, is A-G-O-N-Y here. I’ve tried!)

    ✦ But if data ends up there, and security staff don’t know and/or can’t
    fix it, and there’s a breach… you see the problem, I hope.


  24. Patching
    ✦ It’s possibly the most basic, fundamental security
    advice there is: patch your systems and software!


    ✦ But individuals and organizations often don’t.


    ✦ Whyever not?!?!?!?!?!


    ✦ Is there even a patch? (Internet of Things!)


    ✦ Convenience, usability (patching is annoying and interrupty)


    ✦ Ripple effect on other systems/software: if a patch is itself buggy, or if
    it changes something that another piece of software relies on…


    ✦ Control/authority. Do you have enough of it to patch, or to make
    somebody else patch? (Often requires root/administrator privileges!)


  25. Questions of trust
    ✦ Information security staff are often the least-liked,
    least-trusted IT staff in an organization. Why?

    ✦ They say “no” to people a lot. (Including by proxy—other IT staff may
    say “no” and blame it on security.) They increase friction. They slow
    down projects all over the organization.

    ✦ The benefits they provide are non-obvious to most folks…

    ✦ … but many security failures are super-obvious, and they are natural
    blame targets even when the failure was not remotely their fault.

    ✦ Because they are a New Thing, they are often embroiled in turf wars and
    other internal organizational conflict. This often means less support from
    colleagues and management.

    ✦ Not to put too fine a point on it, but… like all IT staff (all humans, really,
    but IT staff have a not-wholly-unearned reputation) they can be jerks.
    And being treated the way they typically are is a short road to
    resentment, burnout, and associated jerkitude.


  26. It’s really hard to do any kind of
    job when you’re not trusted.


    It’s extra-hard


    for information security staff,


    whose work often depends


    on convincing other people


    to do things differently.


  27. I know I keep harping on
    this, but…
    ✦ “They were stupid!” is not an acceptable
    explanation for a privacy or security failure.


    ✦ Look deeper.


    ✦ Look at incentives. Look at common
    organizational-behavior difficulties.


    ✦ That’s why I’m teaching you about them. You may
    or may not be able to change them—but being
    aware of them gives you a chance, at least.


    ✦ “They were stupid!” does not give you a clear course of action… or
    points to actions that don’t work (e.g. training on social engineering).


  28. Questions? Ask them!
    This lecture is copyright 2018 by Dorothea Salo.


    It is available under a Creative Commons Attribution
    4.0 International license.


  29. Vulnerabilities and
    vulnerability disclosure


  30. All software and systems
    have bugs.
    ✦ The more complex the software or system, the more bugs in it.


    ✦ The more complex its interactions with other software,
    the more bugs are involved.


    ✦ Bugs are often obscure, hard to find, hard to fix. Even for
    top-notch, experienced developers!


    ✦ Bug reports aren’t always written helpfully!


    ✦ Bugs can come from other people/systems you depend
    on.


    ✦ Software builds on other software, practically always. Auditing every line of code
    in every library your software imports and every programming language it uses…
    yeah, no, not going to happen. Not reasonable to expect!


    ✦ Because all vulnerabilities are bugs (well, except
    backdoors), these axioms are also true of vulnerabilities.


  31. Vulnerability-specific
    problems
    ✦ Software-engineer training tends to be light on
    “how to develop code that is secure.”


    ✦ There are… some… best practices. At least there are lists of things to
    look for! (CODE SMELL: a hint that code may be buggy.)


    ✦ There are automated “vulnerability assessment” tools that look for
    bugs and code smells, but like most automated tools, they miss stuff.
    (A tiny example of a code smell follows this list.)

    ✦ This is part of the reasoning behind my explain-an-attack assignment.
    Knowing I’ll have CS and SE folks in this class… this may be all the
    exposure they get to vulnerabilities and exploits!

    ✦ Many QA/QC/testing folks don’t have enough
    background in secure development either.

    ✦ If that’s not bad enough, some cowboy coders don’t like to hear
    they’re wrong, much less be told to fix it.
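To make “code smell” concrete, here is a minimal sketch (not on the original slide; the table, column, and function names are made up) of the kind of pattern automated vulnerability-assessment tools flag: SQL built by string formatting versus a parameterized query.

```python
# Hypothetical illustration of a classic code smell: SQL injection via string formatting.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")

def lookup_smelly(name: str):
    # Smell: user input is pasted straight into the SQL text, so an input like
    # "x' OR '1'='1" changes the meaning of the query (SQL injection).
    return conn.execute(
        f"SELECT secret FROM users WHERE name = '{name}'").fetchall()

def lookup_safer(name: str):
    # Parameterized query: the driver keeps the data separate from the SQL code.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()
```

A static analyzer can flag the first function from its shape alone, without running it. That is exactly why such tools are cheap to apply, and also why they miss anything that doesn’t match a known pattern.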


  32. Testing for vulnerabilities
    ✦ “Working as expected” doesn’t, um, work.


    ✦ Security isn’t a “feature” you can test normally.


    ✦ Security flaws are rarely apparent during software use!

    ✦ Vulnerability assessments, yes… but really, what
    you have to do is attack the software.

    ✦ Our good friend “adversarial thinking” returns!

    ✦ Yes, this means you use the same tools and techniques on your
    software that an attacker would try. (A tiny sketch follows this list.)

    ✦ If that feels ethically a bit off… well, yes, it gets into some gray areas.
    We talk about “ethical hacking” elsewhere in the course.

    ✦ The earlier you test, the better.

    ✦ The later you catch a bug, typically the more complicated it is to fix…
    and the more tempting it is to just hope nobody finds it.
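Here is a minimal sketch of what “attack your own software” looks like at the smallest possible scale (not on the original slide; the render_comment function and the inputs are hypothetical): encode a few attacker-style inputs as tests and assert they are neutralized rather than processed as-is.

```python
# Hypothetical security-flavored tests: feed attacker-style input and
# assert the code refuses it instead of "working as expected."
import pytest
from myapp import render_comment  # hypothetical function under test

HOSTILE_INPUTS = [
    "<script>alert(1)</script>",   # HTML/JS injection attempt
    "x' OR '1'='1",                # SQL-injection-shaped string
    "../../etc/passwd",            # path traversal attempt
    "A" * 1_000_000,               # absurdly long input
]

@pytest.mark.parametrize("payload", HOSTILE_INPUTS)
def test_hostile_input_is_neutralized(payload):
    out = render_comment(payload)
    # The exact assertion depends on the contract; here we just insist
    # that raw script tags never survive into the rendered output.
    assert "<script>" not in out
```

Nothing here proves the code is secure; it only records a few attacker moves as regression tests, so that a fix, once made, stays made.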


  33. Common attack tool:
    Metasploit
    ✦ Let me be clear about this: Metasploit is used by
    good people… as well as garbage humans.


    ✦ Many infosec folks call Metasploit users “script kiddies,” and there’s
    some justification for that… but we all have to start somewhere.

    ✦ It’s sort of a meta-exploit tool? (Thus the name.)

    ✦ It provides a single interface for launching hundreds (thousands?) of
    exploits at systems—a throwing-spaghetti-at-the-wall tool.

    ✦ It doesn’t reeeeeeeally do this fully automatedly. You have to know as
    much as possible about the system you’re trying to exploit and the
    exploits that are available, and you have to do some configuration for
    each attempt at exploitation.


  34. On “SECURITY BY OBSCURITY”
    ✦ (= keeping source code secret so nobody discovers its vulnerabilities)


    ✦ SOURCE CODE: human-readable programming code (as opposed to
    “compiled” or “binary” code, no longer human-readable)


    ✦ It doesn’t work. Louder: IT DOES NOT WORK.


    ✦ Finding a vulnerability does not depend on reading the source code. Lots
    of vulnerability-finding techniques just involve poking at the software in
    specific ways while it’s running; a bare-bones sketch follows this list.
    (I’m not an expert on this, but if you’re curious, ask; I know a few things.)

    ✦ Therefore, if garbage humans can run the software, they can find
    vulnerabilities in it. Then they exploit them! And you won’t know!


    ✦ Not having the source code DOES slow down security experts and
    bughunters. This helps security how? Hint: It doesn’t!


    ✦ Extremely common misconception about software
    security. I want you to know better.
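To show what “poking at the software while it’s running” can mean without any source code, here is a bare-bones sketch (not on the original slide; the ./target binary is hypothetical) of black-box fuzzing: throw malformed input at a program and watch for crashes.

```python
# Hypothetical black-box fuzzing loop: no source code required.
import random
import subprocess

def random_blob() -> bytes:
    """Random bytes of varying length: short, long, or very long."""
    length = random.choice([1, 64, 4096, 65536])
    return bytes(random.randrange(256) for _ in range(length))

for i in range(200):
    data = random_blob()
    try:
        proc = subprocess.run(["./target"], input=data,
                              capture_output=True, timeout=5)
    except subprocess.TimeoutExpired:
        print(f"hang on iteration {i}")  # hangs are interesting too
        continue
    # On POSIX, a negative return code means the process died on a signal
    # (e.g. -11 is SIGSEGV), a strong hint of a memory-safety bug.
    if proc.returncode < 0:
        print(f"crash on iteration {i}, signal {-proc.returncode}")
        with open(f"crash_{i}.bin", "wb") as f:
            f.write(data)  # keep the input so the crash can be reproduced
```

Real fuzzers (AFL++, libFuzzer, and friends) are enormously smarter than this, but the point stands: none of them need the source code to find vulnerabilities.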


  35. But code sharing is not a
    panacea either.
    ✦ We kiiiiiiiiinda used to think it was?


    ✦ Reasoning behind much “open-source software is more secure than
    proprietary software” hype: “many eyes make bugs shallow.”


    ✦ Then Heartbleed and Shellshock happened.


    ✦ Short version: vital, exceptionally commonly-used open-source
    Internet utility software had serious, major, awful vulnerabilities. No
    one noticed them for YEARS… until they were exploited!


    ✦ Turned out the software had very few developers, no paid developers,
    ergo little-to-no bughunting and bugfixing capacity.

    ✦ A lot of open code, even really important open
    code, is understaffed, underfunded, undertested.

    ✦ This is a recipe for unfixed vulnerabilities.


  36. You discover a vulnerability.
    Now what?
    ✦ What are your actual options?


    ✦ Exploit the vulnerability, if you are a garbage human. (Schneier
    passed this by; I will too. I don’t like thinking about garbage humans.)


    ✦ Fix the vulnerability yourself.


    ✦ Report the vulnerability quietly to the responsible party. (Often with
    a “proof-of-concept” [often abbreviated POC] exploit.)


    ✦ Tell the whole world (and MITRE!) about the vulnerability.


    ✦ Don’t say anything to anyone. Just keep quiet about it.


    ✦ Which option to pick? That comes right back to
    incentives.


  37. Bugfix incentives
    ✦ Can you?


    ✦ If it’s a vulnerability in an open-source software package, AND you
    have apropos coding skill, you can.


    ✦ Otherwise, fixing the vulnerability is not even an option.


    ✦ How will you and your patch be treated?


    ✦ If you approach a typical open-source project with an urgent
    security patch as an outsider to the project’s developers, expect
    hostile treatment. (Remember the thing about coders not liking to
    be wrong?)


    ✦ Heaven HELP you if you are female or non-binary or trans, a person
    of color, queer, and/or disabled… (a lot of open-source projects are
    rife with garbage humans)


  38. Quiet reporting incentives
    ✦ Will you be rewarded?


    ✦ “BUG BOUNTY” programs (demonstrate a vulnerability, get paid by the
    responsible party) cropping up. Most are legit. Some aren’t.


    ✦ Responsible parties may admire (and/or hire!) quiet bug reporters.


    ✦ Will the responsible party pay attention to you?


    ✦ Sadly, they often don’t. Even Troy “yes, you’ve been pwned” Hunt
    can’t always get companies to listen to him. Startups are the worst!


    ✦ Will the responsible party try to smear you? Sue
    you? Have you arrested for cracking?


    ✦ All of these have happened to good-Samaritan vulnerability reporters.


    ✦ Will the vulnerability even get fixed?!


    ✦ Some orgs utterly ignore quiet reports.


  39. Public reporting incentives
    ✦ Credit for finding the vulnerability

    ✦ If you’re part of the infosec community, this matters.

    ✦ Pressures the responsible party to fix the
    vulnerability


    ✦ especially if they’re known to be unresponsive to quiet reports


    ✦ or you already tried a quiet report and got nowhere


    ✦ Could get the vulnerability exploited faster


    ✦ which, since you’re not a garbage human, is not what you want!


    ✦ Could get you smeared, sued, or arrested by the
    angry responsible party


    ✦ More likely for a public report than a quiet one


  40. Incentives to shut up
    ✦ You’re a vulnerability hoarder. You are awful.


    ✦ “Not my circus, not my monkeys”


    ✦ Smearing/lawsuit/arrest avoidance


    ✦ Avoiding sexist/racist/homophobic/ableist
    software-development communities


    ✦ Why do a favor for a community that treats people like this?!


    ✦ Is the vulnerability somehow a competitive
    advantage for you (Schneier)?


    ✦ I would not have thought of this one, but I believe Schneier.


  41. So… it’s messy.
    ✦ I hope you have a better sense of why it can take
    so long to get a vulnerability patched.


    ✦ IT industries have a serious communications
    problem around vulnerabilities specifically.

    ✦ It’s understandable! But it’s also suboptimal.

    ✦ It’s not clear who has the authority to fix it.

    ✦ Personally, I’d like to see MITRE take this on. The CVE list is an
    impressive achievement.

    ✦ But bad laws that endanger security researchers and bughunters are
    also a problem here, and MITRE can’t fix the law by itself!


  42. 2021: Supply-chain
    springs a leak
    ✦ How patching often works:


    ✦ Software checks a selection of update servers for a new version.
    (Versions are numbered, so it’s easy to check if what’s there is new.) If
    it finds one, it downloads and installs it.


    ✦ Why “a selection”? Because software can be developed locally, by a
    paid developer, or open-source. Plenty of places to look!


    ✦ 2021: Malware in fake patches


    ✦ Say you have developed a program called OurSoft locally. Patches and
    new versions live on an update server inside your organization.


    ✦ OurSoft starts to update itself… but is set up to look OUTSIDE the
    organization for new versions of software first.


    ✦ Attacker puts malware named OurSoft with a high version number
    on a public update server. OurSoft innocently downloads and installs
    it! Uh-oh.
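Here is a stripped-down sketch of why “look outside first” goes wrong (not on the original slide; the package name, version numbers, and indexes are made up): the updater compares version numbers across every index it knows about, and the attacker only has to publish a bigger number on the public one.

```python
# Hypothetical dependency-confusion scenario: a naive updater that
# prefers whichever index advertises the highest version number.

INTERNAL_INDEX = {"oursoft": "2.3.1"}    # the org's own update server
PUBLIC_INDEX = {"oursoft": "99.0.0"}     # attacker-published lookalike

def parse(version: str) -> tuple:
    """Turn '2.3.1' into (2, 3, 1) so versions compare numerically."""
    return tuple(int(part) for part in version.split("."))

def pick_update(package: str) -> str:
    candidates = []
    for source, index in [("internal", INTERNAL_INDEX), ("public", PUBLIC_INDEX)]:
        if package in index:
            candidates.append((parse(index[package]), source))
    # Naive policy: "newest wins," regardless of where it came from.
    version, source = max(candidates)
    return source

print(pick_update("oursoft"))   # -> "public": the malicious package wins
```

The usual fixes are boring configuration, not clever code: pin internal packages to the internal index, and never let an external registry shadow an internal name.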


  43. Questions? Ask them!
    This lecture is copyright 2018 by Dorothea Salo.


    It is available under a Creative Commons Attribution
    4.0 International license.


  44. The blame game
    when organizations prefer


    to blame somebody else


    for their crappy infosec


  45. The “oh crap” moment
    ✦ OH CRAP SOMETHING’S GONE WRONG


    ✦ somebody says they found a vulnerability in our software/service


    ✦ our data leaked somehow


    ✦ “we’re trending now on Reddit” —Forrest Brazeal


    ✦ OH NO IT’S BAD, WHAT NOW?!?!?!?!?!


    ✦ Look. If you’re panicking, you’ll respond the
    wrong way. This is a Human Thing.


    ✦ Nobody thinks well when they’re panicking. The more that depends
    on a human thinking well while panicking… the more will go wrong.


    ✦ If you weren’t trained on how to respond, you’ll
    respond the wrong way.


    ✦ This happens A LOT with social-media folks who don’t know infosec.


  46. You should be so lucky


    as never to have


    to respond to an incident.


    (You won’t be that lucky.)


    So let’s talk


    about Doing Incident PR Wrong.


  47. (there’s a whole module


    elsewhere in the course


    on other aspects


    of incident response)


  48. Wrong response:


    bluster and DARVO
    ✦ DARVO: Deny, Attack, Reverse Victim and
    Offender.


    ✦ “We did nothing wrong! It’s your fault for saying something, you big
    evil meanie hacker you!”


    ✦ A too-common dishonorable and disreputable response.


    ✦ Everybody’s seen it. Everybody will call it out. Don’t even try it.


    ✦ “Our thing is the Most Securest Thing Ever!”


    ✦ I mean, unless you WANT all of Infosec Twitter working on proving
    how wrong this is… don’t say it.


    ✦ This is commonly how an org’s social-media people mess up. TRAIN
    THEM, okay?


  49. Do not be Mike Parson.


  50. Why you do not want


    to be Governor Parson
    ✦ He made himself look amazingly foolish, and kept
    doubling down on his foolishness.


    ✦ The journalist looks like a hero!


    ✦ Who’s going to want to work infosec for the
    Missouri state government now?!


    ✦ Arguably, Parson made his state government a tempting target for
    “for the lulz” attackers, and damaged its security posture by
    demonizing a good-faith vulnerability reporter such that no one else
    will report vulnerabilities and Parson’s government won’t be able to
    hire people to fix them.


  51. Better: acknowledge,
    apologize, make amends
    ✦ Basically, “Yep, that happened, it sucked, we’re
    SORRY, here’s what we’re going to do about it.”


    ✦ only in proper formal business diction, of course


    ✦ (Even if your org’s branding is informal, this is NOT THE TIME for
    being cutesy or funny—that’ll just make people angrier. Stay formal.
    Again, I’m showing you What Not To Do here!)


    ✦ Where you don’t know something yet, say “We
    don’t know {thing} yet; we’re working on it. We’ll
    report out when we do know.”


    ✦ It’s important to say this as quickly as possible! People will want
    answers you don’t have yet! Even when it’s not remotely reasonable to
    expect that you have those answers!


    ✦ You never want to look like you’re covering stuff up.


  52. Better: Be specific about
    your security measures.
    ✦ instead of “Most Securest Thing Ever”


    ✦ If you meet ISO 27K or PCI-DSS or whatever
    relevant standards, say so, and (as applicable) say
    when you were last audited.


    ✦ “Our product/service is professionally pentested
    every {so often}. The last time was Y.”


    ✦ “Our bug bounty program has found X vulns and
    paid out $Y.”


    ✦ “We investigate all reports of vulnerabilities. Please
    report via {whatever way the org uses}.”


    ✦ Calm, specific, non-braggy, non-hype-y.


  53. Bad and getting-worse
    response: Silence.
    ✦ Thought process: “if nobody knows, we won’t get in
    trouble!”


    ✦ This is behind attempts to hide breaches behind attorney-client privilege. It’s
    why breach-reporting laws exist!


    ✦ It’s garbage humanning anytime people could be hurt because of the incident.


    ✦ The coverup, once exposed, will be hugely worse PR
    than the actual incident was. Guaranteed.


    ✦ It’s also hard to put resources into fixing something
    that nobody’s admitting happened.

    ✦ Super-common thing: infosec staff see vuln, ask for resources to fix vuln, get
    told no. Attack exploits vuln, gets big press, org blames/silences infosec staff.

    ✦ Infosec staff 1) leave, 2) blow the whistle, with receipts. This doesn’t improve
    matters for the org, to say the least. Example: Fairfax County Schools
    ransomware attack, 2020.


  54. [image-only slide]

  55. Better: disclose.
    ✦ It’s not fun. It can be expensive. Do it anyway.


    ✦ I know, I know, business doesn’t care what the right thing to do is…
    but this is the right thing to do, okay?


    ✦ It’s also the only thing that lets the org keep even
    a little trust. Coverups will KILL the org if they
    get out.


    ✦ Oh, and, um, don’t try to bribe the vulnerability
    reporter to stay silent. They won’t. They want
    you to FIX your dang VULNERABILITY.


    ✦ If you do fix it, that lets them report that you did! Which makes you look
    all responsible and stuff! You win!


  56. Goes for individuals too!
    ✦ If you think something you did may have opened
    the door to an attack, report it AT ONCE.


    ✦ In an attack, speed of response is absolutely vital to limiting damage.


    ✦ The sooner infosec/IT knows, the faster they can find the attackers
    and kick them out, and the faster they can isolate affected systems so
    that attacks don’t spread.


    ✦ Is there risk of a bad boss reaction? Yeah. But
    frankly, the risk is HIGHER if you just sit there
    and let an attack happen.


    ✦ And a workplace that punishes you for doing the right thing is a
    workplace that is showing its underwear. Start jobhunting.


    ✦ Report report REPORT, please.


  57. Wrong response:


    attack vuln reporters
    ✦ The Parson case is only one way to do this.


    ✦ Some orgs have yelled at vulnerability reporters,
    or threatened them, or sued them (DMCA), or
    tried to get them arrested (usually CFAA).


    ✦ Others seem bewildered at the reports and
    respond… weirdly?


    ✦ Troy Hunt’s blog and Twitter have seen some stuff.


    ✦ Once responses like this get out into the infosec
    community, your org is not in a good place.


  58. Better:


    thank, investigate, disclose
    ✦ Thank the person reporting the vulnerability.


    ✦ Yeah, even if it’s inaccurate or timewasting.


    ✦ A boilerplate thank-you is FINE, no need to get creative every time.


    ✦ Investigate. Fix.


    ✦ Disclose, if necessary


    ✦ Tell the reporter you fixed it. That builds trust.


    ✦ If there’s an actual leak/breach/other Bad Thing involved, disclose.


  59. P-R-O-C-E-S-S.
    ✦ None of the better responses I’ve laid out for you
    here is exactly instinctual.


    ✦ Defensiveness is just one of those Human Things, you know?


    ✦ So is fear, all the way up to panic.


    ✦ This means that orgs need to build, document,
    resource, and train on good practice.


    ✦ IT bug triagers need to know to escalate vulnerability reports.


    ✦ Social-media/PR folks need to know not to DARVO or “most securest
    thing ever!” and where to make a security report.


    ✦ Admin needs to know that butt-covering is a bad move. Because
    they’re admin, their moves MUST be enshrined in org policy.


  60. Questions? Ask them!
    This lecture is copyright 2022 by Dorothea Salo.


    It is available under a Creative Commons Attribution
    4.0 International license.
