
Design for Security — BSides Wellington 2017

Serena Chen
November 24, 2017

Design and security are two seemingly incongruous fields — but what if, by teaming up, we could produce a more usable, more secure world for both?

Transcript

  1. “Given a choice between dancing pigs and security, the user will pick dancing pigs every time.” — McGraw, G., Felten, E., and MacMichael, R. Securing Java: Getting Down to Business with Mobile Code. Wiley Computer Pub., 1999
  2. “Given a choice between CAT MEMES and security, the user will pick CAT MEMES every time.” — Me(me), 2017
  3. It’s our job to care • We need to empower through education • We need to push for small but long-lasting, habitual changes • We need to mainstream better security heuristics
  4. [Slide graphic: “AMOUNT OF CODE WRITTEN” alongside “~*~ DESIGN ~*~” and “~*~ SECURITY ~*~”.]
  5. “Security features should be invisible when you don’t need it, helpful when you do” — Adrienne Porter Felt, Chrome Security Team
  6. The trilogy: 1. Finding intent 2. Path of least resistance 3. (Mis)communication and human vulnerabilities 4. Mental model matching
  7. What’s our job again? Our job is to make a specific task • that a specific, legitimate user wants to do • at that specific time • in that specific place … easy. Everything else we can lock down.
  8. • What is the time of day? • Do we know who they are? • Do we know where they are? • Do we know what mode they’re in? • How did they get here? (A sketch of combining these signals follows.)
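
None of these questions requires interrupting the user. As a rough sketch of how such signals might combine into a decision about friction (every name, weight, and threshold here is a hypothetical illustration, not something from the talk):

```typescript
// Hypothetical sketch: combine contextual signals into a risk score,
// so friction (prompts, step-up auth) appears only when risk is high.
interface RequestContext {
  hourOfDay: number;        // 0–23, in the user's local time
  knownDevice: boolean;     // have we seen this device before?
  knownLocation: boolean;   // does the location match their usual area?
  referrerTrusted: boolean; // how did they get here?
}

function riskScore(ctx: RequestContext): number {
  let score = 0;
  if (ctx.hourOfDay < 6) score += 1;  // unusual hour (illustrative weight)
  if (!ctx.knownDevice) score += 2;   // a new device is a strong signal
  if (!ctx.knownLocation) score += 2;
  if (!ctx.referrerTrusted) score += 1;
  return score;
}

// Interrupt the user only when the combined signals warrant it.
function needsStepUpAuth(ctx: RequestContext): boolean {
  return riskScore(ctx) >= 3; // threshold chosen arbitrarily for the sketch
}
```

The shape is the point: gather what the system already knows before reaching for a prompt.
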
  9. Don’t force prompts if you can help it. ☞ teaches users that security obstructs their work ☞ trains them to dismiss prompts in general ☞ the false alarm effect is a thing
  10. “Each false alarm reduces the credibility of a warning system.” — S. Breznitz, Cry Wolf: The Psychology of False Alarms. Lawrence Erlbaum Associates, NJ, 1984
  11. Source: Anderson et al. How polymorphic warnings reduce habituation in the brain: Insights from an fMRI study. In Proceedings of CHI, 2015
  12. Finding intent: We can infer a lot from proximity and relevance, not just aggressive surveys. Use our wealth of data to test whether our inferences are correct. Improve, lather, rinse, repeat.
  13. Path of Least Resistance: Note the zeroth-order path: do nothing. This is why we say everything should be secure by default. Doing nothing is the easiest, and therefore the most common, action at any time in any application. (A sketch of secure defaults follows.)
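
To make “secure by default” concrete, here is a minimal sketch, assuming a hypothetical sharing feature (the setting names and values are illustrative, not from the deck): the do-nothing path yields the safest configuration, and weakening it takes a deliberate, explicit step.

```typescript
// Hypothetical sketch: the "do nothing" path is the secure path.
// A user who never opens settings gets the safest configuration.
interface ShareSettings {
  visibility: "private" | "link" | "public";
  allowDownloads: boolean;
  linkExpiresDays: number | null; // null = never expires
}

// Secure defaults: private, no downloads, links that expire.
const DEFAULT_SETTINGS: ShareSettings = {
  visibility: "private",
  allowDownloads: false,
  linkExpiresDays: 7,
};

// Loosening security requires an explicit, deliberate override.
function createShare(overrides: Partial<ShareSettings> = {}): ShareSettings {
  return { ...DEFAULT_SETTINGS, ...overrides };
}

// Example: the zero-effort call is also the safest one.
const safeByDefault = createShare();                              // private
const deliberatelyPublic = createShare({ visibility: "public" }); // opt-in risk
```
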
  14. First-order path? User behaviour is guided by affordances. ☞ Are your security-relevant interactions discoverable? (Path of Least Resistance)
  15. First-order path? User behaviour is guided by affordances. ☞ Are your security-relevant interactions discoverable? ☞ Is your security team discoverable? (Path of Least Resistance)
  16. • If your user doesn’t understand the consequences of an action — don’t surface it • Hiding things under “advanced” sections is not enough. (Path of Least Resistance)
  17. • If your user doesn’t understand the consequences of an action — don’t surface it • Hiding things under “advanced” sections is not enough. ☞ If you need it, communicate the consequences of any action clearly. (Path of Least Resistance)
  18. “A system is secure from a given user’s perspective if the set of actions that each actor can do are bounded by what the user believes it can do.” — Ka-Ping Yee, “User Interaction Design for Secure Systems”, Proc. 4th Int’l Conf. Information and Communications Security, Springer-Verlag, 2002 (A sketch of this property follows.)
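
Yee’s property reads almost literally as a subset check: each actor’s actual capabilities must fall within what the user believes that actor can do. A hypothetical sketch (the types and function here are illustrative, not Yee’s):

```typescript
// Hypothetical sketch of Yee's property: the system is secure from the
// user's perspective when every actor's real capabilities are a subset
// of what the user believes that actor can do.
type Action = string;
type Capabilities = Map<string, Set<Action>>; // actor -> allowed actions

function isSecureForUser(
  actual: Capabilities,   // what each actor can really do
  believed: Capabilities, // what the user thinks each actor can do
): boolean {
  for (const [actor, actions] of actual) {
    const expected = believed.get(actor) ?? new Set<Action>();
    for (const action of actions) {
      // Any capability the user never expected violates the property.
      if (!expected.has(action)) return false;
    }
  }
  return true;
}
```

The violations this flags (capabilities the user never expected) are exactly the mental-model mismatches the next slides address.
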
  19. Two approaches: 1. Find their mental model (e.g. infer intent) and communicate to that 2. Influence their mental model (e.g. manage expectations) to better match the system (Matching mental models)
  20. • This doesn’t mean reading people’s minds ◦ Have you tried asking nicely? ◦ Have you watched a non-security-expert use your system/process/application? ◦ Have you observed a user session? (1: What’s their model?)
  21. • This doesn’t necessarily mean reading people’s minds ☞ Consider customisation ☞ Bad identifiers = miscommunication (see the example below) (1: What’s their model?)
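
As one concrete illustration of “bad identifiers = miscommunication” (my example, not the deck’s): raw machine identifiers force users to reason in the system’s vocabulary instead of their own.

```typescript
// Hypothetical sketch: present identifiers in the user's terms, not ours.
interface Session {
  deviceId: string;   // e.g. "a91f-33c0-..." (meaningless to most users)
  deviceName: string; // e.g. "Sarah's MacBook Pro"
  city: string;       // coarse location, e.g. "Wellington"
  lastActive: Date;
}

// Bad: leaks internal vocabulary; users can't judge legitimacy.
function badLabel(s: Session): string {
  return `Session ${s.deviceId}`;
}

// Better: matches how the user already thinks about their own devices.
function goodLabel(s: Session): string {
  return `${s.deviceName} near ${s.city}, last active ${s.lastActive.toLocaleDateString()}`;
}
```

A user reviewing active sessions can act on “Sarah’s MacBook Pro near Wellington”; they can only guess at a hex string.
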
  22. Communicate what’s actually happening. If it clarifies your interface without causing further confusion, then it is good. (2: How do we influence?)
  23. • Whenever we make things, we teach. • Whenever someone interacts with us / a thing we made, they learn. • Often the path of least resistance becomes the default “way to do things”. (2: How do we influence?)
  24. • Cross-pollination between design and security is rare ◦ This is a massive missed opportunity! Let’s be friends! <3 (Takeaways)
  25. • Our job is ultimately about security outcomes ◦ Stop expecting everyone to be experts ◦ Let people focus on their tasks ◦ Go from being the “no” team to being the “yes but what about…” team (Takeaways)
  26. • Align the user’s goals to your security goals: ◦ Aim to know their intent ◦ Collaborate with design to craft more secure paths of least resistance ◦ Understand the user’s mental model vs yours ◦ Communicate to that model (Takeaways)