Design for Security — BSides Wellington 2017

Serena Chen
November 24, 2017


Design and security are two seemingly incongruous fields — but what if by teaming up we produce a more usable, more secure world for both?


Transcript

“Given a choice between dancing pigs and security, the user will pick dancing pigs every time.” — McGraw, G., Felten, E., and MacMichael, R., Securing Java: Getting Down to Business with Mobile Code. Wiley Computer Publishing, 1999
“Given a choice between cat memes and security, the user will pick cat memes every time.” — Me(me), 2017
It’s our job to care:
• We need to empower through education
• We need to push for small, but long-lasting, habitual changes
• We need to mainstream better security heuristics
[Diagram: “amount of code written” — design vs. security]
“Security features should be invisible when you don’t need it, helpful when you do.” — Adrienne Porter Felt, Chrome Security Team
The trilogy:
1. Finding intent
2. Path of least resistance
3. (Mis)communication and human vulnerabilities
4. Mental model matching
What’s our job again? Our job is to make a specific task • that a specific, legitimate user wants to do • at that specific time • in that specific place … easy. Everything else we can lock down.
• What is the time of day?
• Do we know who they are?
• Do we know where they are?
• Do we know what mode they’re in?
• How did they get here?
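The questions above can be read as inputs to a lightweight risk score that decides how much friction to apply. A minimal sketch, not a production authentication policy — every signal name, weight, and threshold here is an invented assumption:

```python
from dataclasses import dataclass

@dataclass
class Context:
    """Signals we may already have about the current request (all hypothetical)."""
    hour: int                      # local time of day, 0-23
    known_user: bool               # do we know who they are?
    known_location: bool           # do we know where they are?
    came_from_trusted_page: bool   # how did they get here?

def risk_score(ctx: Context) -> int:
    """Sum simple penalties; higher means less confidence in the user's intent."""
    score = 0
    if not ctx.known_user:
        score += 3
    if not ctx.known_location:
        score += 2
    if not ctx.came_from_trusted_page:
        score += 1
    if not 7 <= ctx.hour <= 22:    # outside this user's typical hours (assumed)
        score += 1
    return score

def needs_extra_friction(ctx: Context, threshold: int = 3) -> bool:
    """Only interrupt the user (prompt, re-auth) when confidence is low."""
    return risk_score(ctx) >= threshold
```

Under this sketch, a known user in a known place arriving via a trusted path sails through with no prompt at all; the friction is reserved for the unusual cases.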
Don’t force prompts if you can help it.
☞ teaches users that security obstructs their work
☞ trains them to dismiss prompts in general
☞ the false alarm effect is a thing
“Each false alarm reduces the credibility of a warning system.” — S. Breznitz, Cry Wolf: The Psychology of False Alarms. Lawrence Erlbaum Associates, NJ, 1984
Source: Anderson et al., “How polymorphic warnings reduce habituation in the brain: Insights from an fMRI study.” In Proceedings of CHI, 2015
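The finding above is that varying a warning’s presentation slows habituation. A toy sketch of the idea — purely illustrative, with invented wording; real polymorphic warnings vary layout and interaction, not just the copy:

```python
import itertools

# A few presentation variants of the same underlying warning (invented examples).
VARIANTS = [
    "This site's certificate could not be verified.",
    "Stop: we can't confirm who runs this site.",
    "Careful: this connection may not be private.",
]

_cycle = itertools.cycle(VARIANTS)

def next_warning() -> str:
    """Rotate through variants so users never see the identical prompt twice in a row."""
    return next(_cycle)
```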
Finding intent: We can infer a lot from proximity and relevance, not just aggressive surveys. Use our wealth of data to test whether our inferences are correct. Improve, lather, rinse, repeat.
Path of Least Resistance: Note the zeroth-order path: do nothing. This is why we say everything should be secure by default. Doing nothing is the easiest, and therefore the most common, action at any time in any application.
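One way to read “secure by default”: make the do-nothing path the safe one. A minimal sketch, assuming a hypothetical server configuration — the class and every field name are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class ServerConfig:
    """Hypothetical config: a user who touches nothing gets the secure choice."""
    tls_enabled: bool = True            # encryption on unless explicitly disabled
    allow_anonymous: bool = False       # no access without identity by default
    session_timeout_minutes: int = 30   # sessions expire rather than live forever
    admin_ips: list[str] = field(default_factory=list)  # deny all until allowlisted

# Doing nothing -- the zeroth-order path -- yields a locked-down server:
config = ServerConfig()
```

The point of the sketch is the direction of effort: users must opt *out* of security, never opt *in* to it.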
First-order path? User behaviour is guided by affordances.
☞ Are your security-relevant interactions discoverable?
☞ Is your security team discoverable?
• If your user doesn’t understand the consequences of an action — don’t surface it.
• Hiding things under “advanced” sections is not enough.
☞ If you need it, communicate the consequences of any action clearly.
“A system is secure from a given user’s perspective if the set of actions that each actor can do are bounded by what the user believes it can do.” — Ka-Ping Yee, “User Interaction Design for Secure Systems”, Proc. 4th Int’l Conf. on Information and Communications Security, Springer-Verlag, 2002
Matching mental models. Two approaches:
1. Find their mental model (e.g. infer intent) and communicate to that
2. Influence their mental model (e.g. manage expectations) to better match the system
1: What’s their model?
• This doesn’t mean reading people’s minds.
◦ Have you tried asking nicely?
◦ Have you watched a non-security-expert use your system/process/application?
◦ Have you observed a user session?
• This doesn’t necessarily mean reading people’s minds.
☞ Consider customisation
☞ Bad identifiers = miscommunication
2: How do we influence? Communicate what’s actually happening. If it clarifies your interface without causing further confusion, then it is good.
• Whenever we make things, we teach.
• Whenever someone interacts with us / a thing we made, they learn.
• Often the path of least resistance becomes the default “way to do things”.
Takeaways:
• Cross-pollination between design and security is rare.
◦ This is a massive missed opportunity! Let’s be friends! <3
• Our job is ultimately about security outcomes.
◦ Stop expecting everyone to be experts
◦ Let people focus on their tasks
◦ Go from being the “no” team to being the “yes, but what about…” team
• Align the user’s goals to your security goals:
◦ Aim to know their intent
◦ Collaborate with design to craft more secure paths of least resistance
◦ Understand the user’s mental model vs. yours
◦ Communicate to that model