
Design for Security — BSides Wellington 2017

Serena Chen
November 24, 2017


Design and security are two seemingly incongruous fields — but what if by teaming up we produce a more usable, more secure world for both?


Transcript

  1. Design for Security @Sereeena | BSides Wellington 2K17

  2. Design for Security: White hat social engineering

  3. None
  4. Good user experience design and good security cannot exist without each other.
  5. None
  6. None
  7. Why design?

  8. Everyone should be able to operate securely without being an expert.

  9. We need to stop expecting everyone to become security experts.

  10. “I don’t care about security.” —Literally everyone who isn’t watching

    Mr. Robot right now
  11. “Given a choice between dancing pigs and security, the user

    will pick dancing pigs every time.” —MCGRAW, G., FELTEN, E., AND MACMICHAEL, R. Securing Java: getting down to business with mobile code. Wiley Computer Pub., 1999
  12. “Given a choice between dancing pigs and security, the user

    will pick dancing pigs every time.” —Me(me), 2017 CAT MEMES CAT MEMES
  13. None
  14. None
  15. “I AM AN ADULT” —Serena Chen, a Real Adult

  16. “I KNOW HOW TO INTERNET”

  17. None
  18. None
  19. None
  20. None
  21. Blaming the user for being foolish is lazy.

  22. None
  23. People just want to get things done.

  24. “I don’t care about security.” —Literally everyone who isn’t watching

    Mr. Robot right now
  25. “I care!!” —Nerds, shouting into the void

  26. It’s our job to care.

  27. It’s our job to care • We need to empower through education • We need to push for small but long-lasting, habitual changes • We need to mainstream better security heuristics
  28. Source: https://mobile.twitter.com/joernchen/status/915587942130896896

  29. Perfect is the enemy of the good

  30. [Chart: amount of code written over time, with Design, Dev, and Test up front and Security arriving only at the very end: “LOOKS GOOD!”]

  31. [Chart: the same timeline, but with Design and Security woven throughout the whole process instead of bolted onto the ends]
  32. “Security features should be invisible when you don’t need it,

    helpful when you do” — Adrienne Porter Felt, Chrome Security Team
  33. Serena’s ideal world: security features should be invisible.

  34. Often in security we build walls

  35. In security we put up walls

  36. None
  37. We’re all tools

  38. How to design thinking?

  39. The trilogy 1. Finding intent 2. Path of least resistance

    3. (Mis)communication and human vulnerabilities 4. Mental model matching
  40. Finding intent “Tell me what you want, what you really

    really want”
  41. “Security opposes the desire to make things easy”

  42. It’s not the designers’ job to make everything easy.

  43. It’s not the security team’s job to make things hard.

  44. What’s our job again? Our job is to make a

    specific task • that a specific, legitimate user wants to do • at that specific time • in that specific place … easy. Everything else we can lock down.
  45. The tension between security and usability occurs when we cannot

    accurately determine intent.
  46. Get more specific about the user’s intent.

  47. • What is the time of day? • Do we

    know who they are? • Do we know where they are? • Do we know what mode they’re in? • How did they get here?
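
     (A rough sketch of the kind of signal-weighing these questions point at; the
     signal names, scoring, and threshold below are invented for illustration and
     are not from the talk.)

        from dataclasses import dataclass

        @dataclass
        class RequestContext:
            # Hypothetical signals we may already hold about a request.
            hour: int                        # local time of day, 0-23
            known_device: bool               # have we seen this device before?
            usual_location: bool             # rough location matches their history?
            arrived_via_expected_flow: bool  # came through our own UI, not a pasted link

        def needs_step_up(ctx: RequestContext, action_is_sensitive: bool) -> bool:
            # Only add friction (re-auth, a confirmation prompt) when we are not
            # confident this is the legitimate user acting intentionally AND the
            # action is actually worth protecting.
            confidence = sum([
                ctx.known_device,
                ctx.usual_location,
                ctx.arrived_via_expected_flow,
                6 <= ctx.hour <= 23,         # crude "normal waking hours" signal
            ])
            return action_is_sensitive and confidence <= 2

        # Example: unfamiliar device and location at 3am, sensitive action -> prompt.
        ctx = RequestContext(hour=3, known_device=False, usual_location=False,
                             arrived_via_expected_flow=True)
        assert needs_step_up(ctx, action_is_sensitive=True)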
  48. Don’t force prompts if you can help it. Forcing prompts ☞ teaches users that security obstructs their work ☞ trains them to dismiss prompts in general ☞ the false alarm effect is a thing
  49. “Each false alarm reduces the credibility of a warning system.” — S. Breznitz, Cry Wolf: The Psychology of False Alarms. Lawrence Erlbaum Associates, NJ, 1984
  50. Source: Anderson et al. How polymorphic warnings reduce habituation in

    the brain: Insights from an fMRI study. In Proceedings of CHI, 2015
  51. Finding intent: We can infer a lot from proximity and relevance, not just aggressive surveys. Use our wealth of data to test whether our inferences are correct. Improve, lather, rinse, repeat.
  52. None
  53. Path of Least Resistance “I don’t have to choose to

    be insecure”
  54. None
  55. None
  56. None
  57. Path of Least Resistance. Note the zeroth-order path: do nothing. This is why we say everything should be secure by default. Doing nothing is the easiest, and therefore the most common, action at any time in any application.
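
     (A minimal sketch of “secure by default”; the settings object and field names
     are hypothetical, not from the talk. The point is that the zeroth-order path,
     accepting the object untouched, already lands the user in the safe state.)

        from dataclasses import dataclass

        @dataclass
        class SharingSettings:
            # Every default is the restrictive option, so doing nothing is secure;
            # exposure is something users opt in to, never something they must opt out of.
            link_sharing_enabled: bool = False   # private unless explicitly shared
            visible_to_search: bool = False      # not indexed unless asked for
            require_sign_in: bool = True         # viewers must authenticate

        # The do-nothing path is the secure one:
        defaults = SharingSettings()
        assert not defaults.link_sharing_enabled and defaults.require_sign_in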
  58. None
  59. None
  60. First order path? User behaviour is guided by affordances. ☞

    Are your security-relevant interactions discoverable? Path of Least Resistance
  61. None
  62. First order path? User behaviour is guided by affordances. ☞

    Are your security-relevant interactions discoverable? ☞ Is your security team discoverable? Path of Least Resistance
  63. BY THE WAY

  64. • If your user doesn’t understand the consequences of an

    action — don’t surface it • Hiding things under “advanced” sections is not enough. Path of Least Resistance
  65. None
  66. • If your user doesn’t understand the consequences of an

    action — don’t surface it • Hiding things under “advanced” sections is not enough. Path of Least Resistance
  67. “I KNOW HOW TO INTERNET”

  68. • If your user doesn’t understand the consequences of an

    action — don’t surface it • Hiding things under “advanced” sections is not enough. ☞ If you need it, communicate the consequences of any action clearly. Path of Least Resistance
  69. (Mis)communication

  70. Wherever there is a miscommunication, there exists a human security

    vulnerability.
  71. What are you unintentionally miscommunicating?

  72. “encrypted” + “is actually google.com*”: people think this means secure!

  73. Wherever there is a miscommunication, there exists a human security

    vulnerability.
  74. None
  75. None
  76. people think this means secure!

  77. (I didn’t actually do this)

  78. Do your users know what you’re trying to communicate?

  79. What is their mental model of what’s happening compared to

    yours?
  80. Matching mental models [galaxy brain]

  81. It’s the user’s expectations that define whether a system is

    secure or not.
  82. None
  83. None
  84. <blink> FORMAL DEFINITION TIME </blink>

  85. “A system is secure from a given user’s perspective if

    the set of actions that each actor can do are bounded by what the user believes it can do.” — Ka-Ping Yee, “User Interaction Design for Secure Systems”, Proc. 4th Int’l Conf. Information and Communications Security, Springer-Verlag, 2002
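
     (One way to restate Yee’s definition as a set inclusion; the notation is mine,
     not the slide’s. A system is secure from user u’s perspective if, for every
     actor a,

        actual_actions(a) ⊆ believed_by_u(a)

     i.e. nothing any actor can actually do falls outside what u believes it can do;
     any action outside that set is a violated expectation, and therefore a security
     problem from u’s point of view.)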
  86. Two approaches: 1. Find their mental model (e.g. infer intent)

    and communicate to that 2. Influence their mental model (e.g. manage expectations) to better match the system Matching mental models
  87. • This doesn’t mean reading people’s minds ◦ Have you

    tried asking nicely? ◦ Have you watched a non-security-expert use your system/process/application? ◦ Have you observed a user session? 1: What’s their model?
  88. • This doesn’t necessarily mean reading people’s minds ☞ Consider

    customisation ☞ Bad identifiers = miscommunication 1: What’s their model?
  89. Communicate what’s actually happening. If it clarifies your interface without

    causing further confusion, then it is good. 2: How do we influence?
  90. None
  91. • Whenever we make things, we teach. • Whenever someone

    interacts with us / a thing we made, they learn. • Often the path of least resistance becomes the default “way to do things”. 2: How do we influence?
  92. How are we already influencing our users’ mental models?

  93. Source: https://krausefx.com/blog/ios-privacy-stealpassword-easily-get-the-users-apple-id-password-just-by-asking iOS phish

  94. What are we teaching?

  95. None
  96. None
  97. How are we already influencing everyone’s mental models?

  98. How do we start understanding and matching mental models for

    better security?
  99. None
  100. Technical problems in security are hard enough on their own that we often forget the human side.
  101. This is why good, reliable security is hard.

  102. Because communicating and managing expectations through a thin layer of

    user interface is hard.
  103. But that’s where design thinking can help.

  104. What are your users’ mental models?

  105. IN SUMMARY:

  106. None
  107. • Cross-pollination between design and security is rare ◦ This is a massive missed opportunity! Let’s be friends! <3 Takeaways
  108. • Our job is ultimately about security outcomes ◦ Stop

    expecting everyone to be experts ◦ Let people focus on their tasks ◦ Go from being the “no” team to being the “yes but what about…” team Takeaways
  109. • Align the user’s goals to your security goals: ◦

    Aim to know their intent ◦ Collaborate with design to craft more secure paths of least resistance ◦ Understand the user’s mental model vs yours ◦ Communicate to that model Takeaways
  110. One final anecdote...

  111. None
  112. None
  113. None
  114. Thanks! Fight me @Sereeena