Design for Security — BSides Wellington 2017

Serena Chen
November 24, 2017

Design and security are two seemingly incongruous fields — but what if, by teaming up, we could produce a more usable, more secure world for both?

Transcript

  1. Design for Security
    @Sereeena | BSides Wellington 2K17

  2. Design for Security
    White hat social engineering

  3. (image slide)

  4. Good user experience design and good
    security cannot exist without each other.

  5. (image slide)

  6. (image slide)

  7. Why design?

  8. Everyone should be able to operate
    securely, without being experts.

  9. We need to stop expecting
    everyone to become security
    experts.

  10. “I don’t care about security.”
    —Literally everyone who isn’t watching Mr. Robot right now

  11. “Given a choice between
    dancing pigs and security, the
    user will pick dancing pigs
    every time.”
    — McGraw, G., Felten, E., and MacMichael, R.,
    Securing Java: Getting Down to Business with
    Mobile Code. Wiley Computer Pub., 1999

  12. “Given a choice between
    dancing pigs and security, the
    user will pick dancing pigs
    every time.”
    —Me(me), 2017
    CAT MEMES
    CAT MEMES

  13. (image slide)

  14. (image slide)

  15. “I AM AN ADULT”
    —Serena Chen, a Real Adult

  16. “I KNOW HOW TO
    INTERNET”

  17. (image slide)

  18. (image slide)

  19. (image slide)

  20. (image slide)

  21. Blaming the user for
    being foolish is lazy.

  22. (image slide)

  23. People just want to
    get things done.

  24. “I don’t care about security.”
    —Literally everyone who isn’t watching Mr. Robot right now

  25. “I care!!”
    —Nerds, shouting into the void

  26. It’s our job to care.

  27. It’s our job to care
    ● We need to empower through education
    ● We need to push for small, but long-lasting,
    habitual changes
    ● We need to mainstream better security
    heuristics

  28. Source: https://mobile.twitter.com/joernchen/status/915587942130896896

  29. Perfect is the enemy
    of the good

  30. [Diagram: a timeline of "amount of code written" with DESIGN, DEV, and
    TEST stages, and SECURITY only at the very end: "LOOKS GOOD!"]

  31. [Diagram: the same "amount of code written" timeline, with DESIGN and
    SECURITY running across the whole thing]

  32. “Security features should be
    invisible when you don’t need them,
    helpful when you do”
    — Adrienne Porter Felt, Chrome Security Team

  33. Serena’s ideal world:
    security features should be invisible.

  34. Often in security we build walls

  35. In security we put up walls

  36. (image slide)

  37. We’re all tools

  38. How to design thinking?

  39. The trilogy
    1. Finding intent
    2. Path of least resistance
    3. (Mis)communication and human vulnerabilities
    4. Mental model matching

  40. Finding intent
    “Tell me what you want, what you really really want”

  41. “Security opposes the
    desire to make things
    easy”

  42. It’s not the designers’
    job to make
    everything easy.

  43. It’s not the security
    team’s job to make
    things hard.

  44. What’s our job again?
    Our job is to make a specific task
    ● that a specific, legitimate user wants to do
    ● at that specific time
    ● in that specific place
    … easy.
    Everything else we can lock down.

  45. The tension between security and
    usability occurs when we cannot
    accurately determine intent.

  46. Get more specific about
    the user’s intent.

  47. ● What is the time of day?
    ● Do we know who they are?
    ● Do we know where they are?
    ● Do we know what mode they’re in?
    ● How did they get here?

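    A minimal sketch of how signals like these might combine into one decision
    (all names and thresholds are hypothetical, not from the talk; the point is
    that a confident read on intent means no prompt at all):

      # Hypothetical sketch: turn context signals into a step-up decision.
      def risk_score(ctx: dict) -> int:
          """Higher score = less certain this is the legitimate user's intent."""
          score = 0
          if not ctx.get("known_device"):         # do we know who they are?
              score += 2
          if not ctx.get("usual_location"):       # do we know where they are?
              score += 2
          if not 6 <= ctx.get("hour", 12) <= 23:  # unusual time of day?
              score += 1
          if ctx.get("referrer") is None:         # how did they get here?
              score += 1
          return score

      def decide(ctx: dict) -> str:
          """Stay invisible when intent is clear; escalate only when it isn't."""
          score = risk_score(ctx)
          if score == 0:
              return "allow"    # no prompt: security stays invisible
          if score <= 3:
              return "step_up"  # e.g. ask for a second factor, not a hard wall
          return "deny"         # everything else we can lock down

      # Known device, usual location, mid-afternoon, arrived via the dashboard:
      print(decide({"known_device": True, "usual_location": True,
                    "hour": 15, "referrer": "/dashboard"}))  # -> allow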

  48. Don’t force prompts if you can help it.
    ☞ teaches users that security obstructs their work
    ☞ trains them to dismiss prompts in general
    ☞ the false alarm effect is a thing

  49. “Each false alarm reduces the
    credibility of a warning system.”
    — S. Breznitz and C. Wolf. The psychology of false
    alarms. Lawrence Erlbaum Associates, NJ, 1984

  50. Source: Anderson et al. How polymorphic warnings reduce habituation
    in the brain: Insights from an fMRI study. In Proceedings of CHI, 2015

  51. Finding intent
    We can infer a lot from proximity and relevance, not
    just aggressive surveys.
    Use our wealth of data to test whether our
    inferences are correct. Improve, lather, rinse, repeat.

  52. (image slide)

  53. Path of Least Resistance
    “I don’t have to choose to be insecure”

  54. (image slide)

  55. (image slide)

  56. (image slide)

  57. Path of Least Resistance
    Note the zeroth order path: do nothing
    This is why we say everything should be secure by
    default. Doing nothing is the easiest and therefore
    most common action at any time in any application.

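    A minimal sketch of what secure-by-default can look like in code
    (hypothetical names; the idea is that the zeroth order path, accepting every
    default, is already the safe one, and the user has to act to widen access):

      from dataclasses import dataclass

      @dataclass
      class ShareSettings:
          # Doing nothing shares with no one.
          visibility: str = "private"   # not "public"
          link_sharing: bool = False    # no guessable share links unless asked for
          link_expiry_days: int = 7     # links that do get created eventually expire

      settings = ShareSettings()        # the "do nothing" path
      assert settings.visibility == "private"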

  58. (image slide)

  59. (image slide)

  60. Path of Least Resistance
    First order path?
    User behaviour is guided by affordances.
    ☞ Are your security-relevant interactions discoverable?

  61. (image slide)

  62. Path of Least Resistance
    First order path?
    User behaviour is guided by affordances.
    ☞ Are your security-relevant interactions discoverable?
    ☞ Is your security team discoverable?

  63. BY THE WAY

  64. Path of Least Resistance
    ● If your user doesn’t understand the consequences of
    an action — don’t surface it
    ● Hiding things under “advanced” sections is not
    enough.

  65. (image slide)

  66. Path of Least Resistance
    ● If your user doesn’t understand the consequences of
    an action — don’t surface it
    ● Hiding things under “advanced” sections is not
    enough.

  67. “I KNOW HOW TO
    INTERNET”

  68. Path of Least Resistance
    ● If your user doesn’t understand the consequences of
    an action — don’t surface it
    ● Hiding things under “advanced” sections is not
    enough.
    ☞ If you must surface it, communicate the
    consequences of the action clearly.

  69. (Mis)communication

  70. Wherever there is a
    miscommunication,
    there exists a human
    security vulnerability.

  71. What are you
    unintentionally
    miscommunicating?

    [Padlock indicator] What it says: encrypted, and is actually google.com*.
    What people think it means: secure!

  73. Wherever there is a
    miscommunication,
    there exists a human
    security vulnerability.

  74. (image slide)

  75. (image slide)

  76. people think this means secure!

  77. (I didn’t actually do this)

  78. Do your users know what
    you’re trying to communicate?

  79. What is their mental
    model of what’s
    happening compared to
    yours?

  80. Matching mental models
    [galaxy brain]

  81. It’s the user’s expectations that
    define whether a system is secure
    or not.

  82. (image slide)

  83. (image slide)

  84. FORMAL DEFINITION TIME

  85. “A system is secure from a given
    user’s perspective if the set of
    actions that each actor can do
    are bounded by what the user
    believes it can do.”
    — Ka-Ping Yee, “User Interaction Design for Secure
    Systems”, Proc. 4th Int’l Conf. Information and
    Communications Security, Springer-Verlag, 2002

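    One way to render this definition in notation (my paraphrase, not from
    Yee's paper): for a system S and a user u,

      \forall a \in \mathrm{Actors}(S):\quad
        \mathrm{CanDo}(a) \subseteq \mathrm{BelievedCanDo}_u(a)

    i.e. the system is secure for u when no actor can do anything that u
    doesn't already believe it can do.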

  86. Matching mental models
    Two approaches:
    1. Find their mental model (e.g. infer intent) and
    communicate to that
    2. Influence their mental model (e.g. manage
    expectations) to better match the system

  87. 1: What’s their model?
    ● This doesn’t mean reading people’s minds
    ○ Have you tried asking nicely?
    ○ Have you watched a non-security-expert use your
    system/process/application?
    ○ Have you observed a user session?

  88. 1: What’s their model?
    ● This doesn’t necessarily mean reading people’s
    minds
    ☞ Consider customisation
    ☞ Bad identifiers = miscommunication

  89. 2: How do we influence?
    Communicate what’s actually happening.
    If it clarifies your interface without causing further
    confusion, then it is good.

  90. (image slide)

  91. 2: How do we influence?
    ● Whenever we make things, we teach.
    ● Whenever someone interacts with us / a thing we
    made, they learn.
    ● Often the path of least resistance becomes the
    default “way to do things”.

  92. How are we already
    influencing our users’
    mental models?

  93. iOS phish
    Source: https://krausefx.com/blog/ios-privacy-stealpassword-easily-get-the-users-apple-id-password-just-by-asking

  94. What are we teaching?

  95. (image slide)

  96. (image slide)

  97. How are we already
    influencing everyone’s
    mental models?

  98. How do we start
    understanding and
    matching mental models
    for better security?

  99. (image slide)

  100. Technical problems in security are
    hard enough that we often forget the
    human side.

  101. This is why good, reliable
    security is hard.

  102. Because communicating
    and managing
    expectations through a
    thin layer of user interface
    is hard.

  103. But that’s where design
    thinking can help.

  104. What are your users’
    mental models?

  105. IN SUMMARY:

  106. (image slide)

  107. Takeaways
    ● Cross-pollination between design and security is rare
    ○ This is a massive missed opportunity! Let’s be
    friends! <3

  108. Takeaways
    ● Our job is ultimately about security outcomes
    ○ Stop expecting everyone to be experts
    ○ Let people focus on their tasks
    ○ Go from being the “no” team to being the “yes but
    what about…” team

  109. Takeaways
    ● Align the user’s goals to your security goals:
    ○ Aim to know their intent
    ○ Collaborate with design to craft more secure
    paths of least resistance
    ○ Understand the user’s mental model vs yours
    ○ Communicate to that model

  110. One final anecdote...

  111. (image slide)

  112. (image slide)

  113. (image slide)

  114. Thanks!
    Fight me @Sereeena
