
Styliana Sarris - Reporting your User Research Findings like a Psychologist

Our user research findings lack gravity when they are reported poorly.

Oftentimes, we omit key pieces of information or unwittingly breach ethical guidelines. Despite the various benefits of having some creative freedom, as it stands, the user research report comes in too many shapes and sizes. This impacts both our ability to quickly consume previous research conducted by fellow researchers and, more importantly, denies us the ability to appraise its quality, a fundamental function of research.

Although presenting our research insights in long slabs of text and convoluted words is a far cry from the desired objective, we have a lot to gain from leveraging the structure that psychologists turn to when presenting their findings: the academic journal.

The design of the academic journal is very intentional, with each section devoted to addressing a key aspect of the scientific method. If user research is to legitimately guide large-scale product decisions, we are not exempt from holding ourselves to high standards of rigour, but we also don’t need to be terrified by them. What we need is a standard that works in our unique context: one where time is money, deliverables are created in a hurry and budgets are minimal.

In this talk, we will home in on the architecture of the academic journal and explore what we can borrow from it to raise the bar on our research reporting, ultimately maximising the weight and longevity of our insights.

uxaustralia

March 18, 2021

Transcript

  1. Reporting your user research insights like a psychologist


  2. A deep understanding of the people we are
    designing for is cardinal to a product’s success


  4. Our research process can be very unclear to our stakeholders


  6. An immediate solution is to bring
    stakeholders along on the journey


  7. What about the stakeholders you
    don’t have direct access to?


  9. As it stands, user research reports come in too many shapes and sizes.
    They oftentimes miss key information and follow different structures.
    This impacts our ability to…


  10. Source: Searching for Explanations: How the Internet Inflates Estimates of Internal Knowledge, Journal of Experimental Psychology: General, June 2015, by Matthew Fisher, Mariel K. Goddu, and Frank C. Keil


  13. Why was the Introduction section designed?
    It is easy to fall into the trap of getting excited about a problem space or research
    question and jumping straight into data collection.
    “Let’s just talk to users!”


  14. Elements of the Introduction


  15. The Introduction is designed to hold us accountable: it demands
    that we make a case for why research is needed,
    versus a default “we need to talk to users!”


  17. Why was the Method section designed?
    It is easy to fall into the trap of skipping over some details of how you conducted an
    experiment. This is because it is tedious and of no immediate personal use
    (you know what you did!)


  18. Elements of the Method pt.1


  19. Elements of the Method pt.2


  20. Individuals are selected from the population to form the sample.
    Findings derived from the sample are generalised to the wider population.


  21. Individuals are selected from the population to form the sample.
    Findings derived from the sample are generalised to the wider population.
    If all your report says is “we spoke to 5 users” then we can’t
    verify if this happened…
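
    If a report is to be verifiable, the Method section needs to record who was sampled and how. As a minimal, hypothetical sketch in Python (the structure and every field name here are illustrative, not from the talk), that record could look like:

        from dataclasses import dataclass

        @dataclass
        class SampleRecord:
            """Hypothetical record of sampling details for a Method section."""
            population: str            # who the findings should generalise to
            recruitment_channel: str   # e.g. customer panel, intercept, agency
            inclusion_criteria: list   # screening rules applied to participants
            sample_size: int           # how many people were actually studied
            incentive: str             # what participants received

        # "We spoke to 5 users" becomes verifiable once the details are written down:
        study_sample = SampleRecord(
            population="Current customers of the product in Australia",
            recruitment_channel="Customer research panel",
            inclusion_criteria=["Used the product in the last 30 days"],
            sample_size=5,
            incentive="Gift card",
        )
        print(study_sample)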


  23. Why was the Results section designed?
    It’s very easy to fall into the trap of looking at your results at face value and
    making huge inferential leaps about their meaning.


  24. Elements of the Results section


  25. [Chart: UEQ scale scores. Attractiveness 1.1; Pragmatic Qualities: Perspicuity 1.40, Efficiency 1.47, Dependability 1.10; Hedonic Qualities: Stimulation 0.40, Novelty 0.20]
    Say we only reported the descriptive statistics of our study...
    User Experience Questionnaire → https://www.ueq-online.org/


  26. User Experience Questionnaire → https://www.ueq-online.org/
    Inferential statistics give us a better understanding of the precision of the estimate,
    and help us make inferences from our sample to the wider population. They give us
    confidence in our findings, and as you can see here, Stimulation and Novelty are
    not statistically significant (the p-value is not < 0.05).
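
    As a rough sketch of that step (the scores below are invented, and testing each scale mean against the neutral midpoint of 0 is one common choice, not necessarily what the UEQ analysis tool does), an inferential check in Python could look like:

        from scipy import stats

        # Invented per-participant UEQ scale scores; UEQ items run from -3 to +3,
        # so 0 is the neutral midpoint.
        scale_scores = {
            "Efficiency":  [1.8, 1.2, 1.6, 1.1, 1.4],
            "Stimulation": [1.0, -0.5, 0.9, -0.3, 0.8],
        }

        for scale, scores in scale_scores.items():
            # One-sample t-test of the scale mean against the neutral score of 0.
            result = stats.ttest_1samp(scores, 0.0)
            mean = sum(scores) / len(scores)
            verdict = "significant" if result.pvalue < 0.05 else "not significant"
            print(f"{scale}: mean = {mean:.2f}, p = {result.pvalue:.3f} ({verdict})")

    Reporting the p-value (or a confidence interval) alongside the mean is what lets a reader judge how much weight the estimate can bear.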


  27. [Chart: the same UEQ scale scores as on slide 25]
    If we reported these findings without looking at whether they were
    statistically significant, we would have misled our stakeholders by
    drawing the wrong conclusions.
    User Experience Questionnaire → https://www.ueq-online.org/


  29. Why was the Discussion section designed?
    It’s very easy to fall into the trap of refraining from sharing the flaws of your study
    and becoming defensive when questioned about its quality.


  30. Elements of the Discussion section


  32. Why was the Abstract section designed?
    It’s very easy to fall into the trap of thinking your audience will have time to read
    your full report in detail.


  33. Elements of the Abstract section
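
    Pulling the sections together, a hypothetical report skeleton mirroring the journal structure this talk walks through (the one-line purposes are a paraphrase, not the speaker's wording) might be as simple as:

        # Purely illustrative: the journal sections covered in this talk,
        # with a one-line reminder of what each is for.
        REPORT_SECTIONS = {
            "Abstract":     "a summary for readers who won't read the full report",
            "Introduction": "the case for why this research was needed",
            "Method":       "who was sampled, how, and what they were asked to do",
            "Results":      "descriptive and inferential findings",
            "Discussion":   "interpretation, limitations and flaws of the study",
        }

        for section, purpose in REPORT_SECTIONS.items():
            print(f"{section}: {purpose}")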


  34. “All this is great,
    but we don’t have time or budget for it”


  35. What we don’t have time for is wasting human capital and $
    on “research” which is neither valid nor sound.


  36. If our research is to inform large-scale product decisions
    directly impacting a large user base...
    we need to lift our game.


  38. Thank you
    Sydney — Melbourne — Brisbane — San Francisco — New York —
    London — Wrocław — Dubai — Mumbai — Singapore — Tokyo
    LinkedIn
    Facebook
    Twitter
    Instagram
    Tigerspike
    We deliver business value, fast.
    tigerspike.com
    Styliana Sarris
    Senior UX Designer
    [email protected]


  39. How can we apply this
    to User Research?


  41. What body of knowledge do we already have?
    What does the literature say?
