
The Case for Public Service Recommender Algorithms

Ben Fields
October 06, 2018

In this position paper we lay out the role for public service organisations within the fairness, accountability, and transparency discourse. We explore the idea of public service algorithms and what role they might play, especially with recommender systems. We then describe a research agenda for public service recommender systems.

(As given at FATRec2018: https://piret.gitlab.io/fatrec2018/program/)

Transcript

  1. The Case for Public Service
    Recommender Algorithms
    Ben Fields (presenting)
    Rhianne Jones
    Tim Cowlishaw
    BBC

  2. Motivations and Aims

  3. Motivations and Aims
    Using data to deliver personalised services
    has become a key strategic priority
    for Public Service Media (PSM)
    organisations across Europe

  4. Motivations and Aims
    Concerns about potential risks to PSM values:
    • potential to undermine shared and collective media experiences
    • reinforcement of audiences’ preexisting preferences
    • (public service) media becoming more like a goldfish bowl
    than a window to the world
    • many examples in recent work: Pasquale 2015, Van den Bulck and
    Moe 2017, Bennett 2018, Lotz 2018, Sørensen and Hutchinson 2018

  5. Motivations and Aims
    For traditional commercial applications,
    the goal of a recommender is a
    straightforward extension of an
    organisation’s overall commercial aims
    https://www.flickr.com/photos/24354425@N03/16593266327

  6. “inform,
    educate,
    and
    entertain”

  7. “inform, educate, and entertain”
    <=?=>
    Fairness, accountability, and transparency

  8. A Research Agenda
    We join wider calls for PSM to do personalisation
    differently (Bennett 2018, Helberger 2015)
    We do this from a specific PSM context but with wider
    relevance in mind

  9. A Research Agenda
    Thus far, guidance on how PSM should approach
    personalisation has been vague and insufficient to
    drive implementation

  10. Our position paper asks: how can and should
    public service algorithms that enshrine
    the principles of Fairness, Accountability,
    and Transparency (FAT) lead to novel ways
    to design recommender algorithms?

  11. Question 1: How do we operationalise public
    service media (PSM) values as tangible concepts
    in specific PSM contexts?

  12. operationalise PSM
    • Caveat: notions of public service inevitably vary across
    different geo-political and cultural contexts (Helberger
    2015)
    • This is principally about how core aspects of a PSM
    approach/remit have implications for how we design and
    evaluate recommenders
    • Key challenges: operationalising concepts like diversity,
    surprise, shared experience, etc.
    The Case for Public Service Recommender Algorithms | FATrec18 | 6 October 2018

    View Slide

  13. operationalise PSM
    “any initiative to promote diversity
    exposure will first have to deal with
    ‘the question of what exposure
    diversity actually is’ as well as how
    to measure it” (Helberger et al. 2018)
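
To make the measurement question concrete: a purely illustrative sketch (not the metric Helberger et al. propose) is to score exposure diversity as the Shannon entropy of the topic mix a user was actually shown. The topic labels and the exposure log are hypothetical.

    import math
    from collections import Counter

    def exposure_diversity(exposed_topics):
        """Shannon entropy (in bits) of the topic distribution of
        items a user was exposed to; higher = more diverse."""
        counts = Counter(exposed_topics)
        total = sum(counts.values())
        h = -sum((c / total) * math.log2(c / total)
                 for c in counts.values())
        return abs(h)  # abs() only normalises the -0.0 edge case

    # A user shown only politics items scores 0.0 bits;
    # an even mix across four topics scores 2.0 bits.
    print(exposure_diversity(["politics"] * 10))
    print(exposure_diversity(["politics", "arts", "sport", "science"]))

Even this toy exposes the definitional problem the quote raises: the score depends entirely on the choice of topic taxonomy and on what counts as an “exposure”.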

  14. Question 2: What metrics are useful to optimise for
    (e.g. diversity or serendipity), and how should the
    importance of different metrics be balanced in
    different PSM contexts?

  15. Optimise the metrics
    Can we select our metrics to explicitly address civic
    goals (one candidate measure is sketched below):
    • to counter filter-bubble effects (Bozdag and van den
    Hoven 2015)?
    • for sociocultural diversity (Sheth et al. 2011)?
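
A minimal sketch of one such measure, assuming (hypothetically) that items carry content feature vectors: the mean pairwise cosine distance within a recommended slate, where values near zero flag a homogeneous, filter-bubble-like slate.

    import numpy as np

    def intra_list_diversity(item_vectors):
        """Mean pairwise cosine distance within a slate of 2+ items;
        near 0 = homogeneous slate, larger = more diverse."""
        v = np.asarray(item_vectors, dtype=float)
        v = v / np.linalg.norm(v, axis=1, keepdims=True)
        sims = v @ v.T
        n = len(v)
        # Average cosine similarity over off-diagonal pairs,
        # then convert similarity to distance.
        mean_sim = (sims.sum() - n) / (n * (n - 1))
        return 1.0 - float(mean_sim)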

  16. Optimise the metrics
    How can we broaden the scope of these metrics, e.g. to
    serendipity or self-actualisation? (one common formulation
    of serendipity is sketched below)
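
Serendipity has several formulations in the recommender systems literature; a common family scores an item by its unexpectedness relative to the user’s history, gated by relevance. A hedged sketch, with the item vectors and the relevance judgement both hypothetical:

    import numpy as np

    def serendipity(item_vec, history_vecs, is_relevant):
        """Unexpectedness = 1 - max cosine similarity to any
        previously consumed item; a serendipitous item must be
        both unexpected and relevant."""
        item = np.asarray(item_vec, dtype=float)
        hist = np.asarray(history_vecs, dtype=float)
        item = item / np.linalg.norm(item)
        hist = hist / np.linalg.norm(hist, axis=1, keepdims=True)
        unexpectedness = 1.0 - float((hist @ item).max())
        return unexpectedness if is_relevant else 0.0

Self-actualisation resists any such reduction to a formula, which is rather the point of the question.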


  17. Optimise the metrics
    “even if an algorithm is designed with the goal of
    stimulating ‘diversity’ an assessment of its performance
    by other measures nullifies these good intentions” (van
    Es 2017)


  18. Question 3: What data (metadata/audience data) should
    algorithms work on, what are the limits of this data in
    its current form, and how might awareness of this inform
    new approaches?

  19. Data selection
    • Content vs. metadata vs. behavioural?
    • Privacy/ethical concerns
    • Regulatory requirements


  20. Question 4: How much accuracy loss is acceptable in
    pursuit of new metrics, e.g. diversity?

  21. At what (accuracy) cost?
    • Tradeoffs must be made explicitly to minimise
    unexpected and undesirable outcomes (one way to surface
    the tradeoff is sketched below)
    • Can this question be answered through other
    research practices (e.g. audience research, UX
    methodologies)?
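
One way to surface the accuracy/diversity tradeoff explicitly is a greedy re-ranker in the style of maximal marginal relevance, where a single inspectable parameter encodes the balance. Everything here (the candidates structure, the feature vectors, the default weight) is a hypothetical sketch, not the authors’ system:

    import numpy as np

    def cosine(a, b):
        return float(np.dot(a, b) /
                     (np.linalg.norm(a) * np.linalg.norm(b)))

    def rerank(candidates, k, lambda_=0.7):
        """Greedily select k items from candidates, a dict of
        item_id -> (predicted_relevance, feature_vector).
        lambda_=1.0 optimises accuracy only; 0.0 diversity only."""
        chosen = []
        remaining = dict(candidates)
        while remaining and len(chosen) < k:
            def mmr_score(item_id):
                rel, vec = remaining[item_id]
                if not chosen:
                    return lambda_ * rel
                max_sim = max(cosine(vec, candidates[c][1])
                              for c in chosen)
                return lambda_ * rel - (1.0 - lambda_) * max_sim
            best = max(remaining, key=mmr_score)
            chosen.append(best)
            del remaining[best]
        return chosen

Because the weight is explicit, “how much accuracy loss is acceptable” becomes a reviewable editorial decision rather than a buried implementation detail.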

  22. Question 5: How should transparency work? When and
    to whom is it useful, e.g. regulators?

  23. Transparency for whom?
    Principles of transparency are key to the mission of PSM:
    they provide the mechanism by which PSM are
    regulated and held accountable

  24. Transparency for whom?
    • Transparency for stakeholders/audiences or regulators?
    • When we say transparency, when do we mean disclosure?
    • In a user-facing system this is an issue of enabling
    consent to be meaningful
    • However, in a stakeholder arrangement this is about
    disclosure

  25. Question 6: To what extent should we be transparent
    about how we are resolving metric and optimisation
    complexity (the trade-offs we are making)?

  26. Transparency, how much?
    • What is the effective and maximally useful fidelity of
    transparency?
    • Is it optimal/necessary/ideal to expose users to
    metric tradeoffs?
    • How transparent should we be about accuracy vs. diversity?
    • What challenges do third parties and third-party
    platforms raise?

  27. Question 7: How do we design for interpretability and
    explainability, to enable appropriate oversight of how
    recommenders are making decisions and ensure due
    accountability?

  28. Design for accountability?
    • Accountability is vital to PSM and to the BBC
    • Transparency/full disclosure vs. meaningful
    explanations for a user
    • Does the desire for transparency push algorithmic
    design in particular directions?

  29. Design for accountability?
    • How can complex algorithmic systems be designed to be
    intelligible/interpretable?
    • What types of explanations can a system generate; what’s
    possible?
    • What explanations will be sufficient for oversight?
    • How will explanation needs vary, e.g. for regulators,
    stakeholders, editorial, the public?

  30. Question 8: What do emerging approaches in algorithmic
    auditing offer us in terms of scrutinising recommender
    systems in the real world?

  31. Algorithmic auditing for recsys?
    • Does auditing an algorithmic system change the
    system, and by extension the user (experience)?
    • How can we observe and monitor the impacts of
    algorithmic systems? (one black-box probing approach
    is sketched below)
    • Is auditing a useful approach to assess bias,
    unfairness, diversity, etc.?
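
A hedged illustration of one black-box auditing approach: probe the recommender with synthetic user profiles and score each returned slate, giving a monitoring series without access to model internals. The recommend_fn, profiles, and metric names are hypothetical stand-ins:

    def audit(recommend_fn, profiles, metric):
        """Fetch a slate for each synthetic profile and score it;
        repeated over time this yields a drift/bias monitor."""
        return {name: metric(recommend_fn(profile))
                for name, profile in profiles.items()}

    # e.g. reusing the earlier topic-entropy sketch:
    # scores = audit(recommend_fn, profiles,
    #                lambda slate: exposure_diversity(
    #                    [item["topic"] for item in slate]))

Note the first bullet’s caveat applies here too: if audit probes leak into training data, the audit itself perturbs the system being observed.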

  32. Question 9: What type/level of explanation will be
    most useful? Will explanations produced for editorial
    need to vary from the type of explanations PSM may
    provide to audiences?

  33. Explanation, how much? For whom?
    • Are we explaining algorithmic decision-making to
    users or to experts (principally: editorial teams)?
    • What is the distance between these two groups? (a toy
    illustration of the two registers follows)
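
A toy illustration of that distance: the same recommendation decision rendered at two fidelities, a short reason for audiences and a fuller disclosure for editorial or regulatory oversight. All field names here are hypothetical:

    def explain_for_audience(rec):
        """Low-fidelity register: supports meaningful consent."""
        return f"Recommended because you watched {rec['nearest_watched']}"

    def explain_for_editorial(rec):
        """High-fidelity register: supports oversight and audit."""
        return {
            "model_version": rec["model_version"],
            "top_feature_weights": rec["feature_weights"],
            "candidate_pool": rec["pool_id"],
            "accuracy_diversity_weight": rec["lambda_"],
        }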

  34. Question 10: How will we determine the value of
    different potential approaches? How might new
    methodologies, e.g. multi-method, comparative, or
    longitudinal research, explore cumulative effects?

  35. Determining value
    • What does testing look like in a public service
    context?
    • Should we consult the public on the value of different
    approaches in PSM contexts?
    • Do public service organisations have an obligation to
    explore longer-term and cumulative impacts?

  36. Next steps

  37. better align recommender systems in
    public service contexts with their
    underlying value frameworks

  39. Thanks!
    Let’s have some questions!
    Ben Fields
    @alsothings
    ben.fi[email protected]
    http://bit.ly/bbcfatrec18