
#24 Interpretability of Machine Learning Models with LIME

The use of Machine Learning (ML) models to solve a wide variety of problems is increasingly popular. Some of these mathematical models (such as linear regressions or decision trees) are easily interpretable. However, the state-of-the-art ML models that tackle complex problems, such as deep neural networks, are de facto black boxes whose inner workings are inaccessible. This raises a fundamental question: can I trust the results my model gives me?

Using the concrete example of an image-classification algorithm based on a neural network, we introduce LIME, a novel technique for visualizing the features of an image that led the model to its decision.

Florent Pajot, Samia Drappeau: Data Scientists at Continental Intelligent Transportation Systems France

Toulouse Data Science

November 14, 2017

Transcript

  1. Introduction to LIME
    A further step towards ML model interpretability
    Florent Pajot — Samia Drappeau
    November 14, 2017


  2. Florent Pajot — Samia Drappeau #TDS
    Who are we?
    2
    Samia Drappeau
    Florent Pajot
    @samiadrappeau
    @FlorentPajot
    Data Scientists at


  3. Florent Pajot — Samia Drappeau #TDS
    Why do we need
    interpretability?
    3


  4. Florent Pajot — Samia Drappeau #TDS
    Why do we need
    interpretability?
    4


  5. Florent Pajot — Samia Drappeau #TDS
    General Data Protection
    Regulation
• EU Data Protection Law — Regulation (EU) 2016/679, replacing
Directive 95/46/EC

    • Takes effect in May 2018

• A concern for the ML community — Article 22:
Automated individual decision-making, including profiling.
    5


  6. Florent Pajot — Samia Drappeau #TDS
    General Data Protection Regulation
    Article 22
• Automated decisions are contestable

‣ what data was used?

‣ decision-making must be explained to EU citizens

‣ non-discrimination

‣ fines of up to 4% of worldwide gross revenue
    6


  7. Florent Pajot — Samia Drappeau #TDS
    Interpretability vs Power
    7


  8. Florent Pajot — Samia Drappeau #TDS
    The two approaches
    1. Use simple models that are interpretable

    2. Use complex models and try to explain/interpret
    predictions
    8


  9. Florent Pajot — Samia Drappeau #TDS
    The two approaches
    1. Use simple models that are interpretable

    2. Use complex models and try to explain/interpret
    predictions
    9
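The first approach can be made concrete with a short sketch: in a simple linear model, the learned coefficients themselves are the explanation. This is an illustrative example assuming scikit-learn and NumPy are available; the toy data is made up.

```python
# Approach 1: a simple, directly interpretable model.
# Illustrative toy data, not from the talk.
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[1.0, 0.0], [2.0, 1.0], [3.0, 1.0], [4.0, 2.0]])
y = 2.0 * X[:, 0] + 1.0 * X[:, 1]  # known ground truth

model = LinearRegression().fit(X, y)
# The learned coefficients ARE the explanation: each weight says
# how much the prediction moves per unit change of that feature.
print(model.coef_)  # ≈ [2.0, 1.0]
```

No extra tooling is needed to interpret such a model, which is exactly why this approach is attractive when its accuracy is sufficient.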


  10. Florent Pajot — Samia Drappeau #TDS
    Local Interpretable Model-
    agnostic Explanations
    10
    "Why Should I Trust You?": Explaining the Predictions of Any
    Classifier (Feb 2016)
    Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
As of September 26, 2017
    First release March 2016


  11. Florent Pajot — Samia Drappeau #TDS
    Local Interpretable Model-
    agnostic Explanations
    11


  12. Florent Pajot — Samia Drappeau #TDS 12
(figure: local surrogate model vs. global model)
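The local-vs-global idea behind LIME can be sketched in a few lines: sample perturbations around the instance to explain, weight them by proximity, and fit an interpretable (here, linear) surrogate to the black box's outputs. This is a minimal illustration using only NumPy; the black-box function and kernel width are assumptions, not the paper's exact choices.

```python
# A minimal sketch of the LIME idea on tabular data (NumPy only).
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    # Stand-in for a complex model: nonlinear globally,
    # but approximately linear near any given point.
    return np.sin(X[:, 0]) + X[:, 1] ** 2

x0 = np.array([0.0, 1.0])                        # instance to explain
Z = x0 + rng.normal(scale=0.1, size=(500, 2))    # perturbed neighbors
y = black_box(Z)

# Proximity kernel: closer perturbations get higher weight.
d2 = ((Z - x0) ** 2).sum(axis=1)
w = np.exp(-d2 / 0.05)

# Weighted least squares: fit a linear surrogate locally.
A = np.hstack([Z - x0, np.ones((len(Z), 1))])    # centered features + bias
coef, *_ = np.linalg.lstsq(np.sqrt(w)[:, None] * A, np.sqrt(w) * y, rcond=None)

# Near x0 = (0, 1): d/dx0 sin(x0) = 1 and d/dx1 x1**2 = 2,
# so the surrogate's weights recover the local behavior.
print(coef[:2])  # ≈ [1.0, 2.0]
```

The surrogate is only valid near `x0` — that locality is what lets a simple model explain a complex one.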


  13. Florent Pajot — Samia Drappeau #TDS
    LIME in practice
• Go to the Jupyter notebook LIME tutorial

• https://github.com/SamAstro/tds-meetup-LIME-presentation
    13


  14. Florent Pajot — Samia Drappeau #TDS
A growing field…
    14


  15. Florent Pajot — Samia Drappeau #TDS
    Take away message
• Interpretability helps us understand the inner workings of
models

    • However, one should never forget the three most
    important questions in ML:

    - Do I understand my data?

- Do I understand the model and the answers my ML
algorithm is giving me?

    - Can I trust them?
    15


  16. Florent Pajot — Samia Drappeau #TDS
    References
    • https://github.com/marcotcr/lime

    • https://arxiv.org/pdf/1606.05386.pdf

    • https://lime-ml.readthedocs.io/en/latest/

• https://medium.com/@thommash/local-interpretable-model-agnostic-explanations-lime-and-gdpr-9e3d66b64207

• https://www.oreilly.com/ideas/ideas-on-interpreting-machine-learning

    • http://blog.fastforwardlabs.com/2017/09/01/LIME-for-couples.html
    16
