Bye-bye, Miss AI

The presentation describes the need for explainable AI.

Kacper Łukawski

July 03, 2019

Transcript

  1. Kacper Łukawski, Data Science Lead at Codete

  2. Agenda:
    AI hype
    ML project lifecycle
    Real-life story
    XAI

  3. Forty percent of “AI startups” in Europe don’t actually
    use AI
    The State of AI 2019: Divergence
    AI hype

  4. Startups labelled as being in AI attract 15% to 50% more
    funding than other technology firms.
    The State of AI 2019: Divergence
    AI hype

  5. 1. Data wrangling
    2. Model training
    3. Validation
    4. Inference
    ML project lifecycle

  6. ML project lifecycle
    1. Data wrangling
    2. Model training
    3. Validation
    4. Inference
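
A minimal sketch of these four stages in Python with pandas and
scikit-learn; the file name, columns, and model choice are illustrative
assumptions, not something shown in the deck:

```python
# Sketch of the four lifecycle stages. "animals.csv" and its "label"
# column are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# 1. Data wrangling: load the raw data and drop incomplete rows.
df = pd.read_csv("animals.csv").dropna()
X, y = df.drop(columns=["label"]), df["label"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# 2. Model training.
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# 3. Validation: measure quality on held-out data.
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))

# 4. Inference: predict on new, unlabelled samples.
print(model.predict(X_test.head(3)))
```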

  7. Real-life story

  8. 1. Sweet little girl
    2. Her grandmother
    3. Little red cap
    4. A piece of cake and a bottle of wine to be
    delivered
    5. Big bad wolf
    Real-life story

  9. The question is:
    How to distinguish grandma from a wolf?
    Real-life story

  10. It doesn’t matter what the issue is. Our Little Red Riding
    Hood badly needs some support. The original story
    comes from the 17th century, and back then there was
    no rescue, but now we have AI!
    Real-life story

  11. There is a well-known example of a machine learning
    system designed to classify images of wolves and
    huskies.
    Classification: wolf or husky?
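
The deck does not show the system itself; a minimal sketch of such a
binary image classifier, fine-tuning a pretrained ResNet with
PyTorch/torchvision and assuming photos sorted into hypothetical
photos/wolf and photos/husky folders:

```python
import torch
from torch import nn
from torchvision import datasets, models, transforms

# Assumed layout: photos/{wolf,husky}/*.jpg - purely illustrative.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
data = datasets.ImageFolder("photos", transform=preprocess)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

# Freeze the pretrained backbone and train only a new two-class head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```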

  12. Even though everything may seem to work perfectly
    just by looking at the numbers, we need to understand
    how the model works behind the scenes.
    Classification: wolf or husky?
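
A toy demonstration of how the numbers can look perfect while the model
leans on a spurious cue. This is entirely synthetic: it assumes a
"snow in the background" feature that happens to correlate with the
wolf label in the collected photos, as in the well-known study:

```python
# Synthetic demo: near-perfect accuracy driven by a spurious feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
label = rng.integers(0, 2, n)           # 0 = husky, 1 = wolf
trait = label + rng.normal(0, 1.5, n)   # genuine but noisy animal trait
snow = label + rng.normal(0, 0.1, n)    # "snow in background": almost
                                        # perfectly correlated with "wolf"
X = np.column_stack([trait, snow])

model = LogisticRegression().fit(X, label)
print("training accuracy:", model.score(X, label))  # looks excellent...
print("coefficients:", model.coef_)  # ...but the weight sits on "snow",
                                     # not on the genuine trait
```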

  13. We are at a point where we desperately need AI that
    can be explained. In other words, our models cannot just
    make decisions; they also need to explain them, or at
    least allow them to be explained.
    XAI
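
One way to let a model be clarified is a post-hoc explainer such as
LIME, the technique behind the original wolf/husky study. A sketch,
assuming an already trained model exposing predict_proba and a single
RGB image as a NumPy array:

```python
# Explain one prediction of an image classifier with LIME.
# `model` and `image` are assumed to exist already.
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

def classifier_fn(images: np.ndarray) -> np.ndarray:
    # Assumed wrapper: batch of RGB images (N, H, W, 3) in,
    # class probabilities (N, 2) for [husky, wolf] out.
    return model.predict_proba(images)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image,
    classifier_fn,
    top_labels=2,
    num_samples=1000,  # perturbed copies used to fit the local surrogate
)

# Highlight the superpixels that drove the top prediction. If they cover
# the snowy background rather than the animal, the model is not trustworthy.
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5
)
highlighted = mark_boundaries(img / 255.0, mask)
```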

  14. Thank you!
    Kacper Łukawski
    [email protected]
