LIME
Sinhrks
December 16, 2017
@Tokyo.R 66
https://atnd.org/events/92993
Transcript
LIME Masaaki Horikoshi @ ARISE analytics
About me
• R, Python: package development and other OSS work
• #1 on Git Awards (Japan)
• http://git-awards.com/users/search?login=sinhrks
A common scenario — the boss (* not at our company) says:
• "Just have the AI sort it out nicely!"
• "As long as the results are good, I don't care what's inside!!"
…but once the results come in:
• "So what does this actually mean?"
• "We can't use something we don't understand!!!"
Interpretability
Approaches to interpretation
1. Choose a machine learning method that is easy to explain
• Its accuracy may be insufficient
2. Use an interpretation method that does not depend on the machine learning method
Interpretability
• Global Interpretability
• Interprets the behavior of the model / data as a whole
• Uses approximations and summary statistics => can be locally inaccurate
• Local Interpretability
• Interprets a limited region of the model / data
• More accurate explanations are possible
Interpretability
• The appropriate method depends on "what" you want to interpret
• Global Interpretability
• Model-Specific: Regression Coefficients, Feature Importance, …
• Model-Agnostic: Surrogate Models, Sensitivity Analysis, …
• Local Interpretability
• Model-Specific: Maximum Activation Analysis, …
• Model-Agnostic: LIME, LOCO, SHAP, …
What is LIME?
Local Interpretable Model-agnostic Explanations
LIME
• “Why Should I Trust You?”: Explaining the Predictions of Any Classifier (2016)
• Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
LIME
• LIME derives the explanation of an instance x as the solution of:

    ξ(x) = argmin_{g ∈ G} L(f, g, π_x) + Ω(g)

• G: the set of interpretable surrogate models
• L: the discrepancy, measured under π_x, between the model to be explained and the surrogate
• f: the model to be explained
• π_x: the similarity to the instance x
• Ω: a penalty term on the complexity of the surrogate model
• The concrete choices depend on the domain
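As a rough numeric illustration of the objective above (a sketch, not the reference implementation — the black-box f, the kernel width, and the penalty weight are all made-up choices), we can compare the objective value of a locality-faithful linear surrogate against a trivial constant one:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box model f to be explained
f = lambda X: np.sin(X[:, 0]) + X[:, 1] ** 2

x = np.array([1.0, 0.5])                        # instance being explained
Z = x + 0.2 * rng.normal(size=(1000, 2))        # perturbed samples near x

# pi_x: exponential-kernel similarity between x and each sample
pi_x = np.exp(-np.sum((Z - x) ** 2, axis=1) / 0.25)

def lime_objective(coef, intercept, omega=0.005):
    """L(f, g, pi_x) + Omega(g) for a linear surrogate g(z) = z @ coef + intercept."""
    loss = np.average((f(Z) - (Z @ coef + intercept)) ** 2, weights=pi_x)
    penalty = omega * np.count_nonzero(coef)    # Omega: complexity of g
    return loss + penalty

# Constant surrogate vs. one matching f's gradient at x: the latter pays a
# larger complexity penalty but fits the neighborhood of x much better
const = lime_objective(np.zeros(2), np.sin(1.0) + 0.25)
local = lime_objective(np.array([np.cos(1.0), 1.0]),
                       np.sin(1.0) + 0.25 - np.cos(1.0) * 1.0 - 1.0 * 0.5)
print(local < const)
```

The trade-off L + Ω is the whole point: a surrogate only wins if its improved local fit outweighs its added complexity.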
Example: tabular data, classification
• Sample around the instance x
• 5,000 samples by default
• The sampling method depends on each variable's type
• Weight the samples with an exponential kernel
• Feature selection
• Forward/Backward selection, LARS, etc.
• Fit a Ridge regression, etc.
※ Based on the Python implementation (described later)
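The steps above can be sketched end to end in plain NumPy (a toy, assuming a continuous-feature black box `f` invented for the example; the real `lime` package additionally handles categorical features, discretization, and the feature-selection step):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical black-box classifier returning P(class 1)
f = lambda X: 1.0 / (1.0 + np.exp(-(3.0 * X[:, 0] - 2.0 * X[:, 1])))

x = np.array([0.2, 0.4])                 # instance to explain

# 1. Sample around x (the Python implementation defaults to 5,000 samples)
n = 5000
Z = x + rng.normal(size=(n, 2))

# 2. Weight samples by an exponential kernel on their distance to x
dist = np.linalg.norm(Z - x, axis=1)
kernel_width = 0.75 * np.sqrt(2)         # illustrative width
weights = np.exp(-dist ** 2 / kernel_width ** 2)

# 3. Fit a weighted Ridge regression to the black-box outputs (closed form)
alpha = 1.0
A = np.column_stack([np.ones(n), Z])     # intercept column + features
lhs = A.T @ (A * weights[:, None]) + alpha * np.eye(3)
rhs = A.T @ (weights * f(Z))
intercept, w0, w1 = np.linalg.solve(lhs, rhs)

print(w0 > 0, w1 < 0)  # local weights recover the sign of each feature's effect
```

The fitted coefficients play the role of the `feature_weight` column in the package output shown later: a positive weight pushes the prediction toward the explained class, a negative one away from it.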
Packages
• Python
• Written by the paper's author
• https://github.com/marcotcr/lime
• R
• A port of the above
• https://github.com/thomasp85/lime
• install.packages('lime')
LIME (R)
• Example

# Train the model
library(caret)
library(lime)
model <- train(iris[-5], iris[[5]], method = 'rf')

# Create the explainer
explainer <- lime(iris[-5], model)

# Output the explanation
explanations <- explain(iris[1, -5], explainer, n_labels = 1, n_features = 2)
explanations

      model_type case  label label_prob  model_r2 model_intercept
1 classification    1 setosa          1 0.3776584       0.2544468
2 classification    1 setosa          1 0.3776584       0.2544468
  model_prediction      feature feature_value feature_weight         feature_desc
1        0.7113922  Sepal.Width           3.5     0.02101138    3.3 < Sepal.Width
2        0.7113922 Petal.Length           1.4     0.43593404 Petal.Length <= 1.60
                data prediction
1 5.1, 3.5, 1.4, 0.2    1, 0, 0
2 5.1, 3.5, 1.4, 0.2    1, 0, 0
LIME (R)

# Plot the explanation
plot_features(explanations)
Enjoy!