[Reading group] TabNet: Attentive Interpretable Tabular Learning
Reading-group slides
TabNet: Attentive Interpretable Tabular Learning (ICLR 2020, rejected)
mei28
January 05, 2021
Transcript
TabNet: Attentive Interpretable Tabular Learning (reading group, 2021/01/05)
Paper information
• Authors: Sercan O. Arik, Tomas Pfister (Google Cloud AI)
• Source: arXiv preprint; the paper was rejected from ICLR 2020
Overview: what kind of paper?
• A DNN model for tabular data
• A method that aims to combine the strengths of decision trees and NN models
• Achieves both interpretability and improved accuracy
Introduction: research background
• DNN models are SOTA, especially on images, language, and audio
• In analysis competitions such as Kaggle, decision-tree-based methods are still the mainstream
• This is because decision trees are highly interpretable
Introduction: research background
• Why bring deep learning to tabular data?
• Because on large-scale datasets, performance gains from deep learning can be expected
• Deep Learning Scaling is Predictable, Empirically (Hestness et al., 2017)
Introduction: research background
• Three merits of using NN models on tabular data:
1. Multiple kinds of data can be encoded efficiently
2. The effort of feature engineering can be reduced
3. The model can be trained end-to-end
Contributions of the proposed method
• Can be trained end-to-end without data preprocessing
• Sequential attention makes the model highly interpretable
• Local interpretability: the importance of the input features
• Global interpretability: how much each feature influenced the model
Related work
• DNN + DT: uses sequential attention to select features and feed them in
• Tree-based learning: uses DNNs for feature selection
• Feature selection: achieves compact representations
Proposed method: key components
• Attentive transformer: learns the mask applied to the features
• Feature transformer: transforms the features and decides what to pass to the next step
Proposed method: overall architecture
• The index i that appears from here on corresponds to decision steps 1, 2, ...
Proposed method: Attentive transformer (learns the mask)
• M[i] = sparsemax(P[i] ⋅ h_i(a[i − 1]))
• P[i]: a prior-scale weight that changes according to how much each feature was used by past masks (effectively a usage limit on features)
• Sparsemax: an activation function similar to softmax
Column: Sparsemax (Martins & Astudillo, 2016)
• It is sparser than softmax, which makes it easier to pick out the important features
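To make the column concrete, here is a minimal pure-Python sketch of sparsemax next to softmax; the score vector is made up for the example.

```python
import math

def softmax(z):
    # Standard softmax: every entry stays strictly positive.
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def sparsemax(z):
    # Sparsemax (Martins & Astudillo, 2016): Euclidean projection of z
    # onto the probability simplex; entries can be exactly zero.
    z_sorted = sorted(z, reverse=True)
    cumsum, k, k_sum = 0.0, 0, 0.0
    for j, zj in enumerate(z_sorted, start=1):
        cumsum += zj
        if 1 + j * zj > cumsum:      # support condition
            k, k_sum = j, cumsum
    tau = (k_sum - 1) / k            # threshold shared by the support
    return [max(v - tau, 0.0) for v in z]

scores = [3.0, 1.0, 0.2]
print(softmax(scores))    # all three features keep positive weight
print(sparsemax(scores))  # → [1.0, 0.0, 0.0]: all mass on the dominant feature
```

With these scores softmax spreads probability over every feature, while sparsemax zeroes out the two weaker ones, which is exactly why it yields selective masks.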
Proposed method: Feature transformer (transforms the input and decides what to use next)
• [d[i], a[i]] = f_i(M[i] ⋅ f)
• a[i] is passed on to the next step
Proposed method: final prediction
• The d[i] from each step are aggregated and used for the final prediction
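The step structure on the last few slides (mask from sparsemax with a prior scale, feature transformer whose output is split into d[i] and a[i], ReLU-summed aggregation) can be sketched as below; the tiny random linear layers, the γ value, and the dimensions are illustrative stand-ins, not the paper's GLU-based blocks.

```python
import random

random.seed(0)
N_FEATURES, N_D, N_A, N_STEPS, GAMMA = 4, 3, 3, 3, 1.5

def linear(x, w):
    # Plain matrix-vector product, a stand-in for learned layers.
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

def sparsemax(z):
    # Sparse softmax-like projection onto the simplex.
    z_sorted = sorted(z, reverse=True)
    cumsum, k, k_sum = 0.0, 0, 0.0
    for j, zj in enumerate(z_sorted, start=1):
        cumsum += zj
        if 1 + j * zj > cumsum:
            k, k_sum = j, cumsum
    tau = (k_sum - 1) / k
    return [max(v - tau, 0.0) for v in z]

def rand_matrix(rows, cols):
    return [[random.uniform(-1, 1) for _ in range(cols)] for _ in range(rows)]

f = [0.5, -1.2, 2.0, 0.3]        # one sample's (normalized) feature vector
a = [0.0] * N_A                  # initial attention state a[0]
prior = [1.0] * N_FEATURES       # P[0] = 1 for every feature
agg = [0.0] * N_D                # running sum for the final prediction

h = [rand_matrix(N_FEATURES, N_A) for _ in range(N_STEPS)]         # attentive transformers
ft = [rand_matrix(N_D + N_A, N_FEATURES) for _ in range(N_STEPS)]  # feature-transformer stand-ins

for i in range(N_STEPS):
    # Attentive transformer: M[i] = sparsemax(P[i] * h_i(a[i-1]))
    logits = linear(a, h[i])
    mask = sparsemax([p * l for p, l in zip(prior, logits)])
    # Prior-scale update: features already used get down-weighted.
    prior = [p * (GAMMA - m) for p, m in zip(prior, mask)]
    # Feature transformer on the masked features, split into d[i] and a[i].
    out = linear([m * v for m, v in zip(mask, f)], ft[i])
    d, a = out[:N_D], out[N_D:]
    # Aggregate ReLU(d[i]) across steps for the final prediction.
    agg = [g + max(di, 0.0) for g, di in zip(agg, d)]

print(agg)
```

Each mask sums to one, and the prior scale shrinks toward zero for features a mask has already spent, which is what pushes different steps to attend to different features.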
Proposed method: on interpretability
• Feature importance is computed using the masks
• To weight the per-step masks cheaply, the decision outputs are used → how important is each sample at each step? → feature importance
• η_b[i] = Σ_{c=1}^{N_d} ReLU(d_{b,c}[i])
• M_{agg,b,j} = ( Σ_{i=1}^{N_steps} η_b[i] M_{b,j}[i] ) / ( Σ_{j=1}^{D} Σ_{i=1}^{N_steps} η_b[i] M_{b,j}[i] )
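A small numeric walk-through of the two formulas above, using made-up decision outputs and masks for a single sample b:

```python
def relu(x):
    return max(x, 0.0)

# Made-up toy values for one sample b: two decision steps,
# N_d = 2 decision units, D = 3 features.
d = [[1.0, -0.5],        # d_b[0]: decision output at step 0
     [0.5, 2.0]]         # d_b[1]: decision output at step 1
M = [[0.7, 0.3, 0.0],    # M_b[0]: feature mask at step 0 (rows sum to 1)
     [0.0, 0.5, 0.5]]    # M_b[1]: feature mask at step 1

# eta_b[i] = sum_c ReLU(d_{b,c}[i]): how much step i contributed.
eta = [sum(relu(c) for c in d_i) for d_i in d]   # → [1.0, 2.5]

# Aggregate mask: eta-weighted sum of per-step masks, normalized over features.
weighted = [sum(eta[i] * M[i][j] for i in range(len(M))) for j in range(3)]
total = sum(weighted)
M_agg = [w / total for w in weighted]
print(M_agg)   # normalized importance of the 3 features for sample b
```

Step 1 has the larger η, so its mask dominates the aggregate: the resulting importances favor features 1 and 2 over feature 0.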
Proposed method: the picture of feature selection
• Feature selection corresponds to each decision step
Proposed method: where is the decision-tree flavor?
• The features produced by each mask correspond to the branches of a tree
Experiments: setup
• Baselines:
  • Gradient boosting methods: LightGBM, XGBoost, CatBoost
  • NN models
• What do we compare on?
  • Accuracy on the test data
  • Model size
Experimental results: accuracy
• On real data (ForestCoverType), TabNet achieved better accuracy than the baselines
Experimental results: model size
• The model is lightweight yet accurate
Experimental results: interpretability
• Visualize the results based on η_b[i]
• Rows are samples, columns are features
• White cells are where the model judged a feature to be important
Summary
• Sequential attention performs selection of the important features
• Using masks yields a highly interpretable model
• Showed that TabNet performs well on tabular data from various domains
Bonus: playing with TabNet on the Titanic dataset
• Accuracy: 0.81, ROC-AUC: 0.78
Bonus: LightGBM vs NN model vs TabNet
• TabNet: Accuracy: 0.81, ROC-AUC: 0.78
• Hyperparameters were left at their initial values; no tuning was done
• Notebook: https://github.com/mei28/playground_python/blob/main/notebooks/titanic.ipynb