
Counterfactual-Explainable Machine Learning (反実仮想に基づく説明可能な機械学習)

kelicht
February 24, 2022

Counterfactual-Explainable Machine Learning (反実仮想に基づく説明可能な機械学習)

2022/02/24
Computing Infrastructure CREST "Creation and Application of Spatio-Temporal Deployment Architectures Based on Learning/Mathematical Models" Forest Workshop
https://www-alg.ist.hokudai.ac.jp/~atsu/forest_workshop20220214.html

Title:
Counterfactual-Explainable Machine Learning (反実仮想に基づく説明可能な機械学習)

Abstract:
With the progress of machine-learning methods such as deep learning, machine-learning models are beginning to be applied to real-world decision making in areas such as healthcare and finance. Accordingly, explainability, that is, the ability to present the grounds and decision criteria of a model's predictions in a human-understandable form, is regarded as increasingly important and has been actively studied in recent years. Among these methods, counterfactual explanation (CE) is a local explanation technique that presents, as an "action", a perturbation vector for obtaining a desired prediction from the model; it has attracted attention because it gives users more constructive explanations. In this talk, after reviewing the background and motivation of counterfactual explanation, I introduce three of our studies: (1) a distribution-aware counterfactual explanation method (DACE) [IJCAI-20], (2) an ordered counterfactual explanation method (OrdCE) [AAAI-21], and (3) the counterfactual explanation tree (CET) [AISTATS-22].

Transcript

  1. Counterfactual-Explainable Machine Learning (反実仮想に基づく説明可能な機械学習)
     Kentaro Kanamori, Graduate School of Information Science and Technology, Hokkaido University (2nd-year Ph.D. student)
     [email protected] | https://sites.google.com/view/kentarokanamori
     Joint work with T. Takagi (Fujitsu Ltd.), K. Kobayashi (Fujitsu Ltd. / TIT), Y. Ike (UTokyo), K. Uemura (Fujitsu Ltd.), & H. Arimura (HU)
     Computing Infrastructure CREST "Creation and Application of Spatio-Temporal Deployment Architectures Based on Learning/Mathematical Models" Forest Workshop, 2022/02/24
  2. Background: Explainability of machine learning
     • Machine-learning models are now applied to real-world decision making, e.g., disease risk prediction (healthcare), credit risk prediction (finance), and recidivism risk prediction (justice).
     • Realizing explainability, i.e., presenting the grounds and decision criteria of a model's predictions in a human-understandable form, is an important challenge.
     • Societal demands: GDPR (EU 2018), Social Principles of Human-Centric AI (Cabinet Office of Japan, 2019), …
     • Explainability is a first step toward improving trust in machine learning.
     [Figure: a black-box model (e.g., a deep model) that only outputs "your diabetes risk is high" lowers trust ("Why?"), while an explainable model that adds "the reason is your BMI" raises trust ("I see!").]
  3. Approaches to explainability
     • There are two main approaches to explainable machine learning:
     • Learning interpretable models: train and use highly interpretable models whose prediction grounds are easy for humans to understand, e.g., sparse linear models (Lasso) and rule models (decision trees, rule sets); the model itself can explain its predictions.
     • Post-hoc local explanation: extract, after training, a local explanation of each individual prediction from the learned model, e.g., LIME [Ribeiro+ 16], SHAP [Lundberg+ 17], and counterfactual explanation (CE) [Wachter+ 18]; the features important for an individual prediction are presented.
     [Figure: a small decision tree over blood glucose and BMI predicting healthy/diabetic, and a feature-importance chart over blood glucose, BMI, age, and sex.]
  4. Counterfactual Explanation (CE)
     • Presents an "action" for obtaining the desired prediction as the explanation.
     • Conventional local explanation methods (e.g., LIME [Ribeiro+ 16]) present the features that grounded the model's prediction: "Your diabetes risk is high; the reasons are your blood glucose, BMI, and age." The user is left wondering: "Hmm… so what should I do to become healthy?"
     • Counterfactual explanation (CE) [Wachter+ 18] instead presents how to change the features so that the model yields the desired prediction: "If you reduce your BMI to 27.3, the predicted risk becomes low." (action)
     ‣ A more constructive explanation of the prediction ("So I should go on a diet!").
  5. Counterfactual Explanation (CE), continued
     • Same comparison as the previous slide, together with a survey table of existing CE methods (as of August 2020) from [Karimi+ 20].
  6. Formulating the action-extraction problem
     • Optimize a minimum-cost action that the user can actually execute.
     • Counterfactual Explanation (CE) [Ustun+ 19]: given an input x ∈ 𝒳, a classifier f: 𝒳 → 𝒴, and a desired label y* ∈ 𝒴 (with f(x) ≠ y*), find the perturbation vector (action) a* that solves
       a* = argmin_{a ∈ 𝒜} C(a | x)  subject to  f(x + a) = y*,
       where 𝒜 is the set of candidate actions and C: 𝒜 → ℝ≥0 is a cost function.
     • The constraint restricts actions to executable ones; the cost function evaluates how hard the action is to execute.
     • Main solution methods: Lagrangian relaxation with gradient descent, mixed-integer linear optimization (MILO), and others (SAT, local search, GA, …).
     ‣ Search-based approaches can handle non-differentiable models such as forests (see the sketch below).
     [Figure: in the (blood glucose, BMI) plane, the action a* moves x from the high diabetes-risk region to the low-risk region.]
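To make the formulation concrete, here is a minimal sketch in Python. It assumes a small discrete grid of candidate changes per feature and a weighted ℓ1 cost (both illustrative choices, not the talk's), and finds a* by brute-force enumeration. Because only predictions of f are queried, non-differentiable models such as random forests pose no problem. All names are hypothetical.

```python
import itertools
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def extract_action(x, f, y_target, candidates, weights):
    """Brute-force sketch of the CE problem:
    a* = argmin_a C(a | x)  s.t.  f(x + a) = y_target.
    candidates[d] lists the admissible changes for feature d (the action
    set A); C is a weighted L1 cost here, one simple choice among many."""
    best_a, best_cost = None, np.inf
    for deltas in itertools.product(*candidates):        # enumerate A
        a = np.array(deltas, dtype=float)
        cost = np.sum(weights * np.abs(a))                # C(a | x)
        if cost < best_cost and f.predict([x + a])[0] == y_target:
            best_a, best_cost = a, cost
    return best_a, best_cost

# Hypothetical usage on toy data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

x0 = X[y == 0][0]                          # an instance with label 0
cand = [[-1.0, -0.5, 0.0, 0.5, 1.0]] * 3   # admissible per-feature changes
a_star, c_star = extract_action(x0, clf, 1, cand, weights=np.ones(3))
print(a_star, c_star)
```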
  7. Motivation and challenges
     • What is a "meaningful action (explanation)" for the user?
     • [Recap] CE presents, as the explanation, an action a* that makes the model f output the desired prediction y*:
       a* = argmin_{a ∈ 𝒜} C(a | x)  subject to  f(x + a) = y*.
     • The optimization problem is almost the same as generating an adversarial perturbation [Szegedy+ 14]; in CE, however, the perturbation vector a* is interpreted as an action (explanation), so an adversarial-style perturbation is not necessarily a meaningful one.
     • Challenge 1: How should the realism of an action be evaluated?
     • Challenge 2: How can causal effects between the modified features be taken into account?
     • Challenge 3: Can actions be presented and summarized globally, not only for individual inputs?
  8. Our results so far
     • We have proposed new approaches to counterfactual explanation (CE):
     • DACE: distribution-aware CE ‣ a new method that produces realistic CEs by accounting for feature correlations and outlier risk (IJCAI-20).
     • OrdCE: CE that also presents the order of changes ‣ a CE framework that optimizes and presents the order in which features should be changed, based on causal effects (AAAI-21).
     • CET: an interpretable global summary of CEs ‣ a CE framework that summarizes and predicts actions with a decision tree (AISTATS-22).
     [Figures: a DACE vs. TLPS action in the (MSinceOldestTradeOpen, AverageMInFile) plane; an OrdCE example (OrdCE + TLPS: 1st "JobSkill" +1, 2nd "Income" +6; OrdCE + DACE: 1st "HealthStatus" +3, 2nd "WorkPerDay" +1, 3rd "Income" +4); a CET whose leaves assign actions such as "Department: Sales → HR", "Overtime: Yes → No", "Income: +12K $".]
  9. DACE: Distribution-Aware Counterfactual Explanation by Mixed-Integer Linear Optimization*
     Kentaro Kanamori (Hokkaido University), Takuya Takagi (Fujitsu Laboratories), Ken Kobayashi (Fujitsu Laboratories / Tokyo Institute of Technology), Hiroki Arimura (Hokkaido University). Accepted to IJCAI-20.
     * K. Kanamori, T. Takagi, K. Kobayashi, and H. Arimura: "DACE: Distribution-Aware Counterfactual Explanation by Mixed-Integer Linear Optimization," In Proceedings of the 29th International Joint Conference on Artificial Intelligence (IJCAI 2020), pp. 2855-2862, July 2020.
  10. Research goal
     • Present realistic actions that respect the characteristics of the data distribution.
     • Conventional CE has the following problems: existing cost functions (e.g., TLPS) cannot capture correlations between features, and plain cost minimization can move the input to an outlier point ‣ because the data distribution is not sufficiently taken into account, unrealistic actions are presented [Laugel+ 19].
     • Requirement: the realism of an action should be evaluated with the data distribution in mind.
     • Goal 1: introduce a cost function that accounts for feature correlations and outlier risk.
     • Goal 2: propose a MILO-based optimization method for the introduced cost function.
     [Figure: in the (muscle mass, body weight) plane, one counterfactual x + a lies among normal points (low actual cost) while another is an outlier (high actual cost).]
  11. Proposal: distribution-aware counterfactual explanation
     • Introduce a cost function that accounts for feature correlations and outlier risk.
     • DACE: Distribution-Aware Counterfactual Explanation. Given a set of input instances X ⊆ 𝒳, a covariance matrix Σ ∈ ℝ^{D×D}, a non-negative real λ ≥ 0, and a natural number k ∈ ℕ, solve the CE problem under the cost function
       C_DACE(a | x) := d²_M(x, x + a | Σ⁻¹) + λ · q_k(x + a | X),
       where d_M is the Mahalanobis distance and q_k is the Local Outlier Factor (LOF).
     • The cost function C_DACE evaluates the realism of an action a.
     • Mahalanobis distance [Mahalanobis 36]: a distance that accounts for correlations between features.
     • LOF [Breunig+ 00]: an outlier-detection score based on density ratios over the k-nearest-neighbor sets.
     ‣ Minimizing C_DACE yields realistic actions that respect the data distribution (a sketch of the cost follows below).
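A minimal sketch of evaluating C_DACE for a given action, assuming the empirical covariance of X for the Mahalanobis term and scikit-learn's LocalOutlierFactor (in novelty mode) as a stand-in for q_k. DACE itself linearizes both terms inside a MILO rather than evaluating them post hoc; this is only an illustration.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def dace_cost(x, a, X, lam=1.0, k=10):
    """Sketch of C_DACE(a | x) = d_M^2(x, x + a | Sigma^{-1}) + lam * q_k(x + a | X),
    with Sigma the empirical covariance of X and q_k the k-LOF of the
    counterfactual point x + a."""
    sigma_inv = np.linalg.pinv(np.cov(X, rowvar=False))   # Sigma^{-1}
    d2_m = a @ sigma_inv @ a                               # d_M^2(x, x + a)
    lof = LocalOutlierFactor(n_neighbors=k, novelty=True).fit(X)
    # score_samples returns the negated LOF, so negate it back.
    q_k = -lof.score_samples((x + a).reshape(1, -1))[0]
    return d2_m + lam * q_k
```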
  12. Proposal: formulation as a MILO problem
     • By introducing surrogate functions, the total numbers of variables and constraints are reduced.
     • DACE: MILO formulation (model: decision-tree ensemble), reconstructed from the slide:
       minimize  ∑_{d=1}^{D} δ_d + λ · ∑_{n=1}^{N} l(n) · ρ_n
       subject to
       (ℓ1-Mahalanobis surrogate)  −δ_d ≤ ∑_{d'=1}^{D} U_{d,d'} ∑_{i=1}^{I_{d'}} a_{d',i} π_{d',i} ≤ δ_d, ∀d ∈ [D]
       (action encoding)  ∑_{i=1}^{I_d} π_{d,i} = 1, ∀d ∈ [D]
       (tree-ensemble validity)  ∑_{l=1}^{L_t} ϕ_{t,l} = 1, ∀t ∈ [T];  D · ϕ_{t,l} ≤ ∑_{d=1}^{D} ∑_{i ∈ I^{(d)}_{t,l}} π_{d,i}, ∀t ∈ [T], l ∈ [L_t];  ∑_{t=1}^{T} w_t ∑_{l=1}^{L_t} ŷ_{t,l} ϕ_{t,l} ≥ 0
       (1-LOF surrogate)  ∑_{d=1}^{D} ∑_{i=1}^{I_d} (c^{(n)}_{d,i} − c^{(n')}_{d,i}) π_{d,i} ≤ C_n (1 − ν_n), ∀n, n' ∈ [N];  ρ_n ≥ d^{(n)} · ν_n, ∀n ∈ [N];  ρ_n ≥ ∑_{d=1}^{D} ∑_{i=1}^{I_d} c^{(n)}_{d,i} π_{d,i} − C_n (1 − ν_n), ∀n ∈ [N];  ∑_{n=1}^{N} ν_n = 1
       (variables)  π_{d,i} ∈ {0,1};  ϕ_{t,l} ∈ {0,1};  δ_d ≥ 0;  ν_n ∈ {0,1};  ρ_n ≥ 0
     • Notation: N = number of data points, D = number of features, I = total number of candidate actions (I ≫ D), L = total number of leaves.
     • Compared with the exact formulation, the totals of variables and constraints are reduced ‣ faster solving:
       #Variables: O(N² + I² + L) (exact) vs. O(N + I + L) (proposed); #Constraints: O(N² + I² + L) (exact) vs. O(N² + D + L) (proposed).
     A stripped-down sketch of the core encoding follows below.
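The full formulation is involved, so the following sketch only illustrates its core encoding: binary one-hot variables π_{d,i} that select one candidate change per feature and an additive cost objective, with the validity constraint simplified to a linear classifier w · (x + a) ≥ 0 instead of a tree ensemble. The Mahalanobis and LOF surrogates are omitted, and everything beyond the slide's notation (solver choice, helper names) is an assumption.

```python
import numpy as np
import pulp

def ce_milo_linear(x, w, candidates, costs):
    """Minimal MILO sketch of action extraction with one-hot variables
    pi[d][i] in {0,1}, one candidate change per feature (candidates[d]
    should include 0.0 for "no change"), an additive cost
    sum_{d,i} costs[d][i] * pi[d][i], and a linear validity constraint
    w . (x + a) >= 0 as a stand-in for the tree-ensemble constraints."""
    D = len(x)
    prob = pulp.LpProblem("CE", pulp.LpMinimize)
    pi = [[pulp.LpVariable(f"pi_{d}_{i}", cat="Binary")
           for i in range(len(candidates[d]))] for d in range(D)]
    # Objective: total action cost.
    prob += pulp.lpSum(costs[d][i] * pi[d][i]
                       for d in range(D) for i in range(len(pi[d])))
    # Exactly one candidate (possibly "no change") per feature.
    for d in range(D):
        prob += pulp.lpSum(pi[d]) == 1
    # Validity: the perturbed point lies on the desired side of the hyperplane.
    xi = [x[d] + pulp.lpSum(candidates[d][i] * pi[d][i]
                            for i in range(len(pi[d]))) for d in range(D)]
    prob += pulp.lpSum(xi[d] * float(w[d]) for d in range(D)) >= 0
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return np.array([sum(candidates[d][i] * pulp.value(pi[d][i])
                         for i in range(len(pi[d]))) for d in range(D)])
```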
  13. Experimental results (FICO dataset)
     • DACE presents actions that account for correlations and outlier risk.
     • We compared the Mahalanobis distance (MD) and 10-LOF of the actions obtained by existing methods and by DACE (lower is better):
       Method   | LR: MD       | LR: 10-LOF   | RF: MD       | RF: 10-LOF
       TLPS [1] | 9.09 ± 2.97  | 3.86 ± 1.49  | 2.22 ± 1.31  | 1.49 ± 1.07
       MAD [2]  | 5.42 ± 4.04  | 1.65 ± 1.29  | 2.29 ± 1.58  | 1.56 ± 1.14
       PCC [3]  | 9.46 ± 6.66  | 1.61 ± 1.31  | 3.76 ± 2.36  | 1.6 ± 1.27
       DACE     | 1.97 ± 1.46  | 1.54 ± 1.12  | 1.54 ± 1.18  | 1.33 ± 0.496
     • DACE obtained better actions than the existing methods with respect to both MD and 10-LOF.
     ‣ The existing methods cannot account for feature correlations, so their actions are likely to end up as outliers; DACE presents realistic actions that account for correlations and outlier risk.
     [Figures: scatter plots of TLPS vs. DACE (ours) actions over (ExternalRiskEstimate, PercentInstallTrades) and (MSinceOldestTradeOpen, AverageMInFile).]
     [1] B. Ustun et al.: "Actionable Recourse in Linear Classification," FAT*, 2019. [2] C. Russell: "Efficient Search for Diverse Coherent Explanations," FAT*, 2019. [3] V. Ballet et al.: "Imperceptible Adversarial Attacks on Tabular Data," NeurIPS Workshops, 2019.
  14. Summary: distribution-aware counterfactual explanation
     • Requirement: the realism of an action for the user should be evaluated with the characteristics of the data distribution in mind.
     • We developed a CE method that presents realistic actions by accounting for correlations between features and outlier risk: a cost function based on the Mahalanobis distance and LOF, and a MILO-based optimization method for it.
     • Remaining issue: causal relationships between features are still not taken into account.
     [Figure: the TLPS (existing) vs. DACE (proposed) scatter plots shown earlier.]
  15. Ordered Counterfactual Explanation by Mixed-Integer Linear Optimization*
     Kentaro Kanamori (Hokkaido University), Takuya Takagi (Fujitsu Laboratories), Ken Kobayashi (Fujitsu Laboratories / Tokyo Institute of Technology), Yuichi Ike (Fujitsu Laboratories), Kento Uemura (Fujitsu Laboratories), Hiroki Arimura (Hokkaido University). Accepted to AAAI-21.
     * K. Kanamori, T. Takagi, K. Kobayashi, Y. Ike, K. Uemura, and H. Arimura: "Ordered Counterfactual Explanation by Mixed-Integer Linear Optimization," In Proceedings of the 35th AAAI Conference on Artificial Intelligence (AAAI 2021), pp. 11564-11574, May 2021.
  16. Research goal
     • Present not only how to change the features but also the order in which to change them.
     • When there are interactions between features (e.g., causal effects [Karimi+ 20]), the cost of an action also depends on the order in which the features are changed.
     • Requirement: an action should specify, besides how to change the features, an appropriate order of changes that respects the interactions.
     • Goal 1: introduce a cost function that evaluates the order of changes based on the interactions.
     • Goal 2: propose a method that optimizes the changes and their order simultaneously.
     [Figure: the XAI assistant lists changes ("increase Income, also raise JobSkill, but reduce WorkPerDay, and …") and the user asks "which one should I do first?"; causal DAGs over (Insulin, Glucose, SkinThickness, BMI) and (Education, JobSkill, Income, WorkPerDay, HealthStatus) with edge weights.]
  17. Proposal: ordered counterfactual explanation
     • Present an optimal ordered action derived from the interactions between features.
     • OrdCE: Ordered Counterfactual Explanation. Given an interaction matrix M ∈ ℝ^{D×D}, a number of perturbed features K ∈ [D], and a parameter γ ≥ 0, find the ordered action (a*, σ*) that solves
       (a*, σ*) = argmin_{a ∈ 𝒜, σ ∈ Σ(a)} C(a | x) + γ · C_ord(a, σ | M)  subject to  f(x + a) = y* ∧ ‖a‖₀ ≤ K.
     • The permutation σ = (σ₁, …, σ_K) ∈ Σ(a) represents the order in which a changes the features.
     • The ordering cost function C_ord evaluates σ based on interaction information such as a causal DAG; the interaction matrix can be estimated by causal-discovery methods.
     ‣ Minimizing it jointly with the cost function C determines both the changes a and their order σ.
     [Example ordered action: 1st "JobSkill" +1, 2nd "Income" +7.]
  18. Proposal: formulation as a MILO problem
     • The joint optimization of the changes and their order is formulated as a MILO problem.
     • OrdCE: MILO formulation, reconstructed from the slide:
       minimize  ∑_{d=1}^{D} ∑_{i=1}^{I_d} c_{d,i} π_{d,i} + γ · ∑_{k=1}^{K} ζ_k
       subject to
       (action encoding)  ∑_{i=1}^{I_d} π_{d,i} = 1, ∀d ∈ [D];  π_{d,i} = ∑_{k=1}^{K} π^{(k)}_{d,i}, ∀d ∈ [D], i ∈ [I_d]
       (validity)  ξ_d = x_d + ∑_{i=1}^{I_d} a_{d,i} π_{d,i}, ∀d ∈ [D];  ∑_{d=1}^{D} w_d ξ_d ≥ 0
       (order of changes)  σ_{k,d} = 1 − π^{(k)}_{d,1}, ∀k ∈ [K], d ∈ [D];  ∑_{d=1}^{D} σ_{k,d} ≤ 1, ∀k ∈ [K];  ∑_{k=1}^{K} σ_{k,d} ≤ 1, ∀d ∈ [D];  ∑_{d=1}^{D} σ_{k,d} ≥ ∑_{d=1}^{D} σ_{k+1,d}, ∀k ∈ [K − 1]
       (ordering cost)  ε_{k,d} = ∑_{l=1}^{k−1} ∑_{d'=1}^{D} M_{d',d} δ_{l,d'};  δ_{k,d} ≥ ∑_{i=1}^{I_d} a_{d,i} π^{(k)}_{d,i} − ε_{k,d} − U_{k,d}(1 − σ_{k,d});  δ_{k,d} ≤ ∑_{i=1}^{I_d} a_{d,i} π^{(k)}_{d,i} − ε_{k,d} − L_{k,d}(1 − σ_{k,d});  L_{k,d} σ_{k,d} ≤ δ_{k,d} ≤ U_{k,d} σ_{k,d};  −ζ_k ≤ ∑_{d=1}^{D} δ_{k,d} ≤ ζ_k, ∀k ∈ [K], d ∈ [D]
       (variables)  π^{(k)}_{d,i}, σ_{k,d} ∈ {0,1};  δ_{k,d}, ζ_k ∈ ℝ
     • The permutation σ and the ordering cost C_ord can thus be expressed with linear constraints over integer variables (a plain-Python reading of the recurrence follows below).
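The ordering-cost constraints can be read as a simple recurrence: interaction effects of earlier changes reduce (or increase) the effort still needed for later features. Below is a hedged reconstruction of that recurrence in Python, plus a brute-force search over permutations for small K; the exact definition of C_ord in the paper may differ in details, and all names are illustrative.

```python
import itertools
import numpy as np

def ordering_cost(a, order, M):
    """Ordering-cost sketch reconstructed from the OrdCE recurrence:
    at step k, the residual effort for the changed feature d = order[k] is
        delta_k = a[d] - eps[d],   eps[d] = sum_{l<k} M[order[l], d] * delta_l,
    i.e. causal effects of earlier changes are subtracted from the effort
    still required; C_ord is the total absolute residual effort."""
    eps = np.zeros(len(a))       # interaction effect accumulated so far
    cost = 0.0
    for d in order:              # features are changed in this order
        delta = a[d] - eps[d]    # effort actually required at this step
        cost += abs(delta)
        eps = eps + M[d] * delta  # propagate the effect to downstream features
    return cost

def best_order(a, changed, M):
    """Enumerate all orders of the changed features (small K) and return
    the one with minimal ordering cost."""
    return min(itertools.permutations(changed),
               key=lambda order: ordering_cost(a, order, M))
```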
  19. Experimental results (Diabetes dataset)
     • OrdCE presents ordered actions that are consistent with the causal relations.
     • Examples of ordered actions extracted from the RF classifier on the Diabetes dataset, compared with a post-hoc ordering baseline (Greedy):
       (a) TLPS cost: Greedy: 1st "BMI" -6.25 (C_dist 0.778, C_ord 0.828); OrdCE: 1st "Glucose" -3.0, 2nd "BMI" -5.05 (C_dist 0.825, C_ord 0.749).
       (b) DACE cost: Greedy: 1st "BMI" -0.8, 2nd "SkinThickness" -2.5, 3rd "Glucose" -8.5, 4th "Insulin" -32.0 (C_dist 0.716, C_ord 0.825); OrdCE: 1st "Insulin" -32.0, 2nd "Glucose" -8.5, 3rd "SkinThickness" -2.5, 4th "BMI" -0.8 (C_dist 0.716, C_ord 0.528).
     • OrdCE finds ordered actions with a better ordering cost C_ord: with TLPS the changed features themselves differ from Greedy (an effect of the joint optimization), and with DACE the features are the same but the order differs.
     • The ordered actions obtained by OrdCE are consistent with the causal relations estimated beforehand ‣ appropriate orders of change are presented based on the feature interactions.
     [Figure: the causal DAGs used for the interaction matrix.]
  20. Summary: ordered counterfactual explanation
     • A new CE framework that also presents the order in which to change the features.
     • Requirement: an action should specify, besides how to change the features, an appropriate order of changes based on their interactions.
     • We developed a new CE method that optimizes and presents the feature changes and their order simultaneously: an ordering cost function based on interactions such as causal effects, a joint-optimization formulation, and a MILO-based solution method.
     [Figure/table: the causal DAGs and example ordered actions (OrdCE + TLPS: 1st "JobSkill" +1, 2nd "Income" +6; OrdCE + DACE: 1st "HealthStatus" +3, 2nd "WorkPerDay" +1, 3rd "Income" +4).]
  21. Counterfactual Explanation Tree: Transparent and Consistent Actionable Recourse with Decision Tree*
     Kentaro Kanamori (Hokkaido University), Takuya Takagi (Fujitsu Ltd.), Ken Kobayashi (Fujitsu Ltd. / Tokyo Institute of Technology), Yuichi Ike (The University of Tokyo). Accepted to AISTATS-22.
     * K. Kanamori, T. Takagi, K. Kobayashi, and Y. Ike: "Counterfactual Explanation Tree: Transparent and Consistent Actionable Recourse with Decision Tree," In Proceedings of the 25th International Conference on Artificial Intelligence and Statistics (AISTATS 2022), to appear.
  22. Motivation: global counterfactual explanation
     • We want to present actions for multiple inputs X ⊂ 𝒳 simultaneously.
     • The individual receiving the prediction (≒ the input instance x) is not necessarily the one who executes the action a* [Karimi+ 20], e.g., attrition prediction, where the company executes actions that lower employees' attrition risk.
     • An action for one individual x (e.g., a transfer) also affects individuals other than x (e.g., through changes to HR policies).
     ‣ Optimizing actions individually for each input is therefore inappropriate.
     [Figure: a decision maker with an XAI assistant assigning actions (raise, less overtime, transfer, …) to employees globally.]
  23. Research goal
     • Learn a global summary of actions that is transparent and consistent.
     • Properties desired of a global assignment of actions:
     • Transparency [Rawal+ 20]: the process (reason) by which each action was assigned can be explained.
     • Consistency [Rudin+ 19]: the reasons for assigning actions do not conflict between individuals; e.g., the explanation "Age > 35 and Department = Sales" is inappropriate if it applies to two employees who received different actions (it is not unique).
     • Requirement: a transparent and consistent way of assigning actions is needed.
     • Goal 1: introduce a summary model that assigns actions over the whole input space.
     • Goal 2: develop a method for learning that summary model from given data.
     [Figure: two employees with similar features (Age 37 / 42, Department: Sales, no overtime, performance A / B) assigned different actions (raise vs. transfer); one asks "why am I the only one being transferred?!"]
  24. Proposal: counterfactual explanation tree
     • A decision tree that predicts a valid action for each input.
     • CET: Counterfactual Explanation Tree. Given an input space 𝒳 and a set of candidate actions 𝒜, a counterfactual explanation tree is a decision tree h: 𝒳 → 𝒜.
     • It predicts, for each input x, an action a that is valid for x.
     • The process of assigning actions over the input space can be explained by rules (transparency), and for any input and action the assigning rule is uniquely determined (consistency).
     • Invalidity measure: i_γ(a | x) := C(a | x) + γ · l(f(x + a), y*), i.e., the cost plus a relaxation of the hard constraint f(x + a) = y* into a loss term.
     [Figure: an example CET whose internal nodes test "Overtime = Yes" and "Performance ≥ …" and whose leaves assign the actions "Income: +12K $", "Overtime: Yes → No", "Department: Sales → HR".]
  25. Proposal: learning problem and algorithm for CET
     • CET is learned by alternating optimization with stochastic local search and MILO.
     • Learning a Counterfactual Explanation Tree: given an input set X ⊆ 𝒳 and parameters γ, λ > 0, solve
       min_{h ∈ ℋ} o_{γ,λ}(h | X) := (1/|X|) ∑_{x ∈ X} i_γ(h(x) | x) + λ · |ℒ(h)|,
       where ℋ is the set of CETs and ℒ(h) is the set of leaves of h; the first term is the average invalidity of the assigned actions and the second penalizes the total number of leaves (i.e., of distinct actions).
     • Strategy: alternately optimize the tree structure (the partition of the input space) and the actions.
     • Tree-structure search: stochastic local search with pruning.
     • Optimizing the action for the inputs X_l ⊆ X reaching each leaf: an extension of the MILO formulation.
     ‣ The trade-off between the validity of the actions and their number is controlled during optimization (a sketch of the objective and the per-leaf step follows below).
     • Theorem 1: |ℒ(h*)| ≤ (γ + λ)/λ.
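A minimal sketch of the objective o_{γ,λ} and of the inner step of the alternating optimization, assuming a weighted ℓ1 cost and a 0-1 loss inside i_γ and replacing the per-leaf MILO by a brute-force search over a candidate action set; names and simplifications are mine, not the paper's.

```python
import numpy as np

def invalidity(a, x, f, y_target, weights, gamma=1.0):
    """i_gamma(a | x) = C(a | x) + gamma * l(f(x + a), y*), with a weighted
    L1 cost and a 0-1 loss as simple stand-ins."""
    cost = np.sum(weights * np.abs(a))
    loss = float(f.predict((x + a).reshape(1, -1))[0] != y_target)
    return cost + gamma * loss

def cet_objective(leaves, f, y_target, weights, gamma=1.0, lam=0.1):
    """o_{gamma,lambda}(h | X): average invalidity of the assigned actions
    plus lambda times the number of leaves.  `leaves` is a list of pairs
    (X_l, a_l): the inputs reaching leaf l and the action assigned there."""
    n = sum(len(X_l) for X_l, _ in leaves)
    avg_inv = sum(invalidity(a_l, x, f, y_target, weights, gamma)
                  for X_l, a_l in leaves for x in X_l) / n
    return avg_inv + lam * len(leaves)

def best_leaf_action(X_l, candidates, f, y_target, weights, gamma=1.0):
    """Inner step of the alternating optimization: for a fixed leaf (fixed
    tree structure), choose the single action minimizing the summed
    invalidity over the inputs in that leaf.  The paper solves this step
    by MILO; here it is a brute-force search over a candidate set."""
    return min(candidates,
               key=lambda a: sum(invalidity(a, x, f, y_target, weights, gamma)
                                 for x in X_l))
```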
  26. Experimental results (IBM Attrition dataset)
     • CET presents valid actions while maintaining transparency and consistency.
     • Compared with a rule-set summary (AReS [Rawal+ 20]): computational experiments on the validity of the assigned actions (quantitative) and a user study on interpretability for humans (qualitative).
     • Validity of the assigned actions:
       Dataset | Method | Cost          | Loss          | Invalidity
       Train   | AReS   | 0.436 ± 0.06  | 0.435 ± 0.07  | 0.871 ± 0.04
       Train   | CET    | 0.349 ± 0.1   | 0.4 ± 0.11    | 0.749 ± 0.05
       Test    | AReS   | 0.45 ± 0.08   | 0.298 ± 0.09  | 0.748 ± 0.09
       Test    | CET    | 0.383 ± 0.12  | 0.318 ± 0.19  | 0.701 ± 0.12
     • User study: AReS 95.12% accuracy, 784.8 ± 202 s; CET 100.0% accuracy, 674.0 ± 392 s.
     • In most cases CET gave better validity, and in the user study CET was better in both accuracy and answering time ‣ CET assigns valid actions in a form users can interpret.
  27. Experimental results (IBM Attrition dataset), continued
     • The same results as the previous slide, shown together with example summaries produced by CET (proposed) and by AReS [Rawal+ 20].
  28. Summary: counterfactual explanation tree
     • A new CE framework that summarizes actions globally.
     • Requirement: when presenting CEs for multiple inputs, a transparent and consistent way of assigning actions is needed.
     • We developed a new CE method that summarizes, with a decision tree, the actions assigned over the whole input space: we introduced the counterfactual explanation tree, which predicts a valid action for each input, formulated its learning problem, and proposed a learning algorithm based on stochastic local search and MILO.
     [Figure: an example CET with the leaf actions "Income: +12K $", "Overtime: Yes → No", "Department: Sales → HR".]
  29. Summary
     • Counterfactual explanation (CE) presents, as the explanation, an action a* for obtaining the desired prediction y* from the model f:
       a* = argmin_{a ∈ 𝒜} C(a | x)  subject to  f(x + a) = y*.
     • DACE: distribution-aware CE ‣ a new method for realistic CEs that accounts for feature correlations and outlier risk (IJCAI-20).
     • OrdCE: CE that also presents the order of changes ‣ a CE framework that optimizes and presents the order of changes based on causal effects (AAAI-21).
     • CET: an interpretable global summary of CEs ‣ a CE framework that summarizes and predicts actions with a decision tree (AISTATS-22).
     [Figures: the DACE scatter plot, example OrdCE ordered actions, and an example CET, as shown earlier.]