Slide 1

Machine Learning Economists Should Know: Prediction with Counterfactual Models
Kazuki Taniguchi

Slide 2

About me: Kazuki Taniguchi (@kazk1018)
• Career
  • 2014.4–2019.3: CyberAgent, Inc., Ad Tech Division, AI Lab
  • 2019.4–: IT startup (product development / marketing); freelance (AI/ML research and development)
• Research areas
  • Pattern Recognition / Image Restoration
  • Recommendation / Response Prediction
  • Counterfactual ML

Slide 3

Introduction

Slide 4

(Preface) A definition of machine learning
• A narrower definition than machine learning in general: a model f(x)
  • Input: data of some kind (numbers, images, text, etc.)
  • Output: a result (label, number, image, etc.)
  • Used for prediction and estimation

Slide 5

The rapid spread of machine learning
• "NEC's face recognition ranked first in the world for consecutive benchmark rounds"
• "Artwork Personalization at Netflix"
• "Identifying Ramen Jiro shops with AutoML"
• "Machine Learning at Uber"

Slide 6

Identifying Ramen Jiro shops with AutoML
• Prepared about 48,000 labelled images (1,170 images × 41 shops)
• Achieved 95% accuracy with Google AutoML Vision
• Even someone without machine-learning expertise obtained a model after about 18 minutes of training

Slide 7

Artwork Personalization at Netflix
(Example: different artwork for "Good Will Hunting" is shown to User A and User B)

Slide 8

Growing expectations for machine learning
• Medical diagnosis, autonomous driving, business impact
• The expectation is not only ever-higher prediction accuracy, but also application to domains that carry risk for individuals and society

Slide 9

Growing expectations for machine learning
• Explainability: the ability to explain a trained model
• Stability (robustness, generalization) [1]: the trained model behaves as expected consistently, without depending on a particular dataset or task

Slide 10

A fundamental challenge of machine learning
• Independent and identically distributed (i.i.d.) samples
  • Each sample is drawn independently from the same probability distribution
  • Many machine-learning tools rely on this assumption: the likelihood, the law of large numbers, the central limit theorem, …
• In reality, the distribution of the training data differs from the distribution of the data we predict on [1]

Slide 11

A fundamental challenge of machine learning
(Illustration: horses in the training data vs. data to predict on. Is a horse on snow fine? Is a toy horse fine? Predictions are reliable only for inputs close to the training data.)

Slide 12

A fundamental challenge of machine learning
• Defining the distribution of the data we want to predict on (the model's requirements)
  • "Horse"
  • "Horse on the grass"
  • "Horse Toy"
• Selection bias
  • Bias introduced at the moment the samples to be used are selected
  • Arises when the training data is selected in a way that depends on the data itself

Slide 13

Case study: an ad delivery system

Slide 14

Ad delivery system
• Real-Time Bidding (RTB): an SSP sends a bid request and DSPs bid in an RTB auction
  • win = the ad is displayed (an impression), after which a click may be observed
  • lose = the ad is not displayed, so a click can never be observed, i.e., no label exists
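To make the missing-label problem concrete, here is a minimal, hypothetical Python simulation (not from the talk; the feature and probability choices are assumptions): only the requests that win the auction produce an impression, so only those rows ever receive a click label.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    x = rng.normal(size=(n, 3))                        # bid-request features (hypothetical)
    q_win = 1 / (1 + np.exp(-x[:, 0]))                 # win probability depends on x
    win = rng.random(n) < q_win                        # auction outcome per request
    click = rng.random(n) < 1 / (1 + np.exp(-(0.5 * x[:, 1] - 1.0)))

    # Only won auctions produce an impression, so only they yield a labelled log row.
    X_log, y_log = x[win], click[win]
    print(f"bid requests: {n}, labelled log rows: {y_log.size}")

Because winning depends on x, the logged rows are not a random sample of the requests: this is exactly the selection bias discussed earlier.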

Slide 15

Ad delivery system
• Click-prediction model [2]
• Log data from the delivery system: a table with columns ID, ad, user, imp, click, where ads {A, B, C} are served to users {a, b, d, e} and each row records whether an impression and a click occurred (Yes/No)

Slide 16

Ad delivery system
• Click-prediction model [2]
• Of the delivery-system log, only the rows with an observed impression (and therefore a click label) can be used for training

Slide 17

Problem setting of the ad delivery system
• Click-prediction model [2]
• The data we actually want to predict clicks for also covers requests whose ads were never shown: the counterfactual, which carries no label in the log

Slide 18

Problem setting of the ad delivery system
The distribution of the data used to train the model differs from the distribution of the data we want to predict on:
  P(x) = q(win|x) P′(x), and hence, in general, P(x) ≠ P′(x)
where P′(x) is the distribution of the data we want to predict on, P(x) is the distribution of the training data, and q(win|x) is the conditional probability of winning the RTB auction for a request x.

Slide 19

Machine-learning approaches
• Covariate shift
• Unsupervised domain adaptation

Slide 20

Covariate Shift

Slide 21

Covariate shift [3]
• Assume the training set D = {(x_i, y_i)}_{i=1}^n is drawn i.i.d. from p(x, y), and the new inputs D′ = {x′_i}_{i=1}^m are drawn i.i.d. from p′(x) = ∫ p′(x, y) dy
• The setting that satisfies
    p(x) ≠ p′(x) and p(y|x) = p′(y|x)
  is called covariate shift
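A tiny synthetic sketch of this definition (assumed, illustrative code): the marginals p(x) and p′(x) differ, while the conditional p(y|x) that generates the labels is shared.

    import numpy as np

    rng = np.random.default_rng(0)

    def draw_labels(x):
        # Shared conditional p(y|x): the same rule generates y in both domains.
        return (rng.random(x.shape[0]) < 1 / (1 + np.exp(-2 * x))).astype(int)

    x_train = rng.normal(loc=0.0, scale=1.0, size=1000)   # samples from p(x)
    x_test = rng.normal(loc=1.5, scale=1.0, size=1000)    # samples from p'(x) != p(x)
    y_train, y_test = draw_labels(x_train), draw_labels(x_test)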

Slide 22

Prediction under covariate shift
• We want to use {(x_i, y_i)}_{i=1}^n and {x′_j}_{j=1}^m to learn a model f_θ(x) that predicts the output y for a new input x′
• With a loss function loss(y, f_θ), covariate shift is known to be handled by importance weighting [3]:
    min_θ Σ_{i=1}^n w(x_i) loss(y_i, f_θ(x_i)),  with  w(x_i) = p′(x_i) / p(x_i)
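A minimal sketch of this weighted objective (illustrative code, not from the talk): with scikit-learn, the weights w(x_i) can be passed as sample_weight so that each sample's loss is scaled by its importance; the weight column below is a stand-in, since in practice p′(x)/p(x) must itself be estimated.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))                          # training inputs from p(x)
    y = (rng.random(1000) < 1 / (1 + np.exp(-X[:, 0]))).astype(int)
    w = np.exp(0.8 * X[:, 0])                               # stand-in for w(x_i) = p'(x_i) / p(x_i)

    model = LogisticRegression(max_iter=1000)
    model.fit(X, y, sample_weight=w)                        # importance-weighted ERM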

Slide 23

Importance-weighted click prediction
• Recall P(x) = q(win|x) P′(x) and P(x) ≠ P′(x), where P′(x) is the distribution of the data we want to predict on, P(x) is the distribution of the training data, and q(win|x) is the conditional probability of winning the RTB auction for a request x
• Assuming p(y|x) = p′(y|x), the prediction model can therefore be trained with importance weighting
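For completeness, the weight follows directly from these two relations and matches the one given on the next slide:

    w(x) = P′(x) / P(x) = P′(x) / (q(win|x) P′(x)) = 1 / q(win|x)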

Slide 24

Importance-weighted click prediction
• The importance weight is the inverse of the probability of winning the RTB auction: w(x) = 1 / q(win|x)
• Taking win as the treatment, this is the same idea as inverse probability of treatment weighting (IPTW) [4]
• The true q(win|x) cannot be observed, so a propensity score is estimated
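A hedged sketch of that recipe (illustrative names and model choices, not the implementation from the talk): fit a propensity model for q(win|x) on all bid requests, clip the predictions, and train the click model on the won (logged) rows with weights 1 / q̂(win|x).

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def iptw_click_model(X_all, win, X_log, y_log):
        # X_all, win: every bid request and its auction outcome (win/lose).
        # X_log, y_log: the won (logged) requests and their observed click labels.

        # 1) Propensity model: estimate q(win | x) from all requests.
        propensity = LogisticRegression(max_iter=1000).fit(X_all, win)
        q_hat = np.clip(propensity.predict_proba(X_log)[:, 1], 1e-3, 1.0)  # avoid huge weights

        # 2) Click model trained with inverse-propensity weights w(x) = 1 / q(win | x).
        click_model = LogisticRegression(max_iter=1000)
        click_model.fit(X_log, y_log, sample_weight=1.0 / q_hat)
        return click_model

Clipping the estimated propensity is a common practical safeguard against exploding weights when q̂(win|x) is very small.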

Slide 25

Unsupervised Domain Adaptation

Slide 26

Unsupervised Domain Adaptation
• Domain adaptation
  • A technique for adapting knowledge from a domain with ample information (the source domain) to a domain with little information (the target domain)
  • A form of transfer learning
• Unsupervised domain adaptation
  • Domain adaptation in which the target domain has no supervised labels

Slide 27

Domain Adaptation Neural Network (DANN) [5]
• The objective combines a loss for predicting the label y and a loss for predicting the domain d:
    (θ̂_y, θ̂_f) = argmin_{θ_y, θ_f} L(y, d, x)
    θ̂_d = argmax_{θ_d} L(y, d, x)
• The feature representation f is learned so that the label-prediction loss is minimized while the domains become indistinguishable
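A compact PyTorch sketch of the DANN idea with a gradient-reversal layer (layer sizes and names are assumptions for illustration, not the architecture used in [5] or [6]):

    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x, lam):
            ctx.lam = lam
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            # Reverse (and scale) the gradient flowing back into the feature extractor.
            return -ctx.lam * grad_output, None

    class DANN(nn.Module):
        def __init__(self, d_in, d_hidden=64):
            super().__init__()
            self.feature = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU())
            self.label_head = nn.Linear(d_hidden, 1)    # predicts the label y
            self.domain_head = nn.Linear(d_hidden, 1)   # predicts the domain d (source vs. target)

        def forward(self, x, lam=1.0):
            f = self.feature(x)
            return self.label_head(f), self.domain_head(GradReverse.apply(f, lam))

In training, the label head's loss is computed on labelled source batches only, while the domain head's loss uses both source and target batches; the gradient-reversal layer realizes the adversarial min/max between the feature extractor and the domain classifier within ordinary backpropagation.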

Slide 28

CTR Prediction by using DANN [6]
• DANN was proposed as a CNN-based model for images
• The model is modified so that it can handle the tabular data of RTB
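One plausible way to adapt the feature extractor to tabular RTB data (an assumption for illustration; the changes made in [6] may differ) is to embed the categorical fields and concatenate them with the numeric features before the shared layers:

    import torch
    import torch.nn as nn

    class TabularFeature(nn.Module):
        # Feature extractor for tabular data: embeddings for categorical fields plus numeric inputs.
        def __init__(self, cardinalities, n_numeric, d_emb=8, d_hidden=64):
            super().__init__()
            self.embeddings = nn.ModuleList([nn.Embedding(c, d_emb) for c in cardinalities])
            d_in = d_emb * len(cardinalities) + n_numeric
            self.mlp = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU())

        def forward(self, x_cat, x_num):
            # x_cat: (batch, n_categorical) integer ids; x_num: (batch, n_numeric) floats
            embedded = [emb(x_cat[:, i]) for i, emb in enumerate(self.embeddings)]
            return self.mlp(torch.cat(embedded + [x_num], dim=1))

This module would simply replace the image (CNN) feature extractor in the DANN sketch above; the label and domain heads stay unchanged.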

Slide 29

CTR Prediction by using DANN [6]
• Experiments
  • Dataset: Criteo CTR Prediction Contest (Kaggle)
  • Compared methods
    • Baseline: deep neural network (source dataset only)
    • Importance sampling

Slide 30

Summary

Slide 31

Summary
• The rapid spread of machine learning and the expectations placed on it
  • Explainability and stability
  • Challenges of machine learning
• Case study: an ad delivery system
  • Overview of the ad delivery system
  • Click prediction with machine learning that accounts for the counterfactual
    • Covariate shift
    • Unsupervised domain adaptation

Slide 32

References
1. Kun Kuang, Peng Cui, Susan Athey, Ruoxuan Xiong, and Bo Li, "Stable Prediction across Unknown Environments", KDD, 2018
2. Olivier Chapelle, Eren Manavoglu, Romer Rosales, "Simple and scalable response prediction for display advertising", TIST, 2014
3. Hidetoshi Shimodaira, "Improving predictive inference under covariate shift by weighting the log-likelihood function", JSPI, 2000
4. James M. Robins, Andrea Rotnitzky, Lue Ping Zhao, "Estimation of Regression Coefficients When Some Regressors Are Not Always Observed", JASA, 1994
5. Yaroslav Ganin, Victor Lempitsky, "Unsupervised Domain Adaptation by Backpropagation", ICML, 2015
6. Kazuki Taniguchi, Shota Yasui, "Click Prediction Using Domain Adaptation Neural Networks" (in Japanese), JSAI, 2019

Slide 33

No content