
Medical Deep Learning Study Group
 DL Study Group, Session 3 (April 2020)

M.Inomata

April 01, 2020

Transcript

  1. Self-introduction: Mitsuhiro Inomata (いのまた みつひろ)

    Representative Director and developer at tech vein, Inc. Twitter: @ino2222 https://www.techvein.com
  2. ① A Primer in BERTology: What we know about how BERT works

    Transformer-based models are now widely used in NLP, but we still understand little about their inner workings. This paper synthesizes more than 40 analysis studies of the well-known BERT model (Devlin et al. 2019) to explain what is known so far, outlines the proposed modifications to the model and its training regime, and sketches directions for further research. https://arxiv.org/abs/2002.12327v1
  3. ② On Feature Normalization and Data Augmentation

    Modern neural network training relies heavily on data augmentation to improve generalization. Following the early success of label-preserving augmentations, there has been growing interest in label-perturbing methods that combine features and labels across training samples to smooth the learned decision surface. This paper proposes a new augmentation method that uses the first and second moments extracted by feature normalization: the moments of a learned feature are replaced with those of a different training image, and the target labels are interpolated. The approach is fast, operates entirely in feature space, and mixes a different signal than prior methods, so it combines effectively with existing augmentation techniques. Its effectiveness is demonstrated on benchmark datasets in computer vision, speech, and natural language processing. https://arxiv.org/abs/2002.11102v2
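
    The core operation described above (swapping the per-instance feature moments between two samples while interpolating their labels) can be sketched roughly as follows; the layer at which the exchange happens, the tensor shapes, and the mixing weight `lam` are illustrative assumptions rather than the paper's exact recipe.

```python
import torch

def moment_exchange(h_a, h_b, y_a, y_b, lam=0.9, eps=1e-5):
    """Sketch of feature-moment exchange between two training samples.

    h_a, h_b: feature maps of shape (C, H, W) from some intermediate layer.
    y_a, y_b: the corresponding (one-hot) targets.
    """
    # Per-channel first and second moments of each sample's features.
    mu_a, std_a = h_a.mean(dim=(1, 2), keepdim=True), h_a.std(dim=(1, 2), keepdim=True) + eps
    mu_b, std_b = h_b.mean(dim=(1, 2), keepdim=True), h_b.std(dim=(1, 2), keepdim=True) + eps

    # Normalize sample A, then re-inject sample B's moments.
    h_mix = (h_a - mu_a) / std_a * std_b + mu_b

    # Interpolate the targets to reflect the injected signal.
    y_mix = lam * y_a + (1.0 - lam) * y_b
    return h_mix, y_mix
```
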
  4. ④ Batch Normalization Biases Deep Residual Networks Towards Shallow Paths

    Batch normalization has multiple benefits: it improves the conditioning of the loss landscape and is a surprisingly effective regularizer. Its most important benefit, however, arises in residual networks. At initialization, batch normalization downscales the residual branch relative to the skip connection by a normalization factor proportional to the square root of the network depth. This guarantees that, early in training, the function computed by deep normalized residual networks is dominated by shallow paths with well-behaved gradients. Using this insight, the authors develop a simple initialization scheme that can train very deep residual networks without normalization. They also show that although batch normalization enables stable training at larger learning rates, this advantage is only useful when one wishes to parallelize training with large batch sizes. These results help isolate the benefits of batch normalization across different architectures. https://arxiv.org/abs/2002.10444v1
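
    A rough version of the variance argument behind the downscaling claim, under the usual assumption that the residual branch at initialization preserves the variance of its input (a sketch, not the paper's derivation):

```latex
% Without normalization, each residual block adds its branch's variance to the skip path:
\operatorname{Var}(x_{\ell+1}) \;=\; \operatorname{Var}(x_\ell) + \operatorname{Var}\!\big(f_\ell(x_\ell)\big)
\;\approx\; 2\operatorname{Var}(x_\ell)
\quad\Longrightarrow\quad \operatorname{Var}(x_\ell) \approx 2^{\ell}\operatorname{Var}(x_0).

% With batch normalization, the branch output has variance O(1) while the skip path
% grows only linearly with depth, so at block \ell the residual branch is effectively
% scaled, relative to the skip connection, by roughly
\sqrt{\frac{\operatorname{Var}\!\big(\mathrm{BN}(f_\ell(x_\ell))\big)}{\operatorname{Var}(x_\ell)}}
\;\sim\; \frac{1}{\sqrt{\ell}} .
```
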
  5. ⑤ AutoML-Zero: Evolving Machine Learning Algorithms From Scratch

    Machine learning research has advanced on many fronts, including model structures and learning methods. The effort to automate such research, known as AutoML, has also made significant progress. However, this progress has mostly focused on the architecture of neural networks, relying on sophisticated expert-designed layers as building blocks, or on similarly restrictive search spaces. Our goal is to show that AutoML can go further: we demonstrate this by introducing a new framework that substantially reduces human bias through a generic search space. Despite the vastness of this space, evolutionary search can discover two-layer neural networks trained by backpropagation. These simple neural networks can then be surpassed by evolving directly on tasks of interest, e.g. CIFAR-10 variants, where modern techniques such as bilinear interactions, normalized gradients, and weight averaging emerge in the top algorithms. Furthermore, evolution adapts algorithms to different task types. We believe these preliminary successes in discovering machine learning algorithms from scratch indicate a promising new direction for the field. https://arxiv.org/abs/2003.03384v1
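
    A toy sketch of the underlying idea (evolving small programs built from primitive ops and keeping the fittest) is shown below. The op set, register machine, program length, and tournament scheme are assumptions chosen for brevity and differ from the paper's actual search space and evolution setup.

```python
import random
import numpy as np

# Programs are short lists of primitive ops acting on a small register file;
# fitness is the negative error on a tiny regression task.
OPS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
    "sin": lambda a, b: np.sin(a),
}

def random_instruction(n_regs=4):
    return (random.choice(list(OPS)), random.randrange(n_regs),
            random.randrange(n_regs), random.randrange(n_regs))

def run_program(program, x, n_regs=4):
    regs = [np.zeros_like(x) for _ in range(n_regs)]
    regs[0] = x                      # register 0 holds the input
    for op, a, b, out in program:
        regs[out] = OPS[op](regs[a], regs[b])
    return regs[1]                   # register 1 holds the prediction

def fitness(program, x, y):
    return -float(np.mean((run_program(program, x) - y) ** 2))

def evolve(x, y, pop_size=50, length=8, steps=2000):
    pop = [[random_instruction() for _ in range(length)] for _ in range(pop_size)]
    for _ in range(steps):
        # Tournament selection plus a single-instruction mutation.
        parent = max(random.sample(pop, 10), key=lambda p: fitness(p, x, y))
        child = list(parent)
        child[random.randrange(length)] = random_instruction()
        pop.append(child)
        pop.pop(0)                   # age out the oldest individual
    return max(pop, key=lambda p: fitness(p, x, y))

x = np.linspace(-1, 1, 64)
y = x * x + x                        # target expressible with the op set above
best = evolve(x, y)
print(fitness(best, x, y))
```
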
  6. ⑥ Hyper-Parameter Optimization: A Review of Algorithms and Applications

    Since deep neural networks were developed, they have contributed enormously to everyday life, and machine learning now provides more rational advice than humans can in almost every aspect of daily life. Despite this, designing and training neural networks remains a difficult and unpredictable procedure. To lower the technical threshold for ordinary users, automated hyper-parameter optimization (HPO) has become a popular topic both in academia and in industry. This paper reviews the most essential topics in HPO. It first introduces the key hyper-parameters related to model training and structure, and discusses their importance and how to define their value ranges. It then focuses on the major optimization algorithms and their applicability, covering their efficiency and accuracy especially for deep learning networks. Next, it reviews the main services and toolkits for HPO, comparing their support for state-of-the-art search algorithms, feasibility within major deep learning frameworks, and extensibility to new user-designed modules. The paper concludes with the problems that arise when HPO is applied to deep learning, a comparison between optimization algorithms, and prominent approaches for model evaluation under limited computational resources. https://arxiv.org/abs/2003.05689v1
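
    As a minimal illustration of automated HPO, the sketch below runs plain random search over a hypothetical search space; `train_and_evaluate`, the parameter ranges, and the budget are placeholders, and the toolkits reviewed by the paper use far more sophisticated search algorithms.

```python
import math
import random

def sample_config():
    # Illustrative search space: log-uniform learning rate, categorical batch size,
    # uniform dropout. Real spaces are model-specific.
    return {
        "learning_rate": 10 ** random.uniform(-5, -1),
        "batch_size": random.choice([32, 64, 128, 256]),
        "dropout": random.uniform(0.0, 0.5),
    }

def random_search(train_and_evaluate, budget=50):
    best_score, best_config = -math.inf, None
    for _ in range(budget):
        config = sample_config()
        score = train_and_evaluate(config)   # e.g. validation accuracy
        if score > best_score:
            best_score, best_config = score, config
    return best_config, best_score

# Dummy objective standing in for real training, just to make the sketch runnable:
dummy = lambda c: -abs(math.log10(c["learning_rate"]) + 3) - c["dropout"]
print(random_search(dummy))
```
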
  7. ⑦ A Survey on Contextual Embeddings

    Contextual embeddings such as ELMo and BERT move beyond global word representations like Word2Vec and achieve ground-breaking performance on a wide range of natural language processing tasks. By assigning each word a representation based on its context, contextual embeddings capture the use of words across varied contexts and encode knowledge that transfers across languages. This survey reviews existing contextual embedding models, cross-lingual polyglot pre-training, the application of contextual embeddings in downstream tasks, model compression, and model analysis. https://arxiv.org/abs/2003.07278v1
  8. ⑧ ReZero is All You Need: Fast Convergence at Large Depth

    Deep networks have enabled significant performance gains across domains, but they often suffer from vanishing/exploding gradients. This is especially true for Transformer architectures, where training beyond 12 layers is difficult without large datasets and compute budgets. In general, inefficient signal propagation is known to impede the training of deep networks; in Transformers, multi-head self-attention is the main cause of this poor signal propagation. To facilitate deep signal propagation, we propose ReZero, a simple architectural change that initializes an arbitrary layer as the identity map using a single additional learned parameter per layer. Applying this technique to language modeling, we find that we can easily train ReZero-Transformer networks of over 100 layers. Applied to 12-layer Transformers, ReZero converges 56% faster on enwiki8. ReZero also applies beyond Transformers to other residual networks, converging 1,500% faster for deep fully connected networks and 32% faster for a ResNet-56 trained on CIFAR-10. https://arxiv.org/abs/2003.04887v1
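
    The change itself is small enough to state in code: each residual branch is multiplied by a learned scalar initialized to zero, so every layer starts out as the identity map. The sketch below shows the idea for a generic residual block; the sublayer shape and depth are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ReZeroBlock(nn.Module):
    """Residual block with a ReZero gate: x + alpha * F(x), alpha initialized to 0.

    `sublayer` stands for any residual branch (self-attention, an MLP, a conv stack);
    its exact form here is an assumption for illustration.
    """
    def __init__(self, sublayer: nn.Module):
        super().__init__()
        self.sublayer = sublayer
        # One extra learned scalar per layer; starting at zero makes the block
        # the identity map at initialization.
        self.alpha = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        return x + self.alpha * self.sublayer(x)

# Stacking many such blocks stays trainable, because at initialization the whole
# stack propagates signal exactly like the identity network.
deep_net = nn.Sequential(*[ReZeroBlock(nn.Sequential(nn.Linear(64, 64), nn.ReLU(),
                                                     nn.Linear(64, 64)))
                           for _ in range(128)])
out = deep_net(torch.randn(8, 64))
```
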
  9. ⑨ Lagrangian Neural Networks

    Accurate models of the world are built on underlying notions of symmetry. In physics, these symmetries correspond to conservation laws such as energy and momentum. Yet even as neural network models see increasing use in the physical sciences, they struggle to learn these symmetries. This paper proposes Lagrangian Neural Networks (LNNs), which can parameterize arbitrary Lagrangians using neural networks. In contrast to models that learn Hamiltonians, LNNs do not require canonical coordinates, so they work well where canonical momenta are unknown or difficult to compute. Unlike previous approaches, our method does not restrict the functional form of the learned energies and produces energy-conserving models for a variety of tasks. We test our approach on a double pendulum and a relativistic particle, demonstrating energy conservation where a baseline approach drifts, and modeling relativity without canonical coordinates where the Hamiltonian approach fails. Finally, we show how this model can be applied to graphs and continuous systems using a Lagrangian graph network, and demonstrate it on a 1D wave equation. https://arxiv.org/abs/2003.04630v1
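
    Concretely, a neural network parameterizes the scalar Lagrangian L(q, q̇), and accelerations are obtained by rearranging the Euler-Lagrange equations, roughly as follows (standard form; notation is mine):

```latex
% Euler-Lagrange equations, \frac{d}{dt}\nabla_{\dot q}\mathcal{L} = \nabla_{q}\mathcal{L},
% expanded via the chain rule and solved for the acceleration:
\ddot{q} \;=\; \big(\nabla_{\dot q}\nabla_{\dot q}^{\top}\mathcal{L}\big)^{-1}
               \Big[\,\nabla_{q}\mathcal{L} \;-\; \big(\nabla_{q}\nabla_{\dot q}^{\top}\mathcal{L}\big)\,\dot q\,\Big]
```
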
  10. ⑩ Set-Structured Latent Representations

    Unstructured data often has a latent component structure, such as the objects in an image of a scene. In these situations the relevant latent structure is an unordered collection, or set. However, learning such representations directly from data is difficult because of their discrete, unordered nature. Here we develop a framework for differentiable learning of set-structured latent representations. Using this framework, we show how to decompose data such as images into sets of interpretable and meaningful components, and demonstrate that existing methods cannot properly disentangle the relevant structure. We also show how to extend our methodology to downstream tasks such as set matching, which use set-specific operations. Our code is available at this https URL. https://arxiv.org/abs/2003.04448v1
  11. ① Learning to Simulate Complex Physics with Graph Networks

    This work presents a general framework for learning simulation, and provides a single model implementation that achieves state-of-the-art performance across a variety of physical domains in which fluids, rigid solids, and deformable materials interact with one another. Our framework, which we call the Graph Network-based Simulator (GNS), represents the state of a physical system with particles, expressed as nodes in a graph, and computes dynamics via learned message passing. Our results show that the model generalizes from single-timestep predictions with thousands of particles during training to different initial conditions, thousands of timesteps, and at least an order of magnitude more particles at test time. The model was robust to hyperparameter choices across various evaluation metrics: the main determinants of long-term performance were the number of message-passing steps and mitigating error accumulation by corrupting the training data with noise. Our GNS framework is the most accurate general-purpose learned physics simulator to date, and is expected to solve a wide range of complex forward and inverse problems.
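
    A single learned message-passing step of the kind described above can be sketched as follows; the feature sizes, MLP shapes, and sum aggregation are illustrative assumptions rather than the GNS paper's exact configuration.

```python
import torch
import torch.nn as nn

# Nodes are particles, directed edges connect nearby particles, and small MLPs
# compute per-edge messages and per-node updates.
node_dim, edge_dim, hidden = 16, 8, 64
edge_mlp = nn.Sequential(nn.Linear(2 * node_dim + edge_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, edge_dim))
node_mlp = nn.Sequential(nn.Linear(node_dim + edge_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, node_dim))

def message_passing_step(node_feats, edge_feats, senders, receivers):
    # Compute a message for every edge from its endpoint features.
    msg_in = torch.cat([node_feats[senders], node_feats[receivers], edge_feats], dim=-1)
    messages = edge_mlp(msg_in)

    # Aggregate incoming messages at each receiver node (sum aggregation).
    agg = torch.zeros_like(node_feats[:, :edge_dim]).index_add_(0, receivers, messages)

    # Update node features from their current state plus aggregated messages.
    new_nodes = node_mlp(torch.cat([node_feats, agg], dim=-1))
    return new_nodes, messages

# Example: 5 particles, 4 directed edges.
nodes = torch.randn(5, node_dim)
edges = torch.randn(4, edge_dim)
senders, receivers = torch.tensor([0, 1, 2, 3]), torch.tensor([1, 2, 3, 4])
nodes, edges = message_passing_step(nodes, edges, senders, receivers)
```
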
  12. ② A Primer in BERTology: What we know about how BERT works

    (Duplicate of slide 2 above; see the summary there.) https://arxiv.org/abs/2002.12327v1
  13. ③ Learning to Shade Hand-drawn Sketches

    We present a fully automatic method for generating detailed, accurate artistic shadows from pairs of line-drawing sketches and lighting directions. We also provide a new dataset of 1,000 examples of line drawings paired with shadows and tagged with lighting directions. Remarkably, the generated shadows quickly convey the underlying 3D structure of the sketched scene. As a result, the shadows generated by our approach can be used directly, or as an excellent starting point for artists. We demonstrate that the proposed deep learning network takes a hand-drawn sketch, builds a 3D model in latent space, and renders the resulting shadows. The generated shadows respect the hand-drawn lines and the underlying 3D space, and contain sophisticated, accurate details such as self-shadowing effects. Moreover, the generated shadows contain artistic effects, such as halos arising from rim lighting or back lighting, that would be impossible with traditional 3D rendering methods. https://arxiv.org/abs/2002.11812v1
  14. ④ StyleGAN2 Distillation for Feed-forward Image Manipulation

    StyleGAN2 is a state-of-the-art network for generating realistic images. It is explicitly trained to have disentangled directions in latent space, which allows efficient image manipulation by varying latent factors. Editing an existing image requires embedding the given image into StyleGAN2's latent space. Latent-code optimization via backpropagation is commonly used for qualitative embedding of real-world images, but it is prohibitively slow for many applications. We propose a way to distill a particular image manipulation of StyleGAN2 into an image-to-image network trained in a paired fashion. The resulting pipeline is an alternative to existing GANs, trained on unpaired data. We provide results for human-face transformations: gender swap, aging/rejuvenation, style transfer, and image morphing. We show that the quality of generation with our method is comparable to StyleGAN2 backpropagation and current state-of-the-art methods on these particular tasks. https://arxiv.org/abs/2003.03581v1
  15. ⑤ AutoML-Zero: Evolving Machine Learning Algorithms From Scratch

    (Duplicate of slide 5 above; see the summary there.) https://arxiv.org/abs/2003.03384v1
  16. ⑥ Lagrangian Neural Networks

    (Duplicate of slide 9 above; see the summary there.) https://arxiv.org/abs/2003.04630v1
  17. ⑦ MLIR: A Compiler Infrastructure for the End of Moore's Law

    This work presents MLIR, a novel approach to building reusable and extensible compiler infrastructure. MLIR aims to address software fragmentation, improve compilation for heterogeneous hardware, significantly reduce the cost of building domain-specific compilers, and aid in connecting existing compilers together. MLIR facilitates the design and implementation of code generators, translators, and optimizers at different levels of abstraction, and also across application domains, hardware targets, and execution environments. This work (1) discusses MLIR as a research artifact built for extension and evolution, identifying the challenges and opportunities posed by this novel design point in design, semantics, optimization specification, system, and engineering; and (2) evaluates MLIR as a generalized infrastructure that reduces the cost of building compilers, presenting diverse use cases and highlighting research and educational opportunities in future programming languages, compilers, execution environments, and computer architecture. It also introduces the rationale behind MLIR, its original design principles, structure, and semantics. https://arxiv.org/abs/2002.11054v2 (Overlaps with the previous session's deck.)
  18. ⑧ SLIDE: In Defense of Smart Algorithms over Hardware Acceleration for Large-Scale Deep Learning Systems

    Deep learning (DL) algorithms are the central focus of modern machine learning systems. As data volumes grow, it has become common to train large neural networks with hundreds of millions of parameters to memorize these data volumes and maintain enough capacity for state-of-the-art accuracy. To avoid the expensive computation associated with large models and data, the community is investing increasingly in specialized hardware for model training. However, specialized hardware is expensive and hard to generalize across many tasks, and algorithmic advances have so far failed to show a direct advantage over powerful hardware such as the NVIDIA V100 GPU. This paper provides an exception: we propose SLIDE (Sub-LInear Deep learning Engine), which uniquely blends smart randomized algorithms with multi-core parallelism and workload optimization. Using only a CPU, SLIDE drastically reduces the computation for both training and inference compared with an optimized TensorFlow (TF) implementation running on a GPU. Evaluations on industry-scale recommendation datasets show that training with SLIDE on a 44-core CPU is more than 3.5x faster (1 hour vs. 3.5 hours) than training with TF on a Tesla V100, at any accuracy level. On the same CPU hardware, SLIDE is more than 10x faster than TF. Code and scripts are provided for reproducibility. https://arxiv.org/abs/1903.03129v2
  19. ⑨ An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling

    For most deep learning practitioners, sequence modeling is synonymous with recurrent networks. Yet recent results indicate that convolutional architectures can outperform recurrent networks on tasks such as audio synthesis and machine translation. Given a new sequence modeling task or dataset, which architecture should one use? We conduct a systematic evaluation of generic convolutional and recurrent architectures for sequence modeling, evaluating the models across a broad range of standard tasks commonly used to benchmark recurrent networks. Our results show that a simple convolutional architecture outperforms canonical recurrent networks such as LSTMs across a diverse range of tasks and datasets, while exhibiting longer effective memory. We conclude that the common association between sequence modeling and recurrent networks should be reconsidered, and that convolutional networks should be regarded as a natural starting point for sequence modeling tasks. To assist related work, we have made code available at this http URL. https://arxiv.org/abs/1803.01271v2
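
    The generic convolutional architecture referred to here is built from causal, dilated 1D convolutions; a minimal sketch of such a block is given below, with channel counts and kernel size as illustrative assumptions.

```python
import torch
import torch.nn as nn

class CausalDilatedConv(nn.Module):
    """A causal, dilated 1D convolution: the basic building block of a generic
    convolutional sequence model (TCN)."""
    def __init__(self, channels, kernel_size=3, dilation=1):
        super().__init__()
        # Left-pad so that the output at time t only depends on inputs up to time t.
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def forward(self, x):                         # x: (batch, channels, time)
        x = nn.functional.pad(x, (self.pad, 0))   # pad only on the left (the past)
        return self.conv(x)

# Stacking blocks with exponentially growing dilation covers long histories
# with few layers.
tcn = nn.Sequential(*[nn.Sequential(CausalDilatedConv(32, dilation=2 ** i), nn.ReLU())
                      for i in range(6)])
out = tcn(torch.randn(4, 32, 100))                # receptive field = 1 + 2*(2^6 - 1) steps
```
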
  20. ⑩ Sparse Orthogonal Variational Inference for Gaussian Processes

    We introduce a new interpretation of sparse variational approximations for Gaussian processes using inducing points. It is based on decomposing a Gaussian process into the sum of two independent processes: one spanned by a finite basis of inducing points, and another capturing the remaining variation. We show that this formulation recovers existing approximations while at the same time yielding tighter lower bounds on the marginal likelihood and new stochastic variational inference algorithms. We demonstrate the efficiency of these algorithms on several Gaussian process models, from standard regression to multiclass classification using (deep) convolutional Gaussian processes, and report state-of-the-art results on CIFAR-10 among purely GP-based models. https://arxiv.org/abs/1910.10596v3
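
    The decomposition at the heart of this reinterpretation can be written roughly as follows (my notation: u are the inducing variables at inducing points Z, and K_ZZ is their prior covariance):

```latex
% Decompose the GP prior into a component spanned by the inducing variables u = f(Z)
% and an independent residual process that captures the remaining variation:
f(\cdot) \;=\; k(\cdot, Z)\,K_{ZZ}^{-1}\,u \;+\; f_{\perp}(\cdot),
\qquad
f_{\perp} \;\sim\; \mathcal{GP}\!\big(0,\; k(\cdot,\cdot) - k(\cdot,Z)\,K_{ZZ}^{-1}\,k(Z,\cdot)\big),
\qquad f_{\perp} \;\perp\; u .
```
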
  21. ④ StyleGAN2 Distillation for Feed-forward Image Manipulation

    (Duplicate of slide 14 above; see the summary there.) https://arxiv.org/abs/2003.03581v1