Top10 Recent
1. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
2. Do Wide and Deep Networks Learn the Same Things? Uncovering How Neural Network Representations Vary with Width and Depth
3. RelationNet++: Bridging Visual Representations for Object Detection via Transformer Decoder
4. Intriguing Properties of Contrastive Losses
5. Teaching with Commentaries
6. A Review of Uncertainty Quantification in Deep Learning: Techniques, Applications and Challenges
7. Learning Invariances in Neural Networks
8. Underspecification Presents Challenges for Credibility in Modern Machine Learning
9. Ridge Rider: Finding Diverse Solutions by Following Eigenvectors of the Hessian
10. Training Generative Adversarial Networks by Solving Ordinary Differential Equations
Top10 Hype
1. Fourier Neural Operator for Parametric Partial Differential Equations
2. Do Wide and Deep Networks Learn the Same Things? Uncovering How Neural Network Representations Vary with Width and Depth
3. Viewmaker Networks: Learning Views for Unsupervised Representation Learning
4. Large-scale multilingual audio visual dubbing
5. Text-to-Image Generation Grounded by Fine-Grained User Attention
6. Self Normalizing Flows
7. An Attack on InstaHide: Is Private Learning Possible with Instance Encoding?
8. Hyperparameter Ensembles for Robustness and Uncertainty Quantification
9. The geometry of integration in text classification RNNs
10. Scaling Laws for Autoregressive Generative Modeling
② Do Wide and Deep Networks Learn the Same Things? Uncovering How Neural Network Representations Vary with Width and Depth
A key factor in the success of deep neural networks is the ability to scale models, varying architecture depth and width to improve performance. This simple property of neural network design has produced highly effective architectures for a variety of tasks. Nevertheless, our understanding of how depth and width affect the learned representations remains limited, and this paper studies that fundamental question. It begins by investigating how varying depth and width changes a model's hidden representations, and finds a characteristic block structure in the hidden representations of larger-capacity (wider or deeper) models. The authors show that this block structure arises when model capacity is large relative to the size of the training set, and that it indicates the underlying layers are preserving and propagating the dominant principal component of their representations. This finding has important implications for the features learned by different models: representations outside the block structure are often similar across architectures of varying width and depth, whereas the block structure is unique to each model. Analyzing the output predictions of different architectures, the authors also find that even when overall accuracy is similar, wide and deep models exhibit distinctive error patterns and class-by-class variation.
http://arxiv.org/abs/2010.15327v1
Google Research
→ Analyzes models with different widths and depths to examine their respective characteristics.
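One way to surface the block structure described above is to compare hidden representations layer by layer with a similarity measure such as linear centered kernel alignment (CKA) and plot the result as a heatmap, which is the style of analysis this line of work uses. Below is a minimal Python sketch; the random arrays are placeholders for real layer activations extracted from a trained network.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA similarity between two activation matrices.

    X, Y: arrays of shape (n_examples, n_features); the feature
    dimensions of the two layers/models may differ.
    """
    # Center each feature before comparing.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # Squared Frobenius norm of the cross-covariance, normalized by the
    # self-covariance norms; values near 1 mean highly similar representations.
    num = np.linalg.norm(X.T @ Y, ord="fro") ** 2
    den = np.linalg.norm(X.T @ X, ord="fro") * np.linalg.norm(Y.T @ Y, ord="fro")
    return num / den

# Placeholder activations for 8 "layers" of 512 examples x 64 features each;
# in practice these come from forward passes of a trained model.
rng = np.random.default_rng(0)
acts = [rng.normal(size=(512, 64)) for _ in range(8)]
heatmap = np.array([[linear_cka(a, b) for b in acts] for a in acts])
```

In a high-capacity trained model, a large contiguous bright region in this layer-by-layer heatmap is the "block structure" the summary refers to.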
⑤ Teaching with Commentaries
Effective training of deep neural networks can be difficult, and many open questions remain about how best to train these models. Recently developed methods for improving neural network training examine teaching: providing learned information during the training process to improve the performance of the downstream model. This paper takes a step toward broadening the scope of teaching. It proposes a flexible teaching framework based on commentaries, learned meta-information that is helpful for training on a particular task or dataset, and presents efficient, scalable gradient-based methods for learning commentaries that leverage recent work on implicit differentiation. The authors explore a range of uses, from learning weights for individual training examples, to parameterizing label-dependent data augmentation policies, to representing attention masks that highlight salient image regions. Across these settings, they find that commentaries can improve training speed and/or performance, and can provide fundamental insights about the dataset and the training process.
http://arxiv.org/abs/2011.03037v1
Google Research / MIT / University of Toronto
→ A "commentary" model that adds learned teaching information to assist another model's training.
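To make the example-weighting use case concrete, here is a minimal sketch of a commentary as learned per-example weights: an inner step trains the model on the weighted loss, and an outer step updates the weights by backpropagating a validation loss through that inner step. The paper relies on implicit differentiation to make this scale; the sketch below substitutes a single unrolled inner step, and the linear model, random data, and hyperparameters are placeholders.

```python
import torch

torch.manual_seed(0)
x_train, y_train = torch.randn(100, 10), torch.randn(100, 1)  # placeholder data
x_val, y_val = torch.randn(50, 10), torch.randn(50, 1)

theta = torch.zeros(10, 1, requires_grad=True)   # downstream (linear) model
phi = torch.zeros(100, requires_grad=True)       # commentary: per-example weight logits
inner_lr = 0.1
outer_opt = torch.optim.Adam([phi], lr=1e-2)

for step in range(200):
    # Inner step: one gradient step on theta under the commentary-weighted loss.
    weights = torch.sigmoid(phi)
    per_example = ((x_train @ theta - y_train) ** 2).squeeze(1)
    inner_loss = (weights * per_example).mean()
    grad_theta = torch.autograd.grad(inner_loss, theta, create_graph=True)[0]
    theta_new = theta - inner_lr * grad_theta     # kept differentiable w.r.t. phi

    # Outer step: validation loss of the updated model, backpropagated into phi.
    outer_loss = ((x_val @ theta_new - y_val) ** 2).mean()
    outer_opt.zero_grad()
    outer_loss.backward()
    outer_opt.step()

    # Commit the inner update before the next round.
    theta = theta_new.detach().requires_grad_(True)
```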
② Do Wide and Deep Networks Learn the Same Things? Uncovering How Neural Network Representations Vary with Width and Depth
(Duplicates ② in the Recent list; see the summary above.)
⑨ The geometry of integration in text classification RNNs
Despite the wide application of recurrent neural networks (RNNs) to a variety of tasks, a unified understanding of how RNNs solve those tasks remains elusive. In particular, it is unclear what dynamical patterns arise in trained RNNs, and how those patterns depend on the training dataset or task. This work addresses these questions in the context of a specific natural language processing task, text classification. Using tools from dynamical systems analysis, the authors study recurrent networks trained on text classification tasks over both natural and synthetic languages. They find the dynamics of these trained RNNs to be interpretable and low-dimensional: across architectures and datasets, RNNs accumulate evidence for each class as they process the text, using a low-dimensional attractor manifold as the underlying mechanism. Moreover, the dimensionality and geometry of the attractor manifold are determined by the structure of the training dataset; in particular, the paper describes how simple word-count statistics computed on the training set can be used to predict these properties. The observations span multiple architectures and datasets, reflecting a common mechanism that RNNs employ for text classification. To the degree that integration of evidence toward a decision is a common computational primitive, this work lays a foundation for using dynamical systems techniques to study the internal dynamics of RNNs.
http://arxiv.org/abs/2010.15114v1
University of Washington / Google
→ A study that analyzes how RNNs actually solve their tasks.
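As a rough sketch of the first step in such a dynamical-systems analysis, one can collect the hidden-state trajectories of a recurrent text classifier and measure how much of their variance falls in the top few principal components. The untrained GRU and random token batch below are placeholders for a trained classifier and real text, for which the paper reports low-dimensional dynamics.

```python
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)
vocab, emb_dim, hidden = 1000, 64, 128
embedding = nn.Embedding(vocab, emb_dim)
gru = nn.GRU(emb_dim, hidden, batch_first=True)   # placeholder (untrained) RNN

# Placeholder batch of 32 token sequences of length 50.
tokens = torch.randint(0, vocab, (32, 50))
with torch.no_grad():
    outputs, _ = gru(embedding(tokens))           # hidden state at every timestep

# Stack all hidden states and run PCA via SVD on the centered matrix.
h = outputs.reshape(-1, hidden).numpy()
h -= h.mean(axis=0, keepdims=True)
_, s, _ = np.linalg.svd(h, full_matrices=False)
explained = (s[:3] ** 2).sum() / (s ** 2).sum()
print(f"variance captured by top-3 PCs: {explained:.2f}")
```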