What is Deep Learning?
urakarin
May 02, 2017
Technology
History and introduction of neural networks (Japanese document). Material for an in-house study seminar.
Transcript
What is deep learning?
[email protected]
2017.02.08
What this talk covers and does not cover
• Covered:
  • The mathematical workings of neural networks
  • How initial values are chosen and how models are evaluated
  • A rough sense of parameter counts and computational cost
  • Hot topics
• Not covered:
  • How to use specific tools
  • Derivations of the equations
  • Machine learning other than neural networks
  • The future of AI, such as the singularity
Source: wedge.ismedia.jp
Agenda
• What is deep learning?
• History: from neural networks to deep neural networks
  • The first AI boom
  • The second AI boom
  • The third AI boom
• Application examples
• Summary
What is deep learning?
• Also called 深層学習 in Japanese
• A general term for techniques that build artificial intelligence using Neural Networks (NN), learning algorithms modeled on the workings of the brain (neurons)
• Its distinguishing feature is the use of deep, large-scale network structures
GoogLeNet, 22 layers (ILSVRC 2014)
Relationship between the terms: artificial intelligence (AI) ⊃ machine learning ⊃ neural networks ⊃ deep learning
History of neural networks (timeline, 1950–2020)
• Representative publications: Perceptron (Rosenblatt), SGD (Amari), Neocognitron (Fukushima), Back Propagation (Rumelhart), Boltzmann Machine (Hinton+), Conv. net (LeCun+), Sparse Coding (Olshausen & Field), Deep Learning (Hinton+), together with Big Data, GPUs, and cloud computing
• First AI boom: reasoning and search; first NN winter: linear inseparability (XOR cannot be solved)
• Second AI boom: knowledge representation and expert systems; second NN winter: slow training, overfitting, and the popularity of SVMs
• Third AI boom: machine learning and deep learning
• Representative talent acquisitions: Google acquired DNN Research (Hinton) and DeepMind, Baidu founded the Institute of Deep Learning (Andrew Ng), Facebook founded its AI Research Lab (LeCun), Microsoft acquired Maluuba (Bengio)
University of Toronto, New York University, Université de Montréal
From NN to DNN: Neural Network → Deep Neural Network
The first AI boom
Simple perceptron = a neuron acting as a region classifier
Simple perceptron and the logic gates NAND, AND, OR, XOR
The first winter
• XOR cannot be represented (a single perceptron can only separate linearly separable regions)
The second AI boom
Multilayer perceptron (MLP): input layer, hidden layer, and output layer; the output y is compared with the teacher signal t through an error function
1. Multiple layers
2. Activation functions
3. Error (loss) functions
4. Backpropagation
1. Multiple layers: realizing XOR by combining perceptrons (see the code sketch below)
• s1 = NAND(x1, x2), s2 = OR(x1, x2), y = AND(s1, s2)
• Truth table (x1, x2 → s1, s2 → y):
  0 0 → 1 0 → 0
  1 0 → 1 1 → 1
  0 1 → 1 1 → 1
  1 1 → 0 1 → 0
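The construction above translates directly into code. A minimal sketch (not code from the deck; the particular weights and biases are just one choice that realizes each gate):

```python
def perceptron(x1, x2, w1, w2, b):
    # single perceptron with a step activation: fires if the weighted sum exceeds 0
    return int(w1 * x1 + w2 * x2 + b > 0)

def NAND(x1, x2): return perceptron(x1, x2, -0.5, -0.5, 0.7)
def OR(x1, x2):   return perceptron(x1, x2, 0.5, 0.5, -0.2)
def AND(x1, x2):  return perceptron(x1, x2, 0.5, 0.5, -0.7)

def XOR(x1, x2):
    s1, s2 = NAND(x1, x2), OR(x1, x2)   # the hidden "layer"
    return AND(s1, s2)

for x1, x2 in [(0, 0), (1, 0), (0, 1), (1, 1)]:
    print(x1, x2, "->", XOR(x1, x2))    # 0, 1, 1, 0
```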
2. Activation functions
• An activation function converts the weighted sum of a unit's input signals into its output signal
• The perceptron uses the step function, which cannot be differentiated, so the network cannot be trained with backpropagation
• MLPs therefore use differentiable activations such as the sigmoid function or the hyperbolic tangent (tanh)
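A minimal NumPy sketch of the activation functions named on the slide (not code from the deck):

```python
import numpy as np

def step(x):
    # perceptron activation: zero gradient almost everywhere,
    # no derivative at 0, so unusable with backpropagation
    return (x > 0).astype(np.float64)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    return np.tanh(x)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(step(x))
print(sigmoid(x))
print(tanh(x))
```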
3. Error (loss) functions
• Regression: the squared error E = (1/2) Σ_{n=1}^{N} ||y_n − t_n||²
• Binary classification: maximum-likelihood estimation of the posterior probability p of d = 0/1, i.e. maximize Π_{n=1}^{N} p(d_n | x_n)
• Multi-class classification: represent the teacher signal as a one-hot vector, use the softmax function as the final activation, and use the cross-entropy loss
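A minimal sketch of two of these losses, squared error and softmax + cross-entropy with a one-hot teacher signal (assumed NumPy implementation, not code from the deck):

```python
import numpy as np

def squared_error(y, t):
    # regression loss: E = 1/2 * sum((y - t)^2)
    return 0.5 * np.sum((y - t) ** 2)

def softmax(z):
    z = z - np.max(z)                 # shift for numerical stability
    e = np.exp(z)
    return e / np.sum(e)

def cross_entropy(y, t, eps=1e-12):
    # t is a one-hot teacher signal; eps avoids log(0)
    return -np.sum(t * np.log(y + eps))

z = np.array([2.0, 1.0, 0.1])         # raw scores from the final layer
t = np.array([1.0, 0.0, 0.0])         # one-hot teacher signal
print(squared_error(softmax(z), t), cross_entropy(softmax(z), t))
```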
4. Backpropagation
• The error between the network output y and the teacher signal t is propagated backwards from the output layer through the hidden layers to update the weights
4. Backpropagation: the chain rule
• Example: z = (x + y)² can be decomposed into the two equations z = t² and t = x + y
• The chain rule is a property of the derivative of a composite function: ∂z/∂x = (∂z/∂t)(∂t/∂x)
4. Backpropagation on a computational graph
• Addition node: the upstream gradient ∂L/∂z passes through to both inputs unchanged (multiplied by 1)
• Multiplication node (z = x·y): each input receives the upstream gradient multiplied by the other input, ∂L/∂x = (∂L/∂z)·y and ∂L/∂y = (∂L/∂z)·x
• Concrete example: 2 apples at 100 yen each with 10% consumption tax: 2 × 100 = 200, 200 × 1.1 = 220; propagating a gradient of 1 backwards gives 1.1 for the subtotal, 200 for the tax rate, 2.2 for the apple price, and 110 for the number of apples
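A minimal sketch of the add/multiply nodes and the apple example above, in the style of a computational-graph implementation (not code from the deck):

```python
class MulNode:
    def forward(self, x, y):
        self.x, self.y = x, y
        return x * y
    def backward(self, dout):
        # each input gets the upstream gradient times the *other* input
        return dout * self.y, dout * self.x

class AddNode:
    def forward(self, x, y):
        return x + y
    def backward(self, dout):
        # an addition node passes the upstream gradient through unchanged
        return dout, dout

price, count, tax = 100, 2, 1.1
apple_node, tax_node = MulNode(), MulNode()

subtotal = apple_node.forward(price, count)         # 200
total = tax_node.forward(subtotal, tax)             # 220.0

d_subtotal, d_tax = tax_node.backward(1.0)          # 1.1, 200
d_price, d_count = apple_node.backward(d_subtotal)  # 2.2, 110
print(total, d_price, d_count, d_tax)
```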
Stochastic gradient descent (SGD)
• Mini-batch learning
• Parameter-update rules (optimizers), two of which are sketched below:
  • Momentum
  • AdaGrad
  • Adam
  • RMSProp
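A minimal sketch of the plain mini-batch SGD update and the Momentum variant (not code from the deck):

```python
import numpy as np

def sgd_step(params, grads, lr=0.01):
    # params and grads are dicts of NumPy arrays computed on one mini-batch
    for key in params:
        params[key] -= lr * grads[key]

def momentum_step(params, grads, velocity, lr=0.01, beta=0.9):
    for key in params:
        velocity[key] = beta * velocity[key] - lr * grads[key]
        params[key] += velocity[key]

# toy usage with a single weight matrix
params = {"W": np.random.randn(3, 2)}
grads = {"W": np.ones((3, 2))}          # pretend gradient from a mini-batch
velocity = {"W": np.zeros_like(params["W"])}
sgd_step(params, grads)
momentum_step(params, grads, velocity)
print(params["W"])
```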
The second winter
• Training requires too much computation and is slow
• Networks easily fall into local minima and overfit
• Vanishing gradients
The third AI boom
Countermeasures against local minima and overfitting: Deep Belief Network vs Auto Encoder
Two lineages of layer-wise pre-training + fine-tuning:
• Probabilistic models: Hopfield Network → Boltzmann Machine → Restricted Boltzmann Machine (RBM, reduced computation) → stacked RBMs → Deep Belief Network (DBN)
• Auto encoders: Auto Encoder (AE) → Denoising Auto Encoder (DAE, added robustness) → Stacked Auto Encoder (SAE, multiple layers)
What is a Hopfield Network?
• A network that repeatedly changes its state so that the network's energy is minimized
• It can memorize images: given similar (noisy) data, it recalls the stored memory
• Memory recall can be simulated with matrix computations: http://www.gaya.jp/spiking_neuron/matrix.htm
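The matrix simulation the slide points to can be sketched in a few lines of NumPy (a minimal sketch, not the deck's or the linked page's code): Hebbian weights store two patterns, and asynchronous updates that lower the energy recall a stored pattern from a noisy cue.

```python
import numpy as np

patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1,  1, 1,  1, -1, -1, -1, -1]])   # two stored "memories"
n = patterns.shape[1]
W = (patterns.T @ patterns) / n                         # Hebbian learning
np.fill_diagonal(W, 0)                                  # no self-connections

def recall(x, sweeps=5):
    x = x.copy()
    for _ in range(sweeps):
        for i in range(n):                              # asynchronous updates
            x[i] = 1 if W[i] @ x >= 0 else -1           # each update lowers the energy
    return x

cue = patterns[0].copy()
cue[0] *= -1                                            # flip one bit: a noisy cue
print(recall(cue))                                      # recovers patterns[0]
```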
What is a Boltzmann Machine?
• The introduction of a probabilistic model
• Training minimizes the Kullback–Leibler divergence, i.e. the non-overlapping region (difference) between the probability distribution p given by the actual inputs and the reconstructed distribution q
What is a Restricted Boltzmann Machine (RBM)?
• A Boltzmann Machine restricted to a visible layer (v1, v2, v3) and a hidden layer (h1, h2), which reduces the amount of computation
What is a Deep Belief Network (DBN)?
• A stack of RBMs (visible–hidden layer pairs)
• Trained by pre-training (unsupervised) + fine-tuning (supervised)
What is an Auto Encoder (AE)?
• A network with input, hidden, and output layers trained to reproduce its input at its output
• The learned hidden layer is then adopted as a feature representation
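A minimal sketch of a one-hidden-layer auto encoder trained with plain NumPy to reproduce its input (the toy data and layer sizes are assumptions, not from the deck):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((100, 8))                     # toy data: 100 samples, 8 features

n_in, n_hidden = 8, 3
W1 = rng.normal(0, 0.1, (n_in, n_hidden))    # encoder weights
W2 = rng.normal(0, 0.1, (n_hidden, n_in))    # decoder weights
b1, b2 = np.zeros(n_hidden), np.zeros(n_in)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr = 0.5
for epoch in range(2000):
    h = sigmoid(X @ W1 + b1)                 # encode
    y = sigmoid(h @ W2 + b2)                 # decode (reconstruction)
    # backpropagate the reconstruction error
    dy = (y - X) * y * (1 - y)
    dh = (dy @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ dy / len(X)
    b2 -= lr * dy.mean(axis=0)
    W1 -= lr * X.T @ dh / len(X)
    b1 -= lr * dh.mean(axis=0)

print(np.mean((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - X) ** 2))
```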
What is a Denoising Auto Encoder (DAE)?
• An auto encoder trained to reconstruct the clean input from an input corrupted with noise, which adds robustness
What is a Stacked Auto Encoder (SAE)?
• Several (denoising) auto encoders stacked into multiple layers
• Used for pre-training + fine-tuning, analogous to the DBN
Vanishing gradients
• With sigmoid or tanh activations the derivative is always less than 1, so when the network is deep the gradients vanish as they are propagated backwards
• ReLU (Rectified Linear Unit): a unit is either not firing (output 0) or firing (output equals its input), and firing units pass the gradient through without vanishing
• Signals propagate through only a subset of the units, similar to biological neurons
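Why the gradient vanishes for sigmoid but not for ReLU can be seen by multiplying one activation derivative per layer, as the backward pass does (a minimal sketch, not from the deck):

```python
import numpy as np

def sigmoid_grad(x):
    s = 1.0 / (1.0 + np.exp(-x))
    return s * (1 - s)                   # at most 0.25

def relu_grad(x):
    return float(x > 0)                  # exactly 1 for a firing unit

x, depth = 0.5, 20
print("sigmoid:", np.prod([sigmoid_grad(x)] * depth))  # ~2.6e-13, vanishes
print("relu:   ", np.prod([relu_grad(x)] * depth))     # stays 1.0
```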
Batch Normalization (see the sketch below)
• Transforms the input data of each mini-batch to mean 0 and variance 1
• Inserting it before or after the activation function reduces bias in the data distribution
• Effects:
  • Allows a larger learning rate (training progresses faster)
  • Reduces the dependence on the initial weights
  • Suppresses overfitting
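A minimal sketch of the batch-normalization forward pass described above (training-time statistics only; not code from the deck):

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-7):
    mu = x.mean(axis=0)                     # per-feature mean over the mini-batch
    var = x.var(axis=0)                     # per-feature variance
    x_hat = (x - mu) / np.sqrt(var + eps)   # mean 0, variance 1
    return gamma * x_hat + beta             # learned scale and shift

batch = np.random.randn(32, 4) * 5 + 3      # a skewed mini-batch of 32 samples
out = batch_norm_forward(batch, gamma=np.ones(4), beta=np.zeros(4))
print(out.mean(axis=0), out.var(axis=0))    # ~0 and ~1 per feature
```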
Other countermeasures
• Dropout (DropConnect), which roughly corresponds to ensemble learning (sketch below)
• Regularization:
  • Weight decay (adding an L2-norm penalty to the error function)
  • Sparse regularization
• Data augmentation (noise, translation, rotation, color changes)
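A minimal sketch of (inverted) dropout, the first item above: randomly zero a fraction of activations during training and rescale so that no change is needed at test time (not code from the deck):

```python
import numpy as np

def dropout(h, drop_ratio=0.5, train=True):
    if not train:
        return h                                   # no-op at test time
    mask = np.random.rand(*h.shape) > drop_ratio   # keep each unit with prob 1 - drop_ratio
    return h * mask / (1.0 - drop_ratio)

h = np.random.randn(4, 6)                          # hidden-layer activations
print(dropout(h, drop_ratio=0.5))
```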
How to choose initial values
Initial values of the weight matrices (see the sketch below)
• Setting them to 0? → all weights become identical and redundant, so random initial values are needed
• With sigmoid or tanh activations, the "Xavier initialization" is appropriate: if the previous layer has n nodes, a Gaussian with standard deviation √(1/n)
• With ReLU, the "He initialization" is appropriate: a Gaussian with standard deviation √(2/n)
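A minimal sketch of the two initializations, where n is the number of nodes in the previous layer (not code from the deck):

```python
import numpy as np

def xavier_init(n_in, n_out):
    # Gaussian with std sqrt(1/n_in): suited to sigmoid / tanh activations
    return np.random.randn(n_in, n_out) * np.sqrt(1.0 / n_in)

def he_init(n_in, n_out):
    # Gaussian with std sqrt(2/n_in): suited to ReLU activations
    return np.random.randn(n_in, n_out) * np.sqrt(2.0 / n_in)

print(xavier_init(100, 50).std())   # ~0.10
print(he_init(100, 50).std())       # ~0.14
```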
Hyperparameters
• Number of neurons in each layer
• Batch size
• Learning rate and how it changes over training
• Weight decay
• Dropout ratio
• etc.
Besides the weight and bias parameters, a NN has hyperparameters that a human must set; choosing them involves a lot of trial and error and strongly affects model performance.
• Prepare dedicated validation data; the training and test data must not be used to evaluate hyperparameters
• Sample candidates randomly from a log-scale range, evaluate them, narrow the range, and finally pick one (see the sketch below)
• Dataset split: training data (for learning), test data (for evaluating the learned model), validation data (for evaluating hyperparameters)
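A minimal sketch of random hyperparameter search on a log scale as described above; `evaluate_on_validation` is a hypothetical placeholder standing in for training a model and scoring it on the validation data:

```python
import numpy as np

rng = np.random.default_rng(0)

def evaluate_on_validation(lr, weight_decay):
    # hypothetical placeholder: in practice, train on the training data
    # and return the score measured on the validation data
    return -abs(np.log10(lr) + 3) - abs(np.log10(weight_decay) + 5)

best = None
for _ in range(50):
    lr = 10 ** rng.uniform(-6, -1)              # sample on a log scale
    wd = 10 ** rng.uniform(-8, -3)
    score = evaluate_on_validation(lr, wd)
    if best is None or score > best[0]:
        best = (score, lr, wd)

print("best lr=%.2e, weight_decay=%.2e" % (best[1], best[2]))
```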
Evaluation of predictive performance
Performance evaluation (see the sketch below)
• Hold-out validation: split the dataset once into training data and test data (plus validation data)
• K-fold cross validation: split the data into K parts, use each part once as the test data with the rest as training data, and average the results
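A minimal sketch of K-fold cross validation; `train_and_score` is a hypothetical placeholder for fitting a model on the training folds and scoring it on the held-out fold:

```python
import numpy as np

def train_and_score(train_idx, test_idx):
    # hypothetical placeholder: fit on train_idx, return the score on test_idx
    return len(test_idx) / (len(train_idx) + len(test_idx))

def k_fold_cv(n_samples, k=5, seed=0):
    idx = np.random.default_rng(seed).permutation(n_samples)
    folds = np.array_split(idx, k)
    scores = []
    for i in range(k):
        test_idx = folds[i]                      # each fold is the test set exactly once
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        scores.append(train_and_score(train_idx, test_idx))
    return np.mean(scores)                       # average over the K runs

print(k_fold_cv(100, k=5))
```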
ROC curve and AUC
• TP rate: the proportion of positives correctly judged positive = TP / (TP + FN)
• FP rate: the proportion of negatives wrongly judged positive = FP / (FP + TN)
• ROC: Receiver Operating Characteristic curve; AUC: Area Under the Curve (the area under the ROC curve)
• Confusion matrix (true condition vs predicted condition): TP (true positive), FN (false negative, type II error), FP (false positive, type I error), TN (true negative)
Recall and precision (see the sketch below)
• Recall: the proportion of actual positives that are judged positive = TP / (TP + FN)
• Precision: the proportion of data predicted positive that is actually positive = TP / (TP + FP)
• F-measure: the harmonic mean of precision and recall (the harmonic mean is the reciprocal of the mean of the reciprocals); its maximum roughly coincides with the break-even point
  http://www004.upp.so-net.ne.jp/s_honma/mean/harmony2.htm
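A minimal sketch computing recall, precision, and the F-measure from binary labels (not code from the deck):

```python
import numpy as np

y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 0])

tp = np.sum((y_true == 1) & (y_pred == 1))
fp = np.sum((y_true == 0) & (y_pred == 1))
fn = np.sum((y_true == 1) & (y_pred == 0))

recall = tp / (tp + fn)                 # positives correctly judged positive
precision = tp / (tp + fp)              # predicted positives that are truly positive
f1 = 2 * precision * recall / (precision + recall)   # harmonic mean of the two
print(recall, precision, f1)
```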
Application examples
• Image recognition (CNN)
• Natural language processing and speech recognition (RNN)
• Caption generation for images (CNN + RNN)
• Reinforcement learning (CNN + Q-learning)
• Deep generative models (CNN)
Image recognition
• Convolutional Neural Network (CNN)
• Convolution + pooling layers
• For small images, a conventional fully connected NN is sufficient
Convolution: example filters (see the sketch below)
• Averaging (blur)
• Edges in the left–right direction
• Edges in the up–down direction
• Edges regardless of orientation
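A minimal sketch of a 2D convolution with the kinds of filters listed above (the toy image and filter values are assumptions, not from the deck):

```python
import numpy as np

def conv2d(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

average = np.ones((3, 3)) / 9.0                           # blur filter
lr_edge = np.array([[-1, 0, 1]] * 3)                      # edges in the left-right direction
ud_edge = lr_edge.T                                       # edges in the up-down direction
any_edge = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]])   # edges regardless of orientation

image = np.zeros((8, 8))
image[:, 4:] = 1.0                                        # a vertical step edge
print(conv2d(image, lr_edge))                             # strong response at the edge
print(conv2d(image, ud_edge))                             # all zeros: no up-down edge
```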
Surpassing humans at image classification
• ILSVRC: a large-scale image-recognition competition held since 2010
• At ILSVRC 2012, Hinton's team won by a wide margin using deep learning
• In 2015, ILSVRC results exceeded human recognition performance
Amount of computation
• The performance difference between CPUs and GPUs
• Number of simultaneous (single-precision floating-point) operations:
  • CPU (Intel Core i7): AVX, 256 bit → 8
  • NVIDIA Pascal GP100: 114,688
Natural language processing and speech recognition
• Recurrent Neural Network (RNN)
Reinforcement learning
• CNN + Q-learning + …
Prisma
Prisma
• Many papers apply deep learning to art, but the most fundamental one is Gatys et al. 2016, "Image Style Transfer Using Convolutional Neural Networks"
• The CNN used is VGG19 (pre-trained for image classification) with the fully connected layers removed
Prisma: a content image and a style image
http://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Gatys_Image_Style_Transfer_CVPR_2016_paper.pdf
Prisma
• The loss function is the content loss plus the style loss
• In ordinary training the input is fixed and the weights are updated; here, conversely, the weights are fixed and the optimization updates the input image
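A minimal sketch of the loss structure described above: a content loss on one feature map plus a style loss built from Gram matrices. The feature arrays and the style weight are assumptions standing in for VGG19 activations, not the paper's code:

```python
import numpy as np

def content_loss(f_generated, f_content):
    return 0.5 * np.sum((f_generated - f_content) ** 2)

def gram_matrix(f):
    c, h, w = f.shape                        # channels x height x width
    flat = f.reshape(c, h * w)
    return flat @ flat.T                     # channel-by-channel correlations

def style_loss(f_generated, f_style):
    c, h, w = f_generated.shape
    g, a = gram_matrix(f_generated), gram_matrix(f_style)
    return np.sum((g - a) ** 2) / (4.0 * c ** 2 * (h * w) ** 2)

# hypothetical feature maps that would come from VGG19 layers
f_gen = np.random.randn(16, 8, 8)
f_content = np.random.randn(16, 8, 8)
f_style = np.random.randn(16, 8, 8)

total_loss = content_loss(f_gen, f_content) + 1e3 * style_loss(f_gen, f_style)
print(total_loss)   # gradients of this loss w.r.t. the input image drive the update
```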
Prisma
• Initialization of the generated image: A: the content image, B: the style image, C: four white-noise patterns; the result is almost the same whichever is used
Prisma
FaceApp
FaceApp
• VAE (Variational Autoencoder), CVAE (Conditional VAE), facial VAE
Summary
• "Deep learning" covers a wide variety of techniques and uses
  • Image recognition (CNN), natural language (RNN), deep generative models (VAE, GAN), reinforcement learning (DQN), …
• In terms of approaches, including small applications of other techniques, the field is still a blue ocean
• Roughly 1,500 related papers appeared in the two years 2014–2015
• Combining data that previously could not be fused, such as pictures + sound, words + pictures, or sensors + text (as with CNN + RNN), promises to create new value
References
• Books
  • ゼロから作るDeep Learning ―Pythonで学ぶディープラーニングの理論と実装 http://amzn.asia/2CTyY4U
  • 機械学習のための確率と統計 (機械学習プロフェッショナルシリーズ) http://amzn.asia/5SyEZVV
  • オンライン機械学習 (機械学習プロフェッショナルシリーズ) http://amzn.asia/2kli98b
  • イラストで学ぶ ディープラーニング (KS情報科学専門書) http://amzn.asia/8Kz11LV
  • イラストで学ぶ 機械学習 ―最小二乗法による識別モデル学習を中心に (KS情報科学専門書) http://amzn.asia/6Zlo0pt
  • 深層学習 (機械学習プロフェッショナルシリーズ) http://amzn.asia/hZqrQ2w
  • Chainerによる実践深層学習 http://amzn.asia/5xDfvVJ
  • 実装ディープラーニング http://amzn.asia/7YP7FPh
  • これからの強化学習 http://amzn.asia/gHUDp81
  • ITエンジニアのための機械学習理論入門 http://amzn.asia/7SgiMwN
  • 異常検知と変化検知 (機械学習プロフェッショナルシリーズ) http://amzn.asia/6RC0jbt
  • Pythonによるデータ分析入門 ―NumPy、pandasを使ったデータ処理 http://amzn.asia/4f2ATnL
• URLs / SlideShare / PDFs: (too many to list; omitted)