What is Deep Learning?
urakarin
May 02, 2017
Technology
(Japanese document) A history of and introduction to neural networks; slides for an internal company seminar.
Transcript
What is Deep Learning?
[email protected]
2017.02.08
What this talk does and does not cover
• Covered:
• The mathematical workings of neural networks
• How initial values are chosen, and how models are evaluated
• A feel for the volume of parameters and computation
• Hot topics
• Not covered:
• How to use specific tools
• Detailed derivations of the equations
• Machine learning other than neural networks
• The future of AI, such as the singularity
Source: wedge.ismedia.jp
Agenda
• What is deep learning?
• History
• From neural networks to deep neural networks
• The first AI boom
• The second AI boom
• The third AI boom
• Application examples
• Summary
What is deep learning?
• Also called 深層学習 (deep learning).
• A general term for techniques that build artificial intelligence using neural networks (NN), learning algorithms modeled on the workings of neurons.
• Its distinguishing feature is a deep, large-scale network structure.
GoogLeNet, 22 layers (ILSVRC 2014)
Relationship between the terms: artificial intelligence (AI) ⊃ machine learning ⊃ neural networks ⊃ deep learning
History of neural networks (timeline, 1950-2020)
• Representative publications: Perceptron (Rosenblatt), SGD (Amari), Neocognitron (Fukushima), Boltzmann Machine (Hinton+), Back Propagation (Rumelhart), Conv. net (LeCun+), Sparse Coding (Olshausen & Field), Deep Learning (Hinton+).
• Breakthroughs and winters: the first NN winter (linear inseparability, XOR cannot be solved) and the second NN winter (overfitting and the popularity of SVMs, among others).
• AI booms: the first AI boom (reasoning and search), the second AI boom (knowledge representation, expert systems), the third AI boom (machine learning and deep learning), enabled by Big Data, GPUs, and cloud computing.
• Representative talent acquisitions: Google acquires DNN Research (Hinton), Google acquires DeepMind, Baidu founds the Institute of Deep Learning (Andrew Ng), Facebook founds its AI Research Lab (LeCun), Microsoft acquires Maluuba (Bengio).
The University of Toronto, New York University, and the University of Montreal.
From NN to DNN: from the Neural Network to the Deep Neural Network.
The first AI boom
The simple perceptron: a neuron as a region classifier.
The simple perceptron and the logic gates NAND, AND, OR, XOR.
The first winter
• A single-layer perceptron cannot represent XOR (a minimal sketch of what it can represent follows below).
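To make the region-classifier view concrete, here is a minimal Python sketch (not from the slides) of a single-layer perceptron implementing AND, OR, and NAND; the weights and biases are illustrative choices.

def perceptron(x1, x2, w1, w2, b):
    # Fire (output 1) when the weighted sum crosses 0, otherwise output 0.
    return int(w1 * x1 + w2 * x2 + b > 0)

def AND(x1, x2):  return perceptron(x1, x2, 0.5, 0.5, -0.7)
def OR(x1, x2):   return perceptron(x1, x2, 0.5, 0.5, -0.2)
def NAND(x1, x2): return perceptron(x1, x2, -0.5, -0.5, 0.7)

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2, AND(x1, x2), OR(x1, x2), NAND(x1, x2))
    # No choice of (w1, w2, b) makes this single unit compute XOR: its decision
    # boundary is a single straight line, and XOR is not linearly separable.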
The second AI boom
The multi-layer perceptron (MLP)
(Diagram: inputs x1..x4 with weights w0..w4, a hidden layer, outputs y1..y3, a teacher signal t, and an error function.)
1. Multiple layers
2. Activation functions
3. Error functions
4. Backpropagation
1. Multiple layers
Adding a layer realizes XOR: with s1 = NAND(x1, x2) and s2 = OR(x1, x2), the output is y = AND(s1, s2).
x1 x2 | s1 s2 | y
 0  0 |  1  0 | 0
 1  0 |  1  1 | 1
 0  1 |  1  1 | 1
 1  1 |  0  1 | 0
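Continuing in the same spirit, a self-contained sketch of the two-layer construction in the truth table above; the gate weights are the same illustrative values as in the earlier sketch.

def perceptron(x1, x2, w1, w2, b):
    return int(w1 * x1 + w2 * x2 + b > 0)

def NAND(x1, x2): return perceptron(x1, x2, -0.5, -0.5, 0.7)
def OR(x1, x2):   return perceptron(x1, x2, 0.5, 0.5, -0.2)
def AND(x1, x2):  return perceptron(x1, x2, 0.5, 0.5, -0.7)

def XOR(x1, x2):
    s1 = NAND(x1, x2)   # first (hidden) layer
    s2 = OR(x1, x2)
    return AND(s1, s2)  # second (output) layer

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2, XOR(x1, x2))   # 0, 1, 1, 0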
2. Activation functions
The activation function converts the weighted sum of the input signals into the output signal.
• A perceptron uses the step function, which cannot be differentiated, so the network cannot be trained by backpropagation.
• Differentiable functions such as the sigmoid function and the hyperbolic tangent are used instead.
(Diagram: inputs x1..x4 weighted by w1..w4 plus a bias w0 are summed by Σ and passed through f.)
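A small NumPy sketch (illustrative, not from the slides) of the step function used by the perceptron and the differentiable alternatives mentioned above.

import numpy as np

def step(x):
    return (x > 0).astype(np.float64)   # perceptron activation; derivative is 0 almost everywhere

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))     # smooth and differentiable

def tanh(x):
    return np.tanh(x)                   # hyperbolic tangent, also differentiable

x = np.array([-2.0, -0.5, 0.5, 2.0])
print(step(x), sigmoid(x), tanh(x))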
3. Error (loss) functions
• Regression: use the squared error E = (1/2) Σ_{n=1}^{N} ||y_n − t_n||² between the outputs y and the teacher signals t.
• Binary classification: perform maximum-likelihood estimation of the posterior probability p of d = 0/1, i.e. maximize Π_{n=1}^{N} p(d_n | x_n).
• Multi-class classification: encode the teacher signal as a one-hot vector, make the final activation the softmax function, and use the cross-entropy error function.
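A minimal sketch of the squared error and of softmax plus cross-entropy for one-hot teacher signals, assuming NumPy; the sample vectors are illustrative.

import numpy as np

def squared_error(y, t):
    # (1/2) * ||y - t||^2, used for regression
    return 0.5 * np.sum((y - t) ** 2)

def softmax(a):
    a = a - np.max(a)                 # shift for numerical stability
    e = np.exp(a)
    return e / np.sum(e)

def cross_entropy(y, t):
    # t is a one-hot teacher vector; the small constant avoids log(0)
    return -np.sum(t * np.log(y + 1e-7))

y = softmax(np.array([0.3, 2.9, 4.0]))
t = np.array([0.0, 0.0, 1.0])         # one-hot teacher signal
print(squared_error(y, t), cross_entropy(y, t))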
4. Backpropagation
(Diagram: the same MLP as before, with the error between the output y and the teacher signal t propagated backwards through the weights.)
4. Backpropagation: the chain rule
For example, the expression z = (x + y)² is composed of the two expressions z = t² and t = x + y.
The chain rule is a property of the derivative of a composite function:
∂z/∂x = (∂z/∂t)(∂t/∂x)
so here ∂z/∂x = 2t · 1 = 2(x + y).
4. Backpropagation through computation-graph nodes
• Addition node: the upstream gradient ∂L/∂z is passed through unchanged (multiplied by 1) to both inputs.
• Multiplication node: each input receives the upstream gradient multiplied by the other input, i.e. ∂L/∂z · y and ∂L/∂z · x.
Concrete example (number of apples and consumption tax): forward pass 100 × 2 = 200, then 200 × 1.1 = 220; starting the backward pass from 1, the gradient with respect to the subtotal is 1.1, to the tax is 200, to the apple price is 2.2, and to the number of apples is 110.
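A minimal sketch of these two node types and of the apple example above; the class names are illustrative, not part of the slides.

class MulNode:
    def forward(self, x, y):
        self.x, self.y = x, y
        return x * y
    def backward(self, dout):
        # the gradients swap the inputs: dL/dx = dout * y, dL/dy = dout * x
        return dout * self.y, dout * self.x

class AddNode:
    def forward(self, x, y):
        return x + y
    def backward(self, dout):
        # the upstream gradient passes through unchanged
        return dout, dout

# apple price 100, two apples, consumption tax 1.1
mul_apple, mul_tax = MulNode(), MulNode()
subtotal = mul_apple.forward(100, 2)            # 200
total = mul_tax.forward(subtotal, 1.1)          # 220

dsubtotal, dtax = mul_tax.backward(1.0)         # 1.1, 200
dprice, dcount = mul_apple.backward(dsubtotal)  # 2.2, 110
print(total, dprice, dcount, dtax)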
Stochastic gradient descent (SGD)
• Mini-batch training
• Ways of updating the learning rate / the parameters (a sketch of SGD and Momentum follows below):
• Momentum
• AdaGrad
• Adam
• RMSProp
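A minimal sketch of plain SGD and the Momentum variant from the list above; the hyperparameter values are illustrative defaults, not taken from the slides.

import numpy as np

class SGD:
    def __init__(self, lr=0.01):
        self.lr = lr
    def update(self, params, grads):
        for key in params:
            params[key] -= self.lr * grads[key]

class Momentum:
    def __init__(self, lr=0.01, momentum=0.9):
        self.lr, self.momentum, self.v = lr, momentum, {}
    def update(self, params, grads):
        for key in params:
            v = self.momentum * self.v.get(key, 0.0) - self.lr * grads[key]
            self.v[key] = v          # keep a velocity term across updates
            params[key] += v

params = {"W": np.array([1.0, 2.0])}
grads = {"W": np.array([0.1, -0.2])}
Momentum().update(params, grads)
print(params["W"])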
The second winter
• Far too much computation
• Prone to local optima and overfitting
• Vanishing gradients
The third AI boom
Deep Belief Networks vs. Auto Encoders: countermeasures against local optima and overfitting.
(Overview diagram of the two lineages of unsupervised pre-training:
• Probabilistic models: Hopfield Network → Boltzmann Machine → Restricted Boltzmann Machine (RBM, reduced computation) → stacked RBMs → Deep Belief Network (DBN), trained by pre-training + fine tuning.
• Auto encoders: Auto Encoder (AE, 自己符号化器) → Denoising Auto Encoder (DAE, added robustness) → stacked → Stacked Auto Encoder (SAE), also trained by pre-training + fine tuning.
Each shows visible/hidden or input/hidden/output layers.)
What is a Hopfield Network?
• A network that repeatedly changes its state so that the network's energy is minimized.
• A network that has memorized images (memory 1, memory 2, ...) recalls a memory when it is given data close to it.
• Let's simulate this memory with matrix calculations: http://www.gaya.jp/spiking_neuron/matrix.htm
(The lineage overview diagram is repeated alongside this and the following slides.)
What is a Boltzmann Machine?
• Introduces a probabilistic model.
• Training minimizes the Kullback-Leibler divergence: for the two distributions, the region where they differ (diverge) rather than overlap is minimized, i.e. the integral of the difference between the probability p given by the actual inputs and the reconstructed distribution q.
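A small sketch of the Kullback-Leibler divergence between two discrete distributions, assuming NumPy; p and q below are illustrative.

import numpy as np

def kl_divergence(p, q):
    # D_KL(p || q) = sum_i p_i * log(p_i / q_i); tiny epsilons avoid log(0)
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.sum(p * np.log((p + 1e-12) / (q + 1e-12)))

p = np.array([0.7, 0.2, 0.1])   # distribution given by the actual inputs
q = np.array([0.5, 0.3, 0.2])   # distribution reconstructed by the model
print(kl_divergence(p, q))      # 0 only when the two distributions coincide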
What is a Restricted Boltzmann Machine (RBM)?
• A Boltzmann machine restricted to a bipartite structure of visible units (v1, v2, v3) and hidden units (h1, h2), which reduces the amount of computation.
What is a Deep Belief Network (DBN)?
• RBMs stacked into multiple visible/hidden stages, trained by pre-training (unsupervised) + fine tuning (supervised).
What is an Auto Encoder (AE)?
• A network of input, hidden, and output layers (自己符号化器, self-encoder) trained to reproduce its own input; the trained hidden representation is then adopted.
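A toy NumPy sketch of a single-hidden-layer auto encoder trained to reproduce its input; the sizes, data, and learning rate are illustrative, not values from the slides.

import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 8, 3
W1 = rng.normal(0, 0.1, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, n_in)); b2 = np.zeros(n_in)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

X = rng.random((100, n_in))              # toy training data
lr = 0.5
for _ in range(2000):
    h = sigmoid(X @ W1 + b1)             # encode
    y = sigmoid(h @ W2 + b2)             # decode (reconstruction)
    dy = (y - X) * y * (1 - y)           # gradient of the squared reconstruction error
    dh = (dy @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ dy / len(X); b2 -= lr * dy.mean(axis=0)
    W1 -= lr * X.T @ dh / len(X); b1 -= lr * dh.mean(axis=0)

recon = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print(np.mean((recon - X) ** 2))         # reconstruction error after training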
What is a Denoising Auto Encoder (DAE)?
• An auto encoder trained to reconstruct the original input from an input corrupted with noise, which adds robustness.
What is a Stacked Auto Encoder (SAE)?
• Auto encoders stacked in multiple stages and trained by pre-training + fine tuning.
Vanishing gradients
• With the sigmoid function and the hyperbolic tangent, the derivative is small; when the network is deep, the gradient fades away...
ReLU (Rectified Linear Unit)
• With the sigmoid and hyperbolic tangent, the gradient vanishes when the network is deep.
• A ReLU unit is either not firing (output 0) or firing (it passes its input straight through), so it fires without the gradient vanishing, much like a nerve cell either fires or does not.
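A small sketch contrasting the gradients of ReLU and the sigmoid, assuming NumPy; it shows why ReLU lets deep networks keep their gradients.

import numpy as np

def relu(x):
    return np.maximum(0, x)              # 0 when not firing, x when firing

def relu_grad(x):
    return (x > 0).astype(np.float64)    # exactly 1 wherever the unit fires

def sigmoid_grad(x):
    s = 1.0 / (1.0 + np.exp(-x))
    return s * (1 - s)                   # at most 0.25, so stacked layers shrink it

x = np.array([-3.0, -0.5, 0.5, 3.0])
print(relu(x), relu_grad(x), sigmoid_grad(x))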
Batch Normalization
• Transforms the input data of each mini-batch so that it has mean 0 and variance 1.
• Inserting it before (or after) the activation function reduces the bias in the data distribution.
• Effects:
• The learning rate can be made larger (training proceeds faster)
• Less dependence on the initial weight values
• Suppresses overfitting
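A minimal sketch of the normalization step (training-time statistics only; gamma and beta are the usual learnable scale and shift), assuming NumPy.

import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-7):
    # Normalize each feature of the mini-batch to mean 0 and variance 1,
    # then scale and shift with gamma and beta.
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

batch = np.random.randn(32, 100) * 5 + 3       # a skewed, spread-out mini-batch
normed = batch_norm(batch)
print(normed.mean(axis=0)[:3], normed.std(axis=0)[:3])   # roughly 0 and 1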
Other techniques
• DropOut (and Drop Connect): corresponds to ensemble learning (a sketch follows below)
• Regularization:
• Weight decay (add the L2 norm to the error function)
• Sparse regularization
• Data augmentation (noise, translation, rotation, color)
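A minimal sketch of DropOut, assuming NumPy; each training pass drops a random subset of units, which is what gives it its ensemble-like flavor.

import numpy as np

def dropout(x, ratio=0.5, train=True):
    if train:
        mask = np.random.rand(*x.shape) > ratio   # a different thinned network each pass
        return x * mask
    return x * (1.0 - ratio)                      # scale at test time instead of dropping

x = np.ones((2, 6))
print(dropout(x))
print(dropout(x, train=False))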
How to choose initial values
Initial values of the weight matrices
• Set them to 0? → The weights become uniform and the nodes end up duplicating each other.
• Random initial values are necessary.
• When the activation function is the sigmoid or tanh, the "Xavier initialization" is suitable: with n nodes in the previous layer, use a Gaussian with standard deviation √(1/n).
• When ReLU is used, the "He initialization" is suitable: with n nodes in the previous layer, use a Gaussian with standard deviation √(2/n).
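A minimal sketch of the two initializations, assuming NumPy; the layer sizes are illustrative.

import numpy as np

def xavier_init(n_prev, n_next):
    # sigmoid / tanh: Gaussian with standard deviation sqrt(1 / n_prev)
    return np.random.randn(n_prev, n_next) * np.sqrt(1.0 / n_prev)

def he_init(n_prev, n_next):
    # ReLU: Gaussian with standard deviation sqrt(2 / n_prev)
    return np.random.randn(n_prev, n_next) * np.sqrt(2.0 / n_prev)

W1 = xavier_init(784, 100)
W2 = he_init(100, 10)
print(W1.std(), W2.std())   # roughly sqrt(1/784) and sqrt(2/100)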
Hyperparameters
Besides the weight and bias parameters, a NN has hyperparameters that a human must set; choosing them takes a lot of trial and error and strongly affects the model's performance:
• The number of neurons in each layer
• The batch size
• The learning rate and how it changes
• Weight decay
• The DropOut ratio
• and so on
How to tune them (a sampling sketch follows below):
• Prepare a dedicated validation set.
• Do not use the training data or the test data for this evaluation.
• Sample randomly from a log-scale range, evaluate, narrow the range, and finally pick one value.
Dataset = training data (for training) + test data (for evaluating the trained model) + validation data (for evaluating hyperparameters).
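A minimal sketch of random sampling on a log scale; the ranges and the train_and_validate helper are hypothetical placeholders, not part of the slides.

import numpy as np

for trial in range(5):
    lr = 10 ** np.random.uniform(-6, -2)            # learning rate in [1e-6, 1e-2]
    weight_decay = 10 ** np.random.uniform(-8, -4)  # L2 coefficient in [1e-8, 1e-4]
    # score = train_and_validate(lr, weight_decay)  # hypothetical helper: fit on the
    #                                               # training data, score on validation data
    print(f"trial {trial}: lr={lr:.2e}, weight_decay={weight_decay:.2e}")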
Evaluating predictive performance
Performance evaluation
• Hold-out validation: split the dataset once into training data and test data (plus validation data).
• K-fold cross validation: split the data into K folds and use each fold once as the test data while training on the rest.
(Diagram: rows of training / test / validation splits.)
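A minimal sketch of K-fold index splitting, assuming NumPy; scikit-learn's KFold does the same job, but the plain version shows the idea.

import numpy as np

def k_fold_indices(n_samples, k=5, seed=0):
    # Shuffle the indices and split them into k folds; each fold serves once as
    # the test set while the remaining folds form the training set.
    idx = np.random.default_rng(seed).permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train_idx, test_idx

for train_idx, test_idx in k_fold_indices(20, k=5):
    print(len(train_idx), len(test_idx))   # 16 4, five times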
ROC curves and AUC
• TP rate: the proportion of positives judged positive.
• FP rate: the proportion of negatives judged positive.
• ROC: Receiver Operating Characteristic curve.
• AUC: Area Under the Curve (the area under the ROC curve).
Confusion matrix:
                    Predicted positive    Predicted negative
True positive       TP                    FN (type II error)
True negative       FP (type I error)     TN
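A minimal sketch of the confusion-matrix counts and the two rates, assuming NumPy 0/1 label arrays; the sample labels are illustrative.

import numpy as np

def tp_fp_rates(y_true, y_pred):
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    tp_rate = tp / (tp + fn)   # positives judged positive
    fp_rate = fp / (fp + tn)   # negatives judged positive
    return tp_rate, fp_rate

y_true = np.array([1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0])
print(tp_fp_rates(y_true, y_pred))   # sweeping a decision threshold traces the ROC curve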
Recall, precision, and the F-measure
• Recall: the proportion of actual positives judged positive.
• Precision: among the data predicted positive, the proportion that is actually positive.
• F-measure: the harmonic mean of precision and recall (the harmonic mean is the reciprocal of the mean of the reciprocals; see http://www004.upp.so-net.ne.jp/s_honma/mean/harmony2.htm); its maximum roughly coincides with the break-even point.
(Confusion matrix as above: TP, FN (type II error), FP (type I error), TN.)
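Building on the same confusion-matrix counts, a minimal sketch of recall, precision, and the F-measure as their harmonic mean; the labels are the same illustrative arrays as above.

import numpy as np

def precision_recall_f(y_true, y_pred):
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    precision = tp / (tp + fp)    # predicted positives that are truly positive
    recall = tp / (tp + fn)       # actual positives that were found
    f_measure = 2 * precision * recall / (precision + recall)   # harmonic mean
    return precision, recall, f_measure

y_true = np.array([1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0])
print(precision_recall_f(y_true, y_pred))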
Application examples
• Image recognition (CNN)
• Natural language processing and speech recognition (RNN)
• Caption generation for images (CNN + RNN)
• Reinforcement learning (CNN + Q-learning)
• Deep generative models (CNN)
Image recognition
• Convolutional Neural Network (CNN)
• Convolution + Pooling
Convolution
• For small images, the fully connected NNs described so far are good enough.
Convolution: example filters
• An averaging filter, a filter for edges in the left-right direction, a filter for edges in the up-down direction, and a filter for edges regardless of orientation, applied to the image with the convolution (*) operation.
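A naive NumPy sketch of applying such filters with a "valid" 2-D convolution (implemented, as in most deep-learning libraries, as cross-correlation); the filters and image are illustrative.

import numpy as np

def conv2d(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

average = np.ones((3, 3)) / 9.0               # averaging filter
lr_edge = np.array([[-1, 0, 1]] * 3)          # responds to left-right intensity changes
ud_edge = lr_edge.T                           # responds to up-down intensity changes

image = np.random.rand(8, 8)
print(conv2d(image, average).shape, conv2d(image, lr_edge).shape)   # (6, 6) twice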
Image classification has surpassed humans
• ILSVRC: a large-scale image recognition competition that began in 2010.
• At ILSVRC 2012, Prof. Hinton's team won overwhelmingly with deep learning.
• In 2015, the ILSVRC results surpassed human recognition performance.
Amount of computation
• The performance difference between CPUs and GPUs.
• Number of simultaneous (single-precision floating-point) operations:
• CPU (Intel Core i7): AVX 256 bit → 8
• NVIDIA Pascal GP100: 114,688
Natural language processing and speech recognition
• Recurrent Neural Network (RNN)
Reinforcement learning
• CNN + Q-learning + …
Prisma
Prisma
• Many papers on art applications of deep learning have appeared, but the most fundamental one is Gatys et al. 2016, "Image Style Transfer Using Convolutional Neural Networks".
• The CNN used is VGG19 (pre-trained for image classification) with the fully connected layers removed.
Prisma
• A content image and a style image. http://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Gatys_Image_Style_Transfer_CVPR_2016_paper.pdf
Prisma
• The loss function is the content loss plus the style loss.
• In ordinary optimization the input is fixed and the weights are updated; here it is the reverse: the weights are fixed and the input image is updated (a toy sketch follows below).
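To make "the weights are fixed and the input image is updated" concrete, here is a toy sketch in which a single fixed random linear map stands in for the frozen VGG19 features; everything here (sizes, weights, data) is illustrative and far simpler than the real method.

import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(0, 0.1, (64, 16))       # frozen stand-in for the network's features
content = rng.random(64)               # flattened "content image" (toy data)
style = rng.random(64)                 # flattened "style image" (toy data)

def features(x):
    return W.T @ x

def gram(f):
    return np.outer(f, f)              # Gram matrix: the style statistics

x = rng.random(64)                     # start from noise and update only x
f_c, G_s = features(content), gram(features(style))
alpha, beta, lr = 1.0, 1e-3, 0.1
for _ in range(500):
    f_x = features(x)
    grad_content = 2 * W @ (f_x - f_c)                    # d/dx ||f(x) - f(content)||^2
    grad_style = 4 * W @ ((gram(f_x) - G_s) @ f_x)        # d/dx ||G(x) - G(style)||^2_F
    x -= lr * (alpha * grad_content + beta * grad_style)  # the image, not W, is updated

print(np.sum((features(x) - f_c) ** 2))   # content loss after optimization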
Prisma
• Initial value of the generated image: A: the content image, B: the style image, C: four white-noise patterns. The result is that the choice makes almost no difference.
Prisma
FaceApp
FaceApp
• VAE (Variational Autoencoder), CVAE (Conditional VAE), Facial VAE
Summary
• "Deep learning" is a single phrase that covers a wide variety of techniques and uses:
• image recognition (CNN), natural language (RNN), deep generative models (VAE, GAN), reinforcement learning (DQN), …
• From small twists on other techniques onwards, the space of approaches is still a blue ocean.
• About 1,500 related papers appeared in the two years 2014-2015.
• Combinations such as CNN + RNN (pictures + sound, words + pictures, sensors + text, ...) suggest that fusing data that could not be fused before will create new value.
References
• Books
• ゼロから作るDeep Learning ―Pythonで学ぶディープラーニングの理論と実装 http://amzn.asia/2CTyY4U
• 機械学習のための確率と統計 (機械学習プロフェッショナルシリーズ) http://amzn.asia/5SyEZVV
• オンライン機械学習 (機械学習プロフェッショナルシリーズ) http://amzn.asia/2kli98b
• イラストで学ぶ ディープラーニング (KS情報科学専門書) http://amzn.asia/8Kz11LV
• イラストで学ぶ 機械学習 ―最小二乗法による識別モデル学習を中心に (KS情報科学専門書) http://amzn.asia/6Zlo0pt
• 深層学習 (機械学習プロフェッショナルシリーズ) http://amzn.asia/hZqrQ2w
• Chainerによる実践深層学習 http://amzn.asia/5xDfvVJ
• 実装ディープラーニング http://amzn.asia/7YP7FPh
• これからの強化学習 http://amzn.asia/gHUDp81
• ITエンジニアのための機械学習理論入門 http://amzn.asia/7SgiMwN
• 異常検知と変化検知 (機械学習プロフェッショナルシリーズ) http://amzn.asia/6RC0jbt
• Pythonによるデータ分析入門 ―NumPy、pandasを使ったデータ処理 http://amzn.asia/4f2ATnL
• URLs / SlideShare / PDFs: (too many to list; omitted)