Slide 1

Slide 1 text

Overview of Deep Learning Frameworks and Introduction to Chainer Case Studies
Stapy x RIKEN AIP Open Source Study Group @ RIKEN Center for Advanced Intelligence Project
May 19, 2017
Preferred Networks, Inc.  Kenta Oono  [email protected]

Slide 2

Slide 2 text

Kenta Oono
• twitter: @delta2323_
• Background: mathematics major (master's degree) → 2011.4 joined PFI → 2014.10 PFN
• Areas of responsibility:
  • Bio project
  • Chainer core team
  • Intern and recruiting team

Slide 3

Slide 3 text

Chainer Meetup
June 10, 2017 @ Microsoft Japan, Shinagawa office

Slide 4

Slide 4 text

• Founded: March 2014
• Headquarters: Tokyo; US subsidiary: San Mateo, California
• Employees: approx. 80 (more than 80% are engineers and researchers)
• Business: industrial applications of deep learning, especially industrial robots, automotive, and bio/healthcare
YouTube channel: preferredjp
Factory / Robot / Healthcare / Automotive

Slide 5

Slide 5 text

"Warring states period" of deep learning frameworks

Slide 6

Slide 6 text

Technology stack of a DL framework
• Graphical visualization: DIGITS, TensorBoard
• Machine learning workflow management (dataset preparation, save/load, training loop): Keras, TF-Slim
• Computational graph (CG) management (build/optimize CGs, forward/backward propagation): Theano, TensorFlow, torch.nn
• Multi-dimensional array processing (high-level array manipulation): NumPy, CuPy, Eigen, Torch (core)
• Numerical computation (matrix operations, convolution): BLAS (OpenBLAS, MKL), cuBLAS, cuDNN, MKL-DNN
• Computational device: CPU, GPU, TPU, FPGA

Slide 7

Slide 7 text

http://chainer.org
Mission: Speed up research and development of deep learning and its applications.
Features: Flexible and intuitive description of complex NNs* by
✓ constructing NNs as Python programs
✓ dynamic NN construction
✓ CPU/GPU-agnostic code with CuPy
*NN = neural network

Slide 8

Slide 8 text

Software stack
• Chainer
• CPU backend: NumPy, backed by BLAS / MKL
• GPU backend: CuPy, backed by CUDA / cuDNN on NVIDIA GPUs

Slide 9

Slide 9 text

Define-and-Run (most frameworks): static graph construction first, then data feed.
Define-by-Run (Chainer): the graph is constructed dynamically as data flows through it.
Trade-off: static graphs favor optimization (✓) at the cost of flexibility (△); dynamic graphs favor flexibility (✓) at the cost of optimization (△).
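The Define-by-Run idea can be sketched in plain Python. This is an illustrative toy, not Chainer's actual internals: the point is that the graph is recorded while the forward pass executes, so ordinary Python control flow (if/for) can produce a different graph for each input.

```python
# Minimal Define-by-Run sketch (illustrative, not Chainer's real internals):
# each operation records its inputs at call time, so the graph is a
# by-product of running ordinary Python code.

class Node:
    def __init__(self, value, creator=None, inputs=()):
        self.value = value
        self.creator = creator   # name of the op that produced this node
        self.inputs = inputs     # parent nodes, recorded at call time

def mul(a, b):
    return Node(a.value * b.value, creator="mul", inputs=(a, b))

def add(a, b):
    return Node(a.value + b.value, creator="add", inputs=(a, b))

def forward(x):
    h = Node(x)
    # data-dependent control flow: the recorded graph differs per input
    if x > 0:
        h = mul(h, Node(2.0))
    return add(h, Node(1.0))

def graph_ops(node):
    # walk the recorded graph backwards and collect op names
    ops, stack = [], [node]
    while stack:
        n = stack.pop()
        if n.creator:
            ops.append(n.creator)
            stack.extend(n.inputs)
    return ops

print(graph_ops(forward(3.0)))   # ['add', 'mul'] -- two ops recorded
print(graph_ops(forward(-3.0)))  # ['add'] -- a different graph was built
```

A Define-and-Run framework would instead fix the graph before any data is seen, which is what makes data-dependent control flow awkward there.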

Slide 10

Slide 10 text

Era of dynamic graph frameworks (MinPy)

Slide 11

Slide 11 text

Construct NNs as Python programs

class MLP(Link):

    def __init__(self):
        super(MLP, self).__init__(
            l1=Linear(784, 1000),
            l2=Linear(1000, 1000),
            l3=Linear(1000, 10))

    def __call__(self, x):
        h1 = F.relu(self.l1(x))
        h2 = F.relu(self.l2(h1))
        return self.l3(h2)

(Diagram: x → Linear l1 → ReLU → Linear l2 → ReLU → Linear l3)

Slide 12

Slide 12 text

Release history
• 2015/06: v1.0.0
• 2015/09: v1.3.0 (CuPy)
• 2015/11: v1.5.0 (Link/Chain, CuPy in Cython)
• 2016/06: [MinPy]
• 2016/07: v1.11.0 (Trainer)
• 2017/01: [PyTorch, TensorFlow Fold]
• 2017/02: v2.0.0a
• 2017/04: v2.0.0b
• 2017/05: v1.24.0 (last v1 release)

Slide 13

Slide 13 text

Chainer v2
• First major version-up that breaks backward compatibility.
• Will be released on May 30, 2017.
• Important features (almost fixed):
  • CuPy separation
  • Unified configuration (chainer.config, esp. train mode); the train argument is removed from many functions
  • Variable updated: Parameter class, uninitialized variables; volatile removed
  • Function.retain_inputs and retain_outputs to reduce memory usage
  • New-style parameter/child-link registration (just set them as attributes)
  • UpdateRule customized for each parameter
  • Extension.initialize added; invoke_before_training removed
  • No duplicated memory between the training graph and the evaluation graph
  • Input size made optional in many links (e.g. L.Linear(100))
  • wscale option removed from many links
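The unified-configuration idea (replacing per-function train arguments with a single mode switch, as in chainer.config and chainer.using_config) can be illustrated with a small sketch of the pattern. This is a hypothetical stand-in, not Chainer's actual implementation.

```python
import contextlib
import threading

# Sketch of the unified-config pattern behind chainer.config
# (illustrative stand-in, not Chainer's real implementation).

class Config(threading.local):
    train = True  # default mode; thread-local so threads don't interfere

config = Config()

@contextlib.contextmanager
def using_config(name, value):
    # temporarily override a config entry, restoring it on exit
    old = getattr(config, name)
    setattr(config, name, value)
    try:
        yield
    finally:
        setattr(config, name, old)

def dropout(x, ratio=0.5):
    # instead of a `train` argument, the function consults the shared config
    if not config.train:
        return x                 # evaluation mode: identity
    return x * (1 - ratio)       # stand-in for real random masking

print(dropout(1.0))              # training mode: 0.5
with using_config('train', False):
    print(dropout(1.0))          # evaluation mode: 1.0
```

The benefit is that a whole evaluation pass can be wrapped in one `with` block instead of threading `train=False` through every function call.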

Slide 14

Slide 14 text

Libraries on top of Chainer
• ChainerRL (beta): reinforcement learning
• ChainerMN (v0.1.0): multi-node distributed learning
• ChainerCV (v0.4.5): computer vision

Slide 15

Slide 15 text

ChainerRL
✓ Implements the latest deep reinforcement learning algorithms
✓ Works with OpenAI Gym
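The OpenAI-Gym-style interaction loop that ChainerRL agents plug into can be sketched as follows. DummyEnv and RandomAgent are hypothetical stand-ins for a real Gym environment and a ChainerRL agent (e.g. DQN); only the loop shape matches the real API.

```python
import random

# Gym-style episode loop (sketch). A real Gym env and a ChainerRL agent
# expose the same reset/step and act interfaces used here.

class DummyEnv:
    def __init__(self, horizon=5):
        self.horizon = horizon
    def reset(self):
        self.t = 0
        return 0.0                        # initial observation
    def step(self, action):
        self.t += 1
        obs = float(self.t)
        reward = 1.0 if action == 1 else 0.0
        done = self.t >= self.horizon
        return obs, reward, done, {}      # Gym's (obs, reward, done, info)

class RandomAgent:
    def act(self, obs):
        return random.choice([0, 1])

env, agent = DummyEnv(), RandomAgent()
obs, done, total = env.reset(), False, 0.0
while not done:
    obs, reward, done, _ = env.step(agent.act(obs))
    total += reward
print('episode return:', total)
```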

Slide 16

Slide 16 text

ChainerMN
✓ Distributed deep learning with MPI and NCCL
✓ Approx. 100x speedup with 128 GPUs
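The synchronous data-parallel scheme behind this can be sketched in NumPy: each worker computes a gradient on its own mini-batch shard, then an allreduce averages the gradients so every worker applies the same update. Here the "workers" are simulated in one process; real ChainerMN does the allreduce over MPI/NCCL across nodes and GPUs.

```python
import numpy as np

# In-process simulation of gradient allreduce (illustrative sketch).

def allreduce_mean(grads):
    # average the per-worker gradients and hand every worker a copy
    avg = np.mean(grads, axis=0)
    return [avg.copy() for _ in grads]

n_workers = 4
# stand-in per-worker gradients: worker `rank` holds a vector of `rank`s
local_grads = [np.full(3, float(rank)) for rank in range(n_workers)]

synced = allreduce_mean(local_grads)
print(synced[0])  # [1.5 1.5 1.5] -- identical on every worker
```

Because every worker ends the step with the same averaged gradient, the replicas stay in sync without exchanging parameters.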

Slide 17

Slide 17 text

ChainerCV
✓ Dataset wrappers for well-known CV datasets (CUB, Pascal VOC)
✓ Dataset transformers (random crop, random flip)
✓ Implements typical CV workflows
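The transformer idea (random crop, random horizontal flip) can be sketched directly in NumPy. These are illustrative re-implementations, not ChainerCV's actual functions; images are assumed to be CHW arrays as in Chainer.

```python
import numpy as np

# Data-augmentation transforms in the ChainerCV spirit (NumPy sketch).

def random_crop(img, size):
    # img is CHW; cut a size x size window at a random position
    _, h, w = img.shape
    top = np.random.randint(0, h - size + 1)
    left = np.random.randint(0, w - size + 1)
    return img[:, top:top + size, left:left + size]

def random_flip(img, p=0.5):
    # flip horizontally (reverse the W axis) with probability p
    if np.random.rand() < p:
        return img[:, :, ::-1]
    return img

img = np.arange(3 * 8 * 8).reshape(3, 8, 8).astype(np.float32)
out = random_flip(random_crop(img, 5))
print(out.shape)  # (3, 5, 5)
```

In ChainerCV such transforms are composed into a dataset pipeline so augmentation happens on the fly during training.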

Slide 18

Slide 18 text

Applications of Chainer
• PaintsChainer: line-drawing colorization tool
• PonanzaChainer: shogi AI with deep learning

Slide 19

Slide 19 text

PaintsChainer
https://paintschainer.preferred.tech

Slide 20

Slide 20 text

Ponanza Chainer

Slide 21

Slide 21 text

NumPy-like API accelerated with CUDA

# CPU
x_cpu = numpy.array([1, 2, 3])
l2_cpu = numpy.linalg.norm(x_cpu)

# GPU
x_gpu = cupy.array([1, 2, 3])
l2_gpu = cupy.linalg.norm(x_gpu)

More than 150 NumPy functions are supported.
CuPy will become a project independent of Chainer as of Chainer v2.
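This NumPy compatibility is what enables the CPU/GPU-agnostic code mentioned earlier. cupy.get_array_module is CuPy's real helper for this; the fallback definition below is a NumPy-only stand-in so the sketch also runs on machines without a GPU.

```python
import numpy

# CPU/GPU-agnostic code in the CuPy style: dispatch on the input array's
# type and use the returned module (numpy or cupy) for all math.
try:
    import cupy
    get_array_module = cupy.get_array_module  # real CuPy helper
except ImportError:
    cupy = None
    def get_array_module(*arrays):
        # no CuPy installed: every array is a NumPy array
        return numpy

def l2_norm(x):
    # works unchanged for numpy.ndarray and cupy.ndarray inputs
    xp = get_array_module(x)
    return xp.linalg.norm(x)

print(l2_norm(numpy.array([3.0, 4.0])))  # 5.0
```

The same function body then serves both backends, which is how Chainer itself keeps most of its layer implementations device-agnostic.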

Slide 22

Slide 22 text

Development team (as of May 2017)
• Chainer, CuPy
  • Core development team: approx. 10 members
  • Reviewer team: approx. 10 members
  • Chainer user group: approx. 5 members
• ChainerRL, ChainerMN, ChainerCV: 2-3 members each
• PaintsChainer: approx. 10 members
* Some members overlap.

Slide 23

Slide 23 text

CI
• Travis CI: runs all CPU tests on every PR
• Jenkins: installation test

Slide 24

Slide 24 text

CI (cont.)
• Jenkins daily test (internal): runs all tests with various configurations

Slide 25

Slide 25 text

Community activities
• Chainer meetup (#0 - #4); #5 will be held on June 10, 2017
• Deep learning mini-course @ UCSF: applications to biology
• Google Group (ja, en), Slack (ja, en), Twitter (ja, en)

Slide 26

Slide 26 text

Conclusion
• Chainer is a Python-based deep learning framework that enables flexible and intuitive description of NNs.
• Many libraries and services are being developed on top of Chainer (ChainerRL/MN/CV, PaintsChainer, PonanzaChainer, CuPy).
• Introduced Chainer's development team and user-group teams.

Slide 27

Slide 27 text

Try Chainer
http://chainer.org
Chainer core development team: Seiya Tokui, Kenta Oono, Yuya Unno, Ryosuke Okuta, Brian Vogel, Gentaro Watanabe, Shunta Saito, Daisuke Nishino, and many contributors!
Contact: [email protected]
Google Group: Chainer User Group