
Building a Hobby Image-Collection Server with TensorFlow: April Issue



Presentation slides for TensorFlow Study Group (3).


ARIYAMA Keiji

April 13, 2016


Transcript

  1. C-LIS CO., LTD. (cifar10.py)

     # local3
     with tf.variable_scope('local3') as scope:
         # Move everything into depth so we can perform a single matrix multiply.
         dim = 1
         for d in pool2.get_shape()[1:].as_list():
             dim *= d
         reshape = tf.reshape(pool2, [FLAGS.batch_size, dim])
         weights = _variable_with_weight_decay('weights', shape=[dim, 384],
                                               stddev=0.04, wd=0.004)
         biases = _variable_on_cpu('biases', [384], tf.constant_initializer(0.1))
         local3 = tf.nn.relu(tf.matmul(reshape, weights) + biases, name=scope.name)
         _activation_summary(local3)

     # local4
     with tf.variable_scope('local4') as scope:
         weights = _variable_with_weight_decay('weights', shape=[384, 192],
                                               stddev=0.04, wd=0.004)
         biases = _variable_on_cpu('biases', [192], tf.constant_initializer(0.1))
         local4 = tf.nn.relu(tf.matmul(local3, weights) + biases, name=scope.name)
         _activation_summary(local4)
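The "move everything into depth" step above flattens the pooled feature map so the fully-connected layer can be a single matrix multiply. A minimal NumPy sketch of the same computation; the `pool2` shape and batch size here are hypothetical stand-ins, not taken from the slides:

```python
import numpy as np

batch_size = 128                              # hypothetical FLAGS.batch_size
pool2 = np.random.rand(batch_size, 6, 6, 64)  # assumed pooled feature map shape

# dim = product of all non-batch dimensions (what the for-loop in the slide computes)
dim = int(np.prod(pool2.shape[1:]))           # 6 * 6 * 64 = 2304
reshape = pool2.reshape(batch_size, dim)      # flatten to [batch, dim]

weights = np.random.randn(dim, 384) * 0.04    # stddev=0.04, as in the slide
biases = np.full(384, 0.1)                    # constant initializer 0.1
local3 = np.maximum(reshape @ weights + biases, 0.0)  # ReLU(x @ W + b)
print(local3.shape)                           # (128, 384)
```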
  2. C-LIS CO., LTD. "This kind of layer is just like a convolutional layer, but without any weight-sharing. That is to say, a different set of filters is applied at every (x, y) location in the input image. Aside from that, it behaves exactly as a convolutional layer." https://code.google.com/p/cuda-convnet/wiki/LayerParams#Locally-connected_layer_with_unshared_weights
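The quoted locally-connected layer can be sketched directly: the only change from a convolution is that the filter bank is indexed by output position. A minimal NumPy sketch (valid padding, stride 1, single image; the function name and shapes are illustrative assumptions):

```python
import numpy as np

def locally_connected(x, filters):
    """x: [H, W, C_in]; filters: [H_out, W_out, kh, kw, C_in, C_out].

    Unlike a convolution, each output position (y, x) has its OWN filter,
    so there is no weight sharing across spatial locations.
    """
    h_out, w_out, kh, kw, c_in, c_out = filters.shape
    out = np.zeros((h_out, w_out, c_out))
    for y in range(h_out):
        for x_pos in range(w_out):
            patch = x[y:y + kh, x_pos:x_pos + kw, :]      # local receptive field
            # contract patch [kh, kw, c_in] with this position's filter
            out[y, x_pos] = np.tensordot(patch, filters[y, x_pos], axes=3)
    return out
```

With shared filters (`filters[y, x]` identical for all positions) this reduces to an ordinary convolution, which is exactly the relationship the quote describes.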
  3. C-LIS CO., LTD. Tried adding more convolution layers & tuned the parameters:

     # conv0
     with tf.variable_scope('conv0') as scope:
         kernel = _variable_with_weight_decay('weights', shape=[32, 32, 3, 32],
                                              stddev=1e-4, wd=0.0)
         conv = tf.nn.conv2d(images, kernel, [1, 1, 1, 1], padding='SAME')
         biases = _variable_on_cpu('biases', [32], tf.constant_initializer(0.0))
         bias = tf.nn.bias_add(conv, biases)
         conv0 = tf.nn.relu(bias, name=scope.name)
         _activation_summary(conv0)

     # pool0
     pool0 = tf.nn.max_pool(conv0, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1],
                            padding='SAME', name='pool0')
     # norm0
     norm0 = tf.nn.lrn(pool0, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75,
                       name='norm0')

     # conv1
     with tf.variable_scope('conv1') as scope:
         kernel = _variable_with_weight_decay('weights', shape=[16, 16, 32, 64],
                                              stddev=1e-4, wd=0.0)
         conv = tf.nn.conv2d(norm0, kernel, [1, 1, 1, 1], padding='SAME')
         biases = _variable_on_cpu('biases', [64], tf.constant_initializer(0.0))
         bias = tf.nn.bias_add(conv, biases)
         conv1 = tf.nn.relu(bias, name=scope.name)
         _activation_summary(conv1)
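The `tf.nn.lrn` call in the pipeline above applies local response normalization across channels: each activation is divided by `(bias + alpha * sum of squares over a depth window) ** beta`. A minimal NumPy sketch of that formula with the same parameters (depth_radius=4, bias=1.0, alpha=0.001/9, beta=0.75), assuming NHWC layout:

```python
import numpy as np

def lrn(x, depth_radius=4, bias=1.0, alpha=0.001 / 9.0, beta=0.75):
    """Local response normalization over the channel axis of an NHWC tensor."""
    channels = x.shape[-1]
    sq = x ** 2
    out = np.empty_like(x)
    for i in range(channels):
        # window of channels centered on i, clipped at the edges
        lo, hi = max(0, i - depth_radius), min(channels, i + depth_radius + 1)
        denom = (bias + alpha * sq[..., lo:hi].sum(axis=-1)) ** beta
        out[..., i] = x[..., i] / denom
    return out
```

Because the denominator grows with the squared activations of neighboring channels, strong responses suppress their neighbors, which is the lateral-inhibition effect LRN was designed to mimic.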
  4. 

  5. C-LIS CO., LTD. Problem: the GPU on the office server machine became unusable.

     $ python3 megane_co/cifar10_train.py
     I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcublas.so locally
     I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcudnn.so locally
     I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcufft.so locally
     I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcuda.so.1 locally
     I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcurand.so locally
     Filling queue with 400 CIFAR images before starting to train. This will take a few minutes.
     E tensorflow/stream_executor/cuda/cuda_driver.cc:481] failed call to cuInit: CUDA_ERROR_NO_DEVICE
  6. C-LIS CO., LTD. Speed comparison:

     CPU:
     2016-04-13 10:40:51.188290: step 0, loss = 11.15 (65.8 examples/sec; 1.946 sec/batch)
     2016-04-13 10:41:10.546296: step 10, loss = 10.67 (74.0 examples/sec; 1.731 sec/batch)
     2016-04-13 10:41:29.646204: step 20, loss = 10.29 (74.0 examples/sec; 1.729 sec/batch)

     GPU:
     2016-03-06 00:42:54.454566: step 90, loss = 10.39 (286.9 examples/sec; 0.446 sec/batch)
     2016-03-06 00:42:59.533350: step 100, loss = 10.32 (290.1 examples/sec; 0.441 sec/batch)
     2016-03-06 00:43:04.834940: step 110, loss = 10.27 (292.9 examples/sec; 0.437 sec/batch)
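The examples/sec figures in these logs follow directly from batch_size / sec_per_batch. Assuming batch_size = 128 (the TensorFlow CIFAR-10 tutorial default, not stated on the slide), the arithmetic reproduces the logged throughput and gives the rough speedup:

```python
batch_size = 128                 # assumed; TF CIFAR-10 tutorial default
cpu = batch_size / 1.946         # matches the 65.8 examples/sec CPU line
gpu = batch_size / 0.441         # matches the 290.1 examples/sec GPU line
print(round(cpu, 1), round(gpu, 1), round(gpu / cpu, 1))  # 65.8 290.2 4.4
```

So on this workload the GPU run is roughly a 4.4x speedup over the CPU run.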