
20171209 Sakura ML Night

ARIYAMA Keiji
December 09, 2017


Presentation slides from "Sakura Machine Learning Night", held in Osaka on December 9, 2017.

On "Detecting NSFW (Not Safe For Work) images with TensorFlow".



Transcript

  1. C-LIS CO., LTD.
     ARIYAMA Keiji, C-LIS CO., LTD.
     Android app development: I can do it a little.
     Photo by Koji MORIGUCHI (MORIGCHOWDER)
     I have dabbled in machine learning. I am not on Twitter.
  2. Region Cropper annotation (Region Cropper: https://github.com/keiji/region_cropper):

     {
       "generator": "Region Cropper",
       "file_name": "haruki_g17.png",
       "regions": [
         {
           "probability": 1.0,
           "label": 2,
           "rect": { "left": 97.0, "top": 251.0, "right": 285.0, "bottom": 383.0 }
         },
         {
           "probability": 1.0,
           "label": 2,
           "rect": { "left": 536.0, "top": 175.0, "right": 730.0, "bottom": 321.0 }
         }
       ]
     }
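Annotations in this format are plain JSON, so they can be consumed with the standard library alone. A minimal sketch (the `load_regions` helper and its probability threshold are illustrative, not part of Region Cropper):

```python
import json

def load_regions(json_text, min_probability=0.5):
    """Return (left, top, right, bottom) boxes from a Region Cropper annotation."""
    annotation = json.loads(json_text)
    boxes = []
    for region in annotation.get("regions", []):
        # Skip low-confidence regions; the threshold is an arbitrary example.
        if region["probability"] < min_probability:
            continue
        rect = region["rect"]
        boxes.append((rect["left"], rect["top"], rect["right"], rect["bottom"]))
    return boxes
```

Each returned box can then be handed to any image library to crop that region out of the image named in `file_name`.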
  3. The ideal setup (diagram): a Downloader pulls media from the timeline →
     Face Detection → Megane (glasses) Detection → manual review and correction →
     recognition results feed the dataset → training with TensorFlow,
     with data transferred to the training machine via rsync.
  4. Model architecture:
     input 256x256 →
     conv 3x3x64 stride 1 → ReLU → conv 3x3x64 stride 1 → ReLU → bn → max_pool 2x2 stride 2 →
     conv 3x3x128 stride 1 → ReLU → conv 3x3x128 stride 1 → ReLU → bn → max_pool 2x2 stride 2 →
     conv 3x3x256 stride 1 → ReLU → conv 3x3x256 stride 1 → ReLU → bn → max_pool 2x2 stride 2 →
     conv 3x3x64 stride 1 → ReLU → fc 768 → ReLU → output 1 → Sigmoid
  5. # Model definition
     NUM_CLASSES = 1
     NAME = 'model3'
     IMAGE_SIZE = 256
     CHANNELS = 3

     def prepare_layers(image, training=False):
         with tf.variable_scope('inference'):
             conv1 = tf.layers.conv2d(image, 64, [3, 3], [1, 1], padding='SAME',
                                      activation=tf.nn.relu, use_bias=False,
                                      trainable=training, name='conv1_1')
             conv1 = tf.layers.conv2d(conv1, 64, [3, 3], [1, 1], padding='VALID',
                                      activation=tf.nn.relu, use_bias=False,
                                      trainable=training, name='conv1_2')
             conv1 = tf.layers.batch_normalization(conv1, trainable=training, name='bn_1')
             # pool1 (cut off at the slide boundary; shown on the architecture slide)
             pool1 = tf.layers.max_pooling2d(conv1, [2, 2], [2, 2])
  6.          conv2 = tf.layers.conv2d(pool1, 128, [3, 3], [1, 1], padding='VALID',
                                      activation=tf.nn.relu, use_bias=False,
                                      trainable=training, name='conv2_1')
             conv2 = tf.layers.conv2d(conv2, 128, [3, 3], [1, 1], padding='VALID',
                                      activation=tf.nn.relu, use_bias=False,
                                      trainable=training, name='conv2_2')
             conv2 = tf.layers.batch_normalization(conv2, trainable=training, name='bn_2')
             pool2 = tf.layers.max_pooling2d(conv2, [2, 2], [2, 2])
  7.          conv3 = tf.layers.conv2d(pool2, 256, [3, 3], [1, 1], padding='VALID',
                                      activation=tf.nn.relu, use_bias=False,
                                      trainable=training, name='conv4_1')
             conv3 = tf.layers.conv2d(conv3, 256, [3, 3], [1, 1], padding='VALID',
                                      activation=tf.nn.relu, use_bias=False,
                                      trainable=training, name='conv4_2')
             conv3 = tf.layers.batch_normalization(conv3, trainable=training, name='bn_4')
             pool3 = tf.layers.max_pooling2d(conv3, [2, 2], [2, 2])
             conv = tf.layers.conv2d(pool3, 64, [1, 1], [1, 1], padding='VALID',
                                     activation=tf.nn.relu, use_bias=True,
                                     trainable=training, name='conv')
             return conv
  8. def output_layers(prev, batch_size, keep_prob=0.8, training=False):
         flatten = tf.reshape(prev, [batch_size, -1])
         fc1 = tf.layers.dense(flatten, 768, trainable=training,
                               activation=tf.nn.relu, name='fc1')
         # Note: tf.layers.dropout's `rate` is the fraction of units DROPPED,
         # so a keep probability of 0.8 corresponds to rate=1.0 - keep_prob.
         fc1 = tf.layers.dropout(fc1, rate=1.0 - keep_prob, training=training)
         output = tf.layers.dense(fc1, NUM_CLASSES, trainable=training,
                                  activation=None, name='output')
         return output
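The flatten size produced by `tf.reshape` can be checked by tracing spatial dimensions through the stack with TensorFlow's SAME/VALID output-size rules. A plain-Python sketch, assuming the 2x2/stride-2 pool1 shown on the architecture slide (the transcript cuts it off between slides):

```python
def out_size(size, kernel, stride=1, padding='VALID'):
    # TensorFlow output-size rules: SAME -> ceil(n/s), VALID -> (n-k)//s + 1.
    if padding == 'SAME':
        return -(-size // stride)
    return (size - kernel) // stride + 1

size = 256
size = out_size(size, 3, padding='SAME')  # conv1_1 -> 256
size = out_size(size, 3)                  # conv1_2 -> 254
size = out_size(size, 2, 2)               # pool1   -> 127 (assumed)
size = out_size(size, 3)                  # conv2_1 -> 125
size = out_size(size, 3)                  # conv2_2 -> 123
size = out_size(size, 2, 2)               # pool2   -> 61
size = out_size(size, 3)                  # conv4_1 -> 59
size = out_size(size, 3)                  # conv4_2 -> 57
size = out_size(size, 2, 2)               # pool3   -> 28
flatten_units = size * size * 64          # final 1x1 conv keeps 28x28, 64 channels
```

Under these assumptions the flattened feature vector has 50176 entries per image, which fixes the size of fc1's weight matrix.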
  9. # Loss function and optimization algorithm
     def _loss(logits, labels, batch_size, positive_ratio):
         cross_entropy = tf.nn.sigmoid_cross_entropy_with_logits(
             labels=labels, logits=logits)
         loss = tf.reduce_mean(cross_entropy)
         return loss

     def _init_optimizer(learning_rate):
         return tf.train.AdamOptimizer(learning_rate=learning_rate)
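For reference, `tf.nn.sigmoid_cross_entropy_with_logits` is documented to compute the numerically stable form max(x, 0) - x·z + log(1 + exp(-|x|)) for logits x and labels z. A NumPy sketch of that formula (not the slide's code, just the math behind it):

```python
import numpy as np

def sigmoid_cross_entropy_with_logits(labels, logits):
    # Stable form from the TensorFlow docs: max(x,0) - x*z + log(1 + exp(-|x|)).
    x = np.asarray(logits, dtype=np.float64)
    z = np.asarray(labels, dtype=np.float64)
    return np.maximum(x, 0.0) - x * z + np.log1p(np.exp(-np.abs(x)))

logits = np.array([2.0, -1.0, 0.0])
labels = np.array([1.0, 0.0, 1.0])
per_sample = sigmoid_cross_entropy_with_logits(labels, logits)
loss = per_sample.mean()  # mirrors the tf.reduce_mean in _loss above
```

The stable form avoids computing log(sigmoid(x)) directly, which would overflow for large negative logits.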
  10. # Hard Negative Mining
      def _hard_negative_mining(loss, labels, batch_size):
          positive_count = tf.reduce_sum(labels)
          positive_count = tf.reduce_max((positive_count, 1))
          negative_count = positive_count * HARD_SAMPLE_MINING_RATIO
          negative_count = tf.reduce_max((negative_count, 1))
          negative_count = tf.reduce_min((negative_count, batch_size))
          positive_losses = loss * labels
          negative_losses = loss - positive_losses
          top_negative_losses, _ = tf.nn.top_k(negative_losses,
                                               k=tf.cast(negative_count, tf.int32))
          loss = (tf.reduce_sum(positive_losses / positive_count)
                  + tf.reduce_sum(top_negative_losses / negative_count))
          return loss
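The selection logic above can be traced in NumPy to see what it does: every positive sample's loss is kept, while only the k hardest negatives contribute, with k tied to the positive count and clamped to the batch size. `HARD_SAMPLE_MINING_RATIO` is not defined on the slides; 3 below is an assumed illustrative value.

```python
import numpy as np

HARD_SAMPLE_MINING_RATIO = 3  # assumed value; the slide does not define it

def hard_negative_mining(loss, labels, batch_size):
    # Keep all positives; keep only the top-k hardest negatives,
    # with k = ratio * (number of positives), clamped to [1, batch_size].
    positive_count = max(float(labels.sum()), 1.0)
    negative_count = min(max(positive_count * HARD_SAMPLE_MINING_RATIO, 1.0),
                         float(batch_size))
    k = int(negative_count)
    positive_losses = loss * labels
    negative_losses = loss - positive_losses
    top_negative_losses = np.sort(negative_losses)[::-1][:k]
    return (positive_losses.sum() / positive_count
            + top_negative_losses.sum() / negative_count)

loss = np.array([0.5, 2.0, 0.1, 3.0])
labels = np.array([1.0, 0.0, 0.0, 0.0])
mined = hard_negative_mining(loss, labels, batch_size=4)
```

With one positive and ratio 3, the easiest negative (loss 0.0 after masking) is ignored entirely, which keeps an imbalanced batch from being dominated by easy negatives.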
  11. 

  12. 

  13. 

  14. # Labels (tags)
      TAGS = [
          'original_art', 'nsfw', 'like', 'photo', 'illust', 'comic', 'face',
          'girl', 'megane', 'school_uniform', 'blazer_uniform', 'sailor_uniform',
          'gl', 'kemono', 'boy', 'bl', 'cat', 'dog', 'food', 'dislike',
      ]
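Because the class model ends in a per-tag sigmoid rather than a softmax, each image's target is a 20-element multi-hot vector and tags are not mutually exclusive. A sketch (the `to_multi_hot` helper is hypothetical, not from the slides):

```python
TAGS = ['original_art', 'nsfw', 'like', 'photo', 'illust', 'comic', 'face',
        'girl', 'megane', 'school_uniform', 'blazer_uniform', 'sailor_uniform',
        'gl', 'kemono', 'boy', 'bl', 'cat', 'dog', 'food', 'dislike']

def to_multi_hot(image_tags):
    # One independent 0/1 entry per tag; several tags may be set at once.
    return [1.0 if tag in image_tags else 0.0 for tag in TAGS]

label = to_multi_hot({'illust', 'girl', 'megane'})
```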
  15. Class-determination model:
      input 256x256 →
      conv 3x3x64 stride 1 → ReLU → conv 3x3x64 stride 1 → ReLU → bn → max_pool 2x2 stride 2 →
      conv 3x3x128 stride 1 → ReLU → conv 3x3x128 stride 1 → ReLU → bn → max_pool 2x2 stride 2 →
      conv 3x3x256 stride 1 → ReLU → conv 3x3x256 stride 1 → ReLU → bn → max_pool 2x2 stride 2 →
      conv 3x3x64 stride 1 → ReLU → fc 768 → ReLU → output 20 → Sigmoid
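At inference time, a multi-label model like this one is typically read out by thresholding each sigmoid output independently. A sketch (the helper and the 0.5 threshold are illustrative, not from the slides):

```python
import math

def predict_tags(logits, tags, threshold=0.5):
    # A tag is predicted when its sigmoid probability exceeds the threshold.
    sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))
    return [tag for tag, logit in zip(tags, logits) if sigmoid(logit) > threshold]
```

Since sigmoid(x) > 0.5 exactly when x > 0, this is equivalent to checking the sign of each logit at a 0.5 threshold.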