
Tensor Must Flow (#DEFVESTBY 26.11.2016)

Sergey Kibish
November 26, 2016


During this talk we will figure out what TensorFlow is and how it works: its key concepts and how it flows. We will find out what is hidden behind the magic words "Artificial Neural Network", and what additional tools you can use for visualisation and more.


Transcript

  1. TensorFlow is a library for distributed computations. It is dedicated to Machine Learning, but can be used for more than that. Licensed under Apache 2.0.
  2. Who uses it? Google: RankBrain (search ranking on google.com), SmartReply (automatic email responses), On-Device Vision for OCR (real-time translation).
  3. Tensor. It is a matrix; it is a 2-dimensional array; it is a 2-dimensional Tensor.
     m = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
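The matrix from the slide can be sketched in plain NumPy to show what "2-dimensional" means for a tensor, its rank (`ndim`) and shape:

```python
import numpy as np

# The 2-D tensor (matrix) from the slide as a NumPy array.
m = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])

print(m.ndim)   # rank of the tensor: 2
print(m.shape)  # (3, 3)
```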
  4. Flow. Computations are described with a graph. Nodes are mostly mathematical operations (but not only); edges are the input/output relations between nodes.
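The node/edge idea can be sketched in a few lines of pure Python. This is an illustrative toy, not the TensorFlow API: each `Node` holds an operation, and the `inputs` tuple plays the role of the graph's edges.

```python
# Minimal sketch of a dataflow graph (illustrative names, not TF API):
# a node holds an operation, edges are references to input nodes.
class Node:
    def __init__(self, name, op, inputs=()):
        self.name, self.op, self.inputs = name, op, inputs

    def run(self):
        # Evaluate the input edges first, then apply this node's op.
        return self.op(*[n.run() for n in self.inputs])

x = Node("x", lambda: 2.0)
y = Node("y", lambda: 3.0)
mul = Node("mul", lambda a, b: a * b, (x, y))
print(mul.run())  # 6.0
```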
  5. Flow.
     node { name: "x" op: "Const" attr { key: "dtype" value { type: DT_FLOAT } } ... }
     node { name: "y" op: "Const" attr { key: "dtype" value { type: DT_FLOAT } } ... }
     node { name: "mul" op: "MatMul" input: "x" input: "y" ... }
  6. Session. A graph must be placed in a Session to do computation. The Session places operations on devices (CPU, GPU).
  7. Session.
     x = tf.constant([[2., 2.]], name="x")
     y = tf.constant([[3.], [3.]], name="y")
     out = tf.matmul(x, y, name="mul")
     with tf.Session() as session:
         v = session.run(out)
     v  # array([[ 12.]], dtype=float32)
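The result on the slide can be checked with plain NumPy: a (1, 2) matrix times a (2, 1) matrix gives a (1, 1) matrix, 2·3 + 2·3 = 12.

```python
import numpy as np

x = np.array([[2., 2.]])    # shape (1, 2)
y = np.array([[3.], [3.]])  # shape (2, 1)
out = x @ y                 # matrix product, shape (1, 1)
print(out)  # [[12.]], matching session.run(out) on the slide
```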
  8. x = tf.placeholder(dtype=tf.float32, shape=(1, 2), name="x")
     y = tf.placeholder(dtype=tf.float32, shape=(2, 1), name="y")
     v = tf.Variable([[0.]], dtype=tf.float32, name="out")
     out = tf.matmul(x, y, name="mul")
     update = tf.assign(v, out, name="update")
     init_op = tf.initialize_all_variables()
     with tf.Session() as sess:
         sess.run(init_op)
         sess.run(update, feed_dict={x: [[2., 2.]], y: [[3.], [3.]]})
         print sess.run(v)  # [[ 12.]]
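The placeholder/variable pattern can be sketched in plain Python, as an analogy rather than the TF API: placeholders become function arguments fed at call time, and the variable is mutable state that the update step overwrites (the `state`/`update` names are illustrative).

```python
import numpy as np

# Plain-Python analogue of the slide: the "variable" is state
# initialised to zero, "update" assigns the matmul result into it.
state = {"out": np.array([[0.]])}   # tf.Variable analogue

def update(x, y):
    state["out"] = x @ y            # tf.assign analogue
    return state["out"]

update(np.array([[2., 2.]]), np.array([[3.], [3.]]))
print(state["out"])  # [[12.]]
```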
  9. Checkpoint (Save).
     saver = tf.train.Saver()
     …
     with tf.Session() as sess:
         … # training here
         saver.save(sess, "/tmp/simple_ann.ckpt")
  10. Checkpoint (Restore).
      saver = tf.train.Saver()
      …
      with tf.Session() as sess:
          saver.restore(sess, "/tmp/simple_ann.ckpt")
          … # training here
  11. Data.
      x1      x2      y
      2,557   3,676   0
      10,567  15,323  1
      4,873   5,212   0
      …       …       …
  12. Simple ANN (1/2).
      x = tf.placeholder("float", [None, 2])
      y = tf.placeholder("float", [None, 1])
      W = tf.Variable(tf.zeros([2, 1]))
      b = tf.Variable(tf.ones([1, 1]))
      activation = tf.nn.sigmoid(tf.matmul(x, W) + b)
      cost = tf.reduce_sum(tf.square(activation - y)) / 100
      optimizer = tf.train.RMSPropOptimizer(.01).minimize(cost)
      init = tf.initialize_all_variables()
  13. Simple ANN (2/2).
      with tf.Session() as sess:
          sess.run(init)
          for i in range(100):
              sess.run(optimizer, feed_dict={x: X, y: np.reshape(Y, [100, 1])})
          result = sess.run(activation, feed_dict={x: X})
          rounded = [round(r[0]) for r in result]  # r, not x: avoid shadowing the placeholder
          print rounded
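A single forward pass of this one-neuron network can be reproduced in NumPy: activation = sigmoid(xW + b), cost = sum of squared errors divided by the batch size. The two toy rows below are illustrative, not the talk's dataset.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy batch (illustrative values), shapes matching the slide's placeholders.
X = np.array([[2.557, 3.676], [10.567, 15.323]])  # shape (N, 2)
Y = np.array([[0.], [1.]])                        # shape (N, 1)
W = np.zeros((2, 1))   # tf.zeros([2, 1])
b = np.ones((1, 1))    # tf.ones([1, 1])

# With W = 0, every pre-activation is b = 1, so sigmoid gives ~0.731.
activation = sigmoid(X @ W + b)
cost = np.sum((activation - Y) ** 2) / len(X)
print(activation.shape, cost)
```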
  14. Example (Keras).
      model = Sequential()
      model.add(Dense(1, input_dim=2, init='uniform', activation='sigmoid'))
      model.compile(optimizer='rmsprop', loss='binary_crossentropy')
      model.fit(X, Y, nb_epoch=100, batch_size=len(X), verbose=2)