Sergey Kibish
November 26, 2016

# Tensor Must Flow (#DEFVESTBY 26.11.2016)

During this talk we will figure out what TensorFlow is and how it works: its key concepts and how data flows through it. We will find out what is hidden behind the magic words "Artificial Neural Network", and what additional tools you can use for visualisation and more.


## Transcript

3. ### TensorFlow

A library for distributed computations. Dedicated to Machine Learning, but it can be used for more than that. Licensed under Apache 2.0.

7. ### Tensor

It is a matrix. It is a 2-dimensional array. It is a 2-dimensional Tensor.

```python
m = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
```
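(An aside, not from the slides.) NumPy makes the rank/shape terminology concrete: the matrix above is a tensor of rank 2.

```python
import numpy as np

# The same 3x3 matrix from the slide, as a NumPy array.
m = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])

print(m.ndim)   # -> 2: a matrix is a 2-dimensional (rank-2) tensor
print(m.shape)  # -> (3, 3)
```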

9. ### Flow

Computations are described with a graph. Nodes are mostly mathematical operations (but not only). Edges are input/output relations between nodes.

11. ### Flow

```python
x = tf.constant([[2., 2.]], name="x")
y = tf.constant([[3.], [3.]], name="y")
out = tf.matmul(x, y, name="mul")
```

13. ### Flow

```
node { name: "x" op: "Const" attr { key: "dtype" value { type: DT_FLOAT } } ... }
node { name: "y" op: "Const" attr { key: "dtype" value { type: DT_FLOAT } } ... }
node { name: "mul" op: "MatMul" input: "x" input: "y" ... }
```

16. ### Session

A graph must be placed in a session to do computation. The session places operations on devices (CPU, GPU).
17. ### Session

```python
x = tf.constant([[2., 2.]], name="x")
y = tf.constant([[3.], [3.]], name="y")
out = tf.matmul(x, y, name="mul")

with tf.Session() as session:
    v = session.run(out)

v  # array([[ 12.]], dtype=float32)
```
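As a sanity check (plain NumPy, not part of the talk), the same product can be computed by hand to confirm the session's result:

```python
import numpy as np

x = np.array([[2., 2.]])    # shape (1, 2)
y = np.array([[3.], [3.]])  # shape (2, 1)
out = x.dot(y)              # (1, 2) x (2, 1) -> (1, 1): 2*3 + 2*3 = 12
print(out)  # [[ 12.]]
```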

19. ### Placeholders and Variables

```python
x = tf.placeholder(dtype=tf.float32, shape=(1, 2), name="x")
y = tf.placeholder(dtype=tf.float32, shape=(2, 1), name="y")
v = tf.Variable([[0.]], dtype=tf.float32, name="out")
out = tf.matmul(x, y, name="mul")
update = tf.assign(v, out, name="update")
init_op = tf.initialize_all_variables()

with tf.Session() as sess:
    sess.run(init_op)
    sess.run(update, feed_dict={x: [[2., 2.]], y: [[3.], [3.]]})
    print(sess.run(v))
# [[ 12.]]
```
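To see what the placeholder/variable/assign pattern buys you, here is a toy stand-in in plain Python and NumPy (an illustration only, not how TensorFlow is implemented): the "graph" is defined once as a function, inputs are fed at run time, and a mutable state plays the role of the variable.

```python
import numpy as np

state = {"v": np.array([[0.]])}  # plays the role of the tf.Variable "out"

def run_update(feed):
    """Feed values for the 'placeholders' x and y, then update the 'variable'."""
    out = feed["x"].dot(feed["y"])  # the "mul" node
    state["v"] = out                # the "update" (assign) node
    return state["v"]

run_update({"x": np.array([[2., 2.]]), "y": np.array([[3.], [3.]])})
print(state["v"])  # [[ 12.]]
```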

24. ### Checkpoint (Save)

```python
saver = tf.train.Saver()
…
with tf.Session() as sess:
    …  # training here
    saver.save(sess, "/tmp/simple_ann.ckpt")
```
25. ### Checkpoint (Restore)

```python
saver = tf.train.Saver()
…
with tf.Session() as sess:
    saver.restore(sess, "/tmp/simple_ann.ckpt")
    …  # training here
```

28. ### Data

| x1 | x2 | y |
|---|---|---|
| 2,557 | 3,676 | 0 |
| 10,567 | 15,323 | 1 |
| 4,873 | 5,212 | 0 |
| … | … | … |

31. ### Simple ANN (1/2)

```python
x = tf.placeholder("float", [None, 2])
y = tf.placeholder("float", [None, 1])
W = tf.Variable(tf.zeros([2, 1]))
b = tf.Variable(tf.ones([1, 1]))
activation = tf.nn.sigmoid(tf.matmul(x, W) + b)
cost = tf.reduce_sum(tf.square(activation - y)) / 100
optimizer = tf.train.RMSPropOptimizer(.01).minimize(cost)
init = tf.initialize_all_variables()
```
32. ### Simple ANN (2/2)

```python
with tf.Session() as sess:
    sess.run(init)
    for i in range(100):
        train_data = sess.run(optimizer, feed_dict={x: X, y: np.reshape(Y, [100, 1])})
    result = sess.run(activation, feed_dict={x: X})

rounded = [round(x) for x in result]
print(rounded)
```
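For intuition, the forward pass of this one-neuron network is just a sigmoid over a weighted sum. A NumPy sketch of the same formula (the sample values and weights below are made up for illustration, not the trained ones):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative values only; same shapes as in the slides.
X = np.array([[2.557, 3.676], [10.567, 15.323]])  # inputs,  [None, 2]
W = np.array([[0.5], [-0.5]])                     # weights, [2, 1]
b = np.ones((1, 1))                               # bias,    [1, 1]

# Mirrors tf.nn.sigmoid(tf.matmul(x, W) + b)
activation = sigmoid(X.dot(W) + b)
rounded = [int(round(float(a[0]))) for a in activation]
print(rounded)  # [1, 0]
```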

39. ### Example (Keras)

```python
model = Sequential()
model.add(Dense(1, input_dim=2, init='uniform', activation='sigmoid'))
model.compile(optimizer='rmsprop', loss='binary_crossentropy')
model.fit(X, Y, nb_epoch=100, batch_size=len(X), verbose=2)
```
40. ### Example (Keras)

```python
predictions = model.predict(X)
rounded = [round(x) for x in predictions]
```