v = tf.Variable(1.)

with tf.GradientTape() as tape:
    w = v * v
    x = w * w
grad = tape.gradient(x, v)

print(grad)
# tf.Tensor(4.0, shape=(), dtype=float32)

v = tf.Variable(1.)
w = v * v

with tf.GradientTape() as tape:
    x = w * w
grad = tape.gradient(x, v)

print(grad)
# None

▶ The computation graph is recorded inside the tf.GradientTape context, so gradients can only be computed for operations executed within it
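The same rule applies to plain tensors: by default the tape only watches tf.Variable objects, so a tf.constant must be watched explicitly. A minimal sketch (the variable names are illustrative):

import tensorflow as tf

v = tf.constant(1.)

with tf.GradientTape() as tape:
    tape.watch(v)   # constants are not watched automatically
    w = v * v
    x = w * w
grad = tape.gradient(x, v)

print(grad)
# tf.Tensor(4.0, shape=(), dtype=float32)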
▶ Deployability (no Python runtime needed)
  ▶ Mobile, TensorFlow Serving, TensorFlow.js, ...

Cons of define-and-run
▶ A little difficult to write
▶ Hard to debug
  ▶ Researchers especially want to iterate by trial and error quickly
▶ Hard to implement dynamic networks (see the sketch below)
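As an illustration of the last point, here is a hypothetical sketch of data-dependent control flow that is trivial in define-by-run (eager) code but awkward to express in a static define-and-run graph; the layer, loop condition, and step cap are invented for the example:

import tensorflow as tf

dense = tf.keras.layers.Dense(8, activation="relu")

def dynamic_forward(x, max_steps=5):
    # The number of layer applications depends on the data itself,
    # which a fixed define-and-run graph cannot express directly.
    h = dense(x)
    step = 0
    while tf.reduce_mean(h) < 1.0 and step < max_steps:
        h = dense(h)
        step += 1
    return h

out = dynamic_forward(tf.random.normal([4, 8]))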
import timeit
import tensorflow as tf

@tf.function
def fn_graph(x):
    for i in tf.range(1000): x += i
    return x

def fn_eager(x):
    for i in tf.range(1000): x += i
    return x

time_graph = timeit.timeit(lambda: fn_graph(0), number=10)
time_eager = timeit.timeit(lambda: fn_eager(0), number=10)

# time_graph: 0.027  time_eager: 0.082
▶ Use eager execution when writing your code
▶ Use @tf.function when running your code
▶ Goodbye session.run, forever...
▶ Nothing special to do as long as you use tf.keras
  ▶ But you have to know about AutoGraph for custom training (see the sketch below)
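To make the last point concrete, a minimal sketch of a custom training step wrapped in @tf.function; the toy model, loss, optimizer, and data shapes are assumptions for illustration. AutoGraph traces the Python code inside train_step into graph operations:

import tensorflow as tf

# Hypothetical toy setup: a one-layer model on random data.
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
loss_fn = tf.keras.losses.MeanSquaredError()

@tf.function  # AutoGraph converts this Python function into a TensorFlow graph
def train_step(x, y):
    with tf.GradientTape() as tape:
        pred = model(x, training=True)
        loss = loss_fn(y, pred)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

x = tf.random.normal([32, 4])
y = tf.random.normal([32, 1])
loss = train_step(x, y)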