
TensorFlow DevSummit 2019 Recap

Shuhei Fujiwara

March 20, 2019

  1. TensorFlow DevSummit 2019 Recap
    What Is the Fundamental Change in TensorFlow v2?
    @shuhei_fujiwara
    2019-03-20
    TensorFlow User Group

  2. Self-Introduction
    藤原秀平 (FUJIWARA Shuhei)
    Twitter: @shuhei_fujiwara
    ▶ Merpay Inc., Machine Learning Team
    ▶ TensorFlow User Group Tokyo Organizer
    ▶ Google Developer Expert (Machine Learning)

  3. Outline
    This talk covers:
    ▶ Major changes in TensorFlow v2

    AutoGraph

    Eager Execution
    ▶ Other interesting topics (if time permits)

  4. TensorFlow v2
    The alpha has been released!
    ▶ Eager execution (define-by-run) by default
    ▶ AutoGraph

    Converts Python statements to a TensorFlow graph
    ▶ API cleanup

    tf.keras becomes the mainstream high-level API

  5. Talk About AutoGraph
    ▶ YouTube

    https://www.youtube.com/watch?v=Up9CvRLIIIw
    ▶ Official document

    https://www.tensorflow.org/alpha/tutorials/eager/tf_function

  6. Define and Run in TensorFlow v1
    Define the computation graph, but do not run it yet.
    [Diagram: constant 1 and constant 2 feed matmul; its result and constant 3 feed add]

  7. Define and Run in TensorFlow v1
    Execute a node with session.run().
    [Diagram: the same graph; running the add node executes it together with its dependencies]

  8. Why Graph?
    We need to calculate gradients easily.
    [Diagram: forward chain x, W, b -> matmul -> Add -> ReLU -> ... -> L, with a mirrored backward chain dMatMul, dAdd, dReLU, ... propagating the seed gradient 1 back into dL/dW and dL/db]
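The backward chain in this diagram can be sketched in a few lines of plain Python (a toy scalar version, not TensorFlow code; the function names and tape layout are invented for illustration): each forward op records its local gradient, and walking the record in reverse yields dL/dW and dL/db.

```python
# Toy scalar reverse-mode autodiff: forward ops record local gradients,
# the backward pass replays them in reverse (illustration only, not TF code).

def forward(w, x, b):
    """Compute L = ReLU(w*x + b) and record local gradients on a tape."""
    z = w * x                             # matmul (scalar stand-in)
    a = z + b                             # Add
    y = max(a, 0.0)                       # ReLU
    tape = [
        ("matmul", x),                    # dz/dw = x
        ("add", 1.0),                     # da/dz = da/db = 1
        ("relu", 1.0 if a > 0 else 0.0),  # dy/da
    ]
    return y, tape

def backward(tape):
    """Propagate the seed gradient 1 backwards through the recorded ops."""
    grad = 1.0                              # dL/dy (seed)
    grad *= tape[2][1]                      # dL/da, through ReLU
    dL_db = grad * tape[1][1]               # dL/db
    dL_dw = grad * tape[1][1] * tape[0][1]  # dL/dw = dL/da * da/dz * dz/dw
    return dL_dw, dL_db

y, tape = forward(w=2.0, x=3.0, b=-1.0)
print(y)               # 5.0
print(backward(tape))  # (3.0, 1.0)
```

Keeping the whole op chain as a data structure is exactly what makes this mechanical, which is why v1 builds a graph up front.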

  9. Eager Execution (Define by Run)
    Listing 1: graph (define-and-run)

    >>> v1 = tf.constant(1)
    >>> v2 = tf.constant(2)
    >>> v3 = v1 + v2
    >>> print(v3)
    Tensor("add:0", shape=(), dtype=int32)
    >>> sess = tf.Session()
    >>> print(sess.run(v3))
    3

    Listing 2: eager (define-by-run)

    >>> v1 = tf.constant(1)
    >>> v2 = tf.constant(2)
    >>> v3 = v1 + v2
    >>> print(v3)
    tf.Tensor(3, shape=(), dtype=int32)
    >>> print(v3.numpy())
    3

  10. Compute Gradient in Eager Mode
    Listing 1: gradient computed inside the tape context

    v = tf.Variable(1.)

    with tf.GradientTape() as tape:
        w = v * v
        x = w * w
    grad = tape.gradient(x, v)

    print(grad)
    # tf.Tensor(4.0, shape=(), dtype=float32)

    Listing 2: w computed outside the tape context

    v = tf.Variable(1.)
    w = v * v

    with tf.GradientTape() as tape:
        x = w * w
    grad = tape.gradient(x, v)

    print(grad)
    # None

    ▶ The computation graph is recorded inside the tf.GradientTape context, so the gradient can only be traced back through operations executed there

  11. Use Optimizer in Custom Training
    model = tf.keras.Sequential([...])
    optim = tf.keras.optimizers.SGD(nesterov=True)

    ...

    with tf.GradientTape() as tape:
        logits = model(x)
        loss = ...
    grads = tape.gradient(loss, model.trainable_variables)
    optim.apply_gradients(zip(grads, model.trainable_variables))
    ▶ https://www.tensorflow.org/alpha/tutorials/quickstart/advanced
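The last step, apply_gradients, boils down to a simple update rule. Here is a toy plain-Python sketch of the plain-SGD case (the Variable class and learning rate are made up for illustration; the real optimizer also handles momentum and Nesterov acceleration):

```python
# Toy sketch of an SGD apply_gradients step (plain Python, not TF code).

class Variable:
    """Hypothetical stand-in for a trainable tf.Variable."""
    def __init__(self, value):
        self.value = value

def apply_gradients(grads_and_vars, lr=0.1):
    # v <- v - lr * grad for each (gradient, variable) pair
    for grad, var in grads_and_vars:
        var.value -= lr * grad

w, b = Variable(1.0), Variable(0.5)
apply_gradients(zip([0.2, -0.4], [w, b]))
print(w.value, b.value)
```

Pairing gradients with variables via zip is the same convention the Keras optimizer API uses.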

  12. Does Eager Execution Resolve All Problems?
    Pros of define-and-run
    ▶ Performance
    ▶ Deployability (no Python runtime required)

    Mobile, TensorFlow Serving, TensorFlow.js, ...
    Cons of define-and-run
    ▶ A little bit difficult

    Hard to debug

    Researchers especially want to iterate by rapid trial and error
    ▶ Hard to implement dynamic networks

  13. Why AutoGraph in TensorFlow v2?
    We want the pros of both define-and-run and define-by-run...
    ▶ Use eager mode by default
    ▶ Convert (compile) a Python function to a TensorFlow graph when needed

  14. tf.function
    ▶ The @tf.function decorator automatically compiles a function into a graph

    if -> tf.cond

    while -> tf.while_loop

    @tf.function
    def abs(x):
        if x >= 0:
            return x
        else:
            return -x
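The if-to-tf.cond transform above can be illustrated with a toy functional conditional in plain Python (a sketch of the idea; this `cond` is invented for illustration, not the TensorFlow API, and AutoGraph's real rewrite is more involved):

```python
# Toy illustration: AutoGraph rewrites a Python `if` into a functional
# conditional, analogous to tf.cond(pred, true_fn, false_fn).

def cond(pred, true_fn, false_fn):
    # Both branches are passed as functions, so a graph builder could
    # trace each one; only one runs per call.
    return true_fn() if pred else false_fn()

def abs_fn(x):
    # The `if x >= 0` from the slide, rewritten in functional form
    return cond(x >= 0, lambda: x, lambda: -x)

print(abs_fn(3), abs_fn(-5))  # 3 5
```

Expressing control flow as functions is what lets both branches become nodes in the compiled graph.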

  15. Performance
    In terms of performance:
    ▶ Functions with lots of small ops: @tf.function is effective

    e.g., an LSTM cell
    ▶ Functions with a few large ops: @tf.function is not effective

    e.g., a convolutional network

  16. Performance
    import timeit

    @tf.function
    def fn_graph(x):
        for i in tf.range(1000):
            x += i
        return x

    def fn_eager(x):
        for i in tf.range(1000):
            x += i
        return x

    time_graph = timeit.timeit(lambda: fn_graph(0), number=10)
    time_eager = timeit.timeit(lambda: fn_eager(0), number=10)

    # time_graph: 0.027 time_eager: 0.082

  17. Eager Mode is Disabled in @tf.function
    @tf.function
    def fn_graph():
        tf.print(tf.executing_eagerly())

    def fn_eager():
        tf.print(tf.executing_eagerly())

    fn_graph()  # False
    fn_eager()  # True

  18. Note for tf.function
    ▶ Needless to say, tf.function can't convert arbitrary Python code

    See the "Capabilities and Limitations" document
    ▶ A tf.Variable inside a tf.function must be created only once

  19. How to Use TensorFlow v2
    ▶ Use eager mode when writing your code
    ▶ Use @tf.function when running your code
    ▶ Goodbye session.run, forever...
    ▶ Nothing to do as long as you use tf.keras

    But you have to know about AutoGraph for custom training

  20. Any Questions?
    If time permits, I will also talk about:
    ▶ TensorFlow Datasets
    ▶ TFX
    ▶ Reinforcement Learning
    ▶ etc...