Distributed TensorFlow: Scaling Deep Learning Library
#tensorflow #scale #distributed
mactiendinh
December 28, 2017
Transcript
Distributed TensorFlow
Tien Dinh
TensorFlow: Expressing High-Level ML Computations
• Core in C++: very low overhead
• Different front ends for specifying/driving the computation: Python and C++ today, easy to add more
Computation is a dataflow graph
• Graph of nodes, called Operations or ops
• Edges are N-dimensional arrays: Tensors
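As a hedged illustration, assuming the TensorFlow 1.x Python API that was current at the time of this talk, a minimal graph looks like this: the ops are the nodes, the tensors between them are the edges, and nothing is computed until a session runs the graph.

import tensorflow as tf

# Nodes (ops): two constants and a matmul; the edges are the tensors between them.
a = tf.constant([[1.0, 2.0]])      # 1x2 tensor
b = tf.constant([[3.0], [4.0]])    # 2x1 tensor
c = tf.matmul(a, b)                # op consuming a and b, producing a 1x1 tensor

# Building the graph computes nothing; a session executes it.
with tf.Session() as sess:
    print(sess.run(c))             # [[11.]]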
Computation is a dataflow graph WITH STATE
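A sketch of the same idea with state, under the same TF 1.x assumption: a tf.Variable is a stateful node whose value persists across session.run calls, and assign ops mutate it inside the graph.

import tensorflow as tf

# A Variable is a stateful node: its value lives in the runtime across run() calls.
counter = tf.Variable(0, name="counter")
increment = tf.assign_add(counter, 1)   # op that mutates the variable's state

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(3):
        print(sess.run(increment))      # 1, 2, 3: state persists between runs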
Computation is a dataflow graph, distributed
• Assign devices to ops
• TensorFlow inserts Send/Recv ops to transport tensors across devices
• Recv ops pull data from Send ops
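A small placement sketch, assuming a machine with a CPU and one GPU: pinning ops to devices with tf.device creates a cross-device edge, and it is on that edge that TensorFlow inserts the Send/Recv pair automatically.

import tensorflow as tf

# Pin ops to devices; the tensor crossing the boundary is moved by implicit Send/Recv ops.
with tf.device("/cpu:0"):
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.constant([[1.0], [1.0]])

with tf.device("/gpu:0"):              # assumes a GPU is present
    c = tf.matmul(a, b)                # consumes tensors produced on the CPU

config = tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)
with tf.Session(config=config) as sess:
    print(sess.run(c))                 # placement of each op is logged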
Distributed Training with TensorFlow
Model Parallelism = split model, share data
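Read in TF 1.x terms, "split model, share data" can be sketched as placing different layers of one model on different (here hypothetical) GPUs, so every example flows through both devices:

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 784])    # the shared input data

# Model parallelism: layer 1 lives on gpu:0, layer 2 on gpu:1.
with tf.device("/gpu:0"):
    w1 = tf.Variable(tf.random_normal([784, 256]))
    h1 = tf.nn.relu(tf.matmul(x, w1))

with tf.device("/gpu:1"):
    w2 = tf.Variable(tf.random_normal([256, 10]))
    logits = tf.matmul(h1, w2)   # activations, not the data set, cross the device boundary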
Distributed Training with TensorFlow
Data Parallelism
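The slides give only the title here, so the following is a hedged sketch of the data-parallel setup that TF 1.x documents as between-graph replication with parameter servers; the host names, ports, and task indices are made up for illustration.

import tensorflow as tf

# Hypothetical cluster: one parameter server and two workers, each worker running
# this script with its own job_name/task_index.
cluster = tf.train.ClusterSpec({
    "ps":     ["ps0.example.com:2222"],
    "worker": ["worker0.example.com:2222", "worker1.example.com:2222"],
})
server = tf.train.Server(cluster, job_name="worker", task_index=0)

# Variables are placed on the ps job, compute ops stay on this worker; each worker
# feeds a different slice of the data (data parallelism).
with tf.device(tf.train.replica_device_setter(
        cluster=cluster, worker_device="/job:worker/task:0")):
    x = tf.placeholder(tf.float32, [None, 784])
    y = tf.placeholder(tf.float32, [None, 10])
    w = tf.Variable(tf.zeros([784, 10]))
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=tf.matmul(x, w)))
    train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)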
Distributed training mechanisms
Graph structure and low-level graph primitives (queues) allow us to play with synchronous vs. asynchronous update algorithms.
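A sketch of that choice under the same TF 1.x assumptions: with asynchronous updates each replica applies its gradients independently, while wrapping the optimizer in tf.train.SyncReplicasOptimizer aggregates gradients from all replicas into one synchronous step.

import tensorflow as tf

opt = tf.train.GradientDescentOptimizer(0.01)

# Asynchronous: each worker simply calls opt.minimize(loss) and applies gradients
# as soon as they are ready, so updates can interleave and go stale.

# Synchronous: aggregate gradients from all replicas before applying a single update.
sync_opt = tf.train.SyncReplicasOptimizer(
    opt, replicas_to_aggregate=2, total_num_replicas=2)   # 2 workers assumed
# train_op = sync_opt.minimize(loss, global_step=global_step)
# hook = sync_opt.make_session_run_hook(is_chief=True)    # pass to MonitoredTrainingSession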
Thanks for your attention!