Distributed TensorFlow: Scaling Deep Learning Library
#tensorflow #scale #distributed
mactiendinh
December 28, 2017
Transcript
Distributed TensorFlow
Tien Dinh
TensorFlow: Expressing High-Level ML Computations
• Core in C++
• Very low overhead
• Different front ends for specifying/driving the computation
• Python and C++ today, easy to add more
Computation is a dataflow graph
• Graph of Nodes, called Operations or ops
• Edges are N-dimensional arrays: Tensors
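The slides show this as a diagram; a minimal sketch of the same idea through the Python front end (TensorFlow 1.x API, current at the time of this talk):

import tensorflow as tf

# Nodes (ops) produce and consume Tensors (N-dimensional arrays).
a = tf.constant([[1.0, 2.0]])      # 1x2 tensor
b = tf.constant([[3.0], [4.0]])    # 2x1 tensor
c = tf.matmul(a, b)                # a matmul op; its inputs and output are graph edges

# Building the graph only defines the computation; running a Session executes it.
with tf.Session() as sess:
    print(sess.run(c))             # [[11.]]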
Computation is a dataflow graph WITH STATE
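The deck does not show code for the stateful case; a small sketch, assuming the state is held in tf.Variable objects as in TensorFlow 1.x:

import tensorflow as tf

# State lives in Variables, which persist between Session.run() calls.
counter = tf.Variable(0, name="counter")
increment = tf.assign_add(counter, 1)   # an op that mutates the variable

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(3):
        print(sess.run(increment))      # 1, 2, 3: the graph carries state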
Computation is a dataflow graph Distributed
Computation is a dataflow graph
• Assign Devices to Ops
• TensorFlow inserts Send/Recv Ops to transport tensors across devices
• Recv ops pull data from Send ops
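A sketch of explicit device placement (device names are illustrative, and a GPU is assumed to be available). The Send/Recv pair is not written by the user; TensorFlow inserts it wherever a tensor crosses a device boundary:

import tensorflow as tf

with tf.device("/cpu:0"):
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])

with tf.device("/gpu:0"):
    b = tf.matmul(a, a)    # 'a' travels CPU -> GPU through an implicit Send/Recv pair

# allow_soft_placement falls back to CPU if no GPU is present.
with tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) as sess:
    print(sess.run(b))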
Distributed Training with TensorFlow
Model Parallelism = split model, share data
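A hypothetical sketch of model parallelism: two layers live on different devices, while both see the same input batch (device names and layer sizes are illustrative):

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 784])   # one shared input batch

# Each device owns part of the model.
with tf.device("/gpu:0"):
    w1 = tf.Variable(tf.random_normal([784, 256]))
    h1 = tf.nn.relu(tf.matmul(x, w1))

with tf.device("/gpu:1"):
    w2 = tf.Variable(tf.random_normal([256, 10]))
    logits = tf.matmul(h1, w2)   # h1 crosses the device boundary via Send/Recv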
Distributed Training with TensorFlow
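The transcript does not include the cluster setup; a minimal sketch using the TensorFlow 1.x distributed runtime, with hypothetical host names:

import tensorflow as tf

# Describe the cluster: parameter servers hold shared state, workers run the graph.
cluster = tf.train.ClusterSpec({
    "ps":     ["ps0.example.com:2222"],
    "worker": ["worker0.example.com:2222", "worker1.example.com:2222"],
})

# Each process starts one server for its own job/task.
server = tf.train.Server(cluster, job_name="worker", task_index=0)

# Ops can then be pinned to any task in the cluster.
with tf.device("/job:ps/task:0"):
    global_step = tf.Variable(0, trainable=False, name="global_step")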
Data Parallelism
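A sketch of data parallelism as between-graph replication (an assumption about the setup the slides illustrate): every worker builds the same model on its own data shard, and replica_device_setter places the shared variables on the parameter servers:

import tensorflow as tf

cluster = tf.train.ClusterSpec({"ps": ["ps0:2222"],
                                "worker": ["worker0:2222", "worker1:2222"]})

# Every worker runs this same graph on a different slice of the data;
# the variables are created on the parameter servers and shared by all workers.
with tf.device(tf.train.replica_device_setter(cluster=cluster)):
    x = tf.placeholder(tf.float32, [None, 784])
    y = tf.placeholder(tf.float32, [None, 10])
    w = tf.Variable(tf.zeros([784, 10]))
    b = tf.Variable(tf.zeros([10]))
    loss = tf.losses.softmax_cross_entropy(onehot_labels=y,
                                           logits=tf.matmul(x, w) + b)
    # Each worker computes gradients on its shard and applies them to the shared variables.
    train_op = tf.train.GradientDescentOptimizer(0.5).minimize(loss)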
Distributed training mechanisms
Graph structure and low-level graph primitives (queues) allow us to play with synchronous vs. asynchronous update algorithms.
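The plain optimizer in the previous sketch already gives asynchronous updates, since each worker applies its gradients independently; for synchronous updates TensorFlow 1.x provides tf.train.SyncReplicasOptimizer. A sketch, with a toy loss standing in for a real model:

import tensorflow as tf

# Toy loss standing in for a real model.
w = tf.Variable(0.0)
loss = tf.square(w - 1.0)

# Aggregate gradients from all workers before applying a single update.
opt = tf.train.GradientDescentOptimizer(0.5)
sync_opt = tf.train.SyncReplicasOptimizer(opt,
                                          replicas_to_aggregate=2,
                                          total_num_replicas=2)
train_op = sync_opt.minimize(loss)
# In a real job the training loop is driven by the chief/worker session hooks
# that this optimizer provides.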
Thanks for your attention!