ML Productivity
Short talks on the productivity of machine learning
Beomjun Shin
January 17, 2018
Transcript
ML Productivity Ben (Beomjun Shin) 2018-01-17 (Wed) © Beomjun Shin
Productivity is about not waiting

Time Scales

• Immediate: less than 60 seconds.
• Bathroom break: less than 5 minutes.
• Lunch break: less than 1 hour.
• Overnight: less than 12 hours.

WE MUST ESTIMATE TIME BEFORE RUNNING!
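The estimate-before-running rule can be sketched in plain Python. `estimate_total_time` and `time_scale` are hypothetical helper names, not code from the talk: time a few probe steps, extrapolate, and see which bucket the run lands in before launching it.

```python
import time

def estimate_total_time(step_fn, n_steps, n_probe=3):
    """Time a few probe steps, then extrapolate to the full run."""
    start = time.time()
    for _ in range(n_probe):
        step_fn()
    per_step = (time.time() - start) / n_probe
    return per_step * n_steps

def time_scale(seconds):
    """Map an estimated duration onto the talk's time scales."""
    if seconds < 60:
        return "immediate"
    if seconds < 5 * 60:
        return "bathroom break"
    if seconds < 60 * 60:
        return "lunch break"
    if seconds < 12 * 60 * 60:
        return "overnight"
    return "too long: shrink the experiment"
```

For example, if one training step takes 2 seconds and 10,000 steps are planned, the estimate lands in "overnight" territory before the job is ever launched.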
Productivity == Iteration
import time
import logging
from functools import wraps

import humanfriendly

logger = logging.getLogger(__name__)

class timeit(object):
    """Decorator that logs how long the wrapped function took."""

    def __init__(self, name):
        self.name = name

    def __call__(self, f):
        @wraps(f)
        def wrap(*args, **kw):
            ts = time.time()
            result = f(*args, **kw)
            te = time.time()
            logger.info("%s %s" % (self.name, humanfriendly.format_timespan(te - ts)))
            return result
        return wrap
import contextlib
import time
import logging

from humanfriendly import format_timespan

logger = logging.getLogger(__name__)

@contextlib.contextmanager
def timer(name):
    """Example.
    with timer("Some Routines"):
        routine1()
        routine2()
    """
    start = time.perf_counter()  # time.clock() was removed in Python 3.8
    yield
    end = time.perf_counter()
    duration = end - start
    readable_duration = format_timespan(duration)
    logger.info("%s %s" % (name, readable_duration))
Use Less Data

• Sampled data
• Varied data
• Synthetic data to validate a hypothesis
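A minimal sketch of the "sampled data" idea, assuming an in-memory, list-like dataset; `sample_dataset` is a hypothetical helper. Fixing the seed keeps the sample reproducible across iterations, so two runs debug against the same subset.

```python
import random

def sample_dataset(rows, fraction=0.01, seed=0):
    """Draw a small, reproducible random sample for fast iteration."""
    rng = random.Random(seed)  # fixed seed => same sample every run
    k = max(1, int(len(rows) * fraction))
    return rng.sample(rows, k)
```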
Sublinear Debugging

• Prefer a pre-trained model to training from scratch
• Prefer "proven" (open-sourced) code to coding from scratch
• Prefer "SGD" to "complex" optimization algorithms
Sublinear Debugging

• Log as much as possible:
  • BatchNorm mean/variance tracking over the first N steps
  • Scale of logits and activations
• Rigorously validate data quality, preprocessing, and augmentation
  • Two days of validation is well worth it
• Insert as many assertions as possible
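The assertion idea can be sketched as a pure-Python sanity check on a batch of logits or activations. The function name and the `max_abs` threshold are assumptions for illustration, not code from the talk:

```python
import math

def check_activations(name, values, max_abs=50.0):
    """Assert basic sanity of a batch of values; return (mean, variance)."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    # Fail fast on the two most common silent bugs: NaNs and exploding scale.
    assert all(not math.isnan(v) for v in values), "%s: NaN detected" % name
    assert max(abs(v) for v in values) < max_abs, "%s: scale blew up" % name
    return mean, var
```

Called on the first N steps' logits, this turns a silent divergence into an immediate, named failure instead of an overnight run wasted.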
Linear Feature Engineering

Engineer features for a linear model first, then switch to a more complicated model on the same representation.
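A sketch of that workflow, with hypothetical `engineer_features` / `linear_score` names: the feature map is fixed first and debugged against a cheap linear baseline, and any later, more complex model consumes the same vectors.

```python
def engineer_features(record):
    """Hypothetical feature map: raw record -> fixed-length vector.
    Both the linear baseline and any later model share this representation."""
    return [record["x"], record["x"] ** 2, float(record["flag"])]

def linear_score(features, weights, bias=0.0):
    """The linear baseline: a dot product plus bias."""
    return sum(w * f for w, f in zip(weights, features)) + bias
```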
Flexible Code

• We can sacrifice "code efficiency" for "flexibility"
• Exchange "raw" data between models and preprocessing steps via code
• Unlike an API server, in a machine learning task many assumptions can change
• We should always be prepared to rebuild the whole pipeline from scratch
Reproducible Preprocessing

• Every data preprocessing step will fail on the first iteration
• Let's fall in love with the shell
Shell commands

# Move each directory's files into a subdirectory named dummy;
# mv can't handle too many arguments at once
for x in *; do for xx in $x/*; do command mv $xx $x/dummy; done; done

# Recursively count files in a Linux directory
find $DIR -type f | wc -l

# Remove whitespace from filenames (using shell substitution)
for x in *\ .jpg; do echo $x ${x//\ /}; done

# Remove files from a large directory without hitting argument limits
find . -name '*.mol' -exec rm {} \;

# Kill processes whose command line contains a string
ps -ef | grep [some_string] | grep -v grep | awk '{print $2}' | xargs kill -9

# Parallel ImageMagick preprocessing
ls *.jpg | parallel -j 48 convert {} -crop 240x320+0+0 {} 2> error.log
How many of these commands are you familiar with?

• echo, touch, awk, sed, cat, cut, grep, xargs, find
• wait, background (&), redirect (>)
• ps, netstat
• for, if, function
• parallel, imagemagick (convert)
#!/bin/zsh
set -x
# Kill all child processes on interrupt, termination, or exit
trap 'pkill -P $$' SIGINT SIGTERM EXIT

multitailc () {
    args=""
    for file in "$@"; do
        args+="-cT ANSI $file "
    done
    multitail $args
}

export CUDA_VISIBLE_DEVICES=0
python train.py &> a.log &
export CUDA_VISIBLE_DEVICES=1
python train.py &> b.log &

multitailc *.log
wait
echo "Finish Experiments"
Working Process

1. Prepare "proven" data, model, or idea
2. Validate the data
3. Set up evaluation metrics (at least two)
   • one for model comparison, the other for humans
4. Code, and test whether the model trains "well"
5. Improve the model (iterate)
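Step 3's "two metrics" might look like this in plain Python: log loss as the sensitive metric for ranking models against each other, accuracy as the coarse but immediately interpretable one for humans. These are standard formulas, not code from the talk.

```python
import math

def log_loss(y_true, p_pred, eps=1e-12):
    """Comparison metric: sensitive, good for ranking candidate models."""
    return -sum(
        y * math.log(max(p, eps)) + (1 - y) * math.log(max(1 - p, eps))
        for y, p in zip(y_true, p_pred)
    ) / len(y_true)

def accuracy(y_true, p_pred, threshold=0.5):
    """Human metric: coarse, but everyone understands '67% correct'."""
    return sum(
        int((p >= threshold) == bool(y)) for y, p in zip(y_true, p_pred)
    ) / len(y_true)
```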
Build our best practice

• datawrapper - model - trainer
• data/ folder in the project root
• experiment management
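One way to read the datawrapper - model - trainer split, as a minimal sketch with stand-in class bodies (the class names follow the slide; everything inside them is a placeholder, not the talk's implementation):

```python
class DataWrapper:
    """Owns the data and how it is batched; knows nothing about models."""
    def __init__(self, rows):
        self.rows = rows

    def batches(self, size):
        for i in range(0, len(self.rows), size):
            yield self.rows[i:i + size]

class Model:
    """Owns parameters and the update rule; knows nothing about batching."""
    def __init__(self):
        self.steps = 0

    def update(self, batch):
        self.steps += 1  # stand-in for a real training step

class Trainer:
    """Wires the two together; the only place the loop lives."""
    def __init__(self, data, model):
        self.data, self.model = data, model

    def fit(self, batch_size=2):
        for batch in self.data.batches(batch_size):
            self.model.update(batch)
        return self.model
```

Keeping the loop in one place means a new dataset or a new model swaps in without touching the other two pieces.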
Be aware of ML's technical debt

• Recommended reading: "Machine Learning: The High-Interest Credit Card of Technical Debt" from Google
References

• Productivity is about not waiting
• Machine Learning: The High-Interest Credit Card of Technical Debt
• Patterns for Research in Machine Learning
• Development workflows for Data Scientists