ML Productivity
short talks on productivity of machine learning
Beomjun Shin
January 17, 2018
Transcript
ML Productivity Ben (Beomjun Shin) 2018-01-17 (Wed) © Beomjun Shin
Productivity is about not waiting
Time Scales
• Immediate: less than 60 seconds
• Bathroom break: less than 5 minutes
• Lunch break: less than 1 hour
• Overnight: less than 12 hours
WE MUST ESTIMATE TIME BEFORE RUNNING!
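The advice to estimate time before running can be sketched as a small helper: time a few steps, extrapolate, and map the result onto the slide's waiting categories. `estimate_runtime`, `step_fn`, and the exact thresholds are illustrative assumptions, not part of the talk.

```python
import time

def estimate_runtime(step_fn, total_steps, sample_steps=3):
    """Time a few steps and extrapolate the length of the full run."""
    start = time.perf_counter()
    for _ in range(sample_steps):
        step_fn()
    per_step = (time.perf_counter() - start) / sample_steps
    return per_step * total_steps

def time_scale(seconds):
    """Map a duration onto the talk's waiting categories."""
    if seconds < 60:
        return "immediate"
    if seconds < 5 * 60:
        return "bathroom break"
    if seconds < 60 * 60:
        return "lunch break"
    if seconds < 12 * 60 * 60:
        return "overnight"
    return "too long: shrink the experiment"

# Example: a fake training step that takes about a millisecond
estimate = estimate_runtime(lambda: time.sleep(0.001), total_steps=10_000)
print(time_scale(estimate))
```

Running this before `python train.py` on the full dataset tells you whether you are waiting through a bathroom break or an overnight run.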
Productivity == Iteration
```python
import logging
import time
from functools import wraps

import humanfriendly

logger = logging.getLogger(__name__)

class timeit(object):
    """Decorator that logs how long the wrapped function takes."""
    def __init__(self, name):
        self.name = name

    def __call__(self, f):
        @wraps(f)
        def wrap(*args, **kw):
            ts = time.time()
            result = f(*args, **kw)
            te = time.time()
            logger.info("%s %s" % (self.name, humanfriendly.format_timespan(te - ts)))
            return result
        return wrap
```
```python
import contextlib
import logging
import time

from humanfriendly import format_timespan

logger = logging.getLogger(__name__)

@contextlib.contextmanager
def timer(name):
    """Example:
        with timer("Some Routines"):
            routine1()
            routine2()
    """
    start = time.perf_counter()  # time.clock() was removed in Python 3.8
    yield
    duration = time.perf_counter() - start
    logger.info("%s %s" % (name, format_timespan(duration)))
```
Use Less Data
• Sampled data
• Various data
• Synthetic data to validate hypotheses
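One way to use synthetic data to validate a hypothesis is to generate a tiny dataset with a known rule and confirm the pipeline recovers it; if it cannot, the bug is in the code, not the data. This is a minimal sketch: `make_synthetic` and the `x > 0.5` rule are assumptions for illustration.

```python
import random

def make_synthetic(n=200, seed=0):
    """Tiny synthetic dataset with a known rule: label = 1 iff x > 0.5."""
    rng = random.Random(seed)
    xs = [rng.random() for _ in range(n)]
    ys = [1 if x > 0.5 else 0 for x in xs]
    return xs, ys

def accuracy(predict, xs, ys):
    """Fraction of examples where predict(x) matches the label."""
    return sum(predict(x) == y for x, y in zip(xs, ys)) / len(ys)

xs, ys = make_synthetic()
# A predictor matching the generating rule must score exactly 1.0 here;
# anything less means the evaluation code itself is broken.
assert accuracy(lambda x: 1 if x > 0.5 else 0, xs, ys) == 1.0
```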
Sublinear Debugging
• Prefer a pre-trained model to training from scratch
• Prefer "proven" (open-sourced) code to coding from scratch
• Prefer "SGD" to "complex" optimization algorithms
Sublinear Debugging
• Log as much as possible:
  • BatchNorm mean/variance tracking for the first N steps
  • Scale of logits and activations
• Rigorous validation of data quality, preprocessing, and augmentation
  • 2 days spent on validation is worth it
• Insert as many assertions as possible
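The "insert assertions" advice can be sketched as cheap invariants run on every batch during the first iterations. `check_batch` and the expected value ranges are assumptions for [0, 1]-normalized inputs; adjust them to your own preprocessing.

```python
import math

def check_batch(images, labels, num_classes):
    """Cheap invariants for a batch; fail loudly before wasting GPU hours.
    The [0, 1] range below is an assumption for normalized images."""
    assert len(images) == len(labels), "image/label count mismatch"
    for img in images:
        assert all(math.isfinite(v) for v in img), "NaN/Inf in input"
        assert all(0.0 <= v <= 1.0 for v in img), "input outside [0, 1]"
    assert all(0 <= y < num_classes for y in labels), "label out of range"

# A well-formed toy batch passes silently; a malformed one raises.
check_batch(images=[[0.0, 0.5], [1.0, 0.25]], labels=[0, 1], num_classes=2)
```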
Linear Feature Engineering
• Engineer features for a linear model, then switch to a more complicated model on the same representation
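A minimal sketch of the idea: the feature code is written once and shared, so the linear model and the "more complicated" model consume the same representation. The `abs(x)` feature, the hand-picked weights, and the 1-nearest-neighbour stand-in for a complex model are all illustrative assumptions.

```python
def features(x):
    """Hand-engineered representation, shared by every model."""
    return [1.0, x, abs(x)]

train = [(-2.0, 2.0), (-1.0, 1.0), (0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]  # y = |x|

def linear_predict(weights, x):
    """Step 1: a linear model on the engineered features."""
    return sum(w * f for w, f in zip(weights, features(x)))

# Weights chosen by hand here; normally learned by least squares.
weights = [0.0, 0.0, 1.0]

def knn_predict(x):
    """Step 2: a 'more complicated' model reusing the same features."""
    fx = features(x)
    def dist(pair):
        fz = features(pair[0])
        return sum((a - b) ** 2 for a, b in zip(fx, fz))
    return min(train, key=dist)[1]

assert linear_predict(weights, -1.5) == 1.5
assert knn_predict(1.9) == 2.0
```

The payoff is that once the representation is validated with the cheap linear model, swapping in a heavier model changes one function, not the whole pipeline.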
Flexible Code
• We can sacrifice "code efficiency" for "flexibility"
• Exchange "raw" data between models and preprocessing code
• Unlike an API server, in a machine learning task many assumptions can change
• We should always be prepared to rebuild the whole pipeline from scratch
Reproducible Preprocessing
• Every data preprocessing step will fail on the first iteration
• Let's fall in love with the shell
Shell commands
```shell
# Move each directory's files into a subdirectory named dummy;
# mv doesn't support moving many files at once
for x in *; do for xx in $x/*; do command mv $xx $x/dummy; done; done

# Recursively count files in a Linux directory
find $DIR -type f | wc -l

# Remove whitespace from filenames (using shell substitution)
for x in *\ .jpg; do echo $x ${x//\ /}; done

# Remove many files from a large directory
find . -name '*.mol' -exec rm {} \;

# Kill processes whose command line contains a partial string
ps -ef | grep [some_string] | grep -v grep | awk '{print $2}' | xargs kill -9

# Parallel imagemagick preprocessing
ls *.jpg | parallel -j 48 convert {} -crop 240x320+0+0 {} 2> error.log
```
How many of these commands are you familiar with?
• echo, touch, awk, sed, cat, cut, grep, xargs, find
• wait, background (&), redirect (>)
• ps, netstat
• for, if, function
• parallel, imagemagick (convert)
```shell
#!/bin/zsh
set -x
trap 'pkill -P $$' SIGINT SIGTERM EXIT

multitailc () {
    args=""
    for file in "$@"; do
        args+="-cT ANSI $file "
    done
    multitail $args
}

export CUDA_VISIBLE_DEVICES=0
python train.py &> a.log &
export CUDA_VISIBLE_DEVICES=1
python train.py &> b.log &

multitailc *.log
wait
echo "Finish Experiments"
```
Working Process
1. Prepare "proven" data, model, or idea
2. Data validation
3. Set up evaluation metrics (at least two)
   • one for model comparison, the other for humans
4. Code and test whether it is "well" trained or not
5. Model improvement (iteration)
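The two-metric step can be sketched with one fine-grained metric for ranking experiments (log loss) and one coarse, human-readable metric for reporting (accuracy), both computed from the same predictions. The helper names and thresholds are assumptions for illustration.

```python
import math

def log_loss(probs, labels):
    """Fine-grained metric: good for comparing one model against another."""
    eps = 1e-12
    return -sum(math.log(max(eps, p if y == 1 else 1 - p))
                for p, y in zip(probs, labels)) / len(labels)

def accuracy(probs, labels):
    """Coarse but human-readable metric: good for reporting progress."""
    return sum((p >= 0.5) == (y == 1) for p, y in zip(probs, labels)) / len(labels)

probs = [0.9, 0.8, 0.3, 0.6]
labels = [1, 1, 0, 0]
print("log loss: %.3f, accuracy: %.2f" % (log_loss(probs, labels), accuracy(probs, labels)))
```

Two models can share the same accuracy while one has a clearly better log loss, which is why the comparison metric and the human metric should be tracked separately.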
Build our best practice
• datawrapper - model - trainer
• data/ folder in project root
• experiment management
Be aware of ML's technical debt
• Recommended reading: "Machine Learning: The High-Interest Credit Card of Technical Debt" from Google
References
• Productivity is about not waiting
• Machine Learning: The High-Interest Credit Card of Technical Debt
• Patterns for Research in Machine Learning
• Development workflows for Data Scientists