ML Productivity
Beomjun Shin
January 17, 2018

Short talks on the productivity of machine learning.
Transcript
ML Productivity Ben (Beomjun Shin) 2018-01-17 (Wed) © Beomjun Shin
Productivity is about not waiting

Time Scales
• Immediate: less than 60 seconds.
• Bathroom break: less than 5 minutes.
• Lunch break: less than 1 hour.
• Overnight: less than 12 hours.

WE MUST ESTIMATE TIME BEFORE RUNNING!
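One way to act on "estimate time before running" is to time a few probe steps and extrapolate before committing to the full run. This is a minimal sketch, not from the talk; `estimate_total`, `step_fn`, and `n_probe` are illustrative names.

```python
import time

def estimate_total(step_fn, n_steps, n_probe=3):
    """Time a few probe steps and extrapolate to the full run."""
    start = time.perf_counter()
    for _ in range(n_probe):
        step_fn()
    per_step = (time.perf_counter() - start) / n_probe
    return per_step * n_steps

# Example: a step taking ~1 ms, extrapolated to 10,000 steps.
eta = estimate_total(lambda: time.sleep(0.001), n_steps=10_000)
print(f"estimated run time: {eta:.1f} s")
```

Knowing the estimate up front tells you which time scale you are in (immediate, bathroom break, lunch, overnight) before you start waiting.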
Productivity == Iteration
```python
import logging
import time
from functools import wraps

import humanfriendly

logger = logging.getLogger(__name__)

class timeit(object):
    def __init__(self, name):
        self.name = name

    def __call__(self, f):
        @wraps(f)
        def wrap(*args, **kw):
            ts = time.time()
            result = f(*args, **kw)
            te = time.time()
            logger.info("%s %s" % (self.name, humanfriendly.format_timespan(te - ts)))
            return result
        return wrap
```
```python
import contextlib
import logging
import time

from humanfriendly import format_timespan

logger = logging.getLogger(__name__)

@contextlib.contextmanager
def timer(name):
    """Example.

    with timer("Some Routines"):
        routine1()
        routine2()
    """
    start = time.perf_counter()  # time.clock() was removed in Python 3.8
    yield
    end = time.perf_counter()
    duration = end - start
    readable_duration = format_timespan(duration)
    logger.info("%s %s" % (name, readable_duration))
```
Use Less Data
• Sampled data
• Various data
• Synthetic data to validate a hypothesis
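The sampled-data and synthetic-data ideas can be sketched as follows; this is an illustration with made-up names (`full_dataset`, `synthesize`), not code from the talk.

```python
import random

random.seed(0)

# Sampled data: iterate on a small random subset before using the full set.
full_dataset = list(range(100_000))          # stand-in for real examples
sample = random.sample(full_dataset, 1_000)  # 1% subset for fast iteration

# Synthetic data: if the model cannot fit data generated from a known rule,
# the pipeline (not the data) is what's broken.
def synthesize(n):
    xs = [random.uniform(-1, 1) for _ in range(n)]
    ys = [3.0 * x + 0.5 for x in xs]  # known ground-truth relation
    return xs, ys

xs, ys = synthesize(256)
```

A run on the subset or the synthetic set finishes on the "immediate" time scale, so hypotheses get validated in seconds rather than overnight.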
Sublinear Debugging
• Prefer a pre-trained model to training from scratch
• Prefer "proven" (open-sourced) code to coding from scratch
• Prefer SGD to "complex" optimization algorithms
Sublinear Debugging
• Log as much as possible:
  • BatchNorm mean/variance tracking for the first N steps
  • Scale of logits and activations
• Rigorously validate data quality, preprocessing, and augmentation
  • Two days spent on validation is worth it
• Insert as many assertions as possible
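A minimal sketch of the "assertions plus scale tracking" idea, assuming a NumPy batch; `check_batch` and its invariants are illustrative, not from the talk.

```python
import numpy as np

def check_batch(x, y, num_classes):
    """Cheap invariants worth running on the first few batches."""
    assert not np.isnan(x).any(), "NaN in inputs"
    assert x.std() > 0, "constant inputs: normalization likely broken"
    assert y.min() >= 0 and y.max() < num_classes, "label out of range"
    # Track the scale of inputs/activations: wildly wrong scales show up
    # in the first N steps, long before the loss curve tells you.
    return {"mean": float(x.mean()), "std": float(x.std())}

stats = check_batch(np.random.randn(32, 10), np.random.randint(0, 5, 32), 5)
```

Assertions like these fail in seconds, which is exactly the sublinear-debugging point: catch the bug before paying for a full training run.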
Linear Feature Engineering
Engineer features for a linear model first, then switch to a more complicated model on the same representation.
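As a sketch of that workflow (illustrative data and names, assuming NumPy): fit a cheap linear model on engineered features, and only later swap in a heavier model that consumes the same feature matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(200, 1))
y = 2.0 * x[:, 0] + x[:, 0] ** 2  # unknown target we want to model

# Engineer features against a linear model first.
features = np.column_stack([x[:, 0], x[:, 0] ** 2, np.ones(len(x))])
coef, *_ = np.linalg.lstsq(features, y, rcond=None)
mse = float(np.mean((features @ coef - y) ** 2))

# A later, more complex model would reuse exactly this `features` matrix.
```

The linear fit is fast enough to iterate on feature choices; once the representation is good, the switch to a complex model is a one-line change rather than a rewrite.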
Flexible Code
• We can sacrifice code efficiency for flexibility
• Exchange "raw" data between models, and do preprocessing in code
• Unlike an API server, in a machine learning task many assumptions can change
• We should always be prepared to rebuild the whole pipeline from scratch
Reproducible Preprocessing
• Every data preprocessing step will fail on the first iteration
• Let's fall in love with the shell
Shell commands
```shell
# Move each directory's files into a subdirectory named dummy;
# mv doesn't support moving many files at once
for x in *; do for xx in $x/*; do command mv $xx $x/dummy; done; done

# Recursively count files in a Linux directory
find $DIR -type f | wc -l

# Remove whitespace from filenames (using shell substitution)
for x in *\ .jpg; do echo $x ${x//\ /}; done

# rm a large directory
find . -name '*.mol' -exec rm {} \;

# Kill processes matching a partial string
ps -ef | grep [some_string] | grep -v grep | awk '{print $2}' | xargs kill -9

# Parallel imagemagick preprocessing
ls *.jpg | parallel -j 48 convert {} -crop 240x320+0+0 {} 2> error.log
```
How many of these commands are you familiar with?
• echo, touch, awk, sed, cat, cut, grep, xargs, find
• wait, background (&), redirect (>)
• ps, netstat
• for, if, function
• parallel, imagemagick (convert)
```shell
#!/bin/zsh
set -x
trap 'pkill -P $$' SIGINT SIGTERM EXIT

multitailc () {
    args=""
    for file in "$@"; do
        args+="-cT ANSI $file "
    done
    multitail $args
}

export CUDA_VISIBLE_DEVICES=0
python train.py &> a.log &
export CUDA_VISIBLE_DEVICES=1
python train.py &> b.log &
multitailc *.log
wait
echo "Finish Experiments"
```
Working Process
1. Prepare "proven" data, model, or idea
2. Validate the data
3. Set up evaluation metrics (at least two): one for model comparison, the other for humans
4. Code, and test whether the model trains "well"
5. Improve the model (iterate)
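The two-metric idea in step 3 can be sketched like this for a binary classifier; the function and the sample values are illustrative, not from the talk.

```python
import math

def evaluation(y_true, p_pred):
    eps = 1e-12
    # Model-comparison metric: log loss (fine-grained, good for ranking models).
    logloss = -sum(
        t * math.log(max(p, eps)) + (1 - t) * math.log(max(1 - p, eps))
        for t, p in zip(y_true, p_pred)
    ) / len(y_true)
    # Human-facing metric: accuracy (coarse, but easy to communicate).
    acc = sum((p >= 0.5) == bool(t) for t, p in zip(y_true, p_pred)) / len(y_true)
    return {"logloss": logloss, "accuracy": acc}

scores = evaluation([1, 0, 1, 1], [0.9, 0.2, 0.6, 0.4])
```

The sensitive metric decides which of two models is better; the interpretable one is what you report to everyone else.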
Build Our Best Practice
• datawrapper - model - trainer
• a data/ folder in the project root
• experiment management
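One reading of the datawrapper - model - trainer split is three classes with one meeting point; this skeleton is an interpretation with stand-in logic, not the author's code.

```python
class DataWrapper:
    """Owns loading and batching; keeps data assumptions out of the model."""
    def __init__(self, examples, batch_size):
        self.examples, self.batch_size = examples, batch_size

    def batches(self):
        for i in range(0, len(self.examples), self.batch_size):
            yield self.examples[i:i + self.batch_size]

class Model:
    """Owns parameters and the forward pass."""
    def step(self, batch):
        return sum(batch) / len(batch)  # stand-in for a real loss

class Trainer:
    """Owns the loop, logging, and checkpoints: the only place they meet."""
    def __init__(self, data, model):
        self.data, self.model = data, model

    def fit(self):
        return [self.model.step(b) for b in self.data.batches()]

losses = Trainer(DataWrapper(list(range(8)), batch_size=4), Model()).fit()
```

Because each piece owns one concern, swapping the dataset, the model, or the training loop does not force a rewrite of the other two.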
Be Aware of ML's Technical Debt
• Recommended reading: Machine Learning: The High-Interest Credit Card of Technical Debt, from Google
References
• Productivity is about not waiting
• Machine Learning: The High-Interest Credit Card of Technical Debt
• Patterns for Research in Machine Learning
• Development workflows for Data Scientists