Productionizing Big Data - stories from the trenches
Presented at ScalaDays 2023 (Madrid, Spain)
Roksolana
September 14, 2023
Transcript
Productionizing big data - stories from the trenches
Roksolana Diachuk
• Engineering manager at Captify
• Women Who Code Kyiv Data Engineering Lead
• Speaker
AdTech: methodologies that deliver the right content at the right time to the right consumer.
You have your pipelines in production. What's next?
Types of issues:
• Low performance
• Human errors
• Data source errors
Story #1. Unlucky query
Problem: drop 13 months of user profiles that feed reporting.
The affected data spans 13 months of hourly partitions keyed like hour=22042001.
Loading mechanism (the period to load comes from config):
loader.ImpalaLoaderConfig.periodToLoad: "P5D"
loader.ImpalaLoaderConfig.periodToLoad: "P13M"

val minTime = currentDay.minus(config.feedPeriod)
listFiles.filter(file => file.eventDateTime isAfter minTime)
Solution: reload the feed step by step with increasing periods instead of one 13-month load:
loader.ImpalaLoaderConfig.periodToLoad: "P5D"
loader.ImpalaLoaderConfig.periodToLoad: "P1M"
loader.ImpalaLoaderConfig.periodToLoad: "P13M"
…
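A minimal sketch of the period-based file filter behind this mechanism, assuming periodToLoad is an ISO-8601 period string ("P5D", "P13M"); FileEntry and filesToLoad are hypothetical stand-ins for the loader's internals:

import java.time.{LocalDateTime, Period}

// hypothetical stand-in for the loader's file metadata
case class FileEntry(path: String, eventDateTime: LocalDateTime)

// keep only files newer than currentDay minus the configured period,
// so a "P13M" config silently widens the window from 5 days to 13 months
def filesToLoad(listFiles: Seq[FileEntry],
                periodToLoad: String,
                currentDay: LocalDateTime): Seq[FileEntry] = {
  val minTime = currentDay.minus(Period.parse(periodToLoad))
  listFiles.filter(_.eventDateTime.isAfter(minTime))
}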
Story #2. Missing data
Data ingestion: Data from Partner X → Extractor → Data costs attribution.
Problem: the column prefix changed, so old and new data no longer share a schema:
XX Advertiser ID, Language, XX Device Type, …, XX Media Cost (USD)
X Advertiser ID, Language, X Device Type, …, X Media Cost (USD)
Solution:
• Rename old columns (see the snippet below)
• Reload data for the week
Solution:

import scala.util.matching.Regex

val colRegex: Regex = """X (.+)""".r
// map each old column name ("X …") to its new one ("XX …")
val oldNewColumnsMapping = df.columns.collect {
  case oldColName @ colRegex(suffix) => (oldColName, "XX " + suffix)
}
// apply the renames one by one
val renamed = oldNewColumnsMapping.foldLeft(df) {
  case (data, (oldName, newName)) => data.withColumnRenamed(oldName, newName)
}
Result: XX Advertiser ID, Language, XX Device Type, …, XX Media Cost (USD)
Story #3. Divide and conquer
Problem: a single state under processing_time=…/part-*.parquet goes through filtering and aggregations into created=…/part-*.parquet.
• Slow processing
• Large parquet files
• A failing job that consumes lots of resources
Solution:
• Write a new, partitioned state
• Run downstream jobs with smaller states
• Generate a seed partition column: xxhash64(fullUrl, domain) (see the sketch below)
Resulting layout: processing_time=…/bucket=0/part-*.parquet … processing_time=…/bucket=9/part-*.parquet instead of a single processing_time=…/part-*.parquet.
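A minimal sketch of this bucketing step in Spark, assuming the slide's fullUrl and domain columns and ten buckets to match the bucket=0 … bucket=9 layout; the destination path is hypothetical:

import org.apache.spark.sql.functions.{col, lit, pmod, xxhash64}

// derive a stable bucket id from the seed columns
val bucketed = df.withColumn(
  "bucket",
  pmod(xxhash64(col("fullUrl"), col("domain")), lit(10))
)

// write the state partitioned by time and bucket, so each downstream
// job reads a much smaller slice
bucketed.write
  .partitionBy("processing_time", "bucket")
  .parquet("s3://bucket/state/") // hypothetical destination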
Story #4. Catch the evolution train
Data organisation evolution
Problem:
• Missing columns from the source
• Impala to Databricks migration speed
• Dependency on another team
• Unhappy users
Log-level data pipeline: Mapper → Ingestor → Transformer → Data costs calculator → Data costs attribution.
Data costs attribution flow: Data extractor → Impala loader → Data costs attribution.
Solution: move to the wider schema, from 26 columns (XX Advertiser ID, Language, XX Device Type, …, XX Media Cost (USD)) to 82 columns (XX Advertiser ID, Language, XX Device Type, …, XX Partner Currency, XX CPM Fee (USD)).
Solution: Data extractor → new ingestion job.
Solution:

import org.apache.spark.sql.SaveMode

// final step is writing the data
df.write
  .partitionBy("event_date", "event_hour")
  .mode(SaveMode.Overwrite)
  .parquet(dstPath)
Why this solution doesn't work: a single data_feed delivers clicks.csv.gz, views.csv.gz and activity.csv.gz, yet everything lands under one event_date as clicks1.parquet, clicks2.parquet, …
Attribution data source: impressions, clicks, conversions.
Solution: split ingestion per event type, so impressions, clicks and conversions each become their own dataset built from the source files (clicks.csv.gz, views.csv.gz, activity.csv.gz); a sketch follows.
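A minimal sketch of that split, assuming a views→impressions, clicks→clicks, activity→conversions pairing (the exact mapping isn't on the slide) and hypothetical S3 paths:

// hypothetical mapping of source files to event-type datasets
val eventFiles = Map(
  "impressions" -> "views.csv.gz",
  "clicks"      -> "clicks.csv.gz",
  "conversions" -> "activity.csv.gz"
)

// read each file into its own dataset instead of one mixed feed
val datasets = eventFiles.map { case (eventType, file) =>
  eventType -> spark.read
    .option("header", "true")
    .csv(s"s3://bucket/data_feed/$file") // hypothetical source path
}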
Story #5. Cleanup time
Corrupted data: Data from Partner X → Ingestor fails with
IllegalArgumentException: Can't convert value to BinaryType data type
Solution:
• Adjust the pipeline
• Reload data for 3 days on S3
• Relaunch the Databricks autoloader (see the sketch below)
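A minimal sketch of relaunching such a stream with Databricks Auto Loader, assuming CSV input; all paths are hypothetical:

// read the partner feed incrementally with Auto Loader
val stream = spark.readStream
  .format("cloudFiles")
  .option("cloudFiles.format", "csv")
  .option("cloudFiles.schemaLocation", "s3://bucket/_schemas/partner_x/")
  .load("s3://bucket/partner_x/")

// write the ingested data out; a fresh checkpoint restarts the stream
stream.writeStream
  .format("parquet")
  .option("checkpointLocation", "s3://bucket/_checkpoints/partner_x/")
  .start("s3://bucket/ingested/partner_x/")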
Current solution: separate flows for impressions, videoevents, conversions and clicks.
Better solution: the same event types, reorganized (diagram comparing the two layouts).
Conclusions
Prevention mechanisms:
1. Set up clear expectations with stakeholders
2. Observability is the key
3. Distribute the data transformation load
4. Plan major changes carefully
Conclusions:
1. Data setup is always changing
2. Errors can be prevented
3. There are multiple approaches with different tools
4. Data evolution is hard
My contact info: dead_flowers22 • roksolana-d • roksolanadiachuk • roksolanad