Productionizing Big Data - stories from the trenches
Presented at ScalaDays 2023 (Madrid, Spain)
Roksolana
September 14, 2023
Transcript
Productionizing big data - stories from the trenches
Roksolana Diachuk
• Engineering manager at Captify
• Women Who Code Kyiv Data Engineering Lead
• Speaker
AdTech
AdTech methodologies deliver the right content at the right time to the right consumer.
You have your pipelines in production. What's next?
Types of issues
• Low performance
• Human errors
• Data source errors
Story #1. Unlucky query
Problem
Drop of 13 months of user profiles in the reporting pipeline.
[Diagram: 13 months of data affected, partitions like hour=22042001]
Loading mechanism
loader.ImpalaLoaderConfig.periodToLoad: "P5D"
loader.ImpalaLoaderConfig.periodToLoad: "P13M"

val minTime = currentDay.minus(config.feedPeriod)
listFiles.filter(file => file.eventDateTime isAfter minTime)
Solution
loader.ImpalaLoaderConfig.periodToLoad: "P5D"
loader.ImpalaLoaderConfig.periodToLoad: "P1M"
loader.ImpalaLoaderConfig.periodToLoad: "P13M"
…
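A minimal sketch of how such a period-based loading window can be computed with java.time. The config and file types below are hypothetical stand-ins; only the filter mirrors the slide.

import java.time.{LocalDateTime, Period}

// Hypothetical stand-ins for the loader's config and file metadata.
final case class ImpalaLoaderConfig(periodToLoad: String) // e.g. "P5D", "P1M", "P13M"
final case class FeedFile(path: String, eventDateTime: LocalDateTime)

def filesToLoad(listFiles: Seq[FeedFile],
                currentDay: LocalDateTime,
                config: ImpalaLoaderConfig): Seq[FeedFile] = {
  // "P13M" keeps 13 months of files in scope, so a mistyped period silently widens the window.
  val minTime = currentDay.minus(Period.parse(config.periodToLoad))
  listFiles.filter(file => file.eventDateTime.isAfter(minTime))
}

Reloading in smaller steps (P5D, P1M, …), as in the solution above, limits how much data a single misconfigured run can touch.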
Story #2. Missing data
Data ingestion
[Diagram: data from Partner X → Extractor → data costs attribution]
Problem
New column naming: XX Advertiser ID, Language, XX Device Type, …, XX Media Cost (USD)
Old column naming: X Advertiser ID, Language, X Device Type, …, X Media Cost (USD)
Solution
• Rename the old columns
• Reload the data for the week
Solution
import scala.util.matching.Regex

val colRegex: Regex = """X (.+)""".r
val oldNewColumnsMapping = df.columns.collect {
  case oldColName @ colRegex(suffix) => (oldColName, "XX " + suffix)
}
oldNewColumnsMapping.foldLeft(df) { case (data, (oldName, newName)) =>
  data.withColumnRenamed(oldName, newName)
}
Solution
Resulting columns: XX Advertiser ID, Language, XX Device Type, …, XX Media Cost (USD)
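A self-contained sketch of the rename step above, runnable with Spark in local mode; the sample columns and values are illustrative, not from the talk.

import org.apache.spark.sql.SparkSession
import scala.util.matching.Regex

object RenameOldColumns extends App {
  val spark = SparkSession.builder().master("local[*]").appName("rename-old-columns").getOrCreate()
  import spark.implicits._

  // Illustrative frame using the old "X "-prefixed naming.
  val df = Seq(("adv-1", "en", 0.42)).toDF("X Advertiser ID", "Language", "X Media Cost (USD)")

  val colRegex: Regex = """X (.+)""".r
  val renamed = df.columns
    .collect { case oldColName @ colRegex(suffix) => (oldColName, "XX " + suffix) }
    .foldLeft(df) { case (data, (oldName, newName)) => data.withColumnRenamed(oldName, newName) }

  renamed.printSchema() // "X Advertiser ID" becomes "XX Advertiser ID"; "Language" stays untouched.
  spark.stop()
}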
Story #3. Divide and conquer
Problem
[Diagram: a state partitioned by processing_time (part-*.parquet) goes through filtering and aggregations into an output partitioned by created (part-*.parquet)]
Problem
• Slow processing
• Large parquet files
• A failing job that consumes lots of resources
Solution
• Write a new partitioned state
• Run downstream jobs with smaller states
• Generate a seed partition column: xxhash64(fullUrl, domain) (see the sketch below)
Solution
[Diagram: the state partitioned by processing_time (part-*.parquet) is rewritten under created with bucket=0 … bucket=9 subpartitions, each holding its own part-*.parquet files]
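A minimal sketch of the seed-partition approach, assuming Spark 3+ and that the state already carries a created column; the output path and helper name are illustrative, only xxhash64(fullUrl, domain) comes from the slide.

import org.apache.spark.sql.{DataFrame, SaveMode}
import org.apache.spark.sql.functions.{col, lit, pmod, xxhash64}

// Derive a stable bucket (0..9) from the hashed keys and write the state partitioned by it,
// so each downstream job can read one smaller bucket instead of the whole state.
def writeBucketedState(state: DataFrame, dstPath: String, numBuckets: Int = 10): Unit =
  state
    .withColumn("bucket", pmod(xxhash64(col("fullUrl"), col("domain")), lit(numBuckets)))
    .write
    .partitionBy("created", "bucket")
    .mode(SaveMode.Overwrite)
    .parquet(dstPath)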
Story #4. Catch the evolution train
Data organisation evolution
Problem
• Missing columns from the source
• Impala to Databricks migration speed
• Dependency on another team
• Unhappy users
[Diagram: log-level data → Mapper → Ingestor → Transformer → data costs calculator → data costs attribution]
[Diagrams: the data costs attribution flow built from a data extractor and an Impala loader, shown before and after the migration]
Solution
XX Advertiser ID, Language, XX Device Type, …, XX Partner Currency, XX CPM Fee (USD)
XX Advertiser ID, Language, XX Device Type, …, XX Media Cost (USD)
26 columns → 82 columns
Solution
[Diagram: data extractor → new ingestion job]
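One illustrative way to handle the jump from 26 to 82 columns (not necessarily what the team did) is to align older data to the new schema by adding the missing columns as typed nulls; the helper below is a hypothetical sketch.

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.{col, lit}
import org.apache.spark.sql.types.StructType

// Add any columns missing from df as typed nulls, then order columns to match the target schema.
def alignToSchema(df: DataFrame, target: StructType): DataFrame = {
  val present = df.columns.toSet
  val withMissing = target.fields.foldLeft(df) { (acc, field) =>
    if (present.contains(field.name)) acc
    else acc.withColumn(field.name, lit(null).cast(field.dataType))
  }
  withMissing.select(target.fieldNames.map(col): _*)
}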
Solution
import org.apache.spark.sql.SaveMode

// final step is writing the data
df.write
  .partitionBy("event_date", "event_hour")
  .mode(SaveMode.Overwrite)
  .parquet(dstPath)
Why this solution doesn't work
[Diagram: the data_feed directory mixes clicks.csv.gz, views.csv.gz and activity.csv.gz, while the event_date output ends up as clicks1.parquet, clicks2.parquet, …]
[Diagram: impressions, clicks and conversions feeding the attribution data source]
Solution
[Diagram: separate impressions, clicks and conversions outputs built from clicks.csv.gz, views.csv.gz and activity.csv.gz]
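A sketch of the per-event-type layout from the solution above; the file-to-type mapping, paths, and helper name are hypothetical assumptions.

import org.apache.spark.sql.{SaveMode, SparkSession}

// Read each gzipped CSV feed on its own and write it to a dedicated event-type directory,
// so impressions, clicks and conversions can be consumed independently downstream.
def splitByEventType(spark: SparkSession, srcDir: String, dstDir: String): Unit = {
  val feeds = Map(
    "views.csv.gz"    -> "impressions", // assumed mapping
    "clicks.csv.gz"   -> "clicks",
    "activity.csv.gz" -> "conversions"  // assumed mapping
  )
  feeds.foreach { case (fileName, eventType) =>
    spark.read
      .option("header", "true")
      .csv(s"$srcDir/$fileName")
      .write
      .mode(SaveMode.Overwrite)
      .parquet(s"$dstDir/$eventType")
  }
}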
Story #5. Cleanup time
Corrupted data
[Diagram: data from Partner X → Ingestor]
The ingestion fails with: IllegalArgumentException: Can't convert value to BinaryType data type
Solution
• Adjust the pipeline
• Reload 3 days of data on S3
• Relaunch the Databricks Auto Loader (see the sketch below)
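A minimal Auto Loader sketch for the relaunch step, using the documented Databricks cloudFiles source; the paths, input format, and checkpoint/schema locations are illustrative placeholders, not from the talk.

import org.apache.spark.sql.SparkSession

def relaunchAutoloader(spark: SparkSession, srcPath: String, dstPath: String): Unit = {
  val query = spark.readStream
    .format("cloudFiles")                               // Databricks Auto Loader source
    .option("cloudFiles.format", "csv")                 // assumed input format
    .option("cloudFiles.schemaLocation", s"$dstPath/_schema")
    .load(srcPath)
    .writeStream
    .option("checkpointLocation", s"$dstPath/_checkpoint")
    .start(dstPath)
  query.awaitTermination()
}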
Current solution
[Diagram: separate impressions, videoevents, conversions and clicks flows]
Better solution
[Diagram: the impressions, videoevents, conversions and clicks flows reorganised]
Conclusions
Prevention mechanisms
1. Set up clear expectations with stakeholders
2. Observability is the key
3. Distribute the data transformation load
4. Plan major changes carefully
Conclusions
1. The data setup is always changing
2. Errors can be prevented
3. There are multiple approaches with different tools
4. Data evolution is hard
My contact info
• dead_flowers22
• roksolana-d
• roksolanadiachuk
• roksolanad