Slide 1

Modern Data Pipelines in AdTech - life in the trenches

Slide 2

Roksolana Diachuk
• Big Data Developer at Captify
• Diversity & Inclusion ambassador at Captify
• Women Who Code Kyiv Data Engineering Lead
• Speaker

Slide 3

Agenda
1. What is AdTech?
2. Data pipelines in AdTech
3. Practical examples
4. Historical data reprocessing
5. Conclusions

Slide 4

AdTech
AdTech methodologies deliver the right content at the right time to the right consumer

Slide 5

What does Captify do?
Captify’s technologies unite to collect, connect and categorise billions of real-time search events from 2.3 billion consumers.

Slide 6


Slide 7

Data pipelines in AdTech
• Reporting

Slide 8

Data pipelines in AdTech
• Reporting
• Insights

Slide 9

Data pipelines in AdTech
• Reporting
• Insights
• Data costs attribution

Slide 10

Data pipelines in AdTech
• Reporting
• Insights
• Data costs attribution
• User audience building

Slide 11

Data pipelines in AdTech
• Reporting
• Insights
• Data costs attribution
• User audience building
• All kinds of data processing/storage

Slide 12

Reporting

Slide 13

Reporting
Data provider → Transformer → Loader

Slide 14

Data ingestion

Slide 15

Data ingestion
• Schema
• Date parsing
• S3 lister
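The ingestion components above can be sketched in plain Scala. This is a minimal, hypothetical illustration, not the actual Captify code: the S3 key layout (`<feed>/yyyy/MM/dd/<file>`) and the helper names are assumptions. An S3 lister yields keys with date partitions, and date parsing filters them down to the requested window.

```scala
import java.time.LocalDate

// Hypothetical S3 key layout: <feed>/yyyy/MM/dd/<file>; the regex below
// extracts the partition date from each listed key.
val keyDate = raw"(\d{4})/(\d{2})/(\d{2})".r.unanchored

def dateOf(key: String): Option[LocalDate] = key match {
  case keyDate(y, m, d) => Some(LocalDate.of(y.toInt, m.toInt, d.toInt))
  case _                => None
}

// Keep only keys whose partition date falls inside [from, to].
def listForWindow(keys: Seq[String], from: LocalDate, to: LocalDate): Seq[String] =
  keys.filter(k => dateOf(k).exists(d => !d.isBefore(from) && !d.isAfter(to)))

val keys = Seq(
  "standard-feed/2022/02/27/part-0000.gz",
  "standard-feed/2022/03/01/part-0000.gz",
  "standard-feed/2022/03/02/part-0001.gz"
)
val picked = listForWindow(keys, LocalDate.of(2022, 3, 1), LocalDate.of(2022, 3, 31))
```

In the real pipeline the listing would come from the S3 API and the window from the job schedule; the filtering logic stays the same.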

Slide 16

Slide 16 text

Data loading

Slide 17

Data loading
• Metadata handling
• First upload vs scheduled runs
• Schema definition
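The "first upload vs scheduled runs" distinction can be sketched as a small decision step. Everything here (the `Metadata` record, the `LoadPlan` type, the day-granularity watermark) is an illustrative assumption, not the actual loader:

```scala
import java.time.LocalDate

// Watermark recorded by a previous run; absent on the very first upload.
final case class Metadata(lastLoadedDate: LocalDate)

sealed trait LoadPlan
case object FullLoad extends LoadPlan
final case class IncrementalLoad(fromDate: LocalDate) extends LoadPlan

// First upload: no metadata yet, so load everything.
// Scheduled run: resume from the day after the recorded watermark.
def planLoad(metadata: Option[Metadata]): LoadPlan = metadata match {
  case None       => FullLoad
  case Some(meta) => IncrementalLoad(meta.lastLoadedDate.plusDays(1))
}
```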

Slide 18

Challenges
• Diverse data types

Slide 19

Challenges
• Diverse data types
• Time dependency

Slide 20

Challenges
• Diverse data types
• Time dependency
• External data storage

Slide 21

Challenges
• Diverse data types
• Time dependency
• External data storage
• Constant connection with end users

Slide 22

Data costs attribution

Slide 23

Data costs attribution
Log-level data → Mapper → Ingestor → Transformer → Data costs calculator

Slide 24

Data costs attribution

Slide 25

Attribution data source
• Standard feed
• Segment feed

Slide 26

Attribution data source
• Impressions
• Clicks
• Conversions

Slide 27

Data costs attribution
Mapping → Ingestion → Transformation → Data costs calculator

Slide 28

Data costs attribution
Mapping → Ingestion → Transformation → Data costs calculator
S3 lister • Unified schema • Data costs • Feeds join
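The feeds-join step can be sketched with plain collections (in production this would be a Spark join over the feeds). The field names, the user-level join key, and the per-match cost model are assumptions for illustration only:

```scala
// Rows already mapped into a unified schema by the earlier stages.
final case class StandardRow(impressionId: String, userId: String)
final case class SegmentRow(userId: String, segmentId: String, costPerMatch: BigDecimal)
final case class AttributedCost(impressionId: String, segmentId: String, cost: BigDecimal)

// Feeds join: attribute a data cost to every impression that matched a segment.
def joinFeeds(standard: Seq[StandardRow], segments: Seq[SegmentRow]): Seq[AttributedCost] =
  for {
    imp <- standard
    seg <- segments if seg.userId == imp.userId
  } yield AttributedCost(imp.impressionId, seg.segmentId, seg.costPerMatch)

val costs = joinFeeds(
  Seq(StandardRow("imp-1", "u1"), StandardRow("imp-2", "u2")),
  Seq(SegmentRow("u1", "seg-a", BigDecimal("0.002")))
)
```

Unmatched impressions ("imp-2" above) simply attract no cost, which is the usual semantics of an inner join between a feed and a rate card.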

Slide 29

Challenges
• Processing and storing really large data volumes (!)

Slide 30

Challenges
• Processing and storing really large data volumes (!)
• Failure handling

Slide 31

Challenges
• Processing and storing really large data volumes (!)
• Failure handling
• Historical data reprocessing

Slide 32

Historical data reprocessing

Slide 33

Business use case

Slide 34

Attribution pipeline

Slide 35

Reprocessing mechanism

standardfeed.transformer.Config.feedPeriod: "P30D"
standardfeed.transformer.Config.startDateTime: 2022-03-01T00:00

val minTime = currentDay.minus(config.feedPeriod)
listFiles.filter(file => file.eventDateTime isAfter minTime)
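Expanded into a runnable sketch (the `EventFile` type and the sample dates are illustrative stand-ins for the real file listing), the mechanism is: the window comes from config as an ISO-8601 period ("P30D"), and only files newer than `currentDay` minus that period are picked up for reprocessing.

```scala
import java.time.{LocalDateTime, Period}

final case class EventFile(path: String, eventDateTime: LocalDateTime)

val feedPeriod = Period.parse("P30D")                   // from config: "P30D"
val currentDay = LocalDateTime.of(2022, 3, 31, 0, 0)
val minTime    = currentDay.minus(feedPeriod)           // 30 days back

val listFiles = Seq(
  EventFile("feed/2022/02/27/part-0000.gz", LocalDateTime.of(2022, 2, 27, 0, 0)), // outside window
  EventFile("feed/2022/03/15/part-0000.gz", LocalDateTime.of(2022, 3, 15, 0, 0))
)
val toReprocess = listFiles.filter(file => file.eventDateTime.isAfter(minTime))
```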

Slide 36

Reprocessing
Production version: is_impression=true / is_impression=false
Reprocessed version: is_impression=true / is_impression=false

Slide 37

Reprocessing
Production version: is_impression=true / is_impression=false
Reprocessed version: is_impression=true / is_impression=false

Slide 38

Reprocessing Production version Reprocessed version

Slide 39

Reporting
Production version → attribution_prod
Reprocessed version → attribution_reprocessed

Slide 40

Reporting
• Reprocessed table
• Dropped partitions

Slide 41

Future with Delta Lake
• Time travel
• Keeping track of changes
• Schema enforcement

Slide 42

Future with Delta Lake
• Parquet files => Delta files
• Spark tables => Delta tables
• …
• Leveraging data versions through Delta table history
• Vacuuming unsuitable data

Slide 43

Challenges
• Computing resources

Slide 44

Challenges
• Computing resources
• Speed of processing

Slide 45

Challenges
• Computing resources
• Speed of processing
• Complexity

Slide 46

Challenges
• Computing resources
• Speed of processing
• Complexity
• High cost of errors

Slide 47


Slide 48

Conclusions
1. AdTech is an exciting domain for big data

Slide 49

Conclusions
1. AdTech is an exciting domain for big data
2. There is more than one approach to leveraging data

Slide 50

Conclusions
1. AdTech is an exciting domain for big data
2. There is more than one approach to leveraging data
3. There is always room for improvement

Slide 51

My contact info
• dead_flowers22
• roksolana-d
• roksolanadiachuk
• roksolanad

Slide 52

Stand With Ukraine