Slide 1

Slide 1 text

KIBA ETL: Past, present & future of data processing with Ruby
RubyKaigi 2018 - Thibaut Barrère (France), independent consultant
[email protected] / twitter.com/thibaut_barrere

Slide 2

Slide 2 text

KIBA ETL? A lightweight, generic data processing framework for Ruby, initially released in 2015 and now at v2.

Slide 3

Slide 3 text

Kiba == Fang (scrunch your data)

Slide 4

Slide 4 text

EXTRACT TRANSFORM LOAD (see http://thibautbarrere.com)

Slide 5

Slide 5 text

No content

Slide 6

Slide 6 text

Back in time
Why was Kiba ETL created?

Slide 7

Slide 7 text

2006 ❤ activewarehouse-etl Thanks Anthony Eden! (now founder at DNSimple)

Slide 8

Slide 8 text

source :in, {
  :file => 'extracted/crm-changes.csv',
  :parser => :delimited,
  :skip_lines => 1
}, [:first_name, :last_name]

after_read :ensure_fields_presence, { :fields => fields }

transform(:id_partenaire) { |n, v, r| some_computation }

destination :out, {
  :file => output_file,
  :include_headers => true,
  :separator => ";"
}

Slide 9

Slide 9 text

transform(:email_provider) do |name, value, row|
  row.fetch(:email).downcase.split('@').last
end

transform :email_provider, :default, :default_value => 'Unknown'

before_write do |r|
  r[:email_provider] =~ /Hotmail/ ? nil : r
end

Slide 10

Slide 10 text

before_write do |row|
  position, status = geocode_with_redis_cache(row)
  row[:geocoding_status] = status
  row[:latitude] = position.latitude
  row[:longitude] = position.longitude
  row
end

Slide 11

Slide 11 text

$ etl my_etl_script.etl

Slide 12

Slide 12 text

Use cases 2006-2012
- Extract CSV data from a ($$$) CRM
- Import to MySQL (low-cost Business Intelligence)
- Geocode & export to a "nearby search" Rails app
- IBAN validation & export to a COBOL back-end
More details available in the RuLu 2012 talk¹
¹ See YouTube video & SpeakerDeck slides

Slide 13

Slide 13 text

2011 Took over maintenance

Slide 14

Slide 14 text

ETL::Parser::Parser: SaxParser, FixedWidthParser, NokogiriXmlParser, XmlParser, ExcelParser, CsvParser

ETL::Processor::Processor: FtpUploaderProcessor, FtpDownloaderProcessor, EscapeCsvProcessor, ZipFileProcessor, SftpDownloaderProcessor, SftpUploaderProcessor, ImapattachmentDownloaderProcessor, EncodeProcessor, TruncateProcessor, BulkImportProcessor, Pop3attachmentDownloaderProcessor

ETL::Processor::RowProcessor: BlockProcessor, EnsureFieldsPresenceProcessor, DatabaseJoinProcessor, RequireNonBlankProcessor, CopyFieldProcessor, SequenceProcessor, PrintRowProcessor, SurrogateKeyProcessor, FilterRowProcessor, CheckExistProcessor, HierarchyExploderProcessor, CheckUniqueProcessor, RenameProcessor

ETL::Transform: StringToDateTimeTransform, DateToStringTransform, HierarchyLookupTransform, SplitFieldsTransform, DecodeTransform, StringToTimeTransform, Sha1Transform, TypeTransform, OrdinalizeTransform, BlockTransform, StringToDateTransform, Md5Transform, CalculationTransform, TrimTransform, ForeignKeyLookupTransform

Source: ModelSource, DatabaseSource, FileSource

Destination: CsvDestination, ExcelDestination, YamlDestination, DatabaseDestination, InsertUpdateDatabaseDestination, UpdateDatabaseDestination, FileDestination

Slide 15

Slide 15 text

module ETL #:nodoc:
  module Parser #:nodoc:
    # Parses CSV files
    class CsvParser < ETL::Parser::Parser
      # Initialize the parser
      # * source: The Source object
      # * options: Hash of options for the parser, defaults to an empty hash
      def initialize(source, options={})
        super
        configure
      end

      attr_reader :validate_rows

      def get_fields_names(file)
        File.open(file) do |input|
          fields = CSV.parse(input.readline, options).first
          new_fields = []
          fields.each_with_index do |field, index|
            # compute the index of occurrence of this specific occurrence of the field (usually, will be 1)
            occurrence_index = fields[0..index].find_all { |e| e == field }.size
            number_of_occurrences = fields.find_all { |e| e == field }.size
            new_field = field + (number_of_occurrences > 1 ? "_#{occurrence_index}" : "")
            new_fields << Field.new(new_field.to_sym)
          end
          return new_fields
        end
      end

      # Returns each row.
      def each
        Dir.glob(file).each do |file|
          ETL::Engine.logger.debug "parsing #{file}"
          if fields.length == 0
            ETL::Engine.logger.debug "no columns specified so reading names from first line of #{file}"
            @fields = get_fields_names(file)
          end
          line = 0
          lines_skipped = 0
          CSV.foreach(file, options) do |raw_row|
            if lines_skipped < source.skip_lines
              ETL::Engine.logger.debug "skipping line"
              lines_skipped += 1
              next
            end
            line += 1
            row = {}
            validate_row(raw_row, line, file) if self.validate_rows
            raw_row.each_with_index do |value, index|
              f = fields[index]
              row[f.name] = value
            end
            yield row
          end
        end
      end

      # Get an array of defined fields
      def fields
        @fields ||= []
      end

      private

      def validate_row(row, line, file)
        ETL::Engine.logger.debug "validating line #{line} in file #{file}"
        if row.length != fields.length
          raise_with_info(
            MismatchError,
            "The number of columns from the source (#{row.length}) does not match the number of columns in the definition (#{fields.length})",
            line, file
          )
        end
      end

      def configure
        @validate_rows = if source.configuration.has_key?(:validate_rows)
          source.configuration[:validate_rows]
        else
          true
        end
        source.definition.each do |options|
          case options
          when Symbol
            fields << Field.new(options)
          when Hash
            fields << Field.new(options[:name])
          else
            raise DefinitionError, "Each field definition must either be a symbol or a hash"
          end
        end
      end

      class Field #:nodoc:
        attr_reader :name

        def initialize(name)
          @name = name
        end
      end
    end
  end
end

Slide 16

Slide 16 text

Photo Credit: National Fire Protection Association

Slide 17

Slide 17 text

2013 - Maintenance stopped
- Too many features
- Yet, features lacking flexibility
- Custom components complicated to implement
- Costly to maintain (code + CI)

Slide 18

Slide 18 text

But I still have data processing needs!
source -> transform -> transform -> transform -> destination
...and that row-based DSL syntax is really great for maintenance

Slide 19

Slide 19 text

2008 - TinyTL (a tiny ETL)
- Clean-room implementation
- Minimalistic (113 lines total)
- 2 small production apps

Slide 20

Slide 20 text

2015 - Kiba ETL v0.5.0
- Lightweight
- Flexible (write your own components)
- Simple assumptions (PORO)
- Standalone (no ActiveRecord dependency)
- Strong focus on documentation & articles
- Easy to "keep alive" as a maintainer

Slide 21

Slide 21 text

5 keywords only:
pre_process
source
transform
destination
post_process
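As an orientation (not on the slide), here is a minimal sketch of a job touching all five keywords; CSVSource / CSVDestination stand in for user-provided classes like the ones shown a few slides later, and the file names are made up:

require 'kiba'

job = Kiba.parse do
  pre_process { puts 'starting' }

  source CSVSource, filename: 'data/input.csv', csv_options: { headers: true }

  # Pass rows through unchanged; return nil instead to discard a row.
  transform { |row| row }

  destination CSVDestination, filename: 'data/output.csv'

  post_process { puts 'done' }
end

Kiba.run(job)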

Slide 22

Slide 22 text

source CSVSource,
  filename: 'data/extract.csv',
  csv_options: { col_sep: ';', encoding: 'ISO-8859-1:UTF-8' }

transform MyLookup, config: { xxx }

transform do |row|
  {
    siren: row.fetch('SIREN'),
    libapen: row.fetch('LIBAPEN')
  }
end

destination CSVDestination, filename: 'data/output.csv'

Slide 23

Slide 23 text

Source
def initialize(*args)
def each (yields N rows)

Slide 24

Slide 24 text

class CSVSource
  def initialize(options)
    @filename = options.fetch(:filename)
    @csv_options = options.fetch(:csv_options)
  end

  def each
    CSV.foreach(@filename, @csv_options) do |row|
      yield(row)
    end
  end
end

Slide 25

Slide 25 text

Transform (as class)
def initialize(*args)
def process(row) (returns 1 or 0 row)

Transform (as block)
transform { |row| xxx } (returns 1 or 0 row)
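For example, a hypothetical class-based transform honoring this contract (a sketch, not part of Kiba):

class DowncaseEmail
  def initialize(field: :email)
    @field = field
  end

  # Return the (possibly modified) row, or nil to dismiss it.
  def process(row)
    row.merge(@field => row.fetch(@field).downcase)
  end
end

Used in a job declaration as: transform DowncaseEmail, field: :email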

Slide 26

Slide 26 text

Destination
def initialize(*args)
def write(row) (writes 1 row)
def close

Slide 27

Slide 27 text

class CSVDestination
  def initialize(filename:, csv_options: {}, headers: nil)
    @filename = filename
    @csv_options = csv_options
    @headers = headers
  end

  def write(row)
    @csv ||= CSV.open(@filename, 'wb', @csv_options)
    @headers ||= row.keys
    @headers_written ||= (@csv << @headers; true)
    @csv << row.fetch_values(*@headers)
  end

  def close
    @csv&.close
  end
end

Slide 28

Slide 28 text

2015-2018
- Numerous production uses
- Almost no core changes to Kiba
- Components easy to author & maintain
- "Long Term Support" of the gem possible as a solo dev

Slide 29

Slide 29 text

Sustainable project evolution

Slide 30

Slide 30 text

#1 Micro-batches
< 50k rows per job
Very near-realtime sync

Slide 31

Slide 31 text

No content

Slide 32

Slide 32 text

Push mode

Slide 33

Slide 33 text

Pull mode (with push notify)

Slide 34

Slide 34 text

Don't shell out from HTTP or Sidekiq ❌

def perform
  system("kiba my-import-job.etl")
end

Slide 35

Slide 35 text

Use the Programmatic API (e.g. call from Sidekiq²)

class PartnersUploadProcessorWorker
  include Sidekiq::Worker

  def perform(options)
    job = Kiba.parse do
      source CSVSource, filename: options.fetch('filename')
      transform ...
      transform ...
      transform ...
      destination SQLUpsert, connection: SEQUEL_DB, table: xxx
    end
    Kiba.run(job)
  end
end

² Kiba ETL Wiki: Considerations for running Kiba jobs programmatically

Slide 36

Slide 36 text

Extract the job declaration

module ETL
  module SyncPartners
    module_function

    def setup(source_file, sequel_connection, logger)
      Kiba.parse do
        source
        transform
        transform
        transform
        destination
      end
    end
  end
end
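With the declaration extracted, any caller (Sidekiq worker, rake task, CLI wrapper...) can build and run the job. A minimal sketch, with assumed names (source_file, SEQUEL_DB) not taken from the slide:

# Hypothetical caller: build the job via the extracted module, then run it.
job = ETL::SyncPartners.setup(source_file, SEQUEL_DB, Sidekiq.logger)
Kiba.run(job)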

Slide 37

Slide 37 text

#2 Multistep batch processing (enterprise data aggregation)
- N different ERP systems (1 per SaaS client)
- Different extraction methods
- Different input formats (fields & data)
- Target: a single, common schema

Slide 38

Slide 38 text

No content

Slide 39

Slide 39 text

No content

Slide 40

Slide 40 text

Keep things simple (cron + bash)

#!/usr/bin/env bash
set -e

bundle exec kiba common/s3_downloader.etl
bundle exec kiba client_acme/extractor.etl
bundle exec kiba client_acme/transformer.etl
bundle exec kiba common/api_uploader.etl
bundle exec kiba common/cleaner.etl

Slide 41

Slide 41 text

No content

Slide 42

Slide 42 text

Amazon S3 = inbox
Email from ERP (CSV/XLSX) -> Amazon SES -> S3
ERP -> SFTP export -> S3
Capybara -> XLSX download from ERP -> S3

Slide 43

Slide 43 text

Pre-extraction from S3 to local file system
- Generic local download step (Kiba script)
- Move from /unprocessed to /processed
- S3 lifecycle rules to ensure data removal

config = {
  aws_region: ENV.fetch('ACME_CORP_AWS_REGION'),
  aws_bucket: ENV.fetch('ACME_CORP_AWS_BUCKET'),
  aws_kms_key_id: ENV.fetch('ACME_CORP_AWS_KMS_KEY_ID')
}

source ETL::S3::ListObjects, config
transform ETL::S3::Downloader

Slide 44

Slide 44 text

Amazon S3 = message passing between machines (scalability!)

Slide 45

Slide 45 text

Keep things DRY & enforce re-use
- Sources, transforms, destinations
- "Meta-transforms"
- Steps (whole Kiba scripts)

Slide 46

Slide 46 text

Actual components re-use between pipelines

module DSLExtensions
  module DefaultValue
    def default_value(field, default_value)
      transform do |row|
        value = row.fetch(field)
        row.merge(field => value.blank? ? default_value : value)
      end
    end
  end
end

extend DSLExtensions::DefaultValue

default_value :some_field, 'Unknown'

Slide 47

Slide 47 text

More actual components re-use between pipelines:
AddTimestamps, DefaultValue, EnforceValuesChoice, RenameField, HashLookup, RaiseOnBlanks, RemapBoolean (more...)
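For illustration only, a hypothetical RenameField extension could follow the same pattern as the DefaultValue example above (a sketch, not the actual implementation):

module DSLExtensions
  module RenameField
    # Rename a field in each row (mirrors the DefaultValue pattern).
    def rename_field(from, to)
      transform do |row|
        row.reject { |key, _| key == from }.merge(to => row.fetch(from))
      end
    end
  end
end

extend DSLExtensions::RenameField

rename_field :old_name, :new_name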

Slide 48

Slide 48 text

#3 Automation of internal tasks

Slide 49

Slide 49 text

No content

Slide 50

Slide 50 text

No content

Slide 51

Slide 51 text

No content

Slide 52

Slide 52 text

source ETL::Sources::HTTPPaginatingFetcher,
  base_url: bank_base_url,
  headers: { Authorization: bank_auth }

assert_equal :content_type, 'application/json'

transform { |r| r.fetch(:body) }
transform { |r| JSON.parse(r) }
transform { |r| r.fetch('transactions') }
transform Kiba::Common::Transforms::EnumerableExploder

assert_equal 'currency', 'EUR'

# SNIP

destination Kiba::Pro::Destinations::SQLUpsert,
  database: ENV.fetch('DATABASE_URL'),
  unique_key: :fit_id,
  table: :bank_transactions

Slide 53

Slide 53 text

source ETL::Sources::OFXSource, dir_pattern: "files/*.ofx"

transform do |row|
  {
    fit_id: row.fit_id,
    memo: row.memo,
    name: row.name,
    posted_at: row.posted_at.to_date,
    amount: row.amount
  }
end

destination Kiba::Pro::Destinations::SQLUpsert,
  database: ENV.fetch('DATABASE_URL'),
  unique_key: :fit_id,
  table: :bank_transactions

Slide 54

Slide 54 text

Source = PDF files

Slide 55

Slide 55 text

PDF download & transform

pre_process do
  system!("wget -A pdf ... https://vat-site.com")
end

source Kiba::Common::Enumerable, -> { Dir["downloads/*.pdf"] }

transform { |r| { filename: r } }
transform PDFToText, :filename_key => :filename

# SNIP - extraction, verification

destination CSVDestination, file: 'vat_rates.csv'

Slide 56

Slide 56 text

class PDFToText
  def initialize(filename_key, content_key, pdftotext_args)
    @filename = filename_key
    @content = content_key
    @pdftotext_args = pdftotext_args
  end

  def process(row)
    cmd = "pdftotext #{@pdftotext_args} #{row[@filename]} -"
    output = system_with_output!(cmd)
    row[@content] = output
    row
  end
end
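The system_with_output! helper is not shown in the deck; a minimal sketch of what such a helper could look like (an assumption, using Open3):

require 'open3'

# Hypothetical helper: run a shell command, fail loudly, return its stdout.
def system_with_output!(cmd)
  stdout, stderr, status = Open3.capture3(cmd)
  raise "Command failed (#{cmd}): #{stderr}" unless status.success?
  stdout
end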

Slide 57

Slide 57 text

#4 Big rewrites & data migrations

Slide 58

Slide 58 text

No content

Slide 59

Slide 59 text

No content

Slide 60

Slide 60 text

Use MiniTest & Sequel for "data screens"

class Screens < ETL::BaseTest
  def test_well_known_record
    amount = db[:orders].where(id: 12456).get(:amount_with_vat)
    assert_equal 100.27, amount
  end

  def test_counts
    count = db[:orders].count
    assert_in_delta(230_000, count, 10_000)
  end

  def test_states
    states = db[:orders].distinct.select_map(:state).sort
    assert_equal ['complete', 'pending'], states
  end
end
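ETL::BaseTest is not shown in the deck; a minimal sketch of what it could provide (an assumption), namely the Sequel handle used by the screens above:

require 'minitest/autorun'
require 'sequel'

# Hypothetical base class: exposes the Sequel connection to screen tests.
module ETL
  class BaseTest < Minitest::Test
    def db
      @db ||= Sequel.connect(ENV.fetch('DATABASE_URL'))
    end
  end
end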

Slide 61

Slide 61 text

Key takeaways from data migrations
- Use "bulk insert" (Kiba Pro, Sequel, ...)
- Use deterministic ids & reset the sequence (see sketch below)
- Bypass ActiveRecord validations:
  validates_uniqueness_of :email, unless: -> { @etl_running }
- Work on a data subset for iteration
- Measure & optimize continuously
- Use screens to test specific & global data
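To illustrate the "deterministic ids & reset sequence" point, a sketch assuming PostgreSQL and a Sequel connection named DB (both assumptions, not from the talk):

# After loading rows with explicit, deterministic ids, realign the serial
# sequence so that future application inserts don't collide.
DB.run <<~SQL
  SELECT setval(
    pg_get_serial_sequence('orders', 'id'),
    (SELECT COALESCE(MAX(id), 1) FROM orders)
  );
SQL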

Slide 62

Slide 62 text

Why Kiba v2? Mostly a drop-in replacement.

Slide 63

Slide 63 text

class XMLSource
  def initialize(...)
    @dir_pattern = dir_pattern
  end

  def each
    Dir[@dir_pattern].sort.each do |file|
      doc = Nokogiri::XML(IO.binread(file))
      doc.search('/invoices/invoice').each do |item|
        yield(item)
      end
    end
  end
end

Slide 64

Slide 64 text

class CSVSource
  def initialize(...)
    @dir_pattern = dir_pattern
  end

  def each
    Dir[@dir_pattern].sort.each do |file|
      CSV.foreach(file, csv_options) do |row|
        yield(row)
      end
    end
  end
end

Slide 65

Slide 65 text

Kiba v2 "StreamingRunner"

def transform_stream(stream, t)
  Enumerator.new do |y|
    stream.each do |input_row|
      returned_row = t.process(input_row) do |yielded_row|
        y << yielded_row
      end
      y << returned_row if returned_row
    end
  end
end

❤ Ruby's Enumerator

Slide 66

Slide 66 text

v2 transforms can yield N rows, not just sources
More components re-use + more composition

class EnumerableExploder
  def process(row)
    row.each { |item| yield(item) }
    nil
  end
end

source DirectoryLister, dir_pattern: '*.csv'
transform XMLReader
transform XMLSearcher, selector: '/invoices/invoice'
transform EnumerableExploder

Slide 67

Slide 67 text

Happy with current data processing toolkit
- Kiba (core, OSS)
- Kiba Common (extra generic components, OSS)
- Kiba Pro (fast SQL, S3, profilers...)
- Tons of gems for pretty much any format

Slide 68

Slide 68 text

More performance available soon
- TruffleRuby
- Ruby 2.6.0 JIT improvements
- GIL & Guilds

Slide 69

Slide 69 text

Ruby ETL performance trend⁴
⁴ https://github.com/thbar/kiba-ruby-benchmarks

Slide 70

Slide 70 text

Data + Ruby = ❤
- Great asset for enterprise data glue
- Efficient code reuse between data pipelines
- Components can be tested quite easily (PORO) (see the test sketch below)
- Wealth of libraries to tap into
- Raw speed (single threaded) is good & improving
- The concurrency story will improve too!
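To illustrate the PORO testability point, a sketch of a plain Minitest test for the illustrative DowncaseEmail transform sketched earlier (assumption: that class is loaded):

require 'minitest/autorun'

# Transforms are PORO, so they can be unit-tested directly, with no Kiba machinery.
class DowncaseEmailTest < Minitest::Test
  def test_downcases_the_email_field
    row = { email: 'John.DOE@Example.COM' }
    assert_equal({ email: 'john.doe@example.com' }, DowncaseEmail.new.process(row))
  end
end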