
The ninja elephant, scaling the analytics database in Transferwise


Federico Campoli

July 04, 2017

Transcript

  1. The ninja elephant Scaling the analytics database in Transferwise Federico

    Campoli Transferwise 4th July 2017 Federico Campoli (Transferwise) The ninja elephant 4th July 2017 1 / 38
  3. A few words about the speaker Born in 1972. Passionate about

    IT since 1982, mostly because of the TRON movie. Joined the Oracle DBA secret society in 2004. In love with PostgreSQL since 2006. Currently runs the Brighton PostgreSQL User group. Works at Transferwise as a Data Engineer.
  4. Table of contents 1 We have an appointment, and we

    are late! 2 The eye of the storm 3 How we did it 4 Maximum effort 5 Lessons learned 6 Wrap up
  5. We have an appointment, and we are late!
  7. The Gordian Knot of the analytics db I started the data

    engineer job in July 2016. I was involved in a task that was not customer facing, yet very critical to the business: I had to fix the performance issues on the MySQL analytics database, which performed badly despite the considerable resources assigned to the VM.
  8. The frog effect If you drop a frog in a

    pot of boiling water, it will of course frantically try to clamber out. But if you place it gently in a pot of tepid water and turn up the heat, it will be slowly boiled to death. The performance issues worsened day by day without a single cause. The data size on the MySQL master increased over time. Sensitive data were obfuscated with custom views. MySQL's optimiser materialised the views on the fly for each query. The analytics tools struggled even under normal load. In busy periods the database became almost unusable. Analysts were busy tuning existing queries rather than writing new ones. The replica had several conflicts with the analytics queries. A new solution was needed.
  9. The eye of the storm
  10. One size doesn't fit all It was clear that MySQL

    was no longer a good fit. However, the new solution had to meet some specific needs. Data updated in almost real time from the live database. Personally identifiable information (PII) obfuscated for the analysts. PII available in clear for the power users. The system should be able to scale out for several years. Modern SQL for better analytics queries.
  12. May the best database win The analyst team shortlisted a few

    solutions, each of which partially covered the requirements: Google BigQuery, Amazon RedShift, Snowflake, PostgreSQL. Google BigQuery and Amazon RedShift did not satisfy the analytics requirements and were removed from the list. Both PostgreSQL and Snowflake offered very good performance and modern SQL. Neither of them offered a replication system from the MySQL source.
  14. Straight into the cloud Snowflake is a cloud based data

    warehouse service. It's based on Amazon S3 and comes in different sizings. Its pricing system is very appealing and the preliminary tests showed Snowflake outperforming PostgreSQL1. Using Fivetran, an impressive multi technology data pipeline, the data would flow in real time from our production server to Snowflake. Unfortunately there was just one little catch: there was no support for obfuscation. 1 PostgreSQL single machine vs cloud based parallel processing
  15. Customer comes first At Transferwise we really care about the

    customers' data security. Our policy for PII data is that any personal information moving outside our perimeter shall be obfuscated.
  16. Proactive development The DBA sense tingled. I foresaw the need

    for a MySQL to PostgreSQL replica, and in my spare time I built a proof of concept derived from pg_chameleon, a replica tool I was playing with. The initial tests on a reduced dataset were successful. Adding the obfuscation in real time required minimal changes.
  18. And the winner is... The initial idea was to use

    PostgreSQL to perform the obfuscation and then transfer the data to Snowflake using Fivetran. However, PostgreSQL proved to have good performance with a good margin for scaling up. The final decision was to keep the analytics data behind our perimeter.
  19. How we did it
  20. Replica and obfuscation I built a minimum viable product using

    pg_chameleon. The project was forked into a Transferwise repository, then I added the obfuscation capabilities. As this project adds some custom needs on top of the pure replica provided by pg_chameleon, we decided to rename it pg_ninja.
  21. Mighty morphing power elephant The replica initialisation locks the MySQL

    tables in read only mode. To avoid locking the main database for several hours we used a secondary MySQL replica with local query logging enabled. On this replica we also enabled the ROW binlog format required by pg_ninja. On the master we still have the MIXED strategy.
  22. Swiss knife SQL The procedure pulls the data out of

    MySQL in CSV format for a fast load into PostgreSQL with the COPY command. Unfortunately MySQL doesn't have the COPY command.

    SELECT
        CASE
            WHEN data_type="enum"
            THEN SUBSTRING(COLUMN_TYPE,5)
        END AS enum_list,
        CASE
            WHEN data_type IN ('"""+"','".join(self.hexify)+"""')
            THEN concat('hex(',column_name,')')
            WHEN data_type IN ('bit')
            THEN concat('cast(`',column_name,'` AS unsigned)')
            ELSE concat('`',column_name,'`')
        END AS column_csv
    FROM information_schema.COLUMNS
    WHERE table_schema=%s
      AND table_name=%s
    ORDER BY ordinal_position;
  23. Combining the pieces The auto-generated column list is then

    combined into a MySQL specific expression which generates the CSV output.

    for column in table_columns:
        column_list.append("COALESCE(REPLACE(" + column["column_csv"] + ", '\"', '\"\"'),'NULL') ")
    columns_csv = "REPLACE(CONCAT('\"',CONCAT_WS('\",\"'," + ','.join(column_list) + "),'\"'),'\"NULL\"','NULL')"
    csv_data = ""
    sql_out = "SELECT " + columns_csv + " as data FROM " + table_name + ";"
    self.mysql_con.connect_db_ubf()
    try:
        self.logger.debug("Executing query for table %s" % (table_name, ))
        self.mysql_con.my_cursor_ubf.execute(sql_out)
    except:
        self.logger.debug("error when pulling the data from the table %s - sql executed: %s" % (table_name, sql_out))
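As a standalone illustration, the query-building step above can be sketched as a pure function. This is a hypothetical simplification: build_csv_select and the sample column names are not pg_ninja identifiers.

```python
def build_csv_select(table_name, column_csv_list):
    """Build the SELECT that makes MySQL emit one CSV-formatted string
    per row: each column is quoted, embedded double quotes are doubled,
    and NULLs end up unquoted so COPY treats them as NULL.
    Hypothetical simplification of the pg_ninja snippet above."""
    column_list = [
        "COALESCE(REPLACE(%s, '\"', '\"\"'),'NULL')" % column_csv
        for column_csv in column_csv_list
    ]
    columns_csv = (
        "REPLACE(CONCAT('\"',CONCAT_WS('\",\"'," + ",".join(column_list)
        + "),'\"'),'\"NULL\"','NULL')"
    )
    return "SELECT " + columns_csv + " as data FROM " + table_name + ";"

# Example: two plain columns of a hypothetical user table
sql_out = build_csv_select("user", ["`id`", "`email`"])
```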
  24. Fallback on failure The CSV data is pulled out in

    slices in order to avoid memory overload. The file is then pushed into PostgreSQL using the COPY command. However... COPY is fast but is a single transaction. One failure and the entire batch is rolled back. If this happens the procedure loads the same data using INSERT statements, which can be very slow, but at least discards only the problematic rows.
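The COPY-then-INSERT fallback can be sketched as follows. copy_batch and insert_row are hypothetical stand-ins for the actual database calls, since the real loader needs a live PostgreSQL connection; the control flow is the point.

```python
def load_batch(batch, copy_batch, insert_row):
    """Try the fast all-or-nothing COPY first; if the whole batch is
    rolled back, replay it row by row with INSERT and collect only the
    rows that fail. Sketch of the strategy, not pg_ninja's actual code."""
    try:
        copy_batch(batch)          # fast path: single transaction
        return []
    except Exception:
        discarded = []             # slow path: row-by-row INSERT
        for row in batch:
            try:
                insert_row(row)
            except Exception:
                discarded.append(row)
        return discarded

# Simulated loaders: the batch contains one malformed row
def copy_batch(batch):
    if any(row == "BAD" for row in batch):
        raise ValueError("batch rolled back")

def insert_row(row):
    if row == "BAD":
        raise ValueError("bad row")

discarded = load_batch(["good1", "BAD", "good2"], copy_batch, insert_row)
```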
  25. Obfuscation setup A simple yaml file is used to list

    table, column and obfuscation strategy:

    user_details:
        last_name:
            mode: normal
            nonhash_start: 0
            nonhash_length: 0
        phone_number:
            mode: normal
            nonhash_start: 1
            nonhash_length: 2
        date_of_birth:
            mode: date
  26. Obfuscation when initialising The obfuscation process is quite simple and

    uses the pgcrypto extension for hashing in sha256. When the replica is initialised the data is copied into the schema in clear. The table locks are released. The tables with personally identifiable information (PII) are copied and obfuscated in a separate schema. The process builds the indices on the schemas with data in clear and obfuscated. The tables without PII data are exposed to the normal users using simple views. All the varchar fields in the obfuscated schema are converted into text fields.
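A minimal sketch of the hashing strategy, using Python's hashlib in place of pgcrypto. The handling of nonhash_start/nonhash_length is an assumption based on the configuration fields shown on the previous slide, not pg_ninja's verified semantics.

```python
import hashlib

def obfuscate_normal(value, nonhash_start=0, nonhash_length=0):
    """Replace a PII value with its sha256 hex digest, optionally
    keeping a short substring in clear (assumed meaning of the
    nonhash_start / nonhash_length settings)."""
    digest = hashlib.sha256(value.encode("utf-8")).hexdigest()
    if nonhash_length > 0:
        clear_part = value[nonhash_start:nonhash_start + nonhash_length]
        return clear_part + digest
    return digest

# A fully hashed value is always 64 hex characters
hashed = obfuscate_normal(u"foo@bar.com")
```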
  27. Obfuscation on the fly The obfuscation is also applied when

    the data is replicated. The approach is very simple. When a row image is captured the process checks whether the table contains PII data. In that case the process generates a second jsonb element with the PII data obfuscated.
  28. Obfuscation on the fly

    {'global_data': {
        'binlog': u'mysql-bin.000227',
        'logpos': 1543,
        'action': 'update',
        'batch_id': 2L,
        'table': u'user',
        'log_table': 't_log_replica_2',
        'schema': 'sch_clear'
        },
     'event_data': {
        u'email': u'foo@bar.com'
        }
    }

    {'global_data': {
        'binlog': u'mysql-bin.000227',
        'logpos': 1543,
        'action': 'update',
        'batch_id': 2L,
        'table': u'user',
        'log_table': 't_log_replica_2',
        'schema': 'sch_obf'
        },
     'event_data': {
        u'email': u'2bc5aa7720b6a3462cdf8c1ae25ed8dc45b1d9e1b0cd960aa15ac72acfe20433'
        }
    }
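The obfuscated twin can be derived from the clear row image with a small transformation. This is a hypothetical sketch (obfuscate_row_image and pii_columns are not pg_ninja names), hashing with hashlib rather than pgcrypto:

```python
import copy
import hashlib

def obfuscate_row_image(row_image, pii_columns):
    """Return a twin of the captured row image targeting the obfuscated
    schema, with the listed PII columns replaced by their sha256 digest."""
    obfuscated = copy.deepcopy(row_image)
    obfuscated["global_data"]["schema"] = "sch_obf"
    for column in pii_columns:
        value = obfuscated["event_data"].get(column)
        if value is not None:
            obfuscated["event_data"][column] = hashlib.sha256(
                value.encode("utf-8")).hexdigest()
    return obfuscated

clear_image = {
    "global_data": {"schema": "sch_clear", "table": "user",
                    "action": "update"},
    "event_data": {"email": "foo@bar.com"},
}
obf_image = obfuscate_row_image(clear_image, ["email"])
```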
  29. Maximum effort
  30. Timing

    Query                      MySQL            PostgreSQL   PostgreSQL cached
    ETL daily procedure        20 hours         4 hours      N/A
    Extracting sharing ibans2  didn't complete  3 minutes    1 minute
    Adyen notification3        6 minutes        2 minutes    6 seconds

    2 small table with complex aggregations
    3 big table scan with simple filters
  32. Resource comparison

    Initial setup
    Resource         MySQL    PostgreSQL
    Storage Size     940 GB   664 GB
    Server CPUs      18       8
    Server Memory    68 GB    48 GB
    Shared Memory    50 GB    5 GB
    Max connections  500      100

    Current setup (after the entire company knew of the new faster analytics db)
    Resource         MySQL    PostgreSQL
    Storage Size     940 GB   1.2 TB
    Server CPUs      18       24
    Server Memory    68 GB    48 GB
    Shared Memory    50 GB    5 GB
    Max connections  500      500
  34. Advantages using PostgreSQL Stronger security model. Better resource optimisation (see

    previous slide). No invalid views. No performance issues with views. Complex analytics functions. Partitioning (thanks pg_pathman!). BRIN indices. "Some code was optimised inside, but actually very little - maybe 10-20% was improved. We'll do more of that in the future, but not yet. The good thing is that the performance gains we have can mostly be attributed just to PG vs MySQL. So there's a lot of scope to improve further." Jeff McClelland - Growth Analyst, data guru
  35. Lessons learned
  36. Strictness is an illusion. MySQL doubly so MySQL's lack of

    strictness is not a mystery. The replica broke down several times because of the funny way NOT NULL is managed by MySQL. To prevent any further replica breakdown, fields added as NOT NULL with ALTER TABLE are always created as NULLable in PostgreSQL. MySQL automatically truncates character strings at the varchar size. This is a problem if the field is obfuscated on PostgreSQL, because the hashed string may not fit into the corresponding varchar field. Therefore all the character varying fields on the obfuscated schema are converted to text.
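The varchar overflow is easy to demonstrate: a sha256 hex digest is always 64 characters regardless of the input length, so it cannot fit any shorter varchar column. A quick check (the varchar(30) width is just an illustrative example):

```python
import hashlib

# However short the original value, its sha256 hex digest is 64
# characters, so e.g. a varchar(30) column in the obfuscated schema
# would overflow once the value is hashed.
short_value = u"Bob"
digest = hashlib.sha256(short_value.encode("utf-8")).hexdigest()
```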
  37. I feel your lack of constraint disturbing Rubbish data can

    be stored in MySQL without the DBMS raising errors. Evidence video #1 - Why you shouldn't use MySQL. When this happens the replicator traps the error and discards the problematic row. The failed row image is hexified and saved in the table t_discarded_rows for further analysis.
  40. The DDL. A real pain in the back The DDL

    replica is possible with the usage of regular expressions. Yes, I know. I have 2 problems now! MySQL, even in ROW format, emits the DDL as statements. The class sql_token uses regular expressions to tokenise the DDL. The tokenised data is used to build the DDL in the PostgreSQL dialect.
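A toy illustration of regex-based DDL tokenisation. pg_ninja's sql_token class is far more complete; the pattern, the ddl_to_postgres helper, and the int-to-integer type mapping below are illustrative assumptions, not the project's actual code.

```python
import re

# Toy pattern: recognise ALTER TABLE ... ADD COLUMN statements only.
DDL_PATTERN = re.compile(
    r"ALTER\s+TABLE\s+`?(?P<table>\w+)`?\s+"
    r"ADD\s+COLUMN\s+`?(?P<column>\w+)`?\s+(?P<mysql_type>\w+(?:\(\d+\))?)",
    re.IGNORECASE,
)

# Illustrative type mapping from the MySQL dialect to PostgreSQL.
TYPE_MAP = {"int": "integer", "datetime": "timestamp without time zone"}

def ddl_to_postgres(statement):
    """Tokenise a MySQL DDL statement and rebuild it in the PostgreSQL
    dialect. Returns None when the statement is not recognised."""
    match = DDL_PATTERN.search(statement)
    if match is None:
        return None
    tokens = match.groupdict()
    # Drop the MySQL display width, e.g. int(11) -> int, then remap.
    base_type = re.sub(r"\(\d+\)", "", tokens["mysql_type"]).lower()
    pg_type = TYPE_MAP.get(base_type, base_type)
    return 'ALTER TABLE "%s" ADD COLUMN "%s" %s' % (
        tokens["table"], tokens["column"], pg_type)

pg_ddl = ddl_to_postgres("ALTER TABLE `user` ADD COLUMN `age` int(11)")
```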
  41. Wrap up
  42. pg_ninja, the replica and obfuscation tool pg_ninja is

    now open source. You can download the tool from github.
    github: https://github.com/transferwise/pg_ninja
    pypi coming soon: https://pypi.python.org/pypi/pg_ninja
    License: Apache 2.0
    Limitations: Python 2.7 only. No daemon. Replica requires primary keys. Still optimising the newest additions (add table, sync obfuscation etc.). PRs absolutely welcome!
  43. Did you say hire? WE ARE HIRING! https://transferwise.com/jobs/
  44. That's all folks! Thank you for listening!
  45. Contacts and license Twitter: 4thdoctor_scarf Transferwise: https://transferwise.com/ Blog: http://www.pgdba.co.uk

    This document is distributed under the terms of the Creative Commons
  46. Boring legal stuff The 4th doctor meme - source memecrunch.com

    The eye, phantom playground, light end tunnel - Copyright Federico Campoli. The dolphin picture - Copyright artnoose. It could work. Young Frankenstein - source quickmeme. Tron image - source Tron Wikia. Deadpool Maximum Effort - source Deadpool Zoeiro. Pinkie Pie that's all folks - Copyright dan232323, used with permission.