
Introduction to Percona XtraDB Cluster and HAProxy

Bo-Yi Wu
April 12, 2014


Transcript

  1. Introduction to Percona XtraDB Cluster and HAProxy
     2014.04.12
     Bo-Yi Wu (appleboy)
  2. About me
     • GitHub: @appleboy
     • Twitter: @appleboy
     • Blog: http://blog.wu-boy.com
  3. Agenda
     • About Percona XtraDB Cluster
     • Install the first node of the cluster
     • Install subsequent nodes to the cluster
     • Install HAProxy on the application server
     • Testing with a real-world application
  4. Percona XtraDB Cluster
     • Synchronous replication
     • Multi-master replication
     • Parallel applying on slaves
     • Data consistency
     • Automatic node provisioning
  5. Parallel apply: XtraDB Cluster
     • Write with N threads, apply with N threads
  6. 3 nodes is the minimal recommended configuration
     • >= 3 nodes are needed for quorum purposes
  7. Only InnoDB tables are supported
     • MyISAM support is limited
  8. State Transfer
     • Full data: SST
       – New node
       – Node disconnected for a long time
     • Incremental: IST
       – Node disconnected for a short time
  9. Snapshot State Transfer (SST)
     • mysqldump
       – Small databases
     • rsync
       – Donor is disconnected for the whole copy time
       – Faster
     • XtraBackup
       – Donor is disconnected only for a short time
       – Slower
  10. Incremental State Transfer (IST)
      • Node was previously in the cluster
        – Disconnected for maintenance
        – Node crashed
  11. $ yum -y install \
          Percona-XtraDB-Cluster-server \
          Percona-XtraDB-Cluster-client \
          Percona-Server-shared-compat \
          percona-xtrabackup
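      This install assumes the Percona yum repository is already enabled on the node; a quick sanity check before and after the install could look like this (a sketch, not part of the original deck):

      $ yum repolist enabled | grep -i percona        # confirm the Percona repo is visible
      $ rpm -qa | grep -i -e Percona -e xtrabackup    # confirm the packages landed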
  12. • wsrep_cluster_address=gcomm://
        – Initializes a new cluster (first node only)
      • wsrep_cluster_address=gcomm://<IP addr>,<IP addr>,<IP addr>
        – Default port: 4567
  13. Don’t use wsrep_urls
      • wsrep_urls is deprecated since version 5.5.28
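      As a hedged sketch of the migration (IP addresses are placeholders): the deprecated option listed peers under [mysqld_safe], while the replacement is wsrep_cluster_address under [mysqld]:

      # Deprecated since 5.5.28:
      [mysqld_safe]
      wsrep_urls=gcomm://192.168.1.100:4567,gcomm://192.168.1.101:4567

      # Current style:
      [mysqld]
      wsrep_cluster_address=gcomm://192.168.1.100,192.168.1.101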
  14. Configuring the first node
      [mysqld]
      wsrep_provider=/usr/lib64/libgalera_smm.so
      wsrep_cluster_address="gcomm://"
      wsrep_sst_auth=username:password
      wsrep_provider_options="gcache.size=4G"
      wsrep_cluster_name=Percona
      wsrep_sst_method=xtrabackup
      wsrep_node_name=db_01
      wsrep_slave_threads=4
      log_slave_updates
      innodb_locks_unsafe_for_binlog=1
      innodb_autoinc_lock_mode=2
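      A minimal sketch of bringing this first node up with the 5.5-era init script (the empty gcomm:// address above is what makes the node bootstrap a new cluster):

      $ service mysql start
      $ mysql -e "show global status like 'wsrep_cluster_size';"    # expect 1 on a fresh cluster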
  15. Configuring subsequent nodes
      [mysqld]
      wsrep_provider=/usr/lib64/libgalera_smm.so
      wsrep_cluster_address="gcomm://xxxx,xxxx"
      wsrep_sst_auth=username:password
      wsrep_provider_options="gcache.size=4G"
      wsrep_cluster_name=Percona
      wsrep_sst_method=xtrabackup
      wsrep_node_name=db_02
      wsrep_slave_threads=4
      log_slave_updates
      innodb_locks_unsafe_for_binlog=1
      innodb_autoinc_lock_mode=2
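      Note that wsrep_node_name must be unique per node (db_02 above is an example). The wsrep_sst_auth credentials also have to exist on the donor with enough privileges for the XtraBackup SST; a commonly used grant, keeping the username:password placeholders from the config, is sketched below:

      mysql> GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'username'@'localhost' IDENTIFIED BY 'password';
      mysql> FLUSH PRIVILEGES;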
  16. Monitoring MySQL status
      show global status like 'wsrep%';
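      The same check can be run from the shell and narrowed to the variables covered on the next two slides (a sketch; credentials omitted):

      $ mysql -e "show global status like 'wsrep%';" \
          | egrep 'wsrep_cluster_size|wsrep_cluster_status|wsrep_ready|wsrep_local_state_comment'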
  17. Cluster integrity
      • wsrep_cluster_size
        – Number of active nodes
      • wsrep_cluster_conf_id
        – Configuration version
      • wsrep_cluster_status
        – Should be “Primary”
  18. Node status
      • wsrep_ready
        – Should be “ON”
      • wsrep_local_state_comment
        – Status message
      • wsrep_local_send_queue_avg
        – Possible network bottleneck
      • wsrep_flow_control_paused
        – Replication lag
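      A hedged example of watching the two lag-related counters over time; rising values usually point at a slow or network-constrained node:

      $ while true; do
          mysql -e "show global status where Variable_name in
                    ('wsrep_flow_control_paused','wsrep_local_send_queue_avg');"
          sleep 1
        done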
  19. Realtime wsrep status
      $ ./myq_status -t 1 -h 127.0.0.1 wsrep
      Percona / db_03 / Galera 2.8(r165)
      Wsrep    Cluster      Node     Queue  Ops     Bytes     Flow     Conflct          PApply  Commit
      time     P cnf #  cmt sta   Up Dn  Up  Dn  Up   Dn   p_ms snt  lcf bfa  dst oooe oool wind
      11:47:39 P  73 3  Sync T/T   0  0   5 356 30K 149K    0.0   0    0   0  125    0    0    0
      11:47:40 P  73 3  Sync T/T   0  0   0   0   0    0    0.0   0    0   0  125    0    0    0
      11:47:41 P  73 3  Sync T/T   0  0   0   0   0    0    0.0   0    0   0  125    0    0    0
      11:47:42 P  73 3  Sync T/T   0  0   0   0   0    0    0.0   0    0   0  125    0    0    0
      11:47:43 P  73 3  Sync T/T   0  0   0   0   0    0    0.0   0    0   0  125    0    0    0
      11:47:44 P  73 3  Sync T/T   0  0   0   0   0    0    0.0   0    0   0  125    0    0    0
      11:47:45 P  73 3  Sync T/T   0  0   0   3   0 1.1K    0.0   0    0   0  126   67    0    1
      11:47:46 P  73 3  Sync T/T   0  0   0   2   0  994    0.0   0    0   0  126    0    0    0
  20. Cluster replication
      • Before the source node returns from commit
        – The trx is certified on all other nodes
      • Nodes reject the trx on locking conflicts
      • The commit succeeds only if there is no conflict on any node
  21. (Diagram) Node 1: tx source; Node 2: accepted; Node 3: certify fails
      • Client 1 and Client 2 both run: update t set col = '12' where id = '1'
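      When certification fails, the losing client gets a standard deadlock error and should simply retry; a sketch of what that looks like (error text can vary slightly by version):

      mysql> update t set col = '12' where id = '1';
      ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction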
  22. Write to all nodes
      • Increases the number of deadlock errors
  23. How to avoid deadlocks
      • Write to only one node
        – All pessimistic locking happens on one node
      • Different nodes can handle writes for different datasets
        – Different databases, tables, rows, etc.
  24. Application to cluster
      • For writes
        – Best practice: single node
      • For reads
        – Load balance across all nodes
        – glbd (Galera Load Balancer)
        – HAProxy
  25. frontend pxc-front
          bind *:3307
          mode tcp
          default_backend pxc-back
      backend pxc-back
          mode tcp
          balance leastconn
          option httpchk
          server db1 192.168.1.100:3306 check port 9200 inter 12000 rise 3 fall 3
          server db2 192.168.1.101:3306 check port 9200 inter 12000 rise 3 fall 3
          server db3 192.168.1.102:3306 check port 9200 inter 12000 rise 3 fall 3
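      The check on port 9200 assumes the Percona clustercheck script is exposed through xinetd on every database node; a typical /etc/xinetd.d/mysqlchk entry (shown here as a sketch of the one shipped with the PXC packages) looks roughly like this:

      service mysqlchk
      {
          disable         = no
          flags           = REUSE
          socket_type     = stream
          port            = 9200
          wait            = no
          user            = nobody
          server          = /usr/bin/clustercheck
          log_on_failure += USERID
          only_from       = 0.0.0.0/0
          per_source      = UNLIMITED
      }

      The service name is typically also added to /etc/services (mysqlchk 9200/tcp) and xinetd restarted, so HAProxy's option httpchk receives HTTP 200 from healthy, synced nodes.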
  26. frontend pxc-onenode-front
          bind *:3308
          mode tcp
          default_backend pxc-onenode-back
      backend pxc-onenode-back
          mode tcp
          balance leastconn
          option httpchk
          server db1 192.168.1.100:3306 check port 9200 inter 12000 rise 3 fall 3
          server db2 192.168.1.101:3306 check port 9200 inter 12000 rise 3 fall 3 backup
          server db3 192.168.1.102:3306 check port 9200 inter 12000 rise 3 fall 3 backup
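      With both frontends in place the application can split traffic by port: load-balanced reads on 3307, single-writer traffic on 3308 (host and credentials below are placeholders):

      $ mysql -h 127.0.0.1 -P 3307 -u app -p -e "select @@hostname;"   # read: any healthy node
      $ mysql -h 127.0.0.1 -P 3308 -u app -p -e "select @@hostname;"   # write: db1 unless it is down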
  27. Application server
      • CentOS 6 base installation
      • EPEL repo added
      • HAProxy installed from the EPEL repo
      • sysbench 0.5 package
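      A hedged sysbench 0.5 run against the single-writer port; the OLTP Lua script path and the connection parameters are assumptions and vary by package:

      $ sysbench --test=/usr/share/doc/sysbench/tests/db/oltp.lua \
          --mysql-host=127.0.0.1 --mysql-port=3308 \
          --mysql-user=app --mysql-password=secret \
          --oltp-table-size=1000000 prepare
      $ sysbench --test=/usr/share/doc/sysbench/tests/db/oltp.lua \
          --mysql-host=127.0.0.1 --mysql-port=3308 \
          --mysql-user=app --mysql-password=secret \
          --num-threads=16 --max-time=60 --max-requests=0 run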