
How to Set Up Your First Slony Replication Cluster

Richard Yen
November 03, 2010

Presented at PG West 2010 in San Francisco. Demo of Slony-I, a master-to-multiple-slaves replication engine for Postgres. It replicates DML and DDL changes from one machine to any number of machines in your database cluster. Slony also supports cascading replication architectures, meaning that you can basically have a slave that replicates data changes to another slave. Note that this is NOT master-master replication. It's more like parent-child-grandchild replication. Finally, Slony provides a couple of features that make failover and High Availability possible for the database cluster.

Transcript

  1. What is Slony?
     - Asynchronous master-to-multiple-slaves replication engine for PostgreSQL
     - Support for cascading
     - Support for failover
     - Very active project (updates approx. every 4 months)
  2. What Can Slony Do?
     - Replicate any number of tables in the database
     - Supports multiple replication sets
     - Supports cascaded replication
     - Provides log-shipping
     - Allows seamless upgrades and maintenance of the database cluster
  3. Slony Limitations
     - Automated DDL replication
     - Master-master replication
     - Synchronous replication
     - Changes to large objects (BLOBs)
     - Changes to users and roles
     - Changes made by the TRUNCATE command
  4. WAL-based replication may be easier to set up than Slony
     - WAL-based replication automatically propagates DDL changes
     - WAL-based replication propagates ALL changes in binary form
       - Makes upgrades difficult
       - Can’t carve up your schema
     - What about Postgres 9.0?
  5. Replication Architecture
     - Node - a Postgres instance that participates in replication
     - Replication Set - a set of tables to be replicated
     - Cluster - several Postgres instances among which replication takes place
  6. Replication Architecture
     - Origin - the node where data changes are permitted; also called the “master provider”
     - Subscriber - any node that receives instructions about data changes
     - Provider - any node that tells other nodes of data changes
     - Note that a Subscriber can also be a Provider
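
     For illustration, a minimal slonik sketch that defines a two-node cluster along these lines. The cluster name (demo_cluster), connection strings, and comments are placeholders invented for this example, not values from the talk:

        # define_cluster.slonik -- placeholder names and conninfo strings
        cluster name = demo_cluster;

        node 1 admin conninfo = 'dbname=demo host=master-host user=slony';
        node 2 admin conninfo = 'dbname=demo host=slave-host user=slony';

        # node 1 is the origin; node 2 will be a subscriber
        init cluster (id = 1, comment = 'Origin node');
        store node (id = 2, comment = 'Subscriber node', event node = 1);

        # paths tell the slon daemons how to reach each other's databases
        store path (server = 1, client = 2, conninfo = 'dbname=demo host=master-host user=slony');
        store path (server = 2, client = 1, conninfo = 'dbname=demo host=slave-host user=slony');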
  7. slon and slonik
     - slon - daemon that processes SYNCs and notifies each node about what’s going on
     - slonik - app that processes changes to the cluster and replication sets
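
     A rough sketch of how the two fit together, reusing the made-up demo_cluster names from the previous example:

        # one slon daemon per node, each given the cluster name and that node's conninfo
        slon demo_cluster 'dbname=demo host=master-host user=slony' &
        slon demo_cluster 'dbname=demo host=slave-host user=slony' &

        # cluster and replication-set changes are written as a script and fed to slonik
        slonik define_cluster.slonik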
  8. slon Configuration
     - sync_interval - amount of time between checks for data changes
     - sync_interval_timeout - interval after which a SYNC event is generated anyway, to keep subscribers up-to-date
     - sync_group_maxsize - maximum number of SYNCs to group together in case of lag
     - desired_sync_time - tells slon to keep SYNC groups small enough to finish each group in this amount of time
  9. slon Configuration
     - vac_frequency - interval at which to perform maintenance on Slony-related tables
     - archive_dir - path where Slony puts log-shipping files
     - lag_interval - amount of time to keep a subscriber behind its provider
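
     A hypothetical slon configuration file using the parameters above might look like this; the values and paths are purely illustrative, not recommendations:

        # slon.conf -- illustrative values only
        sync_interval = 2000             # ms between checks for data changes
        sync_interval_timeout = 10000    # generate a SYNC after this many ms regardless
        sync_group_maxsize = 20          # max number of SYNCs grouped together when lagging
        desired_sync_time = 60000        # keep SYNC groups small enough to finish in this many ms
        vac_frequency = 3                # how often to run maintenance on Slony tables
        archive_dir = '/var/lib/slony/archive'   # where log-shipping files are written
        lag_interval = '10 minutes'      # keep this subscriber deliberately behind its provider

     The daemon would then be started with something like: slon -f slon.conf demo_cluster 'dbname=demo host=slave-host user=slony'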
  10. Helper Scripts
     - altperl scripts: look in the tools/altperl directory of your Slony tarball
       - create slonik scripts for you
       - take a bit of work to set up
     - slony1-ctl (http://pgfoundry.org/projects/slony1-ctl/)
       - similar to the pre-packaged altperl scripts
       - a LOT easier to use
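
     Assuming the stock altperl tools with a slon_tools.conf already filled in, the workflow is roughly the following sketch; the node and set numbers are placeholders:

        # generate and run the slonik commands that initialize the cluster
        slonik_init_cluster | slonik

        # start a slon daemon for each node defined in slon_tools.conf
        slon_start 1
        slon_start 2

        # create replication set 1 and subscribe node 2 to it
        slonik_create_set 1 | slonik
        slonik_subscribe_set 1 2 | slonik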
  11. Demo
     - Each table to be replicated needs a primary key
     - Need to establish the schema at each node
     - Data WILL be erased on subscribers
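
     A hedged sketch of the demo's core steps in plain slonik, continuing the made-up demo_cluster example (the table, database, and host names are placeholders):

        # the schema must already exist on every node, e.g. copied with:
        #   pg_dump -s -h master-host demo | psql -h slave-host demo

        create set (id = 1, origin = 1, comment = 'demo tables');

        # every table added to the set needs a primary key
        set add table (set id = 1, origin = 1, id = 1,
                       fully qualified name = 'public.accounts',
                       comment = 'accounts table');

        # the initial copy erases and replaces existing rows on the subscriber
        subscribe set (id = 1, provider = 1, receiver = 2, forward = no);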
  12. Things to watch out for
     - Keep an eye on sl_status.st_lag_time
     - DDL may be necessary at times, but try to use EXECUTE SCRIPT instead of issuing DDL by hand
     - Got Questions? Send email to the mailing list: [email protected]
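
     Two hedged examples for this slide, again using the invented _demo_cluster schema name (Slony keeps its tables and views in a schema named after the cluster, prefixed with an underscore). First, checking replication lag on the origin:

        -- how far behind is each subscriber?
        SELECT st_origin, st_received, st_lag_num_events, st_lag_time
          FROM _demo_cluster.sl_status;

     And pushing a DDL change through slonik so it is applied to every node in the correct order (the script path is a placeholder):

        execute script (set id = 1, filename = '/path/to/alter_accounts.sql', event node = 1);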
  13. Q&A