Slide 1

How to Set Up Your First Slony Replication Cluster
Richard Yen - iParadigms, LLC
PGWest 2010

Slide 2

What is Slony?
Asynchronous master-to-multiple-slaves replication engine for PostgreSQL
Support for cascading
Support for failover
Very active project (updates approx. every 4 months)

Slide 3

What Can Slony Do?
Replicate any number of tables in the database
Supports multiple replication sets
Supports cascaded replication
Provides log-shipping
Allows seamless upgrades and maintenance of the database cluster

Slide 4

What Can Slony Do?
Seamless switching between provider and subscriber nodes
Failover
Facilitated DDL propagation

Slide 5

Slony Limitations
Slony does not handle:
Automated DDL replication
Master-master replication
Synchronous replication
Changes to large objects (BLOBs)
Changes to users and roles
Changes made by the TRUNCATE command

Slide 6

WAL-based replication may be easier to set up than Slony
WAL-based replication automatically propagates DDL changes
WAL-based replication propagates ALL changes in binary form:
  Makes upgrades difficult
  Can't carve up your schema
What about Postgres 9.0?

Slide 7

Replication Architecture
Node - a Postgres instance that participates in replication
Replication Set - a set of tables to be replicated
Cluster - several Postgres instances among which replication takes place

Slide 8

Replication Architecture
Origin - the place where data changes are permitted; also called the "master provider"
Subscriber - any node that receives instructions about data changes
Provider - any node that tells other nodes of data changes
Note that a Subscriber can also be a Provider
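
A slonik preamble makes these terms concrete. A minimal sketch for a hypothetical two-node cluster (the cluster name, hosts, and database here are invented for illustration):

    # Every slonik script begins by naming the cluster and telling
    # slonik how to reach each node with an admin connection.
    cluster name = first_cluster;

    # Node 1 will be the origin (the only node accepting writes);
    # node 2 will be a subscriber, and could later provide to a node 3.
    node 1 admin conninfo = 'dbname=pgbench host=master-db user=slony';
    node 2 admin conninfo = 'dbname=pgbench host=slave-db user=slony';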

Slides 9-13

No text content (image-only slides).

Slide 14

slon and slonik
slon - daemon that processes SYNCs and notifies each node about what's going on
slonik - app that processes changes to the cluster and replication sets
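
A rough sketch of how the two fit together, reusing the hypothetical cluster from above: slon runs continuously, one daemon per node, while slonik is invoked ad hoc whenever the cluster or its replication sets change.

    # One slon daemon per node; the first argument is the cluster name,
    # the second is the conninfo string for that node's database.
    slon first_cluster 'dbname=pgbench host=master-db user=slony' &
    slon first_cluster 'dbname=pgbench host=slave-db user=slony' &

    # slonik reads configuration commands from a script on stdin.
    slonik < create_set.slonik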

Slide 15

slon Configuration
sync_interval - amount of time between checks for data changes
sync_interval_timeout - interval at which to generate a SYNC event anyway, to keep subscribers up-to-date
sync_group_maxsize - number of SYNCs to group together in case of lag
desired_sync_time - tells slon to keep SYNC groups small enough to finish each group in this amount of time
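
These options can be given on the slon command line or collected in a config file loaded with slon -f. A sketch with illustrative values only (the interval settings are in milliseconds), not tuning advice:

    # slon.conf (loaded with: slon -f slon.conf)
    sync_interval=2000            # look for data changes every 2 seconds
    sync_interval_timeout=10000   # emit a SYNC at least every 10 seconds anyway
    sync_group_maxsize=6          # batch at most 6 SYNCs when catching up
    desired_sync_time=60000       # size SYNC groups to finish within 60 seconds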

Slide 16

slon Configuration
vac_frequency - interval at which to perform maintenance on Slony-related tables
archive_dir - path where Slony puts log-shipping files
lag_interval - amount of time to keep a subscriber behind its provider
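
Continuing the same illustrative slon.conf sketch (path and interval are made up):

    vac_frequency=3                       # vacuum Slony tables after every 3 cleanup cycles
    archive_dir='/var/lib/slony/archive'  # write log-shipping files here
    lag_interval='1 minute'               # deliberately hold this subscriber 1 minute behind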

Slide 17

Helper Scripts
altperl scripts:
  look in the tools/altperl directory of your Slony tarball
  create slonik scripts for you
  take a bit of work to set up
slony1-ctl (http://pgfoundry.org/projects/slony1-ctl/):
  similar to the pre-packaged altperl scripts
  a LOT easier to use
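
For a flavor of the altperl workflow (a sketch: the scripts read slon_tools.conf, which you edit first; the node and set numbers are invented), each helper prints slonik commands that you pipe into slonik:

    slonik_init_cluster | slonik       # install the Slony schema on all nodes
    slon_start 1                       # start the slon daemon for node 1
    slon_start 2                       # start the slon daemon for node 2
    slonik_create_set 1 | slonik       # define replication set 1
    slonik_subscribe_set 1 2 | slonik  # subscribe node 2 to set 1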

Slide 18

Demo
Each table to be replicated needs a primary key
Need to establish schema at each node
Data WILL be erased on subscribers
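
A condensed sketch of the kind of slonik script such a demo runs, reusing the hypothetical preamble from above (table, host, and database names are invented):

    cluster name = first_cluster;
    node 1 admin conninfo = 'dbname=pgbench host=master-db user=slony';
    node 2 admin conninfo = 'dbname=pgbench host=slave-db user=slony';

    # Install the Slony schema on the origin and register the subscriber.
    init cluster (id = 1, comment = 'origin');
    store node (id = 2, comment = 'subscriber', event node = 1);

    # Tell each node how to reach the other.
    store path (server = 1, client = 2, conninfo = 'dbname=pgbench host=master-db user=slony');
    store path (server = 2, client = 1, conninfo = 'dbname=pgbench host=slave-db user=slony');

    # Group the tables to replicate; every table needs a primary key.
    create set (id = 1, origin = 1, comment = 'demo tables');
    set add table (set id = 1, origin = 1, id = 1,
                   fully qualified name = 'public.accounts');

    # Subscribing erases and re-copies the set's tables on node 2.
    subscribe set (id = 1, provider = 1, receiver = 2, forward = no);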

Slide 19

Things to watch out for
Keep an eye on sl_status.st_lag_time
DDL may be necessary at times, but use slonik's EXECUTE SCRIPT instead of issuing DDL by hand
Got Questions? Send email to the mailing list: [email protected]
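
To watch the lag, query the sl_status view in the cluster's schema (the _first_cluster schema name follows the hypothetical cluster name used above):

    -- One row per subscriber, with lag as both an event count and an interval.
    SELECT st_origin, st_received, st_lag_num_events, st_lag_time
    FROM _first_cluster.sl_status;

And propagating DDL through slonik instead of psql might look like this sketch (the filename, set id, and node id are invented):

    execute script (
        set id = 1,
        filename = '/tmp/add_column.sql',
        event node = 1
    );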

Slide 20

Q&A

Slide 21

The End