Slony-I 1.1.5_RC2 Documentation
In the altperl directory in the CVS tree, there is a sizable set of Perl scripts that may be used to administer a set of Slony-I instances, which support having arbitrary numbers of nodes.
Most of them generate Slonik scripts that are then passed to the slonik utility for submission to all of the Slony-I nodes in a particular cluster. At one time, the scripts ran slonik on the generated Slonik scripts themselves. Unfortunately, that turned out to be a pretty large-calibre "foot gun": minor typos on the command line led, on a couple of occasions, to pretty calamitous actions, so the behavior has been changed so that the scripts simply write their output to standard output. The savvy administrator should review each script before submitting it to slonik.
The UNIX environment variable SLONYNODES is used to determine what Perl configuration file will be used to control the shape of the nodes in a Slony-I cluster.
The following variables are set up in that file:
```perl
$CLUSTER_NAME = 'orglogs';        # What is the name of the replication cluster?
$LOGDIR = '/opt/OXRS/log/LOGDBS'; # What is the base directory for logs?
$APACHE_ROTATOR = "/opt/twcsds004/OXRS/apache/rotatelogs"; # If set, where to find the Apache log rotator
```

foldCase
If set to 1, object names (including schema names) will be folded to lower case. By default, your object names will be left alone. Note that PostgreSQL itself folds unquoted object names to lower case; if you create a table via the command `CREATE TABLE SOME_THING (Id INTEGER, STudlYName text);`, all of those names are forced to lower case, equivalent to `create table some_thing (id integer, studlyname text);`, so the table name and, in this case, the column names will all, in fact, be lower case.
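Taken together, the top of a SLONYNODES configuration file might look like the following sketch (the paths and cluster name here are illustrative, not taken from a real installation):

```perl
# Sketch of the head of a nodes configuration file; all values are examples.
$CLUSTER_NAME = 'orglogs';           # replication cluster name
$LOGDIR       = '/var/log/slony';    # base directory for slon logs

# Optional: point at Apache's rotatelogs utility to rotate slon logs.
# $APACHE_ROTATOR = '/usr/sbin/rotatelogs';
```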
You then define the set of nodes that are to be replicated using a set of calls to add_node().
```perl
add_node(host => '10.20.30.40', dbname => 'orglogs', port => 5437,
         user => 'postgres', node => 4, parent => 1);
```
The parameters for add_node() are thus:
```perl
my %PARAMS = (host      => undef,       # Host name
              dbname    => 'template1', # database name
              port      => 5432,        # Port number
              user      => 'postgres',  # user to connect as
              node      => undef,       # node number
              password  => undef,       # password for user
              parent    => 1,           # which node is parent to this node
              noforward => undef,       # shall this node be set up to forward results?
              sslmode   => undef        # SSL mode argument - determines
                                        # priority of SSL usage:
                                        # disable, allow, prefer, require
);
```
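As a sketch, a two-node cluster could be declared with a pair of add_node() calls like the following; the host names, port, and node numbers are invented for illustration:

```perl
# Example node definitions for a nodes configuration file.
# add_node() is supplied by the altperl tool set; hosts and
# credentials below are placeholders.
add_node(node => 1, host => 'db1.example.com', dbname => 'orglogs',
         port => 5432, user => 'postgres');              # origin node
add_node(node => 2, host => 'db2.example.com', dbname => 'orglogs',
         port => 5432, user => 'postgres',
         parent => 1,                                    # fed from node 1
         sslmode => 'prefer');                           # try SSL first
```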
The UNIX environment variable SLONYSET is used to determine what Perl configuration file will be used to determine what objects will be contained in a particular replication set.
Unlike SLONYNODES, which is essential for all of the slonik-generating scripts, this only needs to be set when running create_set, as that is the only script used to control what tables will be in a particular replication set.
The following variables are set up in that file:
`$TABLE_ID = 44;`
Each table must be identified by a unique number; this variable controls where numbering starts.
`$SEQUENCE_ID = 17;`
Each sequence must be identified by a unique number; this variable controls where numbering starts.
`@PKEYEDTABLES`
An array of names of tables to be replicated that have a defined primary key, so that Slony-I can automatically select its key.
`%KEYEDTABLES`
A hash table of tables to be replicated, where the hash index is the table name and the hash value is the name of a unique not-null index suitable as a "candidate primary key."
`@SERIALTABLES`
An array of names of tables to be replicated that have no candidate for primary key. Slony-I will add a key field based on a sequence that Slony-I generates.
`@SEQUENCES`
An array of names of sequences that are to be replicated.
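A minimal SLONYSET configuration file using these variables might look like the following sketch; the table, index, and sequence names are invented for illustration:

```perl
# Example set definition; all object names are placeholders.
$TABLE_ID    = 1;            # start table numbering here
$SEQUENCE_ID = 1;            # start sequence numbering here

@PKEYEDTABLES = (
    'public.customers',      # has a primary key Slony-I can use
    'public.orders',
);
%KEYEDTABLES = (
    # table name => unique NOT NULL index to use as candidate primary key
    'public.order_lines' => 'order_lines_uniq_idx',
);
@SERIALTABLES = (
    'public.audit_log',      # no candidate key; Slony-I adds one
);
@SEQUENCES = (
    'public.orders_id_seq',
);
```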
Queries a database, generating output hopefully suitable for slon_tools.conf, consisting of:
- a set of add_node() calls to configure the cluster
- the arrays @KEYEDTABLES, @SERIALTABLES, and @SEQUENCES
This requires SLONYSET to be set as well as SLONYNODES; it is used to generate the slonik script to set up a replication set consisting of a set of tables and sequences that are to be replicated.
Generates Slonik script to drop a node from a Slony-I cluster.
Generates Slonik script to drop a replication set (e.g. - set of tables and sequences) from a Slony-I cluster.
Generates Slonik script to push DDL changes to a replication set.
Generates Slonik script to request failover from a dead node to some new origin.
Generates Slonik script to initialize a whole Slony-I cluster, including setting up the nodes, communications paths, and the listener routing.
Generates Slonik script to merge two replication sets together.
Generates Slonik script to move the origin of a particular set to a different node.
Script to test whether Slony-I is successfully replicating data.
Generates Slonik script to request the restart of a node. This was particularly useful pre-1.0.5 when nodes could get snarled up when slon daemons died.
Generates Slonik script to restart all nodes in the cluster. Not particularly useful.
Displays an overview of how the environment (e.g. - SLONYNODES) is set to configure things.
Kills slony watchdog and all slon daemons for the specified set. It only works if those processes are running on the local host, of course!
This starts a slon daemon for the specified cluster and node, and uses slon_watchdog to keep it running.
Used by slon_start.
This is a somewhat smarter watchdog; it monitors a particular Slony-I node, and restarts the slon process if it hasn't seen updates applied in 20 minutes or more.
This is helpful if there is an unreliable network connection such that the slon sometimes stops working without becoming aware of it.
Adds a node to an existing cluster.
Generates Slonik script to subscribe a particular node to a particular replication set.
This goes through and drops the Slony-I schema from each node; use this if you want to destroy replication throughout a cluster. This is a VERY unsafe script!
Generates Slonik script to unsubscribe a node from a replication set.
Generates Slonik script to tell all the nodes to update the Slony-I functions. This will typically be needed when you upgrade from one version of Slony-I to another.
This script connects to a Slony-I node, and queries various tables (sl_set, sl_node, sl_subscribe, sl_path) to compute what STORE LISTEN requests should be submitted to the cluster.
See the documentation in Section 8.3 for more details on how this works.