<!-- $PostgreSQL: pgsql/doc/src/sgml/backup.sgml,v 2.95.2.3 2007/09/29 01:36:19 tgl Exp $ -->
<chapter id="backup">
<title>Backup and Restore</title>

<indexterm zone="backup"><primary>backup</></>

<para>
As with everything that contains valuable data, <productname>PostgreSQL</>
databases should be backed up regularly. While the procedure is
essentially simple, it is important to have a basic understanding of
the underlying techniques and assumptions.
</para>

<para>
There are three fundamentally different approaches to backing up
<productname>PostgreSQL</> data:
<itemizedlist>
<listitem><para><acronym>SQL</> dump</para></listitem>
<listitem><para>File system level backup</para></listitem>
<listitem><para>Continuous archiving</para></listitem>
</itemizedlist>
Each has its own strengths and weaknesses.
</para>

<sect1 id="backup-dump">
<title><acronym>SQL</> Dump</title>

<para>
The idea behind this dump method is to generate a text file with SQL
commands that, when fed back to the server, will recreate the
database in the same state as it was at the time of the dump.
<productname>PostgreSQL</> provides the utility program
<xref linkend="app-pgdump"> for this purpose. The basic usage of this
command is:
<synopsis>
pg_dump <replaceable class="parameter">dbname</replaceable> > <replaceable class="parameter">outfile</replaceable>
</synopsis>
As you see, <application>pg_dump</> writes its results to the
standard output. We will see below how this can be useful.
</para>

<para>
<application>pg_dump</> is a regular <productname>PostgreSQL</>
client application (albeit a particularly clever one). This means
that you can do this backup procedure from any remote host that has
access to the database. But remember that <application>pg_dump</>
does not operate with special permissions. In particular, it must
have read access to all tables that you want to back up, so in
practice you almost always have to run it as a database superuser.
</para>

<para>
To specify which database server <application>pg_dump</> should
contact, use the command line options <option>-h
<replaceable>host</></> and <option>-p <replaceable>port</></>. The
default host is the local host or whatever your
<envar>PGHOST</envar> environment variable specifies. Similarly,
the default port is indicated by the <envar>PGPORT</envar>
environment variable or, failing that, by the compiled-in default.
(Conveniently, the server will normally have the same compiled-in
default.)
</para>

<para>
As with any other <productname>PostgreSQL</> client application,
<application>pg_dump</> will by default connect with the database
user name that is equal to the current operating system user name. To override
this, either specify the <option>-U</option> option or set the
environment variable <envar>PGUSER</envar>. Remember that
<application>pg_dump</> connections are subject to the normal
client authentication mechanisms (which are described in <xref
linkend="client-authentication">).
</para>
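<para>
For example, to dump a database on another host as a specific user
(the host and user names here are only illustrations), one might run:
<programlisting>
pg_dump -h db.example.com -p 5432 -U postgres mydb > mydb.sql
</programlisting>
</para>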
<para>
Dumps created by <application>pg_dump</> are internally consistent,
that is, updates to the database while <application>pg_dump</> is
running will not be in the dump. <application>pg_dump</> does not
block other operations on the database while it is working.
(Exceptions are those operations that need to operate with an
exclusive lock, such as <command>VACUUM FULL</command>.)
</para>

<important>
<para>
If your database schema relies on OIDs (for instance as foreign
keys) you must instruct <application>pg_dump</> to dump the OIDs
as well. To do this, use the <option>-o</option> command line
option.
</para>
</important>

<sect2 id="backup-dump-restore">
<title>Restoring the dump</title>

<para>
The text files created by <application>pg_dump</> are intended to
be read in by the <application>psql</application> program. The
general command form to restore a dump is
<synopsis>
psql <replaceable class="parameter">dbname</replaceable> < <replaceable class="parameter">infile</replaceable>
</synopsis>
where <replaceable class="parameter">infile</replaceable> is what
you used as <replaceable class="parameter">outfile</replaceable>
for the <application>pg_dump</> command. The database <replaceable
class="parameter">dbname</replaceable> will not be created by this
command, so you must create it yourself from <literal>template0</>
before executing <application>psql</> (e.g., with
<literal>createdb -T template0 <replaceable
class="parameter">dbname</></literal>). <application>psql</>
supports similar options to <application>pg_dump</> for specifying
the database server to connect to and the user name to use. See
the <xref linkend="app-psql"> reference page for more information.
</para>

<para>
Before restoring a SQL dump, all the users who own objects or were
granted permissions on objects in the dumped database must already
exist. If they do not, then the restore will fail to recreate the
objects with the original ownership and/or permissions.
(Sometimes this is what you want, but usually it is not.)
</para>
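<para>
One way to arrange this, assuming the old cluster is still accessible,
is to dump just the role and tablespace definitions with
<literal>pg_dumpall -g</> and restore that output first:
<programlisting>
pg_dumpall -g > globals.sql
psql -f globals.sql postgres
</programlisting>
</para>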
<para>
By default, the <application>psql</> script will continue to
execute after an SQL error is encountered. You may wish to use the
following command at the top of the script to alter that
behavior and have <application>psql</application> exit with an
exit status of 3 if an SQL error occurs:
<programlisting>
\set ON_ERROR_STOP
</programlisting>
Either way, you will only have a partially restored
dump. Alternatively, you can specify that the whole dump should be
restored as a single transaction, so the restore is either fully
completed or fully rolled back. This mode can be specified by
passing the <option>-1</> or <option>--single-transaction</>
command-line options to <application>psql</>. When using this
mode, be aware that even the smallest of errors can roll back a
restore that has already run for many hours. However, that may
still be preferable to manually cleaning up a complex database
after a partially restored dump.
</para>
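<para>
For example, to restore a dump as a single transaction:
<programlisting>
psql --single-transaction <replaceable class="parameter">dbname</replaceable> < <replaceable class="parameter">infile</replaceable>
</programlisting>
</para>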
<para>
The ability of <application>pg_dump</> and <application>psql</> to
write to or read from pipes makes it possible to dump a database
directly from one server to another; for example:
<programlisting>
pg_dump -h <replaceable>host1</> <replaceable>dbname</> | psql -h <replaceable>host2</> <replaceable>dbname</>
</programlisting>
</para>

<important>
<para>
The dumps produced by <application>pg_dump</> are relative to
<literal>template0</>. This means that any languages, procedures,
etc. added to <literal>template1</> will also be dumped by
<application>pg_dump</>. As a result, when restoring, if you are
using a customized <literal>template1</>, you must create the
empty database from <literal>template0</>, as in the example
above.
</para>
</important>

<para>
After restoring a backup, it is wise to run <xref
linkend="sql-analyze" endterm="sql-analyze-title"> on each
database so the query optimizer has useful statistics. An easy way
to do this is to run <command>vacuumdb -a -z</>; this is
equivalent to running <command>VACUUM ANALYZE</> on each database
manually. For more advice on how to load large amounts of data
into <productname>PostgreSQL</> efficiently, refer to <xref
linkend="populate">.
</para>
</sect2>

<sect2 id="backup-dump-all">
<title>Using <application>pg_dumpall</></title>

<para>
<application>pg_dump</> dumps only a single database at a time,
and it does not dump information about roles or tablespaces
(because those are cluster-wide rather than per-database).
To support convenient dumping of the entire contents of a database
cluster, the <xref linkend="app-pg-dumpall"> program is provided.
<application>pg_dumpall</> backs up each database in a given
cluster, and also preserves cluster-wide data such as role and
tablespace definitions. The basic usage of this command is:
<synopsis>
pg_dumpall > <replaceable>outfile</>
</synopsis>
The resulting dump can be restored with <application>psql</>:
<synopsis>
psql -f <replaceable class="parameter">infile</replaceable> postgres
</synopsis>
(Actually, you can specify any existing database name to start from,
but if you are reloading in an empty cluster then <literal>postgres</>
should generally be used.) It is always necessary to have
database superuser access when restoring a <application>pg_dumpall</>
dump, as that is required to restore the role and tablespace information.
If you use tablespaces, be careful that the tablespace paths in the
dump are appropriate for the new installation.
</para>
</sect2>

<sect2 id="backup-dump-large">
<title>Handling large databases</title>

<para>
Since <productname>PostgreSQL</productname> allows tables larger
than the maximum file size on your system, it can be problematic
to dump such a table to a file, since the resulting file will likely
be larger than the maximum size allowed by your system. Since
<application>pg_dump</> can write to the standard output, you can
use standard Unix tools to work around this possible problem.
</para>

<formalpara>
<title>Use compressed dumps.</title>
<para>
You can use your favorite compression program, for example
<application>gzip</application>.

<programlisting>
pg_dump <replaceable class="parameter">dbname</replaceable> | gzip > <replaceable class="parameter">filename</replaceable>.gz
</programlisting>

Reload with

<programlisting>
createdb <replaceable class="parameter">dbname</replaceable>
gunzip -c <replaceable class="parameter">filename</replaceable>.gz | psql <replaceable class="parameter">dbname</replaceable>
</programlisting>

or

<programlisting>
cat <replaceable class="parameter">filename</replaceable>.gz | gunzip | psql <replaceable class="parameter">dbname</replaceable>
</programlisting>
</para>
</formalpara>

<formalpara>
<title>Use <command>split</>.</title>
<para>
The <command>split</command> command
allows you to split the output into pieces that are
acceptable in size to the underlying file system. For example, to
make chunks of 1 megabyte:

<programlisting>
pg_dump <replaceable class="parameter">dbname</replaceable> | split -b 1m - <replaceable class="parameter">filename</replaceable>
</programlisting>

Reload with

<programlisting>
createdb <replaceable class="parameter">dbname</replaceable>
cat <replaceable class="parameter">filename</replaceable>* | psql <replaceable class="parameter">dbname</replaceable>
</programlisting>
</para>
</formalpara>

<formalpara>
<title>Use the custom dump format.</title>
<para>
If <productname>PostgreSQL</productname> was built on a system with the
<application>zlib</> compression library installed, the custom dump
format will compress data as it writes it to the output file. This will
produce dump file sizes similar to using <command>gzip</command>, but it
has the added advantage that tables can be restored selectively. The
following command dumps a database using the custom dump format:

<programlisting>
pg_dump -Fc <replaceable class="parameter">dbname</replaceable> > <replaceable class="parameter">filename</replaceable>
</programlisting>

A custom-format dump is not a script for <application>psql</>, but
instead must be restored with <application>pg_restore</>.
See the <xref linkend="app-pgdump"> and <xref
linkend="app-pgrestore"> reference pages for details.
</para>
</formalpara>
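<para>
For example, to restore only a single table from a custom-format dump
(the table name here is only an illustration):
<programlisting>
pg_restore -d <replaceable class="parameter">dbname</replaceable> -t mytable <replaceable class="parameter">filename</replaceable>
</programlisting>
</para>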
</sect2>
</sect1>

<sect1 id="backup-file">
<title>File System Level Backup</title>

<para>
An alternative backup strategy is to directly copy the files that
<productname>PostgreSQL</> uses to store the data in the database. In
<xref linkend="creating-cluster"> it is explained where these files
are located, but you have probably found them already if you are
interested in this method. You can use whatever method you prefer
for doing usual file system backups, for example

<programlisting>
tar -cf backup.tar /usr/local/pgsql/data
</programlisting>
</para>

<para>
There are two restrictions, however, which make this method
impractical, or at least inferior to the <application>pg_dump</>
method:

<orderedlist>
<listitem>
<para>
The database server <emphasis>must</> be shut down in order to
get a usable backup. Half-way measures such as disallowing all
connections will <emphasis>not</emphasis> work
(mainly because <command>tar</command> and similar tools do not take an
atomic snapshot of the state of the file system at a point in
time). Information about stopping the server can be found in
<xref linkend="server-shutdown">. Needless to say, you
also need to shut down the server before restoring the data.
</para>
</listitem>

<listitem>
<para>
If you have dug into the details of the file system layout of the
database, you may be tempted to try to back up or restore only certain
individual tables or databases from their respective files or
directories. This will <emphasis>not</> work because the
information contained in these files is only half the
truth. The other half is in the commit log files
<filename>pg_clog/*</filename>, which contain the commit status of
all transactions. A table file is only usable with this
information. Of course it is also impossible to restore only a
table and the associated <filename>pg_clog</filename> data
because that would render all other tables in the database
cluster useless. So file system backups only work for complete
restoration of an entire database cluster.
</para>
</listitem>
</orderedlist>
</para>

<para>
An alternative file-system backup approach is to make a
<quote>consistent snapshot</quote> of the data directory, if the
file system supports that functionality (and you are willing to
trust that it is implemented correctly). The typical procedure is
to make a <quote>frozen snapshot</> of the volume containing the
database, then copy the whole data directory (not just parts, see
above) from the snapshot to a backup device, then release the frozen
snapshot. This will work even while the database server is running.
However, a backup created in this way saves
the database files in a state where the database server was not
properly shut down; therefore, when you start the database server
on the backed-up data, it will think the server had crashed
and replay the WAL log. This is not a problem; just be aware of
it (and be sure to include the WAL files in your backup).
</para>

<para>
If your database is spread across multiple file systems, there may not
be any way to obtain exactly-simultaneous frozen snapshots of all
the volumes. For example, if your data files and WAL log are on different
disks, or if tablespaces are on different file systems, it might
not be possible to use snapshot backup because the snapshots must be
simultaneous.
Read your file system documentation very carefully before trusting
the consistent-snapshot technique in such situations. The safest
approach is to shut down the database server for long enough to
establish all the frozen snapshots.
</para>

<para>
Another option is to use <application>rsync</> to perform a file
system backup. This is done by first running <application>rsync</>
while the database server is running, then shutting down the database
server just long enough to do a second <application>rsync</>. The
second <application>rsync</> will be much quicker than the first,
because it has relatively little data to transfer, and the end result
will be consistent because the server was down. This method
allows a file system backup to be performed with minimal downtime.
</para>
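<para>
A minimal sketch of this procedure (the paths here are only examples):
<programlisting>
rsync -a /usr/local/pgsql/data/ /backup/pgsql-data/
pg_ctl stop
rsync -a /usr/local/pgsql/data/ /backup/pgsql-data/
pg_ctl start
</programlisting>
</para>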
<para>
Note that a file system backup will not necessarily be
smaller than an SQL dump. On the contrary, it will most likely be
larger. (<application>pg_dump</application> does not need to dump
the contents of indexes for example, just the commands to recreate
them.)
</para>
</sect1>

<sect1 id="continuous-archiving">
<title>Continuous Archiving and Point-In-Time Recovery (PITR)</title>

<indexterm zone="backup">
<primary>continuous archiving</primary>
</indexterm>

<indexterm zone="backup">
<primary>point-in-time recovery</primary>
</indexterm>

<indexterm zone="backup">
<primary>PITR</primary>
</indexterm>

<para>
At all times, <productname>PostgreSQL</> maintains a
<firstterm>write ahead log</> (WAL) in the <filename>pg_xlog/</>
subdirectory of the cluster's data directory. The log describes
every change made to the database's data files. This log exists
primarily for crash-safety purposes: if the system crashes, the
database can be restored to consistency by <quote>replaying</> the
log entries made since the last checkpoint. However, the existence
of the log makes it possible to use a third strategy for backing up
databases: we can combine a file-system-level backup with backup of
the WAL files. If recovery is needed, we restore the backup and
then replay from the backed-up WAL files to bring the backup up to
current time. This approach is more complex to administer than
either of the previous approaches, but it has some significant
benefits:
<itemizedlist>
<listitem>
<para>
We do not need a perfectly consistent backup as the starting point.
Any internal inconsistency in the backup will be corrected by log
replay (this is not significantly different from what happens during
crash recovery). So we don't need file system snapshot capability,
just <application>tar</> or a similar archiving tool.
</para>
</listitem>
<listitem>
<para>
Since we can string together an indefinitely long sequence of WAL files
for replay, continuous backup can be achieved simply by continuing to archive
the WAL files. This is particularly valuable for large databases, where
it may not be convenient to take a full backup frequently.
</para>
</listitem>
<listitem>
<para>
There is nothing that says we have to replay the WAL entries all the
way to the end. We could stop the replay at any point and have a
consistent snapshot of the database as it was at that time. Thus,
this technique supports <firstterm>point-in-time recovery</>: it is
possible to restore the database to its state at any time since your base
backup was taken.
</para>
</listitem>
<listitem>
<para>
If we continuously feed the series of WAL files to another
machine that has been loaded with the same base backup file, we
have a <firstterm>warm standby</> system: at any point we can bring up
the second machine and it will have a nearly-current copy of the
database.
</para>
</listitem>
</itemizedlist>
</para>

<para>
As with the plain file-system-backup technique, this method can only
support restoration of an entire database cluster, not a subset.
Also, it requires a lot of archival storage: the base backup may be bulky,
and a busy system will generate many megabytes of WAL traffic that
have to be archived. Still, it is the preferred backup technique in
many situations where high reliability is needed.
</para>

<para>
To recover successfully using continuous archiving (also called
<quote>online backup</> by many database vendors), you need a continuous
sequence of archived WAL files that extends back at least as far as the
start time of your backup. So to get started, you should set up and test
your procedure for archiving WAL files <emphasis>before</> you take your
first base backup. Accordingly, we first discuss the mechanics of
archiving WAL files.
</para>

<sect2 id="backup-archiving-wal">
<title>Setting up WAL archiving</title>

<para>
In an abstract sense, a running <productname>PostgreSQL</> system
produces an indefinitely long sequence of WAL records. The system
physically divides this sequence into WAL <firstterm>segment
files</>, which are normally 16MB apiece (although the size can be
altered when building <productname>PostgreSQL</>). The segment
files are given numeric names that reflect their position in the
abstract WAL sequence. When not using WAL archiving, the system
normally creates just a few segment files and then
<quote>recycles</> them by renaming no-longer-needed segment files
to higher segment numbers. It's assumed that a segment file whose
contents precede the checkpoint-before-last is no longer of
interest and can be recycled.
</para>

<para>
When archiving WAL data, we want to capture the contents of each segment
file once it is filled, and save that data somewhere before the segment
file is recycled for reuse. Depending on the application and the
available hardware, there could be many different ways of <quote>saving
the data somewhere</>: we could copy the segment files to an NFS-mounted
directory on another machine, write them onto a tape drive (ensuring that
you have a way of identifying the original name of each file), or batch
them together and burn them onto CDs, or something else entirely. To
provide the database administrator with as much flexibility as possible,
<productname>PostgreSQL</> tries not to make any assumptions about how
the archiving will be done. Instead, <productname>PostgreSQL</> lets
the administrator specify a shell command to be executed to copy a
completed segment file to wherever it needs to go. The command could be
as simple as a <literal>cp</>, or it could invoke a complex shell
script — it's all up to you.
</para>

<para>
The shell command to use is specified by the <xref
linkend="guc-archive-command"> configuration parameter, which in practice
will always be placed in the <filename>postgresql.conf</filename> file.
In this string,
any <literal>%p</> is replaced by the path name of the file to
archive, while any <literal>%f</> is replaced by the file name only.
(The path name is relative to the working directory of the server,
i.e., the cluster's data directory.)
Write <literal>%%</> if you need to embed an actual <literal>%</>
character in the command. The simplest useful command is something
like
<programlisting>
archive_command = 'cp -i %p /mnt/server/archivedir/%f </dev/null'
</programlisting>
which will copy archivable WAL segments to the directory
<filename>/mnt/server/archivedir</>. (This is an example, not a
recommendation, and may not work on all platforms.)
</para>

<para>
The archive command will be executed under the ownership of the same
user that the <productname>PostgreSQL</> server is running as. Since
the series of WAL files being archived contains effectively everything
in your database, you will want to be sure that the archived data is
protected from prying eyes; for example, archive into a directory that
does not have group or world read access.
</para>

<para>
It is important that the archive command return zero exit status if and
only if it succeeded. Upon getting a zero result,
<productname>PostgreSQL</> will assume that the WAL segment file has been
successfully archived, and will remove or recycle it.
However, a nonzero status tells
<productname>PostgreSQL</> that the file was not archived; it will try
again periodically until it succeeds.
</para>

<para>
The archive command should generally be designed to refuse to overwrite
any pre-existing archive file. This is an important safety feature to
preserve the integrity of your archive in case of administrator error
(such as sending the output of two different servers to the same archive
directory).
It is advisable to test your proposed archive command to ensure that it
indeed does not overwrite an existing file, <emphasis>and that it returns
nonzero status in this case</>. We have found that <literal>cp -i</> does
this correctly on some platforms but not others. If the chosen command
does not itself handle this case correctly, you should add a command
to test for pre-existence of the archive file. For example, something
like
<programlisting>
archive_command = 'test ! -f .../%f && cp %p .../%f'
</programlisting>
works correctly on most Unix variants.
</para>

<para>
While designing your archiving setup, consider what will happen if
the archive command fails repeatedly because some aspect requires
operator intervention or the archive runs out of space. For example, this
could occur if you write to tape without an autochanger; when the tape
fills, nothing further can be archived until the tape is swapped.
You should ensure that any error condition or request to a human operator
is reported appropriately so that the situation can be
resolved relatively quickly. The <filename>pg_xlog/</> directory will
continue to fill with WAL segment files until the situation is resolved.
</para>

<para>
The speed of the archiving command is not important, so long as it can keep up
with the average rate at which your server generates WAL data. Normal
operation continues even if the archiving process falls a little behind.
If archiving falls significantly behind, this will increase the amount of
data that would be lost in the event of a disaster. It will also mean that
the <filename>pg_xlog/</> directory will contain large numbers of
not-yet-archived segment files, which could eventually exceed available
disk space. You are advised to monitor the archiving process to ensure that
it is working as you intend.
</para>

<para>
In writing your archive command, you should assume that the file names to
be archived may be up to 64 characters long and may contain any
combination of ASCII letters, digits, and dots. It is not necessary to
remember the original relative path (<literal>%p</>) but it is necessary to
remember the file name (<literal>%f</>).
</para>

<para>
Note that although WAL archiving will allow you to restore any
modifications made to the data in your <productname>PostgreSQL</> database,
it will not restore changes made to configuration files (that is,
<filename>postgresql.conf</>, <filename>pg_hba.conf</> and
<filename>pg_ident.conf</>), since those are edited manually rather
than through SQL operations.
You may wish to keep the configuration files in a location that will
be backed up by your regular file system backup procedures. See
<xref linkend="runtime-config-file-locations"> for how to relocate the
configuration files.
</para>

<para>
The archive command is only invoked on completed WAL segments. Hence,
if your server generates little WAL traffic (or has slack periods
where it does so), there could be a long delay between the completion
of a transaction and its safe recording in archive storage. To put
a limit on how old unarchived data can be, you can set
<xref linkend="guc-archive-timeout"> to force the server to switch
to a new WAL segment file at least that often. Note that archived
files that are ended early due to a forced switch are still the same
length as completely full files. It is therefore unwise to set a very
short <varname>archive_timeout</> — it will bloat your archive
storage. <varname>archive_timeout</> settings of a minute or so are
usually reasonable.
</para>
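<para>
For example, to force a segment switch at least once a minute
(<varname>archive_timeout</> is measured in seconds):
<programlisting>
archive_timeout = 60
</programlisting>
</para>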
<para>
Also, you can force a segment switch manually with
<function>pg_switch_xlog</>, if you want to ensure that a
just-finished transaction is archived immediately. Other utility
functions related to WAL management are listed in <xref
linkend="functions-admin-backup-table">.
</para>
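<para>
For example:
<programlisting>
SELECT pg_switch_xlog();
</programlisting>
</para>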
</sect2>

<sect2 id="backup-base-backup">
<title>Making a Base Backup</title>

<para>
The procedure for making a base backup is relatively simple:
<orderedlist>
<listitem>
<para>
Ensure that WAL archiving is enabled and working.
</para>
</listitem>
<listitem>
<para>
Connect to the database as a superuser, and issue the command
<programlisting>
SELECT pg_start_backup('label');
</programlisting>
where <literal>label</> is any string you want to use to uniquely
identify this backup operation. (One good practice is to use the
full path where you intend to put the backup dump file.)
<function>pg_start_backup</> creates a <firstterm>backup label</> file,
called <filename>backup_label</>, in the cluster directory with
information about your backup.
</para>

<para>
It does not matter which database within the cluster you connect to
in order to issue this command. You can ignore the result returned by the
function; but if it reports an error, deal with that before proceeding.
</para>
</listitem>
<listitem>
<para>
Perform the backup, using any convenient file-system-backup tool
such as <application>tar</> or <application>cpio</>. It is neither
necessary nor desirable to stop normal operation of the database
while you do this.
</para>
</listitem>
<listitem>
<para>
Again connect to the database as a superuser, and issue the command
<programlisting>
SELECT pg_stop_backup();
</programlisting>
This terminates the backup mode and performs an automatic switch to
the next WAL segment. The reason for the switch is to arrange that
the last WAL segment file written during the backup interval is
immediately ready to archive.
</para>
</listitem>
<listitem>
<para>
Once the WAL segment files used during the backup are archived, you are
done. The file identified by <function>pg_stop_backup</>'s result is
the last segment that needs to be archived to complete the backup.
Archival of these files will happen automatically, since you have
already configured <varname>archive_command</>. In many cases, this
happens fairly quickly, but you are advised to monitor your archival
system to ensure this has taken place so that you can be certain you
have a complete backup.
</para>
</listitem>
</orderedlist>
</para>
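<para>
Put together, a minimal scripted version of these steps might look
like the following (run as a database superuser; the backup path is
only an illustration):
<programlisting>
psql -c "SELECT pg_start_backup('/backup/base-20070929');"
tar -cf /backup/base-20070929.tar /usr/local/pgsql/data
psql -c "SELECT pg_stop_backup();"
</programlisting>
</para>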
<para>
Some backup tools that you might wish to use emit warnings or errors
if the files they are trying to copy change while the copy proceeds.
This situation is normal, and not an error, when taking a base backup of
an active database; so you need to ensure that you can distinguish
complaints of this sort from real errors. For example, some versions
of <application>rsync</> return a separate exit code for <quote>vanished
source files</>, and you can write a driver script to accept this exit
code as a non-error case. Also, some versions of GNU
<application>tar</> return an error code indistinguishable from a
fatal error if a file was truncated while <application>tar</> was
copying it. Fortunately, GNU <application>tar</> versions 1.16 and
later exit with <literal>1</> if a file was changed during the backup,
and <literal>2</> for other errors.
</para>

<para>
It is not necessary to be very concerned about the amount of time elapsed
between <function>pg_start_backup</> and the start of the actual backup,
nor between the end of the backup and <function>pg_stop_backup</>; a
few minutes' delay won't hurt anything. (However, if you normally run the
server with <varname>full_page_writes</> disabled, you may notice a drop
in performance between <function>pg_start_backup</> and
<function>pg_stop_backup</>, since <varname>full_page_writes</> is
effectively forced on during backup mode.) You must ensure that these
steps are carried out in sequence without any possible
overlap, or you will invalidate the backup.
</para>

<para>
Be certain that your backup dump includes all of the files underneath
the database cluster directory (e.g., <filename>/usr/local/pgsql/data</>).
If you are using tablespaces that do not reside underneath this directory,
be careful to include them as well (and be sure that your backup dump
archives symbolic links as links, otherwise the restore will mess up
your tablespaces).
</para>

<para>
You may, however, omit from the backup dump the files within the
<filename>pg_xlog/</> subdirectory of the cluster directory. This
slight complication is worthwhile because it reduces the risk
of mistakes when restoring. This is easy to arrange if
<filename>pg_xlog/</> is a symbolic link pointing to someplace outside
the cluster directory, which is a common setup anyway for performance
reasons.
</para>
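<para>
With GNU <application>tar</>, for example, the subdirectory can simply
be excluded at backup time (a sketch, assuming the standard data
directory location):
<programlisting>
tar --exclude=pg_xlog -cf backup.tar /usr/local/pgsql/data
</programlisting>
</para>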
<para>
To make use of the backup, you will need to keep around all the WAL
segment files generated during and after the file system backup.
To aid you in doing this, the <function>pg_stop_backup</> function
creates a <firstterm>backup history file</> that is immediately
stored into the WAL archive area. This file is named after the first
WAL segment file that you need to have to make use of the backup.
For example, if the starting WAL file is
<literal>0000000100001234000055CD</> the backup history file will be
named something like
<literal>0000000100001234000055CD.007C9330.backup</>. (The second
number in the file name stands for an exact position within the WAL
file, and can ordinarily be ignored.) Once you have safely archived
the file system backup and the WAL segment files used during the
backup (as specified in the backup history file), all archived WAL
segments with names numerically less are no longer needed to recover
the file system backup and may be deleted. However, you should
consider keeping several backup sets to be absolutely certain that
you can recover your data.
</para>

<para>
The backup history file is just a small text file. It contains the
label string you gave to <function>pg_start_backup</>, as well as
the starting and ending times and WAL segments of the backup.
If you used the label to identify where the associated dump file is kept,
then the archived history file is enough to tell you which dump file to
restore, should you need to do so.
</para>

<para>
Since you have to keep around all the archived WAL files back to your
last base backup, the interval between base backups should usually be
chosen based on how much storage you want to expend on archived WAL
files. You should also consider how long you are prepared to spend
recovering, if recovery should be necessary — the system will have to
replay all those WAL segments, and that could take a while if it has
been a long time since the last base backup.
</para>

<para>
It's also worth noting that the <function>pg_start_backup</> function
makes a file named <filename>backup_label</> in the database cluster
directory, which is then removed again by <function>pg_stop_backup</>.
This file will of course be archived as a part of your backup dump file.
The backup label file includes the label string you gave to
<function>pg_start_backup</>, as well as the time at which
<function>pg_start_backup</> was run, and the name of the starting WAL
file. In case of confusion it will
therefore be possible to look inside a backup dump file and determine
exactly which backup session the dump file came from.
</para>

<para>
It is also possible to make a backup dump while the server is
stopped. In this case, you obviously cannot use
<function>pg_start_backup</> or <function>pg_stop_backup</>, and
you will therefore be left to your own devices to keep track of which
backup dump is which and how far back the associated WAL files go.
It is generally better to follow the continuous archiving procedure above.
</para>
</sect2>

<sect2 id="backup-pitr-recovery">
<title>Recovering using a Continuous Archive Backup</title>

<para>
Okay, the worst has happened and you need to recover from your backup.
Here is the procedure:
<orderedlist>
<listitem>
<para>
Stop the server, if it's running.
</para>
</listitem>
<listitem>
<para>
If you have the space to do so,
copy the whole cluster data directory and any tablespaces to a temporary
location in case you need them later. Note that this precaution will
require that you have enough free space on your system to hold two
copies of your existing database. If you do not have enough space,
you need at least to copy the contents of the <filename>pg_xlog</>
subdirectory of the cluster data directory, as it may contain logs which
were not archived before the system went down.
</para>
</listitem>
<listitem>
<para>
Clean out all existing files and subdirectories under the cluster data
directory and under the root directories of any tablespaces you are using.
</para>
</listitem>
<listitem>
<para>
Restore the database files from your backup dump. Be careful that they
are restored with the right ownership (the database system user, not
root!) and with the right permissions. If you are using tablespaces,
you should verify that the symbolic links in <filename>pg_tblspc/</>
were correctly restored.
</para>
</listitem>
<listitem>
<para>
Remove any files present in <filename>pg_xlog/</>; these came from the
backup dump and are therefore probably obsolete rather than current.
If you didn't archive <filename>pg_xlog/</> at all, then recreate it,
and be sure to recreate the subdirectory
<filename>pg_xlog/archive_status/</> as well.
</para>
</listitem>
<listitem>
<para>
If you had unarchived WAL segment files that you saved in step 2,
copy them into <filename>pg_xlog/</>. (It is best to copy them,
not move them, so that you still have the unmodified files if a
problem occurs and you have to start over.)
</para>
</listitem>
<listitem>
<para>
Create a recovery command file <filename>recovery.conf</> in the cluster
data directory (see <xref linkend="recovery-config-settings">). You may
also want to temporarily modify <filename>pg_hba.conf</> to prevent
ordinary users from connecting until you are sure the recovery has worked.
</para>
</listitem>
<listitem>
<para>
Start the server. The server will go into recovery mode and
proceed to read through the archived WAL files it needs. Should the
recovery be terminated because of an external error, the server can
simply be restarted and it will continue recovery. Upon completion
of the recovery process, the server will rename
<filename>recovery.conf</> to <filename>recovery.done</> (to prevent
accidentally re-entering recovery mode in case of a crash later) and then
commence normal database operations.
</para>
</listitem>
<listitem>
<para>
Inspect the contents of the database to ensure you have recovered to
where you want to be. If not, return to step 1. If all is well,
let in your users by restoring <filename>pg_hba.conf</> to normal.
</para>
</listitem>
</orderedlist>
</para>

<para>
The key part of all this is to set up a recovery command file that
describes how you want to recover and how far the recovery should
run. You can use <filename>recovery.conf.sample</> (normally
installed in the installation <filename>share/</> directory) as a
prototype. The one thing that you absolutely must specify in
<filename>recovery.conf</> is the <varname>restore_command</>,
which tells <productname>PostgreSQL</> how to get back archived
WAL file segments. Like the <varname>archive_command</>, this is
a shell command string. It may contain <literal>%f</>, which is
replaced by the name of the desired log file, and <literal>%p</>,
which is replaced by the path name to copy the log file to.
(The path name is relative to the working directory of the server,
i.e., the cluster's data directory.)
Write <literal>%%</> if you need to embed an actual <literal>%</>
character in the command. The simplest useful command is
something like
<programlisting>
restore_command = 'cp /mnt/server/archivedir/%f %p'
</programlisting>
which will copy previously archived WAL segments from the directory
<filename>/mnt/server/archivedir</>. You could of course use something
much more complicated, perhaps even a shell script that requests the
operator to mount an appropriate tape.
</para>

<para>
It is important that the command return nonzero exit status on failure.
The command <emphasis>will</> be asked for log files that are not present
in the archive; it must return nonzero when so asked. This is not an
error condition. Be aware also that the base name of the <literal>%p</>
path will be different from <literal>%f</>; do not expect them to be
interchangeable.
</para>

<para>
WAL segments that cannot be found in the archive will be sought in
<filename>pg_xlog/</>; this allows use of recent un-archived segments.
However, segments that are available from the archive will be used in
preference to files in <filename>pg_xlog/</>. The system will not
overwrite the existing contents of <filename>pg_xlog/</> when retrieving
archived files.
</para>

<para>
Normally, recovery will proceed through all available WAL segments,
thereby restoring the database to the current point in time (or as
close as we can get given the available WAL segments). But if you want
to recover to some previous point in time (say, right before the junior
DBA dropped your main transaction table), just specify the required
stopping point in <filename>recovery.conf</>. You can specify the stop
point, known as the <quote>recovery target</>, either by date/time or
by completion of a specific transaction ID. As of this writing only
the date/time option is very usable, since there are no tools to help
you identify with any accuracy which transaction ID to use.
</para>

<note>
<para>
The stop point must be after the ending time of the base backup (the
time of <function>pg_stop_backup</>). You cannot use a base backup
to recover to a time when that backup was still going on. (To
recover to such a time, you must go back to your previous base backup
and roll forward from there.)
</para>
</note>

<para>
If recovery finds corruption in the WAL data, recovery will
complete at that point and the server will not start. In such a case the
recovery process could be re-run from the beginning, specifying a
<quote>recovery target</> before the point of corruption so that recovery
can complete normally.
If recovery fails for an external reason, such as a system crash or
if the WAL archive has become inaccessible, then the recovery can simply
be restarted and it will restart almost from where it failed.
Recovery restart works much like checkpointing in normal operation:
the server periodically forces all its state to disk, and then updates
the <filename>pg_control</> file to indicate that the already-processed
WAL data need not be scanned again.
</para>

<sect3 id="recovery-config-settings" xreflabel="Recovery Settings">
<title>Recovery Settings</title>

<para>
These settings can only be made in the <filename>recovery.conf</>
file, and apply only for the duration of the recovery. They must be
reset for any subsequent recovery you wish to perform. They cannot be
changed once recovery has begun.
</para>

<variablelist>

<varlistentry id="restore-command" xreflabel="restore_command">
<term><varname>restore_command</varname> (<type>string</type>)</term>
<listitem>
<para>
The shell command to execute to retrieve an archived segment of
the WAL file series. This parameter is required.
Any <literal>%f</> in the string is
replaced by the name of the file to retrieve from the archive,
and any <literal>%p</> is replaced by the path name to copy
it to on the server.
(The path name is relative to the working directory of the server,
i.e., the cluster's data directory.)
Write <literal>%%</> to embed an actual <literal>%</> character
in the command.
</para>
<para>
It is important for the command to return a zero exit status if and
only if it succeeds. The command <emphasis>will</> be asked for file
names that are not present in the archive; it must return nonzero
when so asked. Examples:
<programlisting>
restore_command = 'cp /mnt/server/archivedir/%f "%p"'
restore_command = 'copy /mnt/server/archivedir/%f "%p"' # Windows
</programlisting>
</para>
</listitem>
</varlistentry>

<varlistentry id="recovery-target-time" xreflabel="recovery_target_time">
<term><varname>recovery_target_time</varname>
(<type>timestamp</type>)
</term>
<listitem>
<para>
This parameter specifies the time stamp up to which recovery
will proceed.
At most one of <varname>recovery_target_time</> and
<xref linkend="recovery-target-xid"> can be specified.
The default is to recover to the end of the WAL log.
The precise stopping point is also influenced by
<xref linkend="recovery-target-inclusive">.
</para>
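<para>
For example (the timestamp itself is only an illustration):
<programlisting>
recovery_target_time = '2007-09-25 17:14:00'
</programlisting>
</para>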
|
|
</listitem>
|
|
</varlistentry>
|
|
|
|
<varlistentry id="recovery-target-xid" xreflabel="recovery_target_xid">
|
|
<term><varname>recovery_target_xid</varname> (<type>string</type>)</term>
|
|
<listitem>
|
|
<para>
|
|
This parameter specifies the transaction ID up to which recovery
|
|
will proceed. Keep in mind
|
|
that while transaction IDs are assigned sequentially at transaction
|
|
start, transactions can complete in a different numeric order.
|
|
The transactions that will be recovered are those that committed
|
|
before (and optionally including) the specified one.
|
|
At most one of <varname>recovery_target_xid</> and
|
|
<xref linkend="recovery-target-time"> can be specified.
|
|
The default is to recover to the end of the WAL log.
|
|
The precise stopping point is also influenced by
|
|
<xref linkend="recovery-target-inclusive">.
|
|
</para>
|
|
</listitem>
|
|
</varlistentry>
|
|
|
|
<varlistentry id="recovery-target-inclusive"
|
|
xreflabel="recovery_target_inclusive">
|
|
<term><varname>recovery_target_inclusive</varname>
|
|
(<type>boolean</type>)
|
|
</term>
|
|
<listitem>
|
|
<para>
|
|
Specifies whether we stop just after the specified recovery target
|
|
(<literal>true</literal>), or just before the recovery target
|
|
(<literal>false</literal>).
|
|
Applies to both <xref linkend="recovery-target-time">
|
|
and <xref linkend="recovery-target-xid">, whichever one is
|
|
specified for this recovery. This indicates whether transactions
|
|
having exactly the target commit time or ID, respectively, will
|
|
be included in the recovery. Default is <literal>true</>.
|
|
</para>
|
|
</listitem>
|
|
</varlistentry>
|
|
|
|
<varlistentry id="recovery-target-timeline"
|
|
xreflabel="recovery_target_timeline">
|
|
<term><varname>recovery_target_timeline</varname>
|
|
(<type>string</type>)
|
|
</term>
|
|
<listitem>
|
|
<para>
|
|
Specifies recovering into a particular timeline. The default is
|
|
to recover along the same timeline that was current when the
|
|
base backup was taken. You would only need to set this parameter
|
|
in complex re-recovery situations, where you need to return to
|
|
a state that itself was reached after a point-in-time recovery.
|
|
See <xref linkend="backup-timelines"> for discussion.
|
|
</para>
|
|
</listitem>
|
|
</varlistentry>
|
|
|
|
</variablelist>
|
|
|
|
</sect3>
|
|
|
|
</sect2>
|
|
|
|
<sect2 id="backup-timelines">
|
|
<title>Timelines</title>
|
|
|
|
<indexterm zone="backup">
|
|
<primary>timelines</primary>
|
|
</indexterm>
|
|
|
|
<para>
|
|
The ability to restore the database to a previous point in time creates
|
|
some complexities that are akin to science-fiction stories about time
|
|
travel and parallel universes. In the original history of the database,
|
|
perhaps you dropped a critical table at 5:15PM on Tuesday evening.
|
|
Unfazed, you get out your backup, restore to the point-in-time 5:14PM
|
|
Tuesday evening, and are up and running. In <emphasis>this</> history of
|
|
the database universe, you never dropped the table at all. But suppose
|
|
you later realize this wasn't such a great idea after all, and would like
|
|
to return to some later point in the original history. You won't be able
|
|
to if, while your database was up-and-running, it overwrote some of the
|
|
sequence of WAL segment files that led up to the time you now wish you
|
|
could get back to. So you really want to distinguish the series of
|
|
WAL records generated after you've done a point-in-time recovery from
|
|
those that were generated in the original database history.
|
|
</para>
|
|
|
|
<para>
|
|
To deal with these problems, <productname>PostgreSQL</> has a notion
|
|
of <firstterm>timelines</>. Whenever an archive recovery is completed,
|
|
a new timeline is created to identify the series of WAL records
|
|
generated after that recovery. The timeline
|
|
ID number is part of WAL segment file names, and so a new timeline does
|
|
not overwrite the WAL data generated by previous timelines. It is
|
|
in fact possible to archive many different timelines. While that might
|
|
seem like a useless feature, it's often a lifesaver. Consider the
|
|
situation where you aren't quite sure what point-in-time to recover to,
|
|
and so have to do several point-in-time recoveries by trial and error
|
|
until you find the best place to branch off from the old history. Without
|
|
timelines this process would soon generate an unmanageable mess. With
|
|
timelines, you can recover to <emphasis>any</> prior state, including
|
|
states in timeline branches that you later abandoned.
|
|
</para>
|
|
|
|
<para>
|
|
Each time a new timeline is created, <productname>PostgreSQL</> creates
|
|
a <quote>timeline history</> file that shows which timeline it branched
|
|
off from and when. These history files are necessary to allow the system
|
|
to pick the right WAL segment files when recovering from an archive that
|
|
contains multiple timelines. Therefore, they are archived into the WAL
|
|
archive area just like WAL segment files. The history files are just
|
|
small text files, so it's cheap and appropriate to keep them around
|
|
indefinitely (unlike the segment files which are large). You can, if
|
|
you like, add comments to a history file to make your own notes about
|
|
how and why this particular timeline came to be. Such comments will be
|
|
especially valuable when you have a thicket of different timelines as
|
|
a result of experimentation.
|
|
</para>
|
|
|
|
<para>
|
|
The default behavior of recovery is to recover along the same timeline
|
|
that was current when the base backup was taken. If you want to recover
|
|
into some child timeline (that is, you want to return to some state that
|
|
was itself generated after a recovery attempt), you need to specify the
|
|
target timeline ID in <filename>recovery.conf</>. You cannot recover into
|
|
timelines that branched off earlier than the base backup.
|
|
</para>
|
|
</sect2>
|
|
|
|
<sect2 id="continuous-archiving-caveats">
|
|
<title>Caveats</title>
|
|
|
|
<para>
|
|
At this writing, there are several limitations of the continuous archiving
|
|
technique. These will probably be fixed in future releases:
|
|
|
|
<itemizedlist>
|
|
<listitem>
|
|
<para>
|
|
Operations on hash indexes are not presently WAL-logged, so
|
|
replay will not update these indexes. The recommended workaround
|
|
is to manually <xref linkend="sql-reindex" endterm="sql-reindex-title">
|
|
each such index after completing a recovery operation.
|
|
</para>
|
|
</listitem>
|
|
|
|
<listitem>
|
|
<para>
|
|
If a <xref linkend="sql-createdatabase" endterm="sql-createdatabase-title">
|
|
command is executed while a base backup is being taken, and then
|
|
the template database that the <command>CREATE DATABASE</> copied
|
|
is modified while the base backup is still in progress, it is
|
|
possible that recovery will cause those modifications to be
|
|
propagated into the created database as well. This is of course
|
|
undesirable. To avoid this risk, it is best not to modify any
|
|
template databases while taking a base backup.
|
|
</para>
|
|
</listitem>
|
|
|
|
<listitem>
|
|
<para>
|
|
<xref linkend="sql-createtablespace" endterm="sql-createtablespace-title">
|
|
commands are WAL-logged with the literal absolute path, and will
|
|
therefore be replayed as tablespace creations with the same
|
|
absolute path. This might be undesirable if the log is being
|
|
replayed on a different machine. It can be dangerous even if the
|
|
log is being replayed on the same machine, but into a new data
|
|
directory: the replay will still overwrite the contents of the
|
|
original tablespace. To avoid potential gotchas of this sort,
|
|
the best practice is to take a new base backup after creating or
|
|
dropping tablespaces.
|
|
</para>
|
|
</listitem>
|
|
</itemizedlist>
|
|
</para>
|
|
|
|
<para>
|
|
It should also be noted that the default <acronym>WAL</acronym>
|
|
format is fairly bulky since it includes many disk page snapshots.
|
|
These page snapshots are designed to support crash recovery, since
|
|
we may need to fix partially-written disk pages. Depending on
|
|
your system hardware and software, the risk of partial writes may
|
|
be small enough to ignore, in which case you can significantly
|
|
reduce the total volume of archived logs by turning off page
|
|
snapshots using the <xref linkend="guc-full-page-writes">
|
|
parameter. (Read the notes and warnings in <xref linkend="wal">
|
|
before you do so.) Turning off page snapshots does not prevent
|
|
use of the logs for PITR operations. An area for future
|
|
development is to compress archived WAL data by removing
|
|
unnecessary page copies even when <varname>full_page_writes</> is
|
|
on. In the meantime, administrators may wish to reduce the number
|
|
of page snapshots included in WAL by increasing the checkpoint
|
|
interval parameters as much as feasible.
|
|
</para>
 </sect2>
 </sect1>

 <sect1 id="warm-standby">
  <title>Warm Standby Servers for High Availability</title>

  <indexterm zone="backup">
   <primary>warm standby</primary>
  </indexterm>

  <indexterm zone="backup">
   <primary>PITR standby</primary>
  </indexterm>

  <indexterm zone="backup">
   <primary>standby server</primary>
  </indexterm>

  <indexterm zone="backup">
   <primary>log shipping</primary>
  </indexterm>

  <indexterm zone="backup">
   <primary>witness server</primary>
  </indexterm>

  <indexterm zone="backup">
   <primary>STONITH</primary>
  </indexterm>

  <indexterm zone="backup">
   <primary>high availability</primary>
  </indexterm>

  <para>
   Continuous archiving can be used to create a <firstterm>high
   availability</> (HA) cluster configuration with one or more
   <firstterm>standby servers</> ready to take
   over operations if the primary server fails. This
   capability is widely referred to as <firstterm>warm standby</>
   or <firstterm>log shipping</>.
  </para>

  <para>
   The primary and standby servers work together to provide this capability,
   though the servers are only loosely coupled. The primary server operates
   in continuous archiving mode, while each standby server operates in
   continuous recovery mode, reading the WAL files from the primary. No
   changes to the database tables are required to enable this capability,
   so it offers low administration overhead in comparison with some other
   replication approaches. This configuration also has relatively low
   performance impact on the primary server.
  </para>

  <para>
   Directly moving WAL or <quote>log</> records from one database server to
   another is typically described as log shipping. <productname>PostgreSQL</>
   implements file-based log shipping, which means that WAL records are
   transferred one file (WAL segment) at a time. WAL
   files can be shipped easily and cheaply over any distance, whether it be
   to an adjacent system, another system on the same site, or another system
   on the far side of the globe. The bandwidth required for this technique
   varies according to the transaction rate of the primary server.
   Record-based log shipping is also possible with custom-developed
   procedures, as discussed in <xref linkend="warm-standby-record">.
  </para>

  <para>
   It should be noted that log shipping is asynchronous, i.e., the
   WAL records are shipped only after transaction commit. As a result there
   is a window for data loss should the primary server
   suffer a catastrophic failure: transactions not yet shipped will be lost.
   The length of the window of data loss
   can be limited by use of the <varname>archive_timeout</varname> parameter,
   which can be set as low as a few seconds if required. However such low
   settings will substantially increase the bandwidth requirements for file
   shipping. If you need a window of less than a minute or so, it's probably
   better to look into record-based log shipping.
  </para>
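
  <para>
   As a sketch, capping the loss window at roughly one minute requires only
   one setting on the primary (the value shown is illustrative):
<programlisting>
# postgresql.conf on the primary
archive_timeout = 60    # force a segment switch after at most 60 seconds
</programlisting>
  </para>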

  <para>
   The standby server is not available for access, since it is continually
   performing recovery processing. Recovery performance is sufficiently
   good that the standby will typically be only moments away from full
   availability once it has been activated. As a result, we refer to this
   capability as a warm standby configuration that offers high
   availability. Restoring a server from an archived base backup and
   rollforward will take considerably longer, so that technique only
   really offers a solution for disaster recovery, not HA.
  </para>

  <sect2 id="warm-standby-planning">
   <title>Planning</title>

   <para>
    It is usually wise to create the primary and standby servers
    so that they are as similar as possible, at least from the
    perspective of the database server. In particular, the path names
    associated with tablespaces will be passed across as-is, so both
    primary and standby servers must have the same mount paths for
    tablespaces if that feature is used. Keep in mind that if
    <xref linkend="sql-createtablespace" endterm="sql-createtablespace-title">
    is executed on the primary, any new mount point needed for it must
    be created on both the primary and all standby servers before the command
    is executed. Hardware need not be exactly the same, but experience shows
    that maintaining two identical systems is easier than maintaining two
    dissimilar ones over the lifetime of the application and system.
    In any case the hardware architecture must be the same — shipping
    from, say, a 32-bit to a 64-bit system will not work.
   </para>

   <para>
    In general, log shipping between servers running different major release
    levels will not be possible. It is the policy of the PostgreSQL Global
    Development Group not to make changes to disk formats during minor release
    upgrades, so it is likely that running different minor release levels
    on primary and standby servers will work successfully. However, no
    formal support for that is offered and you are advised to keep primary
    and standby servers at the same release level as much as possible.
    When updating to a new minor release, the safest policy is to update
    the standby servers first — a new minor release is more likely
    to be able to read WAL files from a previous minor release than vice
    versa.
   </para>

   <para>
    There is no special mode required to enable a standby server. The
    operations that occur on both primary and standby servers are entirely
    normal continuous archiving and recovery tasks. The only point of
    contact between the two database servers is the archive of WAL files
    that both share: primary writing to the archive, standby reading from
    the archive. Care must be taken to ensure that WAL archives for separate
    primary servers do not become mixed together or confused.
   </para>

   <para>
    The magic that makes the two loosely coupled servers work together
    is simply a <varname>restore_command</> used on the standby that waits for
    the next WAL file to become available from the primary. The
    <varname>restore_command</> is specified in the <filename>recovery.conf</>
    file on the standby
    server. Normal recovery processing would request a file from the
    WAL archive, reporting failure if the file was unavailable. For
    standby processing it is normal for the next file to be
    unavailable, so we must be patient and wait for it to appear. A
    waiting <varname>restore_command</> can be written as a custom
    script that loops after polling for the existence of the next WAL
    file. There must also be some way to trigger failover, which
    should interrupt the <varname>restore_command</>, break the loop,
    and return a file-not-found error to the standby server. This
    ends recovery and the standby will then come up as a normal
    server.
   </para>

   <para>
    Pseudocode for a suitable <varname>restore_command</> is:
<programlisting>
triggered = false;
while (!NextWALFileReady() && !triggered)
{
    sleep(100000L);    /* wait for ~0.1 sec */
    if (CheckForExternalTrigger())
        triggered = true;
}
if (!triggered)
    CopyWALFileForRecovery();
</programlisting>
   </para>
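
   <para>
    One way to realize this pseudocode is as a small shell script. The
    sketch below is illustrative only: the script name, archive path, and
    trigger-file location are all hypothetical, and a production version
    would also need to verify that the archiver has finished writing each
    file before copying it.
<programlisting>
#!/bin/sh
# wait_for_wal: a waiting restore_command sketch.
# Invoked from recovery.conf as:
#   restore_command = 'wait_for_wal /mnt/archive/%f %p'
WALFILE="$1"       # archived WAL file we are waiting for
DEST="$2"          # path where the server wants the file placed
TRIGGER="/var/lib/pgsql/failover.trigger"   # hypothetical trigger file

while true
do
    # An operator creates the trigger file to request failover; exiting
    # nonzero reports file-not-found, which ends recovery and brings
    # the standby up as a normal server.
    if [ -f "$TRIGGER" ]; then
        exit 1
    fi
    if [ -f "$WALFILE" ]; then
        cp "$WALFILE" "$DEST"
        exit $?
    fi
    sleep 1        # poll; the pseudocode above waits ~0.1 sec
done
</programlisting>
   </para>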

   <para>
    <productname>PostgreSQL</productname> does not provide the system
    software required to identify a failure on the primary, notify the
    standby system, and then bring up the standby database server. Many such
    tools exist and are well integrated with other aspects required for
    successful failover, such as IP address migration.
   </para>

   <para>
    The means for triggering failover is an important part of planning and
    design. The <varname>restore_command</> is executed in full once
    for each WAL file. The process running the <varname>restore_command</>
    is therefore created anew and exits for each file, so there is no daemon
    or server process, and so we cannot use signals and a signal
    handler. A more permanent notification is required to trigger the
    failover. It is possible to use a simple timeout facility,
    especially if used in conjunction with a known
    <varname>archive_timeout</> setting on the primary. This is
    somewhat error prone since a network problem or busy primary server might
    be sufficient to initiate failover. A notification mechanism such
    as the explicit creation of a trigger file is less error prone, if
    this can be arranged.
   </para>
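
   <para>
    With a waiting script like the sketch above, failover is then initiated
    from outside the database simply by creating the agreed-upon trigger
    file (the path is hypothetical and must match the script):
<programlisting>
touch /var/lib/pgsql/failover.trigger
</programlisting>
   </para>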
  </sect2>

  <sect2 id="warm-standby-config">
   <title>Implementation</title>

   <para>
    The short procedure for configuring a standby server is as follows. For
    full details of each step, refer to previous sections as noted.
    <orderedlist>
     <listitem>
      <para>
       Set up primary and standby systems as near identically as
       possible, including two identical copies of
       <productname>PostgreSQL</> at the same release level.
      </para>
     </listitem>
     <listitem>
      <para>
       Set up continuous archiving from the primary to a WAL archive located
       in a directory on the standby server. Ensure that <xref
       linkend="guc-archive-command"> and <xref linkend="guc-archive-timeout">
       are set appropriately on the primary
       (see <xref linkend="backup-archiving-wal">).
      </para>
     </listitem>
     <listitem>
      <para>
       Make a base backup of the primary server (see <xref
       linkend="backup-base-backup">), and load this data onto the standby.
      </para>
     </listitem>
     <listitem>
      <para>
       Begin recovery on the standby server from the local WAL
       archive, using a <filename>recovery.conf</> that specifies a
       <varname>restore_command</> that waits as described
       previously (see <xref linkend="backup-pitr-recovery">; a minimal
       <filename>recovery.conf</> sketch appears just after this list).
      </para>
     </listitem>
    </orderedlist>
   </para>
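
   <para>
    For the last step, the <filename>recovery.conf</> can be as small as one
    line. A minimal sketch, assuming the hypothetical waiting script and
    archive directory used earlier:
<programlisting>
# recovery.conf on the standby
restore_command = '/usr/local/bin/wait_for_wal /mnt/archive/%f %p'
</programlisting>
   </para>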

   <para>
    Recovery treats the WAL archive as read-only, so once a WAL file has
    been copied to the standby system it can be copied to tape at the same
    time as it is being read by the standby database server.
    Thus, running a standby server for high availability can be performed at
    the same time as files are stored for longer term disaster recovery
    purposes.
   </para>

   <para>
    For testing purposes, it is possible to run both primary and standby
    servers on the same system. This does not provide any worthwhile
    improvement in server robustness, nor would it be described as HA.
   </para>
  </sect2>

  <sect2 id="warm-standby-failover">
   <title>Failover</title>

   <para>
    If the primary server fails then the standby server should begin
    failover procedures.
   </para>

   <para>
    If the standby server fails then no failover need take place. If the
    standby server can be restarted, even some time later, then the recovery
    process can also be immediately restarted, taking advantage of
    restartable recovery. If the standby server cannot be restarted, then a
    full new standby server should be created.
   </para>

   <para>
    If the primary server fails and then immediately restarts, you must have
    a mechanism for informing it that it is no longer the primary. This is
    sometimes known as STONITH (Shoot the Other Node In The Head), which is
    necessary to avoid situations where both systems think they are the
    primary, which can lead to confusion and ultimately data loss.
   </para>

   <para>
    Many failover systems use just two systems, the primary and the standby,
    connected by some kind of heartbeat mechanism to continually verify the
    connectivity between the two and the viability of the primary. It is
    also possible to use a third system (called a witness server) to avoid
    some problems of inappropriate failover, but the additional complexity
    may not be worthwhile unless it is set up with sufficient care and
    rigorous testing.
   </para>

   <para>
    Once failover to the standby occurs, we have only a
    single server in operation. This is known as a degenerate state.
    The former standby is now the primary, but the former primary is down
    and may stay down. To return to normal operation we must
    fully recreate a standby server,
    either on the former primary system when it comes up, or on a third,
    possibly new, system. Once complete, the primary and standby can be
    considered to have switched roles. Some people choose to use a third
    server to provide backup to the new primary until the new standby
    server is recreated,
    though clearly this complicates the system configuration and
    operational processes.
   </para>

   <para>
    So, switching from primary to standby server can be fast but requires
    some time to re-prepare the failover cluster. Regular switching from
    primary to standby is encouraged, since it allows regular downtime on
    each system for maintenance. This also acts as a test of the
    failover mechanism to ensure that it will really work when you need it.
    Written administration procedures are advised.
   </para>
  </sect2>

  <sect2 id="warm-standby-record">
   <title>Record-based Log Shipping</title>

   <para>
    <productname>PostgreSQL</productname> directly supports file-based
    log shipping as described above. It is also possible to implement
    record-based log shipping, though this requires custom development.
   </para>

   <para>
    An external program can call the <function>pg_xlogfile_name_offset()</>
    function (see <xref linkend="functions-admin">)
    to find out the file name and the exact byte offset within it of
    the current end of WAL. It can then access the WAL file directly
    and copy the data from the last known end of WAL through the current end
    over to the standby server(s). With this approach, the window for data
    loss is the polling cycle time of the copying program, which can be very
    small, but there is no wasted bandwidth from forcing partially-used
    segment files to be archived. Note that the standby servers'
    <varname>restore_command</> scripts still deal in whole WAL files,
    so the incrementally copied data is not ordinarily made available to
    the standby servers. It is of use only when the primary dies —
    then the last partial WAL file is fed to the standby before allowing
    it to come up. So correct implementation of this process requires
    cooperation of the <varname>restore_command</> script with the data
    copying program.
   </para>
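
   <para>
    For instance, the copying program might poll the primary with a query
    such as the following to learn the current end of WAL (a sketch; the
    functions are described in <xref linkend="functions-admin">):
<programlisting>
SELECT file_name, file_offset
  FROM pg_xlogfile_name_offset(pg_current_xlog_location());
</programlisting>
   </para>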
  </sect2>

  <sect2 id="backup-incremental-updated">
   <title>Incrementally Updated Backups</title>

   <indexterm zone="backup">
    <primary>incrementally updated backups</primary>
   </indexterm>

   <indexterm zone="backup">
    <primary>change accumulation</primary>
   </indexterm>

   <para>
    In a warm standby configuration, it is possible to offload the expense of
    taking periodic base backups from the primary server; instead base backups
    can be made by backing
    up a standby server's files. This concept is generally known as
    incrementally updated backups, log change accumulation, or more simply,
    change accumulation.
   </para>

   <para>
    If we take a backup of the standby server's files while it is following
    logs shipped from the primary, we will be able to reload that data and
    restart the standby's recovery process from the last restart point.
    We no longer need to keep WAL files from before the restart point.
    If we need to recover, it will be faster to recover from the incrementally
    updated backup than from the original base backup.
   </para>
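
   <para>
    A minimal sketch of such a backup, assuming a hypothetical standby data
    directory and backup destination:
<programlisting>
# copy the standby's files while it continues applying WAL;
# pg_start_backup()/pg_stop_backup() cannot be used here (see below)
rsync -a /var/lib/pgsql/standby_data/ /backup/standby-$(date +%Y%m%d)/
</programlisting>
   </para>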

   <para>
    Since the standby server is not <quote>live</>, it is not possible to
    use <function>pg_start_backup()</> and <function>pg_stop_backup()</>
    to manage the backup process; it will be up to you to determine how
    far back you need to keep WAL segment files to have a recoverable
    backup. You can do this by running <application>pg_controldata</>
    on the standby server to inspect the control file and determine the
    current checkpoint WAL location.
   </para>
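
   <para>
    For example (the paths and location value shown are illustrative):
<programlisting>
$ pg_controldata /var/lib/pgsql/standby_data | grep 'REDO location'
Latest checkpoint's REDO location:    0/5000020
</programlisting>
    WAL segment files containing data from before this location must be
    retained for the backup to be recoverable.
   </para>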
  </sect2>
 </sect1>

 <sect1 id="migration">
  <title>Migration Between Releases</title>

  <indexterm zone="migration">
   <primary>upgrading</primary>
  </indexterm>

  <indexterm zone="migration">
   <primary>version</primary>
   <secondary>compatibility</secondary>
  </indexterm>

  <para>
   This section discusses how to migrate your database data from one
   <productname>PostgreSQL</> release to a newer one.
   The software installation procedure <foreignphrase>per se</> is not the
   subject of this section; those details are in <xref linkend="installation">.
  </para>

  <para>
   As a general rule, the internal data storage format is subject to
   change between major releases of <productname>PostgreSQL</> (where
   the number after the first dot changes). This does not apply to
   different minor releases under the same major release (where the
   number after the second dot changes); these always have compatible
   storage formats. For example, releases 7.2.1, 7.3.2, and 7.4 are
   not compatible, whereas 7.2.1 and 7.2.2 are. When you update
   between compatible versions, you can simply replace the executables
   and reuse the data directory on disk. Otherwise you need to back
   up your data and restore it on the new server. This has to be done
   using <application>pg_dump</>; file system level backup methods
   obviously won't work. There are checks in place that prevent you
   from using a data directory with an incompatible version of
   <productname>PostgreSQL</productname>, so no great harm can be done by
   trying to start the wrong server version on a data directory.
  </para>

  <para>
   It is recommended that you use the <application>pg_dump</> and
   <application>pg_dumpall</> programs from the newer version of
   <productname>PostgreSQL</>, to take advantage of any enhancements
   that may have been made in these programs. Current releases of the
   dump programs can read data from any server version back to 7.0.
  </para>

  <para>
   The least downtime can be achieved by installing the new server in
   a different directory and running both the old and the new servers
   in parallel, on different ports. Then you can use something like

<programlisting>
pg_dumpall -p 5432 | psql -d postgres -p 6543
</programlisting>

   to transfer your data. Or use an intermediate file if you want.
   Then you can shut down the old server and start the new server on
   the port the old one was running on. You should make sure that the
   old database is not updated after you run <application>pg_dumpall</>,
   otherwise you will obviously lose that data. See <xref
   linkend="client-authentication"> for information on how to prohibit
   access.
  </para>

  <para>
   In practice you probably want to test your client
   applications on the new setup before switching over completely.
   This is another reason for setting up concurrent installations
   of old and new versions.
  </para>

  <para>
   If you cannot or do not want to run two servers in parallel, you can
   do the backup step before installing the new version, bring down
   the old server, move the old version out of the way, install the new
   version, start the new server, and restore the data. For example:

<programlisting>
pg_dumpall > backup                        # dump everything while the old server is running
pg_ctl stop                                # stop the old server
mv /usr/local/pgsql /usr/local/pgsql.old   # move the old installation aside
cd ~/postgresql-&version;
gmake install                              # install the new version
initdb -D /usr/local/pgsql/data            # create a fresh cluster
postgres -D /usr/local/pgsql/data          # start the new server
psql -f backup postgres                    # restore the dump
</programlisting>

   See <xref linkend="runtime"> about ways to start and stop the
   server and other details. The installation instructions will advise
   you of strategic places to perform these steps.
  </para>

  <note>
   <para>
    When you <quote>move the old installation out of the way</quote>
    it may no longer be perfectly usable. Some of the executable programs
    contain absolute paths to various installed programs and data files.
    This is usually not a big problem, but if you plan on using two
    installations in parallel for a while you should assign them
    different installation directories at build time. (This problem
    is rectified in <productname>PostgreSQL</> 8.0 and later, but you
    need to be wary of moving older installations.)
   </para>
  </note>
 </sect1>
</chapter>