From 7b9dc71405931f698d953378676d7c51ab2d9591 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Wed, 24 Jan 2001 23:15:19 +0000 Subject: [PATCH] WAL documentation, from Oliver Elphick and Vadim Mikheev. --- doc/src/sgml/admin.sgml | 3 +- doc/src/sgml/filelist.sgml | 3 +- doc/src/sgml/runtime.sgml | 53 +++++++- doc/src/sgml/wal.sgml | 321 +++++++++++++++++++++++++++++++++++++++++++++ 4 files changed, 377 insertions(+), 3 deletions(-) create mode 100644 doc/src/sgml/wal.sgml diff --git a/doc/src/sgml/admin.sgml b/doc/src/sgml/admin.sgml index 3379eba1b9..fc8fe19323 100644 --- a/doc/src/sgml/admin.sgml +++ b/doc/src/sgml/admin.sgml @@ -1,5 +1,5 @@ @@ -58,6 +58,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/Attic/admin.sgml,v 1.30 2001/01/24 19:42:46 &manage-ag; &user-manag; &backup; + &wal; &recovery; ®ress; &release; diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml index 5d784d7dcc..21174ae420 100644 --- a/doc/src/sgml/filelist.sgml +++ b/doc/src/sgml/filelist.sgml @@ -1,4 +1,4 @@ - + @@ -54,6 +54,7 @@ + diff --git a/doc/src/sgml/runtime.sgml b/doc/src/sgml/runtime.sgml index 17dcc10f84..3f68431c2c 100644 --- a/doc/src/sgml/runtime.sgml +++ b/doc/src/sgml/runtime.sgml @@ -1,5 +1,5 @@ @@ -1159,6 +1159,57 @@ env PGOPTIONS='-c geqo=off' psql + + WAL + + + See also for details on WAL + tuning. + + + + CHECKPOINT_TIMEOUT (integer) + + + Frequency of automatic WAL checkpoints, in seconds. + + + + + + WAL_BUFFERS (integer) + + + Number of buffers for WAL. This option can only be set at + server start. + + + + + + WAL_DEBUG (integer) + + + If non-zero, turn on WAL-related debugging output on standard + error. + + + + + + WAL_FILES (integer) + + + Number of log files that are created in advance at checkpoint + time. This option can only be set at server start. + + + + + + + + Short options diff --git a/doc/src/sgml/wal.sgml b/doc/src/sgml/wal.sgml new file mode 100644 index 0000000000..06198bd6e1 --- /dev/null +++ b/doc/src/sgml/wal.sgml @@ -0,0 +1,321 @@ + + + + Write-Ahead Logging (<acronym>WAL</acronym>) + + + Author + + Vadim Mikheev and Oliver Elphick + + + + + General Description + + + Write Ahead Logging (WAL) + is a standard approach to transaction logging. Its detailed + description may be found in most (if not all) books about + transaction processing. Briefly, WAL's central + concept is that changes to data files (where tables and indices + reside) must be written only after those changes have been logged - + that is, when log records have been flushed to permanent + storage. When we follow this procedure, we do not need to flush + data pages to disk on every transaction commit, because we know + that in the event of a crash we will be able to recover the + database using the log: any changes that have not been applied to + the data pages will first be redone from the log records (this is + roll-forward recovery, also known as REDO) and then changes made by + uncommitted transactions will be removed from the data pages + (roll-backward recovery - UNDO). + + + + Immediate Benefits of <acronym>WAL</acronym> + + + The first obvious benefit of using WAL is a + significantly reduced number of disk writes, since only the log + file needs to be flushed to disk at the time of transaction + commit; in multi-user environments, commits of many transactions + may be accomplished with a single fsync() of + the log file. Furthermore, the log file is written sequentially, + and so the cost of syncing the log is much less than the cost of + flushing the data pages. 
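+
+ The central rule described above (log records reach permanent storage
+ before the data pages they describe, and a commit forces only the log
+ to disk) can be sketched in a few lines of plain C.  The sketch below
+ is deliberately simplified and is not taken from the
+ PostgreSQL sources; all names, the fixed-size
+ buffer and the printed messages are invented for the illustration:
+
+     /* Simplified model of the write-ahead rule; not PostgreSQL code. */
+     #include <stdio.h>
+     #include <string.h>
+
+     static char   log_buf[1024];   /* in-memory log buffers                */
+     static size_t log_used;        /* bytes appended but not yet "flushed" */
+
+     /* Append a record describing a change (the LogInsert-like step). */
+     static void log_insert(const char *record)
+     {
+         size_t len = strlen(record);
+         if (log_used + len <= sizeof log_buf) {
+             memcpy(log_buf + log_used, record, len);
+             log_used += len;
+         }
+     }
+
+     /* Change a data page: the log record comes first; the page itself is
+      * only marked dirty here and would be written back some time later. */
+     static void modify_page(int page_no, const char *what)
+     {
+         char record[128];
+         snprintf(record, sizeof record, "page %d: %s", page_no, what);
+         log_insert(record);
+         printf("page %d marked dirty, not yet written\n", page_no);
+     }
+
+     /* Commit: only the log has to reach disk; in a real server this is
+      * the single fsync() of the log file, here it is just a message. */
+     static void commit(void)
+     {
+         printf("flushing %zu bytes of log, one sync for all changes\n",
+                log_used);
+         log_used = 0;
+     }
+
+     int main(void)
+     {
+         modify_page(7, "insert tuple");
+         modify_page(9, "update index entry");
+         commit();
+         return 0;
+     }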
+
+ The next benefit is consistency of the data pages.  The truth is
+ that, before WAL,
+ PostgreSQL was never able to guarantee
+ consistency in the case of a crash.  Before WAL,
+ any crash during writing could result in:
+
+ 1. index tuples pointing to non-existent table rows;
+
+ 2. index tuples lost in split operations;
+
+ 3. totally corrupted table or index page content, because of
+    partially written data pages.
+
+ Problems with indices (problems 1 and 2) could possibly have been
+ fixed by additional fsync() calls, but it is not obvious how to
+ handle the last case without WAL;
+ WAL saves the entire data page content in the
+ log if that is required to ensure page consistency for after-crash
+ recovery.
+
+
+ Future Benefits
+
+ In this first release of WAL, the UNDO operation
+ is not implemented, because of lack of time.  This means that changes
+ made by aborted transactions will still occupy disk space and that we
+ still need a permanent pg_log file to hold the status of
+ transactions, since we are not able to re-use transaction
+ identifiers.  Once UNDO is implemented, pg_log will no longer be
+ required to be permanent; it will be possible to remove pg_log at
+ shutdown, split it into segments and remove old segments.
+
+ With UNDO, it will also be possible to implement
+ savepoints to allow partial rollback of invalid
+ transaction operations (parser errors caused by mistyped commands,
+ insertion of duplicate primary/unique keys and so on) with the
+ ability to continue or commit valid operations made by the
+ transaction before the error.  At present, any error will invalidate
+ the whole transaction and require a transaction abort.
+
+ WAL offers the opportunity for a new method of
+ on-line database backup and restore (BAR).  To
+ use this method, one would have to make periodic saves of data files
+ to another disk, a tape or another host, and also archive the
+ WAL log files.  The database file copy and the
+ archived log files could be used to restore just as if one were
+ restoring after a crash.  Each time a new database file copy was
+ made, the old log files could be removed.  Implementing this facility
+ will require the logging of data file and index creation and
+ deletion; it will also require development of a method for copying
+ the data files (operating system copy commands are not suitable).
+
+
+ Implementation
+
+ WAL is automatically enabled from release 7.1
+ onwards.  No action is required from the administrator, with the
+ exception of ensuring that the additional disk-space requirements of
+ the WAL logs are met, and that any necessary
+ tuning is done (see the WAL configuration section below).
+
+ WAL logs are stored in the directory
+ $PGDATA/pg_xlog, as a set of segment files, each 16 MB in size.  Each
+ segment is divided into 8 kB pages.  The log record headers are
+ described in access/xlog.h; record content is dependent on the type
+ of event that is being logged.  Segment files are given sequential
+ numbers as names, starting at 0000000000000000.  The numbers do not
+ wrap, at present, but it should take a very long time to exhaust the
+ available stock of numbers.
+
+ The WAL buffers and control structure are in
+ shared memory and are handled by the backends; they are protected by
+ spinlocks.  The demand on shared memory is dependent on the number of
+ buffers; the default size of the WAL buffers is
+ 64 kB.
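+
+ A small worked example may make this layout more concrete.  The
+ program below is only an illustration (it is not part of
+ PostgreSQL, and the file-name format is simply
+ the zero-padded sequence number described above): it maps a byte
+ position in the log onto a segment file and a page within it, and
+ shows how much space a number of pre-created segments would occupy:
+
+     /* Illustrative only: 16 MB segments divided into 8 kB pages. */
+     #include <stdio.h>
+
+     #define SEGMENT_SIZE (16 * 1024 * 1024)
+     #define PAGE_SIZE    (8 * 1024)
+
+     int main(void)
+     {
+         unsigned long long pos = 41955385ULL;  /* arbitrary log position */
+
+         unsigned long long segment = pos / SEGMENT_SIZE;
+         unsigned long long page    = (pos % SEGMENT_SIZE) / PAGE_SIZE;
+
+         printf("log position %llu is in segment file %016llu, page %llu\n",
+                pos, segment, page);
+
+         /* Segments created in advance at checkpoint time (the WAL_FILES
+          * setting) simply reserve wal_files * 16 MB of disk space. */
+         int wal_files = 8;                     /* example value only */
+         printf("%d pre-created segments occupy %d MB\n",
+                wal_files, wal_files * 16);
+         return 0;
+     }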
+
+ It is advantageous if the log is located on a different disk from the
+ main database files.  This may be achieved by moving the pg_xlog
+ directory to another location (while the postmaster is shut down, of
+ course) and creating a symbolic link from the original location in
+ $PGDATA to the new location.
+
+ The aim of WAL, to ensure that the log is
+ written before database records are altered, may be subverted by disk
+ drives that falsely report a successful write to the kernel, when, in
+ fact, they have only cached the data and not yet stored it on the
+ disk.  A power failure in such a situation may still lead to
+ irrecoverable data corruption; administrators should try to ensure
+ that disks holding PostgreSQL's data and log
+ files do not make such false reports.
+
+
+ Database Recovery with WAL
+
+ After a checkpoint has been made and the log flushed, the
+ checkpoint's position is saved in the file pg_control.  Therefore,
+ when recovery is to be done, the backend first reads pg_control and
+ then the checkpoint record; next it reads the redo record, whose
+ position is saved in the checkpoint, and begins the REDO operation.
+ Because the entire content of a page is saved in the log on the first
+ modification of that page after a checkpoint, the pages will first be
+ restored to a consistent state.
+
+ Using pg_control to get the checkpoint position speeds up the
+ recovery process, but to handle possible corruption of pg_control, we
+ should actually implement the reading of existing log segments in
+ reverse order -- newest to oldest -- in order to find the last
+ checkpoint.  This has not yet been done in release 7.1.
+
+
+ WAL Configuration
+
+ There are several WAL-related parameters that
+ affect database performance.  This section explains their use.
+ Consult the chapter on the server run-time environment for details
+ about setting configuration parameters.
+
+ There are two commonly used WAL functions:
+ LogInsert and LogFlush.  LogInsert is used to place a new record into
+ the WAL buffers in shared memory.  If there is
+ no space for the new record, LogInsert will have to write (move to
+ kernel cache) a few filled WAL buffers.  This is
+ undesirable because LogInsert is used on every low-level database
+ modification (for example, tuple insertion) at a time when an
+ exclusive lock is held on the affected data pages and the operation
+ is supposed to be as fast as possible; what is worse, writing
+ WAL buffers may also cause the creation of a new
+ log segment, which takes even more time.  Normally,
+ WAL buffers should be written and flushed by a
+ LogFlush request, which is made, for the most part, at transaction
+ commit time to ensure that transaction records are flushed to
+ permanent storage.  On systems with high log output, LogFlush
+ requests may not occur often enough to prevent
+ WAL buffers from being written by LogInsert.  On
+ such systems one should increase the number of
+ WAL buffers by modifying the WAL_BUFFERS
+ parameter.  The default number of WAL buffers is
+ 8.  Increasing this value will have an impact on shared memory usage.
+
+ Checkpoints are points in the sequence of
+ transactions at which it is guaranteed that the data files have been
+ updated with all information logged before the checkpoint.  At
+ checkpoint time, all dirty data pages are flushed to disk and a
+ special checkpoint record is written to the log file.  As a result,
+ in the event of a crash, the recoverer knows from which record in the
+ log (known as the redo record) it should start the REDO operation,
+ since any changes made to data files before that record are already
+ on disk.  After a checkpoint has been made, any log segments written
+ before the redo record are removed, so checkpoints are used to free
+ disk space in the WAL directory.  (When
+ WAL-based BAR is
+ implemented, the log segments can be archived instead of just being
+ removed.)  The checkpoint maker is also able to create a few log
+ segments for future use, so as to avoid the need for LogInsert or
+ LogFlush to spend time creating them.
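+
+ The sequence of events at checkpoint time can be sketched as follows.
+ This is a simulation for illustration only, not
+ PostgreSQL source code; every function and type
+ in it is invented for the example and merely prints the step a real
+ server would perform:
+
+     /* Illustrative checkpoint sequence; not PostgreSQL internals. */
+     #include <stdio.h>
+
+     typedef unsigned long long LogPosition;  /* byte position in the log */
+
+     static LogPosition insert_pos = 123456789ULL;  /* pretend end of log */
+
+     static void flush_all_dirty_data_pages(void)
+     {
+         printf("flushing all dirty data pages to disk\n");
+     }
+
+     static LogPosition write_checkpoint_record(LogPosition redo)
+     {
+         LogPosition where = insert_pos;   /* the record is written here */
+         printf("writing checkpoint record (redo pointer %llu)\n", redo);
+         insert_pos += 64;                 /* the record uses some log space */
+         return where;
+     }
+
+     int main(void)
+     {
+         /* Everything logged before this position reaches the data files
+          * in the flush below, so recovery can start its REDO pass here. */
+         LogPosition redo = insert_pos;
+
+         flush_all_dirty_data_pages();
+         LogPosition checkpoint = write_checkpoint_record(redo);
+
+         printf("saving position %llu in pg_control\n", checkpoint);
+         printf("removing log segments written entirely before %llu\n", redo);
+         return 0;
+     }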
+
+ The WAL log is held on the disk as a set of
+ 16 MB files called segments.  By default a new
+ segment is created only if more than 75% of the current segment is
+ used.  One can instruct the server to create up to 64 log segments at
+ checkpoint time by modifying the WAL_FILES configuration parameter.
+
+ For faster after-crash recovery, it would be better to create
+ checkpoints more often.  However, one should balance this against the
+ cost of flushing dirty data pages; in addition, to ensure data page
+ consistency, the first modification of a data page after each
+ checkpoint results in logging the entire page content, thus
+ increasing the volume of log output and the size of the log.
+
+ By default, the postmaster spawns a special backend process to create
+ the next checkpoint 300 seconds after the previous checkpoint's
+ creation.  One can change this interval by modifying the
+ CHECKPOINT_TIMEOUT parameter.  It is also possible to force a
+ checkpoint with the SQL command CHECKPOINT.
+
+ Setting the WAL_DEBUG parameter to any non-zero value will result in
+ each LogInsert and LogFlush WAL call being
+ logged to standard error.  At present, it makes no difference what
+ the non-zero value is.  This option may be replaced by a more general
+ mechanism in the future.
+
+ The COMMIT_DELAY parameter defines for how long the backend will be
+ forced to sleep after writing a commit record to the log with a
+ LogInsert call, but before performing a LogFlush.  This delay allows
+ other backends to add their commit records to the log, so that all of
+ them can be flushed with a single log sync.  Unfortunately, this
+ mechanism is not fully implemented in release 7.1, so there is at
+ present no point in changing this parameter from its default value of
+ 5 microseconds.
+
-- 
2.11.0