Commit graph

7873 commits

Daniel Gustafsson
b3a37ffbc5 Use PG_DATA_CHECKSUM_OFF instead of hardcoded value
For a long time, the online checksums patchset kept the "off" state as
literal zero without a label, to be consistent with the previous coding
which only had a label for the "on" state.  Later, when an "off" label
was added, not all uses in the code got the memo.  Fix by setting these
to PG_DATA_CHECKSUM_OFF.

While there, fix a duplicate word in a comment introduced by the same
commit.

Author: Aleksander Alekseev <aleksander@tigerdata.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/CAJ7c6TPRTnQFXXX1CRcYoTLXw2swtDH==uSz1MYoMKdLrKZHjA@mail.gmail.com
2026-04-06 22:11:53 +02:00
Álvaro Herrera
28d534e2ae Add CONCURRENTLY option to REPACK
When this flag is specified, REPACK no longer acquires access-exclusive
lock while the new copy of the table is being created; instead, it
creates the initial copy under share-update-exclusive lock only (same as
vacuum, etc), and it follows an MVCC snapshot; it sets up a replication
slot starting at that snapshot, and uses a concurrent background worker
to do logical decoding starting at the snapshot to populate a stash of
concurrent data changes.  Those changes can then be re-applied to the
new copy of the table just before swapping the relfilenodes.
Applications can continue to access the original copy of the table
normally until just before the swap, which is the only point at which
the access-exclusive lock is needed.
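
A minimal usage sketch (the table name is illustrative, and the exact
placement or spelling of the new keyword may differ from this):

    REPACK CONCURRENTLY my_table;

While this runs, other sessions can keep reading and writing my_table;
only the final relfilenode swap takes the access-exclusive lock.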

There are some loose ends in this commit:
1. concurrent repack needs its own replication slot in order to apply
   logical decoding; such slots are a scarce resource and easy to run
   out of.
2. due to the way the historic snapshot is initially set up, only one
   REPACK process can be running at any one time on the whole system.
3. there's a danger of deadlocking (and thus abort) due to the lock
   upgrade required at the final phase.

These issues will be addressed in upcoming commits.

The design and most of the code are by Antonin Houska, heavily based on
his own pg_squeeze third-party implementation.

Author: Antonin Houska <ah@cybertec.at>
Co-authored-by: Mihail Nikalayeu <mihailnikalayeu@gmail.com>
Co-authored-by: Álvaro Herrera <alvherre@kurilemu.de>
Reviewed-by: Matthias van de Meent <boekewurm+postgres@gmail.com>
Reviewed-by: Srinath Reddy Sadipiralla <srinath2133@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: Jim Jones <jim.jones@uni-muenster.de>
Reviewed-by: Robert Treat <rob@xzilla.net>
Reviewed-by: Noriyoshi Shinoda <noriyoshi.shinoda@hpe.com>
Reviewed-by: vignesh C <vignesh21@gmail.com>
Discussion: https://postgr.es/m/5186.1706694913@antos
Discussion: https://postgr.es/m/202507262156.sb455angijk6@alvherre.pgsql
2026-04-06 21:55:08 +02:00
Tom Lane
d516974840 Support more object types within CREATE SCHEMA.
Having rejected the principle that we should know how to re-order
the sub-commands of CREATE SCHEMA, there is not really anything
except a little coding to stop us from supporting more object types.
This patch adds support for creating functions (including procedures
and aggregates), operators, types (including domains), collations,
and text search objects.
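
As a hedged illustration of what is now accepted (object names are
invented, and note that CREATE SCHEMA sub-commands are not separated
by semicolons):

    CREATE SCHEMA app
        CREATE DOMAIN app.positive_int AS int CHECK (VALUE > 0)
        CREATE COLLATION app.german (locale = 'de_DE')
        CREATE FUNCTION app.add_one(i int) RETURNS int
            LANGUAGE sql IMMUTABLE RETURN i + 1;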

SQL:2021 specifies that we should allow functions, procedures,
types, domains, and collations, so this moves us a great deal
closer to full SQL compatibility of CREATE SCHEMA.  What remains
missing from their list are casts, transforms, roles, and some
object types we don't support yet (e.g. CREATE CHARACTER SET).
Supporting casts or transforms would be problematic because
they don't have names at all, let alone schema-qualified names,
so it'd be quite a stretch to say that they belong to a schema.
Roles likewise are not schema-qualified, plus they are global
to a cluster, making it even less reasonable to consider them
as belonging to a schema.  So I don't see us trying to complete
the list.

User-defined aggregates and operators are outside the spec's ken,
as are text search objects, so adding them does not do anything for
spec compatibility.  But they go along with these other object types,
plus it takes no additional code to support them since they are
represented as DefineStmts like some variants of CREATE TYPE.
It would indeed take some effort to reject them.

Author: Kirill Reshke <reshkekirill@gmail.com>
Author: Jian He <jian.universality@gmail.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CALdSSPh4jUSDsWu3K58hjO60wnTRR0DuO4CKRcwa8EVuOSfXxg@mail.gmail.com
2026-04-06 15:16:25 -04:00
Masahiko Sawada
1ff3180ca0 Allow autovacuum to use parallel vacuum workers.
Previously, autovacuum always disabled parallel vacuum regardless of
the table's index count or configuration. This commit enables
autovacuum workers to use parallel index vacuuming and index cleanup,
using the same parallel vacuum infrastructure as manual VACUUM.

Two new configuration options control the feature. The GUC
autovacuum_max_parallel_workers sets the maximum number of parallel
workers a single autovacuum worker may launch; it defaults to 0,
preserving existing behavior unless explicitly enabled. The storage
parameter autovacuum_parallel_workers provides a per-table limit.
A value of 0 disables parallel vacuum for the table, a
positive value caps the worker count (still bounded by the GUC), and
-1 (the default) defers to the GUC.
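
For example, one might enable the feature cluster-wide and then cap it
for a particular table (names are illustrative):

    ALTER SYSTEM SET autovacuum_max_parallel_workers = 4;
    SELECT pg_reload_conf();
    ALTER TABLE big_table SET (autovacuum_parallel_workers = 2);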

To handle cases where autovacuum workers receive a SIGHUP and update
their cost-based vacuum delay parameters mid-operation, a new
propagation mechanism is added to vacuumparallel.c. The leader stores
its effective cost parameters in a DSM segment. Parallel vacuum
workers poll for changes in vacuum_delay_point(); if an update is
detected, they apply the new values locally via VacuumUpdateCosts().

A new test module, src/test/modules/test_autovacuum, is added to
verify that parallel autovacuum workers are correctly launched and
that cost-parameter updates are propagated as expected.

The patch was originally proposed by Maxim Orlov, but the
implementation has undergone significant architectural changes
since then during the review process.

Author: Daniil Davydov <3danissimo@gmail.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Sami Imseih <samimseih@gmail.com>
Reviewed-by: Matheus Alcantara <matheusssilv97@gmail.com>
Reviewed-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Reviewed-by: Alexander Korotkov <aekorotkov@gmail.com>
Reviewed-by: zengman <zengman@halodbtech.com>
Discussion: https://postgr.es/m/CACG=ezZOrNsuLoETLD1gAswZMuH2nGGq7Ogcc0QOE5hhWaw=cw@mail.gmail.com
2026-04-06 11:48:29 -07:00
Daniel Gustafsson
f19c0eccae Online enabling and disabling of data checksums
This allows data checksums to be enabled, or disabled, in a running
cluster without restricting access to the cluster during processing.

Previously, data checksums could only be enabled during initdb or when
the cluster was offline, using the pg_checksums application.  This
commit introduces functionality to enable, or disable, data checksums
while the cluster is running, regardless of how it was initialized.

A background worker launcher process is responsible for launching a
dynamic per-database background worker which marks all buffers dirty
for all relations with storage, in order for them to have data
checksums calculated on write.  Once all relations in all databases
have been processed, the data_checksums state will be set to on and
the cluster will at that point be identical to one which had data
checksums enabled during initialization or via offline processing.
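
A usage sketch, assuming the SQL-level entry points are named
pg_enable_data_checksums() and pg_disable_data_checksums() (the names
are not spelled out above):

    SELECT pg_enable_data_checksums();
    -- the launcher and per-database workers process all relations;
    -- once they finish, the cluster reports:
    SHOW data_checksums;    -- on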

When data checksums are being enabled, concurrent I/O operations
from backends other than the data checksums worker will write the
checksums but not verify them on reading.  Only when all backends
have absorbed the procsignalbarrier for setting data_checksums to
on will they also start verifying checksums on reading.  The same
process is repeated during disabling; all backends write checksums
but do not verify them until the barrier for setting the state to
off has been absorbed by all.  This in-progress state is used to
ensure there are no false negatives (or positives) due to reading
a checksum which is not in sync with the page.

A new test module, test_checksums, is introduced with an extensive
set of tests covering both online and offline data checksum mode
changes.  The tests which run concurrent pgbench during online
processing are gated behind the PG_TEST_EXTRA flag due to being
very expensive to run.  Two levels of PG_TEST_EXTRA flags exist
to turn on a subset of the expensive tests, or the full suite of
multiple runs.

This work is based on an earlier version of this patch which was
reviewed by, among others, Heikki Linnakangas, Robert Haas, Andres
Freund, Tomas Vondra, Michael Banck and Andrey Borodin.  During
the work on this new version, Tomas Vondra has given invaluable
assistance with not only coding and reviewing but very in-depth
testing.

Author: Daniel Gustafsson <daniel@yesql.se>
Author: Magnus Hagander <magnus@hagander.net>
Co-authored-by: Tomas Vondra <tomas@vondra.me>
Reviewed-by: Tomas Vondra <tomas@vondra.me>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://postgr.es/m/CABUevExz9hUUOLnJVr2kpw9Cx=o4MCr1SVKwbupzuxP7ckNutA@mail.gmail.com
Discussion: https://postgr.es/m/20181030051643.elbxjww5jjgnjaxg@alap3.anarazel.de
Discussion: https://postgr.es/m/CABUevEwE3urLtwxxqdgd5O2oQz9J717ZzMbh+ziCSa5YLLU_BA@mail.gmail.com
2026-04-03 22:58:51 +02:00
Heikki Linnakangas
79534f9065 Change default of max_locks_per_transaction to 128
The previous commits reduced the amount of memory available for locks
by eliminating the "safety margins" and by settling the split between
LOCK and PROCLOCK tables at startup. The allocation is now more
deterministic, but it also means that you often hit one of the limits
sooner than before. To compensate for that, bump up
max_locks_per_transaction from 64 to 128. With that there is a little
more space in both hash tables than the effective maximum size for
either table before the previous commits.

This only changes the default, so if you had changed
max_locks_per_transaction in postgresql.conf, you will still have
fewer locks available than before for the same setting value. This
should be noted in the release notes. A good rule of thumb is that if
you double max_locks_per_transaction, you should be able to get as
many locks as before.
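
For example, a site that had customized the setting and wants the same
effective lock capacity as before could double its value (sketch):

    -- postgresql.conf previously had max_locks_per_transaction = 256
    ALTER SYSTEM SET max_locks_per_transaction = 512;
    -- takes effect only after a server restart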

Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Reviewed-by: Matthias van de Meent <boekewurm+postgres@gmail.com>
Discussion: https://www.postgresql.org/message-id/e07be2ba-856b-4ff5-8313-8b58b6b4e4d0@iki.fi
2026-04-03 20:27:46 +03:00
Tom Lane
ebba64c08d Further harden tests that might use not-so-compatible tar versions.
Buildfarm testing shows that OpenSUSE (and perhaps related platforms?)
configures GNU tar in such a way that it'll archive sparse WAL files
by default, thus triggering the pax-extension detection code added by
bc30c704a.  Thus, we need something similar to 852de579a but for
GNU tar's option set.  "--format=ustar" seems to do the trick.

Moreover, the buildfarm shows that pg_verifybackup's 003_corruption.pl
test script is also triggering creation of pax-format tar files on
that platform.  We had not noticed because those test cases all fail
(intentionally) before getting to the point of trying to verify WAL
data.

Since that means two TAP scripts need this option-selection logic, and
plausibly more will do so in future, factor it out into a subroutine
in Test::Utils.  We also need to back-patch the 003_corruption.pl fix
into v18, where it's also failing.

While at it, clean up some places where guards for $tar being empty
or undefined were incomplete or even outright backwards.  Presumably,
we missed noticing because the set of machines that run TAP tests
and don't have tar installed is empty.  But if we're going to try
to handle that scenario, we should do it correctly.

Reported-by: Tomas Vondra <tomas@vondra.me>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/02770bea-b3f3-4015-8a43-443ae345379c@vondra.me
Backpatch-through: 18
2026-04-02 17:21:27 -04:00
Tom Lane
bc30c704ad Harden astreamer tar parsing logic against archives it can't handle.
Previously, there was essentially no verification in this code that
the input is a tar file at all, let alone that it fits into the
subset of valid tar files that we can handle.  This was exposed by
the discovery that we couldn't handle files that FreeBSD's tar
makes, because it's fairly aggressive about converting sparse WAL
files into sparse tar entries.  To fix:

* Bail out if we find a pax extension header.  This covers the
sparse-file case, and also protects us against scenarios where
the pax header changes other file properties that we care about.
(Eventually we may extend the logic to actually handle such
headers, but that won't happen in time for v19.)

* Be more wary about tar file type codes in general: do not assume
that anything that's neither a directory nor a symlink must be a
regular file.  Instead, we just ignore entries that are none of the
three supported types.

* Apply pg_dump's isValidTarHeader to verify that a purported
header block is actually in tar format.  To make this possible,
move isValidTarHeader into src/port/tar.c, which is probably where
it should have been since that file was created.

I also took the opportunity to const-ify the arguments of
isValidTarHeader and tarChecksum, and to use symbols not hard-wired
constants inside tarChecksum.

Back-patch to v18 but not further.  Although this code exists inside
pg_basebackup in older branches, it's not really exposed in that
usage to tar files that weren't generated by our own code, so it
doesn't seem worth back-porting these changes across 3c9056981
and f80b09bac.  I did choose to include a back-patch of 5868372bb
into v18 though, to minimize cosmetic differences between these
two branches.

Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://postgr.es/m/3049460.1775067940@sss.pgh.pa.us
Backpatch-through: 18
2026-04-02 12:20:36 -04:00
Andrew Dunstan
bb6ae9707c Use command_ok for pg_regress calls in 002_pg_upgrade and 027_stream_regress
Now that command_ok() captures and displays failure output, use it
instead of system() plus manual diff-dumping in these two tests.  This
simplifies both scripts and produces consistent, truncated output on
failure.

Author: Jelte Fennema-Nio <postgres@jeltef.nl>
Reviewed-by: Andrew Dunstan <andrew@dunslane.net>
Reviewed-by: Corey Huinker <corey.huinker@gmail.com>
Discussion: https://postgr.es/m/DFYFWM053WHS.10K8ZPJ605UFK@jeltef.nl
2026-04-02 08:13:44 -04:00
Andrew Dunstan
1402b8d2fc perl tap: Show failed command output
Capture stdout and stderr from command_ok() and command_fails() and emit
them as TAP diagnostics on failure.  Output is truncated to the first
and last 30 lines per channel to avoid flooding.

A new helper _diag_command_output() is introduced in Utils.pm so
both functions share the same truncation and formatting logic.

Author: Jelte Fennema-Nio <postgres@jeltef.nl>
Reviewed-by: Andrew Dunstan <andrew@dunslane.net>
Reviewed-by: Corey Huinker <corey.huinker@gmail.com>
Reviewed-by: Zsolt Parragi <zsolt.parragi@percona.com>
Reviewed-by: Nazir Bilal Yavuz <byavuz81@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/DFYFWM053WHS.10K8ZPJ605UFK@jeltef.nl
2026-04-02 08:13:44 -04:00
Andres Freund
82c0cb4e67 pg_test_timing: Reduce per-loop overhead
The pg_test_timing program was previously using INSTR_TIME_GET_NANOSEC on an
absolute instr_time value in order to do a diff, which goes against the spirit
of how the GET_* macros are supposed to be used, and will cause overhead in a
future change that assumes these macros are typically used on intervals only.

Additionally, the program was doing unnecessary work in the test loop by
measuring the time elapsed, instead of checking the existing current time
measurement against a target end time. To support that, introduce a new
INSTR_TIME_ADD_NANOSEC macro that allows adding user-defined nanoseconds
to an instr_time variable.

While modifying the relevant code anyway, simplify it by not handling
durations <= 0 in test_timing(), since duration is unsigned and 0 is
disallowed by the caller.

Author: Lukas Fittl <lukas@fittl.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CAP53Pkyxv3-3gX+aOxC5tX0p2v9RHU+XH0iyvb64+ZnBXj92vg@mail.gmail.com
2026-04-01 20:07:38 -04:00
Thomas Munro
852de579a6 Fix pg_waldump/t/001_basic.pl with BSD tar on ZFS.
The new test fails with an error about a missing WAL file on that
stack, because it is archived in GNU tar's --sparse --format=posix
format.  BSD tar uses that format by default, unlike GNU tar itself, and
ZFS triggers it by implicitly creating sparse files when it sees a lot
of zeroes.

The problem will surely also affect real users of the new tar support in
pg_waldump (commit b15c1513) and pg_verifybackup (commit b3cf461b3) on
such systems.  Ideas are under discussion, but for now the test is made
to pass by disabling sparse file detection in BSD tar.

Diagnosed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Andrew Dunstan <andrew@dunslane.net>
Discussion: https://postgr.es/m/1624716.1774736283%40sss.pgh.pa.us
2026-04-01 14:37:01 +13:00
Amit Kapila
5984ea868e Change syntax of EXCEPT TABLE clause in publication commands.
Adjust the syntax of the EXCEPT clause in CREATE/ALTER PUBLICATION
added in commits fd366065e0 and 493f8c6439 to move the TABLE keyword
inside the relation list.

Old syntax:
CREATE PUBLICATION ... FOR ALL TABLES EXCEPT TABLE (t1, ...);
ALTER PUBLICATION  ... SET ALL TABLES EXCEPT TABLE (t1, ...);

New syntax:
CREATE PUBLICATION ... FOR ALL TABLES EXCEPT (TABLE t1, ...);
ALTER PUBLICATION  ... SET ALL TABLES EXCEPT (TABLE t1, ...);

This is to ensure that the inclusion and exclusion lists can be
specified in the same way. Previously, the exclusion table list was
specified as TABLE (t1, t2, t3), while the inclusion list could be
specified as TABLE t1, t2, t3, or as TABLE t1, TABLE t2, TABLE t3.

This change is purely syntactic and does not alter behavior.

Reported-by: Masahiko Sawada <sawada.mshk@gmail.com>
Author: vignesh C <vignesh21@gmail.com>
Author: Shlok Kyal <shlok.kyal.oss@gmail.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Zhijie Hou <houzj.fnst@fujitsu.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Discussion: https://postgr.es/m/CAD21AoCC8XuwfX62qKBSfHUAoww_XB3_84HjswgL9jxQy696yw@mail.gmail.com
Discussion: https://postgr.es/m/CALDaNm3=JrucjhiiwsYQw5-PGtBHFONa6F7hhWCXMsGvh=tamA@mail.gmail.com
2026-03-31 09:40:51 +05:30
Nathan Bossart
bab2f27eaa Remove bits* typedefs.
In addition to removing the bits8, bits16, and bits32 typedefs,
this commit replaces all uses with uint8, uint16, or uint32.  bits*
provided little benefit beyond establishing the intent of the
variable, and they were inconsistently used for that purpose.
Third-party code should instead use the corresponding uint*
typedef.

Suggested-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Discussion: https://postgr.es/m/absbX33E4eaA0Ity%40nathan
2026-03-30 16:12:08 -05:00
Peter Eisentraut
75a5914d00 Fix accidentally casting away const
Recently introduced in commit b15c151398.
2026-03-30 09:24:10 +02:00
Fujii Masao
1a11405a43 psql: Make \d+ inheritance tables list formatting consistent with other objects
This follows up on the previous change (commit 7bff9f106a) for partitions by
applying the same formatting to inheritance table lists.

Previously, \d+ <table> displayed inheritance tables differently from other
object lists: the first inheritance table appeared on the same line as the
"Inherits" header. For example:

    Inherits: test_like_5,
              test_like_5x

This commit updates the output so that inheritance tables are listed
consistently with other objects, with each entry on its own line starting
below the header:

    Inherits:
        test_like_5
        test_like_5x

Author: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Neil Chen <carpenter.nail.cz@gmail.com>
Reviewed-by: Greg Sabino Mullane <htamfids@gmail.com>
Reviewed-by: Soumya S Murali <soumyamurali.work@gmail.com>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/CAHut+Pu1puO00C-OhgLnAcECzww8MB3Q8DCsvx0cZWHRfs4gBQ@mail.gmail.com
2026-03-30 11:21:22 +09:00
Fujii Masao
7bff9f106a psql: Make \d+ partition list formatting consistent with other objects
Previously, \d+ <table> displayed partitions differently from other object
lists: the first partition appeared on the same line as the "Partitions"
header. For example:

    Partitions: pt12 FOR VALUES IN (1, 2),
                pt34 FOR VALUES IN (3, 4)

This commit updates the output so that partitions are listed consistently
with other objects, with each entry on its own line starting below the header:

    Partitions:
        pt12 FOR VALUES IN (1, 2)
        pt34 FOR VALUES IN (3, 4)

Author: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Neil Chen <carpenter.nail.cz@gmail.com>
Reviewed-by: Greg Sabino Mullane <htamfids@gmail.com>
Reviewed-by: Soumya S Murali <soumyamurali.work@gmail.com>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/CAHut+Pu1puO00C-OhgLnAcECzww8MB3Q8DCsvx0cZWHRfs4gBQ@mail.gmail.com
2026-03-30 11:06:42 +09:00
Tom Lane
41d69e6dcc Add labels to help make psql's hidden queries more understandable.
We recommend looking at psql's "-E" output to help understand the
system catalogs, but in some cases (particularly table displays)
there's a bunch of rather impenetrable SQL there.  As a small
improvement, label each query issued by describe.c with a short
description of its purpose.  The code is arranged so that the
labels also appear as SQL comments in the server log, if the
server is logging these commands.

We could expand this policy to every use of PSQLexec(), but most of
the ones outside describe.c are issuing simple commands like "BEGIN"
or "COMMIT", which don't seem to need such glosses.  I did add
labels to the commands issued by \sf, \sv and friends.

Also, make the -E and log output for hidden queries say
"INTERNAL QUERY" not just "QUERY", to distinguish them from
user-written queries.

Author: Greg Sabino Mullane <htamfids@gmail.com>
Co-authored-by: David Christensen <david+pg@pgguru.net>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAKAnmmJz8Hh=8Ru8jgzySPWmLBhnv4=oc_0KRiz-UORJ0Dex+w@mail.gmail.com
2026-03-26 11:36:52 -04:00
Tom Lane
e9d723487b Remove a low-value, high-risk optimization in pg_waldump.
The code removed here deleted already-used data from a partially-read
WAL segment's hashtable entry.  The intent was evidently to try to
keep the entry's memory consumption below the WAL segment's total
size, but we don't use WAL segments that are so large as to make that
a big win.  The important memory-space optimization is to remove
hashtable entries altogether when done with them, and that's handled
elsewhere.  To buy that, we must accept a substantially more complex
(and under-documented) logical invariant about what is in entry->buf,
as well as complex and under-documented interactions with the entry
spilling logic, various re-checking code paths in xlogreader.c,
and pg_waldump's overall data processing order.  Any of those aspects
could have bugs lurking still, and are quite likely to be prone to
new bugs after future code changes.

Given the number of bugs we've already found in commit b15c15139,
I judge that simplifying anything we possibly can is a good decision.

While here, revise and extend some related comments.

Discussion: https://postgr.es/m/374225.1774459521@sss.pgh.pa.us
2026-03-25 19:15:52 -04:00
Tom Lane
ff84efe4fd Fix misuse of simplehash.h hash operations in pg_waldump.
Both ArchivedWAL_insert() and ArchivedWAL_delete_item() can cause
existing hashtable entries to move.  The code didn't account for this
and could leave privateInfo->cur_file pointing at a dead or incorrect
entry, with hilarity ensuing.  Likewise, read_archive_wal_page calls
read_archive_file which could result in movement of the hashtable
entry it is working with.

I believe these bugs explain some odd buildfarm failures, although
the amount of data we use in pg_waldump's TAP tests isn't enough to
trigger them reliably.

This code's all new as of commit b15c15139, so no need for back-patch.

Discussion: https://postgr.es/m/374225.1774459521@sss.pgh.pa.us
2026-03-25 18:37:28 -04:00
Tom Lane
03b1e30e7a Fix file descriptor leakages in pg_waldump.
TarWALDumpCloseSegment was of the opinion that it didn't need to
do anything.  It was mistaken: it has to close the open file if
any, because nothing else will, leading to a descriptor leak.

In addition, we failed to ensure that any file being read by the
XLogReader machinery gets closed before the atexit callback tries to
cleanup the temporary directory holding spilled WAL files.  While the
file would have been closed already in case of a success exit, this
doesn't happen in case of pg_fatal() exits.  The least messy way
to fix that is to move the atexit function into pg_waldump.c,
where it has easier access to the XLogReaderState pointer and to
WALDumpCloseSegment.

These FD leakages are pretty insignificant on Unix-ish platforms,
but they're a bug on Windows, because they prevent successful cleanup
of the temporary directory for extracted WAL files.  (Windows can't
delete a directory that holds a deleted-but-still-open file.)
This is visible in occasional buildfarm failures.

This code's all new as of commit b15c15139, so no need for back-patch.

Discussion: https://postgr.es/m/374225.1774459521@sss.pgh.pa.us
2026-03-25 18:28:42 -04:00
Masahiko Sawada
5fa7837d9a psql: Fix tab completion for FOREIGN DATA WRAPPER and SUBSCRIPTION.
Commit 8185bb5347 extended the CREATE/ALTER SUBSCRIPTION and
CREATE/ALTER FOREIGN DATA WRAPPER commands, but missed the
corresponding tab-completion logic. This commit fixes that oversight
by adding completion support for:

- The CONNECTION keyword in CREATE/ALTER FOREIGN DATA WRAPPER.
- The list of foreign servers in CREATE/ALTER SUBSCRIPTION.

Author: Yamaguchi Atsuo <acrobatcoder@gmail.com>
Discussion: https://postgr.es/m/CAKSyusJWdWcUKVd3qJXcEaQxJewGymQWV_r3-mc=Knrqo0AZ_g@mail.gmail.com
2026-03-25 09:30:26 -07:00
Amit Kapila
6b5b7eae3a pg_createsubscriber: Add -l/--logdir option to redirect output to files.
This commit introduces a -l (or --logdir) argument to pg_createsubscriber,
allowing users to specify a directory for log files.

When enabled, a timestamped subdirectory is created within the specified
log directory, containing:

pg_createsubscriber_server.log: Captures logs from the standby server
during its start/stop cycles.
pg_createsubscriber_internal.log: Captures the tool's own internal
diagnostic and progress messages.

This ensures that transient server and utility messages are preserved for
troubleshooting after the subscriber creation process completes or errors
out.

Author: Gyan Sreejith <gyan.sreejith@gmail.com>
Author: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: vignesh C <vignesh21@gmail.com>
Reviewed-by: Euler Taveira <euler@eulerto.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: Shlok Kyal <shlok.kyal.oss@gmail.com>
Reviewed-by: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/CAEqnbaUthOQARV1dscGvB_EsqC-YfxiM6rWkVDHc+G+f4oSUHw@mail.gmail.com
2026-03-25 11:22:07 +05:30
Tom Lane
ca1f1ade3f Remove read_archive_file()'s "count" parameter.
Instead, always try to fill the allocated buffer completely.
The previous coding apparently intended (though it's undocumented)
to read only small amounts of data until we are able to identify the
WAL segment size and begin filtering out unwanted segments.  However
this extra complication has no measurable value according to simple
testing here, and it could easily be a net loss if there is a
substantial amount of non-WAL data in the archive file before the
first WAL file.

Discussion: https://postgr.es/m/3341199.1774221191@sss.pgh.pa.us
2026-03-24 12:17:12 -04:00
Peter Eisentraut
6bc7449eac Fix accidentally casting away const
Recently introduced in commit 4b5ba0c4ca.
2026-03-24 14:34:50 +01:00
Fujii Masao
1c162c965a Report detailed errors from XLogFindNextRecord() failures.
Previously, XLogFindNextRecord() did not return detailed error information
when it failed to find a valid WAL record. As a result, callers such as
the WAL summarizer, pg_waldump, and pg_walinspect could only report generic
errors (e.g., "could not find a valid record after ..."), making
troubleshooting difficult.

This commit fixes the issue by extending XLogFindNextRecord() to return
detailed error information on failure, and updating its callers to include
those details in their error messages.

For example, when pg_waldump is run on a WAL file with an invalid magic number,
it now reports not only the generic error but also the specific cause
(e.g., "invalid magic number").

Author: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>
Reviewed-by: Mircea Cadariu <cadariu.mircea@gmail.com>
Reviewed-by: Japin Li <japinli@hotmail.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/CAO6_XqoxJXddcT4wkd9Xd+cD6Sz-fyspRGuV4Bq-wbXG4pVNzA@mail.gmail.com
2026-03-24 22:33:09 +09:00
Amit Kapila
d6628a5ea0 pg_createsubscriber: Introduce module-specific logging functions.
Replace generic pg_log_* calls with report_createsub_log() and
report_createsub_fatal(). This refactor provides the necessary
infrastructure to support logging to external files via the -l option.

These new functions enable the utility to route messages to both the
terminal and a log file based on the logging configuration and verbosity
levels provided by the user.

Author: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Author: Gyan Sreejith <gyan.sreejith@gmail.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Discussion: https://postgr.es/m/CAEqnbaUthOQARV1dscGvB_EsqC-YfxiM6rWkVDHc+G+f4oSUHw@mail.gmail.com
2026-03-23 09:23:20 +05:30
Tom Lane
69c57466a7 Fix another buglet in archive_waldump.c.
While re-reading 860359ea0, I noticed another problem: when
spilling to a temp file, it did not bother to check the result
of fclose().  This is bad since write errors (like ENOSPC)
may not be reported until close time.
2026-03-22 18:48:38 -04:00
Tom Lane
860359ea02 Fix assorted bugs in archive_waldump.c.
1. archive_waldump.c called astreamer_finalize() nowhere.  This meant
that any data retained in decompression buffers at the moment we
detect archive EOF would never reach astreamer_waldump_content(),
resulting in surprising failures if we actually need the last few
bytes of the archive file.

To fix that, make read_archive_file() do the finalize once it detects
EOF.  Change its API to return a boolean "yes there's more data"
rather than the entirely-misleading raw count of bytes read.

2. init_archive_reader() relied on privateInfo->cur_file to track
which WAL segment was being read, but cur_file can become NULL if a
member trailer is processed during a read_archive_file() call.  This
could cause unreproducible "could not find WAL in archive" failures,
particularly with compressed archives where all the WAL data fits in
a small number of compressed bytes.

Fix by scanning the hash table after each read to find any cached
WAL segment with sufficient data, instead of depending on cur_file.
Also reduce the minimum data requirement from XLOG_BLCKSZ to
sizeof(XLogLongPageHeaderData), since we only need the long page
header to extract the segment size.

We likewise need to fix init_archive_reader() to scan the whole
hash table for irrelevant entries, since we might have already
loaded more than one entry when the data is compressible enough.

3. get_archive_wal_entry() relied on tracking cur_file to identify
WAL hash table entries that need to be spilled to disk.  However,
this can't work for entries that are read completely within a
single read_archive_file call: the caller will never see cur_file
pointing at such an entry.  Instead, scan the WAL hash table to
find entries we should spill.  This also fixes a buglet that any
hash table entries completely loaded during init_archive_reader
were never considered for spilling.

Also, simplify the logic tremendously by not attempting to spill
entries that haven't been read fully.  I am not convinced that the old
logic handled that correctly in every path, and it's really not worth
the complication and risk of bugs to try to spill entries on the fly.
We can just write them in a single go once they are no longer the
cur_file.

4. Fix a rather critical performance problem: the code thought that
resetStringInfo() will reclaim storage, but it doesn't.  So by the
end of the run we'd have consumed storage space equal to the total
amount of WAL read, negating all the effort of the spill logic.

Also document the contract that cur_file can change (or become NULL)
during a single read_archive_file() call, since the decompression
pipeline may produce enough output to trigger multiple astreamer
callbacks.

Author: Tom Lane <tgl@sss.pgh.pa.us>
Co-authored-by: Andrew Dunstan <andrew@dunslane.net>
Discussion: https://postgr.es/m/2178517.1774064942@sss.pgh.pa.us
2026-03-22 18:24:42 -04:00
Andrew Dunstan
b3cf461b3c pg_verifybackup: Enable WAL parsing for tar-format backups
Now that pg_waldump supports reading WAL from tar archives, remove the
restriction that forced --no-parse-wal for tar-format backups.

pg_verifybackup now automatically locates the WAL archive: it looks for
a separate pg_wal.tar first, then falls back to the main base.tar.  A
new --wal-path option (replacing the old --wal-directory, which is kept
as a silent alias) accepts either a directory or a tar archive path.

The default WAL directory preparation is deferred until the backup
format is known, since tar-format backups resolve the WAL path
differently from plain-format ones.

Author: Amul Sul <sulamul@gmail.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Jakub Wartak <jakub.wartak@enterprisedb.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Euler Taveira <euler@eulerto.com>
Reviewed-by: Andrew Dunstan <andrew@dunslane.net>
discussion: https://postgr.es/m/CAAJ_b94bqdWN3h2J-PzzzQ2Npbwct5ZQHggn_QoYGhC2rn-=WQ@mail.gmail.com
2026-03-20 15:31:35 -04:00
Andrew Dunstan
b15c151398 pg_waldump: Add support for reading WAL from tar archives
pg_waldump can now accept the path to a tar archive (optionally
compressed with gzip, lz4, or zstd) containing WAL files and decode
them.  This was added primarily for pg_verifybackup, which previously
had to skip WAL parsing for tar-format backups.

The implementation uses the existing archive streamer infrastructure
with a hash table to track WAL segments read from the archive.  If WAL
files within the archive are not in sequential order, out-of-order
segments are written to a temporary directory (created via mkdtemp under
$TMPDIR or the archive's directory) and read back when needed.  An
atexit callback ensures the temporary directory is cleaned up.

The --follow option is not supported when reading from a tar archive.

Author: Amul Sul <sulamul@gmail.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Jakub Wartak <jakub.wartak@enterprisedb.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Euler Taveira <euler@eulerto.com>
Reviewed-by: Andrew Dunstan <andrew@dunslane.net>
Reviewed-by: Zsolt Parragi <zsolt.parragi@percona.com>
discussion: https://postgr.es/m/CAAJ_b94bqdWN3h2J-PzzzQ2Npbwct5ZQHggn_QoYGhC2rn-=WQ@mail.gmail.com
2026-03-20 15:31:35 -04:00
Andrew Dunstan
f8a0cd2671 pg_waldump: Preparatory refactoring for tar archive WAL decoding.
Several refactoring steps in preparation for adding tar archive WAL
decoding support to pg_waldump:

- Move XLogDumpPrivate and related declarations into a new pg_waldump.h
  header, allowing a second source file to share them.

- Factor out required_read_len() so the read-size calculation can be
  reused for both regular WAL files and tar-archived WAL.

- Move the WAL segment size variable into XLogDumpPrivate and rename it
  to segsize, making it accessible to the archive streamer code.

Author: Amul Sul <sulamul@gmail.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Jakub Wartak <jakub.wartak@enterprisedb.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Euler Taveira <euler@eulerto.com>
Reviewed-by: Andrew Dunstan <andrew@dunslane.net>
discussion: https://postgr.es/m/CAAJ_b94bqdWN3h2J-PzzzQ2Npbwct5ZQHggn_QoYGhC2rn-=WQ@mail.gmail.com
2026-03-20 15:31:35 -04:00
Andrew Dunstan
c8a350a439 Move tar detection and compression logic to common.
Consolidate tar archive identification and compression-type detection
logic into a shared location. Currently used by pg_basebackup and
pg_verifybackup, this functionality is also required for upcoming
pg_waldump enhancements.

This change promotes code reuse and simplifies maintenance across
frontend tools.

Author: Amul Sul <sulamul@gmail.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Jakub Wartak <jakub.wartak@enterprisedb.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Euler Taveira <euler@eulerto.com>
Reviewed-by: Andrew Dunstan <andrew@dunslane.net>
Reviewed-by: Zsolt Parragi <zsolt.parragi@percona.com>
discussion: https://postgr.es/m/CAAJ_b94bqdWN3h2J-PzzzQ2Npbwct5ZQHggn_QoYGhC2rn-=WQ@mail.gmail.com
2026-03-20 15:31:35 -04:00
Andrew Dunstan
4c0390ac53 Add option force_array for COPY JSON FORMAT
This adds the force_array option, which is available exclusively
when using COPY TO with the JSON format.

When enabled, this option wraps the output in a top-level JSON array
(enclosed in square brackets with comma-separated elements), making the
entire result a valid single JSON value.  Without this option, the
default behavior is to output a stream of independent JSON objects.

Attempting to use this option with COPY FROM or with formats other than
JSON will raise an error.
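
A usage sketch (the query is illustrative, and the exact layout of the
wrapped output is approximate):

    COPY (SELECT 1 AS a, 'x' AS b) TO STDOUT
        WITH (FORMAT json, FORCE_ARRAY true);
    -- output is a single JSON array, roughly:
    -- [
    --  {"a":1,"b":"x"}
    -- ]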

Author: Joe Conway <mail@joeconway.com>
Author: jian he <jian.universality@gmail.com>
Reviewed-by: Junwang Zhao <zhjwpku@gmail.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Florents Tselai <florents.tselai@gmail.com>
Reviewed-by: Andrew Dunstan <andrew@dunslane.net>
Discussion: https://postgr.es/m/CALvfUkBxTYy5uWPFVwpk_7ii2zgT07t3d-yR_cy4sfrrLU%3Dkcg%40mail.gmail.com
Discussion: https://postgr.es/m/6a04628d-0d53-41d9-9e35-5a8dc302c34c@joeconway.com
2026-03-20 08:40:17 -04:00
Andrew Dunstan
7dadd38cda json format for COPY TO
This introduces the JSON format option for the COPY TO command, allowing
users to export query results or table data directly as a stream of JSON
objects (one per line, NDJSON style).

The JSON format is currently supported only for COPY TO operations; it
is not available for COPY FROM.

JSON format is incompatible with some standard text/CSV formatting
options, including HEADER, DEFAULT, NULL, DELIMITER, FORCE QUOTE,
FORCE NOT NULL, and FORCE NULL.

Column list support is included: when a column list is specified, only
the named columns are emitted in each JSON object.
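
A minimal sketch of the basic form (table and column names are
illustrative):

    COPY users (id, name) TO STDOUT WITH (FORMAT json);
    -- one JSON object per row, e.g.
    -- {"id":1,"name":"alice"}
    -- {"id":2,"name":"bob"}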

Regression tests covering valid JSON exports and error handling for
incompatible options have been added to src/test/regress/sql/copy.sql.

Author: Joe Conway <mail@joeconway.com>
Author: jian he <jian.universality@gmail.com>
Co-Authored-By: Andrew Dunstan <andrew@dunslane.net>
Reviewed-by: Andrey M. Borodin <x4mmm@yandex-team.ru>
Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>
Reviewed-by: Daniel Verite <daniel@manitou-mail.org>
Reviewed-by: Davin Shearer <davin@apache.org>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Junwang Zhao <zhjwpku@gmail.com>
Discussion: https://postgr.es/m/CALvfUkBxTYy5uWPFVwpk_7ii2zgT07t3d-yR_cy4sfrrLU%3Dkcg%40mail.gmail.com
Discussion: https://postgr.es/m/6a04628d-0d53-41d9-9e35-5a8dc302c34c@joeconway.com
2026-03-20 08:40:04 -04:00
Amit Kapila
493f8c6439 Add support for EXCEPT TABLE in ALTER PUBLICATION.
Following commit fd366065e0, which added EXCEPT TABLE support to
CREATE PUBLICATION, this commit extends ALTER PUBLICATION to allow
modifying the exclusion list.

New Syntax:
ALTER PUBLICATION name SET  publication_all_object [, ... ]

where publication_all_object is one of:
ALL TABLES [ EXCEPT TABLE ( except_table_object [, ... ] ) ]
ALL SEQUENCES

If the EXCEPT clause is provided, the existing exclusion list in
pg_publication_rel is replaced with the specified relations. If the
EXCEPT clause is omitted, any existing exclusions for the publication
are cleared. Similarly, SET ALL SEQUENCES updates the publication's
all-sequences flag accordingly.

Note that because this is a SET command, specifying only one object
type (e.g., SET ALL SEQUENCES) will reset the other unspecified flags
(e.g., setting puballtables to false).

Consistent with CREATE PUBLICATION, only root partitioned tables or
standard tables can be specified in the EXCEPT list. Specifying a
partition child will result in an error.
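
For illustration, using the spelling added by this commit (later
adjusted by commit 5984ea868e above; relation names are made up):

    ALTER PUBLICATION pub SET ALL TABLES EXCEPT TABLE (t1, t2);
    -- replaces any existing exclusion list with t1 and t2
    ALTER PUBLICATION pub SET ALL TABLES;
    -- clears the exclusions; also resets the unspecified sequences flag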

Author: vignesh C <vignesh21@gmail.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Nisha Moond <nisha.moond412@gmail.com>
Discussion: https://postgr.es/m/CALDaNm3=JrucjhiiwsYQw5-PGtBHFONa6F7hhWCXMsGvh=tamA@mail.gmail.com
2026-03-20 11:36:09 +05:30
Tom Lane
8b02c22bb4 Avoid leaking duplicated file descriptors in corner cases.
pg_dump's compression modules had variations on the theme of

	fp = fdopen(dup(fd), mode);
	if (fp == NULL)
	    // fail, reporting errno

which is problematic for two reasons.  First, if dup() succeeds but
fdopen() fails, we'd leak the duplicated FD.  That's not important
at present since the program will just exit immediately after failure
anyway; but perhaps someday we'll try to continue, making the resource
leak potentially significant.  Second, if dup() fails then fdopen()
will overwrite the useful errno (perhaps EMFILE) with a misleading
value EBADF, making it difficult to understand what went wrong.
Fix both issues by testing for dup() failure before proceeding to
the next call.

These failures are sufficiently unlikely, and the consequences minor
enough, that this doesn't seem worth the effort to back-patch.
But let's fix it in HEAD.

Author: Jianghua Yang <yjhjstz@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/62bbe34d-2315-4b42-b768-56d901aa83e1@gmail.com
2026-03-19 14:25:26 -04:00
Nathan Bossart
ec80215c03 pg_restore: Remove unnecessary strlen() calls in options parsing.
Unlike pg_dump and pg_dumpall, pg_restore first checks whether the
argument passed to --format, --host, and --port is empty before
setting the corresponding variable.  Consequently, pg_restore does
not error if given an empty format name, whereas pg_dump and
pg_dumpall do.  Empty arguments for --host and --port are ignored
by all three applications, so this commit produces no functionality
changes there.  This behavior should perhaps be reconsidered, but
that is left as a future exercise.  As with other recent changes to
option handling for these applications (commits b2898baaf7,
7c8280eeb5, and be0d0b457c), no back-patch.

Author: Mahendra Singh Thalor <mahi6run@gmail.com>
Reviewed-by: Srinath Reddy Sadipiralla <srinath2133@gmail.com>
Discussion: https://postgr.es/m/CAKYtNApkh%3DVy2DpNRCnEJmPpxNuksbAh_QBav%3D2fLmVjBhGwFw%40mail.gmail.com
2026-03-18 14:22:15 -05:00
Jeff Davis
b71bf3b845 Fix pg_dump for CREATE FOREIGN DATA WRAPPER ... CONNECTION.
Discussion: https://postgr.es/m/7eb0c03b4312b32cb76d340023b39a751745a1f9.camel@j-davis.com
2026-03-18 09:58:42 -07:00
Peter Eisentraut
29bf4ee749 Enable -Wstrict-prototypes and -Wold-style-definition by default
These are available in all gcc and clang versions that support C11, and
since C11 is required as of f5e0186f86, we can add them without a
capability test.

Having them enabled by default avoids having to chase such warnings
manually, as commits 11171fe1fc, cdf4b9aff2, 0e72b9d440, 7069dbcc31,
f1283ed6cc, 7b66e2c086, e95126cf04 and 9f7c527af3 had to do.

Also, readline headers trigger a lot of warnings with -Wstrict-prototypes, so
we make use of the system_header pragma to hide the warnings.

Author: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Discussion: https://postgr.es/m/13d51b20-a69c-4ac1-8546-ec4fc278064f%40eisentraut.org
Discussion: https://postgr.es/m/aTFctZwWSpl2/LG5%40ip-10-97-1-34.eu-west-3.compute.internal
2026-03-18 14:31:50 +01:00
Daniel Gustafsson
4f433025f6 ssl: Serverside SNI support for libpq
Support for SNI was added to clientside libpq in 5c55dc8b47 with the
sslsni parameter, but there was no support for utilizing it serverside.
This adds support for serverside SNI such that certificate/key handling
is available per host.  A new config file, $datadir/pg_hosts.conf, is
used for configuring which certificate and key should be used for which
hostname.  In order to use SNI, the ssl_sni GUC must be set to on; when
it is off, the SSL configuration works just like before.  If ssl_sni is
enabled and pg_hosts.conf is non-empty, it takes precedence over the
regular SSL GUCs; if it is empty or missing, the regular GUCs are used
just as before this commit, with no hostname-specific handling.
The TLS init hook is not compatible with ssl_sni since it operates on
a single TLS configuration and SNI breaks that assumption.  If the init
hook and ssl_sni are both enabled, a WARNING will be issued.

Host configuration can be for a literal hostname to match, for non-SNI
connections using the no_sni keyword, or for a default fallback matching
all connections.  By omitting no_sni and the fallback, a strict mode
can be achieved where only connections using sslsni=1 and a specified
hostname are allowed.

CRL file(s) are applied from postgresql.conf to all configured hostnames.

Serverside SNI requires OpenSSL, currently LibreSSL does not support
the required infrastructure to update the SSL context during the TLS
handshake.

Author: Daniel Gustafsson <daniel@yesql.se>
Co-authored-by: Jacob Champion <jacob.champion@enterprisedb.com>
Reviewed-by: Jacob Champion <jacob.champion@enterprisedb.com>
Reviewed-by: Zsolt Parragi <zsolt.parragi@percona.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Dewei Dai <daidewei1970@163.com>
Reviewed-by: Cary Huang <cary.huang@highgo.ca>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://postgr.es/m/1C81CD0D-407E-44F9-833A-DD0331C202E5@yesql.se
2026-03-18 12:37:11 +01:00
Nathan Bossart
4b5ba0c4ca pg_dump: Simplify query for retrieving attribute statistics.
This query fetches information from pg_stats, which did not return
table OIDs until recent commit 3b88e50d6c.  Because of this, we had
to cart around arrays of schema and table names, and we needed an
extra filter clause to hopefully convince the planner to use the
correct index.  With the introduction of pg_stats.tableid, we can
instead just use an array of OIDs, and we no longer need the extra
filter clause hack.
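
Conceptually, the fetch can now be shaped roughly like this (an
illustrative sketch with made-up OIDs, not the exact query pg_dump
issues):

    SELECT s.*
    FROM pg_stats s
    WHERE s.tableid = ANY('{16384,16390}'::oid[]);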

Author: Corey Huinker <corey.huinker@gmail.com>
Reviewed-by: Sami Imseih <samimseih@gmail.com>
Discussion: https://postgr.es/m/CADkLM%3DcoCVy92QkVUUTLdo5eO2bMDtwMrzRn_8miAhX%2BuPaqXg%40mail.gmail.com
2026-03-17 11:32:40 -05:00
Heikki Linnakangas
2c1a7d421f Don't leave behind files in src dir in 007_multixact_conversion.pl
pg_upgrade test 007_multixact_conversion.pl was leaving files like
delete_old_cluster.sh in the source directory for VPATH and meson
builds. To fix, change the tmp_check directory before running the
test, like in the other pg_upgrade tests.

Author: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Discussion: https://www.postgresql.org/message-id/TYRPR01MB121563A4DA8B2FE9A2ECB79F5F541A@TYRPR01MB12156.jpnprd01.prod.outlook.com
2026-03-17 11:24:52 +02:00
Peter Eisentraut
182cdf5aea pg_dump: Add appropriate version check
Some code added by commit 2f094e7ac6 needs to be behind a version
check so that it is not run against older databases.

Author: Andrew Dunstan <andrew@dunslane.net>
Reviewed-by: Junwang Zhao <zhjwpku@gmail.com>
Discussion: https://www.postgresql.org/message-id/afe3f099-3271-4fc4-8e32-467b5309affb%40dunslane.net
2026-03-17 09:46:06 +01:00
Nathan Bossart
be0d0b457c pg_dumpall: Fix handling of incompatible options.
This commit teaches pg_dumpall to fail when both --clean and
--data-only are specified.  Previously, it passed the options
through to pg_dump, which would fail after pg_dumpall had already
started producing output.  Like recent commits b2898baaf7 and
7c8280eeb5, no back-patch.

Author: Mahendra Singh Thalor <mahi6run@gmail.com>
Reviewed-by: Srinath Reddy Sadipiralla <srinath2133@gmail.com>
Discussion: https://postgr.es/m/CAKYtNArrHiJ0LDB9BFZiUWs6tC78QkBN50wiwO07WhxewYDS3Q%40mail.gmail.com
2026-03-16 11:01:20 -05:00
Peter Eisentraut
1e67508730 Fix pg_upgrade failure when extension_control_path is used
When an extension is located via extension_control_path and it has a
hardcoded $libdir/ path, this is stripped by the
extension_control_path mechanism.  But when pg_upgrade verifies the
extension using LOAD, this stripping does not happen, and so
pg_upgrade will fail because it cannot load the extension.  To work
around that, change pg_upgrade to itself strip the prefix when it runs
its checks.  A test case is also added.

Author: Jonathan Gonzalez V. <jonathan.abdiel@gmail.com>
Reviewed-by: Niccolò Fei <niccolo.fei@enterprisedb.com>
Reviewed-by: Matheus Alcantara <matheusssilv97@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/43b3691c673a8b9158f5a09f06eacc3c63e2c02d.camel%40gmail.com
2026-03-16 11:52:26 +01:00
Peter Eisentraut
2f094e7ac6 SQL Property Graph Queries (SQL/PGQ)
Implementation of SQL property graph queries, according to SQL/PGQ
standard (ISO/IEC 9075-16:2023).

This adds:

- GRAPH_TABLE table function for graph pattern matching
- DDL commands CREATE/ALTER/DROP PROPERTY GRAPH
- several new system catalogs and information schema views
- psql \dG command
- pg_get_propgraphdef() function for pg_dump and psql

A property graph is a relation with a new relkind RELKIND_PROPGRAPH.
It acts like a view in many ways.  It is rewritten to a standard
relational query in the rewriter.  Access privileges act similar to a
security invoker view.  (The security definer variant is not currently
implemented.)
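
A rough sketch of the new surface (graph, table, and column names are
invented, and the key/label clauses are simplified; the actual grammar
may require more than shown here):

    CREATE PROPERTY GRAPH social
        VERTEX TABLES (person KEY (id))
        EDGE TABLES (
            knows KEY (id)
                SOURCE KEY (from_id) REFERENCES person (id)
                DESTINATION KEY (to_id) REFERENCES person (id)
        );

    SELECT *
    FROM GRAPH_TABLE (social
        MATCH (a IS person)-[k IS knows]->(b IS person)
        COLUMNS (a.name AS a_name, b.name AS b_name));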

Starting documentation can be found in doc/src/sgml/ddl.sgml and
doc/src/sgml/queries.sgml.

Author: Peter Eisentraut <peter@eisentraut.org>
Author: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Reviewed-by: Junwang Zhao <zhjwpku@gmail.com>
Reviewed-by: Ajay Pal <ajay.pal.k@gmail.com>
Reviewed-by: Henson Choi <assam258@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/a855795d-e697-4fa5-8698-d20122126567@eisentraut.org
2026-03-16 10:14:18 +01:00
Tom Lane
bb53b8d359 Fix small memory leak in get_dbname_oid_list_from_mfile().
Coverity complained that this function leaked the dumpdirpath string,
which it did.  But we don't need to make a copy at all, because
there's not really any point in trimming trailing slashes from the
directory name here.  If that were needed, the initial
file_exists_in_directory() test would have failed, since it doesn't
bother with that (and neither does anyplace else in this file).
Moreover, if we did want that, reimplementing canonicalize_path()
poorly is not the way to proceed.  Arguably, all of this code should
be reexamined with an eye to using src/port/path.c's facilities, but
for today I'll settle for getting rid of the memory leak.
2026-03-15 15:24:04 -04:00
Andrew Dunstan
a793677e57 pg_restore: Remove dead code in restore_all_databases()
Cleanup from commit 763aaa06f0.

Author: Mahendra Singh Thalor <mahi6run@gmail.com>
Discussion: https://postgr.es/m/CAKYtNAqN49Hqd4v0wWH3uW6d6QsH+8e8bR_MVf4CboTZSzd+Aw@mail.gmail.com
2026-03-15 12:13:19 -04:00
Álvaro Herrera
ac58465e06 Introduce the REPACK command
REPACK absorbs the functionality of VACUUM FULL and CLUSTER in a single
command.  Because this functionality is completely different from
regular VACUUM, having it separate from VACUUM makes it easier for users
to understand; as for CLUSTER, the term is heavily overloaded in the
IT world and even in Postgres itself, so it's good that we can avoid it.

We retain those older commands, but de-emphasize them in the
documentation, in favor of REPACK; the difference between VACUUM FULL
and CLUSTER (namely, the fact that tuples are written in a specific
ordering) is neatly absorbed as two different modes of REPACK.
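
A usage sketch of the two modes (table and index names are
illustrative, and the ordered form assumes a CLUSTER-like USING INDEX
clause):

    REPACK my_table;                        -- plain rewrite, like VACUUM FULL
    REPACK my_table USING INDEX my_index;   -- ordered rewrite, like CLUSTER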

This allows us to introduce further functionality in the future that
works regardless of whether an ordering is being applied, such as (and
especially) a concurrent mode.

Author: Antonin Houska <ah@cybertec.at>
Reviewed-by: Mihail Nikalayeu <mihailnikalayeu@gmail.com>
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>
Reviewed-by: Robert Treat <rob@xzilla.net>
Reviewed-by: Euler Taveira <euler@eulerto.com>
Reviewed-by: Matheus Alcantara <matheusssilv97@gmail.com>
Reviewed-by: Junwang Zhao <zhjwpku@gmail.com>
Reviewed-by: jian he <jian.universality@gmail.com>
Discussion: https://postgr.es/m/82651.1720540558@antos
Discussion: https://postgr.es/m/202507262156.sb455angijk6@alvherre.pgsql
2026-03-10 19:56:39 +01:00