Use disallowed instead of reserved, cannot instead of can not, and
double quotes instead of single quotes.
Also add a test to cover the bug which started this discussion.
Per discussion with Tom.
It emerges that some Perl versions before 5.8.9 have a bug with regexps
that use the /m flag and contain "$". This is the reason why jacana
is still failing on HEAD, and I was able to duplicate the failure on
prairiedog's host. There's no real need for "$" in these patterns,
since they are already matching through the statement-terminating
semicolons (or matching an explicit \n in some cases). So just
remove it.
Note: the reason jacana hasn't actually reported any failures in the
last little while is that the way the pg_dump TAP tests are set up, any
failure of this sort results in echoing the entire pg_dump dump output
to stderr. Since there were about a hundred such failures, that resulted
in a 30MB log file which choked the buildfarm upload script. There is
room for improvement here :-(.
Per off-list discussion with Andrew and Stephen.
This patch essentially reverts commit 4c6780fd17, in favor of a much
simpler solution for the case where the new cluster would choose to create
a TOAST table but the old cluster doesn't have one: just don't create a
TOAST table.
The existing code failed in at least two different ways if the situation
arose: (1) ALTER TABLE RESET didn't grab an exclusive lock, so that the
lock sanity check in create_toast_table failed; (2) pg_upgrade did not
provide a pg_type OID for the new toast table, so that the crosscheck in
TypeCreate failed. While both these problems were introduced by later
patches, they show that the hack being used to cause TOAST table creation
is overwhelmingly fragile (and untested). I also note that before the
TypeCreate crosscheck was added, the code would have resulted in assigning
an indeterminate pg_type OID to the toast table, possibly causing a later
OID conflict in that catalog; so that it didn't really work even when
committed.
If we simply don't create a TOAST table, there will only be a problem if
the code tries to store a tuple that's wider than a page, and field
compression isn't sufficient to get it under a page. Given that the TOAST
creation threshold is intended to be about a quarter of a page, it's very
hard to believe that cross-version differences in the do-we-need-a-toast-
table heuristic could result in an observable problem. So let's just
follow the old version's conclusion about whether a TOAST table is needed.
(If we ever do change needs_toast_table() so much that this conclusion
doesn't apply, we can devise a solution at that time, and hopefully do
it in a less klugy way than 4c6780fd17 did.)
Back-patch to 9.3, like the previous patch.
Discussion: <8110.1462291671@sss.pgh.pa.us>
Buildfarm member jacana appears to have an issue with running this
test. It's not entirely clear to me why, but rather than try to
fight with it, just disable it for now.
None of the other tests try to write out from psql directly as
this test does, so it seems likely that the rest of the tests will
be fine (as they have been on numerous other systems).
We need to use a new version branch, due to the 9.5 addition of
bypassrls, when adding the clause to exclude pg_* roles from being
dumped by pg_dumpall.
Pointed out by Noah, patch by me.
The Makefile for test_pg_dump shouldn't have a MODULES_big line
because there's no actual compiled bit for that extension. Hopefully
this will fix the Windows buildfarm members which were complaining.
In passing, also add the 'prove_installcheck' bit to the pg_dump and
test_pg_dump Makefiles, to get the buildfarm members to actually run
those tests.
Ordinarily, pg_upgrade shouldn't have any difficulty in matching up all
the relations it sees in the old and new databases. If it does, however,
it just goes belly-up with a pretty unhelpful error message. That seemed
fine as long as we expected the case never to occur in the wild, but
Alvaro reported that it had been seen in a database whose pg_largeobject
table had somehow acquired a TOAST table. That doesn't quite seem like
a case that pg_upgrade actually needs to handle, but it would be good if
the report were more diagnosable. Hence, extend the logic to print out
as much information as we can about the mismatch(es) before we quit.
In passing, improve the readability of get_rel_infos()'s data collection
query, which had suffered seriously from lets-not-bother-to-update-comments
syndrome, and generally was unnecessarily disrespectful to readers.
It could be argued that this is a bug fix, but given that we have so few
reports, I don't feel a need to back-patch; at least not before this has
baked awhile in HEAD.
This TAP test suite will create a new cluster, populate it based on
the 'create_sql' values in the '%tests' hash, perform all of the runs
defined in the '%pgdump_runs' hash, and then, for each test in the
'%tests' hash, compare each run's output to the regular expression
defined for the test under the 'like' and 'unlike' keys, as
appropriate.
While this test suite covers a fair bit of ground (67% of pg_dump.c
and quite a bit of the other files in src/bin/pg_dump), there is
still quite a bit which remains to be added to provide better code
coverage. Still, this is quite a bit better than we had, and has
found a few bugs already (note that the CREATE TRANSFORM test is
commented out, as it is currently failing).
Idea for using the TAP system from Tom, though all of the code is mine.
Reviewing the cases where we need to LOCK a given table during a dump,
it was pointed out by Tom that we really don't need to LOCK a table if
we are only looking to dump the ACL for it, or certain other
components. After reviewing the queries run for all of the component
pieces, a list of components was determined not to require LOCK'ing
of the table.
This implements a check to avoid LOCK'ing those tables.
Initial complaint from Rushabh Lathia, discussed with Robert and Tom,
the patch is mine.
Do not try to dump objects which do not have ACLs when only ACLs are
being requested. This results in a significant performance improvement
as we can avoid querying for further information on these objects when
we don't need to.
When limiting the components to dump for an extension, consider which
components have been requested. Initially, we incorrectly hard-coded
the components of the extension objects to dump, which meant that
we wouldn't dump some components even when they were asked for, and in
other cases we would dump components which weren't requested.
Correct defaultACLs to use 'dump_contains' instead of 'dump'. The
defaultACL is considered a member of the namespace and should be
dumped based on the same set of components that the other objects in
the schema are, not based on what we're dumping for the namespace
itself (which might not include ACLs, if the namespace has just the
default or initial ACL).
Use DUMP_COMPONENT_ACL for from-initdb objects, to allow users to
change their ACLs, should they wish to. This just extends what we
are doing for the pg_catalog namespace to objects which are not
members of namespaces.
Due to column ACLs being treated a bit differently from other ACLs
(they are actually reset to NULL when all privileges are revoked),
adjust the query which gathers column-level ACLs to consider all of
the ACL-relevant columns.
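To illustrate the reset-to-NULL behavior (table and role names here
are hypothetical):

    CREATE TABLE t (a int);
    GRANT SELECT (a) ON t TO some_role;
    -- pg_attribute.attacl for column "a" now contains an explicit ACL
    REVOKE SELECT (a) ON t FROM some_role;
    -- attacl is reset to NULL, not left as an empty ACL
    SELECT attname, attacl FROM pg_attribute
      WHERE attrelid = 't'::regclass AND attnum > 0;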
The query to grab the function/aggregate information is now joining
to pg_init_privs, so we can simplify (and correct) the WHERE clause
used to determine if a given function's ACL has changed from the
initial ACL on the function.
Bug found by Noah, patch by me.
The previous coding always stored variable values as strings, doing
conversion on-the-fly when a numeric value was needed or a number was to be
assigned. This was a bit inefficient and risked loss of precision for
floating-point values. The precision aspect had been hacked around by
printing doubles in "%.18e" format, which is ugly and has machine-dependent
results. Instead, arrange to preserve an assigned numeric value in the
original binary numeric format, converting to string only when and if
needed. When we do need to convert a double to string, convert in "%g"
format with DBL_DIG precision, which is the standard way to do it and
produces the least surprising results in most cases.
The implementation supports storing both a string value and a numeric
value for any one variable, with lazy conversion between them. I also
arranged for lazy re-sorting of the variable array when new variables are
added. That was mainly to allow a clean refactoring of putVariable()
into two levels of subroutine, but it may allow us to save a few sorts.
Discussion: <9188.1462475559@sss.pgh.pa.us>
Move fmtReloptionsArray() from pg_dump.c to string_utils.c so that it
is available to other frontend code. In particular psql's \ev and \sv
commands need it to handle view reloptions. Also rename the function
to appendReloptionsArray(), which is a more accurate description of
what it does.
Author: Dean Rasheed
Reviewed-by: Peter Eisentraut
Discussion: http://www.postgresql.org/message-id/CAEZATCWZjCgKRyM-agE0p8ax15j9uyQoF=qew7D2xB6cF76T8A@mail.gmail.com
These functions behave like the backend's least/greatest functions,
not like min/max, so the originally-chosen names invite confusion.
Per discussion, rename to least/greatest.
I also took it upon myself to make them return double if any input is
double. The previous behavior of silently coercing all inputs to int
surely does not meet the principle of least astonishment.
Copy-edit some of the other new functions' documentation, too.
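A sketch of the renamed functions in a pgbench script (":scale" is the
standard pgbench variable; the variable names are illustrative):

    \set hi greatest(:scale, 10)
    \set lo least(:scale, 2.5)

Here "lo" ends up as a double, since one of the inputs is a double.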
Adjust the way we detect the locale. As a result the minimum Windows
version supported by VS2015 and later is Windows Vista. Add some tweaks
to remove new compiler warnings. Remove documentation references to the
now obsolete msysGit.
Michael Paquier, somewhat edited by me, reviewed by Christian Ullrich.
Backpatch to 9.5
In commit 6f0d6a507 I added a duplicate copy of psqlscanslash's identifier
downcasing code, but actually it's not hard to split that out as a callable
subroutine and avoid the duplication.
\crosstabview interpreted its arguments in an unusual way, including
doing case-insensitive matching of unquoted column names, which is
surely not the right thing. Rip that out in favor of doing something
equivalent to the dequoting/case-folding rules used by other psql
commands. To keep it simple, change the syntax so that the optional
sort column is specified as a separate argument, instead of the
also-quite-unusual syntax that attached it to the colH argument with
a colon.
Also, rework the error messages to be closer to project style.
As reported by Michael Feld, pg_upgrade'ing an installation having
extensions with operator families that contain just a single operator class
failed to reproduce the extension membership of those operator families.
This caused no immediate ill effects, but would create problems when later
trying to do a plain dump and restore, because the seemingly-not-part-of-
the-extension operator families would appear separately in the pg_dump
output, and then would conflict with the families created by loading the
extension. This has been broken ever since extensions were introduced,
and many of the standard contrib extensions are affected, so it's a bit
astonishing nobody complained before.
The cause of the problem is a perhaps-ill-considered decision to omit
such operator families from pg_dump's output on the grounds that the
CREATE OPERATOR CLASS commands could recreate them, and having explicit
CREATE OPERATOR FAMILY commands would impede loading the dump script into
pre-8.3 servers. Whatever the merits of that decision when 8.3 was being
written, it looks like a poor tradeoff now. We can fix the pg_upgrade
problem simply by removing that code, so that the operator families are
dumped explicitly (and then will be properly made to be part of their
extensions).
Although this fixes the behavior of future pg_upgrade runs, it does nothing
to clean up existing installations that may have improperly-linked operator
families. Given the small number of complaints to date, maybe we don't
need to worry about providing an automated solution for that; anyone who
needs to clean it up can do so with manual "ALTER EXTENSION ADD OPERATOR
FAMILY" commands, or even just ignore the duplicate-opfamily errors they
get during a pg_restore. In any case we need this fix.
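For instance, a manual repair for one affected operator family might
look like this (extension and family names are illustrative):

    ALTER EXTENSION pg_trgm ADD OPERATOR FAMILY gist_trgm_ops USING gist;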
Back-patch to all supported branches.
Discussion: <20228.1460575691@sss.pgh.pa.us>
Coverity complained about this code, not without reason because it was
rather messy. Adjust it to not scribble on the passed string; that adds
one malloc/free cycle per column name, which is going to be insignificant
in context. We can actually const-ify both the string argument and the
PGresult.
Daniel Verité, with some further cleanup by me
Coverity complained about this resource leak (why now, I don't know,
since it's been like that a long time). Our general policy in pg_dump
is that PQExpBuffers are worth cleaning up, so do it here too. But
don't bother with a back-patch, because it seems unlikely that very
many databases contain enough FOREIGN SERVER objects to notice.
\crosstabview is a completely different way to display results from a
query: instead of a vertical display of rows, the data values are placed
in a grid where the column and row headers come from the data itself,
similar to a spreadsheet.
The sort order of the horizontal header can be specified by using
another column in the query, while the vertical header takes its
ordering from the order in which the values appear in the query
result.
This only allows displaying a single value in each cell. If more than
one value corresponds to the same cell, an error is thrown. Merging of
values can be done in the query itself, if necessary. This may be
revisited in the future.
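A sketch of typical usage, assuming a hypothetical "sales" table;
"customer" supplies the vertical header, "month" the horizontal header
(sorted by "mnum"), and "total" the cell values:

    SELECT customer, to_char(d, 'Mon') AS month,
           extract(month FROM d) AS mnum, sum(amount) AS total
      FROM sales GROUP BY 1, 2, 3
    \crosstabview customer month total mnum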
Author: Daniel Verité
Reviewed-by: Pavel Stehule, Dean Rasheed
This will prevent users from creating roles which begin with "pg_" and
will check for those roles before allowing an upgrade using pg_upgrade.
This will allow for default roles to be provided at initdb time.
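A sketch of the new behavior (the role name is hypothetical and the
exact message wording may differ):

    CREATE ROLE pg_backup_admin;
    ERROR:  role name "pg_backup_admin" is reserved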
Reviews by José Luis Tallón and Robert Haas
Pretty sure I removed this based on some incorrect thinking that it was
no longer possible to reach this point for a table which will not be
dumped, but that's clearly wrong.
Pointed out on IRC by Erik Rijkers.
Create a "bsd" auth method that works the same as "password" so far as
clients are concerned, but calls the BSD Authentication service to
check the password. This is currently only available on OpenBSD.
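A hypothetical pg_hba.conf line using the new method:

    # TYPE  DATABASE  USER  ADDRESS        METHOD
    host    all       all   127.0.0.1/32   bsd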
Marisa Emerson, reviewed by Thomas Munro
Indexes (currently only B-tree) can now contain "extra" columns which
don't participate in the index's search structure; they are just
stored in leaf tuples. This allows an index-only scan to be satisfied
by a single index instead of two or more indexes.
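A sketch of the intended usage with this patch's INCLUDING clause
(table and index names are illustrative):

    CREATE TABLE tbl (c1 int, c2 int, c3 text);
    CREATE UNIQUE INDEX tbl_c1_c2_idx ON tbl (c1, c2) INCLUDING (c3);
    -- c3 is stored only in leaf tuples, so this query can be
    -- satisfied by an index-only scan on the single index above
    SELECT c1, c2, c3 FROM tbl WHERE c1 = 1 AND c2 = 2;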
Author: Anastasia Lubennikova with minor editorializing by me
Reviewers: David Rowley, Peter Geoghegan, Jeff Janes
The code that estimates what parallel degree should be used for the
scan of a relation is currently rather stupid, so add a parallel_degree
reloption that can be used to override the planner's rather limited
judgement.
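For example (the table name is hypothetical):

    ALTER TABLE big_table SET (parallel_degree = 4);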
Julien Rouhaud, reviewed by David Rowley, James Sewell, Amit Kapila,
and me. Some further hacking by me.
Now that all of the infrastructure exists, add in the ability to
dump out the ACLs of the objects inside of pg_catalog or the ACLs
for objects which are members of extensions, but only if they have
been changed from their original values.
The original values are tracked in pg_init_privs. When pg_dump'ing
9.6-and-above databases, we will dump out the ACLs for all objects
in pg_catalog and the ACLs for all extension members, where the ACL
has been changed from the original value which was set during either
initdb or CREATE EXTENSION.
This should not change dumps against pre-9.6 databases.
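For example, a change like the following (the role name is
hypothetical) differs from the initdb-time ACL and so will now be
included in dumps:

    GRANT EXECUTE ON FUNCTION pg_catalog.pg_ls_dir(text) TO monitor_role;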
Reviews by Alexander Korotkov, Jose Luis Tallon
Historically, the "dump" component of the namespace has been used
to decide if the objects inside of the namespace should be dumped
also. Given that "dump" is now a bitmask and may be partial, and
we may want to dump out all components of the namespace object but
only some of the components of objects contained in the namespace,
create a "dump_contains" bitmask which will represent what components
of the objects inside of a namespace should be dumped out.
No behavior change here, but in preparation for a change where we
will dump out just the ACLs of objects in pg_catalog, but we might
not dump out the ACL of the pg_catalog namespace itself (for instance,
when it hasn't been changed from the value set at initdb time).
Reviews by Alexander Korotkov, Jose Luis Tallon
pg_dump has historically used a simple boolean 'dump' value to indicate
if a given object should be included in the dump or not. Instead, use
a bitmap which breaks down the components of an object into their
distinct pieces and use that bitmap to only include the components
requested.
This does not include any behavioral change, but is in preparation for
the change to dump out just ACLs for objects in pg_catalog.
Reviews by Alexander Korotkov, Jose Luis Tallon
This new catalog holds the privileges which the system was
initialized with at initdb time, along with any permissions set
by extensions at CREATE EXTENSION time. This allows pg_dump
(and any other similar use-cases) to detect when the privileges
set on initdb-created or extension-created objects have been
changed from what they were set to at initdb/extension-creation
time and handle those changes appropriately.
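As a sketch, current relation ACLs can be compared against their
recorded initial values with a query along these lines:

    SELECT c.relname, i.initprivs, c.relacl
      FROM pg_init_privs i
      JOIN pg_class c ON c.oid = i.objoid
     WHERE i.classoid = 'pg_class'::regclass
       AND i.objsubid = 0
       AND c.relacl IS DISTINCT FROM i.initprivs;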
Reviews by Alexander Korotkov, Jose Luis Tallon
Some object types have names that are unique only within one table. But
for those we generally didn't put the table name into the dump TOC tag.
So it was impossible to identify these objects if the same name was used
for multiple tables. This affects policies, column defaults,
constraints, triggers, and rules.
Fix by adding the table name to the TOC tag, so that it now reads
"$schema $table $object".
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
API and mechanism to allow generic messages to be inserted into WAL that are
intended to be read by logical decoding plugins. This commit adds an optional
new callback to the logical decoding API.
Messages are either text or bytea. Messages can be transactional, or not, and
are identified by a prefix to allow multiple concurrent decoding plugins.
(Not to be confused with Generic WAL records, which are intended to allow crash
recovery of extensible objects.)
Author: Petr Jelinek and Andres Freund
Reviewers: Artur Zakirov, Tomas Vondra, Simon Riggs
Discussion: 5685F999.6010202@2ndquadrant.com
\gexec executes the just-entered query, like \g, but instead of printing
the results it takes each field as a SQL command to send to the server.
Computing a series of queries to be executed is a fairly common thing,
but up to now you always had to resort to kluges like writing the queries
to a file and then inputting the file. Now it can be done with no
intermediate step.
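For example, a sketch that runs VACUUM ANALYZE on every table in the
public schema:

    SELECT format('VACUUM ANALYZE %I.%I', schemaname, tablename)
      FROM pg_tables
     WHERE schemaname = 'public'
    \gexec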
The implementation is fairly straightforward except for its interaction
with FETCH_COUNT. ExecQueryUsingCursor isn't capable of being called
recursively, and even if it were, its need to create a transaction
block interferes unpleasantly with the desired behavior of \gexec after
a failure of a generated query (i.e., that it can continue). Therefore,
disable use of ExecQueryUsingCursor when doing the master \gexec query.
We can still apply it to individual generated queries, however, and there
might be some value in doing so.
While testing this feature's interaction with single-step mode, I (tgl) was
led to conclude that SendQuery needs to recognize SIGINT (cancel_pressed)
as a negative response to the single-step prompt. Perhaps that's a
back-patchable bug fix, but for now I just included it here.
Corey Huinker, reviewed by Jim Nasby, Daniel Vérité, and myself
Often, upon getting an unexpected error in psql, one's first wish is that
the verbosity setting had been higher; for example, to be able to see the
schema-name field or the server code location info. Up to now the only way
has been to adjust the VERBOSITY variable and repeat the failing query.
That's a pain, and it doesn't work if the error isn't reproducible.
This commit adds a psql feature that redisplays the most recent server
error at full verbosity, without needing to make any variable changes or
re-execute the failed command. We just need to hang onto the latest error
PGresult in case the user executes \errverbose, and then apply libpq's
new PQresultVerboseErrorMessage() function to it. This will consume
some trivial amount of psql memory, but otherwise the cost when the
feature isn't used should be negligible.
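A sketch of a session (the LOCATION details vary by server version and
build):

    => SELECT * FROM no_such_table;
    ERROR:  relation "no_such_table" does not exist
    => \errverbose
    ERROR:  42P01: relation "no_such_table" does not exist
    LOCATION:  parserOpenTable, parse_relation.c:1159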
Alex Shulgin, reviewed by Daniel Vérité, some improvements by me
This interface is designed to give extensions access to WAL, so that
they can, for example, implement new access methods. Previously this
was impossible, because replaying custom WAL records would require
access to the system catalogs to find a custom redo function. This
patch provides a generic way to describe changes to a page with the
standard layout.
Bump XLOG_PAGE_MAGIC because of the new record type.
Author: Alexander Korotkov, with help from Petr Jelinek and Markus
Nullmeier, and minor editorializing by me
Reviewers: Petr Jelinek, Alvaro Herrera, Teodor Sigaev, Jim Nasby,
Michael Paquier
This is necessary infrastructure for supporting parallel aggregation
for aggregates whose transition type is "internal". Such values
can't be passed between cooperating processes, because they are
just pointers.
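As a sketch, an aggregate mirroring the built-in avg(numeric) could
expose its internal transition state for parallelism like so (the
support functions named here are the catalog's own):

    CREATE AGGREGATE my_avg(numeric) (
        sfunc = numeric_avg_accum,
        stype = internal,
        finalfunc = numeric_avg,
        combinefunc = numeric_avg_combine,
        serialfunc = numeric_avg_serialize,
        deserialfunc = numeric_avg_deserialize,
        parallel = safe
    );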
David Rowley, reviewed by Tomas Vondra and by me.
This refines the previous weight range and allows a script to be "turned
off" by passing a zero weight, which is useful when scripting multiple
pgbench runs.
I did not apply the suggested warning when a script uses zero weight; we
use the principle elsewhere that if there's nothing to be done, do
nothing quietly.
Adjust docs accordingly.
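For example (script names are illustrative; the "@0" script is
effectively turned off):

    pgbench -b select-only@9 -f custom_write.sql@1 -f debug_only.sql@0 -T 60 testdb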
Author: Jeff Janes, Fabien Coelho