Test the interactions with permissions and LOCK TABLE. Specifically
ROW EXCLUSIVE, ACCESS SHARE, and ACCESS EXCLUSIVE modes against
SELECT, INSERT, UPDATE, DELETE, and TRUNCATE permissions. Discussed
by Stephen Frost and Michael Paquier, patch by the latter. Backpatch
to 9.5 where matching behavior was first committed.
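As a rough illustration of the behavior under test (table and role names
are made up; the privilege rules are as described above):

    GRANT SELECT ON mytab TO bob;
    -- as bob:
    BEGIN;
    LOCK TABLE mytab IN ACCESS SHARE MODE;      -- allowed: SELECT suffices
    LOCK TABLE mytab IN ACCESS EXCLUSIVE MODE;  -- rejected: needs UPDATE,
                                                -- DELETE, or TRUNCATE
    ROLLBACK;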
This allows convenient checking for existence of a GUC from SQL, which is
particularly useful when dealing with custom variables.
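A minimal sketch of the kind of check this enables, assuming the mechanism
is the new missing_ok argument to current_setting() (the GUC name is made
up):

    -- returns NULL rather than raising an error if the GUC is unset
    SELECT current_setting('myapp.user_id', true) IS NOT NULL AS is_set;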
David Christensen, reviewed by Jeevan Chalke
In commit 9e3ad1aac5 I modified plpgsql
to use exec_stmt_return's simple-variables fast path in more cases.
However, I overlooked that there are really two different return
conventions in use here, depending on whether estate->retistuple is true,
and the existing fast-path code had only bothered to handle one of them.
So trying to return a scalar in a function returning composite, or vice
versa, could lead to unexpected error messages (typically "cache lookup
failed for type 0") or to a null-pointer-dereference crash.
In the DTYPE_VAR case, we can just throw error if retistuple is true,
corresponding to what happens in the general-expression code path that was
being used previously. (Perhaps someday both of these code paths should
attempt a coercion, but today is not that day.)
In the REC and ROW cases, just hand the problem to exec_eval_datum()
when not retistuple. Also clean up the ROW coding slightly so it looks
more like exec_eval_datum().
The previous commit also caused exec_stmt_return_next() to be used in
more cases, but that code seems to be OK as-is.
Per off-list report from Serge Rielau. This bug is new in 9.5 so no need
to back-patch.
We already tried to improve this once, but the "improved" text was rather
off-target if you had provided a USING clause. Also, it seems helpful
to provide the exact text of a suggested USING clause, so users can just
copy-and-paste it when needed. Per complaint from Keith Rarick and a
suggestion from Merlin Moncure.
Back-patch to 9.2 where the current wording was adopted.
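For illustration, a hedged sketch of the interaction involved (exact
message wording may differ; table and column are made up):

    ALTER TABLE t ALTER COLUMN flag TYPE integer;
    -- ERROR:  column "flag" cannot be cast automatically to type integer
    -- HINT:  You might need to specify "USING flag::integer".
    ALTER TABLE t ALTER COLUMN flag TYPE integer USING flag::integer;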
Following recent discussion on -hackers. The underlying function is
also renamed to jsonb_delete_path. The regression tests no longer need
ugly type casts to avoid the ambiguity, so those casts are also removed.
Catalog version bumped.
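A hedged usage sketch (assuming the operator in question is the #-
path-deletion operator):

    SELECT '{"a": {"b": 1, "c": 2}}'::jsonb #- '{a,b}';
    -- {"a": {"c": 2}}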
Supporting deletion of JSON pairs within jsonb objects using an
array-style integer subscript allowed for surprising outcomes. This was
mostly due to the implementation-defined ordering of pairs within
objects for jsonb.
It also seems desirable to make jsonb integer subscript deletion
consistent with the 9.4-era general-purpose integer subscripting
operator for jsonb (although that operator returns NULL when an object
is encountered, while we prefer here to throw an error).
Peter Geoghegan, following discussion on -hackers.
This reverts commit 5cdf25e168,
which was almost immediately proven insufficient by the buildfarm.
On second thought, the tables involved are not large enough that
autovacuum or autoanalyze would notice them; what seems far more
likely to be the culprit is the database-wide "vacuum analyze"
in the concurrent gist test. That thing has given us one headache
too many, so get rid of it in favor of targeted vacuuming of that
test's own tables only.
Verify that the number of matches is exactly what it should be, not just
that it is nonzero. This should help us detect any environment-dependent
issues.
Also, verify that we're getting the expected type of scan plan (either
bitmap or seqscan as appropriate). Right now, this is failing on the
cidrcol test cases, as shown in the output file. I'll look into that
in a bit, but it seems good to commit this as-is temporarily to verify
that it behaves as expected on the buildfarm.
Casting to char, without quotes, does not give the same results as casting
to "char". That meant we were not testing the brin "char" paths at all,
since we ended up with a text operator, not a "char" operator.
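For instance, a minimal sketch of the distinction:

    SELECT pg_typeof('A'::char);    -- character (bpchar, a text-family type)
    SELECT pg_typeof('A'::"char");  -- "char", the one-byte internal type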
This test used seqscans on tenk1, with LIMIT, to build test data.
That works most of the time, but if the synchronized-seqscan logic
kicks in, we get varying test data. This seems likely to explain
the erratic test failures on buildfarm member chipmunk, which uses
smaller-than-default shared_buffers. To fix, add ORDER BY clauses to
force the ordering to be what it was implicitly being assumed to be.
Peter Geoghegan had noticed this with respect to one of the trouble
spots, though not the ones actually causing the chipmunk issue.
Some recent buildfarm failures can be explained by supposing that
autovacuum or autoanalyze fired on the tables created by this test,
resulting in plan changes. Do a proactive VACUUM ANALYZE on the
test's principal tables to try to forestall such changes.
The function is given a fourth parameter, which defaults to true. When
this parameter is true, if the last element of the path is missing
in the original json, jsonb_set creates it in the result and assigns it
the new value. If it is false then the function does nothing unless all
elements of the path are present, including the last.
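A brief sketch of the behavior described above:

    SELECT jsonb_set('{"a": 1}', '{b}', '2');         -- {"a": 1, "b": 2}
    SELECT jsonb_set('{"a": 1}', '{b}', '2', false);  -- {"a": 1}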
Based on some original code from Dmitry Dolgov, heavily modified by me.
Catalog version bumped.
opr_sanity.sql has a test checking that relevant properties of built-in
functions match when the same C function is referenced by multiple pg_proc
entries. The test neglected to check proleakproof, though, and when
I added that condition it exposed that xideqint4 hadn't been updated to
match xideq. So fix that as well, and in consequence bump catversion.
This isn't very critical, so no need to worry about fixing back branches.
brin.sql included a call of brin_summarize_new_values(), and expected
it to always report exactly 5 summarization events. This failed sometimes
during parallel regression tests, as a consequence of the database-wide
VACUUM in gist.sql getting there first. The most future-proof way
to avoid variation in the test results is to forget about using
brin_summarize_new_values() and just do a plain "VACUUM brintest",
which will exercise the same code anyway.
Having done that, there's no need for preventing autovacuum on brintest;
doing so just reduces the scope of test coverage, so let's not.
Paul Ramsey reported that commit 35fcb1b3d0
induced a core dump on commuted ORDER BY expressions, because it was
assuming that the indexorderby expression could be found verbatim in the
relevant equivalence class, but it wasn't there. We really don't need
anything that complicated anyway; for the data types likely to be used for
index ORDER BY operators in the foreseeable future, the exprType() of the
ORDER BY expression will serve fine. (The case where we'd have to work
harder is where the ORDER BY expression's result is only binary-compatible
with the declared input type of the ordering operator; long before worrying
about that, one would need to get rid of GiST's hard-wired assumption that
said datatype is float8.)
Aside from fixing that crash and adding a regression test for the case,
I did some desultory code review:
nodeIndexscan.c was likewise overthinking how hard it ought to work to
identify the datatype of the ORDER BY expressions.
Add comments explaining how come nodeIndexscan.c can get away with
simplifying assumptions about NULLS LAST ordering and no backward scan.
Revert no-longer-needed changes of find_ec_member_for_tle(); while the
new definition was no worse than the old, it wasn't better either, and
it might cause back-patching pain.
Revert entirely bogus additions to genam.h.
Neither the new alias for INSERT's target table nor the inference
clause was supported by deparsing. Also fix a typo in an error
message.
Add regression tests to test those code paths.
Author: Peter Geoghegan
No index in template0 should have collation-dependent ordering, especially
not indexes on shared catalogs. For most textual columns we avoid this
issue by using type "name" (which sorts per strcmp()). However there are a
few indexed columns that we'd prefer to use "text" for, and for that, the
default opclass text_ops is unsafe. Fortunately, text_pattern_ops is safe
(it sorts per memcmp()), and it has no real functional disadvantage for our
purposes. So change the indexes on pg_seclabel.provider and
pg_shseclabel.provider to use text_pattern_ops.
In passing, also mark pg_replication_origin.roname as using
text_pattern_ops --- for some reason it was labeled varchar_pattern_ops
which is just wrong, even though it accidentally worked.
Add regression test queries to catch future errors of these kinds.
We still can't do anything about the misdeclared pg_seclabel and
pg_shseclabel indexes in back branches :-(
The previous coding was a leftover from attempting to hang all the ON
CONFLICT logic onto ModifyTable's child nodes. It appears not to have
actually caused problems except for EXPLAIN.
Add a test exercising the broken code path and some others.
Author: Peter Geoghegan and Andres Freund
This SQL-standard functionality makes it possible to aggregate data by
several different GROUP BY clauses at once. Each grouping set returns
rows in which the columns grouped by only in the other sets are set to
NULL.
This could previously be achieved by doing each grouping as a separate
query, conjoined by UNION ALLs. Besides being considerably more concise,
grouping sets will in many cases be faster, requiring only one scan over
the underlying data.
The current implementation of grouping sets only supports using sorting
for input. Individual sets that share a sort order are computed in one
pass. If there are sets that don't share a sort order, additional sort &
aggregation steps are performed. These additional passes are sourced by
the previous sort step; thus avoiding repeated scans of the source data.
The code is structured in a way that adding support for purely using
hash aggregation or a mix of hashing and sorting is possible. Sorting
was chosen to be supported first, as it is the most generic method of
implementation.
Instead of representing the chain of sort and aggregation steps as
full-blown planner and executor nodes, as earlier versions of the patch
did, all but the first sort are performed inside the aggregation node
itself. This avoids the need to do some unusual gymnastics to handle
having to return aggregated and non-aggregated tuples from underlying
nodes, as well as having to shut down underlying nodes early to limit
memory usage. The optimizer still builds Sort/Agg nodes to describe each
phase, but they are not part of the plan tree; instead they are
additional data for the aggregation node. They're a convenient and
preexisting way
to describe aggregation and sorting. The first (and possibly only) sort
step is still performed as a separate execution step. That retains
similarity with existing GROUP BY plans, makes rescans fairly simple,
avoids very deep plans (leading to slow EXPLAINs), and makes it easy to
skip the sorting step if the underlying data is already sorted by other
means.
A somewhat ugly side of this patch is having to deal with a grammar
ambiguity between the new CUBE keyword and the cube extension/functions
named cube (and rollup). To avoid breaking existing deployments of the
cube extension it has not been renamed, nor has cube been made a
reserved keyword. Instead, precedence hacking is used to make GROUP BY
cube(..) refer to the CUBE grouping sets feature, and not the function
cube(). To actually group by a function cube(), unlikely as that might
be, the function name has to be quoted.
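That is, sketching the two readings (a and b are made-up columns; the
second form assumes the two-argument cube() function from the
extension):

    GROUP BY CUBE (a, b)       -- the grouping-sets construct
    GROUP BY "cube"(a, b)      -- a call to the cube extension's function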
Needs a catversion bump because stored rules may change.
Author: Andrew Gierth and Atri Sharma, with contributions from Andres Freund
Reviewed-By: Andres Freund, Noah Misch, Tom Lane, Svenne Krap, Tomas
Vondra, Erik Rijkers, Marti Raudsepp, Pavel Stehule
Discussion: CAOeZVidmVRe2jU6aMk_5qkxnB7dfmPROzM7Ur8JPW5j8Y5X-Lw@mail.gmail.com
This lets BRIN be used with R-Tree-like indexing strategies.
Also provided are operator classes for range types, box and inet/cidr.
The infrastructure provided here should be sufficient to create operator
classes for similar datatypes; for instance, opclasses for PostGIS
geometries should be doable, though we didn't try to implement one.
(A box/point opclass was also submitted, but we ripped it out before
commit because the handling of floating point comparisons in existing
code is inconsistent and would generate corrupt indexes.)
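A hedged usage sketch (assuming the inet opclass is named
inet_inclusion_ops; the table is made up):

    CREATE TABLE conns (client inet);
    CREATE INDEX conns_client_idx ON conns
        USING brin (client inet_inclusion_ops);
    SELECT * FROM conns WHERE client << inet '10.0.0.0/8';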
Author: Emre Hasegeli. Cosmetic changes by me
Review: Andreas Karlsson
Add a TABLESAMPLE clause to SELECT statements that allows the user
to specify random BERNOULLI sampling or block-level SYSTEM sampling.
The implementation allows extensible sampling functions to be written,
using a standard API. The basic version follows the SQL standard
exactly. Usable concrete use cases for the sampling API follow in later
commits.
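For example (the table name is made up):

    -- block-level sampling, roughly 10% of the table's pages
    SELECT * FROM orders TABLESAMPLE SYSTEM (10);
    -- per-row sampling with a reproducible seed
    SELECT * FROM orders TABLESAMPLE BERNOULLI (10) REPEATABLE (42);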
Petr Jelinek
Reviewed by Michael Paquier and Simon Riggs
The expected output contained some floating point values which might get
rounded slightly differently on different platforms. The exact output isn't
very interesting in this test, so just round it.
Per buildfarm member rover_firefly.
The distance function can now set *recheck = false, like index quals. The
executor will then re-check the ORDER BY expressions, and use a queue to
reorder the results on the fly.
This makes it possible to do kNN-searches on polygons and circles, which
don't store the exact value in the index, but just a bounding box.
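A hedged sketch of the kind of query this enables (names made up;
assuming the existing polygon <-> point distance operator):

    CREATE INDEX shapes_poly_idx ON shapes USING gist (poly);
    SELECT * FROM shapes ORDER BY poly <-> point '(0,0)' LIMIT 10;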
Alexander Korotkov and me
When this option is specified, a progress report is printed as each index
is reindexed.
Per discussion, we agreed on the following syntax for the extensibility of
the options.
REINDEX (flexible options) { INDEX | ... } name
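For instance, with the new option:

    REINDEX (VERBOSE) TABLE measurement;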
Sawada Masahiko.
Reviewed by Robert Haas, Fabrízio Mello, Alvaro Herrera, Kyotaro Horiguchi,
Jim Nasby and me.
Discussion: CAD21AoA0pK3YcOZAFzMae+2fcc3oGp5zoRggDyMNg5zoaWDhdQ@mail.gmail.com
These operations now error out if attempted on scalars, and simply
return the input if attempted on empty arrays or objects. Along the way
we remove the unnecessary cloning of the input when it's known to be
unchanged. Regression tests covering these cases are added.
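A brief sketch of the revised behavior (error wording is approximate):

    SELECT '"a"'::jsonb - 'a';   -- now an error: cannot delete from scalar
    SELECT '{}'::jsonb - 'a';    -- returns {}
    SELECT '[]'::jsonb - 1;      -- returns []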
Specifically the tlist and rti of the pseudo "excluded" relation weren't
properly treated by expression_tree_walker, which led to errors when
excluded was referenced inside a rule because the varnos were not
properly adjusted. Similar omissions in OffsetVarNodes and
expression_tree_mutator had less impact, but should obviously be fixed
nonetheless.
A couple of tests for ON CONFLICT UPDATE on relations bearing INSERT
rules have been added.
In passing I updated a couple comments.
jsonb_pretty(jsonb) produces nicely indented json output.
jsonb || jsonb concatenates two jsonb values.
jsonb - text removes a key and its associated value from the json
jsonb - int removes the designated array element
jsonb - text[] removes a key and associated value or array element at
the designated path
jsonb_replace(jsonb,text[],jsonb) replaces the array element designated
by the path or the value associated with the key designated by the path
with the given value.
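Some brief usage sketches of the above (results shown as comments):

    SELECT '{"a": 1}'::jsonb || '{"b": 2}';                -- {"a": 1, "b": 2}
    SELECT '{"a": 1, "b": 2}'::jsonb - 'a';                -- {"b": 2}
    SELECT '["a", "b", "c"]'::jsonb - 1;                   -- ["a", "c"]
    SELECT jsonb_replace('{"a": {"b": 1}}', '{a,b}', '9'); -- {"a": {"b": 9}}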
Original work by Dmitry Dolgov, adapted and reworked for PostgreSQL core
by Andrew Dunstan, reviewed and tidied up by Petr Jelinek.
The new type has the scope of the whole database cluster, so with
respect to object dependencies it doesn't behave the same as the
existing OID alias types, which have database scope. To avoid confusion,
constants of the new type are prohibited from appearing where
dependencies are made involving it.
Also, add a note to the docs about possible MVCC violation and
optimization issues, which apply generally to all the reg* types.
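A hedged sketch, on the assumption that the type being described here is
regrole:

    SELECT 'postgres'::regrole;                  -- role name to OID
    SELECT oid::regrole FROM pg_authid LIMIT 1;  -- OID shown as role name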
Kyotaro Horiguchi
The newly added ON CONFLICT clause allows specifying an alternative to
raising a unique or exclusion constraint violation error when inserting.
ON CONFLICT refers to constraints that can either be specified using an
inference clause (by specifying the columns of a unique constraint) or
by naming a unique or exclusion constraint. DO NOTHING avoids the
constraint violation, without touching the pre-existing row. DO UPDATE
SET ... [WHERE ...] updates the pre-existing tuple, and has access to
both the tuple proposed for insertion and the existing tuple; the
optional WHERE clause can be used to prevent an update from being
executed. The UPDATE SET and WHERE clauses have access to the tuple
proposed for insertion using the "magic" EXCLUDED alias, and to the
pre-existing tuple using the table name or its alias.
This feature is often referred to as upsert.
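For example (the table is made up; AS c shows the new target-table
alias):

    INSERT INTO counters AS c (key, n) VALUES ('hits', 1)
    ON CONFLICT (key) DO UPDATE SET n = c.n + EXCLUDED.n;

    INSERT INTO counters (key, n) VALUES ('hits', 1)
    ON CONFLICT DO NOTHING;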
This is implemented using a new infrastructure called "speculative
insertion". It is an optimistic variant of regular insertion that first
does a pre-check for existing tuples and then attempts an insert. If a
violating tuple was inserted concurrently, the speculatively inserted
tuple is deleted and a new attempt is made. If the pre-check finds a
matching tuple the alternative DO NOTHING or DO UPDATE action is taken.
If the insertion succeeds without detecting a conflict, the tuple is
deemed inserted.
To handle the possible ambiguity between the excluded alias and a table
named excluded, and for convenience with long relation names, INSERT
INTO now can alias its target table.
Bumps catversion as stored rules change.
Author: Peter Geoghegan, with significant contributions from Heikki
Linnakangas and Andres Freund. Testing infrastructure by Jeff Janes.
Reviewed-By: Heikki Linnakangas, Andres Freund, Robert Haas, Simon Riggs,
Dean Rasheed, Stephen Frost and many others.
The minmax opclass was using the wrong support functions when
cross-datatypes queries were run. Instead of trying to fix the
pg_amproc definitions (which apparently is not possible), use the
already correct pg_amop entries instead. This requires jumping through
more hoops (read: extra syscache lookups) to obtain the underlying
functions to execute, but it is necessary for correctness.
Author: Emre Hasegeli, tweaked by Álvaro
Review: Andreas Karlsson
Also change BrinOpcInfo to record each stored type's typecache entry
instead of just the OID. Turns out that the full type cache is
necessary in brin_deform_tuple: the original code used the indexed
type's byval and typlen properties to extract the stored tuple, which is
correct in Minmax; but in other implementations that want to store
something different, that's wrong. The realization that this is a bug
comes from Emre also, but I did not use his patch.
I also adopted Emre's regression test code (with smallish changes),
which is more complete.
This commit adds the following functions:
box(point) -> box
bound_box(box, box) -> box
inet_same_family(inet, inet) -> bool
inet_merge(inet, inet) -> cidr
range_merge(anyrange, anyrange) -> anyrange
The first of these is also used to implement a new assignment cast from
point to box.
These functions are the first part of a base to implement an "inclusion"
operator class for BRIN, for multidimensional data types.
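Brief usage sketches (results shown approximately):

    SELECT box(point '(1,2)');                       -- (1,2),(1,2)
    SELECT bound_box(box '((0,0),(1,1))',
                     box '((2,2),(3,3))');           -- (3,3),(0,0)
    SELECT inet_same_family('192.168.1.1', '::1');   -- false
    SELECT inet_merge('192.168.1.5/24',
                      '192.168.2.5/24');             -- 192.168.0.0/22
    SELECT range_merge('[1,3)'::int4range, '[5,8)'); -- [1,8)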
Author: Emre Hasegeli
Reviewed by: Andreas Karlsson
When altering the deferredness state of a foreign key constraint, we
correctly updated the catalogs and then invalidated the relcache state for
the target relation ... but that's not the only relation with relevant
triggers. Must invalidate the other table as well, or the state change
fails to take effect promptly for operations triggered on the other table.
Per bug #13224 from Christian Ullrich.
In passing, reorganize regression test case for this feature so that it
isn't randomly injected into the middle of an unrelated test sequence.
Oversight in commit f177cbfe67. Back-patch
to 9.4 where the faulty code was added.
The original security barrier view implementation, on which RLS is
built, prevented all non-leakproof functions from being pushed down to
below the view, even when the function was not receiving any data from
the view. This optimization improves on that situation: instead of
checking strictly for non-leakproof functions, it checks for Vars being
passed to non-leakproof functions, and allows functions which do not
accept arguments, or whose arguments are not from the current query
level (eg: constants can be particularly useful), to be pushed down.
As discussed, this does mean that a function which is pushed down might
gain some idea that there are rows meeting certain criteria based on
the number of times the function is called, but this isn't a
particularly new issue and the documentation in rules.sgml already
addressed similar covert-channel risks. That documentation is updated
to reflect that non-leakproof functions may be pushed down now, if
they meet the above-described criteria.
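A hypothetical sketch of the distinction (all names made up; f is
deliberately not marked LEAKPROOF):

    CREATE FUNCTION f(i int) RETURNS boolean
        LANGUAGE sql AS 'SELECT i > 0';
    CREATE VIEW v WITH (security_barrier = true) AS
        SELECT * FROM t WHERE NOT hidden;
    -- f(10) receives only a constant, so it can now be pushed below v;
    -- f(a) still cannot, since a comes from the view.
    SELECT * FROM v WHERE f(10);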
Author: Dean Rasheed, with a bit of rework to make things clearer,
along with comment and documentation updates from me.
An outer join appearing within the RHS of an antijoin can't commute with
the antijoin, but somehow I missed teaching make_outerjoininfo() about
that. In Teodor Sigaev's recent trouble report, this manifests as a
"could not find RelOptInfo for given relids" error within eqjoinsel();
but I think silently wrong query results are possible too, if the planner
misorders the joins and doesn't happen to trigger any internal consistency
checks. It's broken as far back as we had antijoins, so back-patch to all
supported branches.
The RLS capability is built on top of the WITH CHECK OPTION
system which was added for auto-updatable views, however, unlike
WCOs on views (which are mandated by the SQL spec to not fire until
after all other constraints and checks are done), it makes much more
sense for RLS checks to happen earlier than constraint and uniqueness
checks.
This patch reworks the structure which holds the WCOs a bit to be
explicitly either VIEW or RLS checks and the RLS-related checks are
done prior to the constraint and uniqueness checks. This also allows
better error reporting as we are now reporting when a violation is due
to a WITH CHECK OPTION and when it's due to an RLS policy violation,
which was independently noted by Craig Ringer as being confusing.
The documentation is also updated to include a paragraph about when RLS
WITH CHECK handling is performed, as there have been a number of
questions regarding that and the documentation was previously silent on
the matter.
Author: Dean Rasheed, with some kibitzing and comment changes by me.
Commit a2e35b53c3 neglected to update the type OID to use further
down in DefineType when TypeShellMake was changed to return
ObjectAddress instead of OID (it got it right in DefineRange, however.)
This resulted in an internal error message being issued when looking up
I/O functions.
Author: Michael Paquier
Also add Asserts() to a couple of other places to ensure that the type
OID being used is as expected.
In prepend_row_security_policies(), defaultDeny was always true, so if
there were any hook policies, the RLS policies on the table would just
get discarded. Fixed to start off with defaultDeny as false and then
properly set later if we detect that only the default deny policy exists
for the internal policies.
The infinite recursion detection in fireRIRrules() didn't properly
manage the activeRIRs list in the case of WCOs, so it would incorrectly
report infinite recursion if the same relation with RLS appeared more
than once in the rtable, for example "UPDATE t ... FROM t ...".
Further, the RLS expansion code in fireRIRrules() was handling RLS in
the main loop through the rtable, which led to RTEs being visited twice
if they contained sublink subqueries, which
prepend_row_security_policies() attempted to handle by exiting early if
the RTE already had securityQuals. That doesn't work, however, since
if the query involved a security barrier view on top of a table with
RLS, the RTE would already have securityQuals (from the view) by the
time fireRIRrules() was invoked, and so the table's RLS policies would
be ignored. This is fixed in fireRIRrules() by handling RLS in a
separate loop at the end, after dealing with any other sublink
subqueries, thus ensuring that each RTE is only visited once for RLS
expansion.
The inheritance planner code didn't correctly handle non-target
relations with RLS, which would get turned into subqueries during
planning. Thus an update of the form "UPDATE t1 ... FROM t2 ..." where
t1 has inheritance and t2 has RLS quals would fail. Fix by making sure
to copy in and update the securityQuals when they exist for non-target
relations.
process_policies() was adding WCOs to non-target relations, which is
unnecessary, and could lead to a lot of wasted time in the rewriter and
the planner. Fix by only adding WCO policies when working on the result
relation. Also in process_policies, we should be copying the USING
policies to the WITH CHECK policies on a per-policy basis; fix by moving
the copying up into the per-policy loop.
Lastly, as noted by Dean, we were simply adding policies returned by the
hook provided to the list of quals being AND'd, meaning that they would
actually restrict records returned and there was no option to have
internal policies and hook-based policies work together permissively (as
all internal policies currently work). Instead, explicitly add support
for both permissive and restrictive policies by having a hook for each
and combining the results appropriately. To ensure this is all done
correctly, add a new test module (test_rls_hooks) to test the various
combinations of internal, permissive, and restrictive hook policies.
Largely from Dean Rasheed (thanks!):
CAEZATCVmFUfUOwwhnBTcgi6AquyjQ0-1fyKd0T3xBWJvn+xsFA@mail.gmail.com
Author: Dean Rasheed, though I added the new hooks and test module.
It now also reports dropped temporary objects that are local to the
backend. Previously we weren't reporting any temp objects because it
was deemed unnecessary; but as it turns out, it is necessary if we want
to keep close track of DDL command execution inside one session. Temp
objects are reported as living in schema pg_temp, which works because
such a schema-qualification always refers to the temp objects of the
current session.
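A minimal sketch of consuming this information from an event trigger
(names made up):

    CREATE FUNCTION log_drops() RETURNS event_trigger
    LANGUAGE plpgsql AS $$
    DECLARE obj record;
    BEGIN
        FOR obj IN SELECT * FROM pg_event_trigger_dropped_objects() LOOP
            -- temp objects now show up with schema_name = 'pg_temp'
            RAISE NOTICE 'dropped % %.%',
                obj.object_type, obj.schema_name, obj.object_name;
        END LOOP;
    END $$;
    CREATE EVENT TRIGGER log_drops ON sql_drop
        EXECUTE PROCEDURE log_drops();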