Commit graph

1453 commits

Author SHA1 Message Date
Tom Lane
41dd50e84d Fix corner-case behaviors in JSON/JSONB field extraction operators.
Cause the path extraction operators to return their lefthand input,
not NULL, if the path array has no elements.  This seems more consistent
since the case ought to correspond to applying the simple extraction
operator (->) zero times.

Cause other corner cases in field/element/path extraction to return NULL
rather than failing.  This behavior is arguably more useful than throwing
an error, since it allows an expression index using these operators to be
built even when not all values in the column are suitable for the
extraction being indexed.  Moreover, we already had multiple
inconsistencies between the path extraction operators and the simple
extraction operators, as well as inconsistencies between the JSON and
JSONB code paths.  Adopt a uniform rule of returning NULL rather than
throwing an error when the JSON input does not have a structure that
permits the request to be satisfied.
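
As a rough illustration of the new rules (a behavior sketch, not taken from
the regression tests):

    SELECT '{"a": {"b": 1}}'::jsonb #> '{}';       -- empty path: returns the input itself
    SELECT '{"a": {"b": 1}}'::jsonb -> 3;          -- not an array: NULL, not an error
    SELECT '{"a": {"b": 1}}'::json #>> '{a,b,c}';  -- can't descend past a scalar: NULL, not an error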

Back-patch to 9.4.  Update the release notes to list this as a behavior
change since 9.3.
2014-08-22 13:17:58 -04:00
Tom Lane
fa069822f5 More regression test cases for json/jsonb extraction operators.
Cover some cases I omitted before, such as null and empty-string
elements in the path array.  This exposes another inconsistency:
json_extract_path complains about empty path elements but
jsonb_extract_path does not.
2014-08-20 19:05:05 -04:00
Tom Lane
9bac66020d Fix core dump in jsonb #> operator, and add regression test cases.
jsonb's #> operator segfaulted (dereferencing a null pointer) if the RHS
was a zero-length array, as reported in bug #11207 from Justin Van Winkle.
json's #> operator returns NULL in such cases, so for the moment let's
make jsonb act likewise.

Also add a bunch of regression test queries memorializing the -> and #>
operators' behavior for this and other corner cases.

There is a good argument for changing some of these behaviors, as they
are not very consistent with each other, and throwing an error isn't
necessarily a desirable behavior for operators that are likely to be
used in indexes.  However, everybody can agree that a core dump is the
Wrong Thing, and we need test cases even if we decide to change their
expected output later.
2014-08-20 16:48:53 -04:00
Greg Stark
458ef6bad1 Fix further concerns about psql wrapping in expanded mode causing
collateral damage to other output formats, by Sergey Muraviov.
2014-08-18 12:20:32 +01:00
Tom Lane
a068b5b65f Add opr_sanity queries to inspect commutator/negator links more closely.
Make lists of the names of all operators that are claimed to be commutator
pairs or negator pairs.  This is analogous to the existing queries that
make lists of all operator names appearing in particular opclass strategy
slots.  Unexpected additions to these lists are likely to be mistakes; had
we had these queries in place before, bug #11178 might've been prevented.
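
For reference, the kind of catalog cross-check involved looks roughly like
this (a simplified sketch, not the exact opr_sanity query):

    SELECT o1.oprname AS op, o2.oprname AS commutator
    FROM pg_operator o1 JOIN pg_operator o2 ON o1.oprcom = o2.oid
    ORDER BY 1, 2;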
2014-08-16 13:22:52 -04:00
Andrew Dunstan
4ebe3519e1 Allow empty string object keys in json_object().
This makes the behaviour consistent with the json parser, other
json-generating functions, and the JSON standards.
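
For example (illustrative query):

    SELECT json_object(ARRAY['', 'some value']);
    -- now yields an object with "" as a key, instead of raising an error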
2014-07-22 11:27:31 -04:00
Tom Lane
9b35ddce93 Partial fix for dropped columns in functions returning composite.
When a view has a function-returning-composite in FROM, and there are
some dropped columns in the underlying composite type, ruleutils.c
printed junk in the column alias list for the reconstructed FROM entry.
Before 9.3, this was prevented by doing get_rte_attribute_is_dropped
tests while printing the column alias list; but that solution is not
currently available to us for reasons I'll explain below.  Instead,
check for empty-string entries in the alias list, which can only exist
if that column position had been dropped at the time the view was made.
(The parser fills in empty strings to preserve the invariant that the
aliases correspond to physical column positions.)

While this is sufficient to handle the case of columns dropped before
the view was made, we have still got issues with columns dropped after
the view was made.  In particular, the view could contain Vars that
explicitly reference such columns!  The dependency machinery really
ought to refuse the column drop attempt in such cases, as it would do
when trying to drop a table column that's explicitly referenced in
views.  However, we currently neglect to store dependencies on columns
of composite types, and fixing that is likely to be too big to be
back-patchable (not to mention that existing views in existing databases
would not have the needed pg_depend entries anyway).  So I'll leave that
for a separate patch.

Pre-9.3, ruleutils would print such Vars normally (with their original
column names) even though it suppressed their entries in the RTE's
column alias list.  This is certainly bogus, since the printed view
definition would fail to reload, but at least it didn't crash.  However,
as of 9.3 the printed column alias list is tightly tied to the names
printed for Vars; so we can't treat columns as dropped for one purpose
and not dropped for the other.  This is why we can't just put back the
get_rte_attribute_is_dropped test: it results in an assertion failure
if the view in fact contains any Vars referencing the dropped column.
Once we've got dependencies preventing such cases, we'll probably want
to do it that way instead of relying on the empty-string test used here.

This fix turned up a very ancient bug in outfuncs/readfuncs, namely
that T_String nodes containing empty strings were not dumped/reloaded
correctly: the node was printed as "<>" which is read as a string
value of <>.  Since (per SQL) we disallow empty-string identifiers,
such nodes don't occur normally, which is why we'd not noticed.
(Such nodes aren't used for literal constants, just identifiers.)

Per report from Marc Schablewski.  Back-patch to 9.3 which is where
the rule printing behavior changed.  The dangling-variable case is
broken all the way back, but that's not what his complaint is about.
2014-07-19 14:28:52 -04:00
Tom Lane
f15821eefd Allow join removal in some cases involving a left join to a subquery.
We can remove a left join to a relation if the relation's output is
provably distinct for the columns involved in the join clause (considering
only equijoin clauses) and the relation supplies no variables needed above
the join.  Previously, the join removal logic could only prove distinctness
by reference to unique indexes of a table.  This patch extends the logic
to consider subquery relations, wherein distinctness might be proven by
reference to GROUP BY, DISTINCT, etc.
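
As an illustration (hypothetical tables), the left join below can now be
removed entirely, since the subquery is provably distinct on y and supplies
no columns used above the join:

    SELECT a.x
    FROM a
    LEFT JOIN (SELECT y FROM b GROUP BY y) sub ON a.x = sub.y;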

We actually already had some code to check that a subquery's output was
provably distinct, but it was hidden inside pathnode.c; which was a pretty
bad place for it really, since that file is mostly boilerplate Path
construction and comparison.  Move that code to analyzejoins.c, which is
arguably a more appropriate location, and is certainly the site of the
new usage for it.

David Rowley, reviewed by Simon Riggs
2014-07-15 21:12:43 -04:00
Tom Lane
d685814835 Fix bug with whole-row references to append subplans.
ExecEvalWholeRowVar incorrectly supposed that it could "bless" the source
TupleTableSlot just once per query.  But if the input is coming from an
Append (or, perhaps, other cases?) more than one slot might be returned
over the query run.  This led to "record type has not been registered"
errors when a composite datum was extracted from a non-blessed slot.

This bug has been there a long time; I guess it escaped notice because when
dealing with subqueries the planner tends to expand whole-row Vars into
RowExprs, which don't have the same problem.  It is possible to trigger
the problem in all active branches, though, as illustrated by the added
regression test.
2014-07-11 19:12:35 -04:00
Tom Lane
59efda3e50 Implement IMPORT FOREIGN SCHEMA.
This command provides an automated way to create foreign table definitions
that match remote tables, thereby reducing tedium and chances for error.
In this patch, we provide the necessary core-server infrastructure and
implement the feature fully in the postgres_fdw foreign-data wrapper.
Other wrappers will throw a "feature not supported" error until/unless
they are updated.
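
A usage sketch (server, schema, and table names are hypothetical):

    IMPORT FOREIGN SCHEMA remote_public
        LIMIT TO (orders, customers)
        FROM SERVER remote_pg INTO local_schema;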

Ronan Dunklau and Michael Paquier, additional work by me
2014-07-10 15:01:43 -04:00
Tom Lane
9e2f2d7a05 Don't assume a subquery's output is unique if there's a SRF in its tlist.
While the x output of "select x from t group by x" can be presumed unique,
this does not hold for "select x, generate_series(1,10) from t group by x",
because we may expand the set-returning function after the grouping step.
(Perhaps that should be re-thought; but considering all the other oddities
involved with SRFs in targetlists, it seems unlikely we'll change it.)
Put a check in query_is_distinct_for() so it's not fooled by such cases.

Back-patch to all supported branches.

David Rowley
2014-07-08 14:03:56 -04:00
Tom Lane
a749a23d7a Remove use_json_as_text options from json_to_record/json_populate_record.
The "false" case was really quite useless since all it did was to throw
an error; a definition not helped in the least by making it the default.
Instead let's just have the "true" case, which emits nested objects and
arrays in JSON syntax.  We might later want to provide the ability to
emit sub-objects in Postgres record or array syntax, but we'd be best off
to drive that off a check of the target field datatype, not a separate
argument.
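
For instance (illustrative query), a nested object now simply comes through
as JSON text when the target column is declared json:

    SELECT * FROM json_to_record('{"a": 1, "b": {"c": 2}}') AS r(a int, b json);
    -- r.b contains the JSON fragment {"c": 2}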

For the functions newly added in 9.4, we can just remove the flag arguments
outright.  We can't do that for json_populate_record[set], which already
existed in 9.3, but we can ignore the argument and always behave as if it
were "true".  It helps that the flag arguments were optional and not
documented in any useful fashion anyway.
2014-06-29 13:50:58 -04:00
Tom Lane
d222585a9f Allow pushdown of WHERE quals into subqueries with window functions.
We can allow this even without any specific knowledge of the semantics
of the window function, so long as pushed-down quals will either accept
every row in a given window partition, or reject every such row.  Because
window functions act only within a partition, such a case can't result
in changing the window functions' outputs for any surviving row.
Eliminating entire partitions in this way obviously can reduce the cost
of the window-function computations substantially.

The fly in the ointment is that it's hard to be entirely sure whether
this is true for an arbitrary qual condition.  This patch allows pushdown
if (a) the qual references only partitioning columns, and (b) the qual
contains no volatile functions.  We are at risk of incorrect results if
the qual can produce different answers for values that the partitioning
equality operator sees as equal.  While it's not hard to invent cases
for which that can happen, it seems to seldom be a problem in practice,
since no one has complained about a similar assumption that we've had
for many years with respect to DISTINCT.  The potential performance
gains seem to be worth the risk.
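
As an illustration (hypothetical table), a qual like the one below can now
be pushed into the subquery, since it references only the partitioning
column and contains no volatile functions:

    SELECT *
    FROM (SELECT depname, empno, salary,
                 rank() OVER (PARTITION BY depname ORDER BY salary DESC) AS rnk
          FROM empsalary) ss
    WHERE depname = 'sales';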

David Rowley, reviewed by Vik Fearing; some credit is due also to
Thomas Mayer who did considerable preliminary investigation.
2014-06-27 23:08:08 -07:00
Tom Lane
1147035203 Disallow pushing volatile qual expressions down into DISTINCT subqueries.
A WHERE clause applied to the output of a subquery with DISTINCT should
theoretically be applied only once per distinct row; but if we push it
into the subquery then it will be evaluated at each row before duplicate
elimination occurs.  If the qual is volatile this can give rise to
observably wrong results, so don't do that.
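
For example (hypothetical table), the volatile qual here must be evaluated
only once per distinct row, so it may not be pushed into the subquery:

    SELECT * FROM (SELECT DISTINCT col FROM tab) ss WHERE random() < 0.5;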

While at it, refactor a little bit to allow subquery_is_pushdown_safe
to report more than one kind of restrictive condition without indefinitely
expanding its argument list.

Although this is a bug fix, it seems unwise to back-patch it into released
branches, since it might de-optimize plans for queries that aren't giving
any trouble in practice.  So apply to 9.4 but not further back.
2014-06-27 11:08:48 -07:00
Tom Lane
344eed91e9 Forward-patch regression test for "could not find pathkey item to sort".
Commit a87c729153 already fixed the bug this
is checking for, but the regression test case it added didn't cover this
scenario.  Since we managed to miss the fact that there was a bug at all,
it seems like a good idea to propagate the extra test case forward to HEAD.
2014-06-26 10:41:48 -07:00
Tom Lane
57d8c1270e Fix handling of nested JSON objects in json_populate_recordset and friends.
populate_recordset_object_start() improperly created a new hash table
(overwriting the link to the existing one) if called at nest levels
greater than one.  This resulted in previous fields not appearing in
the final output, as reported by Matti Hameister in bug #10728.
In 9.4 the problem also affects json_to_recordset.

This perhaps missed detection earlier because the default behavior is to
throw an error for nested objects: you have to pass use_json_as_text = true
to see the problem.

In addition, fix query-lifespan leakage of the hashtable created by
json_populate_record().  This is pretty much the same problem recently
fixed in dblink: creating an intended-to-be-temporary context underneath
the executor's per-tuple context isn't enough to make it go away at the
end of the tuple cycle, because MemoryContextReset is not
MemoryContextResetAndDeleteChildren.

Michael Paquier and Tom Lane
2014-06-24 21:22:40 -07:00
Tom Lane
8f889b1083 Implement UPDATE tab SET (col1,col2,...) = (SELECT ...), ...
This SQL-standard feature allows a sub-SELECT yielding multiple columns
(but only one row) to be used to compute the new values of several columns
to be updated.  While the same results can be had with an independent
sub-SELECT per column, such a workaround can require a great deal of
duplicated computation.
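
An illustrative use of the new syntax (hypothetical tables):

    UPDATE accounts
       SET (contact_first_name, contact_last_name) =
           (SELECT first_name, last_name FROM employees
             WHERE employees.id = accounts.sales_person);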

The standard actually says that the source for a multi-column assignment
could be any row-valued expression.  The implementation used here is
tightly tied to our existing sub-SELECT support and can't handle other
cases; the Bison grammar would have some issues with them too.  However,
I don't feel too bad about this since other cases can be converted into
sub-SELECTs.  For instance, "SET (a,b,c) = row_valued_function(x)" could
be written "SET (a,b,c) = (SELECT * FROM row_valued_function(x))".
2014-06-18 13:22:34 -04:00
Noah Misch
f3fdd257a4 Harden pg_filenode_relation test against concurrent DROP TABLE.
Per buildfarm member prairiedog.  Back-patch to 9.4, where the test was
introduced.

Reviewed by Tom Lane.
2014-06-13 19:57:59 -04:00
Tom Lane
2dd352d4b0 Add regression test to prevent future breakage of legacy query in libpq.
Memorialize the expected output of the query that libpq has been using for
many years to get the OIDs of large-object support functions.  Although
we really ought to change the way libpq does this, we must expect that
this query will remain in use in the field for the foreseeable future,
so until we're ready to break compatibility with old libpq versions
we'd better check the results stay the same.  See the recent lo_create()
fiasco.
2014-06-12 15:54:13 -04:00
Tom Lane
ab76208e3d Forward-port regression test for bug #10587 into 9.3 and HEAD.
Although this bug is already fixed in post-9.2 branches, the case
triggering it is quite different from what was under consideration
at the time.  It seems worth memorializing this example in HEAD
just to make sure it doesn't get broken again in future.

Extracted from commit 187ae17300.
2014-06-09 21:37:18 -04:00
Andres Freund
e0cb4aa89d Move regression test listing of builtin leakproof functions to opr_sanity.sql.
The original location in create_function_3.sql didn't invite the close
scrutiny warranted for adding new leakproof functions. Add comments
to the test explaining that functions should only be added after
careful consideration and an understanding of what a leakproof function is.
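
For reference, the listing amounts to something along these lines (a sketch,
not the exact test query):

    SELECT oid::regprocedure FROM pg_proc WHERE proleakproof ORDER BY proname;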

Per complaint from Tom Lane after 5eebb8d954.
2014-06-05 13:54:25 +02:00
Tom Lane
d4d48a5edd Tweak new regression test case for better portability.
Buildfarm says we get different plans on 32-bit and 64-bit platforms,
probably because of MAXALIGN-related differences in memory-consumption
calculations.  Add some dummy WHERE clauses so that the planner estimates
different sizes for the three generate_series() relations; that should
stabilize the choice of join order.
2014-06-04 21:31:41 -04:00
Tom Lane
4c8ab1b91d Add btree and hash opclasses for pg_lsn.
This is needed to allow ORDER BY, DISTINCT, etc to work as expected for
pg_lsn values.
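
For example (a sketch, not from the regression tests):

    SELECT DISTINCT lsn
    FROM (VALUES ('0/16B3748'::pg_lsn), ('0/16B3748'::pg_lsn), ('0/16B3750'::pg_lsn)) v(lsn)
    ORDER BY lsn;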

We had previously decided to put this off for 9.5, but in view of commit
eeca4cd35e there's no reason to avoid a
catversion bump for 9.4beta2, and this does make a pretty significant
usability difference for pg_lsn.

Michael Paquier, with fixes from Andres Freund and Tom Lane
2014-06-04 20:45:56 -04:00
Andrew Dunstan
0ad1a81632 Do not escape a unicode sequence when escaping JSON text.
Previously, any backslash in text being escaped for JSON was doubled so
that the result was still valid JSON. However, this led to some perverse
results in the case of Unicode sequences. These are now detected and the
initial backslash is no longer escaped. All other backslashes are
still escaped. No validity check is performed; all that is looked for is
\uXXXX where X is a hexadecimal digit.

This is a change from the 9.2 and 9.3 behaviour as noted in the Release
notes.

Per complaint from Teodor Sigaev.
2014-06-03 16:11:31 -04:00
Andrew Dunstan
f30015b6d7 Output timestamps in ISO 8601 format when rendering JSON.
Many JSON processors require timestamp strings in ISO 8601 format in
order to convert the strings. When converting a timestamp, with or
without timezone, to a JSON datum we therefore now use such a format
rather than the type's default text output, in functions such as
to_json().
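
For example (illustrative query; the exact text depends on the input value
and the session TimeZone):

    SELECT to_json('2014-06-03 13:56:53-04'::timestamptz);
    -- yields something like "2014-06-03T13:56:53-04:00" rather than the
    -- DateStyle-dependent form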

This is a change in behaviour from 9.2 and 9.3, as noted in the release
notes.
2014-06-03 13:56:53 -04:00
Andres Freund
5eebb8d954 Use unaligned output in another regression test query to reduce diff noise.
Use the unaligned/no rowcount output mode in a regression tests that
shows all built-in leakproof functions. Currently a new leakproof
function will often change the alignment of all existing functions,
making it hard to see the actual difference and creating unnecessary
patch conflicts.

Noticed while looking over a patch introducing new leakproof functions.
2014-06-03 12:19:18 +02:00
Heikki Linnakangas
8f9b9590d7 Handle duplicate XIDs in txid_snapshot.
The proc array can contain duplicate XIDs, when a transaction is just being
prepared for two-phase commit. To cope, remove any duplicates in
txid_current_snapshot(). Also ignore duplicates in the input functions, so
that if e.g. you have an old pg_dump file that already contains duplicates,
it will be accepted.
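
For instance (sketch of the input-function behaviour):

    SELECT '12:18:14,14,16'::txid_snapshot;  -- the duplicate xip entry 14 is now tolerated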

Report and fix by Jan Wieck. Backpatch to all supported versions.
2014-05-15 18:29:20 +03:00
Tom Lane
66b737cd9a Be more wary in choice of timezone names to test make_timestamptz with.
America/Metlakatla hasn't been in the IANA database all that long, so
some installations might not have it.  It does seem worthwhile to test
with a fractional-minute GMT offset, but we can get that from almost
any pre-1900 date; I chose Europe/Paris, whose LMT offset from Greenwich
should be pretty darn well established.

Also, assuming that Mars/Mons_Olympus will never be in the IANA database
seems less than future-proof, so let's use a more fanciful location for
the bad-zone-name check.

Per complaint from Christoph Berg.
2014-05-12 20:21:16 -04:00
Tom Lane
12e611d43e Rename jsonb_hash_ops to jsonb_path_ops.
There's no longer much pressure to switch the default GIN opclass for
jsonb, but there was still some unhappiness with the name "jsonb_hash_ops",
since hashing is no longer a distinguishing property of that opclass,
and anyway it seems like a relatively minor detail.  At the suggestion of
Heikki Linnakangas, we'll use "jsonb_path_ops" instead; that captures the
important characteristic that each index entry depends on the entire path
from the document root to the indexed value.
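
Usage is unchanged apart from the name (hypothetical table):

    CREATE INDEX ON documents USING gin (doc jsonb_path_ops);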

Also add a user-facing explanation of the implementation properties of
these two opclasses.
2014-05-11 12:06:04 -04:00
Tom Lane
46dddf7673 Improve key representation for GIN jsonb_ops, and fix existence-search bug.
Change the key representation so that values that would exceed 127 bytes
are hashed into short strings, and so that the original JSON datatype of
each value is recorded in the index.  The hashing rule eliminates the major
objection to having this opclass be the default for jsonb, namely that it
could fail for plausible input data (due to GIN's restrictions on maximum
key length).  Preserving datatype information doesn't really buy us much
right now, but it requires no extra space compared to the previous way,
and it might be useful later.

Also, change the consistency-checking functions to request recheck for
exists (jsonb ? text) and related operators.  The original analysis that
this is an exactly checkable query was incorrect, since the index does
not preserve information about whether a key appears at top level in
the indexed JSON object.  Add a test case demonstrating the problem.

Make some other, mostly cosmetic improvements to the code in jsonb_gin.c
as well.

catversion bump due to on-disk data format change in jsonb_ops indexes.
2014-05-09 08:41:26 -04:00
Tom Lane
a16d421ca4 Revert "Auto-tune effective_cache size to be 4x shared buffers"
This reverts commit ee1e5662d8, as well as
a remarkably large number of followup commits, which were mostly concerned
with the fact that the implementation didn't work terribly well.  It still
doesn't: we probably need some rather basic work in the GUC infrastructure
if we want to fully support GUCs whose default varies depending on the
value of another GUC.  Meanwhile, it also emerged that there wasn't really
consensus in favor of the definition the patch tried to implement (ie,
effective_cache_size should default to 4 times shared_buffers).  So whack
it all back to where it was.  In a followup commit, I'll do what was
recently agreed to, which is to simply change the default to a higher
value.
2014-05-08 20:49:38 -04:00
Tom Lane
04e5025be8 Fix failure to set ActiveSnapshot while rewinding a cursor.
ActiveSnapshot needs to be set when we call ExecutorRewind because some
plan node types may execute user-defined functions during their ReScan
calls (nodeLimit.c does so, at least).  The wisdom of that is somewhat
debatable, perhaps, but for now the simplest fix is to make sure the
required context is valid.  Failure to do this typically led to a
null-pointer-dereference core dump, though it's possible that in more
complex cases a function could be executed with the wrong snapshot
leading to very subtle misbehavior.

Per report from Leif Jensen.  It's been broken for a long time, so
back-patch to all active branches.
2014-05-07 14:25:11 -04:00
Jeff Davis
348aa75a67 Fix interval test, which was broken for floating-point timestamps.
Commit 4318daecc9 introduced a test that
couldn't be made consistent between integer and floating-point
timestamps.

It was designed to test the longest possible interval output length,
so removing four zeros from the number of hours, as this patch does,
is not ideal. But the test still has some utility for its original
purpose, and there aren't a lot of other good options.

Noah Misch suggested a different approach where we test that the
output either matches what we expect from integer timestamps or what
we expect from floating-point timestamps. That seemed to obscure an
otherwise simple test, however.

Reviewed by Tom Lane and Noah Misch.
2014-05-06 19:53:59 -07:00
Tom Lane
91e16b9806 Fix yet another corner case in dumping rules/views with USING clauses.
ruleutils.c tries to cope with additions/deletions/renamings of columns in
tables referenced by views, by means of adding machine-generated aliases to
the printed form of a view when needed to preserve the original semantics.
A recent blog post by Marko Tiikkaja pointed out a case I'd missed though:
if one input of a join with USING is itself a join, there is nothing to
stop the user from adding a column of the same name as the USING column to
whichever side of the sub-join didn't provide the USING column.  And then
there'll be an error when the view is re-parsed, since now the sub-join
exposes two columns matching the USING specification.  We were catching a
lot of related cases, but not this one, so add some logic to cope with it.

Back-patch to 9.3, which is the first release that makes any serious
attempt to cope with such cases (cf commit 2ffa740be and follow-ons).
2014-05-01 20:22:37 -04:00
Tom Lane
3f8c8e3c61 Fix failure to detoast fields in composite elements of structured types.
If we have an array of records stored on disk, the individual record fields
cannot contain out-of-line TOAST pointers: the tuptoaster.c mechanisms are
only prepared to deal with TOAST pointers appearing in top-level fields of
a stored row.  The same applies for ranges over composite types, nested
composites, etc.  However, the existing code only took care of expanding
sub-field TOAST pointers for the case of nested composites, not for other
structured types containing composites.  For example, given a command such
as

UPDATE tab SET arraycol = ARRAY[ROW(x,42)::mycompositetype] ...

where x is a direct reference to a field of an on-disk tuple, if that field
is long enough to be toasted out-of-line then the TOAST pointer would be
inserted as-is into the array column.  If the source record for x is later
deleted, the array field value would become a dangling pointer, leading
to errors along the line of "missing chunk number 0 for toast value ..."
when the value is referenced.  A reproducible test case for this was
provided by Jan Pecek, but it seems likely that some of the "missing chunk
number" reports we've heard in the past were caused by similar issues.

Code-wise, the problem is that PG_DETOAST_DATUM() is not adequate to
produce a self-contained Datum value if the Datum is of composite type.
Seen in this light, the problem is not just confined to arrays and ranges,
but could also affect some other places where detoasting is done in that
way, for example form_index_tuple().

I tried teaching the array code to apply toast_flatten_tuple_attribute()
along with PG_DETOAST_DATUM() when the array element type is composite,
but this was messy and imposed extra cache lookup costs whether or not any
TOAST pointers were present, indeed sometimes when the array element type
isn't even composite (since sometimes it takes a typcache lookup to find
that out).  The idea of extending that approach to all the places that
currently use PG_DETOAST_DATUM() wasn't attractive at all.

This patch instead solves the problem by decreeing that composite Datum
values must not contain any out-of-line TOAST pointers in the first place;
that is, we expand out-of-line fields at the point of constructing a
composite Datum, not at the point where we're about to insert it into a
larger tuple.  This rule is applied only to true composite Datums, not
to tuples that are being passed around the system as tuples, so it's not
as invasive as it might sound at first.  With this approach, the amount
of code that has to be touched for a full solution is greatly reduced,
and added cache lookup costs are avoided except when there actually is
a TOAST pointer that needs to be inlined.

The main drawback of this approach is that we might sometimes dereference
a TOAST pointer that will never actually be used by the query, imposing a
rather large cost that wasn't there before.  On the other side of the coin,
if the field value is used multiple times then we'll come out ahead by
avoiding repeat detoastings.  Experimentation suggests that common SQL
coding patterns are unaffected either way, though.  Applications that are
very negatively affected could be advised to modify their code to not fetch
columns they won't be using.

In future, we might consider reverting this solution in favor of detoasting
only at the point where data is about to be stored to disk, using some
method that can drill down into multiple levels of nested structured types.
That will require defining new APIs for structured types, though, so it
doesn't seem feasible as a back-patchable fix.

Note that this patch changes HeapTupleGetDatum() from a macro to a function
call; this means that any third-party code using that macro will not get
protection against creating TOAST-pointer-containing Datums until it's
recompiled.  The same applies to any uses of PG_RETURN_HEAPTUPLEHEADER().
It seems likely that this is not a big problem in practice: most of the
tuple-returning functions in core and contrib produce outputs that could
not possibly be toasted anyway, and the same probably holds for third-party
extensions.

This bug has existed since TOAST was invented, so back-patch to all
supported branches.
2014-05-01 15:19:06 -04:00
Tom Lane
95811032d7 Improve planner to drop constant-NULL inputs of AND/OR where it's legal.
In general we can't discard constant-NULL inputs, since they could change
the result of the AND/OR to be NULL.  But at top level of WHERE, we do not
need to distinguish a NULL result from a FALSE result, so it's okay to
treat NULL as FALSE and then simplify AND/OR accordingly.
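
For instance (hypothetical table; in practice such NULL constants usually
arise from parameter substitution rather than being written directly):

    SELECT * FROM tab WHERE x = 1 OR NULL;   -- the NULL arm can simply be dropped
    SELECT * FROM tab WHERE x = 1 AND NULL;  -- reduces to constant FALSE at top level of WHERE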

This is a very ancient oversight, but in 9.2 and later it can lead to
failure to optimize queries that previous releases did optimize, as a
result of more aggressive parameter substitution rules making it possible
to reduce more subexpressions to NULL constants.  This is the root cause of
bug #10171 from Arnold Scheffler.  We could alternatively have fixed that
by teaching orclauses.c to ignore constant-NULL OR arms, but it seems
better to get rid of them globally.

I resisted the temptation to back-patch this change into all active
branches, but it seems appropriate to back-patch as far as 9.2 so that
there will not be performance regressions of the kind shown in this bug.
2014-04-29 13:12:46 -04:00
Greg Stark
6513633b94 Add support for wrapping to psql's "extended" mode. This makes it much
more feasible to display tables that have both many columns and large
data in some columns (such as pg_stats).

Emre Hasegeli with review and rewriting from Sergey Muraviov and
reviewed by Greg Stark
2014-04-28 18:41:36 +01:00
Tom Lane
a0f9358149 Fix incorrect pg_proc.proallargtypes entries for two built-in functions.
pg_sequence_parameters() and pg_identify_object() have had incorrect
proallargtypes entries since 9.1 and 9.3 respectively.  This was mostly
masked by the correct information in proargtypes, but a few operations
such as pg_get_function_arguments() (and thus psql's \df display) would
show the wrong data types for these functions' input parameters.

In HEAD, fix the wrong info, bump catversion, and add an opr_sanity
regression test to catch future mistakes of this sort.

In the back branches, just fix the wrong info so that installations
initdb'd with future minor releases will have the right data.  We
can't force an initdb, and it doesn't seem like a good idea to add
a regression test that will fail on existing installations.

Andres Freund
2014-04-23 21:21:05 -04:00
Tom Lane
f0fedfe82c Allow polymorphic aggregates to have non-polymorphic state data types.
Before 9.4, such an aggregate couldn't be declared, because its final
function would have to have polymorphic result type but no polymorphic
argument, which CREATE FUNCTION would quite properly reject.  The
ordered-set-aggregate patch found a workaround: allow the final function
to be declared as accepting additional dummy arguments that have types
matching the aggregate's regular input arguments.  However, we failed
to notice that this problem applies just as much to regular aggregates,
despite the fact that we had a built-in regular aggregate array_agg()
that was known to be undeclarable in SQL because its final function
had an illegal signature.  So what we should have done, and what this
patch does, is to decouple the extra-dummy-arguments behavior from
ordered-set aggregates and make it generally available for all aggregate
declarations.  We have to put this into 9.4 rather than waiting till
later because it slightly alters the rules for declaring ordered-set
aggregates.
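
Concretely, a declaration along these lines becomes legal for an ordinary
aggregate (a sketch modeled on array_agg's support functions):

    CREATE AGGREGATE my_array_agg (anynonarray) (
        sfunc = array_agg_transfn,
        stype = internal,
        finalfunc = array_agg_finalfn,
        finalfunc_extra  -- final function also gets a dummy argument of the input type
    );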

The patch turned out a bit bigger than I'd hoped because it proved
necessary to record the extra-arguments option in a new pg_aggregate
column.  I'd thought we could just look at the final function's pronargs
at runtime, but that didn't work well for variadic final functions.
It's probably just as well though, because it simplifies life for pg_dump
to record the option explicitly.

While at it, fix array_agg() to have a valid final-function signature,
and add an opr_sanity test to notice future deviations from polymorphic
consistency.  I also marked the percentile_cont() aggregates as not
needing extra arguments, since they don't.
2014-04-23 19:17:41 -04:00
Bruce Momjian
2985e16031 regression test: fix hot standby tests by using repeatable read
Serializable transactions won't work on a Hot Standby.  Also fix
VACUUM/ANALYZE label mixup.

Patch by Martín Marqués
2014-04-22 17:23:58 -04:00
Bruce Momjian
7ec73783d8 copy: update docs for FORCE_NULL and FORCE_NOT_NULL combination
Also update regression tests

Patch by Michael Paquier
2014-04-22 16:06:37 -04:00
Tom Lane
cbb5e23bfa Update oidjoins regression test for 9.4.
Now that we're pretty much feature-frozen, it's time to update the checks
on system catalog foreign-key references.

(It looks like we missed doing this altogether for 9.3.  Sigh.)
2014-04-16 14:28:59 -04:00
Robert Haas
dfc0219f64 Add to_regprocedure() and to_regoperator().
These are natural complements to the functions added by commit
0886fc6a5c, but they weren't included
in the original patch for some reason.  Add them.
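
Example usage (like the rest of the to_reg* family, these return NULL
rather than raising an error when the object doesn't exist):

    SELECT to_regprocedure('lower(text)');
    SELECT to_regoperator('||(text,text)');
    SELECT to_regprocedure('no_such_function(int)');   -- NULL, no error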

Patch by me, per a complaint by Tom Lane.  Review by Tatsuo
Ishii.
2014-04-16 12:21:43 -04:00
Bruce Momjian
4168c00a5d psql: conditionally display oids and replication identity
In psql \d+, display oids only when they exist, and display replication
identity only when it is non-default.  Also document the defaults for
replication identity for system and non-system tables.  Update
regression output.
2014-04-15 13:28:54 -04:00
Stephen Frost
b3e6593716 Add ANALYZE into regression tests
Looks like we can end up with different plans happening on the
buildfarm, which breaks the regression tests when we include
EXPLAIN output (which is done in the regression tests for
updatable security views, to ensure that the user-defined
function isn't pushed down to a level where it could view the
rows before the security quals are applied).

This adds in ANALYZE to hopefully make the plans consistent.
The ANALYZE ends up changing the original plan too, so the
update looks bigger than it really is.  The new plan looks
perfectly valid, of course.
2014-04-13 00:41:33 -04:00
Tom Lane
d95425c8b9 Provide moving-aggregate support for boolean aggregates.
David Rowley and Florian Pflug, reviewed by Dean Rasheed
2014-04-13 00:01:46 -04:00
Stephen Frost
842faa714c Make security barrier views automatically updatable
Views which are marked as security_barrier must have their quals
applied before any user-defined quals are called, to prevent
user-defined functions from being able to see rows which the
security barrier view is intended to prevent them from seeing.

Remove the restriction on security barrier views being automatically
updatable by adding a new securityQuals list to the RTE structure
which keeps track of the quals from security barrier views at each
level, independently of the user-supplied quals.  When RTEs are
later discovered which have securityQuals populated, they are turned
into subquery RTEs which are marked as security_barrier to prevent
any user-supplied quals being pushed down (modulo LEAKPROOF quals).
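
In user-visible terms (hypothetical example), this now works:

    CREATE VIEW my_rows WITH (security_barrier) AS
        SELECT * FROM accounts WHERE owner = current_user;
    UPDATE my_rows SET balance = balance + 100.0 WHERE id = 42;  -- previously rejected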

Dean Rasheed, reviewed by Craig Ringer, Simon Riggs, KaiGai Kohei
2014-04-12 21:04:58 -04:00
Tom Lane
9d229f399e Provide moving-aggregate support for a bunch of numerical aggregates.
First installment of the promised moving-aggregate support in built-in
aggregates: count(), sum(), avg(), stddev() and variance() for
assorted datatypes, though not for float4/float8.

In passing, remove a 2001-vintage kluge in interval_accum(): interval
array elements have been properly aligned since around 2003, but
nobody remembered to take out this workaround.  Also, fix a thinko
in the opr_sanity tests for moving-aggregate catalog entries.

David Rowley and Florian Pflug, reviewed by Dean Rasheed
2014-04-12 20:33:09 -04:00
Tom Lane
a9d9acbf21 Create infrastructure for moving-aggregate optimization.
Until now, when executing an aggregate function as a window function
within a window with moving frame start (that is, any frame start mode
except UNBOUNDED PRECEDING), we had to recalculate the aggregate from
scratch each time the frame head moved.  This patch allows an aggregate
definition to include an alternate "moving aggregate" implementation
that includes an inverse transition function for removing rows from
the aggregate's running state.  As long as this can be done successfully,
runtime is proportional to the total number of input rows, rather than
to the number of input rows times the average frame length.
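
A minimal declaration sketch of the new options (msfunc/minvfunc/mstype),
using built-in int4 arithmetic functions:

    CREATE AGGREGATE my_sum (int4) (
        sfunc    = int4pl,
        stype    = int4,
        msfunc   = int4pl,     -- moving-aggregate forward transition
        minvfunc = int4mi,     -- inverse transition: remove a departing row
        mstype   = int4
    );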

This commit includes the core infrastructure, documentation, and regression
tests using user-defined aggregates.  Follow-on commits will update some
of the built-in aggregates to use this feature.

David Rowley and Florian Pflug, reviewed by Dean Rasheed; additional
hacking by me
2014-04-12 12:03:30 -04:00
Tom Lane
f23a5630eb Add an in-core GiST index opclass for inet/cidr types.
This operator class can accelerate subnet/supernet tests as well as
btree-equivalent ordered comparisons.  It also handles a new network
operator inet && inet (overlaps, a/k/a "is supernet or subnet of"),
which is expected to be useful in exclusion constraints.
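
A usage sketch (the opclass must be named explicitly, since it is not the
default for inet; table and data are hypothetical):

    CREATE TABLE networks (net inet);
    CREATE INDEX ON networks USING gist (net inet_ops);
    SELECT * FROM networks WHERE net && '192.168.1.0/24';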

Ideally this opclass would be the default for GiST with inet/cidr data,
but we can't mark it that way until we figure out how to do a more or
less graceful transition from the current situation, in which the
really-completely-bogus inet/cidr opclasses in contrib/btree_gist are
marked as default.  Having the opclass in core and not default is better
than not having it at all, though.

While at it, add new documentation sections to allow us to officially
document GiST/GIN/SP-GiST opclasses, something there was never a clear
place to do before.  I filled these in with some simple tables listing
the existing opclasses and the operators they support, but there's
certainly scope to put more information there.

Emre Hasegeli, reviewed by Andreas Karlsson, further hacking by me
2014-04-08 15:46:43 -04:00