As reported in issue #343, there is one case where a NULL stream can
still be dereferenced, when getting &s->txn->flags. Let's protect all
assignments to stay on the safe side for future additions.
No backport is needed.
Debug commands will usually mark the fate of the process. We'd rather
have them counted and visible in a core or in stats output than trying
to guess how a flag combination could happen. The counter is only
incremented when the command is about to be issued however, so that
failed attempts are ignored.
Instead of relying on DEBUG_DEV for most debugging commands, which is
limiting, let's condition them to expert mode. Only one ("debug dev exec")
remains conditioned to DEBUG_DEV because it can have a security implication
on the system. The commands are not listed unless "expert-mode on" was first
entered on the CLI:
> expert-mode on
> help
debug dev close <fd> : close this file descriptor
debug dev delay [ms] : sleep this long
debug dev exec [cmd] ... : show this command's output
debug dev exit [code] : immediately exit the process
debug dev hex <addr> [len]: dump a memory area
debug dev log [msg] ... : send this msg to global logs
debug dev loop [ms] : loop this long
debug dev panic : immediately trigger a panic
debug dev stream ... : show/manipulate stream flags
debug dev tkill [thr] [sig] : send signal to thread
> debug dev stream
Usage: debug dev stream { <obj> <op> <value> | wake }*
<obj> = {strm | strm.f | sif.f | sif.s | sif.x | sib.f | sib.s | sib.x |
txn.f | req.f | req.r | req.w | res.f | res.r | res.w}
<op> = {'' (show) | '=' (assign) | '^' (xor) | '+' (or) | '-' (andnot)}
<value> = 'now' | 64-bit dec/hex integer (0x prefix supported)
'wake' wakes the stream assigned to 'strm' (default: current)
Some commands like the debug ones are not enabled by default but can be
useful on some production environments. In order to avoid the temptation
of using them incorrectly, let's introduce an "expert" mode for a CLI
connection, which allows some commands to appear and be used. It is
enabled by command "expert-mode on" which is not listed by default.
This function adds some control by verifying that the target address is
really readable. It will not protect against writing to wrong places,
but will at least protect against a large number of mistakes such as
incorrectly copy-pasted addresses.
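A minimal sketch of the idea, assuming a pre-opened self-pipe (pipe_fd[]
here is hypothetical): asking the kernel to copy one byte from the target
address turns an invalid pointer into a clean EFAULT instead of a crash.

  #include <errno.h>
  #include <unistd.h>

  static int pipe_fd[2];  /* assumed initialized once with pipe() */

  /* Returns non-zero if one byte at <addr> can be read without faulting:
   * write() reports EFAULT for an unmapped source instead of crashing. */
  static int addr_is_readable(const void *addr)
  {
      char c;

      if (write(pipe_fd[1], addr, 1) == -1 && errno == EFAULT)
          return 0;                 /* not mapped: refuse to dump */
      read(pipe_fd[0], &c, 1);      /* drain the byte we just wrote */
      return 1;
  }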
This new "debug dev stream" command allows to manipulate flags, timeouts,
states for streams, channels and stream interfaces, as well as waking a
stream up. These may be used to help reproduce certain bugs during
development. The operations are performed to the stream assigned by
"strm" which defaults to the CLI's stream. This stream pointer can be
chosen from one of those reported in "show sess". Example:
socat - /tmp/sock1 <<< "debug dev stream strm=0x1555b80 req.f=-1 req.r=now wake"
We never enter val_fc_time_value when an associated fetcher such as `fc_rtt`
is called without an argument, meaning `type == ARGT_STOP` will never be true
and so the default `data.sint = TIME_UNIT_MS` will never be set. Remove this
part to avoid giving the impression that data.sint defaults to milliseconds
while reading the code.
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
[Cf: This patch may safely be backported as far as 1.7, but it does not matter if it is not.]
Commit 541a534 ("BUG/MINOR: ssl/cli: fix build of SCTL and OCSP")
introduced a bug in which we iterate outside the array during a 'set ssl
cert' if we didn't build with OCSP or SCTL support.
To avoid affecting the traffic too much during a certificate update,
create the SNIs in an IO handler which yields every 10 ckch instances.
This way haproxy continues to respond even if we try to update a
certificate which has 50,000 instances.
When updating a certificate from the CLI, it is not possible to revert
some of the changes if part of the certificate update failed. We now
create a copy of the ckch_store for the changes so we can revert back
if something goes wrong.
Even if the ckch_store was affected before this change, it wasn't
affecting the SSL_CTXs used for the traffic. It was only a problem if we
try to update a certificate after we failed to do it the first time.
The new ckch_store is also linked to the new sni_ctxs so it's easy to
insert the sni_ctxs before removing the old ones.
ssl_sock_copy_cert_key_and_chain() copies the content of a
<src> cert_key_and_chain to a <dst>.
It increases the refcount of every SSL structure (X509, DH,
private key...) and allocates new buffers for the other fields.
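A minimal sketch of the refcounting part, assuming OpenSSL >= 1.1.0 where
the *_up_ref() helpers exist; the struct shown is an illustrative subset,
not haproxy's exact layout:

  #include <openssl/x509.h>
  #include <openssl/evp.h>
  #include <openssl/dh.h>

  struct ckch_sketch {                /* illustrative subset of the CKCH */
      X509 *cert;
      EVP_PKEY *key;
      DH *dh;
  };

  static void ckch_share_ssl_parts(struct ckch_sketch *dst,
                                   const struct ckch_sketch *src)
  {
      /* SSL objects are shared by bumping their reference count... */
      if (src->cert) { X509_up_ref(src->cert);    dst->cert = src->cert; }
      if (src->key)  { EVP_PKEY_up_ref(src->key); dst->key  = src->key; }
      if (src->dh)   { DH_up_ref(src->dh);        dst->dh   = src->dh; }
      /* ...while plain buffers (ocsp_response, sctl...) get freshly
       * allocated copies instead. */
  }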
It is now possible to update new parts of a CKCH from the CLI.
Currently you will be able to update a PEM (by default), an OCSP response
in base64, an issuer file, and an SCTL file.
Each update creates a new CKCH and a new sni_ctx structure, so we will
need a "commit" command later to apply several changes and create the
sni_ctx only once.
Split the ssl_sock_load_crt_file_into_ckch() in two functions:
- ssl_sock_load_files_into_ckch() which is dedicated to opening all the
files related to a filename during the configuration parsing (PEM, sctl,
ocsp, issuer etc)
- ssl_sock_load_pem_into_ckch() which is dedicated to opening a PEM,
either in a file or a buffer
ssl_sock_load_issuer_file_into_ckch() is a new function which is able to
load an issuer from a buffer or from a file to a CKCH.
Use this function directly in ssl_sock_load_crt_file_into_ckch()
The ssl_sock_load_sctl_from_file() function was modified to
directly fill a struct cert_key_and_chain.
The function prototype was normalized in order to be used with the CLI
payload parser.
This function either reads text from a buffer or reads a file on the
filesystem.
It fills the sctl buffer of the struct cert_key_and_chain.
The ssl_sock_load_ocsp_response_from_file() function was modified to
directly fill a struct cert_key_and_chain.
The function prototype was normalized in order to be used with the CLI
payload parser.
This function either reads base64 text from a buffer or reads a binary file
on the filesystem.
It fills the ocsp_response buffer of the struct cert_key_and_chain.
The logs were added to the H2 mux so that we can report logs in case
of errors that prevent a stream from being created, but as a side effect
these logs are emitted twice for backend connections: once by the H2 mux
itself and another time by the upper layer stream. It can happen even
more often with connection retries.
This patch makes sure we do not emit logs for backend connections.
It should be backported to 2.0 and 1.9.
As reported in issue #335, a lot of contention happens on the PATLRU lock
when performing expensive regex lookups. This is absurd since the purpose
of the LRU cache was to have a fast cache for expressions, thus the cache
must not be shared between threads and must remain lockless.
This commit makes the LRU cache thread-local and gets rid of the PATLRU
lock. A test with 7 threads on 4 cores climbed from 67kH/s to 369kH/s,
or a scalability factor of 5.5.
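The gist of the change, as a sketch with hedged names (THREAD_LOCAL is
haproxy's wrapper around __thread, lru64_new() the existing allocator; the
tunable shown is an assumption):

  static THREAD_LOCAL struct lru64_head *pat_lru_tree;

  static int pat_lru_init_per_thread(void)
  {
      /* one private tree per thread: lookups never take the PATLRU lock */
      pat_lru_tree = lru64_new(global.tune.pattern_cache);
      return pat_lru_tree != NULL;
  }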
Given the huge performance difference and the regression caused to
users migrating from processes to threads, this should be backported at
least to 2.0.
Thanks to Brian Diekelman for his detailed report about this regression.
As reported in issue #331, the code used to cast a 32-bit to a 64-bit
stick-table key is wrong. It only copies the 32 lower bits in place on
little endian machines or overwrites the 32 higher ones on big endian
machines. The fix simply removes the wrong cast dereference.
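Schematically, with illustrative values, the wrong and right versions are:

  #include <stdint.h>

  uint64_t key;
  uint32_t v = 0x12345678;    /* the 32-bit sample value */

  *(uint32_t *)&key = v;      /* WRONG: writes only half of the key, and
                               * which half depends on endianness */
  key = v;                    /* RIGHT: plain assignment zero-extends */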
This bug was introduced when changing stick table keys to samples in
1.6-dev4 by commit bc8c404449 ("MAJOR: stick-tables: use sample types
in place of dedicated types"), so the fix must be backported as far
as 1.6.
A trick is used to set SESSION_ID, and SESSION_ID_CONTEXT lengths
to 0 and avoid ASN1 encoding of these values.
There is no specific function to set the length of those parameters
to 0, so we fake this by calling these functions with the same buffer
but a length of zero.
But those functions don't check for a zero length before
performing a memcpy of length zero with src and dst pointing to the
same buffer, causing valgrind to bark.
So the code was reworked to pass them different pointers even
if the buffer content is unused.
Later, when resetting the value, a memcpy overlap
happened on the SESSION_ID_CONTEXT. It was reworked and this is
now reset using the constant global value SHCTX_APPNAME, which is a
different pointer with the same content.
This patch should be backported in every version since ssl
support was added to haproxy if we want valgrind to shut up.
This is tracked in github issue #56.
Processing of SRV record weights was inaccurate: when an SRV record's
weight was set to 0, HAProxy enforced it to '1'.
This patch aims at fixing this without breaking compatibility with
the previous behavior.
Backport status: 1.8 to 2.0
fopen() can return NULL when state file is missing. This patch adds a
check of fopen() return value so we can skip processing in such case.
No backport needed.
Previously an expression like:
path,field(2,/) -m found
always returned `true`.
The bug has existed since the `field` converter was introduced, that is
since commit f399b0debf.
The fix should be backported to 1.6+.
When running haproxy -c, the cache parser is trying to allocate the size
of the cache. This can be a problem in an environment where the RAM is
limited.
This patch moves the cache allocation in the post_check callback which
is not executed during a -c.
This patch may be backported at least to 2.0 and 1.9. In 1.9, the callbacks
registration mechanism is not the same. So the patch will have to be adapted. No
need to backport it to 1.8, the code is probably too different.
When a stick counter is fetched, it is important that the requested counter does
not exceed (MAX_SESS_STKCTR -1). Actually, there is no bug with a default build
because, by construction, MAX_SESS_STKCTR is defined to 3 and we know that we
never exceed the max value. scN_* sample fetches are numbered from 0 to 2. For
other sample fetches, the value is tested.
But there is a bug if MAX_SESS_STKCTR is set to a lower value. For instance
1. In this case the counters sc1_* and sc2_* may be undefined.
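The guard has this shape (a hedged sketch; memprintf() is haproxy's error
formatter and <num> the parsed counter index):

  if (num >= MAX_SESS_STKCTR) {
      memprintf(err, "invalid stick counter %d (must be between 0 and %d)",
                num, MAX_SESS_STKCTR - 1);
      return 0;
  }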
This patch fixes the issue #330. It must be backported as far as 1.7.
When an error occurred in the function bind_parse_tls_ticket_keys() during
configuration parsing, the opened file was not always closed. To fix the bug,
all errors are caught at the same place, where all resources are released.
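The resulting structure is the classic single-exit cleanup pattern, sketched
here with a hypothetical load_keys() helper:

  #include <stdio.h>

  static int parse_tls_ticket_keys(const char *path)
  {
      FILE *f = fopen(path, "r");
      int err = 0;

      if (!f) {
          err = 1;
          goto end;              /* every error path funnels here */
      }
      if (load_keys(f) < 0)      /* hypothetical parsing helper */
          err = 1;
   end:
      if (f)
          fclose(f);             /* resources released exactly once */
      return err;
  }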
This patch fixes the bug #325. It must be backported as far as 1.7.
When using the master CLI with 'fd@', during a reload, the master CLI
proxy is stopped. Unfortunately if this is an inherited FD it is closed
too, and the master CLI won't be able to bind again during the
re-execution. It led the master to fall back to waitpid mode.
This patch prevents the inherited FDs in the master's listeners from being
closed during a proxy_stop().
This patch is mandatory to use the -W option in VTest versions that contain the
-mcli feature.
(86e65f1024)
Should be backported as far as 1.9.
If openssl 1.1.1 is used, c2aae74f0 commit mistakenly enables DH automatic
feature from openssl instead of ECDH automatic feature. There is no impact for
the ECDH one because the feature is always enabled for that version. But doing
this, the 'tune.ssl.default-dh-param' was completely ignored for DH parameters.
This patch fixes the bug by calling 'SSL_CTX_set_ecdh_auto' instead of
'SSL_CTX_set_dh_auto'.
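In other words, the change boils down to one call site (sketched, with ctx
standing for the SSL_CTX being configured):

  SSL_CTX_set_ecdh_auto(ctx, 1); /* was mistakenly: SSL_CTX_set_dh_auto(ctx, 1) */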
Currently some users may use a 2048-bit DH parameter while thinking they're
using a 1024-bit one. Doing this, they may experience performance issues on
light hardware.
This patch warns the user if haproxy fails to configure the given DH
parameter. In this case, if the openssl version is > 1.1.0, haproxy will let
openssl automatically choose a default DH parameter. For other openssl
versions, the DH ciphers won't be usable.
A common cause of failure is the security level of openssl.cnf,
which may refuse a 1024-bit DH parameter for a 2048-bit key:
$ cat /etc/ssl/openssl.cnf
...
[system_default_sect]
MinProtocol = TLSv1
CipherString = DEFAULT@SECLEVEL=2
This should be backported into any branch containing the commit c2aae74f0.
It requires all or part of the previous CLEANUP series.
This addresses github issue #324.
All bind keyword parsing messages were shown as alerts.
With this patch, if the message is flagged only with ERR_WARN and
not ERR_ALERT, it will show a [WARNING] label instead of [ALERT].
ssl_sock_load_dh_params used to return >0 or -1 to indicate success
or failure. Make it return a set of ERR_* instead so that its callers
can transparently report its status. Given that its callers only used
to know about ERR_ALERT | ERR_FATAL, this is the only code returned
for now. An error message was added in the case of failure and the
comment was updated.
ssl_sock_put_ckch_into_ctx used to return 0 or >0 to indicate success
or failure. Make it return a set of ERR_* instead so that its callers
can transparently report its status. Given that its callers only used
to know about ERR_ALERT | ERR_FATAL, this is the only code returned
for now. And a comment was updated.
ckch_inst_new_load_store() and ckch_inst_new_load_multi_store used to
return 0 or >0 to indicate success or failure. Make it return a set of
ERR_* instead so that its callers can transparently report its status.
Given that its callers only used to know about ERR_ALERT | ERR_FATAL,
this is the only code returned for now. And the comment was updated.
ssl_sock_load_ckchs() used to return 0 or >0 to indicate success or
failure even though this was not documented. Make it return a set of
ERR_* instead so that its callers can transparently report its status.
Given that its callers only used to know about ERR_ALERT | ERR_FATAL,
this is the only code returned for now. And a comment was added.
These functions were returning only 0 or 1 to mention success or error,
and made it impossible to return a warning. Let's make them return error
codes from ERR_* and map all errors to ERR_ALERT|ERR_FATAL for now since
this is the only code that was set on non-zero return value.
In addition some missing comments were added or adjusted around the
functions' return values.
In mux_pt_io_cb(), instead of always calling the wake method, only do so
if nobody subscribed for receive. If we have a subscription, just wake the
associated tasklet up.
This should be backported to 1.9 and 2.0.
There's a small window where the mux_pt tasklet may be woken up, and thus
mux_pt_io_cb() get scheduled, and then the connection is attached to a new
stream. If this happens, don't do anything, and just let the stream know
by calling its wake method. If the connection had an error, the stream should
take care of destroying it by calling the detach method.
This should be backported to 2.0 and 1.9.
This reverts commit "BUG/MEDIUM: mux_pt: Make sure we don't have a
conn_stream before freeing.".
mux_pt_io_cb() is only used if we have no associated stream, so we will
never have a cs, so there's no need to check that, and we of course have to
destroy the mux in mux_pt_detach() if we have no associated session, or if
there's an error on the connection.
This should be backported to 2.0 and 1.9.
The idle cleanup tasks' masks are wrong for threads 32 to 64, which
causes the wrong thread to wake up and clean the connections that it
does not own, with a risk of crash or infinite loop depending on
concurrent accesses. For thread 32, any thread between 32 and 64 will
be woken up, but for threads 33 to 64, in fact threads 1 to 32 will
run the task instead.
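A plausible shape for this class of bug, sketched: building the mask with a
32-bit shift truncates it for the upper threads.

  unsigned long mask;

  mask = 1 << i;    /* WRONG: int shift, undefined for i >= 32, so
                     * threads 33..64 end up with a wrapped-around mask */
  mask = 1UL << i;  /* RIGHT: unsigned long shift covers all 64 threads */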
This issue only affects deployments enabling more than 32 threads. While
it is not common in 1.9 where this has to be explicit, and can easily be
dealt with by lowering the number of threads, it can be more common in
2.0 since by default the thread count is determined based on the number
of available processors, hence the MAJOR tag which is mostly relevant
to 2.x.
The problem was first introduced into 1.9-dev9 by commit 0c18a6fe3
("MEDIUM: servers: Add a way to keep idle connections alive.") and was
later moved to cfgparse.c by commit 980855bd9 ("BUG/MEDIUM: server:
initialize the orphaned conns lists and tasks at the end").
This patch needs to be backported as far as 1.9, with care as 1.9 is
slightly different there (uses idle_task[] instead of idle_conn_cleanup[]
like in 2.x).
On error, make sure we don't have a conn_stream before freeing the connection
and the associated mux context. Otherwise a stream will still reference
the connection, and attempt to use it.
If we still have a conn_stream, it will properly be freed when the detach
method is called, anyway.
This should be backported to 2.0 and 1.9.
There are 2 kinds of tcp info fetchers. Those returning a time value (fc_rtt and
fc_rttval) and those returning a counter (fc_unacked, fc_sacked, fc_retrans,
fc_fackets, fc_lost, fc_reordering). Because of a bug, the counters were handled
as time values, and by default, were divided by 1000 (because of an invalid
conversion from us to ms). To work around this bug and have the right value, the
argument "us" had to be specified.
So now, tcp info fetchers returning a counter don't support any argument
anymore. To not break old configurations, if an argument is provided, it is
ignored and a warning is emitted during the configuration parsing.
In addition, parameter validation is now performed during the configuration
parsing.
This patch must be backported as far as 1.7.
Patch 56996da ("BUG/MINOR: mworker/ssl: close OpenSSL FDs on reload")
fixes an issue where the /dev/random FD was leaked by OpenSSL upon a
reload in master worker mode. Indeed the FD was not flagged with
CLOEXEC.
The fix was checking if ssl_used_frontend or ssl_used_backend were set
to close the FD. This is wrong, indeed the lua init code creates an SSL
server without increasing the backend value, so the deinit is never
done when you don't use SSL in your configuration.
To reproduce the problem you just need to build haproxy with openssl and
lua with an openssl which does not use the getrandom() syscall. No
openssl or lua configuration is required for haproxy.
This patch must be backported as far as 1.8.
Fix issue #314.
The recent changes to address URI issues mixed with the recent fix to
stop caching absolute URIs have caused the cache not to cache H2 requests
anymore since these ones come with a scheme and authority. Let's unbreak
this by using absolute URIs all the time, now that we keep host and
authority in sync. So what is done now is that if we have an authority,
we take the whole URI as it is as the cache key. This covers H2 and H1
absolute requests. If no authority is present (most H1 origin requests),
then we prepend "https://" and the Host header. The reason for https://
is that most of the time we don't care about the scheme, but since about
all H2 clients use this scheme, at least we can share the cache between
H1 and H2.
No backport is needed since the breakage only affects 2.1-dev.
When a response generated by HAProxy is handled by the mux H1, if the
corresponding request has not fully been received, the close mode is
forced. Thus, the client is notified that the connection will certainly be
closed abruptly, without waiting for the end of the request.
The flag HTX_FL_PROXY_RESP is now set on responses generated by HAProxy,
excluding responses returned by applets and services. It is an informative flag
set by the applicative layer.
When an error file was loaded, the flag HTX_SL_F_XFER_LEN was never set on the
HTX start line because of a bug. During the headers parsing, the flag
H1_MF_XFER_LEN is never set on the h1m. But it was the condition to set
HTX_SL_F_XFER_LEN on the HTX start-line. Instead, we must only rely on the flags
H1_MF_CLEN or H1_MF_CHNK.
Because of this bug, it was impossible to keep a connection alive for a response
generated by HAProxy. Now the flag HTX_SL_F_XFER_LEN is set when an error file
has a content length (chunked responses are unsupported at this stage) and the
connection may be kept alive if there is no connection header specified to
explicitly close it.
This patch must be backported to 2.0 and 1.9.
It currently is not possible to figure out the exact haproxy version from a
core file for the sole reason that the version is stored into a const
string and as such ends up in the .text section that is not part of a
core file. By turning them into variables we move them to the data
section and they appear in core files. In order to help finding them,
we just prepend an extra variable in front of them and we're able to
immediately spot the version strings from a core file:
$ strings core | fgrep -A2 'HAProxy version'
HAProxy version follows
2.1-dev2-e0f48a-88
2019/10/15
(These are haproxy_version and haproxy_date respectively). This may be
backported to 2.0 since this part is not supposed to impact anything but
the developer's time spent debugging.
246c024 ("MINOR: ssl: load the ocsp in/from the ckch") broke the loading
of OCSP files. The function ssl_sock_load_ocsp_response_from_file() was
not returning 0 upon success, which led to an error after the .ocsp file
was read.
The error messages for OCSP in ssl_sock_load_crt_file_into_ckch() add a
double extension to the filename, which can be confusing: the messages
reference a .issuer.issuer file.
If the user agent data contains text with special characters that the
vfprintf() function interprets as format specifiers, haproxy crashes.
The string "%s %s %s" may be used as an example.
% curl -A "%s %s %s" localhost:10080/index.html
curl: (52) Empty reply from server
haproxy log:
00000000:WURFL-test.clireq[00c7:ffffffff]: GET /index.html HTTP/1.1
00000000:WURFL-test.clihdr[00c7:ffffffff]: host: localhost:10080
00000000:WURFL-test.clihdr[00c7:ffffffff]: user-agent: %s %s %s
00000000:WURFL-test.clihdr[00c7:ffffffff]: accept: */*
segmentation fault (core dumped)
gdb 'where' output:
#0 strlen () at ../sysdeps/x86_64/strlen.S:106
#1 0x00007f7c014a8da8 in _IO_vfprintf_internal (s=s@entry=0x7ffc808fe750, format=<optimized out>,
format@entry=0x7ffc808fe9c0 "WURFL: retrieve header request returns [%s %s %s]\n",
ap=ap@entry=0x7ffc808fe8b8) at vfprintf.c:1637
#2 0x00007f7c014cfe89 in _IO_vsnprintf (
string=0x55cb772c34e0 "WURFL: retrieve header request returns [(null) %s %s %s B,w\313U",
maxlen=<optimized out>,
format=format@entry=0x7ffc808fe9c0 "WURFL: retrieve header request returns [%s %s %s]\n",
args=args@entry=0x7ffc808fe8b8) at vsnprintf.c:114
#3 0x000055cb758f898f in send_log (p=p@entry=0x0, level=level@entry=5,
format=format@entry=0x7ffc808fe9c0 "WURFL: retrieve header request returns [%s %s %s]\n")
at src/log.c:1477
#4 0x000055cb75845e0b in ha_wurfl_log (
message=message@entry=0x55cb75989460 "WURFL: retrieve header request returns [%s]\n") at src/wurfl.c:47
#5 0x000055cb7584614a in ha_wurfl_retrieve_header (header_name=<optimized out>, wh=0x7ffc808fec70)
at src/wurfl.c:763
When WURFL (actually HAProxy) is not compiled with the debug option
enabled (-DWURFL_DEBUG), this bug does not come to light.
This patch could be backported to every version supporting
ScientiaMobile's WURFL (as far as 1.7).
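The classic remedy for this class of bug, sketched with assumed local names
(<ua> being the user-agent string): user data must travel as an argument,
never as the format string.

  char logbuf[256];

  snprintf(logbuf, sizeof(logbuf),
           "WURFL: retrieve header request returns [%s]\n", ua);
  send_log(p, LOG_NOTICE, logbuf);        /* WRONG: specifiers inside <ua>
                                           * get expanded a second time */
  send_log(p, LOG_NOTICE, "%s", logbuf);  /* RIGHT: no format injection */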
As stated in RFC7230#5.4, a client must send a field-value for the Host
header that is identical to the authority if the target URI includes one. So
now, by default, if the authority, when provided, does not match the value of
the Host header, an error is triggered. To mitigate this behavior, it is
possible to
set the option "accept-invalid-http-request". In that case, an http error is
captured without interrupting the request parsing.
There is no reason for a client to send several Host headers. It may even be
considered a bug. However, it is totally invalid to have different values for
those. So now, in such a case, an error is triggered during the request
parsing. In addition, when several Host headers are found with the same value,
only the first instance is kept and the others are skipped.
When the option "accept-invalid-http-request" is enabled, some parsing errors
are ignored. But the position of the error is reported. In legacy HTTP mode,
such errors were captured. So, we now do the same in the H1 multiplexer.
If required, this patch may be backported to 2.0 and 1.9.
When an outgoing HTX message is formatted to a raw message, DATA blocks may be
split so as not to transfer more data than expected. But if the buffer is
almost full, the formatting is interrupted, leaving some unused free space in
the buffer, because the data are too large to be copied in one go.
Now, we transfer as much data as possible. When the message is chunked, we also
count the size used to encode the data.
When an outgoing HTX message is formatted to a raw message, if we fail to copy
data of an HTX block into the output buffer, we mark it as full. Before, this
was only done by calling the function buf_room_for_htx_data(). But this function is
designed to optimize input processing.
This patch must be backported to 2.0 and 1.9.
In the htx_*_to_h1() functions, most of the time several calls to
chunk_memcat() are chained. The expected size is always compared to the
available room in the buffer to be sure the full copy will succeed. But it is
a bit risky because it relies on the fact that chunk_memcat() evaluates the
available room in the buffer the same way the htx functions do. And,
unfortunately, it does not. A bug in chunk_memcat() always leaves a byte
unused in the buffer. So, for instance, when a chunk is copied into an almost
full buffer, the last CRLF may be skipped.
To fix the issue, we now rely on the result of chunk_memcat() only.
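The resulting pattern, sketched under the assumption (true of haproxy's
chunk API) that chunk_memcat() returns zero/NULL when the copy does not fit:

  if (!chunk_memcat(&tmp, blk_ptr, blk_len) ||
      !chunk_memcat(&tmp, "\r\n", 2))
      goto full;  /* the copy did not fit: mark the buffer full and
                   * retry once some room has been freed */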
This patch must be backported to 2.0 and 1.9.
The SSL engines code was written below the OCSP #ifdef, which means you
can't build the engines code if OCSP is deactivated in the SSL lib.
Could be backported in every version since 1.8.
A NULL dereference can occur when inserting SNIs while checking for
duplicates, if there are already several sni_ctx with the same key.
Fix issue #321.
Don't try to load the files containing the issuer and the OCSP response
each time we generate an SSL_CTX.
The .ocsp and the .issuer are now loaded in the struct
cert_key_and_chain only once and then loaded from this structure when
creating an SSL_CTX.
Don't try to load the file containing the sctl each time we generate
an SSL_CTX.
The .sctl is now loaded in the struct cert_key_and_chain only once and
then loaded from this structure when creating an SSL_CTX.
Note that this now makes it possible to use sctl with multi-cert
bundles.
$ echo -e "set ssl cert certificate.pem <<\n$(cat certificate2.pem)\n" | \
socat stdio /var/run/haproxy.stat
Certificate updated!
The operation is locked at the ckch level with a HA_SPINLOCK_T which
prevents the ckch architecture (ckch_store, ckch_inst..) from being modified
at the same time. So you can't do a certificate update at the same time
from multiple CLI connections.
SNI trees are also locked with a HA_RWLOCK_T so reading operations are
locked only during a certificate update.
Bundles are supported but you need to update each file (.rsa|.ecdsa|.dsa)
independently. If a file is used in the configuration as a bundle AND
as a unique certificate, both will be updated.
Bundles, directories and crt-list are supported, however filters in
crt-list are currently unsupported.
The code tries to allocate every SNIs and certificate instances first,
so it can rollback the operation if that was unsuccessful.
If you have too many instances of the certificate (at least 20,000 in my
tests on my laptop), the function can take too much time and be killed
by the watchdog. This will be fixed later. Also with too many
certificates it's possible that socat exits before the end of the
generation without displaying a message; consider changing the socat
timeout in this case (-t2 for example).
The size of the certificate is currently limited by the maximum size of
a payload, that must fit in a buffer.
The ssl_sock_load_{multi}_ckchs() functions were renamed and modified:
- allocate a ckch_inst and loads the sni in it
- return a ckch_inst or NULL
- the sni_ctx are not added anymore in the sni trees from there
- renamed in ckch_inst_new_load_{multi}_store()
- new ssl_sock_load_ckchs() function calls
ckch_inst_new_load_{multi}_store() and add the sni_ctx to the sni trees.
ssl_sock_load_multi_ckchs() is now able to fail without polluting the
bind_conf trees and leaking memory.
It is a prerequisite to load certificate on-the-fly with the CLI.
The insertion of the sni_ctxs in the trees is done once everything has
been allocated correctly.
ssl_sock_load_ckchn() is now able to fail without polluting the
bind_conf trees and leaking memory.
It is a prerequisite to load certificate on-the-fly with the CLI.
The insertion of the sni_ctxs in the trees is done once everything has
been allocated correctly.
In order to allow the creation of sni_ctx in runtime, we need to split
the function to allow rollback.
We need to be able to allocate all sni_ctxs required before inserting
them in case we need to rollback if we didn't succeed the allocation.
The function was split into 2 parts.
The first one, ckch_inst_add_cert_sni(), allocates a struct sni_ctx, fills
it with the right data and inserts it in the ckch_inst's list of sni_ctx.
The second will take every sni_ctx in the ckch_inst and insert them in
the bind_conf's sni tree.
struct ckch_inst represents an instance of a certificate (ckch_node)
used in a bind_conf. Every sni_ctx created for one ckch_node in a
bind_conf is linked in this structure.
This patch allocates the ckch_inst for each bind_conf and inserts the
sni_ctx in its linked list.
The ssl_sock_populate_sni_keytypes_hplr() function does not return an
error upon an allocation failure.
The process would probably crash during the configuration parsing if the
allocation fails, since it tries to copy some data into the allocated
memory.
This patch could be backported as far as 1.5.
This patch frees the sni_keytype nodes once the sni_ctxs have been
allocated in ssl_sock_load_multi_ckchn().
Could be backported in every version using the multi-cert SSL bundles.
The ssl_sock_add_cert_sni() function never returns an error when a
sni_ctx allocation fails. It silently ignores the problem and continues
trying to allocate other snis.
It is unlikely that an sni allocation will succeed after one failure, and
we don't want to start a configuration without all the snis. So to avoid any
problem we return -1 upon an sni allocation error and stop the configuration
parsing.
This patch must be backported in every version supporting the crt-list
sni filters. (as far as 1.5)
A ckch_store is a storage which contains a pointer to one or several
cert_key_and_chain structures.
This patch renames ckch_node to ckch_store, and ckch_n, ckchn to ckchs.
As using an mt_list for the tasklet list is costly, instead use a regular list,
but add an mt_list for tasklets woken up by other threads, to be run on the
current thread. At the beginning of process_runnable_tasks(), we just take
the new list, and merge it into the task_list.
This should give us performances comparable to before we started using a
mt_list, but allow us to use tasklet_wakeup() from other threads.
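A minimal sketch of the merge step, using the MT_LIST_POP() helper mentioned
elsewhere in this series; the structure and field names are illustrative:

  struct tasklet *tl;

  /* drain tasklets queued by other threads in one pass, then run
   * everything from the private task_list without any atomic op */
  while ((tl = MT_LIST_POP(&sched->shared_tasklet_list,
                           struct tasklet *, list)) != NULL)
      LIST_ADDQ(&sched->task_list, &tl->list);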
I introduced this mistake when adding the description for the stats
metrics, it's even amazing it built and worked at all! This was
reported by Travis CI on non-GNU platforms:
src/stats.c:92:39: warning: use of GNU 'missing =' extension in designator [-Wgnu-designator]
[INF_NAME] { .name = "Name", .desc = "Product name" },
^
=
No backport is needed.
In issue #277 is reported a strange problem related to a fast-spinning
applet which seems to show valid progress being made. It's uncertain how
this can happen, maybe some very specific timing patterns manage to place
just a few bytes in each buffer and result in the peers applet being called
a lot. But it appears possible to artificially cross the spinning threshold
by asking for a monster stats page (500 MB) and limiting the send() size to
1 MSS (1460 bytes), causing the stats applet to be called for very small
blocks which most often do not leave enough room to place a new chunk.
The idea developed in this patch consists in not crashing for an applet
which reaches a very high call rate if it shows some indication of
progress. Detecting progress on applets is not trivial but in our case
we know that they must at least not claim to wait for a buffer allocation
if this buffer is present, wait for room if the buffer is empty, ask for
more data without polling if such data are still present, nor leave with
an empty input buffer without having written anything nor read anything
from the other side while a shutw is pending.
Doing so doesn't affect normal behaviors nor abuses of our existing
applets and does at least protect against an applet performing an
early return without processing events, or one causing an endless
loop by asking for impossible conditions.
This must be backported to 2.0.
Now "show info desc", "show info typed desc" and "show stat typed desc"
will report (hopefully) accurate descriptions of each field. These ones
were verified in the code. When some metrics are specific to the process
or the thread, they are indicated. Sometimes a config option is known
for a setting and it is reported as well. The purpose mainly is to help
sysadmins in the field more easily sort out issues vs non-issues. In part
inspired by this very informative talk:
https://kernel-recipes.org/en/2019/metrics-are-money/
Example:
$ socat - /var/run/haproxy.sock <<< "show info desc"
Name: HAProxy:"Product name"
Version: 2.1-dev2-991035-31:"Product version"
Release_date: 2019/10/09:"Date of latest source code update"
Nbthread: 1:"Number of started threads (global.nbthread)"
Nbproc: 1:"Number of started worker processes (global.nbproc)"
Process_num: 1:"Relative process number (1..Nbproc)"
Pid: 11975:"This worker process identifier for the system"
Uptime: 0d 0h00m10s:"How long ago this worker process was started (days+hours+minutes+seconds)"
Uptime_sec: 10:"How long ago this worker process was started (seconds)"
Memmax_MB: 0:"Worker process's hard limit on memory usage in MB (-m on command line)"
PoolAlloc_MB: 0:"Amount of memory allocated in pools (in MB)"
PoolUsed_MB: 0:"Amount of pool memory currently used (in MB)"
PoolFailed: 0:"Number of failed pool allocations since this worker was started"
Ulimit-n: 300000:"Hard limit on the number of per-process file descriptors"
Maxsock: 300000:"Hard limit on the number of per-process sockets"
Maxconn: 149982:"Hard limit on the number of per-process connections (configured or imposed by Ulimit-n)"
Hard_maxconn: 149982:"Hard limit on the number of per-process connections (imposed by Memmax_MB or Ulimit-n)"
CurrConns: 0:"Current number of connections on this worker process"
CumConns: 1:"Total number of connections on this worker process since started"
CumReq: 1:"Total number of requests on this worker process since started"
MaxSslConns: 0:"Hard limit on the number of per-process SSL endpoints (front+back), 0=unlimited"
CurrSslConns: 0:"Current number of SSL endpoints on this worker process (front+back)"
CumSslConns: 0:"Total number of SSL endpoints on this worker process since started (front+back)"
Maxpipes: 0:"Hard limit on the number of pipes for splicing, 0=unlimited"
PipesUsed: 0:"Current number of pipes in use in this worker process"
PipesFree: 0:"Current number of allocated and available pipes in this worker process"
ConnRate: 0:"Number of front connections created on this worker process over the last second"
ConnRateLimit: 0:"Hard limit for ConnRate (global.maxconnrate)"
MaxConnRate: 0:"Highest ConnRate reached on this worker process since started (in connections per second)"
SessRate: 0:"Number of sessions created on this worker process over the last second"
SessRateLimit: 0:"Hard limit for SessRate (global.maxsessrate)"
MaxSessRate: 0:"Highest SessRate reached on this worker process since started (in sessions per second)"
SslRate: 0:"Number of SSL connections created on this worker process over the last second"
SslRateLimit: 0:"Hard limit for SslRate (global.maxsslrate)"
MaxSslRate: 0:"Highest SslRate reached on this worker process since started (in connections per second)"
SslFrontendKeyRate: 0:"Number of SSL keys created on frontends in this worker process over the last second"
SslFrontendMaxKeyRate: 0:"Highest SslFrontendKeyRate reached on this worker process since started (in SSL keys per second)"
SslFrontendSessionReuse_pct: 0:"Percent of frontend SSL connections which did not require a new key"
SslBackendKeyRate: 0:"Number of SSL keys created on backends in this worker process over the last second"
SslBackendMaxKeyRate: 0:"Highest SslBackendKeyRate reached on this worker process since started (in SSL keys per second)"
SslCacheLookups: 0:"Total number of SSL session ID lookups in the SSL session cache on this worker since started"
SslCacheMisses: 0:"Total number of SSL session ID lookups that didn't find a session in the SSL session cache on this worker since started"
CompressBpsIn: 0:"Number of bytes submitted to HTTP compression in this worker process over the last second"
CompressBpsOut: 0:"Number of bytes out of HTTP compression in this worker process over the last second"
CompressBpsRateLim: 0:"Limit of CompressBpsOut beyond which HTTP compression is automatically disabled"
Tasks: 10:"Total number of tasks in the current worker process (active + sleeping)"
Run_queue: 1:"Total number of active tasks+tasklets in the current worker process"
Idle_pct: 100:"Percentage of last second spent waiting in the current worker thread"
node: wtap.local:"Node name (global.node)"
Stopping: 0:"1 if the worker process is currently stopping, otherwise zero"
Jobs: 14:"Current number of active jobs on the current worker process (frontend connections, master connections, listeners)"
Unstoppable Jobs: 0:"Current number of unstoppable jobs on the current worker process (master connections)"
Listeners: 13:"Current number of active listeners on the current worker process"
ActivePeers: 0:"Current number of verified active peers connections on the current worker process"
ConnectedPeers: 0:"Current number of peers having passed the connection step on the current worker process"
DroppedLogs: 0:"Total number of dropped logs for current worker process since started"
BusyPolling: 0:"1 if busy-polling is currently in use on the worker process, otherwise zero (config.busy-polling)"
FailedResolutions: 0:"Total number of failed DNS resolutions in current worker process since started"
TotalBytesOut: 0:"Total number of bytes emitted by current worker process since started"
BytesOutRate: 0:"Number of bytes emitted by current worker process over the last second"
Now "show info" supports "desc" after the default and "typed" formats,
and "show stat" supports this after the typed format. In both cases
this appends the description for the represented metric between double
quotes. The same could be done for JSON output but would possibly require
to update the schema first.
Several times some users have expressed the non-intuitive aspect of some
of our stat/info metrics and suggested to add some help. This patch
replaces the char* arrays with an array of name_desc so that we now have
some reserved room to store a description with each stat or info field.
These descriptions are currently empty and not reported yet.
Now "show info" and "show stat" can parse "desc" as an output format
modifier that will be passed down the chain to add some descriptions
to the fields depending on the format in use. For now it is not
exploited.
Some functions used to take flags + appctx with flags==appctx.flags,
others neither, others just one of them. Some functions used to have
the flags before the object being dumped (server) while others had
it after (listener). This patch aims at cleaning this up a little bit
by following this principle:
- low-level functions which do not need the appctx take flags only
- medium-level functions which already use the appctx for other
reasons do not keep the flags
- top-level functions which already have the stream-int don't need
the flags nor the appctx.
This flag is used to decide to show the check box in front of a proxy
on the HTML stat page. It is always equal to STAT_ADMIN except when the
proxy has no backend capability (i.e. a pure frontend) or has no server,
in which case it's only used to avoid leaving an empty column at the
beginning of the table. Not only this is pretty useless, but it also
causes the columns not to align well when mixing multiple proxies with
or without servers.
Let's simply always use STAT_ADMIN and get rid of this flag.
Now we only use the appctx flags everywhere in the code, and the uri_auth
flags are read only by the HTTP analyser which presets the appctx ones.
This will make it possible to simplify access to the flags everywhere.
We used to rely on some config flags defined in uri_auth.h set during
parsing, and another set of STAT_* flags defined in stats.h set at run
time, with a somewhat gray area between the two sets. This is confusing
in the stats code as both are called "flags" in various functions and
it's quite hard to know which one describes what.
This patch cleans this up by replacing all ST_* by a newly assigned
value from the STAT_* set so that we can now use unified flags to
describe both the configuration and the current state. There is no
functional change at all.
This flag was added in 1.4-rc1 by commit 329f74d463 ("[BUG] uri_auth: do
not attemp to convert uri_auth -> http-request more than once") to
address the case where two proxies inherit the stats settings from
the defaults instance, and the first one compiles the expression while
the second one uses it. In this case since they use the exact same
uri_auth pointer, only the first one should compile and the second one
must not fail the check. This was addressed by adding an ST_CONVDONE
flag indicating that the expression conversion was completed and didn't
need to be done again. But this is a hack and it becomes cumbersome in
the middle of the other flags which are all relevant to the stats
applet. Let's instead fix it by checking if we're dealing with an
alias of the defaults instance and refrain from compiling this twice.
This allows us to remove the ST_CONVDONE flag.
A typical config requiring this check is :
defaults
mode http
stats auth foo:bar
listen l1
bind :8080
listen l2
bind :8181
Without this (or the previous) check it would complain when checking l2's
validity since the rule was already built.
Both "show info" and "show stat" support the "typed" output format and
the "json" output format. I just never can remind them, which is an
indication that some help is missing.
H2 strongly recommends that clients exclusively use the absolute form
for requests, which contains a scheme, an authority and a path, instead
of the old format involving the Host header and a path. Thus there is
no way to distinguish between a request intended for a proxy and an
origin request, and as such proxied requests are lost.
This patch makes sure to keep the encoding of all absolute form requests
so that the URI is kept end-to-end. If the scheme is http or https, there
is an uncertainty so the request is tagged as a normalized URI so that
the other end (H1) can decide to emit it in origin form as this is by far
the most commonly expected one, and it's certain that quite a number of
H1 setups are not ready to cope with absolute URIs.
There is a direct visible impact of this change, which is that the uri
sample fetch will now return absolute URIs (as they really come on the
wire) whenever these are used. It also means that default http logs will
report absolute URIs.
If a situation is once met where a client uses H2 to join an H1 proxy
with haproxy in the middle, then it will be trivial to add an option to
ask the H1 output to use absolute encoding for such requests.
Later we may be able to consider that the normalized URI is the default
output format and stop sending them in origin form unless an option is
set.
Now chaining multiple instances keeps the semantics as far as possible
along the whole chain:
1) H1 to H1
H1:"GET /" --> H1:"GET /" # log: /
H1:"GET http://" --> H1:"GET http://" # log: http://
H1:"GET ftp://" --> H1:"GET ftp://" # log: ftp://
2) H2 to H1
H2:"GET /" --> H1:"GET /" # log: /
H2:"GET http://" --> H1:"GET /" # log: http://
H2:"GET ftp://" --> H1:"GET ftp://" # log: ftp://
3) H1 to H2 to H2 to H1
H1:"GET /" --> H2:"GET /" --> H2:"GET /" --> H1:"GET /"
H1:"GET http://" --> H2:"GET http://" --> H2:"GET http://" --> H1:"GET /"
H1:"GET ftp://" --> H2:"GET ftp://" --> H2:"GET ftp://" --> H1:"GET ftp://"
Thus there is zero loss on H1->H1, H1->H2 or H2->H2, and H2->H1 is
normalized to origin form when ambiguous.
Instead of mapping the Host header field to :authority, we now act
differently if the request is in origin form or in absolute form.
If it's absolute, we extract the scheme and the authority from the
request, fix the path if it's empty, and drop the Host header.
Otherwise we take the scheme from the http/https flags in the HTX
layer, make the URI be the path only, and emit the Host header,
as indicated in RFC7540#8.1.2.3. This allows to distinguish between
absolute and origin requests for H1 to H2 conversions.
Till now we've been producing path components of the URI and using the
:authority header only to be placed into the host part. But this practice
is not correct: if we happen to convey H1 proxy requests over H2 then
over H1, the absolute URI is presented as a path on output, which is not
valid. In addition the scheme on output is not updated from the absolute
URI either.
Now the request parser will continue to deliver origin-form for request
received using the http/https schemes, but will use the absolute-form
when dealing with other schemes, by concatenating the scheme, the authority
and the path if it's not '*'.
When a request start-line is converted to its raw representation, if its URI is
normalized, only the path part is used. Most H2 clients send requests using
the absolute form (:scheme + :authority + :path), regardless of whether the
request is sent to a proxy or not. But, when the request is relayed to an H1
origin server, it is unusual to send it using the absolute form. And, even if
servers must support this form, some old servers may reject it. So, for such
requests, we only keep the path of the absolute URI. Most of the time, it will
be the right choice. However, an option will probably be added to customize
this behavior.
In HTTP, the request authority, if any, and the Host header must be identical
(excluding any userinfo subcomponent and its "@" delimiter). So now, during the
request analysis, when the Host header is updated, the start-line is also
updated. The authority of an absolute URI is changed accordingly. Symmetrically,
if the URI is changed and it contains an authority, then the Host header is
also changed. In this latter case, the flags of the start-line are also updated
to reflect the changes on the URI.
The function http_get_authority() may be used to parse a URI and look for the
authority, between the scheme and the path. An option may be used to skip the
user info (the part before the '@'). Most of the time, the user info will be
ignored.
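A simplified sketch of that parsing, using plain C strings instead of
haproxy's ist type, with get_authority() as a stand-in name:

  #include <string.h>

  static const char *get_authority(const char *uri, size_t *len,
                                   int no_userinfo)
  {
      const char *p = strstr(uri, "://");
      const char *end, *at;

      if (!p)
          return NULL;              /* not an absolute URI */
      p += 3;
      end = p + strcspn(p, "/?#");  /* authority ends at path/query/frag */
      if (no_userinfo && (at = memchr(p, '@', end - p)))
          p = at + 1;               /* skip "user:password@" */
      *len = end - p;
      return p;
  }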
The H2 request parsing is not trivial given that we have multiple
possible syntaxes. Mainly we can have :authority or not, and when
a CONNECT method is seen, :scheme and :path are missing. This mostly
updates the functions' comments and header index assignments to make
them less confusing. Functionally there is no change.
In these muxes, when an integer value is provided in a trace, it must be the 4th
argument. The 3rd one, if defined, is always an HTX message. Unfortunately, some
traces are buggy and the 4th argument is erroneously passed in 3rd position.
No backport needed.
There are some reports of users not being able to pass "enterprise"
traffic through haproxy when using H2 because it doesn't emit CONTINUATION
frames and as such is limited to headers no longer than the negotiated
max-frame-size, which usually is 16 kB.
This patch implements support for emitting CONTINUATION frames when a HEADERS
frame cannot fit within the mfs limit. It does this by first filling a
buffer-wise frame, then truncating it starting from the tail to append
CONTINUATION frames. This makes sure that we can truncate on any byte
without being forced to stop on a header boundary, and ensures that the
common case (no fragmentation) doesn't add any extra cost. By moving
the tail first we make sure that each byte is moved only once, thus the
performance impact remains negligible.
This addresses github issue #249.
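A schematic of the tail-first split, under stated assumptions: the whole
HPACK-encoded block sits contiguously at buf[0..len) and write_frame_hdr() is
a hypothetical helper emitting the 9-byte H2 frame header.

  size_t nfrag = (len + mfs - 1) / mfs;    /* total frames needed */
  size_t i;

  for (i = nfrag - 1; i >= 1; i--) {
      size_t flen = (i == nfrag - 1) ? len - i * mfs : mfs;
      char  *dst  = buf + i * mfs + 9 * i; /* leave room for i headers */

      memmove(dst, buf + i * mfs, flen);   /* each byte moves only once */
      write_frame_hdr(dst - 9, H2_FT_CONTINUATION, flen);
  }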
If a request contains an absolute URI and gets its Host header field
rewritten, or just the request's URI without touching the Host header
field, it can lead to different Host and authority parts. The cache
will always concatenate the Host and the path while a server behind
would instead ignore the Host and use the authority found in the URI,
leading to incorrect content possibly being cached.
Let's simply refrain from caching absolute requests for now, which
also matches what the comment at the top of the function says. Later
we can improve this by having a special handling of the authority.
This should be backported as far as 1.8.
As for the h1 and h2 muxes, traces are now supported in the fcgi mux. All
parts of the multiplexer are covered by these traces. Events are split into
categories (fconn, fstrm, stream, rx, tx and rsp) for a total of ~40 different
events with 5 verbosity levels.
In traces, the first argument is always a connection. So it is easy to get the
fconn (conn->ctx). The second argument is always a fstrm. The third one is an
HTX message. Depending on the context it is the request or the response. In all
cases it is owned by a channel. Finally, the fourth argument is an integer
value. Its meaning depends on the calling context.
When the output buffer allocation failed, we block stream processing. When
a buffer finally becomes available and we succeed in allocating the output
buffer, it seems fair to wake up the stream.
When an outgoing h1 message is formatted, if it is considered chunked but the
corresponding header is missing, we add it. And as for all other h1 headers,
the case of this header must be adjusted if configured so.
No backport needed.
It is not explicitly stated in the documentation, but some users rely on this
behavior. When the server name is inserted in a request, headers with the same
name are first removed.
This patch is not tagged as a bug, because it is not explicitly documented. We
choose to keep the same implicit behavior to not break existing
configuration. Because this option is used very little, it is not a big deal.
As for the h2 mux, traces are now supported in the h1 mux. All parts of the
multiplexer are covered by these traces. Events are split into categories
(h1c, h1s, stream, rx and tx) for a total of ~30 different events with 5
verbosity levels.
In traces, the first argument is always a connection. So it is easy to get the
h1c (conn->ctx). The second argument is always a h1s. The third one is an HTX
message. Depending on the context it is the request or the response. In all
cases it is owned by a channel. Finally, the fourth argument is an integer
value. Its meaning depends on the calling context.
This function now uses the address of the pointer to the htx message where the
copy must be performed. This way, when a zero-copy is performed, there is no
need to refresh the caller's htx message. It is a bit easier to do it that
way, especially to add traces in the mux-h1.
When a new H2 connection is initialized, the connection context is not changed
before the end. So, traces emitted during this initialization are buggy, except
the last one when no error occurred, because the connection context is not an
h2c.
To fix the bug, the connection context is saved and set as soon as possible. So,
the connection can always safely be used in all traces, except for the very
first one. And on error, the connection context is restored.
No need to backport.
When we configure a "peers" section without a local peer, the old haproxy
process crashes on reload.
Such a configuration file makes it possible to reproduce this issue:
global
stats socket /tmp/sock1 mode 666 level admin
stats timeout 10s
peers peers
peer localhost 127.0.0.1:1024
This bug was introduced by this commit:
"MINOR: cfgparse: Make "peer" lines be parsed as "server" lines"
This commit introduced a new condition to detect a "peers" section without
local peer. This is a "peers" section with a frontend struct whose ->id
member is not initialized. Such a "peers" section must be removed.
This patch adds this new condition to remove such peers sections without local
peer as this was always done before.
Must be backported to 2.0.
When executing tasks, don't forget to decrement tasks_run_queue once we
popped one task from the task_list. tasks_run_queue used to be decremented
by __tasklet_remove_from_tasklet_list(), but we now call MT_LIST_POP().
Alexandre Derumier reported issue #308 in which the client timeout will
strike on an H2 mux when it's shorter than the server's response time.
What happens in practice is that there is no activity on the connection
and there's no data pending on output so we can expire it. But this does
not take into account the possibility that some streams are in fact
waiting for the data layer above. So what we do now is that we enforce
the timeout when:
- there are no more streams
- some data are pending in the output buffer
- some streams are blocked on the connection's flow control
- some streams are blocked on their own flow control
- some streams are in the send/sending list
In all other cases the connection will not timeout as it means that some
streams are actively used by the data layer.
This fix must be backported to 2.0, 1.9 and probably 1.8 as well. It
depends on the new "blocked_list" field introduced by "MINOR: mux-h2:
add a per-connection list of blocked streams". It would be nice to
also backport "ebtree: make eb_is_empty() and eb_is_dup() take a const"
to avoid a build warning.
Currently the H2 mux doesn't have a list of all the streams blocking on
the H2 side. It only knows about those trying to send or waiting for a
connection window update. It is problematic to enforce timeouts because
we never know if a stream has to live as long as the data layer wants or
has to be timed out because it's waiting for a stream window update. This
patch adds a new list, "blocked_list", to store streams blocking on
stream flow control, or later, dependencies. Streams blocked on sfctl
are now added there. It doesn't modify the rest of the logic.
This reverts commit 1263540fe8.
As discussed in issues #214 and #251, this is not the correct way to
cache CORS responses, since it relies on hacking the cache to cache
the OPTIONS method which is explicitly non-cacheable and for which
we cannot rely on any standard caching semantics (cache headers etc
are not expected there). Let's roll this back for now and keep that
for a more reliable and flexible CORS-specific solution later.
@davidmogar reported a github issue (#227) about problems with
do-resolve action when the request contains a body.
The variable was never populated in such case, despite tcpdump shows a
valid DNS response coming back.
The do-resolve action is a task in HAProxy and so it's woken up by the
scheduler each time the scheduler thinks such a task may have some work to
do.
When a simple HTTP request is sent, then the task is called, it sends
the DNS request, then the scheduler will wake up the task again later
once the DNS response is there.
Now, when the client sends a PUT or a POST request (or any other type)
with a BODY, the do-resolve action is first woken up once the
headers are processed. It sends the DNS request. Then, when the bytes
of the body are processed by HAProxy AND the DNS response has not yet
been received, the action simply terminates and cleans up all the
data associated with this resolution...
This patch detects such behavior: if the action is woken up while
a DNS resolution is in RUNNING state, the action will tell the
scheduler to wake it up again later.
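The shape of the fix, sketched with hedged names (ACT_RET_YIELD is haproxy's
"call me again" return code; the RUNNING-state constant is assumed):

  /* re-entered while the DNS query is still in flight: do not destroy
   * the resolution, just ask the scheduler to call us back later */
  if (resolution && resolution->step == RSLV_STEP_RUNNING)
      return ACT_RET_YIELD;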
Backport status: 2.0 and above
src/ssl_sock.c:2928:12: warning: ‘ssl_sock_is_ckch_valid’ defined but not used [-Wunused-function]
static int ssl_sock_is_ckch_valid(struct cert_key_and_chain *ckch)
This function is only used with openssl >= 1.0.2, this patch adds a
condition to build the function.
`size` is used in conditional jumps and valgrind complains:
==24145== Conditional jump or move depends on uninitialised value(s)
==24145== at 0x4B3028: smp_is_safe (sample.h:98)
==24145== by 0x4B3028: smp_make_safe (sample.h:125)
==24145== by 0x4B3028: smp_to_stkey (stick_table.c:936)
==24145== by 0x4B3F2A: sample_conv_in_table (stick_table.c:1113)
==24145== by 0x420AD4: hlua_run_sample_conv (hlua.c:3418)
==24145== by 0x54A308F: ??? (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
==24145== by 0x54AFEFC: ??? (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
==24145== by 0x54A29F1: ??? (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
==24145== by 0x54A3523: lua_resume (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
==24145== by 0x426433: hlua_ctx_resume (hlua.c:1097)
==24145== by 0x42D7F6: hlua_action (hlua.c:6218)
==24145== by 0x43A414: http_req_get_intercept_rule (http_ana.c:3044)
==24145== by 0x43D946: http_process_req_common (http_ana.c:500)
==24145== by 0x457892: process_stream (stream.c:2084)
Found while investigating issue #306.
A variant of this issue has existed since 55da165301,
which was using the old `chunk` API instead of the `buffer` API, thus this patch
must be backported to HAProxy 1.6 and higher.
A break is missing in the switch statement in the function
stats_emit_json_data_field(). This bug was introduced in the commit 88a0db28a
("MINOR: stats: Add the support of float fields in stats").
This patch fixes issues #302 and #303. It must be backported to 2.0.
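A schematic reconstruction of the bug (names are illustrative, not the exact
stats API): without the break after the float case, control falls through
and the next case's output is appended too.

  #include <stdio.h>

  enum field_format { FF_U32, FF_FLT };

  static void emit_json_field(enum field_format ff, double flt, unsigned u32)
  {
      switch (ff) {
      case FF_FLT:
          printf("%f", flt);
          break;         /* <-- the missing break: without it the
                          * FF_U32 branch below also runs */
      case FF_U32:
          printf("%u", u32);
          break;
      }
  }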
Ilya reported in bug #300 that ASAN found a read overflow during startup
in the fcgi code due to a missing empty element at the end of the list
of sample fetches. The effect is that it will randomly either work or crash
on startup.
No backport is needed, this is solely for 2.1-dev.
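The convention being restored, sketched (the entry and the ILH initializer
are illustrative of haproxy's keyword lists, not the exact fcgi fetches):

  static struct sample_fetch_kw_list fcgi_fetch_kws = {ILH, {
      { "fcgi.docroot", smp_fetch_docroot, 0, NULL, SMP_T_STR, SMP_USE_HRQHV },
      { /* END */ },  /* the missing empty terminator the scanner stops on */
  }};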
It is now possible to format stats counters as floats. But the stats applet does
not use it.
This patch is required by the Prometheus exporter to send the time averages in
seconds. If the promex change is backported, this patch must be backported
first.
the option "http-send-name-header" is an eyesore. It was responsible of several
bugs because it is handled after the message analysis. With the HTX
representation, the situation is cleaner because no rewind on forwarded data is
required. But it remains ugly.
With recent changes in HAProxy, we have the opportunity to handle it much
better. The message formatting is now done in the HTTP multiplexers. So it seems
to be the right place to handle this option. Now, the server name is added by
the HTTP multiplexers (h1, h2 and fcgi).
A different engine-id is now generated for each thread. So, it is possible to
enable the async mode with several threads.
This patch may be backported to older versions.
Use the same algorithm as the sample fetch uuid(). This one was added
recently, so it is better to generate UUIDs the same way.
This patch may be backported to older versions.
SPOE engine-id is the same for all processes when nbproc is more than 1. So, in
async mode, an agent receiving a NOTIFY frame from a process may send the ACK to
another process. This is obviously wrong. A different engine-id must be generated
for each process.
This patch must be backported to 2.0, 1.9 and 1.8.
When a request is received, if the h2 preface is matched, an implicit upgrade
from h1 to h2 is performed. This must only be done for the first request on a
connection. But a test was missing to ensure it is really the first request.
This patch must be backported to 2.0.
When a frame is received for an unknown or already closed stream, it must be
skipped. This also happens when a stream error is reported. But we must be sure
to only skip received data. In the loop in h2_process_demux(), when such frames
are handled, the whole frame length is systematically skipped. If the frame
payload is only partially received, this leaves the demux buffer in an
undefined state. Because of this bug, all sorts of errors may be observed,
like crashes or intermittent freezes.
This patch must be backported to 2.0, 1.9 and 1.8.
Since the commit 6884aa3e ("BUG/MAJOR: mux-h2: Handle HEADERS frames received
after a RST_STREAM frame"), HEADERS frames received for an unknown or already
closed stream are decoded. Once decoded, an error is reported for the
stream. But because it is a dummy stream (h2_closed_stream), its state cannot be
changed. So instead, we must return the dummy error stream (h2_error_stream).
This patch must be backported to 2.0 and 1.9.
Consecutive to commit 6884aa3eb0 ("BUG/MAJOR: mux-h2: Handle HEADERS frames
received after a RST_STREAM frame") some valid frames on closed streams
(RST_STREAM, PRIORITY, WINDOW_UPDATE) were now rejected. It turns out that
the previous condition was in fact intentional to catch only sensitive
frames, which was indeed a mistake since these ones needed to be decoded
to keep HPACK synchronized. But we must absolutely accept WINDOW_UPDATES
or we risk stalling some transfers. And RST/PRIO definitely are valid.
Let's adjust the condition to reflect that and update the comment to
explain the reason for this unobvious condition.
This must be backported to 2.0 and 1.9 after the commit above is brought
there.
This way we now always have the event dates, which were really missing,
especially when used with traces:
<0>2019-09-26T07:57:25.183845 [00|h2|1|mux_h2.c:3024] receiving H2 HEADERS frame : h2c=0x1ddcad0(B,FRP) h2s=0x1dde9e0(3,HCL)
<0>2019-09-26T07:57:25.183845 [00|h2|4|mux_h2.c:2505] h2c_bck_handle_headers(): entering : h2c=0x1ddcad0(B,FRP) h2s=0x1dde9>
<0>2019-09-26T07:57:25.183846 [00|h2|4|mux_h2.c:4096] h2c_decode_headers(): entering : h2c=0x1ddcad0(B,FRP)
<0>2019-09-26T07:57:25.183847 [00|h2|4|mux_h2.c:4298] h2c_decode_headers(): leaving : h2c=0x1ddcad0(B,FRH)
<0>2019-09-26T07:57:25.183848 [00|h2|0|mux_h2.c:2559] rcvd H2 response : h2c=0x1ddcad0(B,FRH) : [3] H2 RES: HTTP/2.0 200
<0>2019-09-26T07:57:25.183849 [00|h2|4|mux_h2.c:2560] h2c_bck_handle_headers(): leaving : h2c=0x1ddcad0(B,FRH) h2s=0x1dde9e>
<0>2019-09-26T07:57:25.183849 [00|h2|4|mux_h2.c:2866] h2_process_demux(): no more Rx data : h2c=0x1ddcad0(B,FRH)
<0>2019-09-26T07:57:25.183849 [00|h2|4|mux_h2.c:3123] h2_process_demux(): notifying stream before switching SID : h2c=0x1dd>
<0>2019-09-26T07:57:25.183850 [00|h2|4|mux_h2.c:1014] h2s_notify_recv(): in : h2c=0x1ddcad0(B,FRH) h2s=0x1dde9e0(3,HCL)
<0>2019-09-26T07:57:25.183850 [00|h2|4|mux_h2.c:3135] h2_process_demux(): leaving : h2c=0x1ddcad0(B,FRH)
<0>2019-09-26T07:57:25.183851 [00|h2|4|mux_h2.c:3319] h2_send(): entering : h2c=0x1ddcad0(B,FRH)
<0>2019-09-26T07:57:25.183851 [00|h2|4|mux_h2.c:3145] h2_process_mux(): entering : h2c=0x1ddcad0(B,FRH)
<0>2019-09-26T07:57:25.183851 [00|h2|4|mux_h2.c:3234] h2_process_mux(): leaving : h2c=0x1ddcad0(B,FRH)
<0>2019-09-26T07:57:25.183852 [00|h2|4|mux_h2.c:3428] h2_send(): leaving with everything sent : h2c=0x1ddcad0(B,FRH)
<0>2019-09-26T07:57:25.183852 [00|h2|4|mux_h2.c:3319] h2_send(): entering : h2c=0x1ddcad0(B,FRH)
It looks like some format options could finally be separated from the sink,
or maybe enforced. For example we could imagine making the date optional
or its resolution configurable within a same buffer.
Similarly, maybe trace events would like to always emit the date even
on stdout, while traffic logs would prefer not to emit the date in the
ring buffer given that there's already one in the message.