Make it so that for each server, instead of specifying a hostname, one can
use an SRV label.
When doing so, haproxy will first resolve the SRV label, then apply the
resulting hostnames, as well as ports and weights (priority is ignored for
now), to each server using the SRV label.
The label is resolved periodically; any server disappearing from the SRV
records will be removed, and any server appearing will be added, provided
there are free server slots in haproxy.
As DNS servers may not return all IPs in one answer, we want to cache the
previous entries. Those entries are removed when considered obsolete, which
happens when the IP hasn't been returned by the DNS server for a time
defined by the "hold obsolete" parameter of the resolvers section. The
default is 30s.
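As an illustration, here is a minimal, hedged configuration sketch (the resolvers name, nameserver address and SRV label are assumptions, not taken from the patch):

    resolvers mydns
        nameserver ns1 10.0.0.53:53
        # keep entries missing from answers for 30s before dropping them
        hold obsolete 30s

    backend app
        # no port here: with an SRV label, ports and weights come from the SRV records
        server-template app 5 _http._tcp.example.com resolvers mydns check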
With strict-sni, an SSL connection fails if no certificate matches the SNI.
With no certificate on the bind line, all SSL connections fail, which is
consistent with the strict-sni behavior. When 'generate-certificates' is
set, 'strict-sni' is never used. When 'strict-sni' is set, default_ctx is
never used. So allow starting without a certificate only in this case.
The use case is to start haproxy with SSL before the customer starts to use
certificates, typically with 'crt' pointing to an empty directory together
with the 'strict-sni' parameter.
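For example, a hedged configuration sketch of this use case (the path and names are assumptions):

    frontend fe_ssl
        mode http
        # /etc/haproxy/certs/ may be empty at startup; strict-sni makes this legal
        bind :443 ssl crt /etc/haproxy/certs/ strict-sni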
Since commit f6b37c67 ["BUG/MEDIUM: ssl: in bind line, ssl-options after
'crt' are ignored."], certificate generation is broken.
To generate a certificate, we retrieved the private key of the default
certificate using the SSL object. But since commit f6b37c67, the SSL object
is created with a dummy certificate (initial_ctx).
So to fix the bug, we directly use the default certificate stored in the
bind_conf structure. We use the SSL_CTX_get0_privatekey function to do so.
Because this function does not exist in OpenSSL < 1.0.2 nor in LibreSSL, it
has been added to openssl-compat.h with the right #ifdef.
This one dumps the fdtab for all active FDs with some quickly interpretable
characters to read the flags (upper case = set, lower case = unset). It
can probably be improved to report fdupdt[] and/or fdinfo[] but at least it
provides a good start and makes it possible to see how FDs are seen. When
the fd owner is a connection, its flags are also reported, as it can help
compare them with the polling status, and so is the target (fe/px/sv). When
it's a listener, the listener's state is reported as well as the frontend
it belongs to.
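As a usage illustration (the stats socket path is an assumption):

    $ echo "show fd" | socat stdio /var/run/haproxy.sock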
The logical operations were inverted so enable/disable operations did
the opposite.
The bug is present since 1.7 so the fix should be backported there.
Commits 2ab8867 ("MINOR: ssl: compare server certificate names to the
SNI on outgoing connections") and 96c7b8d ("BUG/MINOR: ssl: Fix check
against SNI during server certificate verification") made it possible
to check that the server's certificate matches the name presented in
the SNI field. While it solves a class of problems, it opens another
one which is that by failing such a connection, we'll retry it and put
more load on the server. It can be a real problem if a user can trigger
this issue, which is what will very often happen when the SNI is forwarded
from the client to the server.
This patch solves this by detecting that this very specific hostname
verification failed and that the hostname was provided using SNI, and
then it simply disables retries and the failure is immediate.
At the time of writing this patch, the previous patches were not backported
(yet), so no backport is needed for this one unless the aforementioned
patches are backported as well. This patch requires the previous patches
"BUG/MINOR: ssl: make use of the name in SNI before verifyhost" and
"MINOR: ssl: add a new error code for wrong server certificates".
If a server presents an unexpected certificate to haproxy, that is, a
certificate that doesn't match the expected name as configured in
verifyhost or as requested using SNI, we want to store that precious
information. Fortunately we have access to the connection in the
verification callback so it's possible to store an error code there.
For this purpose we use CO_ER_SSL_MISMATCH_SNI (for when the cert name
didn't match the one requested using SNI) and CO_ER_SSL_MISMATCH for
when it doesn't match verifyhost.
Commit 2ab8867 ("MINOR: ssl: compare server certificate names to the SNI
on outgoing connections") introduced the ability to check server cert
names against the name provided with in the SNI, but verifyhost was kept
as a way to force the name to check against. This was a mistake, because :
- if an SNI is used, any static hostname in verifyhost will be wrong ;
worse, if it matches and doesn't match the SNI, the server presented
the wrong certificate ;
- there's no way to have a default name to check against for health
checks anymore because the point above mandates the removal of the
verifyhost directive
This patch reverses the ordering of the check: whenever SNI is used, the
name provided always has precedence (i.e. the server must always present a
certificate that matches the requested name). And if no SNI is provided,
then verifyhost is used, and will be configured to match the server's
default certificate name. This will work both when SNI is not used and
for health checks.
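As an illustration of the resulting behavior, a hedged configuration sketch (addresses, names and paths are assumptions):

    backend be_ssl
        # the name sent in the SNI always takes precedence for the certificate check
        server s1 192.168.0.10:443 ssl sni req.hdr(host) verify required ca-file /etc/haproxy/ca.pem
        # with no SNI, verifyhost provides the name to check (e.g. for health checks)
        server s2 192.168.0.11:443 ssl verify required verifyhost srv.example.com ca-file /etc/haproxy/ca.pem check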
If the commit 2ab8867 is backported in 1.7 and/or 1.6, this one must be
backported too.
This patch fixes commit 2ab8867 ("MINOR: ssl: compare server certificate
names to the SNI on outgoing connections").
When we check the certificate sent by a server, in the verify callback, we get
the SNI from the session (SSL_SESSION object). In OpenSSL, tlsext_hostname value
for this session is copied from the ssl connection (SSL object). But the copy is
done only if the "server_name" extension is found in the server hello
message. This means the server has found a certificate matching the client's
SNI.
When the server returns a default certificate not matching the client's SNI, it
doesn't set any "server_name" extension in the server hello message. So no SNI
is set on the SSL session and SSL_SESSION_get0_hostname always returns NULL.
To fix the problem, we get the SNI directly from the SSL connection. It is
always defined with the value set by the client.
If the commit 2ab8867 is backported in 1.7 and/or 1.6, this one must be
backported too.
Note: it's worth mentioning that by making the SNI check work, we
introduce another problem by which failed SNI checks can cause
long connection retries on the server, and in certain cases the
SNI value used comes from the client. So this patch series must
not be backported until this issue is resolved.
Adis Nezirovic reports:
While playing with Lua API I've noticed that core.proxies attribute
doesn't return all the proxies, more precisely the ones with same names
(e.g. for frontend and backend with the same name it would only return
the latter one).
So, this patch fixes this problem without breaking the current behaviour.
We have two cases of proxies with frontend/backend capabilities:
The first case is 'listen'. This case is not a problem because the
proxy object processes these two entities as a single one, which is the
expected behavior. For this case the "proxies" list works fine.
The second case is a frontend and a backend with the same name. I think
this case is allowed for compatibility with the 'listen' declaration.
These two proxies with the same name and different capabilities must not
be processed through the same object (they have different statistics and
different orders). In fact, one of the two objects crushes the other one,
which is no longer accessible.
To fix this problem, this patch adds two lists, "frontends" and
"backends"; each of these lists contains the corresponding specialized
proxies. Note that 'listen' proxies are declared in both lists.
By Adis Nezirovic:
This is just for convenience and uniformity, Proxy.servers/listeners
returns a table/hash of objects with names as keys, but for example when
I want to pass such object to some other Lua function I have to manually
copy the name (or wrap the object), since the object itself doesn't
expose name info.
This patch simply adds the proxy name as member of the proxy object.
The recent scheduler change broke the Lua co-sockets due to
hlua_applet_wakeup() returning NULL after waking the applet up. With the
previous scheduler, returning NULL was a way to do nothing on return.
With the new one it keeps TASK_RUNNING set, causing all new notifications
to end up in t->pending_state instead of t->state, and prevents the
task from being added into the run queue again, so it's never woken
up anymore.
The applet keeps waking up, causing hlua_socket_handler() to do nothing
new; then si_applet_wake_cb() calls stream_int_notify() to try to wake
the task up, which it can't do due to the TASK_RUNNING flag, and then
decides that since the associated task is not in the run queue, it needs
to call stream_int_update_applet() to propagate the update. This last one
finds that the applet needs to be woken up to deal with the last reported
events and calls appctx_wakeup() again. Previously, this situation didn't
exist because the task was always added in the run queue despite the
TASK_RUNNING flag.
By returning the task instead in hlua_applet_wakeup(), we can ensure its
flag is properly cleared and the task is requeued if needed or just sits
waiting for new events to happen.
This fix requires the previous ones: "BUG/MINOR: lua: always detach the
tcp/http tasks before freeing them" and "MINOR: task: always preinitialize
the task's timeout in task_init()".
Thanks to Thierry, Christopher and Emeric for the long head-scratching
session!
No backport is needed as the bug doesn't appear in older versions, and
it's unclear whether backporting it might break something.
In hlua_{http,tcp}_applet_release(), a call to task_free() is performed
to release the task, but no task_delete() is made on these tasks. Till
now it wasn't much of a problem because this was normally not done with
the task in the run queue, and the task was never put into the wait queue
since it doesn't have any timer. But with threading it will become an
issue. And not having this already prevents another bug from being fixed.
Thanks to Christopher for spotting this one. A backport to 1.7 and 1.6 is
preferred for safety.
For known methods (GET, POST...), an enum is used in samples instead of a
chunk to reference the method, so there is no need to allocate memory when
a variable is stored with this kind of sample.
First, the type SMP_T_METH was not handled by the smp_dup function. It was
never called with this kind of sample, so it's not really a problem. But
this could be useful in the future.
For all known HTTP methods (GET, POST...), there is no extra space
allocated for a sample of type SMP_T_METH. But for unknown methods, a chunk
is used. So, as for strings, we duplicate the data using a trash chunk.
The get_addr() method of the Lua Server class incorrectly used
INET_ADDRSTRLEN for IPv6 addresses resulting in failing to convert
longer IPv6 addresses to strings.
This fix should be backported to 1.7.
The get_addr() method of the Lua Server class was using the
'sockaddr_storage addr' member to get the port value. HAProxy does not
store ports in this member as it uses a separate member, called
'svc_port'.
This fix should be backported to 1.7.
In commit "MINOR: http: Switch requests/responses in TUNNEL mode only by
checking txn flags", it is possible to have an infinite loop on HTTP_MSG_CLOSING
state.
Akhnin Nikita reported that Lua doesn't build on Solaris 10 because
the code uses timegm() to parse a date, which is not provided there.
The way to implement timegm() recommended in the man page is broken,
as it is based on changing the TZ environment variable at run time
before calling the function (which is obviously not thread safe, and
terribly inefficient).
Here instead we rely on the new my_timegm() function; it should be
sufficient for all known use cases.
timegm() is not provided everywhere and the documentation on how to
replace it is bogus as it proposes an inefficient and non-thread safe
alternative.
Here we reimplement everything needed to compute the number of seconds
since Epoch based on the broken down fields in struct tm. It is only
guaranteed to return correct values for correct inputs. It was successfully
tested with all possible 32-bit values of time_t converted to struct tm
using gmtime() and back to time_t using the legacy timegm() and this
function, and both functions always produced the same result.
Thanks to Benoît Garnier for an instructive discussion and detailed
explanations of the various time functions, leading to this solution.
Commit 5db33cbd "MEDIUM: ssl: ssl_methods implementation is reworked and
factored for min/max tlsxx" dropped the case where the SSL library has
removed SSLv3.
Commit 1e59fcc5 "BUG/MINOR: ssl: Be sure that SSLv3 connection methods
exist for openssl < 1.1.0" fixed the build, but incorrectly, because
haproxy then thinks the SSL library supports SSLv3.
SSL_OP_NO_* are flags set in ssl_options and are the way haproxy links SSL
library capabilities to its configuration (the mapping table is
methodVersions). SSL_OP_NO_* is set to 0 when the SSL library doesn't
support a given TLS version. Older versions (like SSLv3) can be removed at
build time or be unsupported (as in LibreSSL). In all cases OPENSSL_NO_SSL3
is defined.
To keep the same logic, this patch sets SSL_OP_NO_SSLv3 to 0 when SSLv3 is
not supported by the SSL library (when OPENSSL_NO_SSL3 is defined).
The previous patch ("MINOR: http: Rely on analyzers mask to end processing in
forward_body functions") contains a bug for keep-alive transactions.
For these transactions, AN_REQ_FLT_END and AN_RES_FLT_END analyzers must be
removed only when all outgoing data was forwarded.
Instead of relying on the request or response state, we use the
"chn->analysers" mask, as all other analyzers do. So now,
http_resync_states does not return anything anymore.
The debug message in http_resync_states has been improved.
When the body length of a HTTP response is undefined, the HTTP parser is blocked
in the body parsing. Before HAProxy 1.7, in this case, because
AN_RES_HTTP_XFER_BODY is never set, there is no visible effect. When the server
closes its connection to terminate the response, HAProxy catches it as a normal
closure. Since 1.7, we always set this analyzer to enter at least once in
http_response_forward_body. But, in the present case, when the server connection
is closed, http_response_forward_body is called one time too many. The response
is correctly sent to the client, but an error is caught and logged with "SD--"
flags.
To reproduce the bug, you can use the configuration "tests/test-fsm.cfg". The
tests 3 and 21 hit the bug.
The idea to fix the bug is to switch the response into TUNNEL mode without
switching the request. This is possible thanks to the previous patches.
First, we need to detect responses with undefined body length during states
synchronization. Excluding tunnelled transactions, when the response length is
undefined, TX_CON_WANT_CLO is always set on the transaction. So, when states are
synchronized, if TX_CON_WANT_CLO is set, the response is switched in TUNNEL mode
and the request remains unchanged.
Then, in http_msg_forward_body, we add a specific check to switch the response
in DONE mode if the body length is undefined and if there is no data filter.
This patch depends on following previous commits:
* MINOR: http: Switch requests/responses in TUNNEL mode only by checking txn flags
* MINOR: http: Reorder/rewrite checks in http_resync_states
This patch must be backported in 1.7 with 2 previous ones.
Today, the only way to have a request or a response in the HTTP_MSG_TUNNEL
state is to have the flag TX_CON_WANT_TUN set on the transaction. So this
is a symmetric state: both the request and the response are switched into
it at the same time. This can therefore be done by checking transaction
flags instead of relying on the other side's state. This is the purpose of
this patch.
This way, if for any reason we need to switch only one side into TUNNEL
mode, it will be possible. And to prepare asymmetric cases, we check
channel flags in the DONE _AND_ TUNNEL states.
WARNING: This patch will be used to fix a bug. The fix will be committed in
the very next commit. So if the fix is backported, this one must be
backported too.
The previous patch removed the forced symmetry of the TUNNEL mode during
state synchronization. Here, we take care to remove the body analyzer only
on the channel in TUNNEL mode. In fact, today, this change has no effect
because both sides are switched at the same time. But this way, with some
changes, it will be possible to keep the body analyzer on one side (to
finish the state synchronization) while the other one is in TUNNEL mode.
WARNING: This patch will be used to fix a bug. The fix will be committed in
the very next commit. So if the fix is backported, this one must be
backported too.
We cannot perform garbage collection on an unreferenced thread. With this
patch, the memory is freed and another Lua process can use it for other
things.
HAProxy is single-threaded, so this bug doesn't cause a crash.
This patch must be backported to 1.6 and 1.7.
In some cases, the socket is misused. The user can open a socket and never
close it, or open it and close it without sending data. This leaks all the
resources associated with the stream (buffer, spoe, ...).
This is caused by the stream_shutdown function, which is called outside
of the stream execution process. Sometimes, the shutdown is required
while the stream is not started, so the cleanup is ignored.
This patch changes the shutdown mode of the session. Now if the session is
no longer used and Lua wants to destroy it, it just sets a destroy flag
and the session kills itself.
This patch should be backported to 1.6 and 1.7.
When we destroy the Lua session, we manipulate the Lua stack,
so errors can be raised. It is better to catch these errors.
This patch should be backported to 1.6 and 1.7.
We just forgot to reset the safe mode. This has no consequence:
the safe mode just sets a pointer to a function (which is only called
on error) and initialises a longjmp.
Outside of Lua execution, this longjmp is never executed and the
function is never called.
This patch should be backported to 1.6 and 1.7.
When several stick-tables were configured with several peers sections,
only part of them could be synchronized: the ones attached to the last
parsed 'peers' section. This was due to the fact that the peer I/O handler
referred to the wrong peers section list, in fact always the same one: the
last one parsed.
The fact that the global peers section list was named "struct peers *peers"
led to this issue. This variable name is dangerous ;).
So this patch renames the global 'peers' variable to 'cfg_peers' to ensure
that no such wrong references are still in use, then all the functions
which used the old 'peers' variable have been modified to refer to the
correct peer list.
Must be backported to 1.6 and 1.7.
In ssl_sock_to_buf(), when we face a small read, we used to consider it
as an indication of the end of incoming data, as is the case with plain
text. The problem is that here it's quite different: SSL records are
returned at once, so doing this makes us wake all the upper layers for each
and every record. Given that SSL records are 16kB by default, this is
rarely observed unless the protocol employs small records or the buffers
are increased. But with 64kB buffers while trying to deal with HTTP/2
frames, the exchanges are obviously suboptimal as there are two messages
per frame (one for the frame header and another one for the frame payload),
causing the H2 parser to be woken up half of the time without being able
to proceed:
try=65536 ret=45
try=65536 ret=16384
try=49152 ret=9
try=49143 ret=16384
try=32759 ret=9
try=32750 ret=16384
try=16366 ret=9
try=32795 ret=27
try=49161 ret=9
try=49152 ret=16384
try=49116 ret=9
try=49107 ret=16384
try=32723 ret=9
try=32714 ret=16384
try=16330 ret=9
try=32831 ret=63
try=49161 ret=9
try=49152 ret=16384
try=49080 ret=9
try=49071 ret=2181
With this change, the buffer can safely be filled with all pending frames
at once when they are available.
Only 100 was considered informational instead of all 1xx. This can be
a problem when facing a 102 ("progress") or with the upcoming 103 for
early hints. Let's properly handle all 1xx now, leaving a special case
for 101 which is used for the upgrade.
This fix should be backported to 1.7, 1.6 and 1.5. In 1.4 the code is
different but the backport should be made there as well.
With this patch, additional information is added to stick-table definition
messages so as to make external applications capable of learning peer
stick-table configurations. First, the stick-table entries duration is
added, followed by the frequency counters' type IDs and values.
May be backported to 1.7 and 1.6.
In commit 2b553de5 ("BUG/MINOR: filters: Don't force the stream's wakeup
when we wait in flt_end_analyze"), we removed a task_wakeup in
flt_end_analyze to avoid consuming too much CPU by looping in certain
circumstances.
But this fix was too drastic. For keep-alive transactions, flt_end_analyze
is often called only for the response. Then the stream is paused until a
timeout is hit or the next request is received. We first need to give both
channels a chance to call the flt_end_analyze function. Then if a filter
needs to wait here, it is its responsibility to wake up the stream when
needed. To fix the bug, and thanks to the previous commits, we set the
CF_WAKE_ONCE flag on channels to pretend there is activity. On the current
channel, the flag will be removed without any effect, but for the other
side the analyzer will be called immediately.
Thanks to Lukas Tribus for his detailed analysis of the bug.
This patch must be backported to 1.7 with the 2 previous ones:
* a94fda3 ("BUG/MINOR: http: Don't reset the transaction if there are still data to send")
* cdaea89 ("BUG/MINOR: stream: Don't forget to remove CF_WAKE_ONCE flag on response channel")
To reset an HTTP transaction, we need to be sure all data was sent, for
both the request and the response. There are tests on the request and
response buffers for that in the http_resync_states function. But the
return code was wrong: we must return 0 to wait.
This patch must be backported in 1.7
This flag can be set on a channel to pretend there is activity on it. This is a
way to wake-up the corresponding stream and evaluate stream analyzers on the
channel. It is correctly handled on both channels but removed only on the
request channel.
This patch is flagged as a bug but for now, CF_WAKE_ONCE is never set on the
response channel.
When support for passing SNI to the server was added in 1.6-dev3, there
was no way to validate that the certificate presented by the server would
really match the name requested in the SNI, which is quite a problem as
it allows other (valid) certificates to be presented instead (when hitting
the wrong server or due to a man in the middle).
This patch adds the missing check against the value passed in the SNI.
The "verifyhost" value keeps precedence if set. If no SNI is used and
no verifyhost directive is specified, then the certificate name is not
checked (this is unchanged).
In order to extract the SNI value, it was necessary to make use of
SSL_SESSION_get0_hostname(), which appeared in openssl 1.1.0. This is
a trivial function which returns the value of s->tlsext_hostname, so
it was provided in the compat layer for older versions. After some
refinements from Emmanuel, it now builds with openssl 1.0.2, openssl
1.1.0 and boringssl. A test file was provided to ease testing all cases.
After some careful observation period it may make sense to backport
this to 1.7 and 1.6 as some users rightfully consider this limitation
as a bug.
Cc: Emmanuel Hocdet <manu@gandi.net>
Signed-off-by: Willy Tarreau <w@1wt.eu>
The pool used to log the URI was created with a size of 0 because the
configuration and 'tune.http.logurilen' were parsed too early.
The fix consists in postponing the pool_create, as is done for
cookie captures.
Regression introduced with 'MINOR: log: Add logurilen tunable'.
The bug: Maps/ACLs using the same file/id can mistakenly inherit
their flags from the last declared one.
i.e.
$ cat haproxy.conf
listen mylistener
mode http
bind 0.0.0.0:8080
acl myacl1 url -i -f mine.acl
acl myacl2 url -f mine.acl
acl myacl3 url -i -f mine.acl
redirect location / if myacl2
$ cat mine.acl
foobar
Shows an unexpected redirect for request 'GET /FOObAR HTTP/1.0\n\n'.
This fix should be backported on mainline branches v1.6 and v1.7.
Introduced regression with 'MAJOR: applet scheduler rework' (1.8-dev only).
The fix consists in re-enabling the appctx immediately from the applet
wake callback if process_stream is not pending in the run queue, the
applet wants to perform a put or a get, and the WAIT_ROOM flag was removed
by stream_int_notify.
Instead of doing a malloc/free for each HTTP transaction to allocate the
compression state (when HTTP compression is enabled), we use a memory pool.
This patch fixes an obvious memory leak in the compression filter. The
compression state (comp_state) is allocated when an HTTP transaction
starts, in the channel_start_analyze callback, whether we are able to
compress the response or not. So it must be released when the transaction
ends, in the channel_end_analyze callback.
But there is a bug here. The state is released on the response side only. So, if
a transaction ends before the response is started, it is never released. This
happens when a connection is closed before the response is started.
To fix the bug, statistics about the HTTP compression are now updated in
http_end callback, when the response parsing ends. It happens only if no error
is encountered and when the response is compressed. So, it is safe to release
the compression state in the channel_end_analyze callback, regardless of
the channel's type.
This patch must be backported in 1.7.
The reference of the current map/acl element to dump could
be destroyed if the map is updated from an 'http-request del-map'
configuration rule or through a 'del map/acl' on the CLI.
We use a 'back_refs' chaining element to fix this, as is
done to dump sessions.
This patch also needs the fix
'BUG/MAJOR: cli: fix custom io_release was crushed by NULL.'
to clean the back_ref and avoid a crash on a further
del/clear map operation.
Those fixes should be backported to mainline branches 1.7 and 1.6.
This patch won't directly apply to 1.6.
Recently merged commit 0cfe388 ("MINOR: frontend: retrieve the ALPN name when
available") assumed that the connection is always known in frontend_accept()
which is not true for outgoing peers connections for example.
No backport needed.
In order to authorize calls of appctx_wakeup on a running task:
- from within the task handler itself.
- in the future, from another thread.
The appctx is considered paused by default after running the handler.
The handler should explicitly call appctx_wakeup to be re-called.
When appctx_free is called on a running handler, the real
free is postponed until the end of the handler's execution.
It's more efficient this way, as it allows to flush a send buffer before
receiving data in the other one. This can lead to a slightly faster buffer
recycling, thus slightly less memory and a small performance increase by
using a hotter cache.
In order to implement hot-pluggable applets like we'll need for HTTP/2
which will speak a different protocol than the expected one, it will be
mandatory to be able to clear all analysers from the request and response
channel and/or to keep only the ones the applet initializer installed.
Unfortunately for now in sess_establish() we systematically place a number
of analysers inherited from the frontend, backend and some hard-coded ones.
This patch reuses the now unused SF_TUNNEL flag on the stream to indicate
we're dealing with a tunnel and don't want to add more analysers anymore.
It will be usable to install such a specific applet.
Ideally over the long term it might be nice to be able to set the mode on
the stream instead of the proxy so that we can decide to change a stream's
mode (eg: TCP, HTTP, HTTP/2) at run time. But it would require many more
changes for a gain which is not yet obvious.
This is used to retrieve the TLS ALPN information from a connection. We
also support a fallback to NPN if ALPN doesn't find anything or is not
available on the existing implementation. It happens that depending on
the library version, either one or the other is available. NPN was
present in openssl 1.0.1 (very common) while ALPN is in 1.0.2 and onwards
(still uncommon at the time of writing). Clients are used to send either
one or the other to ensure a smooth transition.
For HTTP/2 we'll have to choose the upper layer based on the
advertised protocol name here and we want to keep debugging,
so let's move debugging earlier.
It doesn't make sense that stream_new() doesn't set the target nor the
analysers and that the caller has to do it even when it doesn't know
about streams (eg: in session_accept_fd()). This causes trouble for
H2 where the applet handling the protocol cannot properly change
this information during its init phase.
Let's ensure it's always set and that the callers don't set it anymore.
Note: peers and lua don't use analysers and that's properly handled.
The task_wakeup was called on stream_new, but the task/stream
wasn't fully initialized yet. The task_wakeup must be called
explicitly by the caller once the task/stream is initialized.
In order to authorize calls of task_wakeup on a running task:
- from within the task handler itself.
- in the future, from another thread.
The lookups on the run queue and wait queue are re-worked
to prepare the multithread stuff.
If task_wakeup is called on a running task, the woken
message flags are saved in the task's 'pending_state' attribute.
The real wakeup is postponed until the end of the handler's execution,
and the woken messages are copied from pending_state
to the state attribute of the task.
It's important to note that this change will cause a very minor
(though measurable) performance loss but it is necessary to make
forward progress on a multi-threaded scheduler. Most users won't
ever notice.
Mathias Weiersmueller reported an interesting issue with logs which Lukas
diagnosed as dating back from commit 9b061e332 (1.5-dev9). When front
connection information (ip, port) are logged in TCP mode and the log is
emitted at the end of the connection (eg: because %B or any log tag
requiring LW_BYTES is set), the log is emitted after the connection is
closed, so the address and ports cannot be retrieved anymore.
It could be argued that we'd make a special case of these to immediately
retrieve the source and destination addresses from the connection, but it
seems cleaner to simply pin the front connection, marking it "tracked" by
adding the LW_XPRT flag to mention that we'll need some of these elements
at the last moment. Only LW_FRTIP and LW_CLIP are affected. Note that after
this change, LW_FRTIP could simply be removed as it's not used anywhere.
Note that the problem doesn't happen when using %[src] or %[dst] since
all sample expressions set LW_XPRT.
This must be backported to 1.7, 1.6 and 1.5.
We cannot store more than 32K headers in the hdr_idx structure, because
internally we use signed short integers. To avoid any bugs (due to an
integer overflow), a check has been added on tune.http.maxhdr to be sure it
is not set to a value greater than 32767 nor lower than 1 (because it is
nonsense to set this parameter to a value <= 0).
The documentation has been updated accordingly.
This patch can be backported in 1.7, 1.6 and 1.5.
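For instance, a hedged configuration sketch (the value is an arbitrary example within the accepted 1..32767 range):

    global
        tune.http.maxhdr 128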
When a peer task has sent a synchronization request to remote peers
its next expiration date was updated based on a resynchronization timeout
value which itself may have already expired, leading the underlying
poller to wait for 0ms during a fraction of a second (consuming high CPU
resources).
With this patch we update such peer task expiration dates only if
the resynchronization timeout is not already expired.
Thanks to Patrick Hemmer who reported an issue with nice traces
which helped in finding this one.
This patch may be backported to 1.7 and 1.6.
When starting the master worker with -sf or -st, the PIDs will be reused
on the next reload, which is a problem if new processes on the system
took those PIDs.
This patch ensures that we don't register old PIDs in the reload system
when launching the master worker.
Don't copy the -x argument anymore in copy_argv() since it's already
allocated in mworker_reload().
Make the copy_argv() more consistent when used with multiple arguments
to strip.
It prevents multiple -x on reload, which is not supported.
This patch fixes a segfault in the command line parser.
When haproxy is launched with -x with no argument and -x is the last
option in argv, it segfaults.
Use usage() instead of exit() on error.
James Brown reported some cases where a race condition happens between
the old and the new processes resulting in the leaving process removing
a newly bound unix socket. Jeff gave all the details he observed here :
https://www.mail-archive.com/haproxy@formilux.org/msg25001.html
The unix socket removal was an attempt at an optimal cleanup, which
almost never works anyway since the process is supposed to be chrooted.
And in the rare cases where it works it occasionally creates trouble.
There was already a workaround in place to avoid removing this socket
when it's been inherited from a parent's file descriptor.
So let's finally kill this useless stuff now to definitely get rid of
this persistent problem.
This fix should be backported to all stable releases.
A peer session which has just been created upon reconnect timeout
expiration could be shut down (at the peer session level) right afterwards,
because the remote side peer could also have connected just after. In such
a case the underlying TCP session was still running (connect()/accept())
and finally left in CLOSE_WAIT state after the remote side stopped writing
(shutdown(SHUT_WR)).
From now on, with this patch, we never shut down such peer sessions which
have just been created. We let them connect to the remote peer, which is
already connected and must shut down its own peer session.
Thanks to Patrick Hemmer and Yves Lafon at w3.org for reporting this issue,
and for having tested this patch in the field.
Thanks also to the Willy and Yelp blog posts which helped me a lot in
fixing it (see
https://www.haproxy.com/blog/truly-seamless-reloads-with-haproxy-no-more-hacks/ and
https://engineeringblog.yelp.com/2015/04/true-zero-downtime-haproxy-reloads.html).
A filter can choose to loop when an HTTP message is in the
HTTP_MSG_ENDING state. But the transaction is terminated with an error if
the input is closed (CF_SHUTR set on the channel). At this step, we have
received all data, so we can wait.
So now, we also check the parser state before leaving. This fix only affects
configs that use a filter that can wait in http_forward_data or http_end
callbacks, when all data were parsed.
For openssl 1.0.2, SSLv3_server_method and SSLv3_client_method are undefined if
OPENSSL_NO_SSL3_METHOD is set. So we must add a check on this macro before using
these functions.
For an ACL, we can load patterns from a map using the flag -M. For example:
acl test hdr(host) -M -f hosts.map
The file is parsed as a map and the ACL will be executed as expected. But the
reference flag is wrong. It is set to PAT_REF_ACL. So the map will never be
listed by a "show map" on the stat socket. Setting the reference flag to
PAT_REF_ACL|PAT_REF_MAP fixes the bug.
Jean Lubatti reported a crash on haproxy using a config involving cookies
and tarpit rules. It just happens that since 1.7-dev3 with commit 83a2c3d
("BUG/MINOR : allow to log cookie for tarpit and denied request"), function
manage_client_side_cookies() was called after erasing the request buffer in
case of a tarpit action. The problem is that this function must absolutely
not be called with an empty buffer since it moves parts of it. A typical
reproducer consists in sending :
"GET / HTTP/1.1\r\nCookie: S=1\r\n\r\n"
On such a config :
listen crash
bind :8001
mode http
reqitarpit .
cookie S insert indirect
server s1 127.0.0.1:8000 cookie 1
The fix simply consists in moving the call to the function before the call
to buffer_erase().
Many thanks to Jean for testing instrumented code and providing a usable
core.
This fix must be backported to all stable versions since the fix introducing
this bug was backported as well.
Commit cb11fd2 ("MEDIUM: mworker: wait mode on reload failure")
introduced a regression: when HAProxy is used in daemon mode, it exits
with status 1 after forking its children.
HAProxy should exit(0); the exit(EXIT_FAILURE) was expected to be used
when the master fails in master-worker mode.
Thanks to Emmanuel Hocdet for reporting this bug. No backport needed.
The commit 201c07f68 ("MAJOR/REORG: dns: DNS resolution task and
requester queues") introduces a warning during compilation:
src/dns.c: In function ‘dns_resolve_recv’:
src/dns.c:487:6: warning: ‘need_resend’ may be used uninitialized in this function [-Wmaybe-uninitialized]
if (need_resend) {
^
This patch initializes the variable and removes the comment about it.
The commit 872f9c213 ("MEDIUM: ssl: add basic support for OpenSSL crypto
engine") broke the build without openssl support.
The ssl_free_dh() function is not defined when USE_OPENSSL is not
defined and leads to a compilation failure.
This patch modifies the way to re-enable the connection from the async fd
handler calling conn_update_sock_polling instead of the conn_fd_handler.
It also ensures that the polling is really stopped on the async fd.
OpenSSL's ASYNC API doesn't support moving buffers on SSL_read/write.
This patch disables the ASYNC mode dynamically when the handshake
is left and re-enables it on renegotiation.
This patch ensures that the ASYNC fd handlers won't be woken up
too early, by disabling the event cache for this fd on connection close
and when a WANT_ASYNC is raised by OpenSSL.
The calls to SSL_read/SSL_write/SSL_do_handshake before a real read
event was raised on the ASYNC fd generated an EAGAIN followed by a context
switch for some engines, or a blocked read for the others.
On connection close it resulted in a too early call to SSL_free followed
by a segmentation fault.
The SSL/TLS version can be changed per certificate if and only if the SSL
library supports an early callback on the handshake and, of course, if
this is implemented in haproxy. It's OK for BoringSSL. For OpenSSL,
version 1.1.1 has such a callback and could support it.
This patch cleans up the usage of the set_version function with a more
suitable name: ctx_set_version. It introduces the ssl_set_version function
(unused for the moment).
Very early in the connection rework process leading to v1.5-dev12, commit
56a77e5 ("MEDIUM: connection: complete the polling cleanups") marked the
end of use for this flag which since was never set anymore, but it continues
to be tested. Let's kill it now.
When dumping data at various places in the code, it's hard to figure
what is present where. To make this easier, this patch slightly modifies
debug_hexdump() to take a prefix string which is prepended in front of
each output line.
This patch is a major upgrade of the internal run-time DNS resolver in
HAProxy and it brings the following 3 main changes:
1. DNS resolution task
Up to now, DNS resolution was triggered by the health check task.
From now on, the DNS resolution task is autonomous. It is started by
HAProxy right after the scheduler is available, and it is woken either
when a network IO occurs for one of its nameservers or when a timeout
expires.
This means we can now enable DNS resolution for a server without
enabling health checking.
2. Introduction of a dns_requester structure
Up to now, DNS resolution was purposely made for resolving server
hostnames.
The idea is to ensure that any HAProxy internal object is able
to trigger a DNS resolution. For this purpose, 2 things have to be done:
- clean up the DNS code from the server structure (this was already
quite clean actually) and clean up the server's callbacks from
manipulating too much DNS resolution
- create an agnostic structure which allows linking a DNS resolution
and a requester of any type (using obj_type enum)
3. Manage requesters through queues
Up to now, there was a unique relationship between a resolution and its
owner (now called the requester). It's a shame, because in some cases,
multiple objects may share the same hostname and may benefit from a
resolution being performed by a third party.
This patch introduces the notion of queues, which are basically lists of
either currently running resolutions or waiting ones.
The resolutions are now available as a pool, which belongs to the resolvers.
The pool has a default size of 64 resolutions per resolvers section and is
allocated at configuration parsing.
This patch introduces a bit of roundrobin in the records stored in our
local cache.
Purpose is to allow some kind of distribution of the IPs found in a
response.
Note that the distribution properly applies only when the IP used by many
requesters disappears and is replaced by another one.
ancount is the number of answers available in a DNS response.
Before this patch, HAProxy used to store the ancount found in the buffer
(sent by the DNS server).
Unfortunately, this is now inaccurate and does not correspond to the
number of records effectively stored in our local version of the
response. For example, the CNAMEs are not stored.
This patch updates the ancount field to make it match what is effectively
stored in our version.
Introduction of a DNS response LRU cache in HAProxy.
When a positive response is received from a DNS server, HAProxy stores
it in the struct resolution and then also populates a LRU cache with the
response.
For now, the key in the cache is a XXHASH64 of the hostname in the
domain name format concatenated to the query type in string format.
Prior to this patch, the DNS responses were stored in a pre-allocated
memory area (allocated at HAProxy's startup).
The problem is that this memory was erased for each new DNS response
received and processed.
This patch removes the global memory allocation (which was not thread
safe by the way) and introduces a storage of the DNS response in the
struct resolution.
The memory in the struct resolution is also reserved at start up and is
thread safe, since each resolution structure will have its own memory
area.
For now, we simply store the response and use it atomically per
response per server.
In the process of breaking links between dns_* functions and other
structures (mainly server and a bit of resolution), the function
dns_get_ip_from_response needs to be reworked: it now can call
"callback" functions based on resolution's owner type to allow modifying
the way the response is processed.
For now, the main purpose of the callback function is to check that an IP
address is not already assigned to an element of the same type.
For now, only the server type has a callback.
This patch introduces some re-organisation around the DNS code in
HAProxy.
1. make the dns_* functions less dependent on 'struct server' and 'struct resolution'.
With this in mind, the following changes were performed:
- 'struct dns_options' has been removed from 'struct resolution' (well,
we might need it back at some point later, we'll see)
==> we'll use the 'struct dns_options' from the owner of the resolution
- dns_get_ip_from_response(): takes a 'struct dns_options' instead of
'struct resolution'
==> so the caller can pass its own dns options to get the most
appropriate IP from the response
- dns_process_resolve(): struct dns_option is deduced from new
resolution->requester_type parameter
2. add hostname_dn and hostname_dn_len into struct server
In order to avoid recomputing a server's hostname into its domain name
format (and use a trash buffer to store the result), it is safer to
compute it once at configuration parsing and to store it into the struct
server.
At the same time, the struct resolution linked to the server no longer
needs to store the hostname in domain name format. A simple pointer to
the server's one will do the trick.
The function srv_alloc_dns_resolution() properly manages everything for
us: memory allocation, pointer updates, etc...
3. move resolvers pointer into struct server
This patch makes the pointer to struct dns_resolvers from struct
dns_resolution obsolete.
The purpose is to make the resolution as "neutral" as possible, and since
the requester is already linked to the resolvers, we don't need this
information anymore in the resolution itself.
A couple of new functions to allocate and free memory for a DNS
resolution structure. The main purpose is to make the code related to DNS
more consistent.
They allocate or free memory for the structure itself. Later, if needed,
they should also allocate / free the buffers, etc, used by this structure.
They don't set/unset any parameters; this is the role of the caller.
This patch also implements calls to these functions everywhere it is
required.
The default len of request uri in log messages is 1024. In some use
cases, you need to keep the long trail of GET parameters. The only
way to increase this len is to recompile with DEFINE=-DREQURI_LEN=2048.
This commit introduces a tune.http.logurilen configuration directive,
allowing this to be tuned at run time.
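For example, a minimal configuration sketch (the value 2048 simply mirrors the compile-time example above):

    global
        tune.http.logurilen 2048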
This patch ensures that the children will exit when the master quits,
even if the master didn't send any signal.
The master and the workers are connected through a pipe; when the pipe
is closed, the children leave.
This option makes every worker exit when one of the current workers dies.
It allows you to monitor the master process in order to relaunch
everything on a failure.
For example, it can be used with systemd and Restart=on-failure in a
unit file.
In master worker mode, you can't specify the stats socket where you get
your listener FDs on a reload, because the command line of the re-exec
is launched by the master.
To solve the problem, when -x is found on the command line, its
parameter is rewritten on a reexec with the first stats socket with the
capability to send sockets. It tries to reuse the original parameter if
it has this capability.
In master-worker mode, when the reloading of the configuration fails,
the process exits, leaving the children without their father.
To handle this, we register an exit function with atexit(3), which
re-executes the binary in a special mode. This particular mode of
HAProxy doesn't reload the configuration; it only loops on wait().
The master-worker will reload itself on SIGUSR2/SIGHUP
It's inherited from the systemd wrapper, when the SIGUSR2 signal is
received, the master process will reexecute itself with the -sf flag
followed by the PIDs of the children.
In the systemd wrapper, the children were using a pipe to notify when
the config has been parsed and when the new process is ready. The goal
was to ensure that the process couldn't reload during the parsing of the
configuration, before signals were sent to the old process.
With the new mworker model, the master parses the configuration and is
aware of all the children. We don't need a pipe, but we need to block
those signals before the end of a reload, to ensure that the process
won't be killed during a reload.
The SIGUSR1 signal is forwarded to the children to soft-stop HAProxy.
The SIGTERM and SIGINT signals are forwarded to the children in order to
terminate them.
This commit removes the -Ds systemd mode from HAProxy in order to replace
it with a more generic master-worker system. It aims to entirely replace
the systemd wrapper in the near future.
The master worker mode implements a new way of managing HAProxy
processes. The master is in charge of parsing the configuration
file and is responsible for spawning child processes.
The master worker mode can be invoked by using the -W flag. It can be
used either in background mode (-D) or foreground mode. When used in
background mode, the master will fork to daemonize.
In master worker background mode, chroot, setuid and setgid are done in
each child rather than in the master process, because the master process
will still need access to the filesystem to reload the configuration.
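As a usage sketch (paths are assumptions):

    $ haproxy -W -f /etc/haproxy/haproxy.cfg       # master worker in foreground
    $ haproxy -W -D -f /etc/haproxy/haproxy.cfg    # master worker in background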
This patch adds support for a maximum of 32 engines
in async mode.
Some tests have been done using 2 engines simultaneously.
This patch also removes the specific 'async' attribute from the connection
structure. All the code relies only on OpenSSL functions.
ssl-mode-async is a global configuration parameter which enables
asynchronous processing in OpenSSL for all SSL connections haproxy
handles. With SSL_MODE_ASYNC set, TLS I/O operations may indicate a
retry with SSL_ERROR_WANT_ASYNC if an asynchronous-capable engine is
used to perform cryptographic operations. Currently, async mode only
supports one async-capable engine.
This is the latest version of the patchset which includes Emeric's
updates :
- improved async fd cleaning when openssl reports an fd to delete
- prevent conn_fd_handler from calling SSL_{read,write,handshake} until
the async fd is ready, as these operations are very slow and waste CPU
- postponing of SSL_free to ensure the async operation can complete and
does not cause a dereference of a released SSL.
- proper removal of async fd from the fdtab and removal of the unused async
flag.
This patch adds the global 'ssl-engine' keyword. The first argument is an
engine identifier, followed by a list of default algorithms the engine
will operate on.
If the openssl version is too old, an error is reported when the option
is used.
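For illustration, a hedged configuration sketch (the engine name and algorithm list depend on the engines actually available on the system; 'qat' is just an example):

    global
        ssl-engine qat algo RSA,EC
        ssl-mode-async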
When HAProxy is running with multiple processes and some listeners
are bound to processes, the unused sockets were not closed in the other
processes. The aim was to be able to send those listening sockets using
the -x option.
However to ensure the previous behavior which was to close those
sockets, we provided the "no-unused-socket" global option.
This patch changes this behavior, it will close unused sockets which are
not in the same process as an expose-fd socket, making the
"no-unused-socket" option useless.
The "no-unused-socket" option was removed in this patch.
This patch changes the stats socket rights for allowing the sending of
listening sockets.
The previous behavior was to allow any unix stats socket with admin
level to send sockets. This is not possible anymore; you have to set this
option to activate the socket sending.
Example:
stats socket /var/run/haproxy4.sock mode 666 expose-fd listeners level user process 4
The current level variable uses only 2 bits to store the 3 access
levels (user, oper and admin).
This patch adds a bitmask which allows the remaining bits to be used for
other purposes.
In the case where a Lua sample-fetch or converter doesn't return any
value, an access outside the Lua stack can be performed. This patch
checks the stack size before converting the top value to a HAProxy
internal sample.
A workaround consists in checking that a value is always returned
by sample fetches and converters.
This patch should be backported to versions 1.6 and 1.7.
Add "b64dec" as a new converter which can be used to decode a base64
encoded string into its binary representation. It performs the inverse
operation of the "base64" converter.
Some DNS-related network sockets were closed without unregistering their
file descriptors from their underlying kqueue event sets. This patch
replaces the calls to close() with fd_delete() calls, so as to delete such
events attached to DNS network sockets from the kqueue before closing the
sockets.
The bug was introduced by commit 26c6eb8 ("BUG/MAJOR: dns: restart sockets
after fork()") which was backported in 1.7 so this fix has to be backported
there as well.
Thanks to Jim Pingle who reported it and indicated the faulty commit, and
to Lukas Tribus for the trace showing the bad file descriptor.
In haproxy < 1.8, no-sslv3/no-tlsv1x are ignored when force-sslv3/force-tlsv1x
is used (without warning). With this patch, no-sslv3/no-tlsv1x are ignored when
ssl-min-ver or ssl-max-ver is used (with warning).
When all SSL/TLS versions are disabled, generate an error, not a warning.
Example: ssl-min-ver TLSv1.3 (or force-tlsv13) with an openssl <= 1.1.0.
'ssl-min-ver' and 'ssl-max-ver' with argument SSLv3, TLSv1.0, TLSv1.1, TLSv1.2
or TLSv1.3 limit the SSL negotiation version to a continuous range. ssl-min-ver
and ssl-max-ver should be used in replacement of no-tls* and no-sslv3. Warning
and documentation are set accordingly.
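For instance, a hedged configuration sketch of the new directives (the certificate path is an assumption):

    frontend fe_tls
        bind :443 ssl crt /etc/haproxy/certs/site.pem ssl-min-ver TLSv1.0 ssl-max-ver TLSv1.2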
Plan is to add min-tlsxx max-tlsxx configuration, more consistent than no-tlsxx.
Find the real min/max versions (openssl capabilities and haproxy
configuration) and generate a warning on a bad version range.
'no-tlsxx' can generate 'holes':
"The list of protocols available can be further limited using the SSL_OP_NO_X
options of the SSL_CTX_set_options or SSL_set_options functions. Clients should
avoid creating 'holes' in the set of protocols they support, when disabling a
protocol, make sure that you also disable either all previous or all subsequent
protocol versions. In clients, when a protocol version is disabled without
disabling all previous protocol versions, the effect is to also disable all
subsequent protocol versions."
To not break compatibility, "holes" are authorized with a warning, because
openssl 1.1.0 and boringssl deal with them (they keep the upper or lower
range depending on the case and version).
Plan is to add min-tlsxx max-tlsxx configuration, more consistent than no-tlsxx.
This patch introduces internal min/max settings and replaces the
force-tlsxx implementation.
The SSL method configuration is stored in 'struct tls_version_filter'.
The mapping from SSL method configuration to openssl settings is
abstracted in the 'methodVersions' table.
With openssl < 1.1.0, SSL_CTX_set_ssl_version is used to force a version
(min == max).
With openssl >= 1.1.0, SSL_CTX_set_min/max_proto_version is used.
Plan is to add min-tlsxx max-tlsxx configuration, more consistent than
no-tlsxx.
min-tlsxx and max-tlsxx can be overridden in a local definition. These
directives should be the only ones needed in default-server.
To simplify next patches (rework of tls versions settings with min/max) all
ssl/tls version settings relative to default-server are reverted first:
remove: 'sslv3', 'tls*', 'no-force-sslv3', 'no-force-tls*'.
remove from default-server: 'no-sslv3', 'no-tls*'.
Note:
. force-tlsxx == min-tlsxx + max-tlsxx: would be OK in default-server.
. no-tlsxx is kept for compatibility: it should not be propagated to
default-server.
James Brown reported that agent-check mistakenly sends the proxy
protocol header when it's configured. This is obviously wrong as
the agent is an independent service and not a traffic port; let's
disable this.
This fix must be backported to 1.7 and possibly 1.6.
This patch adds a new stats socket command to modify server
FQDNs at run time.
Its syntax:
set server <backend>/<server> fqdn <FQDN>
This patch also adds FQDNs to server state file at the end
of each line for backward compatibility ("-" if not present).
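As a usage illustration of the new command (the backend/server names, FQDN and socket path are assumptions):

    $ echo "set server bk_app/srv1 fqdn app.example.com" | socat stdio /var/run/haproxy.sock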
This patch replaces the calls to TLSvX_X_client/server/_method
by the new TLS_client/server_method and it uses the new functions
SSL_set_min_proto_version and SSL_set_max_proto_version, setting them
at the wanted protocol version using 'force-' statements.
The sample fetch returns all headers including the last jump line.
The last jump line is used to determine if the block of headers is
truncated or not.
These encoding functions do generic stuff and can be used in
other contexts than SPOE. This patch moves the functions spoe_encode_varint
and spoe_decode_varint from spoe to common. It also removes the spoe
prefix.
These functions will be used for encoding values in a new binary sample
fetch.
In chash_get_server_hash, we find the nearest server entries both
before and after the request hash. If the next and prev entries both
point to the same server, the function would exit early and return that
server, to save work.
Before hash-balance-factor this was a valid optimization -- one of nsrv
and psrv would definitely be chosen, so if they are the same there's no
need to choose between them. But with hash-balance-factor it's possible
that adding another request to that server would overload it
(chash_server_is_eligible returns false) and we go further around the
ring. So it's not valid to return before checking for that.
This commit simply removes the early return, as it provided only minimal
savings even when it was correct.
The priv context is not cleaned when we set a new priv context.
This is caused by a stupid swap between two parameters of the
luaL_unref() function.
workaround: use set_priv only once when we process a stream.
This patch should be backported in version 1.7 and 1.6
This patch adds server_template_init() function used to initialize servers
from server templates. It is called just after having parsed a 'server-template'
line.
This patch makes backend sections support 'server-template' new keyword.
Such 'server-template' objects are parsed similarly to a 'server' object
by parse_server() function, but its first arguments are as follows:
server-template <ID prefix> <nb | range> <ip | fqdn>:<port> ...
The remaining arguments are the same as for 'server' lines.
With such server template declarations, servers may be allocated with IDs
built from <ID prefix> and <nb | range> arguments.
For instance declaring:
server-template foo 1-5 google.com:80 ...
or
server-template foo 5 google.com:80 ...
would be equivalent to declare:
server foo1 google.com:80 ...
server foo2 google.com:80 ...
server foo3 google.com:80 ...
server foo4 google.com:80 ...
server foo5 google.com:80 ...
This patch moves the code responsible for finalizing server
initializations after having fully parsed a 'server' line (health-check,
agent check and SNI expression initializations) from parse_server() to new
functions.
This patch moves the code responsible for copying default server settings
to a new server instance from the parse_server() function to new
defsrv_*_cpy()
functions which may be used both during server lines parsing and during server
templates initializations to come.
These defsrv_*_cpy() do not make any reference to anything else than default
server settings.
The 'resolvers' setting was not duplicated from the default server
settings to new server instances when parsing 'server' lines.
The fix is simple: strdup() the default server's resolvers <id> string
argument after having allocated a new server when parsing 'server' lines.
This patch must be backported to 1.7 and 1.6.
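For reference, a hedged configuration sketch of the affected pattern (names and addresses are assumptions):

    resolvers mydns
        nameserver ns1 10.0.0.53:53

    backend app
        # before this fix, 'resolvers mydns' was not inherited by the server line
        default-server resolvers mydns
        server app1 app1.example.com:80 check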
This bug occurs when a redirect rule is applied during the request analysis on a
persistent connection, on a proxy without any server. This means, in a frontend
section or in a listen/backend section with no "server" line.
Because the transaction processing is shortened, no server can be selected to
perform the connection. So if we try to establish it, this fails and a 503 error
is returned, while a 3XX was already sent. So, in this case, HAProxy generates 2
replies and only the first one is expected.
Here is the configuration snippet to easily reproduce the problem:
listen www
bind :8080
mode http
timeout connect 5s
timeout client 3s
timeout server 6s
redirect location /
A simple HTTP/1.1 request without body will trigger the bug:
$ telnet 0 8080
Trying 0.0.0.0...
Connected to 0.
Escape character is '^]'.
GET / HTTP/1.1
HTTP/1.1 302 Found
Cache-Control: no-cache
Content-length: 0
Location: /
HTTP/1.0 503 Service Unavailable
Cache-Control: no-cache
Connection: close
Content-Type: text/html
<html><body><h1>503 Service Unavailable</h1>
No server is available to handle this request.
</body></html>
Connection closed by foreign host.
[wt: only 1.8-dev is impacted though the bug is present in older ones]
In server_parse_sni_expr(), we use the "proxy" global variable, when we
should probably be using "px" given as an argument.
It happens to work by accident right now, but may not in the future.
[wt: better backport it]