/*
REORG/MAJOR: session: rename the "session" entity to "stream"
With HTTP/2, we'll have to support multiplexed streams. A stream is in
fact the largest part of what we currently call a session, it has buffers,
logs, etc.
In order to catch any error, this commit removes any reference to the
struct session and tries to rename most "session" occurrences in function
names to "stream" and "sess" to "strm" when that's related to a session.
The files stream.{c,h} were added and session.{c,h} removed.
The session will be reintroduced later and a few parts of the stream
will progressively be moved over there. It will more or less contain
only what we need in an embryonic session.
Sample fetch functions and converters will have to change a bit so
that they'll use an L5 (session) instead of what's currently called
"L4" which is in fact L6 for now.
Once all changes are completed, we should see approximately this:
L7 - http_txn
L6 - stream
L5 - session
L4 - connection | applet
There will be at most one http_txn per stream, and the same session may
be referenced by multiple streams. A connection will point to
a session and to a stream. The session will hold all the information
we need to keep even when we don't yet have a stream.
Some more cleanup is needed because some code was already far from
being clean. The server queue management still refers to sessions at
many places while comments talk about connections. This will have to
be cleaned up once we have a server-side connection pool manager.
Stream flags "SN_*" still need to be renamed; it doesn't seem like
any of them will need to move to the session.
* Stream management functions.
*
* Copyright 2000-2012 Willy Tarreau <w@1wt.eu>
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*
*/
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>

#include <common/cfgparse.h>
#include <common/config.h>
#include <common/buffer.h>
[BUG] fix the dequeuing logic to ensure that all requests get served
The dequeuing logic was completely wrong. First, a task was assigned
to all servers to process the queue, but this task was never scheduled
and was only woken up on session free. Second, there was no reservation
of server entries when a task was assigned a server. This means that
as long as the task was not connected to the server, its presence was
not accounted for. This was causing trouble when detecting whether or
not a server had reached maxconn. Third, during a redispatch, a session
could lose its place at the server's and get blocked because another
session at the same moment would have stolen the entry. Fourth, the
redispatch option did not work when maxqueue was reached for a server,
and it was not possible to do so without indefinitely hanging a session.
The root cause of all those problems was the lack of pre-reservation of
connections at the server's, and the lack of tracking of servers during
a redispatch. Everything relied on combinations of flags which could
appear similarly in quite distinct situations.
This patch is a major rework but there was no other solution, as the
internal logic was deeply flawed. The resulting code is cleaner, more
understandable, uses less magic and is overall more robust.
As an added bonus, "option redispatch" now works when maxqueue has
been reached on a server.
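The pre-reservation idea above can be sketched in a few lines: count a slot on the server as soon as a task is assigned to it, not only once connected, so that maxconn checks also see pending assignments. This is a hypothetical miniature for illustration; the struct and function names are invented and are not HAProxy's.

```c
#include <assert.h>

/* Hypothetical sketch of server-slot pre-reservation. The names here are
 * illustrative only, not HAProxy's real server accounting. */
struct mini_server {
	int maxconn;
	int served;	/* reserved + established connections */
};

/* Try to reserve a slot; the caller must release it on redispatch or free. */
static int srv_reserve(struct mini_server *sv)
{
	if (sv->served >= sv->maxconn)
		return -1;	/* full: the session must be queued instead */
	sv->served++;
	return 0;
}

static void srv_release(struct mini_server *sv)
{
	sv->served--;
}
```

Because the slot is held from assignment onward, a redispatched session cannot have its place stolen by another session arriving at the same moment.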
#include <common/debug.h>
#include <common/memory.h>

#include <types/applet.h>
#include <types/capture.h>
#include <types/cli.h>
MAJOR: filters: Add filters support
This patch adds support for filters in HAProxy. The main idea is to have a
way to "easily" extend HAProxy by adding some "modules", called filters, that
will be able to change HAProxy's behavior in a programmatic way.
To do so, many entry points have been added in the code to let filters hook into
different steps of the processing. A filter must define a flt_ops structure
(see include/types/filters.h for details). This structure contains all available
callbacks that a filter can define:
struct flt_ops {
/*
* Callbacks to manage the filter lifecycle
*/
int (*init) (struct proxy *p);
void (*deinit)(struct proxy *p);
int (*check) (struct proxy *p);
/*
* Stream callbacks
*/
void (*stream_start) (struct stream *s);
void (*stream_accept) (struct stream *s);
void (*session_establish)(struct stream *s);
void (*stream_stop) (struct stream *s);
/*
* HTTP callbacks
*/
int (*http_start) (struct stream *s, struct http_msg *msg);
int (*http_start_body) (struct stream *s, struct http_msg *msg);
int (*http_start_chunk) (struct stream *s, struct http_msg *msg);
int (*http_data) (struct stream *s, struct http_msg *msg);
int (*http_last_chunk) (struct stream *s, struct http_msg *msg);
int (*http_end_chunk) (struct stream *s, struct http_msg *msg);
int (*http_chunk_trailers)(struct stream *s, struct http_msg *msg);
int (*http_end_body) (struct stream *s, struct http_msg *msg);
void (*http_end) (struct stream *s, struct http_msg *msg);
void (*http_reset) (struct stream *s, struct http_msg *msg);
int (*http_pre_process) (struct stream *s, struct http_msg *msg);
int (*http_post_process) (struct stream *s, struct http_msg *msg);
void (*http_reply) (struct stream *s, short status,
const struct chunk *msg);
};
To declare and use a filter, in the configuration, the "filter" keyword must be
used in a listener/frontend section:
frontend test
...
filter <FILTER-NAME> [OPTIONS...]
The filter referenced by <FILTER-NAME> must declare a configuration parser
on its own name to fill the flt_ops and filter_conf fields in the proxy's
structure. An example will be provided later to make it perfectly clear.
For now, filters cannot be used in a backend section, but this is only a matter
of time. Documentation will also be added later. This is the first commit of a
long series about filters.
It is possible to have several filters on the same listener/frontend. These
filters are stored in an array of at most MAX_FILTERS elements (defined in
include/types/filters.h). Again, this will later be replaced by a list of
filters.
The filter API has been highly refactored. Main changes are:
* Now, HA supports an unlimited number of filters per proxy. To do so, filters
are stored in a list.
* Because filters are stored in a list, the filter state has been moved from
the channel structure to the filter structure. This is cleaner because there is
no more filter-related info in the channel structure.
* It is possible to define filters on backends only. For such filters,
stream_start/stream_stop callbacks are not called. Of course, it is possible
to mix frontend and backend filters.
* Now, TCP streams are also filtered. All callbacks without the 'http_' prefix
are called for all kinds of streams. In addition, 2 new callbacks were added to
filter data exchanged through a TCP stream:
- tcp_data: it is called when new data are available or when old unprocessed
data are still waiting.
- tcp_forward_data: it is called when some data can be consumed.
* New callbacks attached to channel were added:
- channel_start_analyze: it is called when a filter is ready to process data
exchanged through a channel. 2 new analyzers (a frontend and a backend)
are attached to channels to call this callback. For a frontend filter, it
is called before any other analyzer. For a backend filter, it is called
when a backend is attached to a stream. So some processing cannot be
filtered in that case.
- channel_analyze: it is called before each analyzer attached to a channel,
except analyzers responsible for data sending.
- channel_end_analyze: it is called when all other analyzers have finished
their processing. A new analyzer is attached to channels to call this
callback. For a TCP stream, this is always the last one called. For an HTTP
one, the callback is called when a request/response ends, so it is called
once for each request/response.
* 'session_established' callback has been removed. Everything that is done in
this callback can be handled by 'channel_start_analyze' on the response
channel.
* 'http_pre_process' and 'http_post_process' callbacks have been replaced by
'channel_analyze'.
* 'http_start' callback has been replaced by 'http_headers'. This new one is
called just before headers sending and parsing of the body.
* 'http_end' callback has been replaced by 'channel_end_analyze'.
* It is possible to set a forwarder for TCP channels. It was already possible to
do it for HTTP ones.
* Forwarders can partially consume forwardable data. For this reason a new
HTTP message state was added before HTTP_MSG_DONE: HTTP_MSG_ENDING.
Now all filters can define corresponding callbacks (http_forward_data
and tcp_forward_data). Each filter owns 2 offsets relative to buf->p, next and
forward, to track, respectively, input data already parsed but not yet forwarded
by the filter, and parsed data considered as forwarded by the filter. At any
time, we have the guarantee that a filter cannot parse or forward more input
than previous ones. And, of course, it cannot forward more input than it has
parsed. Two macros have been added to retrieve these offsets: FLT_NXT and FLT_FWD.
In addition, two functions have been added to change the 'next size' and the
'forward size' of a filter. When a filter parses input data, it can alter this
data, so its size can vary. This action has an effect on all previous filters,
which must be handled. To do so, the function 'filter_change_next_size' must be
called, passing the size variation. In the same spirit, if a filter alters
forwarded data, it must call 'filter_change_forward_size'.
'filter_change_next_size' may only be called in the 'http_data' and 'tcp_data'
callbacks, and 'filter_change_forward_size' only in the 'http_forward_data' and
'tcp_forward_data' callbacks. The data changes are the filter's responsibility,
but with some limitations: it must not change already parsed/forwarded data, or
data that previous filters have not parsed/forwarded yet.
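The next/forward invariant described above can be modeled in a few lines. FLT_NXT and FLT_FWD are the real macro names from the commit; everything else below is an invented toy model, not HAProxy's code.

```c
#include <assert.h>

/* Toy model of the per-filter next/forward offsets: for each filter,
 * forward <= next, and no filter may be ahead of the one before it. */
struct mini_flt {
	int next;	/* bytes parsed but not yet forwarded */
	int forward;	/* bytes considered forwarded by this filter */
};

/* Check the chain invariant over an array of filters in chain order. */
static int flt_offsets_valid(const struct mini_flt *flt, int nb)
{
	int i;

	for (i = 0; i < nb; i++) {
		if (flt[i].forward > flt[i].next)
			return 0;	/* forwarded more than it parsed */
		if (i > 0 && (flt[i].next > flt[i-1].next ||
			      flt[i].forward > flt[i-1].forward))
			return 0;	/* ahead of the previous filter */
	}
	return 1;
}
```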
Because filters can be used on backends, when the backend is set for a
stream, we add the filters defined for this backend to the stream's filter
list. But we must only do that when the backend and the frontend of the stream
are not the same; otherwise the same filters are added a second time, leading
to undefined behavior.
The HTTP compression code had to be moved.
This simplifies the http_response_forward_body function. To do so, the way data
is forwarded has changed. Now, a filter (and only one) can forward data. In a
commit to come, this limitation will be removed to let all filters take part in
data forwarding. There are 2 new functions that filters should use to deal with
this feature:
* flt_set_http_data_forwarder: This function sets the filter (using its id)
that will forward data for the specified HTTP message. This is only possible if
it was not already set by another filter _AND_ if no data has been forwarded
yet (msg->msg_state <= HTTP_MSG_BODY). It returns -1 if an error occurs.
* flt_http_data_forwarder: This function returns the filter id that will
forward data for the specified HTTP message. If there is no forwarder set, it
returns -1.
When an HTTP data forwarder is set for the response, the HTTP compression is
disabled. Of course, this is not definitive.
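The flt_ops callback-table pattern described above can be sketched with a self-contained miniature. The structs and helper names below are simplified stand-ins for illustration only, not HAProxy's real types: filters are chained per proxy and each callback slot may be left NULL.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative miniature of a flt_ops-style callback table; these names
 * are invented for the example and are NOT HAProxy's internals. */
struct mini_proxy {
	int inited;	/* counts successful init callbacks, for the demo */
};

struct mini_flt_ops {
	int  (*init)(struct mini_proxy *p);
	void (*deinit)(struct mini_proxy *p);
};

struct mini_filter {
	const struct mini_flt_ops *ops;
	struct mini_filter *next;	/* filters chained per proxy */
};

/* Walk the filter chain and run each init callback, stopping on the
 * first failure, as a per-proxy filter list would be initialized. */
static int mini_flt_init_all(struct mini_proxy *p, struct mini_filter *head)
{
	struct mini_filter *f;

	for (f = head; f; f = f->next)
		if (f->ops->init && f->ops->init(p) < 0)
			return -1;
	return 0;
}

static int trace_init(struct mini_proxy *p)
{
	p->inited++;
	return 0;
}

static const struct mini_flt_ops trace_ops = { .init = trace_init };
```

Leaving unused slots NULL and testing before the call keeps each filter free to implement only the hooks it needs.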
#include <types/filters.h>
#include <types/global.h>
#include <types/stats.h>

#include <proto/acl.h>
#include <proto/action.h>
#include <proto/arg.h>
#include <proto/backend.h>
#include <proto/channel.h>
#include <proto/checks.h>
#include <proto/cli.h>
#include <proto/connection.h>
#include <proto/stats.h>
#include <proto/fd.h>
#include <proto/filters.h>
#include <proto/freq_ctr.h>
#include <proto/frontend.h>
#include <proto/hdr_idx.h>
#include <proto/hlua.h>
#include <proto/listener.h>
#include <proto/log.h>
#include <proto/raw_sock.h>
#include <proto/session.h>
#include <proto/stream.h>
#include <proto/pipe.h>
#include <proto/proto_http.h>
#include <proto/proto_tcp.h>
#include <proto/proxy.h>
#include <proto/queue.h>
#include <proto/server.h>
#include <proto/sample.h>
#include <proto/stick_table.h>
#include <proto/stream_interface.h>
#include <proto/task.h>
#include <proto/vars.h>
struct pool_head *pool2_stream;
struct list streams;
/* list of streams waiting for at least one buffer */
MAJOR: session: implement a wait-queue for sessions who need a buffer
When a session_alloc_buffers() fails to allocate one or two buffers,
it subscribes the session to buffer_wq, and waits for another session
to release buffers. It's then removed from the queue and woken up with
TASK_WAKE_RES, and can attempt its allocation again.
We decide to try to wake as many waiters as we release buffers, so
that if we release 2 and two waiters each need only one, they both get
their chance. We must never end up in a situation where we don't wake
enough tasks up.
It's common to release buffers after the completion of an I/O callback,
which can happen even if the I/O could not be performed due to half a
failure on memory allocation. In this situation, we don't want to move
out of the wait queue the session that was just added, otherwise it
will never get any buffer. Thus, we only force ourselves out of the
queue when freeing the session.
Note: at the moment, since session_alloc_buffers() is not used, no task
is subscribed to the wait queue.
struct list buffer_wq = LIST_HEAD_INIT(buffer_wq);
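The "wake as many waiters as buffers released" rule described above can be modeled with a tiny stand-alone wait queue. This is a simplified sketch, not HAProxy's task code; the names and fixed-size queue are invented for the example.

```c
#include <assert.h>

/* Simplified model of a buffer wait queue: releasing N buffers wakes up
 * to N waiters, so two waiters each needing one buffer both get a chance. */
#define MAX_WAITERS 8

struct waiter { int woken; };

static struct waiter *wait_q[MAX_WAITERS];
static int wait_q_len;

static void buffer_wq_add(struct waiter *w)
{
	if (wait_q_len < MAX_WAITERS)
		wait_q[wait_q_len++] = w;
}

/* Release 'nbuf' buffers and wake up to 'nbuf' waiters; returns the
 * number actually woken. We never wake fewer tasks than we could. */
static int buffer_wq_release(int nbuf)
{
	int woken = 0;

	while (nbuf-- > 0 && wait_q_len > 0) {
		struct waiter *w = wait_q[--wait_q_len];
		w->woken = 1;
		woken++;
	}
	return woken;
}
```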
/* List of all use-service keywords. */
static struct list service_keywords = LIST_HEAD_INIT(service_keywords);
/* This function is called from the session handler which detects the end of
 * handshake, in order to complete initialization of a valid stream. It must be
 * called with a session (which may be embryonic). It returns the pointer to
 * the newly created stream, or NULL in case of fatal error. The client-facing
 * end point is assigned to <origin>, which must be valid. The task's context
 * is set to the new stream, and its function is set to process_stream().
 * Target and analysers are null.
 */
struct stream *stream_new(struct session *sess, struct task *t, enum obj_type *origin)
{
	struct stream *s;
	struct connection *conn = objt_conn(origin);
	struct appctx *appctx = objt_appctx(origin);

	if (unlikely((s = pool_alloc2(pool2_stream)) == NULL))
		return s;
	/* minimum stream initialization required for an embryonic stream is
	 * fairly low. We need very little to execute L4 ACLs, then we need a
	 * task to make the client-side connection live on its own.
	 *  - flags
	 *  - stick-entry tracking
	 */
	s->flags = 0;
	s->logs.logwait = sess->fe->to_log;
	s->logs.level = 0;
	s->logs.accept_date = sess->accept_date; /* user-visible date for logging */
	s->logs.tv_accept = sess->tv_accept;     /* corrected date for internal use */
MEDIUM: log: Decompose %Tq in %Th %Ti %TR
Tq is the time between the instant the connection is accepted and a
complete valid request is received. This time includes the handshake
(SSL / Proxy-Protocol), the idle when the browser does preconnect and
the request reception.
This patch decomposes %Tq into 3 measurements named %Th, %Ti, and %TR,
which report respectively the handshake time, the idle time and the
duration of valid request reception. It also adds %Ta, which reports
the request's active time, i.e. the total time without %Th nor %Ti.
It replaces %Tt as the total time, reporting accurate measurements for
HTTP persistent connections.
%Th is available for TCP and HTTP sessions; %Ti, %TR and %Ta are only
available for HTTP connections.
In addition to this, we have new timestamps %tr, %trg and %trl, which
log the date of start of receipt of the request, respectively in the
default format, in GMT time and in local time (by analogy with %t, %T
and %Tl). All of them are obviously only available for HTTP. These values
are more relevant as they more accurately represent the request date
without being skewed by a browser's preconnect nor a keep-alive idle
time.
The HTTP log format and the CLF log format have been modified to
use %tr, %TR, and %Ta respectively instead of %t, %Tq and %Tt. This
way the default log formats now produce the expected output for users
who don't want to manually fiddle with the log-format directive.
Example with the following log-format :
log-format "%ci:%cp [%tr] %ft %b/%s h=%Th/i=%Ti/R=%TR/w=%Tw/c=%Tc/r=%Tr/a=%Ta/t=%Tt %ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r"
The request was sent by hand using "openssl s_client -connect" :
Aug 23 14:43:20 haproxy[25446]: 127.0.0.1:45636 [23/Aug/2016:14:43:20.221] test~ test/test h=6/i=2375/R=261/w=0/c=1/r=0/a=262/t=2643 200 145 - - ---- 1/1/0/0/0 0/0 "GET / HTTP/1.1"
=> 6 ms of SSL handshake, 2375 ms waiting before sending the first char (in
fact the time to type the first line), 261 ms before the end of the request,
no time spent in queue, 1 ms spent connecting to the server, immediate
response, total active time for this request = 262 ms. Total time from accept
to close: 2643 ms.
The timing now decomposes like this :
first request 2nd request
|<-------------------------------->|<-------------- ...
t tr t tr ...
---|----|----|----|----|----|----|----|----|--
: Th Ti TR Tw Tc Tr Td : Ti ...
:<---- Tq ---->: :
:<-------------- Tt -------------->:
:<--------- Ta --------->:
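The decomposition above can be checked numerically with the values from the sample log line (h=6, i=2375, R=261, a=262, t=2643). The helper functions below are invented for this illustration; only the timer names come from the commit.

```c
#include <assert.h>

/* %Ta: active time = total time minus handshake and idle
 * (Ta = Tt - Th - Ti) */
static int active_time(int tt, int th, int ti)
{
	return tt - th - ti;
}

/* old %Tq bundled handshake, idle and request receipt
 * (Tq = Th + Ti + TR) */
static int legacy_tq(int th, int ti, int tr_dur)
{
	return th + ti + tr_dur;
}
```

With the sample values, Ta = 2643 - 6 - 2375 = 262 ms, matching the a=262 field, which is why %Ta is the meaningful per-request measure on keep-alive connections.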
/* This function is called just after the handshake, so the handshake duration
 * is between the accept time and now.
 */
s->logs.t_handshake = tv_ms_elapsed(&sess->tv_accept, &now);
s->logs.t_idle = -1;
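A tv_ms_elapsed()-style helper simply returns the number of milliseconds between two timevals; a minimal sketch of what it computes (HAProxy's real helper lives in its time utilities and may round or guard differently):

```c
#include <sys/time.h>

/* Milliseconds elapsed between 'from' and 'to' (assumes to >= from). */
static long ms_elapsed(const struct timeval *from, const struct timeval *to)
{
	return (to->tv_sec - from->tv_sec) * 1000L
	     + (to->tv_usec - from->tv_usec) / 1000;
}
```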
2015-04-05 06:03:54 -04:00
tv_zero(&s->logs.tv_request);
s->logs.t_queue = -1;
s->logs.t_connect = -1;
s->logs.t_data = -1;
s->logs.t_close = 0;
s->logs.bytes_in = s->logs.bytes_out = 0;
s->logs.prx_queue_size = 0;  /* we get the number of pending conns before us */
s->logs.srv_queue_size = 0;  /* we will get this number soon */
/* default logging function */
s->do_log = strm_log;
/* default error reporting function, may be changed by analysers */
s->srv_error = default_srv_error;
2015-04-04 12:08:21 -04:00
/* Initialise the current rule list pointer to NULL. We are sure that
 * no rule list matches the NULL pointer.
 */
s->current_rule_list = NULL;
2015-07-22 11:10:58 -04:00
s->current_rule = NULL;
2015-04-04 12:08:21 -04:00
2015-09-21 11:48:24 -04:00
/* Copy SC counters for the stream. We don't touch refcounts because
 * any reference we have is inherited from the session. Since the stream
 * doesn't exist without the session, the session's existence guarantees
 * we don't lose the entry. During the store operation, the stream won't
 * touch these ones.
2015-08-18 05:34:18 -04:00
*/
2015-08-16 06:03:39 -04:00
memcpy(s->stkctr, sess->stkctr, sizeof(s->stkctr));
2015-04-04 12:08:21 -04:00
s->sess = sess;
s->si[0].flags = SI_FL_NONE;
s->si[1].flags = SI_FL_ISBACK;
s->uniq_id = global.req_count++;
2015-04-02 18:22:06 -04:00
/* OK, we're keeping the stream, so let's properly initialize the stream */
LIST_ADDQ(&streams, &s->list);
2012-08-31 10:01:23 -04:00
LIST_INIT(&s->back_refs);
MAJOR: session: implement a wait-queue for sessions who need a buffer
When a session_alloc_buffers() fails to allocate one or two buffers,
it subscribes the session to buffer_wq, and waits for another session
to release buffers. It's then removed from the queue and woken up with
TASK_WAKE_RES, and can attempt its allocation again.
We decide to try to wake as many waiters as we release buffers, so
that if we release 2 and two waiters each need only one, they both have
their chance. We must never come to a situation where we don't wake
enough tasks up.
It's common to release buffers after the completion of an I/O callback,
which can happen even if the I/O could not be performed due to a partial
memory allocation failure. In this situation, we don't want to remove
from the wait queue the session that was just added, otherwise it
will never get any buffer. Thus, we only force ourselves out of the
queue when freeing the session.
Note: at the moment, since session_alloc_buffers() is not used, no task
is subscribed to the wait queue.
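The wake-up rule above (release N buffers, wake up to N waiters) can be sketched with a plain linked list standing in for buffer_wq and the LIST_* macros (names and types here are illustrative, not HAProxy's):

```c
/* Minimal sketch of the "wake as many waiters as buffers released"
 * rule, using a plain singly linked stack instead of HAProxy's
 * LIST_* macros (illustrative names, not HAProxy's). */
struct waiter {
	struct waiter *next;
	int woken;
};

static struct waiter *buffer_wq;	/* head of the wait queue */

static void wq_subscribe(struct waiter *w)
{
	w->woken = 0;
	w->next = buffer_wq;
	buffer_wq = w;
}

/* Called after releasing 'nbuf' buffers: wake up to 'nbuf' waiters so
 * that no released buffer goes unclaimed. */
static void wq_wake(int nbuf)
{
	while (nbuf-- > 0 && buffer_wq) {
		struct waiter *w = buffer_wq;
		buffer_wq = w->next;
		w->woken = 1;	/* stands in for the TASK_WAKE_RES wake-up */
	}
}
```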
2014-11-25 15:10:35 -05:00
LIST_INIT(&s->buffer_wait);
2013-10-14 15:32:07 -04:00
2015-04-02 19:14:29 -04:00
s->flags |= SF_INITIALIZED;
2012-08-31 10:01:23 -04:00
s->unique_id = NULL;
2015-04-04 12:08:21 -04:00
s->task = t;
2015-04-05 18:25:48 -04:00
t->process = process_stream;
2012-08-31 10:01:23 -04:00
t->context = s;
t->expire = TICK_ETERNITY;
2015-04-02 18:22:06 -04:00
/* Note: initially, the stream's backend points to the frontend.
2012-08-31 10:01:23 -04:00
* This changes later when switching rules are executed or
 * when the default backend is assigned.
 */
2015-04-03 09:40:56 -04:00
s->be = sess->fe;
2014-11-27 14:45:39 -05:00
s->req.buf = s->res.buf = NULL;
2015-04-03 16:16:32 -04:00
s->req_cap = NULL;
s->res_cap = NULL;
2010-11-11 04:56:04 -05:00
2015-06-19 05:59:02 -04:00
/* Initialise all the variable contexts even if not used.
 * This allows these contexts to be pruned without errors.
2015-06-06 13:29:07 -04:00
*/
vars_init(&s->vars_txn, SCOPE_TXN);
vars_init(&s->vars_reqres, SCOPE_REQ);
2013-10-11 13:34:20 -04:00
/* this part should be common with other protocols */
2014-11-28 06:12:34 -05:00
si_reset(&s->si[0]);
2013-10-11 13:34:20 -04:00
si_set_state(&s->si[0], SI_ST_EST);
2015-04-04 08:28:46 -04:00
/* attach the incoming connection to the stream interface now. */
2015-04-04 19:30:42 -04:00
if (conn)
	si_attach_conn(&s->si[0], conn);
2015-04-04 19:33:13 -04:00
else if (appctx)
	si_attach_appctx(&s->si[0], appctx);
2013-10-11 13:34:20 -04:00
2015-04-03 09:40:56 -04:00
if (likely(sess->fe->options2 & PR_O2_INDEPSTR))
2013-10-11 13:34:20 -04:00
	s->si[0].flags |= SI_FL_INDEP_STR;
2010-06-01 11:45:26 -04:00
/* pre-initialize the other side's stream interface to an INIT state. The
 * callbacks will be initialized before attempting to connect.
 */
2014-11-28 06:12:34 -05:00
si_reset(&s->si[1]);
2013-10-01 04:45:07 -04:00
2015-04-03 09:40:56 -04:00
if (likely(sess->fe->options2 & PR_O2_INDEPSTR))
2010-06-01 11:45:26 -04:00
	s->si[1].flags |= SI_FL_INDEP_STR;
2015-04-02 18:22:06 -04:00
stream_init_srv_conn(s);
2015-04-05 18:25:48 -04:00
s->target = NULL;
2010-06-01 11:45:26 -04:00
s->pend_pos = NULL;
/* init store persistence */
s->store_count = 0;
2014-11-27 14:45:39 -05:00
channel_init(&s->req);
s->req.flags |= CF_READ_ATTACHED; /* the producer is already connected */
2015-04-05 18:25:48 -04:00
s->req.analysers = 0;
channel_auto_connect(&s->req);  /* don't wait to establish connection */
channel_auto_close(&s->req);    /* let the producer forward close requests */
2015-04-05 12:15:59 -04:00
s->req.rto = sess->fe->timeout.client;
2014-11-27 14:45:39 -05:00
s->req.wto = TICK_ETERNITY;
s->req.rex = TICK_ETERNITY;
s->req.wex = TICK_ETERNITY;
s->req.analyse_exp = TICK_ETERNITY;
2014-11-24 05:36:57 -05:00
2014-11-27 14:45:39 -05:00
channel_init(&s->res);
2014-11-28 08:17:09 -05:00
s->res.flags |= CF_ISRESP;
2014-11-27 14:45:39 -05:00
s->res.analysers = 0;
2010-06-01 11:45:26 -04:00
2015-04-03 09:40:56 -04:00
if (sess->fe->options2 & PR_O2_NODELAY) {
2014-11-27 14:45:39 -05:00
	s->req.flags |= CF_NEVER_WAIT;
	s->res.flags |= CF_NEVER_WAIT;
2011-05-30 12:10:30 -04:00
}
2015-04-05 12:15:59 -04:00
s->res.wto = sess->fe->timeout.client;
2014-11-27 14:45:39 -05:00
s->res.rto = TICK_ETERNITY;
s->res.rex = TICK_ETERNITY;
s->res.wex = TICK_ETERNITY;
s->res.analyse_exp = TICK_ETERNITY;
2010-06-01 11:45:26 -04:00
2015-04-03 17:46:31 -04:00
s->txn = NULL;
2012-03-09 05:32:30 -05:00
2015-03-04 10:48:34 -05:00
HLUA_INIT(&s->hlua);
2015-02-16 14:11:43 -05:00
2015-11-05 07:35:03 -05:00
if (flt_stream_init(s) < 0 || flt_stream_start(s) < 0)
MAJOR: filters: Add filters support
This patch adds the support of filters in HAProxy. The main idea is to have a
way to "easily" extend HAProxy by adding some "modules", called filters, that
will be able to change HAProxy behavior in a programmatic way.
To do so, many entry points have been added in the code to let filters hook into
different steps of the processing. A filter must define a flt_ops structure
(see include/types/filters.h for details). This structure contains all available
callbacks that a filter can define:
struct flt_ops {
/*
* Callbacks to manage the filter lifecycle
*/
int (*init) (struct proxy *p);
void (*deinit)(struct proxy *p);
int (*check) (struct proxy *p);
/*
* Stream callbacks
*/
void (*stream_start) (struct stream *s);
void (*stream_accept) (struct stream *s);
void (*session_establish)(struct stream *s);
void (*stream_stop) (struct stream *s);
/*
* HTTP callbacks
*/
int (*http_start) (struct stream *s, struct http_msg *msg);
int (*http_start_body) (struct stream *s, struct http_msg *msg);
int (*http_start_chunk) (struct stream *s, struct http_msg *msg);
int (*http_data) (struct stream *s, struct http_msg *msg);
int (*http_last_chunk) (struct stream *s, struct http_msg *msg);
int (*http_end_chunk) (struct stream *s, struct http_msg *msg);
int (*http_chunk_trailers)(struct stream *s, struct http_msg *msg);
int (*http_end_body) (struct stream *s, struct http_msg *msg);
void (*http_end) (struct stream *s, struct http_msg *msg);
void (*http_reset) (struct stream *s, struct http_msg *msg);
int (*http_pre_process) (struct stream *s, struct http_msg *msg);
int (*http_post_process) (struct stream *s, struct http_msg *msg);
void (*http_reply) (struct stream *s, short status,
const struct chunk *msg);
};
To declare and use a filter, in the configuration, the "filter" keyword must be
used in a listener/frontend section:
frontend test
...
filter <FILTER-NAME> [OPTIONS...]
The filter referenced by <FILTER-NAME> must declare a configuration parser
on its own name to fill the flt_ops and filter_conf fields in the proxy's
structure. An example will be provided later to make it perfectly clear.
For now, filters cannot be used in a backend section. But this is only a matter
of time. Documentation will also be added later. This is the first commit of a
long list about filters.
It is possible to have several filters on the same listener/frontend. These
filters are stored in an array of at most MAX_FILTERS elements (defined in
include/types/filters.h). Again, this will be replaced later by a list of
filters.
The filter API has been highly refactored. Main changes are:
* Now, HA supports an infinite number of filters per proxy. To do so, filters
are stored in a list.
* Because filters are stored in a list, filter state has been moved from the
channel structure to the filter structure. This is cleaner because there is no
more info about filters in the channel structure.
* It is possible to define filters on backends only. For such filters,
stream_start/stream_stop callbacks are not called. Of course, it is possible
to mix frontend and backend filters.
* Now, TCP streams are also filtered. All callbacks without the 'http_' prefix
are called for all kinds of streams. In addition, 2 new callbacks were added to
filter data exchanged through a TCP stream:
- tcp_data: it is called when new data are available or when old unprocessed
data are still waiting.
- tcp_forward_data: it is called when some data can be consumed.
* New callbacks attached to channel were added:
- channel_start_analyze: it is called when a filter is ready to process data
exchanged through a channel. 2 new analyzers (a frontend and a backend)
are attached to channels to call this callback. For a frontend filter, it
is called before any other analyzer. For a backend filter, it is called
when a backend is attached to a stream. So some processing cannot be
filtered in that case.
- channel_analyze: it is called before each analyzer attached to a channel,
except analyzers responsible for data sending.
- channel_end_analyze: it is called when all other analyzers have finished
their processing. A new analyzer is attached to channels to call this
callback. For a TCP stream, this is always the last one called. For an HTTP
one, the callback is called when a request/response ends, so it is called
once for each request/response.
* 'session_established' callback has been removed. Everything that is done in
this callback can be handled by 'channel_start_analyze' on the response
channel.
* 'http_pre_process' and 'http_post_process' callbacks have been replaced by
'channel_analyze'.
* 'http_start' callback has been replaced by 'http_headers'. This new one is
called just before headers sending and parsing of the body.
* 'http_end' callback has been replaced by 'channel_end_analyze'.
* It is possible to set a forwarder for TCP channels. It was already possible to
do it for HTTP ones.
* Forwarders can partially consume forwardable data. For this reason a new
HTTP message state was added before HTTP_MSG_DONE: HTTP_MSG_ENDING.
Now all filters can define corresponding callbacks (http_forward_data
and tcp_forward_data). Each filter owns 2 offsets relative to buf->p, next and
forward, to track, respectively, input data already parsed but not forwarded yet
by the filter and parsed data considered as forwarded by the filter. At any time,
we have the guarantee that a filter cannot parse or forward more input than
previous ones. And, of course, it cannot forward more input than it has
parsed. 2 macros have been added to retrieve these offsets: FLT_NXT and FLT_FWD.
In addition, 2 functions have been added to change the 'next size' and the
'forward size' of a filter. When a filter parses input data, it can alter these
data, so the size of these data can vary. This action has an effect on all
previous filters that must be handled. To do so, the function
'filter_change_next_size' must be called, passing the size variation. In the
same spirit, if a filter alters forwarded data, it must call the function
'filter_change_forward_size'. 'filter_change_next_size' can be called in
'http_data' and 'tcp_data' callbacks and only these ones. And
'filter_change_forward_size' can be called in 'http_forward_data' and
'tcp_forward_data' callbacks and only these ones. The data changes are the
filter's responsibility, but with some limitations. It must not change already
parsed/forwarded data or data that previous filters have not parsed/forwarded
yet.
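The offset constraints described above can be sketched as a small validity check (illustrative types; the real offsets are the per-filter FLT_NXT/FLT_FWD values relative to buf->p):

```c
/* Sketch of the per-filter offset invariants: for filters applied in
 * order, each one's 'next' (parsed) and 'fwd' (forwarded) offsets must
 * satisfy fwd <= next, and neither may exceed the previous filter's
 * (a filter cannot parse or forward more input than the ones before it). */
struct flt_offsets {
	unsigned int next;	/* parsed but not yet forwarded, like FLT_NXT */
	unsigned int fwd;	/* considered forwarded, like FLT_FWD */
};

static int offsets_valid(const struct flt_offsets *f, int nfilters)
{
	int i;

	for (i = 0; i < nfilters; i++) {
		if (f[i].fwd > f[i].next)
			return 0;	/* can't forward more than parsed */
		if (i > 0 && (f[i].next > f[i - 1].next || f[i].fwd > f[i - 1].fwd))
			return 0;	/* can't outrun a previous filter */
	}
	return 1;
}
```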
Because filters can be used on backends, when the backend is set for a
stream, we add filters defined for this backend to the filter list of the
stream. But we must only do that when the backend and the frontend of the stream
are not the same. Otherwise the same filters are added a second time, leading to
undefined behavior.
The HTTP compression code had to be moved.
This move simplifies the http_response_forward_body function. To do so, the way
the data are forwarded has changed. Now, a filter (and only one) can forward
data. In a commit to come, this limitation will be removed to let all filters
take part in data forwarding. There are 2 new functions that filters should use
to deal with this feature:
* flt_set_http_data_forwarder: This function sets the filter (using its id)
that will forward data for the specified HTTP message. It is possible if it
was not already set by another filter _AND_ if no data was yet forwarded
(msg->msg_state <= HTTP_MSG_BODY). It returns -1 if an error occurs.
* flt_http_data_forwarder: This function returns the filter id that will
forward data for the specified HTTP message. If there is no forwarder set, it
returns -1.
When an HTTP data forwarder is set for the response, the HTTP compression is
disabled. Of course, this is not definitive.
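The registration rule for flt_set_http_data_forwarder described above (succeed only if no forwarder is set yet and msg_state <= HTTP_MSG_BODY) can be sketched like this, with hypothetical simplified types standing in for HAProxy's http_msg:

```c
/* Minimal sketch of the "single data forwarder" rule (hypothetical
 * simplified types; the real function operates on HAProxy's http_msg).
 * Registration only succeeds while no forwarder is set and no body
 * data has been forwarded yet. */
enum msg_state { MSG_RQBEFORE, MSG_BODY, MSG_DATA, MSG_DONE };

struct msg_sk {
	enum msg_state state;
	int forwarder;		/* registered filter id, or -1 when unset */
};

static int set_data_forwarder(struct msg_sk *msg, int filter_id)
{
	if (msg->forwarder != -1 || msg->state > MSG_BODY)
		return -1;	/* already claimed, or data already forwarded */
	msg->forwarder = filter_id;
	return filter_id;
}
```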
2015-04-30 05:48:27 -04:00
	goto out_fail_accept;
2010-06-01 11:45:26 -04:00
/* finish initialization of the accepted file descriptor */
2015-04-04 19:30:42 -04:00
if (conn)
	conn_data_want_recv(conn);
2015-04-04 19:33:13 -04:00
else if (appctx)
2015-04-21 13:23:39 -04:00
	si_applet_want_get(&s->si[0]);
2010-06-01 11:45:26 -04:00
2015-04-05 12:19:23 -04:00
if (sess->fe->accept && sess->fe->accept(s) < 0)
2015-04-05 05:52:08 -04:00
	goto out_fail_accept;
2010-06-01 11:45:26 -04:00
/* it is important not to call the wakeup function directly but to
 * pass through task_wakeup(), because this one knows how to apply
 * priorities to tasks.
 */
task_wakeup(t, TASK_WOKEN_INIT);
2015-04-05 06:00:52 -04:00
return s;
2010-06-01 11:45:26 -04:00
/* Error unrolling */
2015-04-05 05:52:08 -04:00
out_fail_accept:
2015-11-05 07:35:03 -05:00
	flt_stream_release(s, 0);
2014-11-25 11:10:33 -05:00
	LIST_DEL(&s->list);
2015-04-04 12:08:21 -04:00
	pool_free2(pool2_stream, s);
2015-04-05 06:00:52 -04:00
	return NULL;
2010-06-01 11:45:26 -04:00
}
2006-06-25 20:48:02 -04:00
/*
2015-04-02 18:22:06 -04:00
 * frees the context associated to a stream. It must have been removed first.
2006-06-25 20:48:02 -04:00
*/
2015-04-02 18:22:06 -04:00
static void stream_free(struct stream *s)
2006-06-25 20:48:02 -04:00
{
2015-04-03 13:19:59 -04:00
	struct session *sess = strm_sess(s);
	struct proxy *fe = sess->fe;
2008-12-07 14:16:23 -05:00
	struct bref *bref, *back;
2015-04-03 13:19:59 -04:00
	struct connection *cli_conn = objt_conn(sess->origin);
2010-06-06 12:28:49 -04:00
	int i;
2007-01-07 09:46:13 -05:00
2006-06-25 20:48:02 -04:00
	if (s->pend_pos)
		pendconn_free(s->pend_pos);
2008-12-04 03:33:58 -05:00
2012-11-11 18:42:33 -05:00
	if (objt_server(s->target)) { /* there may be requests left pending in queue */
2015-04-02 19:14:29 -04:00
		if (s->flags & SF_CURR_SESS) {
			s->flags &= ~SF_CURR_SESS;
2012-11-11 18:42:33 -05:00
			objt_server(s->target)->cur_sess--;
2008-11-11 14:20:02 -05:00
}
2012-11-11 18:42:33 -05:00
		if (may_dequeue_tasks(objt_server(s->target), s->be))
			process_srv_queue(objt_server(s->target));
2008-11-11 14:20:02 -05:00
}
2008-12-04 03:33:58 -05:00
[BUG] fix the dequeuing logic to ensure that all requests get served
The dequeuing logic was completely wrong. First, a task was assigned
to all servers to process the queue, but this task was never scheduled
and was only woken up on session free. Second, there was no reservation
of server entries when a task was assigned a server. This means that
as long as the task was not connected to the server, its presence was
not accounted for. This was causing trouble when detecting whether or
not a server had reached maxconn. Third, during a redispatch, a session
could lose its place at the server's and get blocked because another
session at the same moment would have stolen the entry. Fourth, the
redispatch option did not work when maxqueue was reached for a server,
and it was not possible to do so without indefinitely hanging a session.
The root cause of all those problems was the lack of pre-reservation of
connections at the server's, and the lack of tracking of servers during
a redispatch. Everything relied on combinations of flags which could
appear similarly in quite distinct situations.
This patch is a major rework but there was no other solution, as the
internal logic was deeply flawed. The resulting code is cleaner, more
understandable, uses less magic and is overall more robust.
As an added bonus, "option redispatch" now works when maxqueue has
been reached on a server.
2008-06-20 09:04:11 -04:00
	if (unlikely(s->srv_conn)) {
REORG/MAJOR: session: rename the "session" entity to "stream"
With HTTP/2, we'll have to support multiplexed streams. A stream is in
fact the largest part of what we currently call a session, it has buffers,
logs, etc.
In order to catch any error, this commit removes any reference to the
struct session and tries to rename most "session" occurrences in function
names to "stream" and "sess" to "strm" when that's related to a session.
The files stream.{c,h} were added and session.{c,h} removed.
The session will be reintroduced later and a few parts of the stream
will progressively be moved overthere. It will more or less contain
only what we need in an embryonic session.
Sample fetch functions and converters will have to change a bit so
that they'll use an L5 (session) instead of what's currently called
"L4" which is in fact L6 for now.
Once all changes are completed, we should see approximately this :
L7 - http_txn
L6 - stream
L5 - session
L4 - connection | applet
There will be at most one http_txn per stream, and a same session will
possibly be referenced by multiple streams. A connection will point to
a session and to a stream. The session will hold all the information
we need to keep even when we don't yet have a stream.
Some more cleanup is needed because some code was already far from
being clean. The server queue management still refers to sessions at
many places while comments talk about connections. This will have to
be cleaned up once we have a server-side connection pool manager.
Stream flags "SN_*" still need to be renamed, it doesn't seem like
any of them will need to move to the session.
2015-04-02 18:22:06 -04:00
/* the stream still has a reserved slot on a server, but
2008-06-20 09:04:11 -04:00
		 * it should normally be only the same as the one above,
		 * so this should not happen in fact.
		 */
		sess_change_server(s, NULL);
}
2014-11-27 14:45:39 -05:00
	if (s->req.pipe)
		put_pipe(s->req.pipe);
2009-01-18 15:56:21 -05:00
2014-11-27 14:45:39 -05:00
	if (s->res.pipe)
		put_pipe(s->res.pipe);
2009-01-18 15:56:21 -05:00
2014-11-25 15:10:35 -05:00
/* We may still be present in the buffer wait queue */
	if (!LIST_ISEMPTY(&s->buffer_wait)) {
		LIST_DEL(&s->buffer_wait);
		LIST_INIT(&s->buffer_wait);
	}
2014-11-27 14:45:39 -05:00
	b_drop(&s->req.buf);
	b_drop(&s->res.buf);
2014-11-25 15:10:35 -05:00
	if (!LIST_ISEMPTY(&buffer_wq))
		stream_offer_buffers();

	hlua_ctx_destroy(&s->hlua);

	if (s->txn)
		http_end_txn(s);
	/* ensure the client-side transport layer is destroyed */
MAJOR: connection: add two new flags to indicate readiness of control/transport
Currently the control and transport layers of a connection are supposed
to be initialized when their respective pointers are not NULL. This will
not work anymore when we plan to reuse connections, because there is an
asymmetry between the accept() side and the connect() side:
- on the accept() side, the fd is set first, then the ctrl layer, then the
transport layer; upon error, they must be undone in the reverse order,
then the FD must be closed. The FD must not be deleted if the control
layer was not yet initialized;
- on the connect() side, the fd is set last and there is no reliable way
to know whether it has been initialized or not. In practice it's initialized
to -1 first, but this is hackish and assumes that only local FDs will
ever be used. Also, there are even fewer solutions for keeping track
of the transport layer's state.
Also, it is possible to support delayed close() when something (eg: logs)
tracks some information requiring the transport and/or control layers,
making it even more difficult to clean them up.
So the proposed solution is to add two flags to the connection:
- CO_FL_CTRL_READY is set when the control layer is initialized (fd_insert)
and cleared after it's released (fd_delete);
- CO_FL_XPRT_READY is set when the transport layer is initialized (xprt->init)
and cleared after it's released (xprt->close).
The functions have been adapted to rely on these flags and not on the pointers
anymore. conn_xprt_close() was unused and dangerous: it did not close
the control layer (eg: the socket itself) but still marked the transport
layer as closed, preventing any future call to conn_full_close() from
finishing the job.
The problem comes from conn_full_close() in fact. It needs to close the
xprt and ctrl layers independently. After that we still have an issue:
we don't know based on ->ctrl alone whether the fd was registered or not.
For this we use the two new flags CO_FL_XPRT_READY and CO_FL_CTRL_READY. We
now rely on these, and not on conn->xprt nor conn->ctrl anymore, to decide what
remains to be done on the connection.
In order not to miss some flag assignments, we introduce conn_ctrl_init()
to initialize the control layer, register the fd using fd_insert() and set
the flag, and conn_ctrl_close() which unregisters the fd and removes the
flag, but only if the transport layer was closed.
Similarly, at the transport layer, conn_xprt_init() calls ->init and sets
the flag, while conn_xprt_close() checks the flag, calls ->close and clears
the flag, regardless of xprt_ctx or xprt_st. This also ensures that the ->init
and ->close functions are called only once each, and in the correct order.
Note that conn_xprt_close() does nothing if the transport layer is still
tracked.
conn_full_close() now simply calls conn_xprt_close() then conn_ctrl_close()
in turn, which do nothing if CO_FL_XPRT_TRACKED is set.
In order to handle the error path, we also provide conn_force_close() which
ignores CO_FL_XPRT_TRACKED and closes the transport and the control layers
in turn. All relevant instances of fd_delete() have been replaced with
conn_force_close(). Now we always know what state the connection is in and
we can expect to split its initialization.
2013-10-21 10:30:56 -04:00
	if (cli_conn)
		conn_force_close(cli_conn);

	for (i = 0; i < s->store_count; i++) {
		if (!s->store[i].ts)
			continue;
		stksess_free(s->store[i].table, s->store[i].ts);
		s->store[i].ts = NULL;
	}

	if (s->txn) {
		pool_free2(pool2_hdr_idx, s->txn->hdr_idx.v);
		pool_free2(pool2_http_txn, s->txn);
		s->txn = NULL;
	}
MAJOR: filters: Add filters support
This patch adds support for filters in HAProxy. The main idea is to have a
way to "easily" extend HAProxy by adding "modules", called filters, that
can change HAProxy's behavior in a programmatic way.
To do so, many entry points have been added in the code to let filters hook
into different steps of the processing. A filter must define a flt_ops structure
(see include/types/filters.h for details). This structure contains all available
callbacks that a filter can define:
struct flt_ops {
       /*
        * Callbacks to manage the filter lifecycle
        */
       int  (*init)  (struct proxy *p);
       void (*deinit)(struct proxy *p);
       int  (*check) (struct proxy *p);
       /*
        * Stream callbacks
        */
       void (*stream_start)     (struct stream *s);
       void (*stream_accept)    (struct stream *s);
       void (*session_establish)(struct stream *s);
       void (*stream_stop)      (struct stream *s);
       /*
        * HTTP callbacks
        */
       int  (*http_start)         (struct stream *s, struct http_msg *msg);
       int  (*http_start_body)    (struct stream *s, struct http_msg *msg);
       int  (*http_start_chunk)   (struct stream *s, struct http_msg *msg);
       int  (*http_data)          (struct stream *s, struct http_msg *msg);
       int  (*http_last_chunk)    (struct stream *s, struct http_msg *msg);
       int  (*http_end_chunk)     (struct stream *s, struct http_msg *msg);
       int  (*http_chunk_trailers)(struct stream *s, struct http_msg *msg);
       int  (*http_end_body)      (struct stream *s, struct http_msg *msg);
       void (*http_end)           (struct stream *s, struct http_msg *msg);
       void (*http_reset)         (struct stream *s, struct http_msg *msg);
       int  (*http_pre_process)   (struct stream *s, struct http_msg *msg);
       int  (*http_post_process)  (struct stream *s, struct http_msg *msg);
       void (*http_reply)         (struct stream *s, short status,
                                   const struct chunk *msg);
};
To declare and use a filter, the "filter" keyword must be used in a
listener/frontend section of the configuration:
  frontend test
    ...
    filter <FILTER-NAME> [OPTIONS...]
The filter referenced by <FILTER-NAME> must declare a configuration parser
on its own name to fill the flt_ops and filter_conf fields in the proxy's
structure. An example will be provided later to make it perfectly clear.
For now, filters cannot be used in a backend section, but this is only a matter
of time. Documentation will also be added later. This is the first commit of a
long series about filters.
It is possible to have several filters on the same listener/frontend. These
filters are stored in an array of at most MAX_FILTERS elements (defined in
include/types/filters.h). Again, this will be replaced later by a list of
filters.
The filter API has been heavily refactored. The main changes are:
* HAProxy now supports an unlimited number of filters per proxy. To do so,
filters are stored in a list.
* Because filters are stored in a list, filter state has been moved from the
channel structure to the filter structure. This is cleaner because there is no
longer any information about filters in the channel structure.
* It is possible to define filters on backends only. For such filters, the
stream_start/stream_stop callbacks are not called. Of course, it is possible
to mix frontend and backend filters.
* TCP streams are now also filtered. All callbacks without the 'http_' prefix
are called for all kinds of streams. In addition, 2 new callbacks were added to
filter data exchanged through a TCP stream:
  - tcp_data: called when new data are available or when old unprocessed
    data are still waiting.
  - tcp_forward_data: called when some data can be consumed.
* New callbacks attached to channels were added:
  - channel_start_analyze: called when a filter is ready to process data
    exchanged through a channel. 2 new analyzers (a frontend one and a backend
    one) are attached to channels to call this callback. For a frontend filter,
    it is called before any other analyzer. For a backend filter, it is called
    when a backend is attached to a stream, so some processing cannot be
    filtered in that case.
  - channel_analyze: called before each analyzer attached to a channel,
    except analyzers responsible for data sending.
  - channel_end_analyze: called when all other analyzers have finished
    their processing. A new analyzer is attached to channels to call this
    callback. For a TCP stream, this is always the last one called. For an HTTP
    one, the callback is called when a request/response ends, so it is called
    once for each request/response.
* The 'session_established' callback has been removed. Everything done in
this callback can be handled by 'channel_start_analyze' on the response
channel.
* The 'http_pre_process' and 'http_post_process' callbacks have been replaced
by 'channel_analyze'.
* The 'http_start' callback has been replaced by 'http_headers'. This new one
is called just before headers are sent and the body is parsed.
* The 'http_end' callback has been replaced by 'channel_end_analyze'.
* It is possible to set a forwarder for TCP channels. It was already possible
to do so for HTTP ones.
* Forwarders can partially consume forwardable data. For this reason a new
HTTP message state was added before HTTP_MSG_DONE: HTTP_MSG_ENDING.
Now all filters can define the corresponding callbacks (http_forward_data
and tcp_forward_data). Each filter owns 2 offsets relative to buf->p, next and
forward, to track, respectively, input data already parsed but not yet forwarded
by the filter, and parsed data considered as forwarded by the filter. At any
time, we have the guarantee that a filter cannot parse or forward more input
than previous ones. And, of course, it cannot forward more input than it has
parsed. 2 macros have been added to retrieve these offsets: FLT_NXT and FLT_FWD.
In addition, 2 functions have been added to change the 'next size' and the
'forward size' of a filter. When a filter parses input data, it can alter these
data, so their size can vary. This action has an effect on all previous
filters, which must be handled. To do so, the function
'filter_change_next_size' must be called, passing the size variation. In the
same spirit, if a filter alters forwarded data, it must call the function
'filter_change_forward_size'. 'filter_change_next_size' can be called in the
'http_data' and 'tcp_data' callbacks, and only these ones. And
'filter_change_forward_size' can be called in the 'http_forward_data' and
'tcp_forward_data' callbacks, and only these ones. The data changes are the
filter's responsibility, but with some limitations: it must not change already
parsed/forwarded data, nor data that previous filters have not parsed/forwarded
yet.
Because filters can be used on backends, when the backend is set for a stream,
we add the filters defined for this backend to the stream's filter list. But we
must only do that when the backend and the frontend of the stream are not the
same; otherwise the same filters are added a second time, leading to undefined
behavior.
The HTTP compression code had to be moved.
This simplifies the http_response_forward_body function. To do so, the way the
data are forwarded has changed. Now, a filter (and only one) can forward data.
In a commit to come, this limitation will be removed to let all filters take
part in data forwarding. There are 2 new functions that filters should use to
deal with this feature:
* flt_set_http_data_forwarder: sets the filter (using its id) that will forward
  data for the specified HTTP message. This is possible only if it was not
  already set by another filter _AND_ if no data were yet forwarded
  (msg->msg_state <= HTTP_MSG_BODY). It returns -1 if an error occurs.
* flt_http_data_forwarder: returns the filter id that will forward data for the
  specified HTTP message. If there is no forwarder set, it returns -1.
When an HTTP data forwarder is set for the response, HTTP compression is
disabled. Of course, this is not definitive.
2015-04-30 05:48:27 -04:00
	flt_stream_stop(s);

	flt_stream_release(s, 0);
	if (fe) {
		pool_free2(fe->rsp_cap_pool, s->res_cap);
		pool_free2(fe->req_cap_pool, s->req_cap);
	}

	/* Cleanup all variable contexts. */
	vars_prune(&s->vars_txn, s->sess, s);
	vars_prune(&s->vars_reqres, s->sess, s);
	stream_store_counters(s);

	list_for_each_entry_safe(bref, back, &s->back_refs, users) {
		/* we have to unlink all watchers. We must not relink them if
		 * this stream was the last one in the list.
		 */
		LIST_DEL(&bref->users);
		LIST_INIT(&bref->users);
		if (s->list.n != &streams)
			LIST_ADDQ(&LIST_ELEM(s->list.n, struct stream *, list)->back_refs, &bref->users);
		bref->ref = s->list.n;
	}
	LIST_DEL(&s->list);

	si_release_endpoint(&s->si[1]);
	si_release_endpoint(&s->si[0]);

	/* FIXME: for now we have a 1:1 relation between stream and session so
	 * the stream must free the session.
	 */
	pool_free2(pool2_stream, s);

	session_free(sess);

	/* We may want to free the maximum amount of pools if the proxy is stopping */
	if (fe && unlikely(fe->state == PR_STSTOPPED)) {
		pool_flush2(pool2_buffer);
		pool_flush2(pool2_http_txn);
		pool_flush2(pool2_hdr_idx);
		pool_flush2(pool2_requri);
		pool_flush2(pool2_capture);
		pool_flush2(pool2_stream);
		pool_flush2(pool2_session);
		pool_flush2(pool2_connection);
		pool_flush2(pool2_pendconn);
		pool_flush2(fe->req_cap_pool);
		pool_flush2(fe->rsp_cap_pool);
	}
}
2014-12-28 07:09:02 -05:00
/* Allocates a receive buffer for channel <chn>, but only if it's guaranteed
 * that it's not the last available buffer or it's the response buffer. Unless
 * the buffer is the response buffer, an extra control is made so that we always
 * keep <tune.buffers.reserved> buffers available after this allocation. To be
 * called at the beginning of recv() callbacks to ensure that the required
 * buffers are properly allocated. Returns 0 in case of failure, non-zero
 * otherwise.
 */
int stream_alloc_recv_buffer(struct channel *chn)
{
	struct stream *s;
	struct buffer *b;
	int margin = 0;

	if (!(chn->flags & CF_ISRESP))
		margin = global.tune.reserved_bufs;

	s = chn_strm(chn);

	b = b_alloc_margin(&chn->buf, margin);
	if (b)
		return 1;

	if (LIST_ISEMPTY(&s->buffer_wait))
		LIST_ADDQ(&buffer_wq, &s->buffer_wait);
	return 0;
}
/* Allocates a work buffer for stream <s>. It is meant to be called inside
 * process_stream(). It will only allocate the side needed for the function
 * to work fine, which is the response buffer so that an error message may be
 * built and returned. Response buffers may be allocated from the reserve, this
 * is critical to ensure that a response may always flow and will never block a
 * server from releasing a connection. Returns 0 in case of failure, non-zero
 * otherwise.
 */
int stream_alloc_work_buffer(struct stream *s)
{
	if (!LIST_ISEMPTY(&s->buffer_wait)) {
		LIST_DEL(&s->buffer_wait);
		LIST_INIT(&s->buffer_wait);
	}

	if (b_alloc_margin(&s->res.buf, 0))
		return 1;

	LIST_ADDQ(&buffer_wq, &s->buffer_wait);
	return 0;
}
/* releases unused buffers after processing. Typically used at the end of the
 * update() functions. It will try to wake up as many tasks as the number of
 * buffers that it releases. In practice, most often streams are blocked on
 * a single buffer, so it makes sense to try to wake two up when two buffers
 * are released at once.
 */
void stream_release_buffers(struct stream *s)
{
	if (s->req.buf->size && buffer_empty(s->req.buf))
		b_free(&s->req.buf);

	if (s->res.buf->size && buffer_empty(s->res.buf))
		b_free(&s->res.buf);
	/* if we're certain to have at least 1 buffer available, and there is
	 * someone waiting, we can wake up a waiter and offer it a buffer.
	 */
	if (!LIST_ISEMPTY(&buffer_wq))
		stream_offer_buffers();
}
/* Runs across the list of pending streams waiting for a buffer and wakes one
 * up if buffers are available. Will stop when the run queue reaches <rqlimit>.
 * Should not be called directly, use stream_offer_buffers() instead.
*/
void __stream_offer_buffers(int rqlimit)
{
	struct stream *sess, *bak;
	list_for_each_entry_safe(sess, bak, &buffer_wq, buffer_wait) {
		if (rqlimit <= run_queue)
			break;
		if (sess->task->state & TASK_RUNNING)
			continue;

		LIST_DEL(&sess->buffer_wait);
		LIST_INIT(&sess->buffer_wait);
		task_wakeup(sess->task, TASK_WOKEN_RES);
	}
}
/* perform minimal initializations, report 0 in case of error, 1 if OK. */
int init_stream()
{
	LIST_INIT(&streams);
	pool2_stream = create_pool("stream", sizeof(struct stream), MEM_F_SHARED);
	return pool2_stream != NULL;
}
void stream_process_counters(struct stream *s)
{
	struct session *sess = s->sess;
	unsigned long long bytes;
	void *ptr1, *ptr2;
	int i;

	bytes = s->req.total - s->logs.bytes_in;
	s->logs.bytes_in = s->req.total;
	if (bytes) {
		sess->fe->fe_counters.bytes_in += bytes;

		s->be->be_counters.bytes_in += bytes;

		if (objt_server(s->target))
			objt_server(s->target)->counters.bytes_in += bytes;

		if (sess->listener && sess->listener->counters)
			sess->listener->counters->bytes_in += bytes;

		for (i = 0; i < MAX_SESS_STKCTR; i++) {
			struct stkctr *stkctr = &s->stkctr[i];

			if (!stkctr_entry(stkctr)) {
				stkctr = &sess->stkctr[i];
				if (!stkctr_entry(stkctr))
					continue;
			}

			ptr1 = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_BYTES_IN_CNT);
			if (ptr1)
				stktable_data_cast(ptr1, bytes_in_cnt) += bytes;

			ptr2 = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_BYTES_IN_RATE);
			if (ptr2)
				update_freq_ctr_period(&stktable_data_cast(ptr2, bytes_in_rate),
						       stkctr->table->data_arg[STKTABLE_DT_BYTES_IN_RATE].u, bytes);

			/* If data was modified, we need to touch to re-schedule sync */
			if (ptr1 || ptr2)
				stktable_touch(stkctr->table, stkctr_entry(stkctr), 1);
		}
	}

	bytes = s->res.total - s->logs.bytes_out;
	s->logs.bytes_out = s->res.total;
	if (bytes) {
		sess->fe->fe_counters.bytes_out += bytes;

		s->be->be_counters.bytes_out += bytes;

		if (objt_server(s->target))
			objt_server(s->target)->counters.bytes_out += bytes;

		if (sess->listener && sess->listener->counters)
			sess->listener->counters->bytes_out += bytes;

		for (i = 0; i < MAX_SESS_STKCTR; i++) {
			struct stkctr *stkctr = &s->stkctr[i];

			if (!stkctr_entry(stkctr)) {
				stkctr = &sess->stkctr[i];
				if (!stkctr_entry(stkctr))
					continue;
			}

			ptr1 = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_BYTES_OUT_CNT);
			if (ptr1)
				stktable_data_cast(ptr1, bytes_out_cnt) += bytes;

			ptr2 = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_BYTES_OUT_RATE);
			if (ptr2)
				update_freq_ctr_period(&stktable_data_cast(ptr2, bytes_out_rate),
						       stkctr->table->data_arg[STKTABLE_DT_BYTES_OUT_RATE].u, bytes);

			/* If data was modified, we need to touch to re-schedule sync */
			if (ptr1 || ptr2)
				stktable_touch(stkctr->table, stkctr_entry(stkctr), 1);
		}
	}
}
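The accounting pattern above, where `s->logs.bytes_in`/`bytes_out` record what has already been counted so that each call only adds the delta since the previous pass, can be shown in isolation. This is an illustrative sketch with hypothetical names, not HAProxy code:

```c
#include <assert.h>

/* Illustrative delta accounting: 'total' grows monotonically, 'logged'
 * remembers what was already counted, so repeated calls never
 * double-count bytes. */
struct byte_acct {
	unsigned long long total;   /* cumulative bytes seen on the channel */
	unsigned long long logged;  /* bytes already added to the counter */
	unsigned long long counter; /* accumulated statistics counter */
};

static void account_bytes(struct byte_acct *a)
{
	unsigned long long bytes = a->total - a->logged;

	a->logged = a->total;
	if (bytes)
		a->counter += bytes;
}
```

Calling `account_bytes()` twice in a row without new traffic leaves the counter unchanged, which is why stream_process_counters() is safe to call at arbitrary points.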
/* This function is called with (si->state == SI_ST_CON) meaning that a
 * connection was attempted and that the file descriptor is already allocated.
 * We must check for establishment, error and abort. Possible output states
 * are SI_ST_EST (established), SI_ST_CER (error), SI_ST_DIS (abort), and
 * SI_ST_CON (no change). The function returns 0 if it switches to SI_ST_CER,
 * otherwise 1. This only works with connection-based streams.
*/
static int sess_update_st_con_tcp(struct stream *s)
{
	struct stream_interface *si = &s->si[1];
	struct channel *req = &s->req;
	struct channel *rep = &s->res;
	struct connection *srv_conn = __objt_conn(si->end);

	/* If we got an error, or if nothing happened and the connection timed
	 * out, we must give up. The CER state handler will take care of retry
	 * attempts and error reports.
	 */
	if (unlikely(si->flags & (SI_FL_EXP|SI_FL_ERR))) {
		if (unlikely(req->flags & CF_WRITE_PARTIAL)) {
			/* Some data were sent past the connection establishment,
			 * so we need to pretend we're established to log correctly
			 * and let later states handle the failure.
			 */
			si->state    = SI_ST_EST;
			si->err_type = SI_ET_DATA_ERR;
			rep->flags |= CF_READ_ERROR | CF_WRITE_ERROR;
			return 1;
		}

		si->exp   = TICK_ETERNITY;
		si->state = SI_ST_CER;

		conn_force_close(srv_conn);

		if (si->err_type)
			return 0;

		if (si->flags & SI_FL_ERR)
			si->err_type = SI_ET_CONN_ERR;
		else
			si->err_type = SI_ET_CONN_TO;
		return 0;
	}

	/* OK, maybe we want to abort */
	if (!(req->flags & CF_WRITE_PARTIAL) &&
	    unlikely((rep->flags & CF_SHUTW) ||
		     ((req->flags & CF_SHUTW_NOW) && /* FIXME: this should not prevent a connection from establishing */
		      ((!(req->flags & CF_WRITE_ACTIVITY) && channel_is_empty(req)) ||
		       s->be->options & PR_O_ABRT_CLOSE)))) {
		/* give up */
		si_shutw(si);
		si->err_type |= SI_ET_CONN_ABRT;
		if (s->srv_error)
			s->srv_error(s, si);
		return 1;
	}

	/* we need to wait a bit more if there was no activity either */
	if (!(req->flags & CF_WRITE_ACTIVITY))
		return 1;

	/* OK, this means that a connection succeeded. The caller will be
	 * responsible for handling the transition from CON to EST.
	 */
	si->state    = SI_ST_EST;
	si->err_type = SI_ET_NONE;
	return 1;
}
/* This function is called with (si->state == SI_ST_CER) meaning that a
 * previous connection attempt has failed and that the file descriptor
 * has already been released. Possible causes include asynchronous error
 * notification and time out. Possible output states are SI_ST_CLO when
 * retries are exhausted, SI_ST_TAR when a delay is wanted before a new
 * connection attempt, SI_ST_ASS when it's wise to retry on the same server,
 * and SI_ST_REQ when an immediate redispatch is wanted. The buffers are
 * marked as in error state. It returns 0.
 */
static int sess_update_st_cer(struct stream *s)
{
	struct stream_interface *si = &s->si[1];
	/* we probably have to release last stream from the server */
	if (objt_server(s->target)) {
		health_adjust(objt_server(s->target), HANA_STATUS_L4_ERR);

		if (s->flags & SF_CURR_SESS) {
			s->flags &= ~SF_CURR_SESS;
			objt_server(s->target)->cur_sess--;
		}
	}

	/* ensure that we have enough retries left */
	si->conn_retries--;
	if (si->conn_retries < 0) {
		if (!si->err_type) {
			si->err_type = SI_ET_CONN_ERR;
		}

		if (objt_server(s->target))
			objt_server(s->target)->counters.failed_conns++;
		s->be->be_counters.failed_conns++;
		sess_change_server(s, NULL);
		if (may_dequeue_tasks(objt_server(s->target), s->be))
			process_srv_queue(objt_server(s->target));

		/* shutw is enough so stop a connecting socket */
		si_shutw(si);
		s->req.flags |= CF_WRITE_ERROR;
		s->res.flags |= CF_READ_ERROR;

		si->state = SI_ST_CLO;
		if (s->srv_error)
			s->srv_error(s, si);
		return 0;
	}
	/* If the "redispatch" option is set on the backend, we are allowed to
	 * retry on another server. By default this redispatch occurs on the
	 * last retry, but if configured we allow redispatches to occur on
	 * configurable intervals, e.g. on every retry. In order to achieve this,
	 * we must mark the stream unassigned, and eventually clear the DIRECT
	 * bit to ignore any persistence cookie. We won't count a retry nor a
	 * redispatch yet, because this will depend on what server is selected.
	 * If the connection is not persistent, the balancing algorithm is not
	 * determinist (round robin) and there is more than one active server,
	 * we accept to perform an immediate redispatch without waiting since
	 * we don't care about this particular server.
	 */
	if (objt_server(s->target) &&
	    (s->be->options & PR_O_REDISP) && !(s->flags & SF_FORCE_PRST) &&
	    ((__objt_server(s->target)->state < SRV_ST_RUNNING) ||
	     (((s->be->redispatch_after > 0) &&
	       ((s->be->conn_retries - si->conn_retries) %
	        s->be->redispatch_after == 0)) ||
	      ((s->be->redispatch_after < 0) &&
	       ((s->be->conn_retries - si->conn_retries) %
	        (s->be->conn_retries + 1 + s->be->redispatch_after) == 0))) ||
	     (!(s->flags & SF_DIRECT) && s->be->srv_act > 1 &&
	      ((s->be->lbprm.algo & BE_LB_KIND) == BE_LB_KIND_RR)))) {
		sess_change_server(s, NULL);
		if (may_dequeue_tasks(objt_server(s->target), s->be))
			process_srv_queue(objt_server(s->target));

		s->flags &= ~(SF_DIRECT | SF_ASSIGNED | SF_ADDR_SET);
		si->state = SI_ST_REQ;
	} else {
		if (objt_server(s->target))
			objt_server(s->target)->counters.retries++;
		s->be->be_counters.retries++;
		si->state = SI_ST_ASS;
	}
	if (si->flags & SI_FL_ERR) {
		/* The error was an asynchronous connection error, and we will
		 * likely have to retry connecting to the same server, most
		 * likely leading to the same result. To avoid this, we wait
		 * MIN(one second, connect timeout) before retrying.
		 */
		int delay = 1000;

		if (s->be->timeout.connect && s->be->timeout.connect < delay)
			delay = s->be->timeout.connect;

		if (!si->err_type)
			si->err_type = SI_ET_CONN_ERR;

		/* only wait when we're retrying on the same server */
		if (si->state == SI_ST_ASS ||
		    (s->be->lbprm.algo & BE_LB_KIND) != BE_LB_KIND_RR ||
		    (s->be->srv_act <= 1)) {
			si->state = SI_ST_TAR;
			si->exp = tick_add(now_ms, MS_TO_TICKS(delay));
		}
		return 0;
	}
	return 0;
}
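The redispatch cadence encoded in the large condition above can be illustrated with a standalone helper. This sketch covers only the positive-interval branch and uses hypothetical names; note that `si->conn_retries` has already been decremented by the time the condition is evaluated, so "retries consumed" is the configured total minus the remaining count:

```c
#include <assert.h>

/* Hypothetical mirror of the positive-interval branch of the redispatch
 * condition: conn_retries is the configured total, retries_left the
 * (already decremented) remaining count. A redispatch is allowed when
 * the number of retries consumed is a multiple of redispatch_after. */
static int redispatch_now(int conn_retries, int retries_left, int redispatch_after)
{
	if (redispatch_after <= 0)
		return 0;
	return (conn_retries - retries_left) % redispatch_after == 0;
}
```

With `retries 4` and a redispatch interval of 2, the first failed attempt (3 retries left, 1 consumed) stays on the same server, while the second (2 left, 2 consumed) is allowed to redispatch.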
/*
 * This function handles the transition between the SI_ST_CON state and the
 * SI_ST_EST state. It must only be called after switching from SI_ST_CON (or
 * SI_ST_INI) to SI_ST_EST, but only when a ->proto is defined.
 */
static void sess_establish(struct stream *s)
{
	struct stream_interface *si = &s->si[1];
	struct channel *req = &s->req;
	struct channel *rep = &s->res;

	/* First, centralize the timers information */
	s->logs.t_connect = tv_ms_elapsed(&s->logs.tv_accept, &now);
	si->exp = TICK_ETERNITY;

	if (objt_server(s->target))
		health_adjust(objt_server(s->target), HANA_STATUS_L4_OK);

	if (s->be->mode == PR_MODE_TCP) { /* let's allow immediate data connection in this case */
		/* if the user wants to log as soon as possible, without counting
		 * bytes from the server, then this is the right moment. */
		if (!LIST_ISEMPTY(&strm_fe(s)->logformat) && !(s->logs.logwait & LW_BYTES)) {
			s->logs.t_close = s->logs.t_connect; /* to get a valid end date */
			s->do_log(s);
		}
	}
	else {
		rep->flags |= CF_READ_DONTWAIT; /* a single read is enough to get response headers */
	}

	rep->analysers |= strm_fe(s)->fe_rsp_ana | s->be->be_rsp_ana;

	/* Be sure to filter response headers if the backend is an HTTP proxy
	 * and if there are filters attached to the stream. */
	if (s->be->mode == PR_MODE_HTTP && HAS_FILTERS(s))
		rep->analysers |= AN_FLT_HTTP_HDRS;

	rep->flags |= CF_READ_ATTACHED; /* producer is now attached */
	if (req->flags & CF_WAKE_CONNECT) {
		req->flags |= CF_WAKE_ONCE;
		req->flags &= ~CF_WAKE_CONNECT;
	}

	if (objt_conn(si->end)) {
		/* real connections have timeouts */
		req->wto = s->be->timeout.server;
		rep->rto = s->be->timeout.server;
	}
	req->wex = TICK_ETERNITY;
}
OPTIM/MINOR: session: abort if possible before connecting to the backend
Depending on the path that led to sess_update_stream_int(), it's
possible that we had a read error on the frontend, but that we haven't
checked if we may abort the connection.
This was seen in particular the following setup: tcp mode, with
abortonclose set, frontend using ssl. If the ssl connection had a first
successful read, but the second read failed, we would stil try to open a
connection to the backend, although we had enough information to close
the connection early.
sess_update_stream_int() had some logic to handle that case in the
SI_ST_QUE and SI_ST_TAR, but that was missing in the SI_ST_ASS case.
This patch addresses the issue by verifying the state of the req
channel (and the abortonclose option) right before opening the
connection to the backend, so we have the opportunity to close the
connection there, and factorizes the shared SI_ST_{QUE,TAR,ASS} code.
2016-04-07 12:01:04 -04:00
/* Check if the connection request is in such a state that it can be aborted. */
static int check_req_may_abort(struct channel *req, struct stream *s)
{
	return ((req->flags & (CF_READ_ERROR)) ||
	        ((req->flags & CF_SHUTW_NOW) &&  /* empty and client aborted */
	         (channel_is_empty(req) || s->be->options & PR_O_ABRT_CLOSE)));
}
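A minimal, self-contained model of this predicate, using stub flag values and trivial stand-in types in place of the real haproxy structures (`mini_channel`, `mini_be` and `may_abort` are names made up for the sketch): abort on a frontend read error, or when the client asked to shut down and either the request channel is drained or abortonclose allows aborting with data still pending.

```c
#include <assert.h>

/* Illustrative stand-ins for the real flags and types */
#define CF_READ_ERROR   0x1u
#define CF_SHUTW_NOW    0x2u
#define PR_O_ABRT_CLOSE 0x4u

struct mini_channel { unsigned flags; int bytes_pending; };
struct mini_be      { unsigned options; };

static int mini_channel_is_empty(const struct mini_channel *c)
{
	return c->bytes_pending == 0;
}

/* Same shape as check_req_may_abort() above */
static int may_abort(const struct mini_channel *req, const struct mini_be *be)
{
	return ((req->flags & CF_READ_ERROR) ||
	        ((req->flags & CF_SHUTW_NOW) &&
	         (mini_channel_is_empty(req) || (be->options & PR_O_ABRT_CLOSE))));
}
```

A pending client shutdown alone is not enough to abort while data remains to be forwarded, unless abortonclose is set on the backend.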
/* Update back stream interface status for input states SI_ST_ASS, SI_ST_QUE,
 * SI_ST_TAR. Other input states are simply ignored.
 * Possible output states are SI_ST_CLO, SI_ST_TAR, SI_ST_ASS, SI_ST_REQ,
 * SI_ST_CON and SI_ST_EST. Flags must have previously been updated for
 * timeouts and other conditions.
 */
REORG/MAJOR: session: rename the "session" entity to "stream"
With HTTP/2, we'll have to support multiplexed streams. A stream is in
fact the largest part of what we currently call a session, it has buffers,
logs, etc.
In order to catch any error, this commit removes any reference to the
struct session and tries to rename most "session" occurrences in function
names to "stream" and "sess" to "strm" when that's related to a session.
The files stream.{c,h} were added and session.{c,h} removed.
The session will be reintroduced later and a few parts of the stream
will progressively be moved over there. It will more or less contain
only what we need in an embryonic session.
Sample fetch functions and converters will have to change a bit so
that they'll use an L5 (session) instead of what's currently called
"L4" which is in fact L6 for now.
Once all changes are completed, we should see approximately this:
L7 - http_txn
L6 - stream
L5 - session
L4 - connection | applet
There will be at most one http_txn per stream, and a same session will
possibly be referenced by multiple streams. A connection will point to
a session and to a stream. The session will hold all the information
we need to keep even when we don't yet have a stream.
Some more cleanup is needed because some code was already far from
being clean. The server queue management still refers to sessions at
many places while comments talk about connections. This will have to
be cleaned up once we have a server-side connection pool manager.
Stream flags "SN_*" still need to be renamed; it doesn't seem like
any of them will need to move to the session.
2015-04-02 18:22:06 -04:00
static void sess_update_stream_int(struct stream *s)
{
	struct server *srv = objt_server(s->target);
	struct stream_interface *si = &s->si[1];
	struct channel *req = &s->req;

	DPRINTF(stderr, "[%u] %s: sess=%p rq=%p, rp=%p, exp(r,w)=%u,%u rqf=%08x rpf=%08x rqh=%d rqt=%d rph=%d rpt=%d cs=%d ss=%d\n",
		now_ms, __FUNCTION__,
		s,
		req, &s->res,
		req->rex, s->res.wex,
		req->flags, s->res.flags,
		req->buf->i, req->buf->o, s->res.buf->i, s->res.buf->o, s->si[0].state, s->si[1].state);
	if (si->state == SI_ST_ASS) {
		/* Server assigned to connection request, we have to try to connect now */
		int conn_err;
		/* Before we try to initiate the connection, see if the
		 * request may be aborted instead.
		 */
		if (check_req_may_abort(req, s)) {
			si->err_type |= SI_ET_CONN_ABRT;
			goto abort_connection;
		}
		conn_err = connect_server(s);
		srv = objt_server(s->target);

		if (conn_err == SF_ERR_NONE) {
			/* state = SI_ST_CON or SI_ST_EST now */
			if (srv)
				srv_inc_sess_ctr(srv);
			if (srv)
				srv_set_sess_last(srv);
			return;
		}
		/* We have received a synchronous error. We might have to
		 * abort, retry immediately or redispatch.
		 */
		if (conn_err == SF_ERR_INTERNAL) {
			if (!si->err_type) {
				si->err_type = SI_ET_CONN_OTHER;
			}

			if (srv)
				srv_inc_sess_ctr(srv);
			if (srv)
				srv_set_sess_last(srv);
			if (srv)
				srv->counters.failed_conns++;
			s->be->be_counters.failed_conns++;
			/* release other streams waiting for this server */
			sess_change_server(s, NULL);
			if (may_dequeue_tasks(srv, s->be))
				process_srv_queue(srv);

			/* Failed and not retryable. */
			si_shutr(si);
			si_shutw(si);
			req->flags |= CF_WRITE_ERROR;

			s->logs.t_queue = tv_ms_elapsed(&s->logs.tv_accept, &now);
			/* no stream was ever accounted for this server */
			si->state = SI_ST_CLO;
			if (s->srv_error)
				s->srv_error(s, si);
			return;
		}
		/* We are facing a retryable error, but we don't want to run a
		 * turn-around now, as the problem is likely a source port
		 * allocation problem, so we want to retry now.
		 */
		si->state = SI_ST_CER;
		si->flags &= ~SI_FL_ERR;
		sess_update_st_cer(s);
		/* now si->state is one of SI_ST_CLO, SI_ST_TAR, SI_ST_ASS, SI_ST_REQ */
		return;
	}
	else if (si->state == SI_ST_QUE) {
		/* connection request was queued, check for any update */
		if (!s->pend_pos) {
			/* The connection is not in the queue anymore. Either
			 * we have a server connection slot available and we
			 * go directly to the assigned state, or we need to
			 * load-balance first and go to the INI state.
			 */
			si->exp = TICK_ETERNITY;
			if (unlikely(!(s->flags & SF_ASSIGNED)))
				si->state = SI_ST_REQ;
			else {
				s->logs.t_queue = tv_ms_elapsed(&s->logs.tv_accept, &now);
				si->state = SI_ST_ASS;
			}
			return;
		}
		/* Connection request still in queue... */
		if (si->flags & SI_FL_EXP) {
			/* ... and timeout expired */
			si->exp = TICK_ETERNITY;
			s->logs.t_queue = tv_ms_elapsed(&s->logs.tv_accept, &now);
			if (srv)
				srv->counters.failed_conns++;
			s->be->be_counters.failed_conns++;
			si_shutr(si);
			si_shutw(si);
			req->flags |= CF_WRITE_TIMEOUT;
			if (!si->err_type)
				si->err_type = SI_ET_QUEUE_TO;
			si->state = SI_ST_CLO;
			if (s->srv_error)
				s->srv_error(s, si);
			return;
		}
/* Connection remains in queue, check if we have to abort it */
		if (check_req_may_abort(req, s)) {
			s->logs.t_queue = tv_ms_elapsed(&s->logs.tv_accept, &now);
			si->err_type |= SI_ET_QUEUE_ABRT;
			goto abort_connection;
		}

		/* Nothing changed */
		return;
	}
	else if (si->state == SI_ST_TAR) {
		/* Connection request might be aborted */
		if (check_req_may_abort(req, s)) {
			si->err_type |= SI_ET_CONN_ABRT;
			goto abort_connection;
		}

		if (!(si->flags & SI_FL_EXP))
			return;  /* still in turn-around */

		si->exp = TICK_ETERNITY;
		/* we keep trying on the same server as long as the stream is
		 * marked "assigned".
		 * FIXME: Should we force a redispatch attempt when the server is down?
		 */
		if (s->flags & SF_ASSIGNED)
			si->state = SI_ST_ASS;
		else
			si->state = SI_ST_REQ;
		return;
	}
	return;

 abort_connection:
	/* give up */
	si->exp = TICK_ETERNITY;
	si_shutr(si);
	si_shutw(si);
	si->state = SI_ST_CLO;
	if (s->srv_error)
		s->srv_error(s, si);
	return;
}
/* Set correct stream termination flags in case no analyser has done it. It
 * also counts a failed request if the server state has not reached the request
 * stage.
 */
static void sess_set_term_flags(struct stream *s)
{
	if (!(s->flags & SF_FINST_MASK)) {
		if (s->si[1].state < SI_ST_REQ) {
			strm_fe(s)->fe_counters.failed_req++;
			if (strm_li(s) && strm_li(s)->counters)
				strm_li(s)->counters->failed_req++;

			s->flags |= SF_FINST_R;
		}
		else if (s->si[1].state == SI_ST_QUE)
			s->flags |= SF_FINST_Q;
		else if (s->si[1].state < SI_ST_EST)
			s->flags |= SF_FINST_C;
		else if (s->si[1].state == SI_ST_EST || s->si[1].prev_state == SI_ST_EST)
			s->flags |= SF_FINST_D;
		else
			s->flags |= SF_FINST_L;
	}
}
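The cascade above maps the back stream interface's state to the second termination-state letter seen in logs (R, Q, C, D or L). A standalone sketch of that mapping, with an illustrative state enum ordered the same way as haproxy's (`finst_letter` and the `mini_si_state` names are inventions of this sketch):

```c
#include <assert.h>

/* Illustrative ordering of stream-interface states; the comparisons
 * below rely only on REQ < QUE < (connecting states) < EST.
 */
enum mini_si_state {
	ST_INI, ST_REQ, ST_QUE, ST_TAR, ST_ASS,
	ST_CON, ST_CER, ST_EST, ST_DIS, ST_CLO
};

/* Mirror of the cascade in sess_set_term_flags(): return the log
 * letter describing where the stream ended.
 */
static char finst_letter(enum mini_si_state state, enum mini_si_state prev)
{
	if (state < ST_REQ)
		return 'R';                     /* failed before the request stage */
	else if (state == ST_QUE)
		return 'Q';                     /* died while queued */
	else if (state < ST_EST)
		return 'C';                     /* died while connecting */
	else if (state == ST_EST || prev == ST_EST)
		return 'D';                     /* died during data transfer */
	else
		return 'L';                     /* died while sending last data */
}
```

Note how `prev` matters only to distinguish D from L: a closed stream whose previous state was established still counts as a data-phase failure.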
/* This function initiates a server connection request on a stream interface
 * already in SI_ST_REQ state. Upon success, the state goes to SI_ST_ASS for
 * a real connection to a server, indicating that a server has been assigned,
 * or SI_ST_EST for a successful connection to an applet. It may also return
 * SI_ST_QUE, or SI_ST_CLO upon error.
 */
static void sess_prepare_conn_req(struct stream *s)
{
	struct stream_interface *si = &s->si[1];

	DPRINTF(stderr, "[%u] %s: sess=%p rq=%p, rp=%p, exp(r,w)=%u,%u rqf=%08x rpf=%08x rqh=%d rqt=%d rph=%d rpt=%d cs=%d ss=%d\n",
		now_ms, __FUNCTION__,
		s,
		&s->req, &s->res,
		s->req.rex, s->res.wex,
		s->req.flags, s->res.flags,
		s->req.buf->i, s->req.buf->o, s->res.buf->i, s->res.buf->o, s->si[0].state, s->si[1].state);

	if (si->state != SI_ST_REQ)
		return;
	if (unlikely(obj_type(s->target) == OBJ_TYPE_APPLET)) {
		/* the applet directly goes to the EST state */
		struct appctx *appctx = objt_appctx(si->end);

		if (!appctx || appctx->applet != __objt_applet(s->target))
			appctx = stream_int_register_handler(si, objt_applet(s->target));

		if (!appctx) {
			/* No more memory, let's immediately abort. Force the
			 * error code to ignore the ERR_LOCAL which is not a
			 * real error.
			 */
			s->flags &= ~(SF_ERR_MASK | SF_FINST_MASK);

			si_shutr(si);
			si_shutw(si);
			s->req.flags |= CF_WRITE_ERROR;
			si->err_type = SI_ET_CONN_RES;
			si->state = SI_ST_CLO;
			if (s->srv_error)
				s->srv_error(s, si);
			return;
		}

		s->logs.t_queue = tv_ms_elapsed(&s->logs.tv_accept, &now);
		si->state = SI_ST_EST;
		si->err_type = SI_ET_NONE;
		be_set_sess_last(s->be);
		/* let sess_establish() finish the job */
		return;
	}
	/* Try to assign a server */
	if (srv_redispatch_connect(s) != 0) {
		/* We did not get a server. Either we queued the
		 * connection request, or we encountered an error.
		 */
		if (si->state == SI_ST_QUE)
			return;

		/* we did not get any server, let's check the cause */
		si_shutr(si);
		si_shutw(si);
		s->req.flags |= CF_WRITE_ERROR;
		if (!si->err_type)
			si->err_type = SI_ET_CONN_OTHER;
		si->state = SI_ST_CLO;
		if (s->srv_error)
			s->srv_error(s, si);
		return;
	}

	/* The server is assigned */
	s->logs.t_queue = tv_ms_elapsed(&s->logs.tv_accept, &now);
	si->state = SI_ST_ASS;
	be_set_sess_last(s->be);
}
/* This function parses the use-service action ruleset. It executes
 * the associated ACL and sets an applet as the stream or txn final node.
 * It returns ACT_RET_ERR if an error occurs, leaving the proxy in a
 * consistent state. It returns ACT_RET_STOP on success because
 * use-service must be a terminal action. It returns ACT_RET_YIELD
 * if the initialisation function requires more data.
 */
enum act_return process_use_service(struct act_rule *rule, struct proxy *px,
                                    struct session *sess, struct stream *s, int flags)
{
	struct appctx *appctx;

	/* Initialise the applet if required. */
	if (flags & ACT_FLAG_FIRST) {
		/* Register the applet. This function schedules the applet. */
		s->target = &rule->applet.obj_type;
		if (unlikely(!stream_int_register_handler(&s->si[1], objt_applet(s->target))))
			return ACT_RET_ERR;

		/* Initialise the context. */
		appctx = si_appctx(&s->si[1]);
		memset(&appctx->ctx, 0, sizeof(appctx->ctx));
		appctx->rule = rule;
	}
	else
		appctx = si_appctx(&s->si[1]);

	/* Stop the applet scheduling in case the init function is missing
	 * some data.
	 */
	appctx_pause(appctx);
	si_applet_stop_get(&s->si[1]);

	/* Call the initialisation function. */
	if (rule->applet.init)
		switch (rule->applet.init(appctx, px, s)) {
		case 0: return ACT_RET_ERR;
		case 1: break;
		default: return ACT_RET_YIELD;
		}

	/* Now we can schedule the applet. */
	si_applet_cant_get(&s->si[1]);
	appctx_wakeup(appctx);

	if (sess->fe == s->be) /* report it if the request was intercepted by the frontend */
		sess->fe->fe_counters.intercepted_req++;

	/* The SF_ASSIGNED flag prevents server assignment. */
	s->flags |= SF_ASSIGNED;

	return ACT_RET_STOP;
}
/* This stream analyser checks the switching rules and changes the backend
 * if appropriate. The default_backend rule is also considered, then the
 * target backend's forced persistence rules are also evaluated last if any.
 * It returns 1 if the processing can continue on next analysers, or zero if it
 * either needs more data or wants to immediately abort the request.
 */
static int process_switching_rules(struct stream *s, struct channel *req, int an_bit)
{
	struct persist_rule *prst_rule;
	struct session *sess = s->sess;
	struct proxy *fe = sess->fe;

	req->analysers &= ~an_bit;
	req->analyse_exp = TICK_ETERNITY;

	DPRINTF(stderr, "[%u] %s: stream=%p b=%p, exp(r,w)=%u,%u bf=%08x bh=%d analysers=%02x\n",
		now_ms, __FUNCTION__,
		s,
		req,
		req->rex, req->wex,
		req->flags,
		req->buf->i,
		req->analysers);

	/* now check whether we have some switching rules for this request */
	if (!(s->flags & SF_BE_ASSIGNED)) {
		struct switching_rule *rule;

		list_for_each_entry(rule, &fe->switching_rules, list) {
			int ret = 1;

			if (rule->cond) {
				ret = acl_exec_cond(rule->cond, fe, sess, s, SMP_OPT_DIR_REQ | SMP_OPT_FINAL);
				ret = acl_pass(ret);
				if (rule->cond->pol == ACL_COND_UNLESS)
					ret = !ret;
			}

			if (ret) {
				/* If the backend name is dynamic, try to resolve the name.
				 * If we can't resolve the name, or if any error occurs, break
				 * the loop and fall back to the default backend.
				 */
				struct proxy *backend;

				if (rule->dynamic) {
					struct chunk *tmp = get_trash_chunk();

					if (!build_logline(s, tmp->str, tmp->size, &rule->be.expr))
						break;

					backend = proxy_be_by_name(tmp->str);
					if (!backend)
						break;
				}
				else
					backend = rule->be.backend;
				if (!stream_set_backend(s, backend))
					goto sw_failed;
				break;
			}
		}

		/* To ensure correct connection accounting on the backend, we
		 * have to assign one if it was not set (eg: a listen). This
		 * measure also takes care of correctly setting the default
		 * backend if any.
		 */
		if (!(s->flags & SF_BE_ASSIGNED))
			if (!stream_set_backend(s, fe->defbe.be ? fe->defbe.be : s->be))
				goto sw_failed;
	}

	/* we don't want to run the TCP or HTTP filters again if the backend has not changed */
	if (fe == s->be) {
		s->req.analysers &= ~AN_REQ_INSPECT_BE;
		s->req.analysers &= ~AN_REQ_HTTP_PROCESS_BE;
		s->req.analysers &= ~AN_FLT_START_BE;
	}

	/* as soon as we know the backend, we must check if we have a matching forced or ignored
	 * persistence rule, and report that in the stream.
	 */
	list_for_each_entry(prst_rule, &s->be->persist_rules, list) {
		int ret = 1;

		if (prst_rule->cond) {
			ret = acl_exec_cond(prst_rule->cond, s->be, sess, s, SMP_OPT_DIR_REQ | SMP_OPT_FINAL);
			ret = acl_pass(ret);
			if (prst_rule->cond->pol == ACL_COND_UNLESS)
				ret = !ret;
		}

		if (ret) {
			/* no rule, or the rule matches */
			if (prst_rule->type == PERSIST_TYPE_FORCE) {
				s->flags |= SF_FORCE_PRST;
			} else {
				s->flags |= SF_IGNORE_PRST;
			}
			break;
		}
	}

	return 1;

 sw_failed:
	/* immediately abort this request in case of allocation failure */
	channel_abort(&s->req);
	channel_abort(&s->res);

	if (!(s->flags & SF_ERR_MASK))
		s->flags |= SF_ERR_RESOURCE;
	if (!(s->flags & SF_FINST_MASK))
		s->flags |= SF_FINST_R;

	if (s->txn)
		s->txn->status = 500;
	s->req.analysers &= AN_FLT_END;
	s->req.analyse_exp = TICK_ETERNITY;
	return 0;
}

/* This stream analyser works on a request. It applies all use-server rules to
 * it and always returns 1. The data must already be present in the buffer,
 * otherwise they won't match.
 */
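/* For illustration (example names, not from this file), the "use-server"
 * statements this analyser evaluates are written in a backend section as:
 *
 *     backend bk_app
 *         use-server srv_a if { req.hdr(host) -m str a.example.com }
 *         use-server srv_b unless { src 10.0.0.0/8 }
 *         server srv_a 192.0.2.10:80
 *         server srv_b 192.0.2.11:80
 *
 * The "unless" form corresponds to the ACL_COND_UNLESS polarity inversion
 * below.
 */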
static int process_server_rules(struct stream *s, struct channel *req, int an_bit)
{
	struct proxy *px = s->be;
	struct session *sess = s->sess;
	struct server_rule *rule;

	DPRINTF(stderr, "[%u] %s: stream=%p b=%p, exp(r,w)=%u,%u bf=%08x bl=%d analysers=%02x\n",
		now_ms, __FUNCTION__,
		s,
		req,
		req->rex, req->wex,
		req->flags,
		req->buf->i + req->buf->o,
		req->analysers);

	if (!(s->flags & SF_ASSIGNED)) {
		list_for_each_entry(rule, &px->server_rules, list) {
			int ret;

			ret = acl_exec_cond(rule->cond, s->be, sess, s, SMP_OPT_DIR_REQ | SMP_OPT_FINAL);
			ret = acl_pass(ret);
			if (rule->cond->pol == ACL_COND_UNLESS)
				ret = !ret;

			if (ret) {
				struct server *srv = rule->srv.ptr;

				if ((srv->state != SRV_ST_STOPPED) ||
				    (px->options & PR_O_PERSIST) ||
				    (s->flags & SF_FORCE_PRST)) {
					s->flags |= SF_DIRECT | SF_ASSIGNED;
					s->target = &srv->obj_type;
					break;
				}
				/* if the server is not UP, let's go on with next rules
				 * just in case another one is suited.
				 */
			}
		}
	}

	req->analysers &= ~an_bit;
	req->analyse_exp = TICK_ETERNITY;
	return 1;
}

/* This stream analyser works on a request. It applies all sticking rules to
 * it and always returns 1. The data must already be present in the buffer,
 * otherwise they won't match.
 */
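/* For illustration (example names, not from this file), sticking rules come
 * from "stick" statements in a backend section, e.g.:
 *
 *     backend bk_app
 *         stick-table type ip size 200k expire 30m
 *         stick on src
 *         stick match src if { path_beg /api }
 *
 * Each statement becomes a sticking_rule whose optional condition is the
 * ACL evaluated below.
 */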
static int process_sticking_rules(struct stream *s, struct channel *req, int an_bit)
{
	struct proxy *px = s->be;
	struct session *sess = s->sess;
	struct sticking_rule *rule;
	DPRINTF(stderr, "[%u] %s: stream=%p b=%p, exp(r,w)=%u,%u bf=%08x bh=%d analysers=%02x\n",
		now_ms, __FUNCTION__,
		s,
		req,
		req->rex, req->wex,
		req->flags,
		req->buf->i,
		req->analysers);
	list_for_each_entry(rule, &px->sticking_rules, list) {
		int ret = 1;
		int i;

		/* Only the first stick store-request of each table is applied
		 * and other ones are ignored. The purpose is to allow complex
		 * configurations which look for multiple entries by decreasing
		 * order of precision and to stop at the first which matches.
		 * An example could be a store of the IP address from an HTTP
		 * header first, then from the source if not found.
		 */
		for (i = 0; i < s->store_count; i++) {
			if (rule->table.t == s->store[i].table)
				break;
		}

		if (i != s->store_count)
			continue;

		if (rule->cond) {
			ret = acl_exec_cond(rule->cond, px, sess, s, SMP_OPT_DIR_REQ | SMP_OPT_FINAL);
			ret = acl_pass(ret);
			if (rule->cond->pol == ACL_COND_UNLESS)
				ret = !ret;
		}

		if (ret) {
			struct stktable_key *key;

			key = stktable_fetch_key(rule->table.t, px, sess, s, SMP_OPT_DIR_REQ | SMP_OPT_FINAL, rule->expr, NULL);
			if (!key)
				continue;

			if (rule->flags & STK_IS_MATCH) {
				struct stksess *ts;

				if ((ts = stktable_lookup_key(rule->table.t, key)) != NULL) {
					if (!(s->flags & SF_ASSIGNED)) {
						struct eb32_node *node;
						void *ptr;

						/* srv found in table */
						ptr = stktable_data_ptr(rule->table.t, ts, STKTABLE_DT_SERVER_ID);
						node = eb32_lookup(&px->conf.used_server_id, stktable_data_cast(ptr, server_id));
						if (node) {
							struct server *srv;

							srv = container_of(node, struct server, conf.id);
							if ((srv->state != SRV_ST_STOPPED) ||
							    (px->options & PR_O_PERSIST) ||
							    (s->flags & SF_FORCE_PRST)) {
								s->flags |= SF_DIRECT | SF_ASSIGNED;
								s->target = &srv->obj_type;
							}
						}
					}
					stktable_touch(rule->table.t, ts, 1);
				}
			}
			if (rule->flags & STK_IS_STORE) {
				if (s->store_count < (sizeof(s->store) / sizeof(s->store[0]))) {
					struct stksess *ts;

					ts = stksess_new(rule->table.t, key);
					if (ts) {
						s->store[s->store_count].table = rule->table.t;
						s->store[s->store_count++].ts = ts;
					}
				}
			}
		}
	}

	req->analysers &= ~an_bit;
	req->analyse_exp = TICK_ETERNITY;
	return 1;
}
/* This stream analyser works on a response. It applies all store rules on it
 * then returns 1. The data must already be present in the buffer otherwise
 * they won't match. It always returns 1.
 */
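/* Illustrative configuration (hypothetical cookie name): a rule such as
 *
 *     stick store-response res.cook(SRVID)
 *
 * is handled by this analyser once the response is available, so that a
 * server-assigned identifier can be recorded in the stick-table.
 */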
static int process_store_rules(struct stream *s, struct channel *rep, int an_bit)
{
	struct proxy *px = s->be;
	struct session *sess = s->sess;
	struct sticking_rule *rule;
	int i;
	int nbreq = s->store_count;
	DPRINTF(stderr, "[%u] %s: stream=%p b=%p, exp(r,w)=%u,%u bf=%08x bh=%d analysers=%02x\n",
		now_ms, __FUNCTION__,
		s,
		rep,
		rep->rex, rep->wex,
		rep->flags,
		rep->buf->i,
		rep->analysers);
	list_for_each_entry(rule, &px->storersp_rules, list) {
		int ret = 1;

		/* Only the first stick store-response of each table is applied
		 * and other ones are ignored. The purpose is to allow complex
		 * configurations which look for multiple entries by decreasing
		 * order of precision and to stop at the first which matches.
		 * An example could be a store of a set-cookie value, with a
		 * fallback to a parameter found in a 302 redirect.
		 *
		 * The store-response rules are not allowed to override the
		 * store-request rules for the same table, but they may coexist.
		 * Thus we can have up to one store-request entry and one store-
		 * response entry for the same table at any time.
		 */
		for (i = nbreq; i < s->store_count; i++) {
			if (rule->table.t == s->store[i].table)
				break;
		}

		/* skip existing entries for this table */
		if (i < s->store_count)
			continue;

		if (rule->cond) {
			ret = acl_exec_cond(rule->cond, px, sess, s, SMP_OPT_DIR_RES | SMP_OPT_FINAL);
			ret = acl_pass(ret);
			if (rule->cond->pol == ACL_COND_UNLESS)
				ret = !ret;
		}

		if (ret) {
			struct stktable_key *key;

			key = stktable_fetch_key(rule->table.t, px, sess, s, SMP_OPT_DIR_RES | SMP_OPT_FINAL, rule->expr, NULL);
			if (!key)
				continue;
			if (s->store_count < (sizeof(s->store) / sizeof(s->store[0]))) {
				struct stksess *ts;

				ts = stksess_new(rule->table.t, key);
				if (ts) {
					s->store[s->store_count].table = rule->table.t;
					s->store[s->store_count++].ts = ts;
				}
			}
		}
	}
	/* process store request and store response */
	for (i = 0; i < s->store_count; i++) {
		struct stksess *ts;
		void *ptr;

		if (objt_server(s->target) && objt_server(s->target)->flags & SRV_F_NON_STICK) {
			stksess_free(s->store[i].table, s->store[i].ts);
			s->store[i].ts = NULL;
			continue;
		}

		ts = stktable_lookup(s->store[i].table, s->store[i].ts);
		if (ts) {
			/* the entry already existed, we can free ours */
			stktable_touch(s->store[i].table, ts, 1);
			stksess_free(s->store[i].table, s->store[i].ts);
		}
		else
			ts = stktable_store(s->store[i].table, s->store[i].ts, 1);

		s->store[i].ts = NULL;
		ptr = stktable_data_ptr(s->store[i].table, ts, STKTABLE_DT_SERVER_ID);
		stktable_data_cast(ptr, server_id) = objt_server(s->target)->puid;
	}
	s->store_count = 0; /* everything is stored */

	rep->analysers &= ~an_bit;
	rep->analyse_exp = TICK_ETERNITY;
	return 1;
}
/* This macro is very specific to the function below. See the comments in
 * process_stream() below to understand the logic and the tests.
 */
#define UPDATE_ANALYSERS(real, list, back, flag) {			\
		list = (((list) & ~(flag)) | ~(back)) & (real);		\
		back = real;						\
		if (!(list))						\
			break;						\
		if (((list) ^ ((list) & ((list) - 1))) < (flag))	\
			continue;					\
	}
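/* Worked example (illustrative values, not taken from the code): suppose
 * list = 0x30, flag = 0x10, back = 0x30, and the analyser we just ran
 * enabled a lower-numbered analyser so that the channel now has
 * real = 0x31. Then:
 *
 *     list = ((0x30 & ~0x10) | ~0x30) & 0x31 = 0x21
 *
 * i.e. the flag we just ran is cleared and the newly enabled bit 0x01 is
 * merged in. The lowest set bit, isolated by list ^ (list & (list - 1)),
 * is 0x01 < 0x10 (the flag just processed), so the macro issues
 * "continue" to restart the analyser loop from the beginning. Had no
 * lower-numbered bit been enabled, execution would simply fall through
 * to the next analyser in order.
 */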
/* These 2 following macros call an analyzer for the specified channel if the
 * right flag is set. The first one is used for "filterable" analyzers. If a
 * stream has some registered filters, pre and post analyze callbacks are
 * called. The second one is used for other analyzers (AN_FLT_* and
 * AN_REQ/RES_HTTP_XFER_BODY).
 */
#define FLT_ANALYZE(strm, chn, fun, list, back, flag, ...)			\
	{									\
		if ((list) & (flag)) {						\
			if (HAS_FILTERS(strm)) {				\
				if (!flt_pre_analyze((strm), (chn), (flag)))	\
					break;					\
				if (!fun((strm), (chn), (flag), ##__VA_ARGS__))	\
					break;					\
				if (!flt_post_analyze((strm), (chn), (flag)))	\
					break;					\
			}							\
			else {							\
				if (!fun((strm), (chn), (flag), ##__VA_ARGS__))	\
					break;					\
			}							\
			UPDATE_ANALYSERS((chn)->analysers, (list),		\
					 (back), (flag));			\
		}								\
	}
#define ANALYZE(strm, chn, fun, list, back, flag, ...)			\
	{								\
		if ((list) & (flag)) {					\
			if (!fun((strm), (chn), (flag), ##__VA_ARGS__))	\
				break;					\
			UPDATE_ANALYSERS((chn)->analysers, (list),	\
					 (back), (flag));		\
		}							\
	}
/* Processes the client, server, request and response jobs of a stream task,
 * then puts it back to the wait queue in a clean state, or cleans up its
 * resources if it must be deleted. Returns in <next> the date the task wants
 * to be woken up, or TICK_ETERNITY. In order not to call all functions for
 * nothing too many times, the request and response buffers flags are monitored
 * and each function is called only if at least another function has changed at
 * least one flag it is interested in.
 */
struct task *process_stream(struct task *t)
{
	struct server *srv;
	struct stream *s = t->context;
	struct session *sess = s->sess;
	unsigned int rqf_last, rpf_last;
	unsigned int rq_prod_last, rq_cons_last;
	unsigned int rp_cons_last, rp_prod_last;
	unsigned int req_ana_back;
	struct channel *req, *res;
	struct stream_interface *si_f, *si_b;

	req = &s->req;
	res = &s->res;
	si_f = &s->si[0];
	si_b = &s->si[1];
	//DPRINTF(stderr, "%s:%d: cs=%d ss=%d(%d) rqf=0x%08x rpf=0x%08x\n", __FUNCTION__, __LINE__,
	//        si_f->state, si_b->state, si_b->err_type, req->flags, res->flags);

	/* this data may be no longer valid, clear it */
	if (s->txn)
		memset(&s->txn->auth, 0, sizeof(s->txn->auth));

	/* This flag must explicitly be set every time */
	req->flags &= ~(CF_READ_NOEXP | CF_WAKE_WRITE);
	res->flags &= ~(CF_READ_NOEXP | CF_WAKE_WRITE);

	/* Keep a copy of req/rep flags so that we can detect shutdowns */
	rqf_last = req->flags & ~CF_MASK_ANALYSER;
	rpf_last = res->flags & ~CF_MASK_ANALYSER;

	/* we don't want the stream interface functions to recursively wake us up */
	si_f->flags |= SI_FL_DONT_WAKE;
	si_b->flags |= SI_FL_DONT_WAKE;
	/* 1a: Check for low level timeouts if needed. We just set a flag on
	 * stream interfaces when their timeouts have expired.
	 */
	if (unlikely(t->state & TASK_WOKEN_TIMER)) {
		stream_int_check_timeouts(si_f);
		stream_int_check_timeouts(si_b);

		/* check channel timeouts, and close the corresponding stream interfaces
		 * for future reads or writes. Note: this will also concern upper layers
		 * but we do not touch any other flag. We must be careful and correctly
		 * detect state changes when calling them.
		 */
		channel_check_timeouts(req);

		if (unlikely((req->flags & (CF_SHUTW | CF_WRITE_TIMEOUT)) == CF_WRITE_TIMEOUT)) {
			si_b->flags |= SI_FL_NOLINGER;
			si_shutw(si_b);
		}

		if (unlikely((req->flags & (CF_SHUTR | CF_READ_TIMEOUT)) == CF_READ_TIMEOUT)) {
			if (si_f->flags & SI_FL_NOHALF)
				si_f->flags |= SI_FL_NOLINGER;
			si_shutr(si_f);
		}

		channel_check_timeouts(res);

		if (unlikely((res->flags & (CF_SHUTW | CF_WRITE_TIMEOUT)) == CF_WRITE_TIMEOUT)) {
			si_f->flags |= SI_FL_NOLINGER;
			si_shutw(si_f);
		}

		if (unlikely((res->flags & (CF_SHUTR | CF_READ_TIMEOUT)) == CF_READ_TIMEOUT)) {
			if (si_b->flags & SI_FL_NOHALF)
				si_b->flags |= SI_FL_NOLINGER;
			si_shutr(si_b);
		}
		if (HAS_FILTERS(s))
			flt_stream_check_timeouts(s);

		/* Once in a while we're woken up because the task expires. But
		 * this does not necessarily mean that a timeout has been reached.
		 * So let's not run a whole stream processing if only an expiration
		 * timeout needs to be refreshed.
		 */
		if (!((req->flags | res->flags) &
		      (CF_SHUTR | CF_READ_ACTIVITY | CF_READ_TIMEOUT | CF_SHUTW |
		       CF_WRITE_ACTIVITY | CF_WRITE_TIMEOUT | CF_ANA_TIMEOUT)) &&
		    !((si_f->flags | si_b->flags) & (SI_FL_EXP | SI_FL_ERR)) &&
		    ((t->state & TASK_WOKEN_ANY) == TASK_WOKEN_TIMER)) {
			si_f->flags &= ~SI_FL_DONT_WAKE;
			si_b->flags &= ~SI_FL_DONT_WAKE;
			goto update_exp_and_leave;
		}
	}
	/* below we may emit error messages so we have to ensure that we have
	 * our buffers properly allocated.
	 */
	if (!stream_alloc_work_buffer(s)) {
		/* No buffer available, we've been subscribed to the list of
		 * buffer waiters, let's wait for our turn.
		 */
		si_f->flags &= ~SI_FL_DONT_WAKE;
		si_b->flags &= ~SI_FL_DONT_WAKE;
		goto update_exp_and_leave;
	}
	/* 1b: check for low-level errors reported at the stream interface.
	 * First we check if it's a retryable error (in which case we don't
	 * want to tell the buffer). Otherwise we report the error one level
	 * upper by setting flags into the buffers. Note that the side towards
	 * the client cannot have connect (hence retryable) errors. Also, the
	 * connection setup code must be able to deal with any type of abort.
	 */
	srv = objt_server(s->target);
	if (unlikely(si_f->flags & SI_FL_ERR)) {
		if (si_f->state == SI_ST_EST || si_f->state == SI_ST_DIS) {
			si_shutr(si_f);
			si_shutw(si_f);
			stream_int_report_error(si_f);
			if (!(req->analysers) && !(res->analysers)) {
				s->be->be_counters.cli_aborts++;
				sess->fe->fe_counters.cli_aborts++;
				if (srv)
					srv->counters.cli_aborts++;
				if (!(s->flags & SF_ERR_MASK))
					s->flags |= SF_ERR_CLICL;
				if (!(s->flags & SF_FINST_MASK))
					s->flags |= SF_FINST_D;
			}
		}
	}
	if (unlikely(si_b->flags & SI_FL_ERR)) {
		if (si_b->state == SI_ST_EST || si_b->state == SI_ST_DIS) {
			si_shutr(si_b);
			si_shutw(si_b);
			stream_int_report_error(si_b);
			s->be->be_counters.failed_resp++;
			if (srv)
				srv->counters.failed_resp++;
			if (!(req->analysers) && !(res->analysers)) {
				s->be->be_counters.srv_aborts++;
				sess->fe->fe_counters.srv_aborts++;
				if (srv)
					srv->counters.srv_aborts++;
				if (!(s->flags & SF_ERR_MASK))
					s->flags |= SF_ERR_SRVCL;
				if (!(s->flags & SF_FINST_MASK))
					s->flags |= SF_FINST_D;
			}
		}
		/* note: maybe we should process connection errors here? */
	}
	if (si_b->state == SI_ST_CON) {
		/* we were trying to establish a connection on the server side,
		 * maybe it succeeded, maybe it failed, maybe we timed out, ...
		 */
		if (unlikely(!sess_update_st_con_tcp(s)))
			sess_update_st_cer(s);
		else if (si_b->state == SI_ST_EST)
			sess_establish(s);

		/* state is now one of SI_ST_CON (still in progress), SI_ST_EST
		 * (established), SI_ST_DIS (abort), SI_ST_CLO (last error),
		 * SI_ST_ASS/SI_ST_TAR/SI_ST_REQ for retryable errors.
		 */
	}

	rq_prod_last = si_f->state;
	rq_cons_last = si_b->state;
	rp_cons_last = si_f->state;
	rp_prod_last = si_b->state;
 resync_stream_interface:
	/* Check for connection closure */
	DPRINTF(stderr,
		"[%u] %s:%d: task=%p s=%p, sfl=0x%08x, rq=%p, rp=%p, exp(r,w)=%u,%u rqf=%08x rpf=%08x rqh=%d rqt=%d rph=%d rpt=%d cs=%d ss=%d, cet=0x%x set=0x%x retr=%d\n",
		now_ms, __FUNCTION__, __LINE__,
		t,
		s, s->flags,
		req, res,
		req->rex, res->wex,
		req->flags, res->flags,
		req->buf->i, req->buf->o, res->buf->i, res->buf->o, si_f->state, si_b->state,
		si_f->err_type, si_b->err_type,
		si_b->conn_retries);
	/* nothing special to be done on client side */
	if (unlikely(si_f->state == SI_ST_DIS))
		si_f->state = SI_ST_CLO;

	/* When a server-side connection is released, we have to count it and
	 * check for pending connections on this server.
	 */
	if (unlikely(si_b->state == SI_ST_DIS)) {
		si_b->state = SI_ST_CLO;
		srv = objt_server(s->target);
		if (srv) {
			if (s->flags & SF_CURR_SESS) {
				s->flags &= ~SF_CURR_SESS;
				srv->cur_sess--;
			}
			sess_change_server(s, NULL);
			if (may_dequeue_tasks(srv, s->be))
				process_srv_queue(srv);
		}
	}

	/*
	 * Note: of the transient states (REQ, CER, DIS), only REQ may remain
	 * at this point.
	 */
 resync_request:
	/* Analyse request */
	if (((req->flags & ~rqf_last) & CF_MASK_ANALYSER) ||
	    ((req->flags ^ rqf_last) & CF_MASK_STATIC) ||
	    si_f->state != rq_prod_last ||
	    si_b->state != rq_cons_last ||
	    s->task->state & TASK_WOKEN_MSG) {
		unsigned int flags = req->flags;

		if (si_f->state >= SI_ST_EST) {
			int max_loops = global.tune.maxpollevents;
			unsigned int ana_list;
			unsigned int ana_back;
			/* it's up to the analysers to stop new connections,
			 * disable reading or closing. Note: if an analyser
			 * disables any of these bits, it is responsible for
			 * enabling them again when it disables itself, so
			 * that other analysers are called in similar conditions.
			 */
			channel_auto_read(req);
			channel_auto_connect(req);
			channel_auto_close(req);
			/* We will call all analysers for which a bit is set in
			 * req->analysers, following the bit order from LSB
			 * to MSB. The analysers must remove themselves from
			 * the list when not needed. Any analyser may return 0
			 * to break out of the loop, either because of missing
			 * data to take a decision, or because it decides to
			 * kill the stream. We loop at least once through each
			 * analyser, and we may loop again if other analysers
			 * are added in the middle.
			 *
			 * We build a list of analysers to run. We evaluate all
			 * of these analysers in the order of the lower bit to
			 * the higher bit. This ordering is very important.
			 * An analyser will often add/remove other analysers,
			 * including itself. Any changes to itself have no effect
			 * on the loop. If it removes any other analysers, we
			 * want those analysers not to be called anymore during
			 * this loop. If it adds an analyser that is located
			 * after itself, we want it to be scheduled for being
			 * processed during the loop. If it adds an analyser
			 * which is located before it, we want it to switch to
			 * it immediately, even if it has already been called
			 * once but removed since.
			 *
			 * In order to achieve this, we compare the analyser
			 * list after the call with a copy of it before the
			 * call. The work list is fed with analyser bits that
			 * appeared during the call. Then we compare previous
			 * work list with the new one, and check the bits that
			 * appeared. If the lowest of these bits is lower than
			 * the current bit, it means we have enabled a previous
			 * analyser and must immediately loop again.
			 */
			ana_list = ana_back = req->analysers;
			while (ana_list && max_loops--) {
				/* Warning! ensure that analysers are always placed in ascending order! */
				ANALYZE    (s, req, flt_start_analyze,          ana_list, ana_back, AN_FLT_START_FE);
				FLT_ANALYZE(s, req, tcp_inspect_request,        ana_list, ana_back, AN_REQ_INSPECT_FE);
				FLT_ANALYZE(s, req, http_wait_for_request,      ana_list, ana_back, AN_REQ_WAIT_HTTP);
				FLT_ANALYZE(s, req, http_wait_for_request_body, ana_list, ana_back, AN_REQ_HTTP_BODY);
				FLT_ANALYZE(s, req, http_process_req_common,    ana_list, ana_back, AN_REQ_HTTP_PROCESS_FE, sess->fe);
				FLT_ANALYZE(s, req, process_switching_rules,    ana_list, ana_back, AN_REQ_SWITCHING_RULES);
				ANALYZE    (s, req, flt_start_analyze,          ana_list, ana_back, AN_FLT_START_BE);
				FLT_ANALYZE(s, req, tcp_inspect_request,        ana_list, ana_back, AN_REQ_INSPECT_BE);
				FLT_ANALYZE(s, req, http_process_req_common,    ana_list, ana_back, AN_REQ_HTTP_PROCESS_BE, s->be);
				FLT_ANALYZE(s, req, http_process_tarpit,        ana_list, ana_back, AN_REQ_HTTP_TARPIT);
				FLT_ANALYZE(s, req, process_server_rules,       ana_list, ana_back, AN_REQ_SRV_RULES);
				FLT_ANALYZE(s, req, http_process_request,       ana_list, ana_back, AN_REQ_HTTP_INNER);
				FLT_ANALYZE(s, req, tcp_persist_rdp_cookie,     ana_list, ana_back, AN_REQ_PRST_RDP_COOKIE);
				FLT_ANALYZE(s, req, process_sticking_rules,     ana_list, ana_back, AN_REQ_STICKING_RULES);
				ANALYZE    (s, req, flt_analyze_http_headers,   ana_list, ana_back, AN_FLT_HTTP_HDRS);
				ANALYZE    (s, req, flt_xfer_data,              ana_list, ana_back, AN_FLT_XFER_DATA);
				ANALYZE    (s, req, http_request_forward_body,  ana_list, ana_back, AN_REQ_HTTP_XFER_BODY);
				ANALYZE    (s, req, flt_end_analyze,            ana_list, ana_back, AN_FLT_END);
				break;
			}
		}
		rq_prod_last = si_f->state;
		rq_cons_last = si_b->state;
		req->flags &= ~CF_WAKE_ONCE;
		rqf_last = req->flags;

		if ((req->flags ^ flags) & CF_MASK_STATIC)
			goto resync_request;
	}
	/* we'll monitor the request analysers while parsing the response,
	 * because some response analysers may indirectly enable new request
	 * analysers (eg: HTTP keep-alive).
	 */
	req_ana_back = req->analysers;
 resync_response:
	/* Analyse response */
	if (((res->flags & ~rpf_last) & CF_MASK_ANALYSER) ||
	    ((res->flags ^ rpf_last) & CF_MASK_STATIC) ||
	    si_f->state != rp_cons_last ||
	    si_b->state != rp_prod_last ||
	    s->task->state & TASK_WOKEN_MSG) {
		unsigned int flags = res->flags;

		if ((res->flags & CF_MASK_ANALYSER) &&
		    (res->analysers & AN_REQ_ALL)) {
			/* Due to HTTP pipelining, the HTTP request analyser might be waiting
			 * for some free space in the response buffer, so we might need to call
			 * it when something changes in the response buffer, but still we pass
			 * it the request buffer. Note that the SI state might very well still
			 * be zero due to us returning a flow of redirects!
			 */
			res->analysers &= ~AN_REQ_ALL;
			req->flags |= CF_WAKE_ONCE;
		}
		if (si_b->state >= SI_ST_EST) {
			int max_loops = global.tune.maxpollevents;
			unsigned int ana_list;
			unsigned int ana_back;

			/* it's up to the analysers to disable reading or
			 * closing. Note: if an analyser disables any of these
			 * bits, it is responsible for enabling them again when
			 * it disables itself, so that other analysers are called
			 * in similar conditions.
			 */
			channel_auto_read(res);
			channel_auto_close(res);
			/* We will call all analysers for which a bit is set in
			 * res->analysers, following the bit order from LSB
			 * to MSB. The analysers must remove themselves from
			 * the list when not needed. Any analyser may return 0
			 * to break out of the loop, either because of missing
			 * data to take a decision, or because it decides to
			 * kill the stream. We loop at least once through each
			 * analyser, and we may loop again if other analysers
			 * are added in the middle.
			 */
			ana_list = ana_back = res->analysers;
			while (ana_list && max_loops--) {
				/* Warning! ensure that analysers are always placed in ascending order! */
				ANALYZE    (s, res, flt_start_analyze,          ana_list, ana_back, AN_FLT_START_FE);
				ANALYZE    (s, res, flt_start_analyze,          ana_list, ana_back, AN_FLT_START_BE);
				FLT_ANALYZE(s, res, tcp_inspect_response,       ana_list, ana_back, AN_RES_INSPECT);
				FLT_ANALYZE(s, res, http_wait_for_response,     ana_list, ana_back, AN_RES_WAIT_HTTP);
				FLT_ANALYZE(s, res, process_store_rules,        ana_list, ana_back, AN_RES_STORE_RULES);
				FLT_ANALYZE(s, res, http_process_res_common,    ana_list, ana_back, AN_RES_HTTP_PROCESS_BE, s->be);
				ANALYZE    (s, res, flt_analyze_http_headers,   ana_list, ana_back, AN_FLT_HTTP_HDRS);
				ANALYZE    (s, res, flt_xfer_data,              ana_list, ana_back, AN_FLT_XFER_DATA);
				ANALYZE    (s, res, http_response_forward_body, ana_list, ana_back, AN_RES_HTTP_XFER_BODY);
				ANALYZE    (s, res, flt_end_analyze,            ana_list, ana_back, AN_FLT_END);
				break;
			}
		}
		rp_cons_last = si_f->state;
		rp_prod_last = si_b->state;
		rpf_last = res->flags;

		if ((res->flags ^ flags) & CF_MASK_STATIC)
			goto resync_response;
	}
	/* maybe someone has added some request analysers, so we must check and loop */
	if (req->analysers & ~req_ana_back)
		goto resync_request;

	if ((req->flags & ~rqf_last) & CF_MASK_ANALYSER)
		goto resync_request;
	/* FIXME: here we should call protocol handlers which rely on
	 * both buffers.
	 */

	/*
	 * Now we propagate unhandled errors to the stream. Normally
	 * we're just in a data phase here since it means we have not
	 * seen any analyser who could set an error status.
	 */
	srv = objt_server(s->target);
	if (unlikely(!(s->flags & SF_ERR_MASK))) {
		if (req->flags & (CF_READ_ERROR|CF_READ_TIMEOUT|CF_WRITE_ERROR|CF_WRITE_TIMEOUT)) {
			/* Report it if the client got an error or a read timeout expired */
			req->analysers = 0;
			if (req->flags & CF_READ_ERROR) {
				s->be->be_counters.cli_aborts++;
				sess->fe->fe_counters.cli_aborts++;
				if (srv)
					srv->counters.cli_aborts++;
				s->flags |= SF_ERR_CLICL;
			}
			else if (req->flags & CF_READ_TIMEOUT) {
				s->be->be_counters.cli_aborts++;
				sess->fe->fe_counters.cli_aborts++;
				if (srv)
					srv->counters.cli_aborts++;
				s->flags |= SF_ERR_CLITO;
			}
			else if (req->flags & CF_WRITE_ERROR) {
				s->be->be_counters.srv_aborts++;
				sess->fe->fe_counters.srv_aborts++;
				if (srv)
					srv->counters.srv_aborts++;
				s->flags |= SF_ERR_SRVCL;
			}
			else {
				s->be->be_counters.srv_aborts++;
				sess->fe->fe_counters.srv_aborts++;
				if (srv)
					srv->counters.srv_aborts++;
				s->flags |= SF_ERR_SRVTO;
			}
			sess_set_term_flags(s);
		}
		else if (res->flags & (CF_READ_ERROR|CF_READ_TIMEOUT|CF_WRITE_ERROR|CF_WRITE_TIMEOUT)) {
			/* Report it if the server got an error or a read timeout expired */
			res->analysers = 0;
			if (res->flags & CF_READ_ERROR) {
				s->be->be_counters.srv_aborts++;
				sess->fe->fe_counters.srv_aborts++;
				if (srv)
					srv->counters.srv_aborts++;
				s->flags |= SF_ERR_SRVCL;
			}
			else if (res->flags & CF_READ_TIMEOUT) {
				s->be->be_counters.srv_aborts++;
				sess->fe->fe_counters.srv_aborts++;
				if (srv)
					srv->counters.srv_aborts++;
				s->flags |= SF_ERR_SRVTO;
			}
			else if (res->flags & CF_WRITE_ERROR) {
				s->be->be_counters.cli_aborts++;
				sess->fe->fe_counters.cli_aborts++;
				if (srv)
					srv->counters.cli_aborts++;
				s->flags |= SF_ERR_CLICL;
			}
			else {
				s->be->be_counters.cli_aborts++;
				sess->fe->fe_counters.cli_aborts++;
				if (srv)
					srv->counters.cli_aborts++;
				s->flags |= SF_ERR_CLITO;
			}
			sess_set_term_flags(s);
		}
	}
	/*
	 * Here we take care of forwarding unhandled data. This also includes
	 * connection establishments and shutdown requests.
	 */

	/* If no one is interested in analysing data, it's time to forward
	 * everything. We configure the buffer to forward indefinitely.
	 * Note that we're checking CF_SHUTR_NOW as an indication of a possible
	 * recent call to channel_abort().
	 */
	if (unlikely(!req->analysers &&
	    !(req->flags & (CF_SHUTW|CF_SHUTR_NOW)) &&
	    (si_f->state >= SI_ST_EST) &&
	    (req->to_forward != CHN_INFINITE_FORWARD))) {
		/* This buffer is freewheeling, there's no analyser
		 * attached to it. If any data are left in, we'll permit them to
		 * move.
		 */
		channel_auto_read(req);
		channel_auto_connect(req);
		channel_auto_close(req);
		buffer_flush(req->buf);

		/* We'll let data flow between the producer (if still connected)
		 * to the consumer (which might possibly not be connected yet).
		 */
		if (!(req->flags & (CF_SHUTR|CF_SHUTW_NOW)))
			channel_forward_forever(req);

		/* Just in order to support fetching HTTP contents after start
		 * of forwarding when the HTTP forwarding analyser is not used,
		 * we simply reset msg->sov so that HTTP rewinding points to the
		 * headers.
		 */
		if (s->txn)
			s->txn->req.sov = s->txn->req.eoh + s->txn->req.eol - req->buf->o;
	}
	/* check if it is wise to enable kernel splicing to forward request data */
	if (!(req->flags & (CF_KERN_SPLICING|CF_SHUTR)) &&
	    req->to_forward &&
	    (global.tune.options & GTUNE_USE_SPLICE) &&
	    (objt_conn(si_f->end) && __objt_conn(si_f->end)->xprt && __objt_conn(si_f->end)->xprt->rcv_pipe) &&
	    (objt_conn(si_b->end) && __objt_conn(si_b->end)->xprt && __objt_conn(si_b->end)->xprt->snd_pipe) &&
	    (pipes_used < global.maxpipes) &&
	    (((sess->fe->options2|s->be->options2) & PR_O2_SPLIC_REQ) ||
	     (((sess->fe->options2|s->be->options2) & PR_O2_SPLIC_AUT) &&
	      (req->flags & CF_STREAMER_FAST)))) {
		req->flags |= CF_KERN_SPLICING;
	}
	/* reflect what the L7 analysers have seen last */
	rqf_last = req->flags;
	/*
	 * Now forward all shutdown requests between both sides of the buffer
	 */

	/* first, let's check if the request buffer needs to shutdown(write), which may
	 * happen either because the input is closed or because we want to force a close
	 * once the server has begun to respond. If a half-closed timeout is set, we adjust
	 * the other side's timeout as well.
	 */
	if (unlikely((req->flags & (CF_SHUTW|CF_SHUTW_NOW|CF_AUTO_CLOSE|CF_SHUTR)) ==
		     (CF_AUTO_CLOSE|CF_SHUTR))) {
		channel_shutw_now(req);
	}
	/* shutdown(write) pending */
	if (unlikely((req->flags & (CF_SHUTW|CF_SHUTW_NOW)) == CF_SHUTW_NOW &&
		     channel_is_empty(req))) {
		if (req->flags & CF_READ_ERROR)
			si_b->flags |= SI_FL_NOLINGER;
		si_shutw(si_b);
		if (tick_isset(s->be->timeout.serverfin)) {
			res->rto = s->be->timeout.serverfin;
			res->rex = tick_add(now_ms, res->rto);
		}
	}
/* shutdown(write) done on server side, we must stop the client too */
2014-11-28 09:07:47 -05:00
if ( unlikely ( ( req - > flags & ( CF_SHUTW | CF_SHUTR | CF_SHUTR_NOW ) ) = = CF_SHUTW & &
! req - > analysers ) )
channel_shutr_now ( req ) ;
2008-11-30 12:47:21 -05:00
/* shutdown(read) pending */
2014-11-28 09:07:47 -05:00
if ( unlikely ( ( req - > flags & ( CF_SHUTR | CF_SHUTR_NOW ) ) = = CF_SHUTR_NOW ) ) {
if ( si_f - > flags & SI_FL_NOHALF )
si_f - > flags | = SI_FL_NOLINGER ;
si_shutr ( si_f ) ;
2012-05-13 08:48:59 -04:00
}
	/* it's possible that an upper layer has requested a connection setup or abort.
	 * There are 2 situations where we decide to establish a new connection :
	 *  - there are data scheduled for emission in the buffer
	 *  - the CF_AUTO_CONNECT flag is set (active connection)
	 */
	if (si_b->state == SI_ST_INI) {
		if (!(req->flags & CF_SHUTW)) {
			if ((req->flags & CF_AUTO_CONNECT) || !channel_is_empty(req)) {
				/* If we have an appctx, there is no connect method, so we
				 * immediately switch to the connected state, otherwise we
				 * perform a connection request.
				 */
				si_b->state = SI_ST_REQ; /* new connection requested */
				si_b->conn_retries = s->be->conn_retries;
			}
		}
		else {
			si_b->state = SI_ST_CLO; /* shutw+ini = abort */
			channel_shutw_now(req);  /* fix buffer flags upon abort */
			channel_shutr_now(res);
		}
	}

	/* we may have a pending connection request, or a connection waiting
	 * for completion.
	 */
	if (si_b->state >= SI_ST_REQ && si_b->state < SI_ST_CON) {
		/* prune the request variables and swap to the response variables. */
		if (s->vars_reqres.scope != SCOPE_RES) {
			vars_prune(&s->vars_reqres, s->sess, s);
			vars_init(&s->vars_reqres, SCOPE_RES);
		}
		do {
			/* nb: step 1 might switch from QUE to ASS, but we first want
			 * to give a chance to step 2 to perform a redirect if needed.
			 */
			if (si_b->state != SI_ST_REQ)
				sess_update_stream_int(s);
			if (si_b->state == SI_ST_REQ)
				sess_prepare_conn_req(s);

			/* applets directly go to the ESTABLISHED state. Similarly,
			 * servers experience the same fate when their connection
			 * is reused.
			 */
			if (unlikely(si_b->state == SI_ST_EST))
				sess_establish(s);

			/* Now we can add the server name to a header (if requested) */
			/* check for HTTP mode and proxy server_name_hdr_name != NULL */
			if ((si_b->state >= SI_ST_CON) && (si_b->state < SI_ST_CLO) &&
			    (s->be->server_id_hdr_name != NULL) &&
			    (s->be->mode == PR_MODE_HTTP) &&
			    objt_server(s->target)) {
				http_send_name_header(s->txn, s->be, objt_server(s->target)->id);
			}

			srv = objt_server(s->target);
			if (si_b->state == SI_ST_ASS && srv && srv->rdr_len && (s->flags & SF_REDIRECTABLE))
				http_perform_server_redirect(s, si_b);
		} while (si_b->state == SI_ST_ASS);
	}
	/* Benchmarks have shown that it's optimal to do a full resync now */
	if (si_f->state == SI_ST_DIS || si_b->state == SI_ST_DIS)
		goto resync_stream_interface;

	/* otherwise we want to check if we need to resync the req buffer or not */
	if ((req->flags ^ rqf_last) & CF_MASK_STATIC)
		goto resync_request;

	/* perform output updates to the response buffer */

	/* If no one is interested in analysing data, it's time to forward
	 * everything. We configure the buffer to forward indefinitely.
	 * Note that we're checking CF_SHUTR_NOW as an indication of a possible
	 * recent call to channel_abort().
	 */
	if (unlikely(!res->analysers &&
	    !(res->flags & (CF_SHUTW|CF_SHUTR_NOW)) &&
	    (si_b->state >= SI_ST_EST) &&
	    (res->to_forward != CHN_INFINITE_FORWARD))) {
		/* This buffer is freewheeling, there's no analyser
		 * attached to it. If any data are left in, we'll permit them to
		 * move.
		 */
		channel_auto_read(res);
		channel_auto_close(res);
		buffer_flush(res->buf);

		/* We'll let data flow between the producer (if still connected)
		 * to the consumer.
		 */
		if (!(res->flags & (CF_SHUTR|CF_SHUTW_NOW)))
			channel_forward_forever(res);

		/* Just in order to support fetching HTTP contents after start
		 * of forwarding when the HTTP forwarding analyser is not used,
		 * we simply reset msg->sov so that HTTP rewinding points to the
		 * headers.
		 */
		if (s->txn)
			s->txn->rsp.sov = s->txn->rsp.eoh + s->txn->rsp.eol - res->buf->o;

		/* if we have no analyser anymore in any direction and have a
		 * tunnel timeout set, use it now. Note that we must respect
		 * the half-closed timeouts as well.
		 */
		if (!req->analysers && s->be->timeout.tunnel) {
			req->rto = req->wto = res->rto = res->wto =
				s->be->timeout.tunnel;

			if ((req->flags & CF_SHUTR) && tick_isset(sess->fe->timeout.clientfin))
				res->wto = sess->fe->timeout.clientfin;
			if ((req->flags & CF_SHUTW) && tick_isset(s->be->timeout.serverfin))
				res->rto = s->be->timeout.serverfin;
			if ((res->flags & CF_SHUTR) && tick_isset(s->be->timeout.serverfin))
				req->wto = s->be->timeout.serverfin;
			if ((res->flags & CF_SHUTW) && tick_isset(sess->fe->timeout.clientfin))
				req->rto = sess->fe->timeout.clientfin;

			req->rex = tick_add(now_ms, req->rto);
			req->wex = tick_add(now_ms, req->wto);
			res->rex = tick_add(now_ms, res->rto);
			res->wex = tick_add(now_ms, res->wto);
		}
	}
	/* check if it is wise to enable kernel splicing to forward response data */
	if (!(res->flags & (CF_KERN_SPLICING|CF_SHUTR)) &&
	    res->to_forward &&
	    (global.tune.options & GTUNE_USE_SPLICE) &&
	    (objt_conn(si_f->end) && __objt_conn(si_f->end)->xprt && __objt_conn(si_f->end)->xprt->snd_pipe) &&
	    (objt_conn(si_b->end) && __objt_conn(si_b->end)->xprt && __objt_conn(si_b->end)->xprt->rcv_pipe) &&
	    (pipes_used < global.maxpipes) &&
	    (((sess->fe->options2|s->be->options2) & PR_O2_SPLIC_RTR) ||
	     (((sess->fe->options2|s->be->options2) & PR_O2_SPLIC_AUT) &&
	      (res->flags & CF_STREAMER_FAST)))) {
		res->flags |= CF_KERN_SPLICING;
	}

	/* reflect what the L7 analysers have seen last */
	rpf_last = res->flags;
	/*
	 * Now forward all shutdown requests between both sides of the buffer
	 */

	/*
	 * FIXME: this is probably where we should produce error responses.
	 */

	/* first, let's check if the response buffer needs to shutdown(write) */
	if (unlikely((res->flags & (CF_SHUTW|CF_SHUTW_NOW|CF_AUTO_CLOSE|CF_SHUTR)) ==
		     (CF_AUTO_CLOSE|CF_SHUTR))) {
		channel_shutw_now(res);
	}

	/* shutdown(write) pending */
	if (unlikely((res->flags & (CF_SHUTW|CF_SHUTW_NOW)) == CF_SHUTW_NOW &&
		     channel_is_empty(res))) {
		si_shutw(si_f);
		if (tick_isset(sess->fe->timeout.clientfin)) {
			req->rto = sess->fe->timeout.clientfin;
			req->rex = tick_add(now_ms, req->rto);
		}
	}

	/* shutdown(write) done on the client side, we must stop the server too */
	if (unlikely((res->flags & (CF_SHUTW|CF_SHUTR|CF_SHUTR_NOW)) == CF_SHUTW) &&
	    !res->analysers)
		channel_shutr_now(res);

	/* shutdown(read) pending */
	if (unlikely((res->flags & (CF_SHUTR|CF_SHUTR_NOW)) == CF_SHUTR_NOW)) {
		if (si_b->flags & SI_FL_NOHALF)
			si_b->flags |= SI_FL_NOLINGER;
		si_shutr(si_b);
	}
	if (si_f->state == SI_ST_DIS || si_b->state == SI_ST_DIS)
		goto resync_stream_interface;

	if (req->flags != rqf_last)
		goto resync_request;

	if ((res->flags ^ rpf_last) & CF_MASK_STATIC)
		goto resync_response;

	/* we're interested in getting wakeups again */
	si_f->flags &= ~SI_FL_DONT_WAKE;
	si_b->flags &= ~SI_FL_DONT_WAKE;
	/* This is needed only when debugging is enabled, to indicate
	 * client-side or server-side close. Please note that in the unlikely
	 * event where both sides would close at once, the sequence is reported
	 * on the server side first.
	 */
	if (unlikely((global.mode & MODE_DEBUG) &&
		     (!(global.mode & MODE_QUIET) ||
		      (global.mode & MODE_VERBOSE)))) {
		if (si_b->state == SI_ST_CLO &&
		    si_b->prev_state == SI_ST_EST) {
			chunk_printf(&trash, "%08x:%s.srvcls[%04x:%04x]\n",
				     s->uniq_id, s->be->id,
				     objt_conn(si_f->end) ? (unsigned short)objt_conn(si_f->end)->t.sock.fd : -1,
				     objt_conn(si_b->end) ? (unsigned short)objt_conn(si_b->end)->t.sock.fd : -1);
			shut_your_big_mouth_gcc(write(1, trash.str, trash.len));
		}

		if (si_f->state == SI_ST_CLO &&
		    si_f->prev_state == SI_ST_EST) {
			chunk_printf(&trash, "%08x:%s.clicls[%04x:%04x]\n",
				     s->uniq_id, s->be->id,
				     objt_conn(si_f->end) ? (unsigned short)objt_conn(si_f->end)->t.sock.fd : -1,
				     objt_conn(si_b->end) ? (unsigned short)objt_conn(si_b->end)->t.sock.fd : -1);
			shut_your_big_mouth_gcc(write(1, trash.str, trash.len));
		}
	}
	if (likely((si_f->state != SI_ST_CLO) ||
		   (si_b->state > SI_ST_INI && si_b->state < SI_ST_CLO))) {

		if ((sess->fe->options & PR_O_CONTSTATS) && (s->flags & SF_BE_ASSIGNED))
			stream_process_counters(s);

		if (si_f->state == SI_ST_EST)
			si_update(si_f);

		if (si_b->state == SI_ST_EST)
			si_update(si_b);

		req->flags &= ~(CF_READ_NULL|CF_READ_PARTIAL|CF_WRITE_NULL|CF_WRITE_PARTIAL|CF_READ_ATTACHED);
		res->flags &= ~(CF_READ_NULL|CF_READ_PARTIAL|CF_WRITE_NULL|CF_WRITE_PARTIAL|CF_READ_ATTACHED);
		si_f->prev_state = si_f->state;
		si_b->prev_state = si_b->state;
		si_f->flags &= ~(SI_FL_ERR|SI_FL_EXP);
		si_b->flags &= ~(SI_FL_ERR|SI_FL_EXP);

		/* Trick: if a request is waiting for the server to respond,
		 * and if we know the server can timeout, we don't want the timeout
		 * to expire on the client side first, but we're still interested
		 * in passing data from the client to the server (eg: POST). Thus,
		 * we can cancel the client's request timeout if the server's
		 * request timeout is set and the server has not yet sent a response.
		 */
		if ((res->flags & (CF_AUTO_CLOSE|CF_SHUTR)) == 0 &&
		    (tick_isset(req->wex) || tick_isset(res->rex))) {
			req->flags |= CF_READ_NOEXP;
			req->rex = TICK_ETERNITY;
		}
 update_exp_and_leave:
		/* Note: please ensure that if you branch here you disable SI_FL_DONT_WAKE */
		t->expire = tick_first((tick_is_expired(t->expire, now_ms) ? 0 : t->expire),
				       tick_first(tick_first(req->rex, req->wex),
						  tick_first(res->rex, res->wex)));
		if (!req->analysers)
			req->analyse_exp = TICK_ETERNITY;

		if ((sess->fe->options & PR_O_CONTSTATS) && (s->flags & SF_BE_ASSIGNED) &&
		    (!tick_isset(req->analyse_exp) || tick_is_expired(req->analyse_exp, now_ms)))
			req->analyse_exp = tick_add(now_ms, 5000);

		t->expire = tick_first(t->expire, req->analyse_exp);

		if (si_f->exp)
			t->expire = tick_first(t->expire, si_f->exp);

		if (si_b->exp)
			t->expire = tick_first(t->expire, si_b->exp);
#ifdef DEBUG_FULL
		fprintf(stderr,
			"[%u] queuing with exp=%u req->rex=%u req->wex=%u req->ana_exp=%u"
			" rep->rex=%u rep->wex=%u, si[0].exp=%u, si[1].exp=%u, cs=%d, ss=%d\n",
			now_ms, t->expire, req->rex, req->wex, req->analyse_exp,
			res->rex, res->wex, si_f->exp, si_b->exp, si_f->state, si_b->state);
#endif

#ifdef DEBUG_DEV
		/* this may only happen when no timeout is set or in case of an FSM bug */
		if (!tick_isset(t->expire))
			ABORT_NOW();
#endif
		stream_release_buffers(s);
		return t; /* nothing more to do */
	}
	sess->fe->feconn--;
	if (s->flags & SF_BE_ASSIGNED)
		s->be->beconn--;
	jobs--;
	if (sess->listener) {
		if (!(sess->listener->options & LI_O_UNLIMITED))
			actconn--;
		sess->listener->nbconn--;
		if (sess->listener->state == LI_FULL)
			resume_listener(sess->listener);

		/* Dequeues all of the listeners waiting for a resource */
		if (!LIST_ISEMPTY(&global_listener_queue))
			dequeue_all_listeners(&global_listener_queue);

		if (!LIST_ISEMPTY(&sess->fe->listener_queue) &&
		    (!sess->fe->fe_sps_lim || freq_ctr_remain(&sess->fe->fe_sess_per_sec, sess->fe->fe_sps_lim, 0) > 0))
			dequeue_all_listeners(&sess->fe->listener_queue);
	}
	if (unlikely((global.mode & MODE_DEBUG) &&
		     (!(global.mode & MODE_QUIET) || (global.mode & MODE_VERBOSE)))) {
		chunk_printf(&trash, "%08x:%s.closed[%04x:%04x]\n",
			     s->uniq_id, s->be->id,
			     objt_conn(si_f->end) ? (unsigned short)objt_conn(si_f->end)->t.sock.fd : -1,
			     objt_conn(si_b->end) ? (unsigned short)objt_conn(si_b->end)->t.sock.fd : -1);
		shut_your_big_mouth_gcc(write(1, trash.str, trash.len));
	}

	s->logs.t_close = tv_ms_elapsed(&s->logs.tv_accept, &now);
	stream_process_counters(s);
	if (s->txn && s->txn->status) {
		int n;

		n = s->txn->status / 100;
		if (n < 1 || n > 5)
			n = 0;

		if (sess->fe->mode == PR_MODE_HTTP) {
			sess->fe->fe_counters.p.http.rsp[n]++;
		}
		if ((s->flags & SF_BE_ASSIGNED) &&
		    (s->be->mode == PR_MODE_HTTP)) {
			s->be->be_counters.p.http.rsp[n]++;
			s->be->be_counters.p.http.cum_req++;
		}
	}

	/* let's do a final log if we need it */
	if (!LIST_ISEMPTY(&sess->fe->logformat) && s->logs.logwait &&
	    !(s->flags & SF_MONITOR) &&
	    (!(sess->fe->options & PR_O_NULLNOLOG) || req->total)) {
		s->do_log(s);
	}
	/* update time stats for this stream */
	stream_update_time_stats(s);

	/* the task MUST not be in the run queue anymore */
	stream_free(s);
	task_delete(t);
	task_free(t);
	return NULL;
}
/* Update the stream's backend and server time stats */
void stream_update_time_stats(struct stream *s)
{
	int t_request;
	int t_queue;
	int t_connect;
	int t_data;
	int t_close;
	struct server *srv;

	t_request = 0;
	t_queue   = s->logs.t_queue;
	t_connect = s->logs.t_connect;
	t_close   = s->logs.t_close;
	t_data    = s->logs.t_data;

	if (s->be->mode != PR_MODE_HTTP)
		t_data = t_connect;

	if (t_connect < 0 || t_data < 0)
		return;

	if (tv_isge(&s->logs.tv_request, &s->logs.tv_accept))
		t_request = tv_ms_elapsed(&s->logs.tv_accept, &s->logs.tv_request);

	t_data    -= t_connect;
	t_connect -= t_queue;
	t_queue   -= t_request;

	srv = objt_server(s->target);
	if (srv) {
		swrate_add(&srv->counters.q_time, TIME_STATS_SAMPLES, t_queue);
		swrate_add(&srv->counters.c_time, TIME_STATS_SAMPLES, t_connect);
		swrate_add(&srv->counters.d_time, TIME_STATS_SAMPLES, t_data);
		swrate_add(&srv->counters.t_time, TIME_STATS_SAMPLES, t_close);
	}
	swrate_add(&s->be->be_counters.q_time, TIME_STATS_SAMPLES, t_queue);
	swrate_add(&s->be->be_counters.c_time, TIME_STATS_SAMPLES, t_connect);
	swrate_add(&s->be->be_counters.d_time, TIME_STATS_SAMPLES, t_data);
	swrate_add(&s->be->be_counters.t_time, TIME_STATS_SAMPLES, t_close);
}
/*
 * This function adjusts sess->srv_conn and maintains the previous and new
 * server's served stream counts. Setting newsrv to NULL is enough to release
 * current connection slot. This function also notifies any LB algo which might
 * expect to be informed about any change in the number of active streams on a
 * server.
 */
void sess_change_server(struct stream *sess, struct server *newsrv)
{
	if (sess->srv_conn == newsrv)
		return;

	if (sess->srv_conn) {
		sess->srv_conn->served--;
		sess->srv_conn->proxy->served--;
		if (sess->srv_conn->proxy->lbprm.server_drop_conn)
			sess->srv_conn->proxy->lbprm.server_drop_conn(sess->srv_conn);
		stream_del_srv_conn(sess);
	}

	if (newsrv) {
		newsrv->served++;
		newsrv->proxy->served++;
		if (newsrv->proxy->lbprm.server_take_conn)
			newsrv->proxy->lbprm.server_take_conn(newsrv);
		stream_add_srv_conn(sess, newsrv);
	}
}
/* Handle server-side errors for default protocols. It is called whenever a
 * connection setup is aborted or a request is aborted in queue. It sets the
 * stream termination flags so that the caller does not have to worry about
 * them. It's installed as ->srv_error for the server-side stream_interface.
 */
void default_srv_error(struct stream *s, struct stream_interface *si)
{
	int err_type = si->err_type;
	int err = 0, fin = 0;

	if (err_type & SI_ET_QUEUE_ABRT) {
		err = SF_ERR_CLICL;
		fin = SF_FINST_Q;
	}
	else if (err_type & SI_ET_CONN_ABRT) {
		err = SF_ERR_CLICL;
		fin = SF_FINST_C;
	}
	else if (err_type & SI_ET_QUEUE_TO) {
		err = SF_ERR_SRVTO;
		fin = SF_FINST_Q;
	}
	else if (err_type & SI_ET_QUEUE_ERR) {
		err = SF_ERR_SRVCL;
		fin = SF_FINST_Q;
	}
	else if (err_type & SI_ET_CONN_TO) {
		err = SF_ERR_SRVTO;
		fin = SF_FINST_C;
	}
	else if (err_type & SI_ET_CONN_ERR) {
		err = SF_ERR_SRVCL;
		fin = SF_FINST_C;
	}
	else if (err_type & SI_ET_CONN_RES) {
		err = SF_ERR_RESOURCE;
		fin = SF_FINST_C;
	}
	else /* SI_ET_CONN_OTHER and others */ {
		err = SF_ERR_INTERNAL;
		fin = SF_FINST_C;
	}

	if (!(s->flags & SF_ERR_MASK))
		s->flags |= err;
	if (!(s->flags & SF_FINST_MASK))
		s->flags |= fin;
}
/* kill a stream and set the termination flags to <why> (one of SF_ERR_*) */
void stream_shutdown(struct stream *stream, int why)
{
	if (stream->req.flags & (CF_SHUTW|CF_SHUTW_NOW))
		return;
	channel_shutw_now(&stream->req);
	channel_shutr_now(&stream->res);
	stream->task->nice = 1024;

	if (!(stream->flags & SF_ERR_MASK))
		stream->flags |= why;
	task_wakeup(stream->task, TASK_WOKEN_OTHER);
}

/************************************************************************/
/* All supported ACL keywords must be declared here. */
/************************************************************************/
/* Returns a pointer to a stkctr depending on the fetch keyword name.
 * It is designed to be called as sc[0-9]_*, sc_* or src_* exclusively.
 * sc[0-9]_* will return a pointer to the respective field in the
 * stream <l4>. sc_* requires an UINT argument specifying the stick
 * counter number. src_* will fill a locally allocated structure with
 * the table and entry corresponding to what is specified with src_*.
 * NULL may be returned if the designated stkctr is not tracked. For
 * the sc_* and sc[0-9]_* forms, an optional table argument may be
 * passed. When present, the currently tracked key is then looked up
 * in the specified table instead of the current table. The purpose is
 * to be able to convey multiple values per key (eg: have gpc0 from
 * multiple tables). <strm> is allowed to be NULL, in which case only
 * the session will be consulted.
 */
struct stkctr *
smp_fetch_sc_stkctr(struct session *sess, struct stream *strm, const struct arg *args, const char *kw)
{
	static struct stkctr stkctr;
	struct stkctr *stkptr;
	struct stksess *stksess;
	unsigned int num = kw[2] - '0';
	int arg = 0;

	if (num == '_' - '0') {
		/* sc_* variant, args[0] = ctr# (mandatory) */
		num = args[arg++].data.sint;
		if (num >= MAX_SESS_STKCTR)
			return NULL;
	}
	else if (num > 9) { /* src_* variant, args[0] = table */
		struct stktable_key *key;
		struct connection *conn = objt_conn(sess->origin);
		struct sample smp;

		if (!conn)
			return NULL;

		/* Fetch source address in a sample. */
		smp.px = NULL;
		smp.sess = sess;
		smp.strm = strm;
		if (!smp_fetch_src(NULL, &smp, NULL, NULL))
			return NULL;

		/* Converts into key. */
		key = smp_to_stkey(&smp, &args->data.prx->table);
		if (!key)
			return NULL;

		stkctr.table = &args->data.prx->table;
		stkctr_set_entry(&stkctr, stktable_lookup_key(stkctr.table, key));
		return &stkctr;
	}

	/* Here, <num> contains the counter number from 0 to 9 for
	 * the sc[0-9]_ form, or even higher using sc_(num) if needed.
	 * args[arg] is the first optional argument. We first look up the
	 * ctr from the stream, then from the session if it was not there.
	 */
	if (strm)
		stkptr = &strm->stkctr[num];
	if (!strm || !stkctr_entry(stkptr)) {
		stkptr = &sess->stkctr[num];
		if (!stkctr_entry(stkptr))
			return NULL;
	}

	stksess = stkctr_entry(stkptr);
	if (!stksess)
		return NULL;

	if (unlikely(args[arg].type == ARGT_TAB)) {
		/* an alternate table was specified, let's look up the same key there */
		stkctr.table = &args[arg].data.prx->table;
		stkctr_set_entry(&stkctr, stktable_lookup(stkctr.table, stksess));
		return &stkctr;
	}
	return stkptr;
}
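The suffix dispatch above hinges on `kw[2] - '0'`: for "sc0_*".."sc9_*" it yields 0..9, for the bare "sc_*" form it yields `'_' - '0'` (well above 9), and for "src_*" the third character also lands above 9. A minimal standalone sketch of that decision, using a hypothetical helper that is not part of HAProxy:

```c
#include <string.h>

/* Hypothetical helper (illustration only): classify a stick-counter
 * fetch keyword the same way smp_fetch_sc_stkctr() does, from its
 * third character. Returns the counter number for sc[0-9]_*, -1 for
 * the bare sc_* form (the number then comes from an argument), and
 * -2 for the src_* form (keyed on the source address).
 */
static int classify_sc_keyword(const char *kw)
{
	unsigned int num = kw[2] - '0';

	if (num <= 9)
		return (int)num;     /* sc0_* .. sc9_* */
	if (num == '_' - '0')
		return -1;           /* sc_* */
	if (strncmp(kw, "src_", 4) == 0)
		return -2;           /* src_* */
	return -3;                   /* unrecognized form */
}
```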
/* same as smp_fetch_sc_stkctr() but dedicated to src_* and can create
 * the entry if it doesn't exist yet. This is needed for a few fetch
 * functions which need to create an entry, such as src_inc_gpc* and
 * src_clr_gpc*.
 */
struct stkctr *
smp_create_src_stkctr(struct session *sess, struct stream *strm, const struct arg *args, const char *kw)
{
	static struct stkctr stkctr;
	struct stktable_key *key;
	struct connection *conn = objt_conn(sess->origin);
	struct sample smp;

	if (strncmp(kw, "src_", 4) != 0)
		return NULL;

	if (!conn)
		return NULL;

	/* Fetch source address in a sample. */
	smp.px = NULL;
	smp.sess = sess;
	smp.strm = strm;
	if (!smp_fetch_src(NULL, &smp, NULL, NULL))
		return NULL;

	/* Converts into key. */
	key = smp_to_stkey(&smp, &args->data.prx->table);
	if (!key)
		return NULL;

	stkctr.table = &args->data.prx->table;
	stkctr_set_entry(&stkctr, stktable_update_key(stkctr.table, key));
	return &stkctr;
}
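The difference between the two entry points is lookup semantics: smp_fetch_sc_stkctr() uses a read-only lookup (stktable_lookup_key) and returns NULL for unknown keys, while smp_create_src_stkctr() uses stktable_update_key, which creates the entry on a miss. A toy model of that contrast, on a linear-scan table of integer keys (illustration only, none of this exists in HAProxy):

```c
#include <stddef.h>

struct toy_entry { int key; int used; long gpc0; };

#define TOY_SIZE 8
static struct toy_entry toy_tab[TOY_SIZE];

/* read-only lookup: NULL when the key is absent (like stktable_lookup_key) */
static struct toy_entry *toy_lookup(int key)
{
	for (size_t i = 0; i < TOY_SIZE; i++)
		if (toy_tab[i].used && toy_tab[i].key == key)
			return &toy_tab[i];
	return NULL;
}

/* create-if-missing: allocates a fresh entry on a miss (like stktable_update_key) */
static struct toy_entry *toy_update(int key)
{
	struct toy_entry *e = toy_lookup(key);

	if (e)
		return e;
	for (size_t i = 0; i < TOY_SIZE; i++) {
		if (!toy_tab[i].used) {
			toy_tab[i].used = 1;
			toy_tab[i].key = key;
			toy_tab[i].gpc0 = 0;
			return &toy_tab[i];
		}
	}
	return NULL; /* table full */
}
```

This is why fetches that must record something (src_inc_gpc0, src_clr_gpc0) go through the creating variant, while pure reads can safely return "not tracked".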
/* set <smp> to a boolean indicating whether the requested stream counter is
 * currently being tracked or not.
 * Supports being called as "sc[0-9]_tracked" only.
 */
static int
smp_fetch_sc_tracked(const struct arg *args, struct sample *smp, const char *kw, void *private)
{
	smp->flags = SMP_F_VOL_TEST;
	smp->data.type = SMP_T_BOOL;
	smp->data.u.sint = !!smp_fetch_sc_stkctr(smp->sess, smp->strm, args, kw);
	return 1;
}
/* set <smp> to the General Purpose Flag 0 value from the stream's tracked
 * frontend counters or from the src.
 * Supports being called as "sc[0-9]_get_gpt0" or "src_get_gpt0" only. Value
 * zero is returned if the key is new.
 */
static int
smp_fetch_sc_get_gpt0(const struct arg *args, struct sample *smp, const char *kw, void *private)
{
	struct stkctr *stkctr;

	stkctr = smp_fetch_sc_stkctr(smp->sess, smp->strm, args, kw);
	if (!stkctr)
		return 0;

	smp->flags = SMP_F_VOL_TEST;
	smp->data.type = SMP_T_SINT;
	smp->data.u.sint = 0;

	if (stkctr_entry(stkctr) != NULL) {
		void *ptr = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_GPT0);
		if (!ptr)
			return 0; /* parameter not stored */
		smp->data.u.sint = stktable_data_cast(ptr, gpt0);
	}
	return 1;
}
/* set <smp> to the General Purpose Counter 0 value from the stream's tracked
 * frontend counters or from the src.
 * Supports being called as "sc[0-9]_get_gpc0" or "src_get_gpc0" only. Value
 * zero is returned if the key is new.
 *
 * This counter may be used to track anything. Two sets of ACLs are available
 * to manage it: one gets its value, and the other one increments its value
 * and returns it. In the second case, the entry is created if it did not
 * exist. It is thus possible, for example, to mark a source as an abuser and
 * to keep it marked as long as it does not wait for the entry to expire:
 *
 *      # The rules below use gpc0 to track abusers, and reject them if
 *      # a source has been marked as such. The track-counters statement
 *      # automatically refreshes the entry, which will not expire until a
 *      # 1-minute silence is respected from the source. The second rule
 *      # evaluates the second part if the first one is true, so GPC0 will
 *      # be increased once the conn_rate is above 100/5s.
 *      stick-table type ip size 200k expire 1m store conn_rate(5s),gpc0
 *      tcp-request track-counters src
 *      tcp-request reject if { trk_get_gpc0 gt 0 }
 *      tcp-request reject if { trk_conn_rate gt 100 } { trk_inc_gpc0 gt 0 }
 *
 * Alternatively, it is possible to let the entry expire even in the presence
 * of traffic by swapping the check for gpc0 and the track-counters statement:
 *
 *      stick-table type ip size 200k expire 1m store conn_rate(5s),gpc0
 *      tcp-request reject if { src_get_gpc0 gt 0 }
 *      tcp-request track-counters src
 *      tcp-request reject if { trk_conn_rate gt 100 } { trk_inc_gpc0 gt 0 }
 *
 * It is also possible not to track counters at all, but entry lookups will
 * then be performed more often:
 *
 *      stick-table type ip size 200k expire 1m store conn_rate(5s),gpc0
 *      tcp-request reject if { src_get_gpc0 gt 0 }
 *      tcp-request reject if { src_conn_rate gt 100 } { src_inc_gpc0 gt 0 }
 *
 * The '0' at the end of the counter name is there because if we find that
 * more counters may be useful, other ones will be added.
 */
static int
smp_fetch_sc_get_gpc0(const struct arg *args, struct sample *smp, const char *kw, void *private)
{
	struct stkctr *stkctr;

	stkctr = smp_fetch_sc_stkctr(smp->sess, smp->strm, args, kw);
	if (!stkctr)
		return 0;

	smp->flags = SMP_F_VOL_TEST;
	smp->data.type = SMP_T_SINT;
	smp->data.u.sint = 0;

	if (stkctr_entry(stkctr) != NULL) {
		void *ptr = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_GPC0);
		if (!ptr)
			return 0; /* parameter not stored */
		smp->data.u.sint = stktable_data_cast(ptr, gpc0);
	}
	return 1;
}
/* set <smp> to the General Purpose Counter 0's event rate from the stream's
 * tracked frontend counters or from the src.
 * Supports being called as "sc[0-9]_gpc0_rate" or "src_gpc0_rate" only.
 * Value zero is returned if the key is new.
 */
static int
smp_fetch_sc_gpc0_rate(const struct arg *args, struct sample *smp, const char *kw, void *private)
{
	struct stkctr *stkctr;

	stkctr = smp_fetch_sc_stkctr(smp->sess, smp->strm, args, kw);
	if (!stkctr)
		return 0;

	smp->flags = SMP_F_VOL_TEST;
	smp->data.type = SMP_T_SINT;
	smp->data.u.sint = 0;
	if (stkctr_entry(stkctr) != NULL) {
		void *ptr = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_GPC0_RATE);
		if (!ptr)
			return 0; /* parameter not stored */
		smp->data.u.sint = read_freq_ctr_period(&stktable_data_cast(ptr, gpc0_rate),
		                                        stkctr->table->data_arg[STKTABLE_DT_GPC0_RATE].u);
	}
	return 1;
}
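read_freq_ctr_period() reports the event count over the table's configured period from a pair of counters: the current period's count plus the previous period's count scaled by how much of the current period remains. A simplified standalone model of that idea (an assumption for illustration, not HAProxy's actual freq_ctr implementation, which also handles concurrency and tick wrapping):

```c
/* Toy period-based frequency counter, modeling the behavior behind
 * gpc0_rate: events accumulate in the current period, and the previous
 * period's total is linearly decayed as the current period elapses.
 */
struct toy_freq_ctr {
	unsigned int curr_tick; /* start of the current period, in ms */
	unsigned int curr_ctr;  /* events seen in the current period */
	unsigned int prev_ctr;  /* events seen in the last full period */
};

static void toy_freq_update(struct toy_freq_ctr *f, unsigned int now_ms,
                            unsigned int period_ms, unsigned int inc)
{
	/* rotate periods until <now_ms> falls inside the current one;
	 * after two idle periods prev_ctr naturally becomes zero.
	 */
	while (now_ms - f->curr_tick >= period_ms) {
		f->prev_ctr = f->curr_ctr;
		f->curr_ctr = 0;
		f->curr_tick += period_ms;
	}
	f->curr_ctr += inc;
}

static unsigned int toy_freq_read(const struct toy_freq_ctr *f, unsigned int now_ms,
                                  unsigned int period_ms)
{
	unsigned int elapsed = now_ms - f->curr_tick;
	unsigned int remain = (elapsed < period_ms) ? period_ms - elapsed : 0;

	/* prev_ctr is weighted by the unelapsed fraction of the period */
	return f->curr_ctr + (unsigned int)((unsigned long long)f->prev_ctr * remain / period_ms);
}
```

The scaling is what makes the rate smooth: right after a period rotates, the previous period still counts almost fully, and its weight fades to zero as the new period completes.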
/* Increment the General Purpose Counter 0 value from the stream's tracked
 * frontend counters and return it in the temp integer.
 * Supports being called as "sc[0-9]_inc_gpc0" or "src_inc_gpc0" only.
 */
static int
smp_fetch_sc_inc_gpc0(const struct arg *args, struct sample *smp, const char *kw, void *private)
{
	struct stkctr *stkctr;

	stkctr = smp_fetch_sc_stkctr(smp->sess, smp->strm, args, kw);
	if (!stkctr)
		return 0;

	smp->flags = SMP_F_VOL_TEST;
	smp->data.type = SMP_T_SINT;
	smp->data.u.sint = 0;

	if (stkctr_entry(stkctr) == NULL)
		stkctr = smp_create_src_stkctr(smp->sess, smp->strm, args, kw);

	if (stkctr && stkctr_entry(stkctr)) {
		void *ptr1, *ptr2;

		/* First, update gpc0_rate if it's tracked. Second, update its
		 * gpc0 if tracked. Returns gpc0's value otherwise the curr_ctr.
		 */
		ptr1 = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_GPC0_RATE);
		if (ptr1) {
			update_freq_ctr_period(&stktable_data_cast(ptr1, gpc0_rate),
			                       stkctr->table->data_arg[STKTABLE_DT_GPC0_RATE].u, 1);
			smp->data.u.sint = (&stktable_data_cast(ptr1, gpc0_rate))->curr_ctr;
		}

		ptr2 = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_GPC0);
		if (ptr2)
			smp->data.u.sint = ++stktable_data_cast(ptr2, gpc0);

		/* If data was modified, we need to touch to re-schedule sync */
		if (ptr1 || ptr2)
			stktable_touch(stkctr->table, stkctr_entry(stkctr), 1);
[MINOR] session-counters: add a general purpose counter (gpc0)
This counter may be used to track anything. Two sets of ACLs are available
to manage it, one gets its value, and the other one increments its value
and returns it. In the second case, the entry is created if it did not
exist.
Thus it is possible for example to mark a source as being an abuser and
to keep it marked as long as it does not wait for the entry to expire :
# The rules below use gpc0 to track abusers, and reject them if
# a source has been marked as such. The track-counters statement
# automatically refreshes the entry which will not expire until a
# 1-minute silence is respected from the source. The second rule
# evaluates the second part if the first one is true, so GPC0 will
# be increased once the conn_rate is above 100/5s.
stick-table type ip size 200k expire 1m store conn_rate(5s),gpc0
tcp-request track-counters src
tcp-request reject if { trk_get_gpc0 gt 0 }
tcp-request reject if { trk_conn_rate gt 100 } { trk_inc_gpc0 gt 0}
Alternatively, it is possible to let the entry expire even in presence of
traffic by swapping the check for gpc0 and the track-counters statement :
stick-table type ip size 200k expire 1m store conn_rate(5s),gpc0
tcp-request reject if { src_get_gpc0 gt 0 }
tcp-request track-counters src
tcp-request reject if { trk_conn_rate gt 100 } { trk_inc_gpc0 gt 0}
It is also possible not to track counters at all, but entry lookups will
then be performed more often :
stick-table type ip size 200k expire 1m store conn_rate(5s),gpc0
tcp-request reject if { src_get_gpc0 gt 0 }
tcp-request reject if { src_conn_rate gt 100 } { src_inc_gpc0 gt 0}
The '0' at the end of the counter name is there because if we find that more
counters may be useful, other ones will be added.
2010-06-20 06:47:25 -04:00
}
return 1 ;
}
/* Clear the General Purpose Counter 0 value from the stream's tracked
 * frontend counters and return its previous value into temp integer.
 * Supports being called as "sc[0-9]_clr_gpc0" or "src_clr_gpc0" only.
 */
static int
smp_fetch_sc_clr_gpc0(const struct arg *args, struct sample *smp, const char *kw, void *private)
{
	struct stkctr *stkctr;

	stkctr = smp_fetch_sc_stkctr(smp->sess, smp->strm, args, kw);
	if (!stkctr)
		return 0;

	smp->flags = SMP_F_VOL_TEST;
	smp->data.type = SMP_T_SINT;
	smp->data.u.sint = 0;

	if (stkctr_entry(stkctr) == NULL)
		stkctr = smp_create_src_stkctr(smp->sess, smp->strm, args, kw);

	if (stkctr_entry(stkctr) != NULL) {
		void *ptr = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_GPC0);

		if (!ptr)
			return 0; /* parameter not stored */

		smp->data.u.sint = stktable_data_cast(ptr, gpc0);
		stktable_data_cast(ptr, gpc0) = 0;

		/* If data was modified, we need to touch to re-schedule sync */
		stktable_touch(stkctr->table, stkctr_entry(stkctr), 1);
	}
	return 1;
}
/* set <smp> to the cumulated number of connections from the stream's tracked
 * frontend counters. Supports being called as "sc[0-9]_conn_cnt" or
 * "src_conn_cnt" only.
 */
static int
smp_fetch_sc_conn_cnt(const struct arg *args, struct sample *smp, const char *kw, void *private)
{
	struct stkctr *stkctr;

	stkctr = smp_fetch_sc_stkctr(smp->sess, smp->strm, args, kw);
	if (!stkctr)
		return 0;

	smp->flags = SMP_F_VOL_TEST;
	smp->data.type = SMP_T_SINT;
	smp->data.u.sint = 0;
	if (stkctr_entry(stkctr) != NULL) {
		void *ptr = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_CONN_CNT);

		if (!ptr)
			return 0; /* parameter not stored */

		smp->data.u.sint = stktable_data_cast(ptr, conn_cnt);
	}
	return 1;
}
/* set <smp> to the connection rate from the stream's tracked frontend
 * counters. Supports being called as "sc[0-9]_conn_rate" or "src_conn_rate"
 * only.
 */
static int
smp_fetch_sc_conn_rate(const struct arg *args, struct sample *smp, const char *kw, void *private)
{
	struct stkctr *stkctr;

	stkctr = smp_fetch_sc_stkctr(smp->sess, smp->strm, args, kw);
	if (!stkctr)
		return 0;

	smp->flags = SMP_F_VOL_TEST;
	smp->data.type = SMP_T_SINT;
	smp->data.u.sint = 0;
	if (stkctr_entry(stkctr) != NULL) {
		void *ptr = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_CONN_RATE);

		if (!ptr)
			return 0; /* parameter not stored */

		smp->data.u.sint = read_freq_ctr_period(&stktable_data_cast(ptr, conn_rate),
		                                        stkctr->table->data_arg[STKTABLE_DT_CONN_RATE].u);
	}
	return 1;
}
/* set temp integer to the number of connections from the stream's source
 * address in the table pointed to by expr, after updating it.
 * Accepts exactly 1 argument of type table.
 */
static int
smp_fetch_src_updt_conn_cnt(const struct arg *args, struct sample *smp, const char *kw, void *private)
{
	struct connection *conn = objt_conn(smp->sess->origin);
	struct stksess *ts;
	struct stktable_key *key;
	void *ptr;
	struct proxy *px;

	if (!conn)
		return 0;

	/* Fetch source address in a sample. */
	if (!smp_fetch_src(NULL, smp, NULL, NULL))
		return 0;

	/* Converts into key. */
	key = smp_to_stkey(smp, &args->data.prx->table);
	if (!key)
		return 0;

	px = args->data.prx;

	if ((ts = stktable_update_key(&px->table, key)) == NULL)
		/* entry does not exist and could not be created */
		return 0;

	ptr = stktable_data_ptr(&px->table, ts, STKTABLE_DT_CONN_CNT);
	if (!ptr)
		return 0; /* parameter not stored in this table */

	smp->data.type = SMP_T_SINT;
	smp->data.u.sint = ++stktable_data_cast(ptr, conn_cnt);
	/* Touch was previously performed by stktable_update_key */
	smp->flags = SMP_F_VOL_TEST;
	return 1;
}
/* set <smp> to the number of concurrent connections from the stream's tracked
 * frontend counters. Supports being called as "sc[0-9]_conn_cur" or
 * "src_conn_cur" only.
 */
static int
smp_fetch_sc_conn_cur(const struct arg *args, struct sample *smp, const char *kw, void *private)
{
	struct stkctr *stkctr;

	stkctr = smp_fetch_sc_stkctr(smp->sess, smp->strm, args, kw);
	if (!stkctr)
		return 0;

	smp->flags = SMP_F_VOL_TEST;
	smp->data.type = SMP_T_SINT;
	smp->data.u.sint = 0;
	if (stkctr_entry(stkctr) != NULL) {
		void *ptr = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_CONN_CUR);

		if (!ptr)
			return 0; /* parameter not stored */

		smp->data.u.sint = stktable_data_cast(ptr, conn_cur);
	}
	return 1;
}
/* set <smp> to the cumulated number of streams from the stream's tracked
 * frontend counters. Supports being called as "sc[0-9]_sess_cnt" or
 * "src_sess_cnt" only.
 */
static int
smp_fetch_sc_sess_cnt(const struct arg *args, struct sample *smp, const char *kw, void *private)
{
	struct stkctr *stkctr;

	stkctr = smp_fetch_sc_stkctr(smp->sess, smp->strm, args, kw);
	if (!stkctr)
		return 0;

	smp->flags = SMP_F_VOL_TEST;
	smp->data.type = SMP_T_SINT;
	smp->data.u.sint = 0;
	if (stkctr_entry(stkctr) != NULL) {
		void *ptr = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_SESS_CNT);

		if (!ptr)
			return 0; /* parameter not stored */

		smp->data.u.sint = stktable_data_cast(ptr, sess_cnt);
	}
	return 1;
}
/* set <smp> to the stream rate from the stream's tracked frontend counters.
 * Supports being called as "sc[0-9]_sess_rate" or "src_sess_rate" only.
 */
static int
smp_fetch_sc_sess_rate(const struct arg *args, struct sample *smp, const char *kw, void *private)
{
	struct stkctr *stkctr;

	stkctr = smp_fetch_sc_stkctr(smp->sess, smp->strm, args, kw);
	if (!stkctr)
		return 0;

	smp->flags = SMP_F_VOL_TEST;
	smp->data.type = SMP_T_SINT;
	smp->data.u.sint = 0;
	if (stkctr_entry(stkctr) != NULL) {
		void *ptr = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_SESS_RATE);

		if (!ptr)
			return 0; /* parameter not stored */

		smp->data.u.sint = read_freq_ctr_period(&stktable_data_cast(ptr, sess_rate),
		                                        stkctr->table->data_arg[STKTABLE_DT_SESS_RATE].u);
	}
	return 1;
}
/* set <smp> to the cumulated number of HTTP requests from the stream's tracked
 * frontend counters. Supports being called as "sc[0-9]_http_req_cnt" or
 * "src_http_req_cnt" only.
 */
static int
smp_fetch_sc_http_req_cnt(const struct arg *args, struct sample *smp, const char *kw, void *private)
{
	struct stkctr *stkctr;

	stkctr = smp_fetch_sc_stkctr(smp->sess, smp->strm, args, kw);
	if (!stkctr)
		return 0;

	smp->flags = SMP_F_VOL_TEST;
	smp->data.type = SMP_T_SINT;
	smp->data.u.sint = 0;
	if (stkctr_entry(stkctr) != NULL) {
		void *ptr = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_HTTP_REQ_CNT);

		if (!ptr)
			return 0; /* parameter not stored */

		smp->data.u.sint = stktable_data_cast(ptr, http_req_cnt);
	}
	return 1;
}
/* set <smp> to the HTTP request rate from the stream's tracked frontend
 * counters. Supports being called as "sc[0-9]_http_req_rate" or
 * "src_http_req_rate" only.
 */
static int
smp_fetch_sc_http_req_rate(const struct arg *args, struct sample *smp, const char *kw, void *private)
{
	struct stkctr *stkctr;

	stkctr = smp_fetch_sc_stkctr(smp->sess, smp->strm, args, kw);
	if (!stkctr)
		return 0;

	smp->flags = SMP_F_VOL_TEST;
	smp->data.type = SMP_T_SINT;
	smp->data.u.sint = 0;
	if (stkctr_entry(stkctr) != NULL) {
		void *ptr = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_HTTP_REQ_RATE);

		if (!ptr)
			return 0; /* parameter not stored */

		smp->data.u.sint = read_freq_ctr_period(&stktable_data_cast(ptr, http_req_rate),
		                                        stkctr->table->data_arg[STKTABLE_DT_HTTP_REQ_RATE].u);
	}
	return 1;
}
/* set <smp> to the cumulated number of HTTP request errors from the stream's
 * tracked frontend counters. Supports being called as "sc[0-9]_http_err_cnt"
 * or "src_http_err_cnt" only.
 */
static int
smp_fetch_sc_http_err_cnt(const struct arg *args, struct sample *smp, const char *kw, void *private)
{
	struct stkctr *stkctr;

	stkctr = smp_fetch_sc_stkctr(smp->sess, smp->strm, args, kw);
	if (!stkctr)
		return 0;

	smp->flags = SMP_F_VOL_TEST;
	smp->data.type = SMP_T_SINT;
	smp->data.u.sint = 0;
	if (stkctr_entry(stkctr) != NULL) {
		void *ptr = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_HTTP_ERR_CNT);

		if (!ptr)
			return 0; /* parameter not stored */

		smp->data.u.sint = stktable_data_cast(ptr, http_err_cnt);
	}
	return 1;
}
/* set <smp> to the HTTP request error rate from the stream's tracked frontend
 * counters. Supports being called as "sc[0-9]_http_err_rate" or
 * "src_http_err_rate" only.
 */
static int
smp_fetch_sc_http_err_rate(const struct arg *args, struct sample *smp, const char *kw, void *private)
{
	struct stkctr *stkctr;

	stkctr = smp_fetch_sc_stkctr(smp->sess, smp->strm, args, kw);
	if (!stkctr)
		return 0;

	smp->flags = SMP_F_VOL_TEST;
	smp->data.type = SMP_T_SINT;
	smp->data.u.sint = 0;
	if (stkctr_entry(stkctr) != NULL) {
		void *ptr = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_HTTP_ERR_RATE);
		if (!ptr)
			return 0; /* parameter not stored */
		smp->data.u.sint = read_freq_ctr_period(&stktable_data_cast(ptr, http_err_rate),
		                                        stkctr->table->data_arg[STKTABLE_DT_HTTP_ERR_RATE].u);
	}
	return 1;
}
/* set <smp> to the number of kbytes received from clients, as found in the
 * stream's tracked frontend counters. Supports being called as
 * "sc[0-9]_kbytes_in" or "src_kbytes_in" only.
 */
static int
smp_fetch_sc_kbytes_in(const struct arg *args, struct sample *smp, const char *kw, void *private)
{
	struct stkctr *stkctr;

	stkctr = smp_fetch_sc_stkctr(smp->sess, smp->strm, args, kw);
	if (!stkctr)
		return 0;

	smp->flags = SMP_F_VOL_TEST;
	smp->data.type = SMP_T_SINT;
	smp->data.u.sint = 0;
	if (stkctr_entry(stkctr) != NULL) {
		void *ptr = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_BYTES_IN_CNT);
		if (!ptr)
			return 0; /* parameter not stored */
		smp->data.u.sint = stktable_data_cast(ptr, bytes_in_cnt) >> 10;
	}
	return 1;
}
/* set <smp> to the data rate received from clients in bytes/s, as found
 * in the stream's tracked frontend counters. Supports being called as
 * "sc[0-9]_bytes_in_rate" or "src_bytes_in_rate" only.
 */
static int
smp_fetch_sc_bytes_in_rate(const struct arg *args, struct sample *smp, const char *kw, void *private)
{
	struct stkctr *stkctr;

	stkctr = smp_fetch_sc_stkctr(smp->sess, smp->strm, args, kw);
	if (!stkctr)
		return 0;

	smp->flags = SMP_F_VOL_TEST;
	smp->data.type = SMP_T_SINT;
	smp->data.u.sint = 0;
	if (stkctr_entry(stkctr) != NULL) {
		void *ptr = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_BYTES_IN_RATE);
		if (!ptr)
			return 0; /* parameter not stored */
		smp->data.u.sint = read_freq_ctr_period(&stktable_data_cast(ptr, bytes_in_rate),
		                                        stkctr->table->data_arg[STKTABLE_DT_BYTES_IN_RATE].u);
	}
	return 1;
}
/* set <smp> to the number of kbytes sent to clients, as found in the
 * stream's tracked frontend counters. Supports being called as
 * "sc[0-9]_kbytes_out" or "src_kbytes_out" only.
 */
static int
smp_fetch_sc_kbytes_out(const struct arg *args, struct sample *smp, const char *kw, void *private)
{
	struct stkctr *stkctr;

	stkctr = smp_fetch_sc_stkctr(smp->sess, smp->strm, args, kw);
	if (!stkctr)
		return 0;

	smp->flags = SMP_F_VOL_TEST;
	smp->data.type = SMP_T_SINT;
	smp->data.u.sint = 0;
	if (stkctr_entry(stkctr) != NULL) {
		void *ptr = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_BYTES_OUT_CNT);
		if (!ptr)
			return 0; /* parameter not stored */
		smp->data.u.sint = stktable_data_cast(ptr, bytes_out_cnt) >> 10;
	}
	return 1;
}
/* set <smp> to the data rate sent to clients in bytes/s, as found in the
 * stream's tracked frontend counters. Supports being called as
 * "sc[0-9]_bytes_out_rate" or "src_bytes_out_rate" only.
 */
static int
smp_fetch_sc_bytes_out_rate(const struct arg *args, struct sample *smp, const char *kw, void *private)
{
	struct stkctr *stkctr;

	stkctr = smp_fetch_sc_stkctr(smp->sess, smp->strm, args, kw);
	if (!stkctr)
		return 0;

	smp->flags = SMP_F_VOL_TEST;
	smp->data.type = SMP_T_SINT;
	smp->data.u.sint = 0;
	if (stkctr_entry(stkctr) != NULL) {
		void *ptr = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_BYTES_OUT_RATE);
		if (!ptr)
			return 0; /* parameter not stored */
		smp->data.u.sint = read_freq_ctr_period(&stktable_data_cast(ptr, bytes_out_rate),
		                                        stkctr->table->data_arg[STKTABLE_DT_BYTES_OUT_RATE].u);
	}
	return 1;
}
/* set <smp> to the number of active trackers on the SC entry in the stream's
 * tracked frontend counters. Supports being called as "sc[0-9]_trackers" only.
 */
static int
smp_fetch_sc_trackers(const struct arg *args, struct sample *smp, const char *kw, void *private)
{
	struct stkctr *stkctr;

	stkctr = smp_fetch_sc_stkctr(smp->sess, smp->strm, args, kw);
	if (!stkctr)
		return 0;

	smp->flags = SMP_F_VOL_TEST;
	smp->data.type = SMP_T_SINT;
	smp->data.u.sint = stkctr_entry(stkctr) ? stkctr_entry(stkctr)->ref_cnt : 0;
	return 1;
}
/* set temp integer to the number of used entries in the table pointed to by expr.
 * Accepts exactly 1 argument of type table.
 */
static int
smp_fetch_table_cnt(const struct arg *args, struct sample *smp, const char *kw, void *private)
{
	smp->flags = SMP_F_VOL_TEST;
	smp->data.type = SMP_T_SINT;
	smp->data.u.sint = args->data.prx->table.current;
	return 1;
}
/* set temp integer to the number of free entries in the table pointed to by expr.
 * Accepts exactly 1 argument of type table.
 */
static int
smp_fetch_table_avl(const struct arg *args, struct sample *smp, const char *kw, void *private)
{
	struct proxy *px;

	px = args->data.prx;
	smp->flags = SMP_F_VOL_TEST;
	smp->data.type = SMP_T_SINT;
	smp->data.u.sint = px->table.size - px->table.current;
	return 1;
}
/* 0=OK, <0=Alert, >0=Warning */
static enum act_parse_ret stream_parse_use_service(const char **args, int *cur_arg,
                                                   struct proxy *px, struct act_rule *rule,
                                                   char **err)
{
	struct action_kw *kw;

	/* Check if the service name exists. */
	if (*(args[*cur_arg]) == 0) {
		memprintf(err, "'%s' expects a service name.", args[0]);
		return ACT_RET_PRS_ERR;
	}

	/* lookup for keyword corresponding to a service. */
	kw = action_lookup(&service_keywords, args[*cur_arg]);
	if (!kw) {
		memprintf(err, "'%s' unknown service name.", args[1]);
		return ACT_RET_PRS_ERR;
	}
	(*cur_arg)++;

	/* executes specific rule parser. */
	rule->kw = kw;
	if (kw->parse((const char **)args, cur_arg, px, rule, err) == ACT_RET_PRS_ERR)
		return ACT_RET_PRS_ERR;

	/* Register processing function. */
	rule->action_ptr = process_use_service;
	rule->action = ACT_CUSTOM;

	return ACT_RET_PRS_OK;
}

void service_keywords_register(struct action_kw_list *kw_list)
{
	LIST_ADDQ(&service_keywords, &kw_list->list);
}
/* This function dumps a complete stream state onto the stream interface's
 * read buffer. The stream has to be set in strm. It returns 0 if the output
 * buffer is full and it needs to be called again, otherwise non-zero. It is
 * designed to be called from stats_dump_strm_to_buffer() below.
 */
static int stats_dump_full_strm_to_buffer(struct stream_interface *si, struct stream *strm)
{
	struct appctx *appctx = __objt_appctx(si->end);
	struct tm tm;
	extern const char *monthname[12];
	char pn[INET6_ADDRSTRLEN];
	struct connection *conn;
	struct appctx *tmpctx;

	chunk_reset(&trash);

	if (appctx->ctx.sess.section > 0 && appctx->ctx.sess.uid != strm->uniq_id) {
		/* stream changed, no need to go any further */
		chunk_appendf(&trash, "  *** session terminated while we were watching it ***\n");
		if (bi_putchk(si_ic(si), &trash) == -1) {
			si_applet_cant_put(si);
			return 0;
		}
		appctx->ctx.sess.uid = 0;
		appctx->ctx.sess.section = 0;
		return 1;
	}

	switch (appctx->ctx.sess.section) {
	case 0: /* main status of the stream */
		appctx->ctx.sess.uid = strm->uniq_id;
		appctx->ctx.sess.section = 1;
		/* fall through */

	case 1:
		get_localtime(strm->logs.accept_date.tv_sec, &tm);
		chunk_appendf(&trash,
			     "%p: [%02d/%s/%04d:%02d:%02d:%02d.%06d] id=%u proto=%s",
			     strm,
			     tm.tm_mday, monthname[tm.tm_mon], tm.tm_year + 1900,
			     tm.tm_hour, tm.tm_min, tm.tm_sec, (int)(strm->logs.accept_date.tv_usec),
			     strm->uniq_id,
			     strm_li(strm) ? strm_li(strm)->proto->name : "?");

		conn = objt_conn(strm_orig(strm));
		switch (conn ? addr_to_str(&conn->addr.from, pn, sizeof(pn)) : AF_UNSPEC) {
		case AF_INET:
		case AF_INET6:
			chunk_appendf(&trash, " source=%s:%d\n",
				      pn, get_host_port(&conn->addr.from));
			break;
		case AF_UNIX:
			chunk_appendf(&trash, " source=unix:%d\n", strm_li(strm)->luid);
			break;
		default:
			/* no more information to print right now */
			chunk_appendf(&trash, "\n");
			break;
		}

		chunk_appendf(&trash,
			     "  flags=0x%x, conn_retries=%d, srv_conn=%p, pend_pos=%p\n",
			     strm->flags, strm->si[1].conn_retries, strm->srv_conn, strm->pend_pos);

		chunk_appendf(&trash,
			     "  frontend=%s (id=%u mode=%s), listener=%s (id=%u)",
			     strm_fe(strm)->id, strm_fe(strm)->uuid, strm_fe(strm)->mode ? "http" : "tcp",
			     strm_li(strm) ? strm_li(strm)->name ? strm_li(strm)->name : "?" : "?",
			     strm_li(strm) ? strm_li(strm)->luid : 0);

		if (conn)
			conn_get_to_addr(conn);

		switch (conn ? addr_to_str(&conn->addr.to, pn, sizeof(pn)) : AF_UNSPEC) {
		case AF_INET:
		case AF_INET6:
			chunk_appendf(&trash, " addr=%s:%d\n",
				      pn, get_host_port(&conn->addr.to));
			break;
		case AF_UNIX:
			chunk_appendf(&trash, " addr=unix:%d\n", strm_li(strm)->luid);
			break;
		default:
			/* no more information to print right now */
			chunk_appendf(&trash, "\n");
			break;
		}

		if (strm->be->cap & PR_CAP_BE)
			chunk_appendf(&trash,
				     "  backend=%s (id=%u mode=%s)",
				     strm->be->id,
				     strm->be->uuid, strm->be->mode ? "http" : "tcp");
		else
			chunk_appendf(&trash, "  backend=<NONE> (id=-1 mode=-)");

		conn = objt_conn(strm->si[1].end);
		if (conn)
			conn_get_from_addr(conn);

		switch (conn ? addr_to_str(&conn->addr.from, pn, sizeof(pn)) : AF_UNSPEC) {
		case AF_INET:
		case AF_INET6:
			chunk_appendf(&trash, " addr=%s:%d\n",
				      pn, get_host_port(&conn->addr.from));
			break;
		case AF_UNIX:
			chunk_appendf(&trash, " addr=unix\n");
			break;
		default:
			/* no more information to print right now */
			chunk_appendf(&trash, "\n");
			break;
		}

		if (strm->be->cap & PR_CAP_BE)
			chunk_appendf(&trash,
				     "  server=%s (id=%u)",
				     objt_server(strm->target) ? objt_server(strm->target)->id : "<none>",
				     objt_server(strm->target) ? objt_server(strm->target)->puid : 0);
		else
			chunk_appendf(&trash, "  server=<NONE> (id=-1)");

		if (conn)
			conn_get_to_addr(conn);

		switch (conn ? addr_to_str(&conn->addr.to, pn, sizeof(pn)) : AF_UNSPEC) {
		case AF_INET:
		case AF_INET6:
			chunk_appendf(&trash, " addr=%s:%d\n",
				      pn, get_host_port(&conn->addr.to));
			break;
		case AF_UNIX:
			chunk_appendf(&trash, " addr=unix\n");
			break;
		default:
			/* no more information to print right now */
			chunk_appendf(&trash, "\n");
			break;
		}

		chunk_appendf(&trash,
			     "  task=%p (state=0x%02x nice=%d calls=%d exp=%s%s",
			     strm->task,
			     strm->task->state,
			     strm->task->nice, strm->task->calls,
			     strm->task->expire ?
			             tick_is_expired(strm->task->expire, now_ms) ? "<PAST>" :
			                     human_time(TICKS_TO_MS(strm->task->expire - now_ms),
			                     TICKS_TO_MS(1000)) : "<NEVER>",
			     task_in_rq(strm->task) ? ", running" : "");

		chunk_appendf(&trash,
			     " age=%s)\n",
			     human_time(now.tv_sec - strm->logs.accept_date.tv_sec, 1));

		if (strm->txn)
			chunk_appendf(&trash,
			     "  txn=%p flags=0x%x meth=%d status=%d req.st=%s rsp.st=%s waiting=%d\n",
			      strm->txn, strm->txn->flags, strm->txn->meth, strm->txn->status,
			      http_msg_state_str(strm->txn->req.msg_state), http_msg_state_str(strm->txn->rsp.msg_state), !LIST_ISEMPTY(&strm->buffer_wait));

		chunk_appendf(&trash,
			     "  si[0]=%p (state=%s flags=0x%02x endp0=%s:%p exp=%s, et=0x%03x)\n",
			     &strm->si[0],
			     si_state_str(strm->si[0].state),
			     strm->si[0].flags,
			     obj_type_name(strm->si[0].end),
			     obj_base_ptr(strm->si[0].end),
			     strm->si[0].exp ?
			             tick_is_expired(strm->si[0].exp, now_ms) ? "<PAST>" :
			                     human_time(TICKS_TO_MS(strm->si[0].exp - now_ms),
			                     TICKS_TO_MS(1000)) : "<NEVER>",
			     strm->si[0].err_type);

		chunk_appendf(&trash,
			     "  si[1]=%p (state=%s flags=0x%02x endp1=%s:%p exp=%s, et=0x%03x)\n",
			     &strm->si[1],
			     si_state_str(strm->si[1].state),
			     strm->si[1].flags,
			     obj_type_name(strm->si[1].end),
			     obj_base_ptr(strm->si[1].end),
			     strm->si[1].exp ?
			             tick_is_expired(strm->si[1].exp, now_ms) ? "<PAST>" :
			                     human_time(TICKS_TO_MS(strm->si[1].exp - now_ms),
			                     TICKS_TO_MS(1000)) : "<NEVER>",
			     strm->si[1].err_type);

		if ((conn = objt_conn(strm->si[0].end)) != NULL) {
			chunk_appendf(&trash,
			              "  co0=%p ctrl=%s xprt=%s data=%s target=%s:%p\n",
				      conn,
				      conn_get_ctrl_name(conn),
				      conn_get_xprt_name(conn),
				      conn_get_data_name(conn),
			              obj_type_name(conn->target),
			              obj_base_ptr(conn->target));

			chunk_appendf(&trash,
			              "      flags=0x%08x fd=%d fd.state=%02x fd.cache=%d updt=%d\n",
			              conn->flags,
			              conn->t.sock.fd,
			              conn->t.sock.fd >= 0 ? fdtab[conn->t.sock.fd].state : 0,
			              conn->t.sock.fd >= 0 ? fdtab[conn->t.sock.fd].cache : 0,
			              conn->t.sock.fd >= 0 ? fdtab[conn->t.sock.fd].updated : 0);
		}
		else if ((tmpctx = objt_appctx(strm->si[0].end)) != NULL) {
			chunk_appendf(&trash,
			              "  app0=%p st0=%d st1=%d st2=%d applet=%s\n",
				      tmpctx,
				      tmpctx->st0,
				      tmpctx->st1,
				      tmpctx->st2,
			              tmpctx->applet->name);
		}

		if ((conn = objt_conn(strm->si[1].end)) != NULL) {
			chunk_appendf(&trash,
			              "  co1=%p ctrl=%s xprt=%s data=%s target=%s:%p\n",
				      conn,
				      conn_get_ctrl_name(conn),
				      conn_get_xprt_name(conn),
				      conn_get_data_name(conn),
			              obj_type_name(conn->target),
			              obj_base_ptr(conn->target));

			chunk_appendf(&trash,
			              "      flags=0x%08x fd=%d fd.state=%02x fd.cache=%d updt=%d\n",
			              conn->flags,
			              conn->t.sock.fd,
			              conn->t.sock.fd >= 0 ? fdtab[conn->t.sock.fd].state : 0,
			              conn->t.sock.fd >= 0 ? fdtab[conn->t.sock.fd].cache : 0,
			              conn->t.sock.fd >= 0 ? fdtab[conn->t.sock.fd].updated : 0);
		}
		else if ((tmpctx = objt_appctx(strm->si[1].end)) != NULL) {
			chunk_appendf(&trash,
			              "  app1=%p st0=%d st1=%d st2=%d applet=%s\n",
				      tmpctx,
				      tmpctx->st0,
				      tmpctx->st1,
				      tmpctx->st2,
			              tmpctx->applet->name);
		}

		chunk_appendf(&trash,
			     "  req=%p (f=0x%06x an=0x%x pipe=%d tofwd=%d total=%lld)\n"
			     "      an_exp=%s",
			     &strm->req,
			     strm->req.flags, strm->req.analysers,
			     strm->req.pipe ? strm->req.pipe->data : 0,
			     strm->req.to_forward, strm->req.total,
			     strm->req.analyse_exp ?
			     human_time(TICKS_TO_MS(strm->req.analyse_exp - now_ms),
					TICKS_TO_MS(1000)) : "<NEVER>");

		chunk_appendf(&trash,
			     " rex=%s",
			     strm->req.rex ?
			     human_time(TICKS_TO_MS(strm->req.rex - now_ms),
					TICKS_TO_MS(1000)) : "<NEVER>");

		chunk_appendf(&trash,
			     " wex=%s\n"
			     "      buf=%p data=%p o=%d p=%d req.next=%d i=%d size=%d\n",
			     strm->req.wex ?
			     human_time(TICKS_TO_MS(strm->req.wex - now_ms),
					TICKS_TO_MS(1000)) : "<NEVER>",
			     strm->req.buf,
			     strm->req.buf->data, strm->req.buf->o,
			     (int)(strm->req.buf->p - strm->req.buf->data),
			     strm->txn ? strm->txn->req.next : 0, strm->req.buf->i,
			     strm->req.buf->size);

		chunk_appendf(&trash,
			     "  res=%p (f=0x%06x an=0x%x pipe=%d tofwd=%d total=%lld)\n"
			     "      an_exp=%s",
			     &strm->res,
			     strm->res.flags, strm->res.analysers,
			     strm->res.pipe ? strm->res.pipe->data : 0,
			     strm->res.to_forward, strm->res.total,
			     strm->res.analyse_exp ?
			     human_time(TICKS_TO_MS(strm->res.analyse_exp - now_ms),
					TICKS_TO_MS(1000)) : "<NEVER>");

		chunk_appendf(&trash,
			     " rex=%s",
			     strm->res.rex ?
			     human_time(TICKS_TO_MS(strm->res.rex - now_ms),
					TICKS_TO_MS(1000)) : "<NEVER>");

		chunk_appendf(&trash,
			     " wex=%s\n"
			     "      buf=%p data=%p o=%d p=%d rsp.next=%d i=%d size=%d\n",
			     strm->res.wex ?
			     human_time(TICKS_TO_MS(strm->res.wex - now_ms),
					TICKS_TO_MS(1000)) : "<NEVER>",
			     strm->res.buf,
			     strm->res.buf->data, strm->res.buf->o,
			     (int)(strm->res.buf->p - strm->res.buf->data),
			     strm->txn ? strm->txn->rsp.next : 0, strm->res.buf->i,
			     strm->res.buf->size);

		if (bi_putchk(si_ic(si), &trash) == -1) {
			si_applet_cant_put(si);
			return 0;
		}

		/* use other states to dump the contents */
	}
	/* end of dump */
	appctx->ctx.sess.uid = 0;
	appctx->ctx.sess.section = 0;
	return 1;
}
static int cli_parse_show_sess(char **args, struct appctx *appctx, void *private)
{
	appctx->st2 = STAT_ST_INIT;

	if (!cli_has_level(appctx, ACCESS_LVL_OPER))
		return 1;

	if (*args[2] && strcmp(args[2], "all") == 0)
		appctx->ctx.sess.target = (void *)-1;
	else if (*args[2])
		appctx->ctx.sess.target = (void *)strtoul(args[2], NULL, 0);
	else
		appctx->ctx.sess.target = NULL;
	appctx->ctx.sess.section = 0; /* start with stream status */
	appctx->ctx.sess.pos = 0;

	return 0;
}
/* This function dumps all streams' states onto the stream interface's
 * read buffer. It returns 0 if the output buffer is full and it needs
 * to be called again, otherwise non-zero. It is designed to be called
 * from stats_dump_sess_to_buffer() below.
 */
static int cli_io_handler_dump_sess(struct appctx *appctx)
{
	struct stream_interface *si = appctx->owner;
	struct connection *conn;

	if (unlikely(si_ic(si)->flags & (CF_WRITE_ERROR|CF_SHUTW))) {
		/* If we're forced to shut down, we might have to remove our
		 * reference to the last stream being dumped.
		 */
		if (appctx->st2 == STAT_ST_LIST) {
			if (!LIST_ISEMPTY(&appctx->ctx.sess.bref.users)) {
				LIST_DEL(&appctx->ctx.sess.bref.users);
				LIST_INIT(&appctx->ctx.sess.bref.users);
			}
		}
		return 1;
	}

	chunk_reset(&trash);

	switch (appctx->st2) {
	case STAT_ST_INIT:
		/* the function had not been called yet, let's prepare the
		 * buffer for a response. We initialize the current stream
		 * pointer to the first in the global list. When a target
		 * stream is being destroyed, it is responsible for updating
		 * this pointer. We know we have reached the end when this
		 * pointer points back to the head of the streams list.
		 */
		LIST_INIT(&appctx->ctx.sess.bref.users);
		appctx->ctx.sess.bref.ref = streams.n;
		appctx->st2 = STAT_ST_LIST;
		/* fall through */

	case STAT_ST_LIST:
		/* first, let's detach the back-ref from a possible previous stream */
		if (!LIST_ISEMPTY(&appctx->ctx.sess.bref.users)) {
			LIST_DEL(&appctx->ctx.sess.bref.users);
			LIST_INIT(&appctx->ctx.sess.bref.users);
		}

		/* and start from where we stopped */
		while (appctx->ctx.sess.bref.ref != &streams) {
			char pn[INET6_ADDRSTRLEN];
			struct stream *curr_strm;

			curr_strm = LIST_ELEM(appctx->ctx.sess.bref.ref, struct stream *, list);

			if (appctx->ctx.sess.target) {
				if (appctx->ctx.sess.target != (void *)-1 && appctx->ctx.sess.target != curr_strm)
					goto next_sess;

				LIST_ADDQ(&curr_strm->back_refs, &appctx->ctx.sess.bref.users);
				/* call the proper dump() function and return if we're missing space */
				if (!stats_dump_full_strm_to_buffer(si, curr_strm))
					return 0;

				/* stream dump complete */
				LIST_DEL(&appctx->ctx.sess.bref.users);
				LIST_INIT(&appctx->ctx.sess.bref.users);
				if (appctx->ctx.sess.target != (void *)-1) {
					appctx->ctx.sess.target = NULL;
					break;
				}
				else
					goto next_sess;
			}

			chunk_appendf(&trash,
				     "%p: proto=%s",
				     curr_strm,
				     strm_li(curr_strm) ? strm_li(curr_strm)->proto->name : "?");

			conn = objt_conn(strm_orig(curr_strm));
			switch (conn ? addr_to_str(&conn->addr.from, pn, sizeof(pn)) : AF_UNSPEC) {
			case AF_INET:
			case AF_INET6:
				chunk_appendf(&trash,
					     " src=%s:%d fe=%s be=%s srv=%s",
					     pn,
					     get_host_port(&conn->addr.from),
					     strm_fe(curr_strm)->id,
					     (curr_strm->be->cap & PR_CAP_BE) ? curr_strm->be->id : "<NONE>",
					     objt_server(curr_strm->target) ? objt_server(curr_strm->target)->id : "<none>"
					     );
				break;
			case AF_UNIX:
				chunk_appendf(&trash,
					     " src=unix:%d fe=%s be=%s srv=%s",
					     strm_li(curr_strm)->luid,
					     strm_fe(curr_strm)->id,
					     (curr_strm->be->cap & PR_CAP_BE) ? curr_strm->be->id : "<NONE>",
					     objt_server(curr_strm->target) ? objt_server(curr_strm->target)->id : "<none>"
					     );
				break;
			}

			chunk_appendf(&trash,
				     " ts=%02x age=%s calls=%d",
				     curr_strm->task->state,
				     human_time(now.tv_sec - curr_strm->logs.tv_accept.tv_sec, 1),
				     curr_strm->task->calls);

			chunk_appendf(&trash,
				     " rq[f=%06xh,i=%d,an=%02xh,rx=%s",
				     curr_strm->req.flags,
				     curr_strm->req.buf->i,
				     curr_strm->req.analysers,
				     curr_strm->req.rex ?
				     human_time(TICKS_TO_MS(curr_strm->req.rex - now_ms),
						TICKS_TO_MS(1000)) : "");

			chunk_appendf(&trash,
				     ",wx=%s",
				     curr_strm->req.wex ?
				     human_time(TICKS_TO_MS(curr_strm->req.wex - now_ms),
						TICKS_TO_MS(1000)) : "");

			chunk_appendf(&trash,
				     ",ax=%s]",
				     curr_strm->req.analyse_exp ?
				     human_time(TICKS_TO_MS(curr_strm->req.analyse_exp - now_ms),
						TICKS_TO_MS(1000)) : "");

			chunk_appendf(&trash,
				     " rp[f=%06xh,i=%d,an=%02xh,rx=%s",
				     curr_strm->res.flags,
				     curr_strm->res.buf->i,
				     curr_strm->res.analysers,
				     curr_strm->res.rex ?
				     human_time(TICKS_TO_MS(curr_strm->res.rex - now_ms),
						TICKS_TO_MS(1000)) : "");

			chunk_appendf(&trash,
				     ",wx=%s",
				     curr_strm->res.wex ?
				     human_time(TICKS_TO_MS(curr_strm->res.wex - now_ms),
						TICKS_TO_MS(1000)) : "");

			chunk_appendf(&trash,
				     ",ax=%s]",
				     curr_strm->res.analyse_exp ?
				     human_time(TICKS_TO_MS(curr_strm->res.analyse_exp - now_ms),
						TICKS_TO_MS(1000)) : "");

			conn = objt_conn(curr_strm->si[0].end);
			chunk_appendf(&trash,
				     " s0=[%d,%1xh,fd=%d,ex=%s]",
				     curr_strm->si[0].state,
				     curr_strm->si[0].flags,
				     conn ? conn->t.sock.fd : -1,
				     curr_strm->si[0].exp ?
				     human_time(TICKS_TO_MS(curr_strm->si[0].exp - now_ms),
						TICKS_TO_MS(1000)) : "");

			conn = objt_conn(curr_strm->si[1].end);
			chunk_appendf(&trash,
				     " s1=[%d,%1xh,fd=%d,ex=%s]",
				     curr_strm->si[1].state,
				     curr_strm->si[1].flags,
				     conn ? conn->t.sock.fd : -1,
				     curr_strm->si[1].exp ?
				     human_time(TICKS_TO_MS(curr_strm->si[1].exp - now_ms),
						TICKS_TO_MS(1000)) : "");

			chunk_appendf(&trash,
				     " exp=%s",
				     curr_strm->task->expire ?
				     human_time(TICKS_TO_MS(curr_strm->task->expire - now_ms),
						TICKS_TO_MS(1000)) : "");
			if (task_in_rq(curr_strm->task))
				chunk_appendf(&trash, " run(nice=%d)", curr_strm->task->nice);

			chunk_appendf(&trash, "\n");

			if (bi_putchk(si_ic(si), &trash) == -1) {
				/* let's try again later from this stream. We add ourselves into
				 * this stream's users so that it can remove us upon termination.
				 */
				si_applet_cant_put(si);
				LIST_ADDQ(&curr_strm->back_refs, &appctx->ctx.sess.bref.users);
				return 0;
			}

		next_sess:
			appctx->ctx.sess.bref.ref = curr_strm->list.n;
		}

		if (appctx->ctx.sess.target && appctx->ctx.sess.target != (void *)-1) {
			/* specified stream not found */
			if (appctx->ctx.sess.section > 0)
				chunk_appendf(&trash, "  *** session terminated while we were watching it ***\n");
			else
				chunk_appendf(&trash, "Session not found.\n");

			if (bi_putchk(si_ic(si), &trash) == -1) {
				si_applet_cant_put(si);
				return 0;
			}

			appctx->ctx.sess.target = NULL;
			appctx->ctx.sess.uid = 0;
			return 1;
		}

		appctx->st2 = STAT_ST_FIN;
		/* fall through */

	default:
		appctx->st2 = STAT_ST_FIN;
		return 1;
	}
}
static void cli_release_show_sess(struct appctx *appctx)
{
	if (appctx->st2 == STAT_ST_LIST) {
		if (!LIST_ISEMPTY(&appctx->ctx.sess.bref.users))
			LIST_DEL(&appctx->ctx.sess.bref.users);
	}
}
/* Parses the "shutdown session" directive, it always returns 1 */
static int cli_parse_shutdown_session(char **args, struct appctx *appctx, void *private)
{
	struct stream *strm, *ptr;

	if (!cli_has_level(appctx, ACCESS_LVL_ADMIN))
		return 1;

	if (!*args[2]) {
		appctx->ctx.cli.msg = "Session pointer expected (use 'show sess').\n";
		appctx->st0 = STAT_CLI_PRINT;
		return 1;
	}

	ptr = (void *)strtoul(args[2], NULL, 0);

	/* first, look for the requested stream in the stream table */
	list_for_each_entry(strm, &streams, list) {
		if (strm == ptr)
			break;
	}

	/* do we have the stream ? */
	if (strm != ptr) {
		appctx->ctx.cli.msg = "No such session (use 'show sess').\n";
		appctx->st0 = STAT_CLI_PRINT;
		return 1;
	}

	stream_shutdown(strm, SF_ERR_KILLED);
	return 1;
}
/* Parses the "shutdown sessions server" directive, it always returns 1 */
static int cli_parse_shutdown_sessions_server(char **args, struct appctx *appctx, void *private)
{
	struct server *sv;
	struct stream *strm, *strm_bck;

	if (!cli_has_level(appctx, ACCESS_LVL_ADMIN))
		return 1;

	sv = cli_find_server(appctx, args[3]);
	if (!sv)
		return 1;

	/* kill all the streams attached to this server */
	list_for_each_entry_safe(strm, strm_bck, &sv->actconns, by_srv)
		if (strm->srv_conn == sv)
			stream_shutdown(strm, SF_ERR_KILLED);
	return 1;
}
/* register cli keywords */
static struct cli_kw_list cli_kws = {{ },{
	{ { "show", "sess", NULL },             "show sess [id] : report the list of current sessions or dump this session", cli_parse_show_sess, cli_io_handler_dump_sess, cli_release_show_sess },
	{ { "shutdown", "session", NULL },      "shutdown session : kill a specific session",                                 cli_parse_shutdown_session, NULL, NULL },
	{ { "shutdown", "sessions", "server" }, "shutdown sessions server : kill sessions on a server",                       cli_parse_shutdown_sessions_server, NULL, NULL },
	{{},}
}};
/* main configuration keyword registration. */
static struct action_kw_list stream_tcp_keywords = { ILH, {
	{ "use-service", stream_parse_use_service },
	{ /* END */ }
}};

static struct action_kw_list stream_http_keywords = { ILH, {
	{ "use-service", stream_parse_use_service },
	{ /* END */ }
}};
/* Note: must not be declared <const> as its list will be overwritten.
 * Please take care of keeping this list alphabetically sorted.
 */
static struct acl_kw_list acl_kws = {ILH, {
	{ /* END */ },
}};
/* Note: must not be declared <const> as its list will be overwritten.
 * Please take care of keeping this list alphabetically sorted.
 */
static struct sample_fetch_kw_list smp_fetch_keywords = {ILH, {
	{ "sc_bytes_in_rate",   smp_fetch_sc_bytes_in_rate,  ARG2(1,SINT,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc_bytes_out_rate",  smp_fetch_sc_bytes_out_rate, ARG2(1,SINT,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc_clr_gpc0",        smp_fetch_sc_clr_gpc0,       ARG2(1,SINT,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc_conn_cnt",        smp_fetch_sc_conn_cnt,       ARG2(1,SINT,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc_conn_cur",        smp_fetch_sc_conn_cur,       ARG2(1,SINT,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc_conn_rate",       smp_fetch_sc_conn_rate,      ARG2(1,SINT,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc_get_gpt0",        smp_fetch_sc_get_gpt0,       ARG2(1,SINT,TAB), NULL, SMP_T_BOOL, SMP_USE_INTRN, },
	{ "sc_get_gpc0",        smp_fetch_sc_get_gpc0,       ARG2(1,SINT,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc_gpc0_rate",       smp_fetch_sc_gpc0_rate,      ARG2(1,SINT,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc_http_err_cnt",    smp_fetch_sc_http_err_cnt,   ARG2(1,SINT,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc_http_err_rate",   smp_fetch_sc_http_err_rate,  ARG2(1,SINT,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc_http_req_cnt",    smp_fetch_sc_http_req_cnt,   ARG2(1,SINT,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc_http_req_rate",   smp_fetch_sc_http_req_rate,  ARG2(1,SINT,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc_inc_gpc0",        smp_fetch_sc_inc_gpc0,       ARG2(1,SINT,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc_kbytes_in",       smp_fetch_sc_kbytes_in,      ARG2(1,SINT,TAB), NULL, SMP_T_SINT, SMP_USE_L4CLI, },
	{ "sc_kbytes_out",      smp_fetch_sc_kbytes_out,     ARG2(1,SINT,TAB), NULL, SMP_T_SINT, SMP_USE_L4CLI, },
	{ "sc_sess_cnt",        smp_fetch_sc_sess_cnt,       ARG2(1,SINT,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc_sess_rate",       smp_fetch_sc_sess_rate,      ARG2(1,SINT,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc_tracked",         smp_fetch_sc_tracked,        ARG2(1,SINT,TAB), NULL, SMP_T_BOOL, SMP_USE_INTRN, },
	{ "sc_trackers",        smp_fetch_sc_trackers,       ARG2(1,SINT,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc0_bytes_in_rate",  smp_fetch_sc_bytes_in_rate,  ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc0_bytes_out_rate", smp_fetch_sc_bytes_out_rate, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc0_clr_gpc0",       smp_fetch_sc_clr_gpc0,       ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc0_conn_cnt",       smp_fetch_sc_conn_cnt,       ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc0_conn_cur",       smp_fetch_sc_conn_cur,       ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc0_conn_rate",      smp_fetch_sc_conn_rate,      ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc0_get_gpt0",       smp_fetch_sc_get_gpt0,       ARG1(0,TAB), NULL, SMP_T_BOOL, SMP_USE_INTRN, },
	{ "sc0_get_gpc0",       smp_fetch_sc_get_gpc0,       ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc0_gpc0_rate",      smp_fetch_sc_gpc0_rate,      ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc0_http_err_cnt",   smp_fetch_sc_http_err_cnt,   ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc0_http_err_rate",  smp_fetch_sc_http_err_rate,  ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc0_http_req_cnt",   smp_fetch_sc_http_req_cnt,   ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc0_http_req_rate",  smp_fetch_sc_http_req_rate,  ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc0_inc_gpc0",       smp_fetch_sc_inc_gpc0,       ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc0_kbytes_in",      smp_fetch_sc_kbytes_in,      ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_L4CLI, },
	{ "sc0_kbytes_out",     smp_fetch_sc_kbytes_out,     ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_L4CLI, },
	{ "sc0_sess_cnt",       smp_fetch_sc_sess_cnt,       ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc0_sess_rate",      smp_fetch_sc_sess_rate,      ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc0_tracked",        smp_fetch_sc_tracked,        ARG1(0,TAB), NULL, SMP_T_BOOL, SMP_USE_INTRN, },
	{ "sc0_trackers",       smp_fetch_sc_trackers,       ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc1_bytes_in_rate",  smp_fetch_sc_bytes_in_rate,  ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc1_bytes_out_rate", smp_fetch_sc_bytes_out_rate, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc1_clr_gpc0",       smp_fetch_sc_clr_gpc0,       ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc1_conn_cnt",       smp_fetch_sc_conn_cnt,       ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc1_conn_cur",       smp_fetch_sc_conn_cur,       ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc1_conn_rate",      smp_fetch_sc_conn_rate,      ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc1_get_gpt0",       smp_fetch_sc_get_gpt0,       ARG1(0,TAB), NULL, SMP_T_BOOL, SMP_USE_INTRN, },
	{ "sc1_get_gpc0",       smp_fetch_sc_get_gpc0,       ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc1_gpc0_rate",      smp_fetch_sc_gpc0_rate,      ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc1_http_err_cnt",   smp_fetch_sc_http_err_cnt,   ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc1_http_err_rate",  smp_fetch_sc_http_err_rate,  ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc1_http_req_cnt",   smp_fetch_sc_http_req_cnt,   ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc1_http_req_rate",  smp_fetch_sc_http_req_rate,  ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc1_inc_gpc0",       smp_fetch_sc_inc_gpc0,       ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc1_kbytes_in",      smp_fetch_sc_kbytes_in,      ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_L4CLI, },
	{ "sc1_kbytes_out",     smp_fetch_sc_kbytes_out,     ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_L4CLI, },
	{ "sc1_sess_cnt",       smp_fetch_sc_sess_cnt,       ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc1_sess_rate",      smp_fetch_sc_sess_rate,      ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc1_tracked",        smp_fetch_sc_tracked,        ARG1(0,TAB), NULL, SMP_T_BOOL, SMP_USE_INTRN, },
	{ "sc1_trackers",       smp_fetch_sc_trackers,       ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc2_bytes_in_rate",  smp_fetch_sc_bytes_in_rate,  ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc2_bytes_out_rate", smp_fetch_sc_bytes_out_rate, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc2_clr_gpc0",       smp_fetch_sc_clr_gpc0,       ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc2_conn_cnt",       smp_fetch_sc_conn_cnt,       ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc2_conn_cur",       smp_fetch_sc_conn_cur,       ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc2_conn_rate",      smp_fetch_sc_conn_rate,      ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc2_get_gpt0",       smp_fetch_sc_get_gpt0,       ARG1(0,TAB), NULL, SMP_T_BOOL, SMP_USE_INTRN, },
	{ "sc2_get_gpc0",       smp_fetch_sc_get_gpc0,       ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc2_gpc0_rate",      smp_fetch_sc_gpc0_rate,      ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc2_http_err_cnt",   smp_fetch_sc_http_err_cnt,   ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc2_http_err_rate",  smp_fetch_sc_http_err_rate,  ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc2_http_req_cnt",   smp_fetch_sc_http_req_cnt,   ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc2_http_req_rate",  smp_fetch_sc_http_req_rate,  ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc2_inc_gpc0",       smp_fetch_sc_inc_gpc0,       ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc2_kbytes_in",      smp_fetch_sc_kbytes_in,      ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_L4CLI, },
	{ "sc2_kbytes_out",     smp_fetch_sc_kbytes_out,     ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_L4CLI, },
	{ "sc2_sess_cnt",       smp_fetch_sc_sess_cnt,       ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc2_sess_rate",      smp_fetch_sc_sess_rate,      ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "sc2_tracked",        smp_fetch_sc_tracked,        ARG1(0,TAB), NULL, SMP_T_BOOL, SMP_USE_INTRN, },
	{ "sc2_trackers",       smp_fetch_sc_trackers,       ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "src_bytes_in_rate",  smp_fetch_sc_bytes_in_rate,  ARG1(1,TAB), NULL, SMP_T_SINT, SMP_USE_L4CLI, },
	{ "src_bytes_out_rate", smp_fetch_sc_bytes_out_rate, ARG1(1,TAB), NULL, SMP_T_SINT, SMP_USE_L4CLI, },
	{ "src_clr_gpc0",       smp_fetch_sc_clr_gpc0,       ARG1(1,TAB), NULL, SMP_T_SINT, SMP_USE_L4CLI, },
	{ "src_conn_cnt",       smp_fetch_sc_conn_cnt,       ARG1(1,TAB), NULL, SMP_T_SINT, SMP_USE_L4CLI, },
	{ "src_conn_cur",       smp_fetch_sc_conn_cur,       ARG1(1,TAB), NULL, SMP_T_SINT, SMP_USE_L4CLI, },
	{ "src_conn_rate",      smp_fetch_sc_conn_rate,      ARG1(1,TAB), NULL, SMP_T_SINT, SMP_USE_L4CLI, },
	{ "src_get_gpt0",       smp_fetch_sc_get_gpt0,       ARG1(1,TAB), NULL, SMP_T_BOOL, SMP_USE_L4CLI, },
	{ "src_get_gpc0",       smp_fetch_sc_get_gpc0,       ARG1(1,TAB), NULL, SMP_T_SINT, SMP_USE_L4CLI, },
	{ "src_gpc0_rate",      smp_fetch_sc_gpc0_rate,      ARG1(1,TAB), NULL, SMP_T_SINT, SMP_USE_L4CLI, },
	{ "src_http_err_cnt",   smp_fetch_sc_http_err_cnt,   ARG1(1,TAB), NULL, SMP_T_SINT, SMP_USE_L4CLI, },
	{ "src_http_err_rate",  smp_fetch_sc_http_err_rate,  ARG1(1,TAB), NULL, SMP_T_SINT, SMP_USE_L4CLI, },
	{ "src_http_req_cnt",   smp_fetch_sc_http_req_cnt,   ARG1(1,TAB), NULL, SMP_T_SINT, SMP_USE_L4CLI, },
	{ "src_http_req_rate",  smp_fetch_sc_http_req_rate,  ARG1(1,TAB), NULL, SMP_T_SINT, SMP_USE_L4CLI, },
	{ "src_inc_gpc0",       smp_fetch_sc_inc_gpc0,       ARG1(1,TAB), NULL, SMP_T_SINT, SMP_USE_L4CLI, },
	{ "src_kbytes_in",      smp_fetch_sc_kbytes_in,      ARG1(1,TAB), NULL, SMP_T_SINT, SMP_USE_L4CLI, },
	{ "src_kbytes_out",     smp_fetch_sc_kbytes_out,     ARG1(1,TAB), NULL, SMP_T_SINT, SMP_USE_L4CLI, },
	{ "src_sess_cnt",       smp_fetch_sc_sess_cnt,       ARG1(1,TAB), NULL, SMP_T_SINT, SMP_USE_L4CLI, },
	{ "src_sess_rate",      smp_fetch_sc_sess_rate,      ARG1(1,TAB), NULL, SMP_T_SINT, SMP_USE_L4CLI, },
	{ "src_updt_conn_cnt",  smp_fetch_src_updt_conn_cnt, ARG1(1,TAB), NULL, SMP_T_SINT, SMP_USE_L4CLI, },
	{ "table_avl",          smp_fetch_table_avl,         ARG1(1,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ "table_cnt",          smp_fetch_table_cnt,         ARG1(1,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
	{ /* END */ },
}};
__attribute__((constructor))
static void __stream_init(void)
{
	sample_register_fetches(&smp_fetch_keywords);
	acl_register_keywords(&acl_kws);
	tcp_req_cont_keywords_register(&stream_tcp_keywords);
	http_req_keywords_register(&stream_http_keywords);
	cli_register_kw(&cli_kws);
}
/*
 * Local variables:
 *  c-indent-level: 8
 *  c-basic-offset: 8
 * End:
 */