/*
 * FD polling functions for generic select()
 *
 * Copyright 2000-2014 Willy Tarreau <w@1wt.eu>
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License
 * as published by the Free Software Foundation; either version
 * 2 of the License, or (at your option) any later version.
 *
 */
#include <unistd.h>
#include <sys/time.h>
#include <sys/types.h>

#include <haproxy/activity.h>
#include <haproxy/api.h>
#include <haproxy/fd.h>
#include <haproxy/global.h>
#include <haproxy/ticks.h>
#include <haproxy/time.h>

/* private data */
static int maxfd;   /* # of the highest fd + 1 */
static unsigned int *fd_evts[2];
static THREAD_LOCAL fd_set *tmp_evts[2];

/* Immediately remove the entry upon close() */
static void __fd_clo(int fd)
{
	hap_fd_clr(fd, fd_evts[DIR_RD]);
	hap_fd_clr(fd, fd_evts[DIR_WR]);
}

static void _update_fd(int fd, int *max_add_fd)
{
	int en;

	en = fdtab[fd].state;

	/* we have a single state for all threads, which is why we
	 * don't check the tid_bit. First thread to see the update
	 * takes it for every other one.
	 */
	if (!(en & FD_EV_ACTIVE_RW)) {
		if (!(polled_mask[fd].poll_recv | polled_mask[fd].poll_send)) {
			/* fd was not watched, it's still not */
			return;
		}
		/* fd totally removed from poll list */
		hap_fd_clr(fd, fd_evts[DIR_RD]);
		hap_fd_clr(fd, fd_evts[DIR_WR]);
		_HA_ATOMIC_AND(&polled_mask[fd].poll_recv, 0);
		_HA_ATOMIC_AND(&polled_mask[fd].poll_send, 0);
	}
	else {
		/* OK fd has to be monitored, it was either added or changed */
		if (!(en & FD_EV_ACTIVE_R)) {
			hap_fd_clr(fd, fd_evts[DIR_RD]);
			if (polled_mask[fd].poll_recv & tid_bit)
				_HA_ATOMIC_AND(&polled_mask[fd].poll_recv, ~tid_bit);
		} else {
			hap_fd_set(fd, fd_evts[DIR_RD]);
			if (!(polled_mask[fd].poll_recv & tid_bit))
				_HA_ATOMIC_OR(&polled_mask[fd].poll_recv, tid_bit);
		}

		if (!(en & FD_EV_ACTIVE_W)) {
			hap_fd_clr(fd, fd_evts[DIR_WR]);
			if (polled_mask[fd].poll_send & tid_bit)
				_HA_ATOMIC_AND(&polled_mask[fd].poll_send, ~tid_bit);
		} else {
			hap_fd_set(fd, fd_evts[DIR_WR]);
			if (!(polled_mask[fd].poll_send & tid_bit))
				_HA_ATOMIC_OR(&polled_mask[fd].poll_send, tid_bit);
		}

		if (fd > *max_add_fd)
			*max_add_fd = fd;
	}
}
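
/*
 * Hypothetical usage sketch, kept out of the build: how an I/O layer is
 * expected to drive the per-direction ACTIVE/READY state that _update_fd()
 * later translates into the select() sets. The fd_want_recv(),
 * fd_cant_recv() and fd_stop_recv() helpers are the regular fd API; the
 * example_read() scenario itself is illustrative only.
 */
#if 0
static void example_read(int fd, char *buf, size_t len)
{
	ssize_t ret;

	fd_want_recv(fd);            /* sets ACTIVE_R: the poller will watch fd */
	ret = recv(fd, buf, len, 0);
	if (ret < 0 && errno == EAGAIN)
		fd_cant_recv(fd);    /* clears READY_R: must wait for poll() again */
	else
		fd_stop_recv(fd);    /* done reading: clears ACTIVE_R */
}
#endif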

/*
 * Select() poller
 */
static void _do_poll(struct poller *p, int exp, int wake)
{
	int status;
	int fd, i;
	struct timeval delta;
	int delta_ms;
	int fds;
	int updt_idx;
	char count;
	int readnotnull, writenotnull;
	int old_maxfd, new_maxfd, max_add_fd;
	int old_fd;

	max_add_fd = -1;

	/* first, scan the update list to find changes */
	for (updt_idx = 0; updt_idx < fd_nbupdt; updt_idx++) {
		fd = fd_updt[updt_idx];

		_HA_ATOMIC_AND(&fdtab[fd].update_mask, ~tid_bit);
		if (!fdtab[fd].owner) {
			activity[tid].poll_drop_fd++;
			continue;
		}
		_update_fd(fd, &max_add_fd);
	}

	/* Now scan the global update list */
	for (old_fd = fd = update_list.first; fd != -1; fd = fdtab[fd].update.next) {
		if (fd == -2) {
			fd = old_fd;
			continue;
		}
		else if (fd <= -3)
			fd = -fd -4;
		if (fd == -1)
			break;
		if (fdtab[fd].update_mask & tid_bit) {
			/* Cheat a bit, as the state is global to all pollers
			 * we don't need every thread to take care of the
			 * update.
			 */
			_HA_ATOMIC_AND(&fdtab[fd].update_mask, ~all_threads_mask);
			done_update_polling(fd);
		} else
			continue;
		if (!fdtab[fd].owner)
			continue;
		_update_fd(fd, &max_add_fd);
	}

	/* maybe we added at least one fd larger than maxfd */
	for (old_maxfd = maxfd; old_maxfd <= max_add_fd; ) {
		if (_HA_ATOMIC_CAS(&maxfd, &old_maxfd, max_add_fd + 1))
			break;
	}

	/* maxfd doesn't need to be precise but it needs to cover *all* active
	 * FDs. Thus we only shrink it if we have such an opportunity. The algo
	 * is simple : look for the previous used place, try to update maxfd to
	 * point to it, abort if maxfd changed in the mean time.
	 */
	old_maxfd = maxfd;
	do {
		new_maxfd = old_maxfd;
		while (new_maxfd - 1 >= 0 && !fdtab[new_maxfd - 1].owner)
			new_maxfd--;
		if (new_maxfd >= old_maxfd)
			break;
	} while (!_HA_ATOMIC_CAS(&maxfd, &old_maxfd, new_maxfd));
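
	/*
	 * Worked example with hypothetical numbers: if maxfd == 100 and
	 * fds 90..99 were released (no owner), new_maxfd walks down to 90
	 * and the CAS publishes it. If another thread changed maxfd in the
	 * mean time, the CAS fails, old_maxfd is reloaded with the fresh
	 * value and the downward scan restarts from there.
	 */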

	thread_idle_now();
	thread_harmless_now();

	fd_nbupdt = 0;

	/* let's restore fdset state */
	readnotnull = 0; writenotnull = 0;
	for (i = 0; i < (maxfd + FD_SETSIZE - 1)/(8*sizeof(int)); i++) {
		readnotnull |= (*(((int*)tmp_evts[DIR_RD])+i) = *(((int*)fd_evts[DIR_RD])+i)) != 0;
		writenotnull |= (*(((int*)tmp_evts[DIR_WR])+i) = *(((int*)fd_evts[DIR_WR])+i)) != 0;
	}

	/* now let's wait for events */
	delta_ms = wake ? 0 : compute_poll_timeout(exp);
	delta.tv_sec  = (delta_ms / 1000);
	delta.tv_usec = (delta_ms % 1000) * 1000;
	tv_entering_poll();
	activity_count_runtime();
	status = select(maxfd,
	                readnotnull ? tmp_evts[DIR_RD] : NULL,
	                writenotnull ? tmp_evts[DIR_WR] : NULL,
	                NULL,
	                &delta);
	tv_update_date(delta_ms, status);
	tv_leaving_poll(delta_ms, status);

	thread_harmless_end();
	thread_idle_end();

	if (sleeping_thread_mask & tid_bit)
		_HA_ATOMIC_AND(&sleeping_thread_mask, ~tid_bit);

	if (status <= 0)
		return;

	activity[tid].poll_io++;

	for (fds = 0; (fds * BITS_PER_INT) < maxfd; fds++) {
		if ((((int *)(tmp_evts[DIR_RD]))[fds] | ((int *)(tmp_evts[DIR_WR]))[fds]) == 0)
			continue;

		for (count = BITS_PER_INT, fd = fds * BITS_PER_INT; count && fd < maxfd; count--, fd++) {
			unsigned int n = 0;

			if (FD_ISSET(fd, tmp_evts[DIR_RD]))
				n |= FD_EV_READY_R;

			if (FD_ISSET(fd, tmp_evts[DIR_WR]))
				n |= FD_EV_READY_W;

			if (!n)
				continue;

#ifdef DEBUG_FD
			_HA_ATOMIC_INC(&fdtab[fd].event_count);
#endif
			fd_update_events(fd, n);
		}
	}
}
|
|
|
|
|
|
MAJOR: threads/fd: Make fd stuffs thread-safe
Many changes have been made to do so. First, the fd_updt array, where all
pending FDs for polling are stored, is now a thread-local array. Then 3 locks
have been added to protect, respectively, the fdtab array, the fd_cache array
and poll information. In addition, a lock for each entry in the fdtab array has
been added to protect all accesses to a specific FD or its information.
For pollers, according to the poller, the way to manage the concurrency is
different. There is a poller loop on each thread. So the set of monitored FDs
may need to be protected. epoll and kqueue are thread-safe per-se, so there few
things to do to protect these pollers. This is not possible with select and
poll, so there is no sharing between the threads. The poller on each thread is
independant from others.
Finally, per-thread init/deinit functions are used for each pollers and for FD
part for manage thread-local ressources.
Now, you must be carefull when a FD is created during the HAProxy startup. All
update on the FD state must be made in the threads context and never before
their creation. This is mandatory because fd_updt array is thread-local and
initialized only for threads. Because there is no pollers for the main one, this
array remains uninitialized in this context. For this reason, listeners are now
enabled in run_thread_poll_loop function, just like the worker pipe.
2017-05-29 04:40:41 -04:00
|
|
|
static int init_select_per_thread()
|
|
|
|
|
{
|
|
|
|
|
int fd_set_bytes;
|
|
|
|
|
|
|
|
|
|
fd_set_bytes = sizeof(fd_set) * (global.maxsock + FD_SETSIZE - 1) / FD_SETSIZE;
|
2021-04-08 14:05:23 -04:00
|
|
|
tmp_evts[DIR_RD] = calloc(1, fd_set_bytes);
|
2020-05-11 09:20:05 -04:00
|
|
|
if (tmp_evts[DIR_RD] == NULL)
|
		goto fail;
	tmp_evts[DIR_WR] = calloc(1, fd_set_bytes);
	if (tmp_evts[DIR_WR] == NULL)
		goto fail;
	return 1;

 fail:
	free(tmp_evts[DIR_RD]);
	free(tmp_evts[DIR_WR]);
	return 0;
}

static void deinit_select_per_thread()
{
	ha_free(&tmp_evts[DIR_WR]);
	ha_free(&tmp_evts[DIR_RD]);
}

/*
 * Initialization of the select() poller.
 * Returns 0 in case of failure, non-zero in case of success. If it fails, it
 * disables the poller by setting its pref to 0.
 */
static int _do_init(struct poller *p)
{
	int fd_set_bytes;

	p->private = NULL;

	if (global.maxsock > FD_SETSIZE)
		goto fail_srevt;

	fd_set_bytes = sizeof(fd_set) * (global.maxsock + FD_SETSIZE - 1) / FD_SETSIZE;

	if ((fd_evts[DIR_RD] = calloc(1, fd_set_bytes)) == NULL)
		goto fail_srevt;
	if ((fd_evts[DIR_WR] = calloc(1, fd_set_bytes)) == NULL)
		goto fail_swevt;

	hap_register_per_thread_init(init_select_per_thread);
	hap_register_per_thread_deinit(deinit_select_per_thread);

	return 1;

 fail_swevt:
	free(fd_evts[DIR_RD]);
 fail_srevt:
	p->pref = 0;
	return 0;
}

/*
 * Termination of the select() poller.
 * Memory is released and the poller is marked as unselectable.
 */
static void _do_term(struct poller *p)
{
	free(fd_evts[DIR_WR]);
	free(fd_evts[DIR_RD]);
	p->private = NULL;
	p->pref = 0;
}

/*
 * Check that the poller works.
 * Returns 1 if OK, otherwise 0.
 */
static int _do_test(struct poller *p)
{
	if (global.maxsock > FD_SETSIZE)
		return 0;

	return 1;
}

/*
 * It is a constructor, which means that it will automatically be called before
 * main(). This is GCC-specific but it works at least since 2.95.
 * Special care must be taken so that it does not need any uninitialized data.
 */
__attribute__((constructor))
static void _do_register(void)
{
	struct poller *p;

	if (nbpollers >= MAX_POLLERS)
		return;
	p = &pollers[nbpollers++];

	p->name = "select";
	p->pref = 150;
	p->flags = 0;
	p->private = NULL;

	p->clo  = __fd_clo;
	p->test = _do_test;
	p->init = _do_init;
	p->term = _do_term;
	p->poll = _do_poll;
}

/*
 * Local variables:
 *  c-indent-level: 8
 *  c-basic-offset: 8
 * End:
 */