clang-scan 19 has reported that we are ignoring errno after the call
to rewind(). As we don't really care about the result, just silence
the error; the whole code will be removed in the development version
anyway, as it is not needed.
The constexpr specifier introduced in the C23 standard makes perfect
sense to use instead of preprocessor macros - the symbols are kept,
etc. Define ISC_CONSTEXPR to be `constexpr` for C23 and `static const`
for older C standards. Use the newly introduced macro for the
NS_PER_SEC and friends time constants.
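A minimal sketch of how such a macro might be defined and used (the
exact header placement may differ):

    #if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 202311L
    #define ISC_CONSTEXPR constexpr
    #else
    #define ISC_CONSTEXPR static const
    #endif

    /* unlike a bare #define, the symbol and its type are kept */
    ISC_CONSTEXPR unsigned int NS_PER_SEC = 1000000000;
    ISC_CONSTEXPR unsigned int NS_PER_MS = 1000000;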
The new version of clang (19) has introduced stricter checks when
mixing integer (and float) types with enums. In this case, we used
enum {} constants because C17 doesn't have constexpr yet. Change the
time conversion constants to be static const unsigned int instead of
enum values.
Check if 'lctx->logconfig' is NULL before using it in isc_log_doit(),
because it's possible that isc_log_destroy() was already called, e.g.
when a 'call_rcu' function wants to log a message during shutdown.
When iterating through the old internal hashmap table, skip all the
nodes that have already been migrated to the new table. We know that
all positions with an index less than .hiter are NULL.
When Robin Hood hashing reorders the map entries on deletion, we were
adjusting the iterator table size only when the reordering happened at
the internal table boundary. The iterator table size had to be reduced
by one to prevent seeing the entry that resided at position [0] twice,
because it migrated to position [iter->size - 1]. However, the same
thing could happen when that entry migrates a second time, from
[iter->size - 1] to [iter->size - 2] (and so on), because the check
that we were manipulating the entry only at position [0] was
insufficient. Instead of checking [pos == 0], we now check
[pos % iter->size == 0], thus ignoring all the entries that might have
moved back to the end of the internal table.
Since enable_fips_mode() now resides inside the isc_tls unit, BIND 9
would fail to compile when FIPS mode was enabled, as the DST subsystem
logging functions were missing.
Move the crypto library logging functions from the openssl_link unit to
the isc_tls unit and enhance them so they can be used from both places,
keeping the old dst__openssl_toresult* macros alive.
Instead of having all the programs call dst_lib_init() and
dst_lib_destroy() explicitly, create a separate memory context for the
DST subsystem and use library constructor and destructor functions to
initialize the DST internals.
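A minimal sketch of the constructor/destructor pattern, assuming
GCC/Clang attributes; the function names are illustrative:

    static isc_mem_t *dst__mctx = NULL;

    __attribute__((constructor)) static void
    dst__initialize(void) {
            isc_mem_create(&dst__mctx);
            /* initialize the DST internals using dst__mctx */
    }

    __attribute__((destructor)) static void
    dst__shutdown(void) {
            /* tear down the DST internals */
            isc_mem_destroy(&dst__mctx);
    }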
When the SSL object was destroyed, it would invalidate all SSL_SESSION
objects including the cached, but not yet used, TLS session objects.
Properly disassociate the SSL object from the SSL_SESSION before we
store it in the TLS session cache, so we can later destroy it without
invalidating the cached TLS sessions.
Co-authored-by: Ondřej Surý <ondrej@isc.org>
Co-authored-by: Artem Boldariev <artem@isc.org>
Co-authored-by: Aram Sargsyan <aram@isc.org>
When a TLS (TLSstream) connection was accepted, the child listening
socket was not attached to sock->server and thus could have been freed
before all the accepted connections were actually closed.
In turn, this would cause us to call isc_tls_free() too soon, causing
cascading errors in pending SSL_read_ex() calls in the accepted
connections. Properly attach and detach the child listening socket when
accepting and closing the server connections.
Since the support for OpenSSL Engines has been removed, we can now also
remove the checks for OPENSSL_API_LEVEL: the OpenSSL 3.x APIs will be
used when compiling with OpenSSL 3.x, and the OpenSSL 1.1.x APIs will
be used only when compiling with OpenSSL 1.1.x.
OpenSSL 1.x Engine support has been deprecated in OpenSSL 3.x and is
going to be removed. Remove the OpenSSL Engine support in favor of
OpenSSL Providers.
The previous work in this area was driven by the belief that we might
be calling call_rcu() from within call_rcu() callbacks. After carefully
checking all the current callbacks, it became evident that this is not
the case and that the problem isn't too few rcu_barrier() calls, but
something else entirely.
Call rcu_barrier() just once, as that's enough; the multiple
rcu_barrier() calls will no longer hide the real problem, so we can
find it.
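For context, a minimal sketch of the liburcu pattern involved; a single
rcu_barrier() waits for every call_rcu() callback queued so far:

    #include <stdlib.h>
    #include <urcu.h>

    struct node {
            /* ... payload ... */
            struct rcu_head rcu_head;
    };

    static void
    free_node_cb(struct rcu_head *head) {
            struct node *node = caa_container_of(head, struct node,
                                                 rcu_head);
            free(node);
    }

    static void
    node_destroy(struct node *node) {
            /* defer the free until all current RCU readers are done */
            call_rcu(&node->rcu_head, free_node_cb);
    }

    static void
    library_shutdown(void) {
            /* waits for all callbacks queued before this point */
            rcu_barrier();
    }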
Since the minimal OpenSSL version is now OpenSSL 1.1.1, remove all kind
of OpenSSL shims and checks for functions that are now always present in
the OpenSSL libraries.
Co-authored-by: Ondřej Surý <ondrej@isc.org>
Co-authored-by: Aydın Mercan <aydin@isc.org>
When putting a 48-bit number into a fixed-size buffer that's exactly 6
bytes, an assertion failure would occur: the 48-bit number is
internally represented as a 64-bit number and the code was checking
whether there is enough space for `sizeof(val)`. This caused an
assertion failure when an otherwise valid TSIG signature had bad timing
information.
Specify the size of the argument explicitly, so the 48-bit number
doesn't require an 8-byte buffer.
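A minimal sketch of the before/after pattern; the buffer macro and
helper names are illustrative, not the actual BIND functions:

    uint64_t val = sig_time;  /* 48-bit value held in a 64-bit variable */

    /* before: demands 8 bytes of space even though only 6 are written */
    ENSURE_SPACE(buf, sizeof(val));

    /* after: the on-the-wire size is passed explicitly */
    ENSURE_SPACE(buf, 6);
    put_uint48(buf, val);     /* writes exactly 6 bytes */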
PTHREAD_MUTEX_ADAPTIVE_NP and PTHREAD_MUTEX_ERRORCHECK_NP are usually
not #defines but enum values, so a simple preprocessor check doesn't
work.
Check for PTHREAD_MUTEX_ADAPTIVE_NP from an autoconf AS_COMPILE_IFELSE
block and define HAVE_PTHREAD_MUTEX_ADAPTIVE_NP. This should enable
adaptive mutexes on Linux and FreeBSD.
As PTHREAD_MUTEX_ERRORCHECK actually comes from POSIX and Linux glibc
defines it when the compatibility macros are set, we can just use
PTHREAD_MUTEX_ERRORCHECK instead of PTHREAD_MUTEX_ERRORCHECK_NP.
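A minimal sketch of how the configure-time result might be used;
HAVE_PTHREAD_MUTEX_ADAPTIVE_NP is the macro defined above:

    #include <pthread.h>

    static void
    mutex_init(pthread_mutex_t *mutex) {
            pthread_mutexattr_t attr;

            pthread_mutexattr_init(&attr);
    #if HAVE_PTHREAD_MUTEX_ADAPTIVE_NP
            /* spin briefly on contention before sleeping (Linux, FreeBSD) */
            pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ADAPTIVE_NP);
    #endif
            pthread_mutex_init(mutex, &attr);
            pthread_mutexattr_destroy(&attr);
    }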
On a 32-bit machine, casting to size_t can still lead to an overflow.
Cast to uint64_t instead. Also detect all possible negative values for
pages and pagesize to silence the warning about a possible negative
value.
39  #if defined(_SC_PHYS_PAGES) && defined(_SC_PAGESIZE)
        1. tainted_data_return: Called function sysconf(_SC_PHYS_PAGES),
           and a possible return value may be less than zero.
        2. assign: Assigning: pages = sysconf(_SC_PHYS_PAGES).
40          long pages = sysconf(_SC_PHYS_PAGES);
41          long pagesize = sysconf(_SC_PAGESIZE);
42
        3. Condition pages == -1, taking false branch.
        4. Condition pagesize == -1, taking false branch.
43          if (pages == -1 || pagesize == -1) {
44                  return (0);
45          }
46
        5. overflow: The expression (size_t)pages * pagesize might be
           negative, but is used in a context that treats it as unsigned.
        CID 498034: (#1 of 1): Overflowed return value (INTEGER_OVERFLOW)
        6. return_overflow: (size_t)pages * pagesize, which might have
           underflowed, is returned from the function.
47          return ((size_t)pages * pagesize);
48  #endif /* if defined(_SC_PHYS_PAGES) && defined(_SC_PAGESIZE) */
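A minimal sketch of the fixed pattern, assuming the function reports
the total physical memory and may return a 64-bit value (the function
name is illustrative):

    #include <stdint.h>
    #include <unistd.h>

    static uint64_t
    physical_memory(void) {
    #if defined(_SC_PHYS_PAGES) && defined(_SC_PAGESIZE)
            long pages = sysconf(_SC_PHYS_PAGES);
            long pagesize = sysconf(_SC_PAGESIZE);

            /* reject errors and any other non-positive values */
            if (pages <= 0 || pagesize <= 0) {
                    return (0);
            }

            /* multiply in 64 bits so the result can't wrap 32-bit size_t */
            return ((uint64_t)pages * (uint64_t)pagesize);
    #else
            return (0);
    #endif
    }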
This commit ensures that we are not attempting to accept an expired
TCP connection, as we are not interested in any data that could have
accumulated in its internal buffers. Now we just drop such connections
for good.
Be more aggressive when throttling the reading: when we can't send the
outgoing TCP data synchronously with uv_try_write(), we start
throttling the reading immediately instead of waiting for the send
buffers to fill up. This should not affect well-behaved clients that
read the data from the TCP stream on the other end.
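A minimal sketch of the idea using plain libuv calls; the helper
function and the resumption of reading are illustrative:

    #include <uv.h>

    static void
    send_or_throttle(uv_stream_t *stream, char *data, size_t len) {
            uv_buf_t buf = uv_buf_init(data, (unsigned int)len);
            int r = uv_try_write(stream, &buf, 1);

            if (r < 0 || (size_t)r < len) {
                    /*
                     * Nothing (or only part of the data) went out
                     * synchronously: stop reading from the peer right
                     * away instead of waiting for the send buffers to
                     * fill up, queue the remainder with uv_write(), and
                     * resume reading once the queued write completes.
                     */
                    uv_read_stop(stream);
                    /* ... uv_write() the remaining bytes ... */
            }
    }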
Due to an omission, it was possible to un-throttle a TCP connection
previously throttled because the peer was not reading back the data we
are sending.
In particular, that affected the DoH code, but it could also affect
other transports (current or future ones) that pause/resume reading
according to their internal state.
A single TCP read can produce as many DNS messages as 64k divided by
the minimum size of a DNS message. This can clog the processing thread
and thrash the memory allocator, because we need to do as many as ~20k
allocations in a single UV loop tick.
Limit the number of DNS messages processed in a single UV loop tick to
just a single DNS message, and limit the number of outstanding DNS
messages back to 23. This effectively limits the number of pipelined
DNS messages to that number (this is the limit we already had before).
When a TCP client would not read the DNS messages sent to it, the TCP
sends inside named would accumulate and degrade the service. Throttle
the reading from the TCP socket when we have accumulated enough DNS
data to be sent. Currently the limit is set so that a single
largest-possible DNS message can still fit into the buffer.
This commit ensures that a reference to the HTTP endpoints set is
stored in the socket object associated with an HTTP/2 stream, instead
of referencing the global set stored inside a listener.
This helps to prevent an issue like the following:
1. BIND is configured to serve DoH clients;
2. A client connects and one or more HTTP/2 streams are created.
Internal pointers now point to data in the associated HTTP endpoints
set;
3. BIND is reconfigured - a new endpoints set object is created and
promoted to all listeners;
4. The old pointers to the HTTP endpoints set data are now invalid.
Instead of referencing a global object that is updated on
reconfiguration, we now store a local reference, which prevents the
endpoints set objects from going out of scope prematurely.
It was reported that the HTTP/2 session might get closed or even
deleted before all asynchronous processing has been completed.
This commit addresses that: we now avoid using the object when we do
not need it, specifically check that the pointers used are not NULL,
and ensure that there is at least one reference to the session object
while we are processing incoming data.
This commit makes the code more resilient to such issues in the
future.
Add an isc_queue implementation that hides the gory details of cds_wfcq
behind a neater API. The same caveats as with cds_wfcq apply.
TODO: Add documentation to the API.
The detach function declaration in `ISC__REFCOUNT_TRACE_DECL`
accidentally had an implicit int return type. While that has not been
allowed since C99, it only became an error by default in GCC 14.
`ISC_REFCOUNT_TRACE_IMPL` and `ISC_REFCOUNT_STATIC_TRACE_IMPL` expanded
into the wrong macros, trying to declare the function again with the
wrong number of parameters.
An assertion failure would be triggered when a TCP send completes after
the TCP read side has already been closed. Implement proper reference
counting for the isc_httpd object.
Returning the value allows for better high-water tracking without
running into edge cases like the following:
0. The counter is at value X
1. Increment the value (X+1)
2. The value is decreased multiple times in other threads (X+1-Y)
3. Get the value (X+1-Y)
4. Update-if-greater misses the X+1 value, which should have been the
high-water mark
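A minimal sketch of the fixed pattern using C11 atomics; the counter
and high-water variables are illustrative:

    #include <stdatomic.h>
    #include <stdint.h>

    static atomic_uint_fast64_t counter;
    static atomic_uint_fast64_t highwater;

    static uint_fast64_t
    counter_increment(void) {
            /* use the value produced by this increment itself... */
            uint_fast64_t value = atomic_fetch_add(&counter, 1) + 1;
            uint_fast64_t old = atomic_load(&highwater);

            /*
             * ...for the update-if-greater, so concurrent decrements in
             * other threads cannot hide the peak from the high-water mark.
             */
            while (value > old &&
                   !atomic_compare_exchange_weak(&highwater, &old, value)) {
                    /* 'old' was refreshed by the failed CAS; retry */
            }
            return value;
    }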
Prepare the statistics channel data in the offloaded worker thread, so
the networking thread is not blocked by the process of gathering data
from various data structures. Only the netmgr send is then run on the
networking thread, when all the data is already there.
isc_loop() can now take its place.
This also requires changes to the test harness: instead of running the
setup and teardown outside of the main loop, we now schedule the setup
and teardown to run on the loop (via isc_loop_setup() and
isc_loop_teardown()). This is needed because the new isc_loop() call
has to be run on the active event loop, whereas previously
isc_loop_current() (and variants like isc_loop_main()) would work even
outside of the loop, because it needed only isc_tid() to work, not the
full loop (which was mainly true for the main thread).
if we have a method to get the running loop, similar to how
isc_tid() gets the current thread ID, we can simplify loop
and loopmgr initialization.
remove most uses of isc_loop_current() in favor of isc_loop().
in some places where that was the only reason to pass loopmgr,
remove loopmgr from the function parameters.
The case-insensitive matching in isc_ht was basically completely
broken: only the hash value computation was case-insensitive, while the
key comparison was always case-sensitive.
The isc_log_t contains an isc_logconfig_t that is swapped,
dereferenced, or has its fields accessed under a lock. Instead of
protecting it with a rwlock, use RCU.
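A minimal sketch of the RCU pattern with liburcu; the field and
function names are illustrative:

    /* reader: use the current config without taking a lock */
    rcu_read_lock();
    isc_logconfig_t *lcfg = rcu_dereference(lctx->logconfig);
    if (lcfg != NULL) {
            /* ... format and write the log message using lcfg ... */
    }
    rcu_read_unlock();

    /* writer: publish the new config, wait a grace period, free the old */
    isc_logconfig_t *old = rcu_xchg_pointer(&lctx->logconfig, newcfg);
    synchronize_rcu();
    isc_logconfig_destroy(&old);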
To reduce memory pressure, we can add light per-loop (netmgr worker)
memory pools for isc_nmsocket_t structures. This will help in
situations where there's a lot of churn creating and destroying the
nmsockets.
Embedding isc_nmsocket_h2_t directly inside isc_nmsocket_t had
increased the size of isc_nmsocket_t to 1840 bytes. Turning
isc_nmsocket_h2_t into a pointer to a structure allocated on demand
allows us to reduce the size to 1208 bytes. While there are still some
possible reductions in isc_nmsocket_t (the embedded tlsstream and
streamdns structures), this was by far the biggest drop in memory
usage.
The uv_req union member of struct isc__nm_uvreq contained libuv request
types that we don't use. It turns out that uv_getnameinfo_t is 1000
bytes big and unnecessarily enlarged the whole structure. Remove all
the unused members from the uv_req union.
After removing sockaddr_unix from isc_sockaddr, we can also remove
sockaddr_storage and reduce the isc_sockaddr size from 152 bytes to just
48 bytes needed to hold IPv6 addresses.
As was pointed out, alignas() with an alignment larger than
`max_align_t` can't be used on allocated objects, otherwise the
compiler might miscompile the code to use auto-vectorization on
unaligned memory.
As we were only using alignas() as a way to prevent false sharing, we
can use manual padding in the affected structures.
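A minimal sketch of the manual-padding approach; the cache-line size
and the structure are illustrative:

    #include <stdint.h>

    #define CACHE_LINE_SIZE 64

    struct counters {
            /* written by one thread only */
            uint64_t sent;
            /* keep the next member on its own cache line */
            uint8_t pad[CACHE_LINE_SIZE - sizeof(uint64_t)];
            /* written by a different thread */
            uint64_t received;
    };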
With _exit() in place instead of exit(), we don't need the
isc__tls_setfatalmode() mechanism, as the atexit() handlers, including
the OpenSSL atexit hooks, will not be executed.
Instead of the crude 5x rcu_barrier() call in isc__mem_destroy(),
change the mechanism to call rcu_barrier() until the memory use and
references stop decreasing. This should deal with any number of nested
call_rcu() levels.
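A minimal sketch of the convergence loop, assuming an atomic counter of
in-use memory (the field name is illustrative):

    /* keep flushing while queued call_rcu() callbacks still free memory */
    size_t prev;
    do {
            prev = atomic_load(&ctx->inuse);
            rcu_barrier();
    } while (atomic_load(&ctx->inuse) < prev);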
Additionally, don't destroy the contextslock if the list of contexts
isn't empty. Destroying the lock could make late threads crash.
Instead of running all the cryptographic validation in a tight loop,
spread it out over multiple event loop "ticks" by moving every single
validation into its own isc_async_run() asynchronous event. Move the
cryptographic operations - both verification and DNSKEY selection - to
the offloaded threads (isc_work_enqueue); this further limits the time
we spend doing expensive operations on the event loops, which should
stay fast.
Limit the impact of invalid or malicious RRsets that contain crafted
records causing the dns_validator to do many validations per single
fetch, by adding a cap on the maximum number of validations and the
maximum number of validation failures that can happen before the
resolution fails.
cmocka.h and jemalloc.h/malloc_np.h have conflicting macro definitions.
While fixing them with push_macro for just malloc is done below, we
only need the non-standard mallocx() interface, which is easy to
declare ourselves.
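A minimal sketch of the two approaches mentioned above; whether the
pragma dance is needed depends on the include order in the test:

    #include <stdarg.h>
    #include <stddef.h>
    #include <setjmp.h>
    #include <stdint.h>

    /* option 1: protect the conflicting macro around the cmocka include */
    #pragma push_macro("malloc")
    #undef malloc
    #include <cmocka.h>
    #pragma pop_macro("malloc")

    /* option 2: declare just the non-standard jemalloc interface we
     * need, instead of pulling in the whole conflicting header */
    void *mallocx(size_t size, int flags);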
Because we don't use jemalloc functions directly, but only via the
libisc library, the dynamic linker might pull in the jemalloc library
too late, when memory has already been allocated via the standard libc
allocator.
Add a workaround around isc_mem_create() that makes the dynamic linker
pull in jemalloc earlier than libc.
This commit improves TLS message framing by avoiding an extra call to
SSL_write_ex(). Before, we would use an extra SSL_write_ex() call to
pass the DNS message length to OpenSSL. That could create an extra TLS
frame, increasing the number of bytes sent due to the frame header and
padding.
This commit fixes that by making the code pass both the DNS message
length and the data at once, just like the old TLS code did.
It should improve compatibility with some buggy clients that expect
both the DNS message length and the data to be in one TLS frame. The
older TLS DNS code worked like this, too.
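A minimal sketch of the single-write framing; the buffer handling is
illustrative, while SSL_write_ex() is the real OpenSSL call:

    #include <stdint.h>
    #include <string.h>
    #include <openssl/ssl.h>

    static int
    send_dns_over_tls(SSL *ssl, const unsigned char *msg, uint16_t msglen) {
            unsigned char out[2 + UINT16_MAX];
            size_t written = 0;

            /* DNS over TLS: 2-byte network-order length prefix + message */
            out[0] = (unsigned char)(msglen >> 8);
            out[1] = (unsigned char)(msglen & 0xff);
            memcpy(out + 2, msg, msglen);

            /* one SSL_write_ex() call keeps the prefix and the message
             * in the same TLS frame instead of two separate ones */
            return SSL_write_ex(ssl, out, (size_t)msglen + 2, &written);
    }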
The statistics channel does not expose the current number of connected
TCP clients, only the high-water mark. Therefore, users did not have an
easy means of collecting statistics about TCP clients served over time.
This information could previously only be obtained through a separate
mechanism, via rndc, by looking at how much of the TCP quota was
filled.
In order to expose the exact current count of connected TCP clients
(tracked by the "tcp-clients" quota) as a statistics counter, an
extra, dedicated Network Manager callback would need to be
implemented for that purpose (a counterpart of ns__client_tcpconn()
that would be run when a TCP connection is torn down), which is
inefficient. Instead, track the number of currently-connected TCP
clients separately for IPv4 and IPv6, as Network Manager statistics.
This commit removes a wrong INSIST() condition, as the assumption that
'csock->recv_cb != NULL' iff 'csock->statichandle != NULL' is wrong.
There is no direct relation between 'csock->statichandle' and
'csock->recv_cb': 'csock->statichandle' gets set when allocating a
handle regardless of whether 'csock->recv_cb' is NULL, as it is
possible to attach to the handle without starting a read operation (at
the very least, it is correct to start writing before reading).
That condition made the `cipher-suites` system test fail with a crash
on some platforms in FIPS mode (namely, Oracle Linux 9) despite not
being related to FIPS at all.
This commit adds low-level wrappers on top of
'SSL_CTX_set_ciphersuites()'. These are going to be the foundation
behind the 'cipher-suites' option of the 'tls' statement.
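For reference, the underlying OpenSSL call takes a colon-separated list
of TLSv1.3 cipher suites; a minimal sketch:

    /* restrict the TLSv1.3 cipher suites on a context; returns 1 on
     * success */
    if (SSL_CTX_set_ciphersuites(tlsctx,
                                 "TLS_AES_256_GCM_SHA384:"
                                 "TLS_CHACHA20_POLY1305_SHA256") != 1) {
            /* reject the configuration */
    }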
isc/atomic.h and its defined macros should be preferred over
stdatomic.h and explicit atomic operations.
Fix the redundant stdatomic.h include in histo.c, found by the newly
introduced check.
This commit modifies TLS Stream and DNS-over-HTTPS transports so that
they do not use the "sock->iface" and "sock->peer" of the lower level
transport directly.
That did not cause any problems before, as things worked as expected,
but with the introduction of PROXYv2 support we use handles to store
the information in both the PROXY Stream and UDP Proxy transports.
Therefore, in order to propagate the information (like addresses)
extracted from PROXYv2 headers from the lower-level transports to the
higher-level ones, we need to get that information from the lower-level
handles rather than the sockets. That means that we should get the peer
and interface addresses using the intended APIs
("isc_nmhandle_peeraddr()" and "isc_nmhandle_localaddr()").
Add the new isc__nm_dump_active_manager() function that can be used
for debugging purposes: it dumps all active sockets within the
network manager instance.
This commit adds a new transport that supports PROXYv2 over UDP. It is
built on top of PROXYv2 handling code (just like PROXY Stream). It
works by processing and stripping the PROXYv2 headers at the beginning
of a datagram (when accepting a datagram) or by placing a PROXYv2
header at the beginning of an outgoing datagram.
The transport is built in such a way that incoming datagrams are being
handled with minimal memory allocations and copying.
In the previous versions of the network manager, detecting the case
when a worker is shutting down was not that important and the actual
status code did not matter much. However, that might not be the case
all the time.
This commit makes the necessary modifications to the code.
This commit makes it possible to use PROXY Stream not only over TCP,
but also over TLS. That is, now PROXY Stream can work in two modes as
far as TLS is involved:
1. PROXY over (plain) TCP - PROXYv2 headers are sent unencrypted before
TLS handshake messages. That is the main mode as described in the
PROXY protocol specification (as it is clearly stated there), and most
of the software expects PROXYv2 support to be implemented that
way (e.g. HAProxy);
2. PROXY over (encrypted) TLS - PROXYv2 headers are sent after the TLS
handshake has happened. For example, this mode is being used (only ?)
by "dnsdist". As far as I can see, that is, in fact, a deviation from
the spec, but I can certainly see how PROXYv2 could end up being
implemented this way elsewhere.
This commit modifies TLS Stream to make it possible to use it over
PROXY Stream. That is required to add PROXYv2 support to TLS-based
transports (DNS over HTTP, DNS over TLS).
This commit adds a new stream-based transport with an interface
compatible with TCP. The transport is built on top of the TCP transport
and the new PROXYv2 handling code. Despite being built on top of TCP,
it can easily be extended to work on top of any TCP-like stream-based
transport. The intention of having this transport is to add PROXYv2
support to all existing stream-based DNS transports (DNS over TCP,
DNS over TLS, DNS over HTTP) by making them work on top of this new
transport.
The idea behind the transport is simple: after accepting a connection
or connecting to a remote server, it enters PROXYv2 handling mode; that
is, it either attempts to read (when accepting a connection) or send
(when establishing a connection) a PROXYv2 header. After that it works
as a mere wrapper on top of the underlying stream-based
transport (TCP).
This commit adds a set of utilities for dealing with PROXYv2 headers,
both parsing and generating them. The code has no dependencies on the
networking code and is (for the most part) a "separate library".
The part responsible for handling incoming PROXYv2 headers is
structured as a state machine that accepts data as input and calls a
callback to notify the upper-level code about the data processing
status.
Such a design, among other things, makes it easy to write a thorough
unit test suite for it, as there are fewer dependencies, and it will
not stand in the way of any changes in the networking code.
Previously, there were two methods of working with the overmem
condition:
1. hi/lo water callback - when the overmem condition was reached
for the first time, the water callback was called with HIWATER
mark and .is_overmem boolean was set internally. Similarly,
when the used memory went below the lo water mark, the water
callback would be called with LOWATER mark and .is_overmem
was reset. This check would be called **every** time memory
was allocated or freed.
2. isc_mem_isovermem() - a simple getter for the internal
.is_overmem flag
This commit removes the first method and moves the hi/lo water checks
to the isc_mem_isovermem() function; thus we now have only a single
method of checking the overmem condition, and the hi/lo water check is
removed from the hot path for memory contexts that don't use overmem
checks.
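A minimal sketch of what moving the check into the getter could look
like, using C11 atomics; the field names are illustrative:

    bool
    isc_mem_isovermem(isc_mem_t *mctx) {
            size_t inuse = atomic_load(&mctx->inuse);
            bool over = atomic_load(&mctx->is_overmem);

            if (!over && inuse > mctx->hiwater) {
                    /* crossed the high-water mark: enter overmem state */
                    atomic_store(&mctx->is_overmem, true);
                    return (true);
            }
            if (over && inuse < mctx->lowater) {
                    /* dropped below the low-water mark: leave the state */
                    atomic_store(&mctx->is_overmem, false);
                    return (false);
            }
            return (over);
    }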
The AES algorithm for DNS cookies was being kept for legacy reasons, and
it can be safely removed in the next major release. Remove both the AES
usage for DNS cookies and the AES implementation itself.
Concurrent threads can access a hashmap for reading by creating and
then destroying an iterator, in which case the integer count of active
iterators is incremented or decremented from different threads,
introducing a data race. Use atomic operations to protect the variable.
Add complementary macros to ISC_LIST_FOREACH(_SAFE) that walk the lists
in reverse.
* ISC_LIST_FOREACH_REV(list, elt, link) - walk the static list from
tail to head
* ISC_LIST_FOREACH_REV_SAFE(list, elt, link, next) - walk the list
from tail to head in a manner that's safe against list member
deletions
OpenSSL 1.1 has already reached end-of-life, and since we are
experiencing a weird memory leak in the mirror system test only on
Ubuntu 20.04 (Focal) with OpenSSL 1.1, this commit disables the legacy
code that enables memory contexts for OpenSSL < 3.0.0.
In units that support detailed reference tracing via ISC_REFCOUNT
macros, we were doing:
/* Define to 1 for detailed reference tracing */
#undef <unit>_TRACE
This would prevent using -D<unit>_TRACE=1 in the CFLAGS.
Replace the above-mentioned snippet with just a comment on how to
enable detailed reference tracing:
/* Add -D<unit>_TRACE=1 to CFLAGS for detailed reference tracing */
Apply the semantic patch to catch all the places where we pass 'char' to
the <ctype.h> family of functions (isalpha() and friends, toupper(),
tolower()).
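The usual fix at such call sites is a cast through unsigned char,
because passing a possibly negative plain char to these functions is
undefined behavior; for example:

    #include <ctype.h>
    #include <stdbool.h>

    static bool
    starts_with_letter(const char *s) {
            /* cast to unsigned char: a negative plain char would be UB */
            return isalpha((unsigned char)s[0]) != 0;
    }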
Instead of copying the address back and forth when hashing addr+port,
we can use incremental hashing. Additionally, switch from the 64-bit
isc_hash_function to the 32-bit isc_hash32(), as the resulting value is
32-bit.
Unix Domain Socket support in BIND 9 has been completely disabled since
BIND 9.18, and attempting to use it has been a fatal error since then.
Clean up the code and the documentation that suggest that Unix Domain
Sockets are supported.
When iterating the table, we can't add new nodes to the hashmap because
we can't ensure that we are not adding the new node before the iterator
position. This also applies to rehashing, which might be triggered by
both isc_hashmap_add() and isc_hashmap_delete(), but not by
isc_hashmap_iter_delcurrent_next().
When isc_hashmap_iter_delcurrent_next() calls hashmap_delete_node(),
nodes from the front of the table could be moved to the end of the
table, resulting in them being returned twice. Detect when this is
happening and prevent those nodes from being returned twice by reducing
the effective size of the table by one each time it happens.
Instead of a high number of dispatches (4 * named_g_udpdisp)[1], make
the dispatches bound to threads and make dns_dispatchset_t create a
dispatch for each thread (event loop).
This required a couple of other changes:
1. dns_dispatch_createudp() must be called on a loop, so that isc_tid()
is already initialized - changes to nsupdate and mdig were required.
2. dns_requestmgr had only a single dispatch each for v4 and v6.
Instead of using a single dispatch, use a dns_dispatchset_t for each
protocol - the same as dns_resolver does.
Looking up a unique message ID in dns_dispatch has been using custom
hash tables. Rewrite the custom hashtable to use the cds_lfht API,
removing one extra lock in the cold-cache resolver hot path.
Refactor isc_hashmap to allow custom matching functions. This allows us
to have better tailored keys that don't require fixed uint8_t arrays,
but can be composed of more fields from the stored data structure.
Add support for incremental hashing to the isc_hash unit, both 32-bit
and 64-bit incremental hashing is now supported.
This is the second commit in a series adding incremental hashing to
libisc. When inserting items into hashtables (hashmaps), we might have
a fragmented key (as an example, we might want to hash DNS name + class
+ type). We either need to construct a contiguous key in memory and
then hash it en bloc, or incremental hashing is required.
This incremental version of the SipHash 2-4 algorithm is the first
building block.
As SipHash 2-4 is often used in hot paths, I've turned the
implementation into a header-only version in the process.
We now depend on explicitly creating memory arenas and disabling tcache
on those, and these features are not available with jemalloc < 4.
Instead of working around these issues, make jemalloc >= 4.0.0 a hard
requirement by looking for the sdallocx() symbol, which is only
available from that version onward.
jemalloc < 4 was only used on RHEL 7, which has not been supported
since BIND 9.19.
This commit extends the internal memory management middleware code in
BIND so that memory contexts backed by dedicated jemalloc arenas can
be created. A new function (isc_mem_create_arena()) is added for that.
Moreover, it extends the existing code so that specialised memory
contexts can be created easily, should we need that functionality for
other future purposes. We have achieved that by passing the flags to
the underlying jemalloc-related calls. See the above
isc_mem_create_arena(), which can serve as an example of this.
Having this opens up possibilities for creating memory contexts tuned
for specific needs.
Use the new isc_mem_c*() calloc-like API for allocations that are
zeroed.
In turn, this also fixes a couple of incorrect usages of ISC_MEM_ZERO
for structures that need to be zeroed explicitly.
There are a few places where isc_mem_cput() is used on structures with
a flexible array member (or similar).
Add new isc_mem_cget(), isc_mem_creget(), and isc_mem_cput() macros to
complement isc_mem_callocate() (which works like calloc()).
The overflow checks are implemented as macros in <isc/mem.h>, so that
the compiler can see that the element size is constant: it should
always be `sizeof(something)`.
Before calling isc_buffer_putmem(), there is a condition to check
that 'buf_size' is greater than 0. At this point 'buf_size' is
guaranteed to be greater than zero, so either the condition is
redundant, or 'unprocessed_size' should be checked instead, which
seems more logical, because calling isc_buffer_putmem() with
'unprocessed_size' being zero is not useful, although harmless.
The isc_dnsstream_assembler_incoming() inline function expects that
when 'buf_size' is zero, then 'buf' must be NULL. The expectation is
not correct, because those values come from the libuv read callback,
and its documentation notes[1] that 'nread' ('buf_size' here) might
be 0, which does not indicate an error or EOF, but is equivalent to
EAGAIN or EWOULDBLOCK under read(2).
Change the isc_dnsstream_assembler_incoming() inline function to
remove the invalid expectation.
[1] https://docs.libuv.org/en/v1.x/stream.html#c.uv_read_cb
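A minimal sketch of a libuv read callback that handles the nread == 0
case described above; the helper functions are illustrative:

    #include <uv.h>

    static void
    read_cb(uv_stream_t *stream, ssize_t nread, const uv_buf_t *buf) {
            if (nread == 0) {
                    /* not an error or EOF: equivalent to EAGAIN or
                     * EWOULDBLOCK; buf may still point at a buffer that
                     * has to be released */
                    release_read_buffer(buf);
                    return;
            }
            if (nread < 0) {
                    /* UV_EOF or a real error */
                    handle_eof_or_error(stream, nread);
                    release_read_buffer(buf);
                    return;
            }
            process_data(stream, buf->base, (size_t)nread);
            release_read_buffer(buf);
    }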
There used to be an extra layer of indirection in the memory functions
for certain dynamic linking scenarios. This involved variant spellings
like isc__mem and isc___mem. The isc___mem variants were removed in
commit 7de846977b so the token pasting is no longer needed and
only serves to obfuscate.
Add tracing probes to isc_job unit:
* libisc:job_cb_before - before the job callback is called
* libisc:job_cb_after - after the job callback is called
Add tracing probes to ISC's own isc_rwlock implementation to allow
fine-grained tracing. The pthread rwlock already has probes inside
glibc, and it's difficult to add probes to headers included from other
libraries.
This adds support for User Statically Defined Tracing (USDT). On
Linux, this uses the header from SystemTap and the dtrace utility, but
the support is universal as long as dtrace is available.
Also add the required infrastructure for adding probes to the libisc,
libdns, and libns libraries, where most of the probes will be.
Instead of growing and never shrinking the list of inactive handles
(to be reused mostly for UDP connections), limit the maximum number of
inactive handles kept to 64. Instead of caching the inactive handles
for all listening sockets, enable the caching only on UDP listening
sockets. For TCP, the handles were cached per accepted socket, thus
reusing the handles only within long-standing TCP connections, but not
across different TCP streams.
The SET_IF_NOT_NULL() macro avoids a fair amount of tedious boilerplate,
checking pointer parameters to see if they're non-NULL and updating
them if they are. The macro was already in the dns_zone unit, and this
commit moves it to the <isc/util.h> header.
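For reference, the macro is essentially the following (a sketch of the
pattern rather than the exact definition):

    #define SET_IF_NOT_NULL(ptr, val)           \
            if ((ptr) != NULL) {                \
                    *(ptr) = (val);             \
            }

    /* typical use: optional out-parameters */
    SET_IF_NOT_NULL(countp, count);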
I have included a Coccinelle semantic patch to use SET_IF_NOT_NULL()
where appropriate. The patch needs an #include in `openssl_shim.c`
in order to work.
Because rcu_barrier() needs to be called as many times as there are
levels of nested call_rcu() calls (call_rcu() calls made from the
call_rcu thread), and currently there's no mechanism to detect whether
more call_rcu callbacks are scheduled, we simply call rcu_barrier()
multiple times. The overhead is negligible, and it prevents rare
assertion failures caused by the check for memory leaks in
isc__mem_destroy().
When a dns_request was canceled via dns_requestmgr_shutdown(), the
cancel event would be propagated on a different loop (loop 0) than the
loop the request was created on. In turn, this would propagate down to
isc_netmgr, where we require all the events to be called from the
matching isc_loop.
Pin the dns_requests to the loops and ensure that all the events are
called on the associated loop. This in turn allows us to remove the
hashed locks on the requests and change the single .requests list to be
a per-loop list for the request accounting.
Additionally, do some extra cleanup, because some race conditions are
no longer possible as all events on the dns_request are serialized.
With ThreadSanitizer support added to the Userspace RCU, we no longer
need to wrap the call_rcu and caa_container_of with
__tsan_{acquire,release} hints. Remove the direct calls to
__tsan_{acquire,release} and the isc_urcu_{container,cleanup} macros.
The cds_lfht_for_each_entry and cds_lfht_for_each_entry_duplicate
macros had code that operated on a NULL pointer: at the end of the
list, they called caa_container_of() on the NULL pointer in the
init-clause and iteration-expression. The result wasn't actually used
anywhere, because the cond-expression in the for loop prevented
executing the loop-statement, but AddressSanitizer still noticed the
invalid operation and rightfully complained.
This was reported upstream and fixed there. Pull the upstream fix into
our <isc/urcu.h> header, so our CI checks pass.
isc_stats_create() can no longer return anything other than
ISC_R_SUCCESS. Refactor isc_stats_create() and its variants in libdns,
libns, and named to just return void.
to reduce the amount of common code that will need to be shared
between the separated cache and zone database implementations,
clean up unused portions of dns_db.
the methods dns_db_dump(), dns_db_isdnssec(), dns_db_printnode(),
dns_db_resigned(), dns_db_expirenode() and dns_db_overmem() were
either never called or were only implemented as nonoperational stub
functions: they have now been removed.
dns_db_nodefullname() was only used in one place, which turned out
to be unnecessary, so it has also been removed.
dns_db_ispersistent() and dns_db_transfernode() are used, but only
the default implementation in db.c was ever actually called. since
they were never overridden by database methods, there's no need to
retain methods for them.
in rbtdb.c, beginload() and endload() methods are no longer defined for
the cache database, because that was never used (except in a few unit
tests which can easily be modified to use the zone implementation
instead). issecure() is also no longer defined for the cache database,
as the cache is always insecure and the default implementation of
dns_db_issecure() returns false.
for similar reasons, hashsize() is no longer defined for zone databases.
implementation functions that are shared between zone and cache are now
prepended with 'dns__rbtdb_' so they can become nonstatic.
serve_stale_ttl is now a common member of dns_db.
When compiled using a malloc that lacks an equivalent to sallocx(),
the jemalloc_shim adds a size prefix to each allocation. We must check
that this does not overflow.
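A minimal sketch of the check, assuming a size_t-sized prefix; the
function name is illustrative:

    #include <stdint.h>
    #include <stdlib.h>

    void *
    shim_malloc(size_t size) {
            /* the shim stores the allocation size in front of the data */
            if (size > SIZE_MAX - sizeof(size_t)) {
                    return (NULL);  /* size + prefix would overflow */
            }
            size_t *ptr = malloc(size + sizeof(size_t));
            if (ptr == NULL) {
                    return (NULL);
            }
            *ptr = size;
            return (ptr + 1);
    }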
Closes #4121