This commit completes the support for DNS-over-HTTP(S) built on top of
nghttp2 and plugs it into BIND. Support for both GET and POST requests
is present, as required by RFC 8484.
Both encrypted (via TLS) and unencrypted HTTP/2 connections are
supported. The latter are there mostly for debugging/troubleshooting
purposes and for offloading encryption to third-party software (as
might be desirable in some environments to simplify TLS certificate
management).
This commit allows a stale RRset to be used (if available) to respond
to a query before an attempt is made to refresh an expired RRset, or
to resolve an RRset that is otherwise unavailable in the cache.
For that to work, a value of zero must be specified for the
stale-answer-client-timeout statement.
To better understand the logic implemented, three flags used during
database lookups and in other parts of the code must be understood (a
lookup sketch follows this list):
. DNS_DBFIND_STALEOK: This flag is set when BIND fails to refresh an
RRset due to timeout (resolver-query-timeout). Its intent is to
try to look for stale data in cache as a fallback, but only if
stale answers are enabled in the configuration.
This flag is also used to activate the stale-refresh-time window,
since it is the only way the database knows that a resolution has
failed.
. DNS_DBFIND_STALEENABLED: This flag is used as a hint to the database
that it may use stale data. It is always set during query lookup if
stale answers are enabled, but it is only effectively used during the
stale-refresh-time window. Also, during this window the resolver will
not try to resolve the query; in other words, no attempt to refresh
the data in cache is made while the stale-refresh-time window is
active.
. DNS_DBFIND_STALEONLY: This newly introduced flag is used when we want
stale data from the database, but not due to a failure in resolution;
it also doesn't require the stale-refresh-time window to be active.
As long as there is a stale RRset available, it should be returned.
It is mainly used in two situations:
1. When stale-answer-client-timeout timer is triggered: in that case
we want to know if there is stale data available to answer the
client.
2. When stale-answer-client-timeout value is set to zero: in that
case, we also want to know if there is some stale RRset available
to promptly answer the client.
We must also distinguish among three situations that may arise when
resolving a query after the addition of the stale-answer-client-timeout
statement, and decide how to handle them:
1. Are we running query_lookup() due to the stale-answer-client-timeout
timer being triggered?
In this case, we look for stale data, making use of the
DNS_DBFIND_STALEONLY flag. If a stale RRset is available, we
respond to the client with the data found and mark the query as
answered (query attribute NS_QUERYATTR_ANSWERED), so that when the
fetch completes the client won't be answered twice.
We must also take care not to detach from the client, as a
fetch will still be running in the background; this is handled by
the following snippet:

    if (!QUERY_STALEONLY(&client->query)) {
        isc_nmhandle_detach(&client->reqhandle);
    }

which tests whether the DNS_DBFIND_STALEONLY flag is set, meaning
that we are here due to a stale-answer-client-timeout timer
expiration.
2. Are we running query_lookup() due to resolver-query-timeout being
triggered?
In this case, the DNS_DBFIND_STALEOK flag will be set and an
attempt to look for stale data will be made.
As already explained, this flag is also used to activate the
stale-refresh-time window, as it means that we failed to refresh
an RRset due to timeout.
It is OK in this situation to detach from the client, as the
fetch has already completed.
3. Are we running query_lookup() for the first time, looking for
an RRset in cache, with stale-answer-client-timeout set to zero?
In this case, if stale answers are enabled, we must do an initial
database lookup with the DNS_DBFIND_STALEONLY flag set, to
indicate to the database that we want stale data.
If we find an active RRset, we proceed as normal: answer the
client and the query is done.
If we find a stale RRset, we respond to the client and mark the
query as answered, but don't detach from the client yet, as an
attempt to refresh the RRset will still be made by means of
the newly introduced function 'query_resolve' (see the sketch
after this list).
If no active or stale RRset is available, begin resolution as
usual.
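In pseudo-C, the decision tree for situation 3 can be summarized as
follows (purely illustrative; apart from query_lookup(),
query_respond(), query_resolve() and NS_QUERYATTR_ANSWERED, which are
named above, the helper names are hypothetical):

    /* stale-answer-client-timeout == 0: probe the cache for stale
     * data before resolving */
    qctx->options |= DNS_DBFIND_STALEONLY;
    result = query_lookup(qctx);
    qctx->options &= ~DNS_DBFIND_STALEONLY;

    if (found_active_rrset(result)) {
        /* normal answer; the query is done */
        return (query_respond(qctx));
    } else if (found_stale_rrset(result)) {
        /* answer promptly from stale data... */
        qctx->client->query.attributes |= NS_QUERYATTR_ANSWERED;
        (void)query_respond(qctx);
        /* ...but still try to refresh the RRset in the background */
        return (query_resolve(qctx));
    }
    /* nothing usable in cache: begin resolution as usual */
    return (begin_normal_resolution(qctx));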
The general logic behind the addition of this new feature works as
follows:
When a client query arrives, the basic path (query.c / ns_query_recurse)
was to create a fetch and wait for its completion in fetch_callback.
With the introduction of stale-answer-client-timeout, a new event of
type DNS_EVENT_TRYSTALE may invoke fetch_callback whenever stale
answers are enabled and the fetch has taken longer than
stale-answer-client-timeout to complete.
When an event of type DNS_EVENT_TRYSTALE triggers fetch_callback, we
must ensure that the following happens:
1. Set up a new query context with the sole purpose of looking up
stale RRset data only; for that purpose a new flag,
DNS_DBFIND_STALEONLY, was added for use in database lookups.
. If a stale RRset is found, mark the original client query as
answered (with a new query attribute named NS_QUERYATTR_ANSWERED),
so when the fetch completion event is received later, we avoid
answering the client twice.
. If a stale RRset is not found, clean up and wait for the normal
fetch completion event.
2. In ns_query_done, we must change this part:
    /*
     * If we're recursing then just return; the query will
     * resume when recursion ends.
     */
    if (RECURSING(qctx->client)) {
        return (qctx->result);
    }
To this:
    if (RECURSING(qctx->client) && !QUERY_STALEONLY(qctx->client)) {
        return (qctx->result);
    }
Otherwise we would not proceed to answer the client if a stale
answer was found when looking up stale-only data.
When an event of type DNS_EVENT_FETCHDONE triggers fetch_callback, we
proceed as before, resuming the query, updating stats, etc., but a few
exceptions had to be added, the most important being these two:
1. Before answering the client (ns_client_send), check whether the
query has already been answered.
2. Before detaching a client, e.g.
isc_nmhandle_detach(&client->reqhandle), ensure that this is the
fetch completion event and not the one triggered due to
stale-answer-client-timeout; a correct call would be:
    if (!QUERY_STALEONLY(client)) {
        isc_nmhandle_detach(&client->reqhandle);
    }
Other than these notes, comments were added to the code in an attempt
to make these updates easier to follow.
* Following the example set in 634bdfb16d, the tlsdns netmgr
module now uses libuv and SSL primitives directly, rather than
opening a TLS socket which opens a TCP socket, as the previous
model was difficult to debug. Closes #2335.
* Remove the netmgr tls layer (we will have to re-add it for DoH)
* Add an isc_tls API to wrap the OpenSSL SSL_CTX object into the libisc
library; move the OpenSSL initialization/deinitialization needed for
OpenSSL 1.0.x from dstapi to isc_tls_{initialize,destroy}()
* Add a couple of new shims needed for OpenSSL 1.0.x
* When LibreSSL is used, require at least version 2.7.0, which
has the best OpenSSL 1.1.x compatibility and auto init/deinit
* Enforce OpenSSL 1.1.x usage on Windows
* Added a TLSDNS unit test and implemented a simple TLSDNS echo
server and client.
previously, query plugins were strictly synchronous: the query
process would be interrupted at some point, data would be looked
up or a change would be made, and then the query processing would
resume immediately.
this commit enables query plugins to initiate asynchronous processes
and resume on a completion event, as with recursion.
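Schematically, an asynchronous hook action can now kick off its own
work and yield (a rough sketch: start_backend_lookup() and
plugin_resume() are hypothetical, while the action signature and
NS_HOOK_RETURN follow the hooks API):

    static ns_hookresult_t
    async_lookup_action(void *arg, void *cbdata, isc_result_t *resp) {
        query_ctx_t *qctx = (query_ctx_t *)arg;

        /* begin an asynchronous operation; plugin_resume() will be
         * invoked by a completion event later, as with recursion */
        start_backend_lookup(qctx, plugin_resume, cbdata);

        *resp = ISC_R_SUCCESS;
        return (NS_HOOK_RETURN); /* suspend query processing for now */
    }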
Parse the configuration of tls objects into SSL_CTX* objects. Listen
on DoT if the 'tls' option is set up in the listen-on directive. Use
the DoT/DoH ports for DoT/DoH.
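For reference, the kind of OpenSSL setup such a tls object boils down
to looks roughly like this (a sketch using stock OpenSSL 1.1.x calls,
not the exact libisc code):

    #include <openssl/ssl.h>

    static SSL_CTX *
    create_server_ctx(const char *certfile, const char *keyfile) {
        SSL_CTX *ctx = SSL_CTX_new(TLS_server_method());
        if (ctx == NULL) {
            return (NULL);
        }
        if (SSL_CTX_use_certificate_chain_file(ctx, certfile) != 1 ||
            SSL_CTX_use_PrivateKey_file(ctx, keyfile,
                                        SSL_FILETYPE_PEM) != 1) {
            SSL_CTX_free(ctx);
            return (NULL);
        }
        return (ctx);
    }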
As query_prefetch() or query_rpzfetch() could be called during a
"regular" fetch, we need to introduce separate storage for attaching
the nmhandle while prefetching records. query_prefetch() and
query_rpzfetch() are guarded against re-entrance by the .query.prefetch
member of ns_client_t, so we can reuse the same .prefetchhandle for
both.
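In outline, the guard plus the dedicated handle slot could look like
this (a sketch; the member names follow the description above):

    if (client->query.prefetch != NULL) {
        return; /* re-entrance guard: a prefetch is already running */
    }
    /* hold the connection with a separate reference while the
     * prefetch is in flight; the prefetch callback ends with
     * isc_nmhandle_detach(&client->prefetchhandle) */
    isc_nmhandle_attach(client->handle, &client->prefetchhandle);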
Attaching and detaching handle pointers will make it easier to
determine where and why reference counting errors have occurred.
A handle needs to be referenced more than once when multiple
asynchronous operations are in flight, so callers must now maintain
multiple handle pointers for each pending operation (a usage sketch
follows these lists). For example, ns_client objects now contain:
- reqhandle: held while waiting for a request callback (query,
notify, update)
- sendhandle: held while waiting for a send callback
- fetchhandle: held while waiting for a recursive fetch to
complete
- updatehandle: held while waiting for an update-forwarding
task to complete
control channel connection objects now contain:
- readhandle: held while waiting for a read callback
- sendhandle: held while waiting for a send callback
- cmdhandle: held while an rndc command is running
httpd connections contain:
- readhandle: held while waiting for a read callback
- sendhandle: held while waiting for a send callback
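The resulting pattern is uniform for any asynchronous operation; a
send, for example, is bracketed like this (a sketch; isc_nm_send()
and the callback shape follow the netmgr API):

    static void
    client_senddone(isc_nmhandle_t *handle, isc_result_t result,
                    void *cbarg) {
        ns_client_t *client = (ns_client_t *)cbarg;

        UNUSED(handle);
        UNUSED(result);

        /* the send has finished: release the reference */
        isc_nmhandle_detach(&client->sendhandle);
    }

    /* before starting the send: take a reference for its lifetime */
    isc_nmhandle_attach(handle, &client->sendhandle);
    isc_nm_send(handle, &region, client_senddone, client);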
The basic scenario for the problem was that, in the process of
resolving a query, if any rrset was eligible for prefetching, a call
to query_prefetch() would be triggered, and this call would run in
parallel with the normal query processing.
The problem arises from the fact that both query_prefetch() and,
in the original thread, a call to ns_query_recurse(), try to attach
to the recursionquota, but the recursing-clients stats counter is only
incremented if ns_query_recurse() attaches to it first.
Conversely, if fetch_callback() is called before prefetch_done(),
it not only detaches from the recursionquota but also decrements
the stats counter; if query_prefetch() attached to the quota first,
that results in a decrement not matched by an increment.
To solve this issue an atomic bool was added; it is set once in
ns_query_recurse(), allowing fetch_callback() to check it and
decrement the stats accordingly.
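In terms of C11 atomics the idea is a one-shot flag (a sketch; the
member name 'recursionquota_counted' and the two stat helpers are
illustrative):

    /* ns_query_recurse(): only the winner of the exchange counts */
    bool expected = false;
    if (atomic_compare_exchange_strong(
                &client->query.recursionquota_counted,
                &expected, true)) {
        increment_recursing_clients_stat(client);
    }

    /* fetch_callback(): decrement only if this client incremented */
    expected = true;
    if (atomic_compare_exchange_strong(
                &client->query.recursionquota_counted,
                &expected, false)) {
        decrement_recursing_clients_stat(client);
    }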
For a more comprehensive explanation, check the thread comment below:
https://gitlab.isc.org/isc-projects/bind9/-/issues/1719#note_145857
the blackhole ACL was accidentally disabled with respect to client
queries during the netmgr conversion.
in order to make this work for TCP, it was necessary to add a return
code to the accept callback functions passed to isc_nm_listentcp() and
isc_nm_listentcpdns().
NS_CLIENT_TCP_BUFFER_SIZE was 2 bytes too large following the
move to netmgr and the associated changes to lib/ns/client.c;
as a result, an INSIST could be triggered if the DNS message being
constructed had a checkpoint stage that fell within those two extra
bytes. Adjusted NS_CLIENT_TCP_BUFFER_SIZE and cleaned up
client_allocsendbuf() now that the previously reserved 2 bytes
are no longer used.
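The arithmetic, sketched (exact constant value is an assumption here;
the point is that the 2-octet TCP length prefix belongs to the
transport, not to the message buffer):

    /* a DNS message is at most 65535 octets; the 2-octet length
     * prefix used on TCP is prepended by the transport layer, so
     * the client's send buffer must not reserve room for it */
    #define NS_CLIENT_TCP_BUFFER_SIZE 65535 /* was 65535 + 2 */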
The rewrite of the BIND 9 build system is a large work and cannot
reasonably be split into separate merge requests. The addition of
automake has a positive effect on the readability and maintainability
of the build system: it is more declarative, it allows conditionals,
and we are able to drop all of the custom make code that BIND 9
developed over the years to overcome the deficiencies of autoconf +
custom Makefile.in files.
This squashed commit contains the following changes:
- conversion (or rather a fresh rewrite) of all Makefile.in files to
Makefile.am by using automake
- libtool is now properly integrated with automake (the way we used it
before was rather hackish, as the only officially supported way to
use libtool is via automake)
- the dynamic module loading was rewritten from a custom patchwork to libtool's
libltdl (which includes the patchwork to support module loading on different
systems internally)
- conversion of the unit test executor from kyua to automake parallel driver
- conversion of the system test executor from custom make/shell to automake
parallel driver
- The GSSAPI support has been refactored and the custom SPNEGO
implementation removed, on the basis that all major KRB5/GSSAPI
implementations (mit-krb5, heimdal and Windows) support the SPNEGO
mechanism.
- The various defunct tests from bin/tests have been removed:
bin/tests/optional and bin/tests/pkcs11
- The text files generated from the MD files have been removed; the
MarkDown files are designed to be readable by both humans and
computers
- The xsl header is now generated by a simple sed command instead of
a perl helper
- The <irs/platform.h> header has been removed
- cleanups of the configure.ac script to make it simpler, and the
addition of multiple macros (there's still work to be done, though)
- the tarball can now be prepared with `make dist`
- the system tests are partially able to run in an out-of-tree build
Here's a list of unfinished work that needs to be completed in subsequent merge
requests:
- `make distcheck` doesn't work yet (because the out-of-tree run of the
system tests is not yet finished)
- documentation is not yet built; there's a separate merge request with
the docbook to sphinx-build rst conversion that needs to be rebased
and adapted on top of the automake changes
- the msvc build is not functional yet, and we need to decide whether we
will just cross-compile bind9 using mingw-w64 or fix the msvc build
- contributed dlz modules are included in neither the autoconf nor the
automake build
After the network manager rewrite, tcp-highwater stats were only being
updated when a valid DNS query was received over TCP.
It turns out tcp-quota is updated right after a TCP connection is
accepted, before any data is read; so if a client connected but never
sent a valid query, it wasn't taken into account when updating the
tcp-highwater stats, which is wrong.
This commit fixes tcp-highwater so that its stats are updated whenever
a TCP connection is established, independent of what happens afterwards
(timeout, invalid request, etc.).
- restore support for tcp-initial-timeout, tcp-idle-timeout,
tcp-keepalive-timeout and tcp-advertised-timeout configuration
options, which were ineffective previously.
- ns__client_request() is now called by netmgr with an isc_nmhandle_t
parameter. The handle can then be permanently associated with an
ns_client object.
- The task manager is paused so that isc_task events that may be
triggered during client processing will not fire until after the netmgr
is finished with it. Before any asynchronous event, the client MUST
call isc_nmhandle_ref(client->handle) to prevent the client from
being reset and reused while waiting for an event to process. When
the asynchronous event is complete, isc_nmhandle_unref(client->handle)
must be called to ensure the handle can be reused later.
- reference counting of client objects is now handled in the nmhandle
object. when the handle references drop to zero, the client's "reset"
callback is used to free temporary resources and reinitialize it,
whereupon the handle (and associated client) is placed in the
"inactive handles" queue. when the system is shut down and the
handles are cleaned up, the client's "put" callback is called to free
all remaining resources.
- because client allocation is no longer handled in the same way,
the '-T clienttest' option has now been removed and is no longer
used by any system tests.
- the unit tests require wrapping the isc_nmhandle_unref() function;
when LD_WRAP is supported, that is used. otherwise we link a
libwrap.so interposer library and use that.
The double-locked queue implementation is still currently in use
in ns_client, but will be replaced by a fetch-and-add array queue.
This commit moves it from queue.h to list.h, so that queue.h can be
used for the new data structure, and cleans up dependencies between
list.h and types.h. Later, when ISC_QUEUE is no longer in use,
it will be removed completely.
This variable will report the maximum number of simultaneous tcp clients
that BIND has served while running.
It can be verified by running rndc status and inspecting the "tcp
high-water: count" line, or by generating a statistics file (rndc
stats) and inspecting the line with the "TCP connection high-water"
text.
The tcp-highwater variable is atomically updated based on an existing
tcp-quota system handled in ns/client.c.
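A high-water update of this kind is a classic compare-and-swap loop;
a self-contained sketch (not the exact BIND code):

    #include <stdatomic.h>
    #include <stdint.h>

    static atomic_uint_fast32_t tcp_active;    /* current TCP clients */
    static atomic_uint_fast32_t tcp_highwater; /* maximum observed */

    static void
    tcp_connected(void) {
        uint_fast32_t curr = atomic_fetch_add(&tcp_active, 1) + 1;
        uint_fast32_t hw = atomic_load(&tcp_highwater);

        /* raise the mark unless another thread already went higher */
        while (curr > hw &&
               !atomic_compare_exchange_weak(&tcp_highwater, &hw,
                                             curr)) {
            /* hw was reloaded with the current value; retry */
        }
    }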
This commit changes the BIND cookie algorithms to match
draft-sury-toorop-dnsop-server-cookies-00. Namely, it changes the Client Cookie
algorithm to use SipHash 2-4, adds the new Server Cookie algorithm using SipHash
2-4, and changes the default for the Server Cookie algorithm to be siphash24.
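Per the draft, the Server Cookie is an 8-byte SipHash-2-4 output
computed over the Client Cookie plus the version, reserved, timestamp
and client-address fields, keyed with the server secret. A sketch
(assuming libisc's isc_siphash24() takes a 16-byte key and produces an
8-byte output, and assuming an IPv4 client for brevity):

    #include <stdint.h>
    #include <string.h>

    #include <isc/siphash.h>

    static void
    compute_server_cookie(const uint8_t secret[16],
                          const uint8_t client_cookie[8],
                          uint32_t timestamp,
                          const uint8_t v4addr[4],
                          uint8_t server_cookie[8]) {
        uint8_t in[20]; /* cookie + version/reserved + time + address */

        memset(in, 0, sizeof(in));
        memcpy(in, client_cookie, 8);
        in[8] = 1; /* version 1; the reserved bytes stay zero */
        in[12] = (uint8_t)(timestamp >> 24);
        in[13] = (uint8_t)(timestamp >> 16);
        in[14] = (uint8_t)(timestamp >> 8);
        in[15] = (uint8_t)timestamp;
        memcpy(&in[16], v4addr, 4);
        isc_siphash24(secret, in, sizeof(in), server_cookie);
    }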
Add siphash24 cookie algorithm, and make it keep legacy aes as
- if the TCP quota has been exceeded but there are no clients listening
for new connections on the interface, we can now force attachment to the
quota using isc_quota_force(), instead of carrying on with the quota not
attached.
- the TCP client quota is now referenced via a reference-counted
'ns_tcpconn' object (sketched after this list), one of which is created
whenever a client begins listening for new connections, and attached
to by members of that client's pipeline group. when the last reference
to the tcpconn object is detached, it is freed and the TCP quota slot
is released.
- reduce code duplication by adding mark_tcp_active() function
- convert counters to stdatomic
(cherry picked from commit a8dd133d270873b736c1be9bf50ebaa074f5b38f)
(cherry picked from commit 4a8fc979c4)
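Schematically, the reference-counted object works like this (a
stdatomic sketch, not the exact code; allocation details omitted):

    typedef struct ns_tcpconn {
        atomic_uint_fast32_t refs; /* clients in this pipeline group */
        isc_quota_t *tcpquota;     /* attached TCP quota, if any */
    } ns_tcpconn_t;

    static void
    tcpconn_detach(ns_tcpconn_t **connp) {
        ns_tcpconn_t *conn = *connp;
        *connp = NULL;

        /* the last reference out releases the TCP quota slot */
        if (atomic_fetch_sub(&conn->refs, 1) == 1) {
            isc_quota_detach(&conn->tcpquota);
            free(conn); /* the real code uses the isc_mem API */
        }
    }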
- ensure that tcpactive is cleaned up correctly when accept() fails.
- set 'client->tcpattached' when the client is attached to the tcpquota.
carry this value on to new clients sharing the same pipeline group.
don't call isc_quota_detach() on the tcpquota unless tcpattached is
set. this way clients that were allowed to accept TCP connections
despite being over quota (and therefore, were never attached to the
quota) will not inadvertently detach from it and mess up the
accounting.
- simplify the code for tcpquota disconnection by using a new function
tcpquota_disconnect().
- before deciding whether to reject a new connection due to quota
exhaustion, check to see whether there are at least two active
clients. previously, this was "at least one", but that could be
insufficient if there was one other client in READING state (waiting
for messages on an open connection) but none in READY (listening
for new connections).
- before deciding whether a TCP client object can go inactive, we
must ensure there are enough other clients to maintain service
afterward -- both accepting new connections and reading/processing new
queries. A TCP client can't shut down unless at least one
client is accepting new connections and (in the case of pipelined
clients) at least one additional client is waiting to read.
(cherry picked from commit 427a2fb4d17bc04ca3262f58a9dcf5c93fc6d33e)
(cherry picked from commit 0896841272)
Track pipeline groups using a shared reference counter
instead of a linked list.
(cherry picked from commit 31f392db20207a1b05d6286c3c56f76c8d69e574)
(cherry picked from commit 2211120222)
the TCP client quota could still be ineffective under some
circumstances. this change:
- improves quota accounting to ensure that TCP clients are
properly limited, while still guaranteeing that at least one client
is always available to serve TCP connections on each interface.
- uses more descriptive names and removes one (ntcptarget) that
was no longer needed
- adds comments
(cherry picked from commit 9e74969f85329fe26df2fad390468715215e2edd)
(cherry picked from commit d7e84cee0b)
- named could return FORMERR if parsing iterative responses
ended with a result code such as DNS_R_OPTERR. instead of
computing a response code based on the result, we now just
force the response to be SERVFAIL in this case.
Implement a helper function which, given an input string:
- copies it verbatim if it contains at least one path separator,
- prepends the named plugin installation directory to it otherwise.
This function will allow configuration parsing code to conveniently
determine the full path to a plugin module given either a path or a
filename.
While other, simpler ways exist for making sure filenames passed to
dlopen() cause the latter to look for shared objects in a specific
directory, they are very platform-specific. Using full paths is thus
likely the most portable and reliable solution.
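A minimal sketch of such a helper, assuming a NAMED_PLUGINDIR
build-time constant (the Windows variation is noted below):

    #include <stdio.h>
    #include <string.h>

    #include <isc/result.h>

    isc_result_t
    plugin_expandpath(const char *src, char *dst, size_t dstlen) {
        int ret;

        if (strchr(src, '/') != NULL) {
            /* 'src' contains a path separator: copy it verbatim */
            ret = snprintf(dst, dstlen, "%s", src);
        } else {
            /* bare filename: prepend the plugin install directory */
            ret = snprintf(dst, dstlen, "%s/%s", NAMED_PLUGINDIR,
                           src);
        }
        if (ret < 0 || (size_t)ret >= dstlen) {
            return (ISC_R_NOSPACE);
        }
        return (ISC_R_SUCCESS);
    }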
Also added unit tests for ns_plugin_expandpath() to ensure it behaves
as expected for absolute paths, relative paths, and filenames, for
various target buffer sizes.
(Note: plugins share a directory with named on Windows; there is no
default plugin path. Therefore the source path is copied to the
destination path with no modification.)
- "hook" is now used only for hook points and hook actions
- the "hook" statement in named.conf is now "plugin"
- ns_module and ns_modlist are now ns_plugin and ns_plugins
- ns_module_load is renamed ns_plugin_register
- the mandatory functions in plugin modules (hook_register,
hook_check, hook_version, hook_destroy) have been renamed
- use a per-view module list instead of global hook_modules
- create an 'instance' pointer when registering modules, store it in
the module structure, and use it as action_data when calling
hook functions - this enables multiple module instances to be set
up in parallel
- also some nomenclature changes and cleanup
- added some hook points that will be needed for a dns64 module later
- moved some code from the beginning of query_respond() to
the end of query_prepresponse(); this has no effect on functionality
but means we can have a hook point at the top of query_respond(),
which seems nicer
- compressed duplicated code into query_zerottl_refetch() function
- added a qctx->answered flag so that a module can prevent
query_addrrset() from being called from query_respond() when
it's already been called from the module.
- this is necessary because adding the same hook to multiple views
causes the ISC_LIST link value to become inconsistent; it isn't
noticeable when only one hook action is ever registered at a
given hook point, but it will break things when there are two.
- eliminate qctx->hookdata and client->hookflags.
- use a memory pool to allocate data blobs in the filter-aaaa module,
and associate them with the client address in a hash table
- instead of detaching the client in query_done(), mark it for deletion
and then call ns_client_detach() from qctx_destroy(); this ensures
that it will still exist when the QCTX_DESTROYED hook point is
reached.
- use a get_hooktab() function to determine the hook table.
- PROCESS_HOOK now jumps to a cleanup tag on failure
- add PROCESS_ALL_HOOKS in query.c, to run all hook functions at
a specified hook point without stopping. this is to be used for
initialization and destruction functions that must run in every
module.
- 'result' is set in PROCESS_HOOK only when a hook function
interrupts processing.
- revised terminology: a "callback" is now a "hook action"
- remove unused NS_PROCESS_HOOK and NS_PROCESS_HOOK_VOID macros.
- added a 'hookdata' array to qctx to store pointers to up to
16 blobs of data which are allocated by modules as needed.
each module is assigned an ID number as it's loaded, and this
is the index into the hook data array. this is to be used for
holding persistent state between calls to a hook module for a
specific query.
- instead of using qctx->filter_aaaa, we now use qctx->hookdata.
(this was the last piece of filter-aaaa specific code outside the
module.)
- added hook points for qctx initialization and destruction. we get
a filter-aaaa data pointer from the mempool when initializing and
store it in the qctx->hookdata table; we return it to the mempool
when destroying the qctx.
- link the view to the qctx so that detaching the client doesn't cause
hooks to fail
- added a qctx_destroy() function which must be called after qctx_init;
this calls the QCTX_DESTROY hook and detaches the view
- general cleanup and comments
- make some cfg-parsing functions global so they can be run
from filter-aaaa.so
- add filter-aaaa options to the hook module's parser
- mark filter-aaaa options in named.conf as obsolete, remove
from named and checkconf, and update the filter-aaaa test not to
use checkconf anymore
- remove filter-aaaa-related struct members from dns_view
- allow multiple "hook" statements at global or view level
- add "optional bracketed text" type for optional parameter list
- load hook module from specified path rather than hardcoded path
- add a hooktable pointer (and a callback for freeing it) to the
view structure
- change the hooktable functions so they no longer update ns__hook_table
by default, and modify PROCESS_HOOK so it uses the view hooktable, if
set, rather than ns__hook_table. (ns__hook_table is retained for
use by unit tests.)
- update the filter-aaaa system test to load filter-aaaa.so
- add a prereq script to check for dlopen support before running
the filter-aaaa system test
not yet done:
- configuration parameters are not being passed to the filter-aaaa
module; the filter-aaaa ACL and filter-aaaa-on-{v4,v6} settings are
still stored in dns_view
- temporary kluge! in this version, for testing purposes,
named always searches for a filter-aaaa module at /tmp/filter-aaaa.so.
this enables the filter-aaaa system test to run even though the
code to configure hooks in named.conf hasn't been written yet.
- filter-aaaa-on-v4, filter-aaaa-on-v6 and the filter-aaaa ACL are
still configured in the view as they were before, not in the hook.
- these formerly static helper functions have been moved into client.c
and made external so that they can be used in hook modules as well as
internally in libns: query_newrdataset, query_putrdataset,
query_newnamebuf, query_newname, query_getnamebuf, query_keepname,
query_releasename, query_newdbversion, query_findversion
- made query_recurse() and query_done() into public functions
ns_query_recurse() and ns_query_done() so they can be called from
modules.
- the goal of this change is for AAAA filtering to be fully contained
in the query logic, and implemented at discrete points that can be
replaced with hook callouts later on.
- the new code may be slightly less efficient than the old filter-aaaa
implementation, but maximum efficiency was never a priority for AAAA
filtering anyway.
- we now use the rdataset RENDERED attribute to indicate that an AAAA
rdataset should not be included when rendering the message. (this
flag was originally meant to indicate that an rdataset has already
been rendered and should not be repeated, but it can also be used to
prevent rendering in the first place.)
- the DNS_MESSAGERENDER_FILTER_AAAA, NS_CLIENTATTR_FILTER_AAAA,
and DNS_RDATASETGLUE_FILTERAAAA flags are all now unnecessary and
have been removed.
- the purpose of this change is allow for more well-defined hook points
to be available in the query processing logic. some functions that
formerly didn't have access to 'qctx' do now; this is needed because
'qctx' is what gets passed when calling a hook function.
- query_addrdataset() has been broken up into three separate functions
since it used to do three unrelated things, and what was formerly
query_addadditional() has been renamed query_additional_cb() for
clarity.
- client->filter_aaaa is now qctx->filter_aaaa. (later, it will be moved
into opaque storage in the qctx, for use by the filter-aaaa module.)
- cleaned up style and braces
- move hooks.h to public include directory
- ns_hooktable_init() initializes a hook table. if NULL is passed in, it
initializes the global hook table
- ns_hooktable_save() saves a pointer to the current global hook table.
- ns_hooktable_reset() replaces the global hook table with a
different one
- ns_hook_add() adds hooks at specified hook points in a hook table (or
the global hook table if the specified table is NULL); usage is
sketched after this list
- load and unload functions support dlopen() of hook modules (this is
adapted from dyndb and not yet functional)
- began adding new hook points to query.c
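Usage, as seen from a module, might look like this (a sketch based on
the eventual plugin API; the hook-point constant is illustrative):

    static ns_hookresult_t
    respond_begin_action(void *arg, void *cbdata, isc_result_t *resp) {
        UNUSED(arg);    /* the query_ctx_t for this query */
        UNUSED(cbdata); /* the per-instance data registered below */
        UNUSED(resp);   /* only set when interrupting processing */

        return (NS_HOOK_CONTINUE); /* let normal processing proceed */
    }

    /* in the module's registration function: */
    const ns_hook_t hook = {
        .action = respond_begin_action,
        .action_data = instance,
    };
    ns_hook_add(hooktable, mctx, NS_QUERY_RESPOND_BEGIN, &hook);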
When query processing hits a delegation from a locally configured zone,
an attempt may be made to look for a better answer in the cache. In
such a case, the zone-sourced delegation data is set aside and the
lookup is retried using the cache database. When that lookup is
completed, a decision is made whether the answer found in the cache is
better than the answer found in the zone.
Currently, if the zone-sourced answer turns out to be better than the
one found in the cache:
- qctx->zdb is not restored into qctx->db,
- qctx->node, holding the zone database node found, is not even saved.
Thus, in such a case both qctx->db and qctx->node will point at cache
data. This is not an issue for BIND versions which do not support
mirror zones because in these versions non-recursive queries always
cause the zone-sourced delegation to be returned and thus the
non-recursive part of query_delegation() is never reached if the
delegation is coming from a zone. With mirror zones, however,
non-recursive queries may cause cache lookups even after a zone
delegation is found. Leaving qctx->db assigned to the cache database
when query_delegation() determines that the zone-sourced delegation is
the best answer to the client's query prevents DS records from being
added to delegations coming from mirror zones. Fix this issue by
keeping the zone database and zone node in qctx while the cache is
searched for an answer and then restoring them into qctx->db and
qctx->node, respectively, if the zone-sourced delegation turns out to be
the best answer. Since this change means that qctx->zdb cannot be used
as the glue database any more as it will be reset to NULL by RESTORE(),
ensure that qctx->db is not a cache database before attaching it to
qctx->client->query.gluedb.
Furthermore, the current code contains a conditional statement which
prevents a mirror zone from being used as a source of glue records.
Said statement was added to prevent assertion failures caused by
attempting to use a zone database's glue cache for finding glue for an
NS RRset coming from a cache database. However, that check is overly
strict since it completely prevents glue from being added to delegations
coming from mirror zones. With the changes described above in place,
the scenario this check was preventing can no longer happen, so remove
the aforementioned check.
If qctx->zdb is not NULL, qctx->zfname will also not be NULL;
qctx->zsigrdataset may be NULL in such a case, but query_putrdataset()
handles pointers to NULL pointers gracefully. Remove redundant
conditional expressions to make the cleanup code in query_freedata()
match the corresponding sequences of SAVE() / RESTORE() macros more
closely.
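For context, SAVE() and RESTORE() in query.c transfer a pointer and
NULL out its source, which is why a RESTORE() leaves qctx->zdb unset;
essentially:

    #define SAVE(a, b)                   \
        do {                             \
            INSIST((a) == NULL);         \
            (a) = (b);                   \
            (b) = NULL;                  \
        } while (0)
    #define RESTORE(a, b) SAVE(a, b)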
- mark the 'geoip-use-ecs' option obsolete; warn when it is used
in named.conf
- prohibit 'ecs' ACL tags in named.conf; note that this is a fatal error
since simply ignoring the tags could make ACLs behave unpredictably
- re-simplify the radix and iptable code
- clean up dns_acl_match(), dns_aclelement_match(), dns_acl_allowed()
and dns_geoip_match() so they no longer take ecs options
- remove the ECS-specific unit and system test cases
- remove references to ECS from the ARM
Interrupt query processing when query_recurse() attempts to ask the
same name servers for the same QNAME/QTYPE tuple twice in a row, as
this indicates that query processing may be stuck for an indeterminate
period of time, e.g. due to interactions between features able to
restart query_lookup().
The three functions have been modeled after the arc4random family of
functions, and they will always return random bytes.
The isc_random family of functions internally uses one of these
CSPRNGs (if available; a usage sketch follows the list):
1. getrandom() libc call (might be available on Linux and Solaris)
2. SYS_getrandom syscall (might be available on Linux, detected at runtime)
3. arc4random(), arc4random_buf() and arc4random_uniform() (available on BSDs and Mac OS X)
4. crypto library function:
4a. RAND_bytes() in the case of OpenSSL
4b. pkcs_C_GenerateRandom() in the case of a PKCS#11 library
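The resulting calls mirror arc4random usage; a usage sketch (assuming
the names isc_random32(), isc_random_buf() and isc_random_uniform()):

    #include <isc/random.h>

    uint32_t r = isc_random32();        /* like arc4random() */

    uint8_t buf[16];
    isc_random_buf(buf, sizeof(buf));   /* like arc4random_buf() */

    /* unbiased value in [0, 6), like arc4random_uniform(6) */
    uint32_t die = isc_random_uniform(6);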
4762. [func] "update-policy local" is now restricted to updates
from local addresses. (Previously, other addresses
were allowed so long as updates were signed by the
local session key.) [RT #45492]
4724. [func] By default, BIND now uses the random number
functions provided by the crypto library (i.e.,
OpenSSL or a PKCS#11 provider) as a source of
randomness rather than /dev/random. This is
suitable for virtual machine environments
which have limited entropy pools and lack
hardware random number generators.
This can be overridden by specifying another
entropy source via the "random-device" option
in named.conf, or via the -r command line option;
however, for functions requiring full cryptographic
strength, such as DNSSEC key generation, this
cannot be overridden. In particular, the -r
command line option no longer has any effect on
dnssec-keygen.
This can be disabled by building with
"configure --disable-crypto-rand".
[RT #31459] [RT #46047]
4708. [cleanup] Legacy Windows builds (i.e. for XP and earlier)
are no longer supported. [RT #45186]
4707. [func] The lightweight resolver daemon and library (lwresd
and liblwres) have been removed. [RT #45186]
4706. [func] Code implementing name server query processing has
been moved from bin/named to a new library "libns".
Functions remaining in bin/named are now prefixed
with "named_" rather than "ns_". This will make it
easier to write unit tests for name server code, or
link name server functionality into new tools.
[RT #45186]