We cannot use a 'quick' lock (i.e. a lock that spins on the CPU) for the log
lock, because the log lock can be held across slow I/O. Using a 'quick' lock
there makes the waiting threads eat CPU for no good reason (see the sketch
after the pidstat output below).
Example of 'pidstat' output when using various locks for log_lock:
- 'quick' lock and slow log file system (tail -f on the log file on XFS on RHEL 8)
04:15:11 PM UID TGID TID %usr %system %CPU CPU Command
04:15:21 PM 998 16431 - 100.00 4.20 100.00 2 unbound
04:15:21 PM 998 - 16431 31.00 1.00 32.00 2 |__unbound
04:15:21 PM 998 - 16432 31.30 0.80 32.10 0 |__unbound
04:15:21 PM 998 - 16433 30.20 1.40 31.60 1 |__unbound
04:15:21 PM 998 - 16434 30.70 1.00 31.70 3 |__unbound
- 'quick' lock and log file system being fast
04:15:40 PM UID TGID TID %usr %system %CPU CPU Command
04:15:50 PM 998 16431 - 10.00 1.60 11.60 1 unbound
04:15:50 PM 998 - 16431 2.50 0.50 3.00 1 |__unbound
04:15:50 PM 998 - 16432 2.30 0.40 2.70 3 |__unbound
04:15:50 PM 998 - 16433 2.70 0.30 3.00 0 |__unbound
04:15:50 PM 998 - 16434 2.60 0.40 3.00 2 |__unbound
- 'basic' lock (this commit) and slow log file system (tail -f on the log file on XFS on RHEL 8)
04:29:48 PM UID TGID TID %usr %system %CPU CPU Command
04:29:58 PM 998 11632 - 7.10 14.10 21.20 3 unbound
04:29:58 PM 998 - 11632 1.70 3.20 4.90 3 |__unbound
04:29:58 PM 998 - 11633 1.60 3.30 4.90 1 |__unbound
04:29:58 PM 998 - 11634 2.00 4.10 6.10 1 |__unbound
04:29:58 PM 998 - 11635 1.90 3.50 5.40 1 |__unbound
We can see in the above example that, when the 'basic' lock is used, CPU is
not consumed while the log file system is slow.
Another reproducer scenario: put the log file on an NFS share mounted with
the 'sync' option.
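As a rough illustration of the difference, a minimal pthreads sketch (these
are not unbound's own lock wrappers; the function names and the use of
stderr as the 'log file' are made up for the sketch): with the spinning lock
every waiting thread stays on the CPU for as long as the holder is stuck in
a slow write, with the mutex the waiters sleep until the lock is released.

#include <pthread.h>
#include <string.h>
#include <unistd.h>

static pthread_spinlock_t quick_lock;   /* 'quick': waiters spin on the CPU */
static pthread_mutex_t basic_lock = PTHREAD_MUTEX_INITIALIZER; /* 'basic' */
static int log_fd = 2;                  /* stderr stands in for the log file */

void log_locks_init(void)
{
	/* spinlocks have no static initializer, so init once at startup */
	pthread_spin_init(&quick_lock, PTHREAD_PROCESS_PRIVATE);
}

/* Every thread waiting here burns CPU for as long as another thread is
 * stuck in the (possibly very slow) write() below. */
void log_line_quick(const char* msg)
{
	pthread_spin_lock(&quick_lock);
	(void)write(log_fd, msg, strlen(msg));
	pthread_spin_unlock(&quick_lock);
}

/* Waiters are descheduled by the kernel and consume no CPU until the
 * holder finishes its write() and unlocks. */
void log_line_basic(const char* msg)
{
	pthread_mutex_lock(&basic_lock);
	(void)write(log_fd, msg, strlen(msg));
	pthread_mutex_unlock(&basic_lock);
}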
These errors are not logged on low verbosity; they show on verbosity 3
(query details), because there is a high volume and the operator cannot do
anything about the remote failure. Specifically, the high-volume errors are
filtered out.
This sets the hash function to slower but more easily auditable code that
does not read beyond the array boundaries. This makes the code easier to
check for security; the trade-off is that it is slower, but it does not read
outside of the array.
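A minimal sketch of the general technique, not the project's actual hash
code (the function names and the mixing constant are made up): a fast hash
typically reads the trailing bytes of a key as one whole word, which can
touch memory past the end of the array, while the auditable variant reads
the tail strictly byte by byte and stays in bounds.

#include <stddef.h>
#include <stdint.h>

/* Toy mixing step, only to make the sketch self-contained. */
static uint32_t mix(uint32_t h, uint32_t k)
{
	h ^= k;
	h *= 0x9e3779b1u;
	return h;
}

/* In-bounds tail handling: only bytes that exist are read.  The fast
 * style would instead read the tail as one 32-bit word, e.g.
 * k = *(const uint32_t*)p, touching up to 3 bytes past the end. */
uint32_t hash_tail_safe(const uint8_t* p, size_t tail_len, uint32_t h)
{
	uint32_t k = 0;
	switch(tail_len) {
	case 3: k |= (uint32_t)p[2] << 16; /* fall through */
	case 2: k |= (uint32_t)p[1] << 8;  /* fall through */
	case 1: k |= (uint32_t)p[0];
		h = mix(h, k);
		break;
	case 0: break;
	}
	return h;
}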
This commit adds proper support for multiple instances of the python
module: When more than one instance is added to the module list, the
first instance loads the first script specified in the `python:`
configuration section. The second instance loads the second script,
and so on.
When there are more module instances in the module list than there are
scripts in the `python:` section, an error is raised during
initialization and unbound won't start. When more scripts than module
instances are provided, the surplus scripts are ignored.
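As an illustration of the mapping, a hypothetical unbound.conf fragment
(module-config and python-script are the existing option names; the
surrounding modules and the script paths are only placeholders):

server:
	module-config: "validator python python iterator"

python:
	# the first "python" instance in module-config loads the first
	# script, the second instance loads the second script, and so on
	python-script: "/etc/unbound/scripts/first.py"
	python-script: "/etc/unbound/scripts/second.py"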