                 -----------------------------------------
                        Filters Guide - version 2.0
                         ( Last update: 2017-07-27 )
                 -----------------------------------------

Author  : Christopher Faulet
Contact : christopher dot faulet at capflam dot org

ABSTRACT
--------
Filter support is a feature introduced in HAProxy 1.7. It is a way to extend
HAProxy without touching its core code and, to some extent, without knowing its
internals. This feature eases contributions by reducing the impact of changes.
Another advantage is to simplify HAProxy by replacing some of its parts with
filters. As we will see, and as an example, the HTTP compression was the first
feature moved into a filter.

This document describes how to write a filter and what you have to keep in mind
to do so. It also talks about the known limits and the pitfalls to avoid.

As said, filters are still quite new. The API is not frozen and will be
updated/modified/improved/extended as needed.

SUMMARY
-------

  1. Filters introduction
  2. How to use filters
  3. How to write a new filter
      3.1. API Overview
      3.2. Defining the filter name and its configuration
      3.3. Managing the filter lifecycle
          3.3.1. Dealing with threads
      3.4. Handling the streams activity
      3.5. Analyzing the channels activity
      3.6. Filtering the data exchanged
  4. FAQ

1. FILTERS INTRODUCTION
-----------------------
First of all, to fully understand how filters work and how to create one, it is
best to know, at least from a distance, what a proxy (frontend/backend), a
stream and a channel are in HAProxy and how these entities are linked to each
other. doc/internals/entities.pdf gives a good overview.

Then, to support filters, many callbacks have been added to HAProxy at
different places, mainly around channel analyzers. Their purpose is to allow
filters to be involved in the data processing, from the stream
creation/destruction to the data forwarding. Depending on what it should do, a
filter can implement all or part of these callbacks. For now, existing
callbacks are focused on streams. But future improvements could enlarge the
filters' scope. For example, it could be useful to handle events at the
connection level.

In the HAProxy configuration file, a filter is declared in a proxy section,
except in defaults. So the configuration corresponding to a filter declaration
is attached to a specific proxy, and is shared by all its instances. It is
opaque from the HAProxy point of view; it is the filter's responsibility to
manage it. Each filter declaration matches a unique configuration. Several
declarations of the same filter in the same proxy are handled as different
filters by HAProxy.
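As a sketch of that last point (the filter arguments here are only
illustrative), declaring the same filter twice in one proxy yields two
independent filters, each with its own opaque configuration:

    listen test
        bind *:80
        # Two declarations of the trace filter: HAProxy handles them as two
        # distinct filters, each with its own configuration.
        filter trace name FIRST
        filter trace name SECOND
        server srv 127.0.0.1:8000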
A filter instance is represented by a partially opaque context (or a state)
attached to a stream and passed as argument to callbacks. Through this context,
filter instances are stateful. Depending on whether the filter is declared in a
frontend or a backend section, its instances will be created, respectively,
when a stream is created or when a backend is selected. Their behaviors will
also be different. Only instances of filters declared in a frontend section
will be aware of the creation and the destruction of the stream, and will take
part in the channel analysis before the backend is defined.

It is important to remember that the configuration of a filter is shared by all
its instances, while the context of an instance is owned by a unique stream.
Filters are designed to be chained. It is possible to declare several filters
in the same proxy section. The declaration order is important because filters
are called one after the other in this order. Frontend and backend filters are
also chained, frontend ones being called first. Even though the filters
processing is serialized, each filter behaves as if it were alone (unless it
was developed to be aware of other filters). For all that, some constraints are
imposed on filters, especially when the data exchanged between the client and
the server are processed. We will discuss these constraints again when we
tackle the subject of writing a filter.
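The chaining order can be sketched as follows (filter arguments are only
illustrative): for a stream passing through both proxies below, the frontend
filters f1 then f2 run first, followed by the backend filter f3:

    frontend fe
        bind *:80
        filter trace name f1
        filter trace name f2
        default_backend be

    backend be
        filter trace name f3
        server srv 127.0.0.1:8000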
2. HOW TO USE FILTERS
---------------------
To use a filter, you must use the parameter 'filter' followed by the filter
name and, optionally, its configuration in the desired listen, frontend or
backend section. For example:
    listen test
        ...
        filter trace name TST
        ...
See doc/configuration.txt for a formal definition of the parameter 'filter'.
Note that additional parameters on the filter line must be parsed by the filter
itself.
The list of available filters is reported by 'haproxy -vv':

    $> haproxy -vv
    HA-Proxy version 1.7-dev2-3a1d4a-33 2016/03/21
    Copyright 2000-2016 Willy Tarreau <willy@haproxy.org>
    [...]
    Available filters :
            [COMP] compression
            [TRACE] trace
Multiple filter lines can be used in a proxy section to chain filters. Filters
will be called in the declaration order.
Some filters can support implicit declarations in certain circumstances (without
the filter line). This is not recommended for new features but is useful for
existing features moved into a filter, for backward compatibility reasons.
Implicit declarations are supported when there is only one filter used on a
proxy. When several filters are used, explicit declarations are mandatory.
The HTTP compression filter is one of these filters. Alone, using 'compression'
keywords is enough to use it. But when at least a second filter is used, a
filter line must be added.
# filter line is optional
listen t1
bind *:80
compression algo gzip
compression offload
server srv x.x.x.x:80
# filter line is mandatory for the compression filter
listen t2
bind *:81
filter trace name T2
filter compression
compression algo gzip
compression offload
server srv x.x.x.x:80
3. HOW TO WRITE A NEW FILTER
----------------------------
If you want to write a filter, there are 2 header files that you must know:
* include/types/filters.h: This is the main header file, containing all
important structures you will use. It represents
the filter API.
* include/proto/filters.h: This header file contains helper functions that
you may need to use. It also contains the internal
API used by HAProxy to handle filters.
To ease filter integration, it is better to follow some conventions:
* Use the 'flt_' prefix to name your filter (e.g: flt_http_comp or flt_trace).
* Keep everything related to your filter in a single file.
The filter 'trace' can be used as a template to write your own filter. It is a
good start to see how filters really work.
3.1 API OVERVIEW
----------------
Writing a filter boils down to writing functions and attaching them to the
existing callbacks. Available callbacks are listed in the following structure:
struct flt_ops {
/*
* Callbacks to manage the filter lifecycle
*/
int (*init) (struct proxy *p, struct flt_conf *fconf);
void (*deinit) (struct proxy *p, struct flt_conf *fconf);
int (*check) (struct proxy *p, struct flt_conf *fconf);
int (*init_per_thread) (struct proxy *p, struct flt_conf *fconf);
void (*deinit_per_thread)(struct proxy *p, struct flt_conf *fconf);
/*
* Stream callbacks
*/
int (*attach) (struct stream *s, struct filter *f);
int (*stream_start) (struct stream *s, struct filter *f);
int (*stream_set_backend)(struct stream *s, struct filter *f, struct proxy *be);
void (*stream_stop) (struct stream *s, struct filter *f);
void (*detach) (struct stream *s, struct filter *f);
void (*check_timeouts) (struct stream *s, struct filter *f);
/*
* Channel callbacks
*/
int (*channel_start_analyze)(struct stream *s, struct filter *f,
struct channel *chn);
int (*channel_pre_analyze) (struct stream *s, struct filter *f,
struct channel *chn,
unsigned int an_bit);
int (*channel_post_analyze) (struct stream *s, struct filter *f,
struct channel *chn,
unsigned int an_bit);
int (*channel_end_analyze) (struct stream *s, struct filter *f,
struct channel *chn);
/*
* HTTP callbacks
*/
int (*http_headers) (struct stream *s, struct filter *f,
struct http_msg *msg);
int (*http_data) (struct stream *s, struct filter *f,
struct http_msg *msg);
int (*http_chunk_trailers)(struct stream *s, struct filter *f,
struct http_msg *msg);
int (*http_end) (struct stream *s, struct filter *f,
struct http_msg *msg);
int (*http_forward_data) (struct stream *s, struct filter *f,
struct http_msg *msg,
unsigned int len);
void (*http_reset) (struct stream *s, struct filter *f,
struct http_msg *msg);
void (*http_reply) (struct stream *s, struct filter *f,
short status,
const struct buffer *msg);
/*
* TCP callbacks
*/
int (*tcp_data) (struct stream *s, struct filter *f,
struct channel *chn);
int (*tcp_forward_data)(struct stream *s, struct filter *f,
struct channel *chn,
unsigned int len);
};
The following sections explain when these callbacks are called and what they
should do.
Filters are declared in proxy sections. So each proxy has an ordered list of
filters, possibly empty if no filter is used. When the configuration of a proxy
is parsed, each filter line represents an entry in this list. In the structure
'proxy', the filter configurations are stored in the field 'filter_configs',
each one of type 'struct flt_conf *':
/*
* Structure representing the filter configuration, attached to a proxy and
* accessible from a filter when instantiated in a stream
*/
struct flt_conf {
const char *id; /* The filter id */
struct flt_ops *ops; /* The filter callbacks */
void *conf; /* The filter configuration */
struct list list; /* Next filter for the same proxy */
};
* 'flt_conf.id' is an identifier, defined by the filter. It can be
NULL. HAProxy does not use this field. Filters can use it in log messages or
as a unique identifier to check multiple declarations. It is the filter's
responsibility to free it, if necessary.
* 'flt_conf.conf' is opaque. It is the internal configuration of a filter,
generally allocated and filled by its parsing function (See § 3.2). It is
the filter's responsibility to free it.
* 'flt_conf.ops' references the callbacks implemented by the filter. This
field must be set during the parsing phase (See § 3.2) and can be refined
during the initialization phase (See § 3.3). If it is dynamically allocated,
it is the filter's responsibility to free it.
The filter configuration is global and shared by all its instances. A filter
instance is created in the context of a stream and attached to this stream. In
the structure 'stream', the field 'strm_flt' is the state of all filter
instances attached to a stream:
/*
* Structure representing the "global" state of filters attached to a
* stream.
*/
struct strm_flt {
struct list filters; /* List of filters attached to a stream */
struct filter *current[2]; /* From which filter resume processing, for a specific channel.
* This is used for resumable callbacks only,
* If NULL, we start from the first filter.
* 0: request channel, 1: response channel */
unsigned short flags; /* STRM_FL_* */
unsigned char nb_req_data_filters; /* Number of data filters registered on the request channel */
unsigned char nb_rsp_data_filters; /* Number of data filters registered on the response channel */
};
Filter instances attached to a stream are stored in the field
'strm_flt.filters', each instance is of type 'struct filter *':
/*
* Structure representing a filter instance attached to a stream
*
* 2D-Array fields are used to store info per channel. The first index
* stands for the request channel, and the second one for the response
channel. Especially, <next> and <fwd> are offsets representing the amount of
data that the filter has, respectively, parsed and forwarded on a
* channel. Filters can access these values using FLT_NXT and FLT_FWD
* macros.
*/
struct filter {
struct flt_conf *config; /* the filter's configuration */
void *ctx; /* The filter context (opaque) */
unsigned short flags; /* FLT_FL_* */
unsigned int next[2]; /* Offset, relative to buf->p, to the next
* byte to parse for a specific channel
* 0: request channel, 1: response channel */
unsigned int fwd[2]; /* Offset, relative to buf->p, to the next
* byte to forward for a specific channel
* 0: request channel, 1: response channel */
unsigned int pre_analyzers; /* bit field indicating analyzers to
* pre-process */
unsigned int post_analyzers; /* bit field indicating analyzers to
* post-process */
struct list list; /* Next filter for the same proxy/stream */
};
* 'filter.config' is the filter configuration previously described. All
instances of a filter share it.
* 'filter.ctx' is an opaque context. It is managed by the filter, so it is its
responsibility to free it.
* 'filter.pre_analyzers' and 'filter.post_analyzers' will be described later
(See § 3.5).
* 'filter.next' and 'filter.fwd' will be described later (See § 3.6).
3.2. DEFINING THE FILTER NAME AND ITS CONFIGURATION
---------------------------------------------------
When you write a filter, the first thing to do is to add it to the list of
supported filters. To do so, you must register its name as a valid keyword on
the filter line:
/* Declare the filter parser for "my_filter" keyword */
static struct flt_kw_list flt_kws = { "MY_FILTER_SCOPE", { }, {
{ "my_filter", parse_my_filter_cfg, NULL /* private data */ },
{ NULL, NULL, NULL },
}
};
INITCALL1(STG_REGISTER, flt_register_keywords, &flt_kws);
Then you must define the internal configuration your filter will use. For
example:
struct my_filter_config {
struct proxy *proxy;
char *name;
/* ... */
};
You must also list all callbacks implemented by your filter. Here, we use a
global variable:
struct flt_ops my_filter_ops = {
.init = my_filter_init,
.deinit = my_filter_deinit,
.check = my_filter_config_check,
/* ... */
};
Finally, you must define the function to parse your filter configuration, here
'parse_my_filter_cfg'. This function must parse all remaining keywords on the
filter line:
/* Return -1 on error, else 0 */
static int
parse_my_filter_cfg(char **args, int *cur_arg, struct proxy *px,
struct flt_conf *flt_conf, char **err, void *private)
{
struct my_filter_config *my_conf;
int pos = *cur_arg;
/* Allocate the internal configuration used by the filter */
my_conf = calloc(1, sizeof(*my_conf));
if (!my_conf) {
memprintf(err, "%s: out of memory", args[*cur_arg]);
return -1;
}
my_conf->proxy = px;
/* ... */
/* Parse all keywords supported by the filter and fill the internal
* configuration */
pos++; /* Skip the filter name */
while (*args[pos]) {
if (!strcmp(args[pos], "name")) {
if (!*args[pos + 1]) {
memprintf(err, "'%s' : '%s' option without value",
args[*cur_arg], args[pos]);
goto error;
}
my_conf->name = strdup(args[pos + 1]);
if (!my_conf->name) {
memprintf(err, "%s: out of memory", args[*cur_arg]);
goto error;
}
pos += 2;
}
/* ... parse other keywords ... */
}
*cur_arg = pos;
/* Set callbacks supported by the filter */
flt_conf->ops = &my_filter_ops;
/* Last, save the internal configuration */
flt_conf->conf = my_conf;
return 0;
error:
if (my_conf->name)
free(my_conf->name);
free(my_conf);
return -1;
}
WARNING: In your parsing function, you must define 'flt_conf->ops'. You must
also parse all arguments on the filter line. This is mandatory.
In the previous example, we expect to read a filter line as follows:
filter my_filter name MY_NAME ...
Optionally, by implementing the 'flt_ops.check' callback, you can add a step to
check the internal configuration of your filter after the parsing phase, when
the HAProxy configuration is fully defined. For example:
/* Check configuration of a trace filter for a specified proxy.
* Return 1 on error, else 0. */
static int
my_filter_config_check(struct proxy *px, struct flt_conf *my_conf)
{
if (px->mode != PR_MODE_HTTP) {
Alert("The filter 'my_filter' cannot be used in non-HTTP mode.\n");
return 1;
}
/* ... */
return 0;
}
3.3. MANAGING THE FILTER LIFECYCLE
----------------------------------
Once the configuration is parsed and checked, filters are ready to be used.
There are two main callbacks to manage the filter lifecycle:
* 'flt_ops.init': It initializes the filter for a proxy. You may define this
callback if you need to complete your filter configuration.
* 'flt_ops.deinit': It cleans up what the parsing function and the init
callback have done. This callback is useful to release
memory allocated for the filter configuration.
Here is an example:
/* Initialize the filter. Returns -1 on error, else 0. */
static int
my_filter_init(struct proxy *px, struct flt_conf *fconf)
{
struct my_filter_config *my_conf = fconf->conf;
/* ... */
return 0;
}
/* Free resources allocated by the trace filter. */
static void
my_filter_deinit(struct proxy *px, struct flt_conf *fconf)
{
struct my_filter_config *my_conf = fconf->conf;
if (my_conf) {
free(my_conf->name);
/* ... */
free(my_conf);
}
fconf->conf = NULL;
}
3.3.1 DEALING WITH THREADS
--------------------------
When HAProxy is compiled with thread support and started with more than one
thread (global.nbthread > 1), it is possible to manage the filter per thread
with the following callbacks:
* 'flt_ops.init_per_thread': It initializes the filter for each thread. It
works the same way as 'flt_ops.init' but in the context of a thread. This
callback is called after the thread creation.
* 'flt_ops.deinit_per_thread': It cleans up what the init_per_thread callback
has done. It is called in the context of a thread, before exiting it.
It is the filter's responsibility to deal with concurrency. The check, init and
deinit callbacks are called on the main thread. All others are called on a
"worker" thread (not always the same one). It is also the filter's
responsibility to know if HAProxy is started with more than one thread. If it
is started with one thread (or compiled without thread support), these
callbacks will be silently ignored (in this case, global.nbthread will always
be equal to one).
3.4. HANDLING THE STREAMS ACTIVITY
-----------------------------------
You may be interested in handling stream activity. For now, there are three
callbacks that you should define to do so:
* 'flt_ops.stream_start': It is called when a stream is started. This callback
can fail by returning a negative value. This will be considered as a critical
error by HAProxy, which disables the listener for a short time.
* 'flt_ops.stream_set_backend': It is called when a backend is set for a
stream. This callback will be called for all filters attached to a stream
(frontend and backend). Note this callback is not called if the frontend and
the backend are the same.
* 'flt_ops.stream_stop': It is called when a stream is stopped. This callback
always succeeds; anyway, it is too late to return an error.
For example:
/* Called when a stream is created. Returns -1 on error, else 0. */
static int
my_filter_stream_start(struct stream *s, struct filter *filter)
{
struct my_filter_config *my_conf = FLT_CONF(filter);
/* ... */
return 0;
}
/* Called when a backend is set for a stream */
static int
my_filter_stream_set_backend(struct stream *s, struct filter *filter,
struct proxy *be)
{
struct my_filter_config *my_conf = FLT_CONF(filter);
/* ... */
return 0;
}
/* Called when a stream is destroyed */
static void
my_filter_stream_stop(struct stream *s, struct filter *filter)
{
struct my_filter_config *my_conf = FLT_CONF(filter);
/* ... */
}
WARNING: Handling the streams creation and destruction is only possible for
filters defined on proxies with the frontend capability.
In addition, it is possible to handle creation and destruction of filter
instances using following callbacks:
* 'flt_ops.attach': It is called after a filter instance creation, when it is
attached to a stream. This happens when the stream is
started for filters defined on the stream's frontend and
when the backend is set for filters declared on the
stream's backend. It is possible to ignore the filter, if
needed, by returning 0. This could be useful to have
conditional filtering.
* 'flt_ops.detach': It is called when a filter instance is detached from a
stream, before its destruction. This happens when the stream is stopped for
filters defined on the stream's frontend and when the analysis ends for
filters defined on the stream's backend.
For example:
/* Called when a filter instance is created and attached to a stream */
static int
my_filter_attach(struct stream *s, struct filter *filter)
{
struct my_filter_config *my_conf = FLT_CONF(filter);
if (/* ... */)
return 0; /* Ignore the filter here */
return 1;
}
/* Called when a filter instance is detached from a stream, just before its
 * destruction */
static void
my_filter_detach(struct stream *s, struct filter *filter)
{
struct my_filter_config *my_conf = FLT_CONF(filter);
/* ... */
}
Finally, you may be interested in being notified when the stream is woken up
because of an expired timer. This gives you a chance to check your own
timeouts, if any. To do so, you can use the following callback:
* 'flt_ops.check_timeouts': It is called when a stream is woken up because
of an expired timer.
For example:
/* Called when a stream is woken up because of an expired timer */
static void
my_filter_check_timeouts(struct stream *s, struct filter *filter)
{
struct my_filter_config *my_conf = FLT_CONF(filter);
/* ... */
}
3.5. ANALYZING THE CHANNELS ACTIVITY
------------------------------------
The main purpose of filters is to take part in the channel analysis. To do so,
there are two callbacks, 'flt_ops.channel_pre_analyze' and
'flt_ops.channel_post_analyze', called respectively before and after each
analyzer attached to a channel, except the analyzers responsible for the data
parsing/forwarding (TCP or HTTP data). Concretely, on the request channel,
these callbacks could be called before the following analyzers:
* tcp_inspect_request (AN_REQ_INSPECT_FE and AN_REQ_INSPECT_BE)
* http_wait_for_request (AN_REQ_WAIT_HTTP)
* http_wait_for_request_body (AN_REQ_HTTP_BODY)
* http_process_req_common (AN_REQ_HTTP_PROCESS_FE)
* process_switching_rules (AN_REQ_SWITCHING_RULES)
* http_process_req_common (AN_REQ_HTTP_PROCESS_BE)
* http_process_tarpit (AN_REQ_HTTP_TARPIT)
* process_server_rules (AN_REQ_SRV_RULES)
* http_process_request (AN_REQ_HTTP_INNER)
* tcp_persist_rdp_cookie (AN_REQ_PRST_RDP_COOKIE)
* process_sticking_rules (AN_REQ_STICKING_RULES)
And on the response channel:
* tcp_inspect_response (AN_RES_INSPECT)
* http_wait_for_response (AN_RES_WAIT_HTTP)
* process_store_rules (AN_RES_STORE_RULES)
* http_process_res_common (AN_RES_HTTP_PROCESS_BE)
Unlike the other callbacks seen previously, 'flt_ops.channel_pre_analyze' can
interrupt the stream processing. So a filter can decide to not execute the
analyzer that follows and wait for the next iteration. If there is more than
one filter, the following ones are skipped. On the next iteration, the
filtering resumes where it was stopped, i.e. on the filter that previously
stopped the processing. So it is possible for a filter to stop the stream
processing on a specific analyzer for a while before continuing. Moreover, this
callback can be called many times for the same analyzer, until it finishes its
processing. For example:
/* Called before a processing happens on a given channel.
* Returns a negative value if an error occurs, 0 if it needs to wait,
* any other value otherwise. */
static int
my_filter_chn_pre_analyze(struct stream *s, struct filter *filter,
struct channel *chn, unsigned an_bit)
{
struct my_filter_config *my_conf = FLT_CONF(filter);
switch (an_bit) {
case AN_REQ_WAIT_HTTP:
if (/* wait that a condition is verified before continuing */)
return 0;
break;
/* ... */
}
return 1;
}
* 'an_bit' is the analyzer id. All analyzers are listed in
'include/types/channels.h'.
* 'chn' is the channel on which the analyzing is done. You can know if it is
the request or the response channel by testing if CF_ISRESP flag is set:
((chn->flags & CF_ISRESP) == CF_ISRESP)
In the previous example, the stream processing is blocked before receipt of the
HTTP request until a condition is verified.
'flt_ops.channel_post_analyze', for its part, is not resumable. It returns a
negative value if an error occurs, any other value otherwise. It is called when
a filterable analyzer finishes its processing. So it is called only once for
the same analyzer. For example:
/* Called after a processing happens on a given channel.
* Returns a negative value if an error occurs, any other
* value otherwise. */
static int
my_filter_chn_post_analyze(struct stream *s, struct filter *filter,
struct channel *chn, unsigned an_bit)
{
struct my_filter_config *my_conf = FLT_CONF(filter);
struct http_msg *msg;
switch (an_bit) {
case AN_REQ_WAIT_HTTP:
if (/* A test on received headers before any other treatment */) {
msg = ((chn->flags & CF_ISRESP) ? &s->txn->rsp : &s->txn->req);
s->txn->status = 400;
msg->msg_state = HTTP_MSG_ERROR;
http_reply_and_close(s, s->txn->status,
http_error_message(s, HTTP_ERR_400));
return -1; /* This is an error ! */
}
break;
/* ... */
}
return 1;
}
The pre and post analyzer callbacks of a filter are not automatically called.
You must register them explicitly on analyzers, by updating the
'filter.pre_analyzers' and 'filter.post_analyzers' bit fields. All analyzer
bits are listed in 'include/types/channels.h'. Here is an example:
static int
my_filter_stream_start(struct stream *s, struct filter *filter)
{
/* ... */
/* Register the pre analyzer callback on all request and response
 * analyzers */
filter->pre_analyzers |= (AN_REQ_ALL | AN_RES_ALL);
/* Register the post analyzer callback only on AN_REQ_WAIT_HTTP and
 * AN_RES_WAIT_HTTP analyzers */
filter->post_analyzers |= (AN_REQ_WAIT_HTTP | AN_RES_WAIT_HTTP);
/* ... */
return 0;
}
To surround the activity of a filter during channel analysis, two new analyzers
have been added:
* 'flt_start_analyze' (AN_REQ/RES_FLT_START_FE/AN_REQ_RES_FLT_START_BE): For
a specific filter, this analyzer is called before any call to the other
analysis callbacks. From the filter point of view, it calls the
'flt_ops.channel_start_analyze' callback.
* 'flt_end_analyze' (AN_REQ/RES_FLT_END): For a specific filter, this analyzer
is called when all other analyzers have finished their processing. From the
filter point of view, it calls the 'flt_ops.channel_end_analyze' callback.
For TCP streams, these analyzers are called only once. For HTTP streams, if the
client connection is kept alive, this happens at each request/response round
trip. The 'flt_ops.channel_start_analyze' and 'flt_ops.channel_end_analyze'
callbacks can interrupt the stream processing, just like
'flt_ops.channel_pre_analyze'. Here is an example:
/* Called when analyze starts for a given channel
* Returns a negative value if an error occurs, 0 if it needs to wait,
* any other value otherwise. */
static int
my_filter_chn_start_analyze(struct stream *s, struct filter *filter,
struct channel *chn)
{
struct my_filter_config *my_conf = FLT_CONF(filter);
/* ... TODO ... */
return 1;
}
/* Called when analyze ends for a given channel
* Returns a negative value if an error occurs, 0 if it needs to wait,
* any other value otherwise. */
static int
my_filter_chn_end_analyze(struct stream *s, struct filter *filter,
struct channel *chn)
{
struct my_filter_config *my_conf = FLT_CONF(filter);
/* ... TODO ... */
return 1;
}
The workflow on channels can be summarized as follows:
FE: Called for filters defined on the stream's frontend
BE: Called for filters defined on the stream's backend
+------->---------+
| | |
+----------------------+ | +----------------------+
| flt_ops.attach (FE) | | | flt_ops.attach (BE) |
+----------------------+ | +----------------------+
| | |
V | V
+--------------------------+ | +------------------------------------+
| flt_ops.stream_start (FE)| | | flt_ops.stream_set_backend (FE+BE) |
+--------------------------+ | +------------------------------------+
| | |
... | ...
| | |
+-<-- [1] ^ |
| --+ | | --+
+------<----------+ | | +--------<--------+ |
| | | | | | |
V | | | V | |
+-------------------------------+ | | | +-------------------------------+ | |
| flt_start_analyze (FE) +-+ | | | flt_start_analyze (BE) +-+ |
|(flt_ops.channel_start_analyze)| | F | |(flt_ops.channel_start_analyze)| |
+---------------+---------------+ | R | +-------------------------------+ |
| | O | | |
+------<---------+ | N ^ +--------<-------+ | B
| | | T | | | | A
+---------------|------------+ | | E | +---------------|------------+ | | C
|+--------------V-------------+ | | N | |+--------------V-------------+ | | K
||+----------------------------+ | | D | ||+----------------------------+ | | E
|||flt_ops.channel_pre_analyze | | | | |||flt_ops.channel_pre_analyze | | | N
||| V | | | | ||| V | | | D
||| analyzer (FE) +-+ | | ||| analyzer (FE+BE) +-+ |
+|| V | | | +|| V | |
+|flt_ops.channel_post_analyze| | | +|flt_ops.channel_post_analyze| |
+----------------------------+ | | +----------------------------+ |
| --+ | | |
+------------>------------+ ... |
| |
[ data filtering (see below) ] |
| |
... |
| |
+--------<--------+ |
| | |
V | |
+-------------------------------+ | |
| flt_end_analyze (FE+BE) +-+ |
| (flt_ops.channel_end_analyze) | |
+---------------+---------------+ |
| --+
V
+----------------------+
| flt_ops.detach (BE) |
+----------------------+
|
If HTTP stream, go back to [1] --<--+
|
...
|
V
+--------------------------+
| flt_ops.stream_stop (FE) |
+--------------------------+
|
V
+----------------------+
| flt_ops.detach (FE) |
+----------------------+
|
V
Zooming in on an analyzer box, we have:
...
|
V
|
+-----------<-----------+
| |
+-----------------+--------------------+ |
| | | |
| +--------<---------+ | |
| | | | |
| V | | |
| flt_ops.channel_pre_analyze ->-+ | ^
| | | |
| | | |
| V | |
| analyzer --------->-----+--+
| | |
| | |
| V |
| flt_ops.channel_post_analyze |
| | |
| | |
+-----------------+--------------------+
|
V
...
3.6. FILTERING THE DATA EXCHANGED
-----------------------------------
WARNING: To fully understand this part, you must be aware of how the buffers
work in HAProxy. In particular, you must be comfortable with the idea
of circular buffers. See doc/internals/buffer-operations.txt and
doc/internals/buffer-ops.fig for details.
doc/internals/body-parsing.txt could also be useful.
An extended feature of the filters is data filtering. By default a filter does
not look into the data exchanged between the client and the server, because it
is expensive. Indeed, instead of forwarding data without any processing, each
byte needs to be buffered.
So, to enable data filtering on a channel, at any time, in one of the previous
callbacks, you should call the 'register_data_filter' function. Conversely, to
disable it, you should call the 'unregister_data_filter' function. For example:
static int
my_filter_http_headers(struct stream *s, struct filter *filter,
struct http_msg *msg)
{
struct my_filter_config *my_conf = FLT_CONF(filter);
/* 'msg->chn' must be the request channel */
if (!(msg->chn->flags & CF_ISRESP)) {
struct http_txn *txn = s->txn;
struct buffer *req = msg->chn->buf;
struct hdr_ctx ctx;
/* Enable the data filtering for the request if 'X-Filter' header
 * is set to 'true'. */
if (http_find_header2("X-Filter", 8, req->p, &txn->hdr_idx, &ctx) &&
ctx.vlen == 4 && memcmp(ctx.line + ctx.val, "true", 4) == 0)
register_data_filter(s, msg->chn, filter);
}
return 1;
}
Here, the data filtering is enabled if the HTTP header 'X-Filter' is found and
set to 'true'.
If several filters are declared, the evaluation order remains the same,
regardless of the order of the registrations to the data filtering.
Depending on the stream type, TCP or HTTP, the way to handle data filtering will
be slightly different. Among other things, for HTTP streams, there are more
callbacks to help you to fully handle all steps of an HTTP transaction. But the
basis is the same. The data filtering is done in 2 stages:
* The data parsing: At this stage, filters will analyze input data on a
channel. Once a filter has parsed some data, it cannot parse it again. At
any time, a filter can choose to not parse all available data. So, it is
possible for a filter to retain data for a while. Because filters are
chained, a filter cannot parse more data than its predecessors. Thus only
data considered as parsed by the last filter will be available to the next
stage, the data forwarding.
* The data forwarding: At this stage, filters will decide how much data
HAProxy can forward among those considered as parsed at the previous
stage. Once a filter has marked data as forwardable, it cannot analyze it
anymore. At any time, a filter can choose to not forward all parsed
data. So, it is possible for a filter to retain data for a while. Because
filters are chained, a filter cannot forward more data than its
predecessors. Thus only data marked as forwardable by the last filter will
be actually forwarded by HAProxy.
Internally, filters own two offsets, relative to 'buf->p', representing the
number of bytes already parsed in the available input data and the number of
bytes considered as forwarded. We will call these offsets, respectively, 'nxt'
and 'fwd'. The following macros reference these offsets:
* FLT_NXT(flt, chn), flt_req_nxt(flt) and flt_rsp_nxt(flt)
* FLT_FWD(flt, chn), flt_req_fwd(flt) and flt_rsp_fwd(flt)
where 'flt' is the 'struct filter' passed as argument in all callbacks and
'chn' is the considered channel.
Using these offsets, the following operations on buffers are possible:
chn->buf->p + FLT_NXT(flt, chn) // the pointer on parsable data for
// the filter 'flt' on the channel 'chn'.
// Everything between chn->buf->p and 'nxt' offset was already parsed
// by the filter.
chn->buf->i - FLT_NXT(flt, chn) // the number of bytes of parsable data for
// the filter 'flt' on the channel 'chn'.
chn->buf->p + FLT_FWD(flt, chn) // the pointer on forwardable data for
// the filter 'flt' on the channel 'chn'.
// Everything between chn->buf->p and 'fwd' offset was already forwarded
// by the filter.
Note that at any time, for a filter, the 'nxt' offset is always greater than or
equal to the 'fwd' offset.
TODO: Add a schema with buffer states when there are 2 filters analyzing data.
3.6.1 FILTERING DATA ON TCP STREAMS
-----------------------------------
TCP data filtering is the easy case, because HAProxy does not parse this data.
So there are only two callbacks that you need to consider:
* 'flt_ops.tcp_data': This callback is called when unparsed data are
available. If not defined, all available data will be considered as parsed
for the filter.
* 'flt_ops.tcp_forward_data': This callback is called when parsed data are
available. If not defined, all parsed data will be considered as forwarded
for the filter.
Here is an example:
/* Returns a negative value if an error occurs, else the number of
* consumed bytes. */
static int
my_filter_tcp_data(struct stream *s, struct filter *filter,
struct channel *chn)
{
struct my_filter_config *my_conf = FLT_CONF(filter);
int avail = chn->buf->i - FLT_NXT(filter, chn);
int ret = avail;
/* Do not parse more than 'my_conf->max_parse' bytes at a time */
if (my_conf->max_parse != 0 && ret > my_conf->max_parse)
ret = my_conf->max_parse;
/* if available data are not completely parsed, wake up the stream to
* be sure to not freeze it. */
if (ret != avail)
task_wakeup(s->task, TASK_WOKEN_MSG);
return ret;
}
/* Returns a negative value if an error occurs, else the number of
 * forwarded bytes. */
static int
my_filter_tcp_forward_data(struct stream *s, struct filter *filter,
struct channel *chn, unsigned int len)
{
struct my_filter_config *my_conf = FLT_CONF(filter);
int ret = len;
/* Do not forward more than 'my_conf->max_forward' bytes at a time */
if (my_conf->max_forward != 0 && ret > my_conf->max_forward)
ret = my_conf->max_forward;
/* if parsed data are not completely forwarded, wake up the stream to
* be sure to not freeze it. */
if (ret != len)
task_wakeup(s->task, TASK_WOKEN_MSG);
return ret;
}
3.6.2 FILTERING DATA ON HTTP STREAMS
------------------------------------
HTTP data filtering is a bit tricky because HAProxy will parse the body
structure, especially chunked bodies. So, basically, there are HTTP
counterparts to the previous callbacks:
* 'flt_ops.http_data': This callback is called when unparsed data are
available. If not defined, all available data will be considered as parsed
for the filter.
* 'flt_ops.http_forward_data': This callback is called when parsed data are
available. If not defined, all parsed data will be considered as forwarded
for the filter.
But the prototype of these callbacks is slightly different. Instead of having
the channel as parameter, we have the HTTP message (struct http_msg). You need
to be careful when you use the 'http_msg.chunk_len' size. This value is the
number of bytes remaining to parse in the HTTP body (or in the current chunk
for chunked messages). The HTTP parser of HAProxy uses it to know the number of
bytes that it can consume:
  /* Available input data in the current chunk from the HAProxy point of view.
   * msg->next bytes were already parsed. Without data filtering, HAProxy
   * will consume all of it. */
  Bytes = MIN(msg->chunk_len, chn->buf->i - msg->next);
But in your filter, you need to recompute it:
  /* Available input data in the current chunk from the filter point of view.
   * 'nxt' bytes were already parsed. */
  Bytes = MIN(msg->chunk_len + msg->next, chn->buf->i) - FLT_NXT(flt, chn);
In addition to these callbacks, there are three others:
  * 'flt_ops.http_headers': This callback is called just before the HTTP body
    parsing, after any processing on the request/response HTTP headers. When
    defined, this callback is always called for HTTP streams (i.e. without
    requiring a registration on data filtering).
  * 'flt_ops.http_end': This callback is called when the whole HTTP
    request/response is processed. It can interrupt the stream processing, so
    it could be used to synchronize the HTTP request with the HTTP response,
    for example:

      /* Returns a negative value if an error occurs, 0 if it needs to wait,
       * any other value otherwise. */
      static int
      my_filter_http_end(struct stream *s, struct filter *filter,
                         struct http_msg *msg)
      {
          struct my_filter_ctx *my_ctx = filter->ctx;

          if (!(msg->chn->flags & CF_ISRESP)) /* The request */
              my_ctx->end_of_req = 1;
          else                                /* The response */
              my_ctx->end_of_rsp = 1;

          /* Both the request and the response are finished */
          if (my_ctx->end_of_req == 1 && my_ctx->end_of_rsp == 1)
              return 1;

          /* Wait */
          return 0;
      }
  * 'flt_ops.http_chunk_trailers': This callback is called for chunked HTTP
    messages only, once all chunks were parsed. HTTP trailers may be parsed
    in several passes; this callback is called once per pass. The number of
    bytes parsed by HAProxy at each iteration is stored in 'msg->sol'.
Then, to finish, there are two informational callbacks:

  * 'flt_ops.http_reset': This callback is called when an HTTP message is
    reset. This happens either when a '100-continue' response is received, or
    when we retry to send the request to the server after it failed. It could
    be useful to reset the filter context before receiving the true response.
    You can know why the callback is called by checking 's->txn->status'. If
    it is 10X, we are called because of a '100-continue'; otherwise, it is an
    L7 retry.

  * 'flt_ops.http_reply': This callback is called when, at any time, HAProxy
    decides to stop the processing of an HTTP message and to send an internal
    response to the client. This mainly happens when an error or a redirect
    occurs.
3.6.3 REWRITING DATA
--------------------
The last part, and the trickiest one about data filtering, is data
rewriting. For now, the filter API does not offer many functions to handle
it. There are only functions to notify HAProxy that the data size has
changed, to let it update the internal state of filters. It is your
responsibility to update the data itself, i.e. the buffer offsets. For an
HTTP message, you must also update the 'msg->next' and 'msg->chunk_len'
values accordingly:
  * 'flt_change_next_size': This function must be called when a filter alters
    incoming data. It updates the 'nxt' offset value of all its predecessors.
    Not calling this function when a filter changes the size of incoming data
    leads to an undefined behavior:

      unsigned int avail = MIN(msg->chunk_len + msg->next, chn->buf->i) -
                           flt_rsp_nxt(filter);

      if (avail > 10 && /* ...Some condition... */) {
          /* Move the buffer forward to have buf->p pointing on unparsed
           * data */
          c_adv(msg->chn, flt_rsp_nxt(filter));

          /* Skip the first 10 bytes. To simplify this example, we consider
           * a non-wrapping buffer */
          memmove(buf->p + 10, buf->p, avail - 10);

          /* Restore buf->p value */
          c_rew(msg->chn, flt_rsp_nxt(filter));

          /* Now update other filters */
          flt_change_next_size(filter, msg->chn, -10);

          /* Update the buffer state */
          buf->i -= 10;

          /* And update the HTTP message state */
          msg->chunk_len -= 10;
          return (avail - 10);
      }
      else
          return 0; /* Wait for more data */
  * 'flt_change_forward_size': This function must be called when a filter
    alters parsed data. It updates the offset values ('nxt' and 'fwd') of all
    filters. Not calling this function when a filter changes the size of
    parsed data leads to an undefined behavior:

      /* len is the number of bytes of forwardable data */
      if (len > 10 && /* ...Some condition... */) {
          /* Move the buffer forward to have buf->p pointing on
           * non-forwarded data */
          c_adv(msg->chn, flt_rsp_fwd(filter));

          /* Skip the first 10 bytes. To simplify this example, we consider
           * a non-wrapping buffer */
          memmove(buf->p + 10, buf->p, len - 10);

          /* Restore buf->p value */
          c_rew(msg->chn, flt_rsp_fwd(filter));

          /* Now update other filters */
          flt_change_forward_size(filter, msg->chn, -10);

          /* Update the buffer state */
          buf->i -= 10;

          /* And update the HTTP message state */
          msg->next -= 10;
          return (len - 10);
      }
      else
          return 0; /* Wait for more data */
TODO: implement all the stuff needed to easily rewrite data. For HTTP
      messages, this requires a chunked message; otherwise the size of data
      cannot be changed.
4. FAQ
------
4.1. Detect multiple declarations of the same filter
----------------------------------------------------
TODO