/*
* include/haproxy/hlua-t.h
* Lua core types definitions
*
* Copyright (C) 2015-2016 Thierry Fournier <tfournier@arpalert.org>
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation, version 2.1
* exclusively.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef _HAPROXY_HLUA_T_H
#define _HAPROXY_HLUA_T_H
#ifdef USE_LUA
#include <lua.h>
#include <lauxlib.h>
#include <stdint.h>
#include <import/ebtree-t.h>
#include <haproxy/proxy-t.h>
#include <haproxy/regex-t.h>
#include <haproxy/server-t.h>
#include <haproxy/stick_table-t.h>
#include <haproxy/xref-t.h>
#include <haproxy/event_hdl-t.h>
#define CLASS_CORE "Core"
#define CLASS_TXN "TXN"
#define CLASS_FETCHES "Fetches"
#define CLASS_CONVERTERS "Converters"
#define CLASS_SOCKET "Socket"
#define CLASS_CHANNEL "Channel"
#define CLASS_HTTP "HTTP"
#define CLASS_HTTP_MSG "HTTPMessage"
#define CLASS_HTTPCLIENT "HTTPClient"
#define CLASS_MAP "Map"
#define CLASS_APPLET_TCP "AppletTCP"
#define CLASS_APPLET_HTTP "AppletHTTP"
#define CLASS_PROXY "Proxy"
#define CLASS_SERVER "Server"
#define CLASS_LISTENER "Listener"
#define CLASS_EVENT_SUB "EventSub"
#define CLASS_REGEX "Regex"
#define CLASS_STKTABLE "StickTable"
#define CLASS_CERTCACHE "CertCache"
#define CLASS_PROXY_LIST "ProxyList"
#define CLASS_SERVER_LIST "ServerList"
struct stream;
#define HLUA_RUN 0x00000001
#define HLUA_CTRLYIELD 0x00000002
#define HLUA_WAKERESWR 0x00000004
#define HLUA_WAKEREQWR 0x00000008
#define HLUA_EXIT 0x00000010
#define HLUA_NOYIELD 0x00000020
#define HLUA_F_AS_STRING 0x01
#define HLUA_F_MAY_USE_HTTP 0x02
/* HLUA TXN flags */
#define HLUA_TXN_NOTERM 0x00000001
/* 0x00000002 .. 0x00000008 unused */
/* The execution context (enum), bit values from 0x00000010 to
 * 0x00000030. These flags are mutually exclusive. Only one must be set at a
 * time.
 */
#define HLUA_TXN_SMP_NONE 0x00000000 /* No specific execution context */
#define HLUA_TXN_SMP_CTX 0x00000010 /* Executed from a sample fetch context */
#define HLUA_TXN_ACT_CTX 0x00000020 /* Executed from an action context */
#define HLUA_TXN_FLT_CTX 0x00000030 /* Executed from a filter context */
#define HLUA_TXN_CTX_MASK 0x00000030 /* Mask to get the execution context */
#define HLUA_CONCAT_BLOCSZ 2048
enum hlua_exec {
HLUA_E_OK = 0,
HLUA_E_AGAIN, /* LUA yield; the stack execution must be resumed later,
when the associated task is woken. */
HLUA_E_ETMOUT, /* Execution timeout */
HLUA_E_NOMEM, /* Out of memory error */
HLUA_E_YIELD, /* LUA code tried to yield, but this is not allowed */
HLUA_E_ERRMSG, /* LUA stack execution failed with a string error message
on the top of the stack. */
HLUA_E_ERR, /* LUA stack execution failed without an error message. */
};
struct hlua_timer {
uint32_t start; /* cpu time in ms when the timer was started */
uint32_t burst; /* execution time for the current call in ms */
uint32_t cumulative; /* cumulative execution time for the coroutine in ms */
uint32_t max; /* max (cumulative) execution time for the coroutine in ms */
};
struct hlua {
lua_State *T; /* The LUA stack. */
int state_id; /* contains the lua state id. 0 is common state, 1 to n are per-thread states.*/
int Tref; /* The reference of the stack in coroutine case.
-1 for the main lua stack. */
int Mref; /* The reference of the memory context in coroutine case.
-1 if the memory context is not used. */
int nargs; /* The number of arguments in the stack at the start of execution. */
unsigned int flags; /* The current execution flags. */
int wake_time; /* The Lua code wants to be woken up at this time, or before. (ticks) */
struct hlua_timer timer; /* lua multipurpose timer */
struct task *task; /* The task associated with the lua stack execution.
We must wake this task to continue the task execution */
struct list com; /* The list head of the signals attached to this task. */
struct mt_list hc_list; /* list of httpclient associated to this lua task */
struct ebpt_node node;
int gc_count; /* number of items which need a GC */
};
/* This is a part of the list containing references to functions
 * called at initialisation time.
*/
struct hlua_init_function {
struct list l;
int function_ref;
};
/* This struct contains the lua data used to bind
* Lua function on HAProxy hook like sample-fetches
* or actions.
*/
struct hlua_function {
struct list l;
char *name;
int function_ref[MAX_THREADS + 1];
int nargs;
};
/* This struct is used with the structs:
* - http_req_rule
* - http_res_rule
* - tcp_rule
* It contains the lua execution configuration.
*/
struct hlua_rule {
struct hlua_function *fcn;
char **args;
};
/* This struct contains the pointers provided to most
 * internal HAProxy calls during the processing of
 * rules, converters and sample-fetches. This struct is
* associated with the lua object called "TXN".
*/
struct hlua_txn {
struct stream *s;
struct proxy *p;
int dir; /* SMP_OPT_DIR_{REQ,RES} */
int flags;
};
/* This struct contains the applet context. */
struct hlua_appctx {
struct appctx *appctx;
luaL_Buffer b; /* buffer used to prepare strings. */
struct hlua_txn htxn;
};
/* This struct is used with sample fetches and sample converters. */
struct hlua_smp {
struct stream *s;
struct proxy *p;
unsigned int flags; /* LUA_F_OPT_* */
int dir; /* SMP_OPT_DIR_{REQ,RES} */
};
/* This struct contains data used with sleep functions. */
struct hlua_sleep {
struct task *task; /* task associated with sleep. */
struct list com; /* list of signal to wake at the end of sleep. */
unsigned int wakeup_ms; /* absolute wakeup time, in ms. */
};
/* This struct is used to create a coprocess doing TCP or
* SSL I/O. It uses a fake stream.
*/
struct hlua_socket {
struct xref xref; /* cross reference with the stream used for socket I/O. */
luaL_Buffer b; /* buffer used to prepare strings. */
unsigned long tid; /* Store the thread id which created the socket. */
};
struct hlua_concat {
int size;
int len;
};
/* This struct is used to store the httpclient */
struct hlua_httpclient {
struct httpclient *hc; /* ptr to the httpclient instance */
size_t sent; /* payload sent */
luaL_Buffer b; /* buffer used to prepare strings. */
struct mt_list by_hlua; /* linked in the current hlua task */
};
struct hlua_proxy_list {
char capabilities;
};
struct hlua_proxy_list_iterator_context {
struct proxy *next;
char capabilities;
};
struct hlua_server_list {
struct proxy *px;
};
struct hlua_server_list_iterator_context {
struct server *cur;
struct proxy *px;
};
#else /* USE_LUA */
/************************ For use when Lua is disabled ********************/
/* Empty struct for compilation compatibility */
struct hlua { };
struct hlua_socket { };
struct hlua_rule { };
#endif /* USE_LUA */
#endif /* _HAPROXY_HLUA_T_H */