/*
 * include/haproxy/hlua-t.h
 * Lua core types definitions
 *
 * Copyright (C) 2015-2016 Thierry Fournier <tfournier@arpalert.org>
 *
 * This library is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public
 * License as published by the Free Software Foundation, version 2.1
 * exclusively.
 *
 * This library is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
 * Lesser General Public License for more details.
 *
 * You should have received a copy of the GNU Lesser General Public
 * License along with this library; if not, write to the Free Software
 * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
 */

#ifndef _HAPROXY_HLUA_T_H
#define _HAPROXY_HLUA_T_H

#ifdef USE_LUA

#include <lua.h>
#include <lauxlib.h>
#include <stdint.h>

#include <import/ebtree-t.h>

#include <haproxy/proxy-t.h>
#include <haproxy/regex-t.h>
#include <haproxy/server-t.h>
#include <haproxy/stick_table-t.h>
#include <haproxy/xref-t.h>
#include <haproxy/event_hdl-t.h>

#define CLASS_CORE "Core"
#define CLASS_TXN "TXN"
#define CLASS_FETCHES "Fetches"
#define CLASS_CONVERTERS "Converters"
#define CLASS_SOCKET "Socket"
#define CLASS_CHANNEL "Channel"
#define CLASS_HTTP "HTTP"
#define CLASS_HTTP_MSG "HTTPMessage"
#define CLASS_HTTPCLIENT "HTTPClient"
#define CLASS_MAP "Map"
#define CLASS_APPLET_TCP "AppletTCP"
#define CLASS_APPLET_HTTP "AppletHTTP"
#define CLASS_PROXY "Proxy"
#define CLASS_SERVER "Server"
#define CLASS_LISTENER "Listener"
#define CLASS_EVENT_SUB "EventSub"
#define CLASS_REGEX "Regex"
#define CLASS_STKTABLE "StickTable"
#define CLASS_CERTCACHE "CertCache"
#define CLASS_PROXY_LIST "ProxyList"
#define CLASS_SERVER_LIST "ServerList"

struct stream;

#define HLUA_RUN 0x00000001
#define HLUA_CTRLYIELD 0x00000002
#define HLUA_WAKERESWR 0x00000004
#define HLUA_WAKEREQWR 0x00000008
#define HLUA_EXIT 0x00000010
#define HLUA_NOYIELD 0x00000020

#define HLUA_F_AS_STRING 0x01
#define HLUA_F_MAY_USE_HTTP 0x02

/* HLUA TXN flags */
#define HLUA_TXN_NOTERM 0x00000001
/* 0x00000002 .. 0x00000008 unused */

/* The execution context (enum), bit values from 0x00000010 to
 * 0x00000030. These flags are mutually exclusive. Only one may be set at a
 * time.
 */
#define HLUA_TXN_SMP_NONE 0x00000000 /* No specific execution context */
#define HLUA_TXN_SMP_CTX 0x00000010 /* Executed from a sample fetch context */
#define HLUA_TXN_ACT_CTX 0x00000020 /* Executed from an action context */
#define HLUA_TXN_FLT_CTX 0x00000030 /* Executed from a filter context */
#define HLUA_TXN_CTX_MASK 0x00000030 /* Mask to get the execution context */

#define HLUA_CONCAT_BLOCSZ 2048

enum hlua_exec {
	HLUA_E_OK = 0,
	HLUA_E_AGAIN,  /* LUA yield, must resume the stack execution later, when
	                  the associated task is woken. */
	HLUA_E_ETMOUT, /* Execution timeout */
	HLUA_E_NOMEM,  /* Out of memory error */
	HLUA_E_YIELD,  /* LUA code tried to yield, and this is not allowed */
	HLUA_E_ERRMSG, /* LUA stack execution failed with a string error message
	                  on the top of the stack. */
	HLUA_E_ERR,    /* LUA stack execution failed without error message. */
};

struct hlua_timer {
	uint32_t start;      /* cpu time in ms when the timer was started */
	uint32_t burst;      /* execution time for the current call in ms */
	uint32_t cumulative; /* cumulative execution time for the coroutine in ms */
	uint32_t max;        /* max (cumulative) execution time for the coroutine in ms */
};

struct hlua {
	lua_State *T; /* The LUA stack. */
	int state_id; /* contains the lua state id. 0 is the common state, 1 to n are per-thread states. */
	int Tref; /* The reference of the stack in coroutine case.
	             -1 for the main lua stack. */
	int Mref; /* The reference of the memory context in coroutine case.
	             -1 if the memory context is not used. */
	int nargs; /* The number of arguments in the stack at the start of execution. */
	unsigned int flags; /* The current execution flags. */
	int wake_time; /* The lua wants to be woken at this time, or before. (ticks) */
	struct hlua_timer timer; /* lua multipurpose timer */
	struct task *task; /* The task associated with the lua stack execution.
	                      We must wake this task to continue the task execution. */
	struct list com; /* The list head of the signals attached to this task. */
	struct mt_list hc_list; /* list of httpclients associated with this lua task */
	struct ebpt_node node;
	int gc_count; /* number of items which need a GC */
};

/* This is a part of the list containing references to functions
 * called at initialisation time.
 */
struct hlua_init_function {
	struct list l;
	int function_ref;
};

/* This struct contains the lua data used to bind
 * Lua functions on HAProxy hooks like sample-fetches
 * or actions.
 */
struct hlua_function {
	struct list l;
	char *name;
	int function_ref[MAX_THREADS + 1];
	int nargs;
};

/* This struct is used with the structs:
 * - http_req_rule
 * - http_res_rule
 * - tcp_rule
 * It contains the lua execution configuration.
 */
struct hlua_rule {
	struct hlua_function *fcn;
	char **args;
};

/* This struct contains the pointers provided to most
 * internal HAProxy calls during the processing of
 * rules, converters and sample-fetches. This struct is
 * associated with the lua object called "TXN".
 */
struct hlua_txn {
	struct stream *s;
	struct proxy *p;
	int dir; /* SMP_OPT_DIR_{REQ,RES} */
	int flags;
};

/* This struct contains the applet context. */
struct hlua_appctx {
	struct appctx *appctx;
	luaL_Buffer b; /* buffer used to prepare strings. */
	struct hlua_txn htxn;
};

/* This struct is used with sample fetches and sample converters. */
struct hlua_smp {
	struct stream *s;
	struct proxy *p;
	unsigned int flags; /* LUA_F_OPT_* */
	int dir; /* SMP_OPT_DIR_{REQ,RES} */
};

/* This struct contains data used with sleep functions. */
struct hlua_sleep {
	struct task *task; /* task associated with sleep. */
	struct list com; /* list of signals to wake at the end of sleep. */
	unsigned int wakeup_ms; /* wakeup time, in ms. */
};

2015-02-16 13:27:16 -05:00
|
|
|
/* This struct is used to create coprocess doing TCP or
|
REORG/MAJOR: session: rename the "session" entity to "stream"
With HTTP/2, we'll have to support multiplexed streams. A stream is in
fact the largest part of what we currently call a session, it has buffers,
logs, etc.
In order to catch any error, this commit removes any reference to the
struct session and tries to rename most "session" occurrences in function
names to "stream" and "sess" to "strm" when that's related to a session.
The files stream.{c,h} were added and session.{c,h} removed.
The session will be reintroduced later and a few parts of the stream
will progressively be moved overthere. It will more or less contain
only what we need in an embryonic session.
Sample fetch functions and converters will have to change a bit so
that they'll use an L5 (session) instead of what's currently called
"L4" which is in fact L6 for now.
Once all changes are completed, we should see approximately this :
L7 - http_txn
L6 - stream
L5 - session
L4 - connection | applet
There will be at most one http_txn per stream, and a same session will
possibly be referenced by multiple streams. A connection will point to
a session and to a stream. The session will hold all the information
we need to keep even when we don't yet have a stream.
Some more cleanup is needed because some code was already far from
being clean. The server queue management still refers to sessions at
many places while comments talk about connections. This will have to
be cleaned up once we have a server-side connection pool manager.
Stream flags "SN_*" still need to be renamed, it doesn't seem like
any of them will need to move to the session.
2015-04-02 18:22:06 -04:00
|
|
|
* SSL I/O. It uses a fake stream.
|
2015-02-16 13:27:16 -05:00
|
|
|
*/
|
|
|
|
|
struct hlua_socket {
	struct xref xref;  /* cross reference with the stream used for socket I/O. */
	luaL_Buffer b;     /* buffer used to prepare strings. */
	unsigned long tid; /* id of the thread which created the socket. */
};

/* The Concat class is implemented without luaL_Buffer: the Lua stack cannot
 * be safely manipulated between luaL_buffinit() and luaL_pushresult() because
 * its state is unknown during that window, and the memory backing a
 * luaL_Buffer may be reclaimed by the GC while the Concat object is still
 * live. The buffer string is therefore allocated as Lua userdata instead.
 */
struct hlua_concat {
	int size; /* allocated size of the buffer. */
	int len;  /* number of bytes currently used. */
};
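
/* For illustration only (not part of this header): the luaL_Buffer stack
 * discipline that Concat deliberately avoids looks like the sketch below;
 * any unbalanced use of the stack between these calls is undefined:
 *
 *     luaL_Buffer b;
 *     luaL_buffinit(L, &b);      // from here on, stack slots belong to b
 *     luaL_addstring(&b, "foo");
 *     luaL_addstring(&b, "bar");
 *     luaL_pushresult(&b);       // stack restored, result string on top
 */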

/* This struct is used to store the httpclient. */
struct hlua_httpclient {
	struct httpclient *hc; /* ptr to the httpclient instance */
	size_t sent;           /* payload bytes sent so far */
	luaL_Buffer b;         /* buffer used to prepare strings. */
	/* An mt_list is used here because this list can be accessed
	 * concurrently: hlua_httpclient_destroy_all() walks it from
	 * hlua_ctx_destroy() while the Lua GC may run hlua_httpclient_gc()
	 * from another thread when 'lua-load' shares the main Lua stack,
	 * and neither side runs under the global Lua lock.
	 */
	struct mt_list by_hlua; /* linked in the current hlua task */
};
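
/* Illustrative sketch (assumed helpers, modeled on haproxy's mt_list API):
 * both sides may touch the list without the global Lua lock because each
 * mt_list operation is atomic on its own:
 *
 *     MT_LIST_APPEND(&hlua->hc_list, &hlua_hc->by_hlua);  // on creation
 *     MT_LIST_DELETE(&hlua_hc->by_hlua);                  // on GC/destroy
 */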

struct hlua_proxy_list {
	char capabilities; /* PR_CAP_FE and/or PR_CAP_BE: which proxies to list. */
};

struct hlua_proxy_list_iterator_context {
	struct proxy *next; /* next proxy to visit. */
	char capabilities;  /* PR_CAP_FE and/or PR_CAP_BE filter. */
};

struct hlua_server_list {
	struct proxy *px; /* proxy whose server list is browsed. */
};

struct hlua_server_list_iterator_context {
	struct server *cur; /* current server in the iteration. */
	struct proxy *px;   /* proxy whose server list is browsed. */
};

#else /* USE_LUA */

/************************ For use when Lua is disabled ********************/

/* Empty structs for compilation compatibility */
struct hlua { };
struct hlua_socket { };
struct hlua_rule { };

#endif /* USE_LUA */

#endif /* _HAPROXY_HLUA_T_H */