mirror of
https://github.com/opnsense/src.git
synced 2026-03-28 13:43:12 -04:00
callout: Wait for the softclock thread to switch before rescheduling
When a softclock thread prepares to go off-CPU, the following happens in
the context of the thread:
1. callout state is locked
2. thread state is set to IWAIT
3. thread lock is switched from the tdq lock to the callout lock
4. tdq lock is released
5. sched_switch() sets td_lock to &blocked_lock
6. sched_switch() releases old td_lock (callout lock)
7. sched_switch() removes td from its runqueue
8. cpu_switch() sets td_lock back to the callout lock
Suppose a timer interrupt fires while the softclock thread is switching
off, and callout_process() schedules the softclock thread. Then there
is a window between steps 5 and 8 where callout_process() can call
sched_add() while td_lock is &blocked_lock, but this is not correct
since the thread is not logically locked.
callout_process() thus needs to spin waiting for the softclock thread to
finish switching off (i.e., after step 8 completes) before rescheduling
it, since callout_process() does not acquire the thread lock directly.
Reported by: syzbot+fb44dbf6734ff492c337@syzkaller.appspotmail.com
Fixes: 74cf7cae4d ("softclock: Use dedicated ithreads for running callouts.")
Reviewed by: mav, kib, jhb
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D33709
parent 5a73a6c178
commit 6b95cf5bde

1 changed file with 2 additions and 0 deletions
@@ -548,6 +548,8 @@ next:
 	if (!TAILQ_EMPTY(&cc->cc_expireq)) {
 		td = cc->cc_thread;
 		if (TD_AWAITING_INTR(td)) {
+			thread_lock_block_wait(td);
+			THREAD_LOCK_ASSERT(td, MA_OWNED);
 			TD_CLR_IWAIT(td);
 			sched_add(td, SRQ_INTR);
 		} else