Date: Wed, 18 Jun 2025 02:13:22 GMT
From: Olivier Certner <olce@FreeBSD.org>
To: src-committers@FreeBSD.org, dev-commits-src-all@FreeBSD.org, dev-commits-src-main@FreeBSD.org
Subject: git: fdf31d274769 - main - sched_ule: runq_steal_from(): Suppress first thread special case
Message-ID: <202506180213.55I2DMbe024367@gitrepo.freebsd.org>
The branch main has been updated by olce:

URL: https://cgit.FreeBSD.org/src/commit/?id=fdf31d27476968456a8a389d8152370582756ef1

commit fdf31d27476968456a8a389d8152370582756ef1
Author:     Olivier Certner <olce@FreeBSD.org>
AuthorDate: 2024-04-29 06:54:43 +0000
Commit:     Olivier Certner <olce@FreeBSD.org>
CommitDate: 2025-06-18 02:08:01 +0000

    sched_ule: runq_steal_from(): Suppress first thread special case

    This special case was introduced as early as commit "ULE 3.0"
    (ae7a6b38d53f, r171482, from July 2007).  It caused runq_steal_from()
    to ignore the highest-priority thread while stealing.

    Its functionality was changed in commit "Rework CPU load balancing in
    SCHED_ULE" (36acfc6507aa, r232207, from February 2012), where the
    intent was to keep track of that first thread and return it if no
    other thread was stealable, instead of returning NULL (no steal).  A
    bug prevented this from working in loaded cases (more than one
    thread, with all threads but the first not stealable); it was
    subsequently fixed in commit "sched_ule(4): Fix interactive threads
    stealing." (bd84094a51c4, from September 2021).

    All the reasons we could second-guess for this mechanism were dubious
    at best.  Jeff Roberson, ULE's main author, says in the differential
    revision that "The point was to move threads that are least likely
    to benefit from affinity because they are unlikely to run soon
    enough to take advantage of it.", to which we responded: "(snip)
    This may improve affinity in some cases, but at the same time we
    don't really know when the next thread on the queue is to run.  Not
    stealing in this case also amounts to slightly violating the
    expected execution ordering and fairness.".

    As this twist doesn't seem to bring any performance improvement in
    general, let's just remove it.

    MFC after:      1 month
    Event:          Kitchener-Waterloo Hackathon 202506
    Sponsored by:   The FreeBSD Foundation
    Differential Revision:  https://reviews.freebsd.org/D45388
---
 sys/kern/sched_ule.c | 14 --------------
 1 file changed, 14 deletions(-)

diff --git a/sys/kern/sched_ule.c b/sys/kern/sched_ule.c
index 5c7665eb7add..1b780f192352 100644
--- a/sys/kern/sched_ule.c
+++ b/sys/kern/sched_ule.c
@@ -1185,9 +1185,7 @@ tdq_notify(struct tdq *tdq, int lowpri)
 
 struct runq_steal_pred_data {
 	struct thread	*td;
-	struct thread	*first;
 	int		cpu;
-	bool		use_first_last;
 };
 
 static bool
@@ -1197,11 +1195,6 @@ runq_steal_pred(const int idx, struct rq_queue *const q, void *const data)
 	struct thread *td;
 
 	TAILQ_FOREACH(td, q, td_runq) {
-		if (d->use_first_last && d->first == NULL) {
-			d->first = td;
-			continue;
-		}
-
 		if (THREAD_CAN_MIGRATE(td) && THREAD_CAN_SCHED(td, d->cpu)) {
 			d->td = td;
 			return (true);
@@ -1220,9 +1213,7 @@ runq_steal_from(struct runq *const rq, int cpu, int start_idx)
 {
 	struct runq_steal_pred_data data = {
 		.td = NULL,
-		.first = NULL,
 		.cpu = cpu,
-		.use_first_last = true
 	};
 	int idx;
 
@@ -1238,9 +1229,6 @@ runq_steal_from(struct runq *const rq, int cpu, int start_idx)
 	}
 
 	MPASS(idx == -1 && data.td == NULL);
-	if (data.first != NULL && THREAD_CAN_MIGRATE(data.first) &&
-	    THREAD_CAN_SCHED(data.first, cpu))
-		return (data.first);
 	return (NULL);
 found:
 	MPASS(data.td != NULL);
@@ -1255,9 +1243,7 @@ runq_steal(struct runq *rq, int cpu)
 {
 	struct runq_steal_pred_data data = {
 		.td = NULL,
-		.first = NULL,
 		.cpu = cpu,
-		.use_first_last = false
 	};
 	int idx;
 
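For quick reference, here is a stand-alone sketch contrasting the two
behaviors.  This is not the sched_ule.c code: the structure and names
below are hypothetical stand-ins for the TAILQ_FOREACH() walk and the
THREAD_CAN_MIGRATE()/THREAD_CAN_SCHED() checks in the diff above.

/*
 * Stand-alone sketch of the removed "first thread" twist; all names
 * here are hypothetical, not the sched_ule.c ones.
 */
#include <stdbool.h>
#include <stddef.h>

struct sketch_thread {
	struct sketch_thread *next;
	bool can_migrate;	/* stand-in for THREAD_CAN_MIGRATE(td) */
	bool can_sched;		/* stand-in for THREAD_CAN_SCHED(td, cpu) */
};

/*
 * Old behavior: remember but skip the queue head, falling back to it
 * only when no other thread is stealable (the fallback being the part
 * fixed in bd84094a51c4).
 */
static struct sketch_thread *
steal_with_first_fallback(struct sketch_thread *head)
{
	struct sketch_thread *first = NULL;

	for (struct sketch_thread *td = head; td != NULL; td = td->next) {
		if (first == NULL) {
			first = td;	/* Skip the highest-priority thread. */
			continue;
		}
		if (td->can_migrate && td->can_sched)
			return (td);
	}
	if (first != NULL && first->can_migrate && first->can_sched)
		return (first);
	return (NULL);
}

/* New behavior after this commit: every thread is an equal candidate. */
static struct sketch_thread *
steal_uniform(struct sketch_thread *head)
{
	for (struct sketch_thread *td = head; td != NULL; td = td->next)
		if (td->can_migrate && td->can_sched)
			return (td);
	return (NULL);
}

Under the old logic the queue head is returned only as a last resort;
the new logic treats it like any other candidate, which keeps stealing
consistent with the queue's expected execution ordering.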