Date:      Thu, 24 Aug 2017 09:41:03 -0700 (PDT)
From:      Don Lewis <truckman@FreeBSD.org>
To:        avg@FreeBSD.org
Cc:        freebsd-arch@FreeBSD.org
Subject:   Re: ULE steal_idle questions
Message-ID:  <201708241641.v7OGf3pA042851@gw.catspoiler.org>
In-Reply-To: <d9dae0c1-e718-13fe-b6b5-87160c71784e@FreeBSD.org>

Aside from the Ryzen problem, I think the steal_idle code should be
rewritten so that it doesn't block interrupts for so long.  In its
current state, interrupt latency increases with the number of cores and
the complexity of the topology.
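
For context: the current tdq_idled() wraps the whole topology walk in
spinlock_enter()/spinlock_exit(), so interrupts stay disabled on the
searching CPU while every level of the hierarchy is scanned.  The
fragment below is an abridged paraphrase of that structure, not the
literal source:

	spinlock_enter();	/* interrupts off from here ... */
	for (cg = tdq->tdq_cg; cg != NULL; ) {
		/*
		 * Scan this level of the topology for a CPU worth
		 * stealing from; walk up to cg->cg_parent when nothing
		 * at this level qualifies.
		 */
	}
	spinlock_exit();	/* ... until a steal succeeds or the walk ends */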

What I'm thinking is that we should set a flag at the start of the
search for a thread to steal.  If we are preempted by another,
higher-priority thread, that thread will clear the flag.  Then we start
the loop to search up the hierarchy.  Once we find a candidate CPU:

		steal = TDQ_CPU(cpu);
		CPU_CLR(cpu, &mask);
		tdq_lock_pair(tdq, steal);
		if (tdq->tdq_load != 0) {
			goto out;	/* exit the loop and switch to the new thread */
		}
		if (flag was cleared) {
			tdq_unlock_pair(tdq, steal);
			goto restart;	/* restart the search */
		}
		if (steal->tdq_load < thresh || steal->tdq_transferable == 0 ||
		    tdq_move(steal, tdq) == 0) {
			tdq_unlock_pair(tdq, steal);
			continue;
		}
out:
		TDQ_UNLOCK(steal);
		clear flag;
		mi_switch(SW_VOL | SWT_IDLE, NULL);
		thread_unlock(curthread);
		return (0);

We also have to clear the flag if we do not find a thread to steal.
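
For concreteness, here is roughly how the whole function might be
structured with that change, folding the fragment above back into the
existing tdq_idled() loop.  The tdq_searching field is just a made-up
name for the flag (it would be a new member of struct tdq), and having
the preempting path clear it when it puts a thread on this CPU is part
of the proposal, not existing code:

static int
tdq_idled(struct tdq *tdq)
{
	struct cpu_group *cg;
	struct tdq *steal;
	cpuset_t mask;
	int thresh, cpu;

	if (smp_started == 0 || steal_idle == 0)
		return (1);
restart:
	/*
	 * Interrupts stay enabled.  A thread that preempts us clears
	 * tdq_searching (hypothetical new field) so we know our view
	 * of the topology may be stale.
	 */
	tdq->tdq_searching = 1;
	CPU_FILL(&mask);
	CPU_CLR(PCPU_GET(cpuid), &mask);
	for (cg = tdq->tdq_cg; cg != NULL; ) {
		if ((cg->cg_flags & (CG_FLAG_HTT | CG_FLAG_THREAD)) == 0)
			thresh = steal_thresh;
		else
			thresh = 1;
		cpu = sched_highest(cg, mask, thresh);
		if (cpu == -1) {
			cg = cg->cg_parent;
			continue;
		}
		steal = TDQ_CPU(cpu);
		CPU_CLR(cpu, &mask);
		tdq_lock_pair(tdq, steal);
		if (tdq->tdq_load != 0) {
			/* Work arrived while we were looking; just run it. */
			goto out;
		}
		if (tdq->tdq_searching == 0) {
			/* We were preempted; start the search over. */
			tdq_unlock_pair(tdq, steal);
			goto restart;
		}
		if (steal->tdq_load < thresh || steal->tdq_transferable == 0 ||
		    tdq_move(steal, tdq) == 0) {
			tdq_unlock_pair(tdq, steal);
			continue;
		}
out:
		TDQ_UNLOCK(steal);
		tdq->tdq_searching = 0;
		mi_switch(SW_VOL | SWT_IDLE, NULL);
		thread_unlock(curthread);
		return (0);
	}
	/* Nothing to steal; drop the flag before going back to idle. */
	tdq->tdq_searching = 0;
	return (1);
}

Checking tdq_load under the lock pair already catches work that arrived
just before we locked; the flag only has to catch a preemption earlier
in the walk, so a racing clear at worst costs one extra restart.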



