From: Andriy Gapon <avg@icyb.net.ua>
Date: Thu, 15 May 2008 23:56:18 +0300
To: davids@webmaster.com
Cc: freebsd-stable@freebsd.org, freebsd-threads@freebsd.org
Subject: Re: thread scheduling at mutex unlock

on 15/05/2008 22:29 David Schwartz said the following:
>> what if you have an infinite number of items on one side and a finite
>> number on the other, and you want to process them all (in infinite
>> time, of course)? Would you still try to finish everything on one
>> side (the infinite one), or would you try to look at what you have
>> on the other side?
>>
>> I am sorry about the fuzzy wording of my original report; I should
>> have mentioned "starvation" somewhere in it.
>
> There is no such thing as a "fair share" when comparing an infinite
> quantity to a finite quantity. It is just as sensible to do 1 then 1
> as 10 then 10, or a billion then 1.
>
> What I would do in this case is work on one side for one timeslice,
> then the other side for one timeslice, continuing until either side
> was finished; then I'd work exclusively on the remaining side. This
> is precisely the purpose of having timeslices in a scheduler.
>
> The timeslice is carefully chosen so that it's not so long that you
> ignore a side for too long. It's also carefully chosen so that it's
> not so short that you spend all your time switching sides.
>
> What sane schedulers do is assume that you want to make as much
> forward progress as quickly as possible. This means getting as many
> work units done per unit time as possible, which means as few
> context switches as possible.
>
> A scheduler that switches significantly more often than once per
> timeslice with a load like this is *broken*. The purpose of the
> timeslice is to place an upper bound on the number of context
> switches in cases where forward progress can be made on more than
> one process. An ideal scheduler would not switch more often than
> once per timeslice unless it could not make further forward
> progress.
>
> Real-world schedulers actually may allow one side to pre-empt the
> other, and may switch a bit more often than a scheduler that's
> "ideal" in the sense described above. This is done in an attempt to
> boost interactive performance.
>
> But your basic assumption that strict alternation is desirable is
> massively wrong. That's the *worst* *possible* outcome.
David, thank you for the tutorial; it is quite enlightening. But first
of all, did you take a look at my small test program? There are
1-second sleeps in it, so this is not about timeslices or scheduling at
that level at all. This is about a basic expectation of fairness when
acquiring a lock, at the macro level. When one thread acquires,
releases, and reacquires a mutex over a span of 10 seconds while the
other thread stays blocked on that mutex for those same 10 seconds,
that is not a matter of timeslices.

--
Andriy Gapon
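
[For readers who don't have the earlier message at hand: below is a
minimal sketch, in pthreads C, of a test program with the shape
described above. It is an assumed reconstruction, not the program
actually posted in the thread; the thread names, iteration count, and
timings are illustrative. One thread holds the mutex for a second at a
time and reacquires it immediately after unlocking, while a second
thread blocks on the same mutex.]

/*
 * Sketch of the contested-mutex scenario (hypothetical reconstruction,
 * not the original test program). Build with: cc -o test test.c -lpthread
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *busy(void *arg)
{
	int i;

	for (i = 0; i < 10; i++) {
		pthread_mutex_lock(&lock);
		printf("busy thread holds the mutex (iteration %d)\n", i);
		sleep(1);	/* hold the lock for a full second */
		pthread_mutex_unlock(&lock);
		/*
		 * No pause here: the next loop iteration tries to
		 * reacquire immediately. The question is whether the
		 * blocked waiter ever gets the lock in between.
		 */
	}
	return (NULL);
}

static void *waiter(void *arg)
{
	pthread_mutex_lock(&lock);	/* blocks while busy() owns it */
	printf("waiter finally acquired the mutex\n");
	pthread_mutex_unlock(&lock);
	return (NULL);
}

int main(void)
{
	pthread_t b, w;

	pthread_create(&b, NULL, busy, NULL);
	sleep(1);	/* let busy() grab the lock first */
	pthread_create(&w, NULL, waiter, NULL);
	pthread_join(b, NULL);
	pthread_join(w, NULL);
	return (0);
}

[On a system where unlock wakes the waiter but still lets the unlocking
thread win the reacquisition every time, the "waiter finally acquired"
line appears only after all ten busy iterations complete. That is the
macro-level starvation being reported, and timeslice length plays no
role in it.]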