From owner-freebsd-current@FreeBSD.ORG Fri Jul 8 20:57:53 2005
Message-ID: <42CEE8CF.4060200@samsco.org>
Date: Fri, 08 Jul 2005 14:57:51 -0600
From: Scott Long <scottl@samsco.org>
To: Luigi Rizzo
In-Reply-To: <20050708110742.A6284@xorpc.icir.org>
Cc: s223560@studenti.ing.unipi.it, current@freebsd.org
Subject: Re: location of bioq lock
List-Id: Discussions about the use of FreeBSD-current

Luigi Rizzo wrote:
> [see context at the end]
>
> Queues of I/O requests are implemented by a struct bio_queue_head
> in src/sys/sys/bio.h
> - however, the lock for the queue is outside the structure, and, as
> a result, whoever wants to manipulate this data structure either
> a) needs it already locked, or b) needs to know which lock to grab.
>
> Case a) occurs when someone accesses the bio_queue through the
> regular API -- the caller already does the required locking.
> Case b), however, can occur when we have some asynchronous request
> to work on the queue, e.g. to change the scheduler.
>
> So we need to know where the lock is. I can see two ways:
>
> 1) put the lock in the struct bio_queue_head.
>    This is the same thing done in struct g_bioq, defined in
>    sys/geom/geom.h. Quite clean, except that perhaps some
>    users of bio_queue_head still run under Giant (e.g. cam/scsi?)
>    and so it is not possible to 'bring in' the lock.
>
> 2) change bioq_init() so that it also takes a pointer to the mtx
>    that protects the queue.
>    This is probably less clean, but perhaps a bit more flexible,
>    because the queue and its lock are decoupled. It also makes it
>    easy to deal with the 'Giant' case.
>
> Other ideas?
>
> cheers
> luigi
>
> (background - this is related to the work my SoC student Emiliano,
> in Cc, is doing on pluggable disk schedulers)
>
> The disk scheduler operates on struct bio_queue_head objects
> (which include CSCAN scheduler info) and uses 5 methods:
>
> bioq_init()     initializes the queue.
> bioq_disksort() adds requests to the queue.
> bioq_first()    peeks at the head of the queue.
> bioq_remove()   removes the first element.
> bioq_flush()    is right now simply a wrapper around bioq_first()
>                 and bioq_remove(), but one could imagine the need
>                 for a specific destructor to free memory, etc.

Each bioq is owned by its consumer, i.e. the individual driver. As
such, locking it is the responsibility of the driver. The block
layer/GEOM does not manipulate the driver bioqs, nor should it.
The driver knows best how to sort and queue requests to the hardware,
and many drivers don't even need explicit sorting because they are
talking to an intelligent controller that will do that work for them.
Also, putting discrete locks on bioq operations is inefficient. I did
a _lot_ of experimentation with locking storage drivers and found that
one lock in the fast path covering the bioq, the softc/resources, and
hardware access was a whole lot faster than splitting it up into
multiple locks. There are certain circumstances where this might not
be the case, but as a general rule it is.

So, the status quo with regard to bioq locking is fine. Please don't
try to make the block layer outguess the drivers and the hardware, and
please don't try to mandate extra locking that reduces performance.

Scott