From owner-freebsd-fs@FreeBSD.ORG Fri Apr 23 06:42:56 2010
Return-Path: 
Delivered-To: freebsd-fs@FreeBSD.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 8D2CA106564A; Fri, 23 Apr 2010 06:42:56 +0000 (UTC) (envelope-from avg@icyb.net.ua)
Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id 67F588FC1E; Fri, 23 Apr 2010 06:42:55 +0000 (UTC)
Received: from porto.topspin.kiev.ua (porto-e.starpoint.kiev.ua [212.40.38.100]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id JAA23663; Fri, 23 Apr 2010 09:42:54 +0300 (EEST) (envelope-from avg@icyb.net.ua)
Received: from localhost.topspin.kiev.ua ([127.0.0.1]) by porto.topspin.kiev.ua with esmtp (Exim 4.34 (FreeBSD)) id 1O5CbF-000BL2-Nu; Fri, 23 Apr 2010 09:42:53 +0300
Message-ID: <4BD1416D.30207@icyb.net.ua>
Date: Fri, 23 Apr 2010 09:42:53 +0300
From: Andriy Gapon
User-Agent: Thunderbird 2.0.0.24 (X11/20100321)
MIME-Version: 1.0
To: Pawel Jakub Dawidek
References: <4BD0C802.3000004@icyb.net.ua> <20100423060850.GB1670@garage.freebsd.pl>
In-Reply-To: <20100423060850.GB1670@garage.freebsd.pl>
X-Enigmail-Version: 0.96.0
Content-Type: text/plain; charset=KOI8-U
Content-Transfer-Encoding: 7bit
Cc: freebsd-fs@FreeBSD.org, freebsd-geom@FreeBSD.org
Subject: Re: vdev_geom_io: parallelize ?
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Filesystems
X-List-Received-Date: Fri, 23 Apr 2010 06:42:56 -0000

on 23/04/2010 09:08 Pawel Jakub Dawidek said the following:
> On Fri, Apr 23, 2010 at 01:04:50AM +0300, Andriy Gapon wrote:
>> Just thinking out loud.
>>
>> Currently ZFS vdev_geom_io does something like:
>> for (...) {
>>         ...
>>         g_io_request(...);
>>         biowait(...);
>>         ...
>> }
>> I/O is done in MAXPHYS chunks.
>>
>> If that was changed to first issuing all the requests and only after that
>> waiting on them, could there be any performance benefit?
>> Or are cases of vdev_geom_io with size > MAXPHYS too rare?
>> Or something else?
>
> The vdev_geom_io() function is there only to read ZFS labels, it is not
> used during regular I/O. Regular I/O requests are handled asynchronously
> by the vdev_geom_io_start() function.

Oops. Thanks!

-- 
Andriy Gapon
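
[For readers following the thread: the "issue everything first, wait afterwards" pattern Andriy asks about could be sketched roughly as below. This is a hypothetical illustration, not the actual FreeBSD code, and as Pawel notes it would only affect label reads. g_alloc_bio(), g_io_request(), biowait() and g_destroy_bio() are real GEOM KPIs; the function name vdev_geom_io_parallel and the simplified error handling are assumptions.]

static int
vdev_geom_io_parallel(struct g_consumer *cp, void *data,
    off_t offset, off_t size)
{
        struct bio **bps;
        int i, n, err, error = 0;

        n = howmany(size, MAXPHYS);
        bps = malloc(n * sizeof(*bps), M_TEMP, M_WAITOK);

        /* Phase 1: issue every MAXPHYS-sized chunk without waiting. */
        for (i = 0; i < n; i++) {
                bps[i] = g_alloc_bio();
                bps[i]->bio_cmd = BIO_READ;
                bps[i]->bio_done = NULL;        /* caller will biowait() */
                bps[i]->bio_offset = offset + (off_t)i * MAXPHYS;
                bps[i]->bio_length = MIN(size - (off_t)i * MAXPHYS, MAXPHYS);
                bps[i]->bio_data = (char *)data + (off_t)i * MAXPHYS;
                g_io_request(bps[i], cp);
        }

        /* Phase 2: collect every completion, keeping the first error. */
        for (i = 0; i < n; i++) {
                err = biowait(bps[i], "vdevpio");
                if (err != 0 && error == 0)
                        error = err;
                g_destroy_bio(bps[i]);
        }
        free(bps, M_TEMP);
        return (error);
}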