From owner-freebsd-geom@FreeBSD.ORG Tue Aug 25 09:20:03 2009
Date: Tue, 25 Aug 2009 09:20:03 GMT
Message-Id: <200908250920.n7P9K3XA042099@freefall.freebsd.org>
To: freebsd-geom@FreeBSD.org
From: Ivan Voras
Subject: Re: kern/113885: [gmirror] [patch] improved gmirror balance algorithm
List-Id: GEOM-specific discussions and implementations

The following reply was made to PR kern/113885; it has been noted by GNATS.

From: Ivan Voras
To: bug-followup@freebsd.org, zuborg@advancedhosters.com
Cc:
Subject: Re: kern/113885: [gmirror] [patch] improved gmirror balance algorithm
Date: Tue, 25 Aug 2009 11:11:12 +0200

 The patch will not increase streaming read performance beyond what is possible with a single drive; it will improve random read performance in certain cases, where reads are localized in such a way that serving some of them from one drive and the rest from the other drive helps.

 The reason there is no streaming read scalability, compared with what can be achieved with RAID0/3/5, is that there is no striping here. For example: if you need to read 4 striped blocks from a RAID0 of two drives, blocks 0 and 2 are stored adjacently on the first drive and blocks 1 and 3 adjacently on the second drive. Reading the 4 blocks therefore results in a single sequential read on each drive. OTOH, with RAID1, blocks 0 and 2 on a given drive are stored with a "gap" between them containing block 1, so they cannot be read sequentially; a seek is needed instead. This is why e.g. the "split" method (which effectively does striping at the request level) doesn't help much with streaming performance.
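 To make the layout difference concrete, here is a minimal sketch in plain C. It assumes a two-drive array and an even/odd assignment of blocks to members; it is an illustration only, not code from the patch or from gmirror:

 /*
  * Illustration only (not from the patch or gmirror): where does logical
  * block "b" land for a 2-disk RAID0 stripe vs. a 2-disk RAID1 mirror
  * whose reads are "split" even/odd between the members?  The disk count
  * and the even/odd assignment are assumptions made for the example.
  */
 #include <stdio.h>

 #define NDISKS 2

 int
 main(void)
 {
 	int b;

 	printf("RAID0: logical block -> (disk, offset on that disk)\n");
 	for (b = 0; b < 4; b++)
 		printf("  block %d -> disk %d, offset %d\n",
 		    b, b % NDISKS, b / NDISKS);

 	printf("RAID1 \"split\": logical block -> (disk, offset on that disk)\n");
 	for (b = 0; b < 4; b++)
 		printf("  block %d -> disk %d, offset %d\n",
 		    b, b % NDISKS, b);	/* every member stores all blocks */

 	return (0);
 }

 In the RAID0 case each member is asked for adjacent offsets (0 and 1), while in the mirrored "split" case the member serving blocks 0 and 2 must skip over block 1 between them, which is the gap described above.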