From owner-freebsd-geom@FreeBSD.ORG Tue Jul 21 07:50:09 2009
Date: Tue, 21 Jul 2009 07:50:07 GMT
Message-Id: <200907210750.n6L7o7MC000583@freefall.freebsd.org>
To: freebsd-geom@FreeBSD.org
From: freebsdpr
Subject: Re: kern/113885: [gmirror] [patch] improved gmirror balance algorithm
List-Id: GEOM-specific discussions and implementations

The following reply was made to PR kern/113885; it has been noted by GNATS.

From: freebsdpr
To: bug-followup@FreeBSD.org
Cc: freebsdpr
Subject: Re: kern/113885: [gmirror] [patch] improved gmirror balance algorithm
Date: Tue, 21 Jul 2009 17:45:37 +1000 (EST)

I was also surprised to discover that gmirror, regardless of the balance algorithm used, does not seem to offer random or sequential read performance any better than that of a single drive.
I have a new SATA backplane with individual drive activity indicators, and with these you can easily see that the "load" algorithm seems to be selecting (and staying on) only a single drive at a time, for anywhere between 0.1 and 1 seconds. Some simple testing confirmed that there is no discernible read performance benefit between one drive and more than one, so much for my 4-drive RAID1 idea! In comparison, a 5-drive graid3 array offers a sequential read speed of nearly 4 times that of a single drive, with read verify ON.

----

On to the "load" patch above: it doesn't seem to work for me. I thought it might have been because I had 4 drives in the array, but even after dropping back to 2 it still only reads from a *single* drive. Any ideas? I'm using 7.1-RELEASE on amd64.

    Geom name: db0
    State: COMPLETE
    Components: 2
    Balance: load    <--- ***