From owner-freebsd-questions Sun Oct 24 11:52:25 1999
Delivered-To: freebsd-questions@freebsd.org
Received: from mail.rdc2.on.home.com (ha1.rdc2.on.home.com [24.9.0.15]) by hub.freebsd.org (Postfix) with ESMTP id C9362150CD for ; Sun, 24 Oct 1999 11:52:23 -0700 (PDT) (envelope-from street@iname.com)
Received: from mired.eh.local ([24.64.136.188]) by mail.rdc2.on.home.com (InterMail v4.01.01.07 201-229-111-110) with ESMTP id <19991024185222.PSKM3040.mail.rdc2.on.home.com@mired.eh.local>; Sun, 24 Oct 1999 11:52:22 -0700
Received: (from kws@localhost) by mired.eh.local (8.9.3/8.9.3) id OAA68798; Sun, 24 Oct 1999 14:52:22 -0400 (EDT) (envelope-from kws)
From: Kevin Street
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <14355.21862.534594.839643@mired.eh.local>
Date: Sun, 24 Oct 1999 14:52:22 -0400 (EDT)
To: freebsd-questions@FreeBSD.org
Cc: Greg Lehey
Subject: vinum stripe size vs ufs
X-Mailer: VM 6.71 under 21.1 "20 Minutes to Nikko" XEmacs Lucid (patch 2)
Sender: owner-freebsd-questions@FreeBSD.ORG
Precedence: bulk
X-Loop: FreeBSD.ORG

I just set up a new vinum installation with a striped volume on 2 disks.
I noticed an ugly interaction between vinum's stripe sizes and ufs.

My first try used the default stripe size of 256k and newfs defaults.
Watching iostat when I did the restore of my file system to the new
volume, I could see all the io going to one disk during the first part
of the restore (restore of the dir structure?), then a 4-to-1 imbalance
of io during the rest of the restore. A du on the resulting file system
did all its io on one disk. I tried changing the newfs cylinders/group
to 19 (from 16), but this had little effect.

I then rebuilt my vinum plex with a stripe size of 257k (rather than
256k) and newfs -c 19, and this works MUCH better. I now see a very even
distribution of io during the restore, directory operations and general
file access. Prime numbers == good.
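[The alignment effect described above can be sketched with a short simulation. This is a hedged illustration, not taken from vinum's actual layout code: it assumes simple round-robin striping, a cylinder-group spacing of 32 MB and 64 groups, both of which are illustrative numbers chosen to show the effect, not values from the original post.]

```python
KB = 1024

def disk_for_offset(offset, stripe, ndisks):
    """Round-robin striping: which disk holds this byte offset?"""
    return (offset // stripe) % ndisks

def cg_distribution(stripe, ndisks=2, cg_spacing=32 * 1024 * KB, ncg=64):
    """Count how many cylinder-group starts land on each disk.

    cg_spacing and ncg are illustrative assumptions; real UFS spacing
    depends on the newfs parameters.
    """
    counts = [0] * ndisks
    for cg in range(ncg):
        counts[disk_for_offset(cg * cg_spacing, stripe, ndisks)] += 1
    return counts

# With a power-of-two stripe that divides the cylinder-group spacing,
# every group start maps to the same disk, so metadata-heavy work
# (restoring the directory structure, du) hammers one spindle.
print(cg_distribution(256 * KB))   # [64, 0]

# A stripe size that does not divide the spacing rotates the group
# starts across both disks.
print(cg_distribution(257 * KB))   # [32, 32]
```

[In this model it is not primality per se that helps, but the stripe size not sharing a large common factor with the cylinder-group spacing; an odd, non-power-of-two stripe breaks the resonance with UFS's regularly spaced metadata.]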
I've probably managed to get vinum into a fairly optimal state by
tinkering, but are there any recommendations for stripe sizes or newfs
parameters for vinum use on striped configurations?

Greg, you should probably think about changing the default stripe size,
since the default gives very non-optimal results on 2 disks (and
probably on any even number of striped disks).

This was all on a very recent 4.0 -current, but probably applies to
vinum in 3.x as well.

--
Kevin Street
street@iname.com

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-questions" in the body of the message