From owner-freebsd-stable@FreeBSD.ORG Thu Mar 16 17:58:09 2006
From: "Jaime Bozza" <jbozza@qlinksmedia.com>
Delivered-To: freebsd-stable@freebsd.org
Cc: freebsd-stable@mlists.thewrittenword.com
Date: Thu, 16 Mar 2006 11:58:07 -0600
Subject: RE: well-supported SATA RAID card?
List-Id: Production branch of FreeBSD source code

>>> *Rebuild times?

>> Can't give you an exact since it's been a while since I tested the
>> original rebuild, but we've migrated the RAID set (and volume) twice
>> since getting the system and the migrations happened within hours.
>> I was able to expand the RAID Set (adding drives) and expand the
>> corresponding volume set to fill the drives, all while the system was
>> running, without a hitch.

> So you increased the size of a file-system on-the-fly?

Not a file-system, but a volume. I'm partitioning the volume into 800GB
chunks for this particular situation. We just did it for the last time,
so I have some numbers.

Previous configuration:
  11 WD4000YR 400GB drives
  RAID 6, 3600GB volume
  4 800GB partitions (using gpt)
  Remaining 400GB unused

Added:
  5 WD4000YR 400GB drives

Time to expand RAID set: 12 hours
Time to expand volume:   56 minutes

New volume:
  RAID 6, 5600GB
  7 800GB partitions

During the RAID Set expansion, the Areca fills out the volume from the 11
drives to the 16 drives, so it's a lot of writing. It basically rewrote
all 3600GB of existing data, which accounts for the 12 hours. Expanding
the volume "initializes" the extra space, and once that's done FreeBSD
sees the "new" larger volume. The Areca doesn't touch the first part of
the volume when expanding it, so existing data isn't destroyed. Of
course, if you modified a volume set to make it smaller, you'd mostly be
out of luck.

I didn't have to reboot during any of this process. The most I had to do
was unmount the 4 existing partitions so that I had write access to the
volume (gpt doesn't allow write access while partitions are mounted),
then run "gpt recover" to recover the secondary partition table at the
end of the volume. After that, it was just a simple matter of adding the
3 new partitions and mounting them.

The "time to expand volume" above was actually spent generating RAID 6
parity data for the additional 2 terabytes, so it should give a good idea
of the speed of the XOR engine. This was at the maximum of 80%
utilization for the background process.
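For reference, the unmount / recover / repartition / remount sequence
described above can be sketched roughly as below with the FreeBSD 6-era
gpt(8) tool. This is a sketch only: the device name (da0), mount points,
and resulting partition indices are my assumptions for illustration, not
taken from the original setup, and the commands obviously need to run
against the real array device.

```shell
#!/bin/sh
# Sketch of the online-expansion steps after the Areca volume grew.
# Assumptions (not from the original post): array appears as da0,
# the four existing file systems are mounted under /vol/p1../p4,
# and the three new partitions come up as da0p5..da0p7.

# 800GB (decimal, matching the drive sizes) in 512-byte sectors.
SECTORS_800GB=1562500000

# 1. Unmount the existing partitions so gpt(8) gets write access
#    to the underlying device.
umount /vol/p1 /vol/p2 /vol/p3 /vol/p4

# 2. Recover the secondary (backup) GPT: it lives at the end of the
#    device, which moved 2TB further out when the volume was expanded.
gpt recover da0

# 3. Add the three new 800GB UFS partitions in the new free space.
gpt add -s ${SECTORS_800GB} -t ufs da0
gpt add -s ${SECTORS_800GB} -t ufs da0
gpt add -s ${SECTORS_800GB} -t ufs da0

# 4. Create file systems on the new partitions, then remount everything
#    (assuming fstab entries exist for the mount points).
newfs /dev/da0p5
newfs /dev/da0p6
newfs /dev/da0p7
mount -a
```

The sector count is just 800 * 10^9 / 512; gpt(8) takes sizes in
sectors, so getting this arithmetic right matters more than it would
with a tool that accepts human-readable sizes.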
I suspect it would have been a little quicker if I had rebooted and used
the BIOS menu to expand (since it would have run as a foreground
process), but it's nice to be able to keep the system in use while the
processes were running.

Jaime Bozza