From owner-freebsd-hackers@FreeBSD.ORG Sat Feb 11 20:44:04 2006
Message-Id: <200602112044.k1BKi3u8083329@gate.bitblocks.com>
To: freebsd-hackers@freebsd.org
Date: Sat, 11 Feb 2006 12:44:03 -0800
From: Bakul Shah
Subject: RAID5 on athlon64 machines
List-Id: Technical Discussions relating to FreeBSD

I built an Asus A8N SLI Deluxe based system and installed
FreeBSD-6.1-BETA1 on it.  This works well enough.  Now I am looking
for a decent RAID5 solution.

This motherboard has two SATA RAID controllers, but one does only
RAID1.  The other supports RAID5 but seems to require s/w assistance
from a Windows driver.  The BIOS does let you designate a set of
disks as a RAID5 group, but FreeBSD does not recognize it as a group
in any case.

I noticed that vinum is gone from -current and we have gvinum now,
but it does not implement all of the vinum commands.  That is OK if
it provides what I need.  I played with it a little bit.  Its
sequential read performance is OK (I am using 3 disks for RAID5 and
the read rate is twice the speed of one disk, as expected).  But the
write rate is abysmal!
I get about 12.5MB/s, or about 1/9 of the read rate.  So what gives?
Are there some magic stripe sizes for better performance?  I used a
stripe size of 279k as per the vinum recommendation.

Theoretically the sequential write rate should be the same as or
higher than the sequential read rate.  Given an N+1 disk array, for
an N-block read you XOR N+1 blocks and compare the result to 0, but
for an N-block write you XOR only N blocks to compute the parity.
So there is less work for large writes.

Which leads me to ask: is gvinum stable enough for real use, or
should I just get a h/w RAID card?  If the latter, any
recommendations?  What I'd like:

Critical:
- RAID5
- good write performance
- orderly shutdown (I noticed the vinum stop command is gone, but
  maybe it is not needed?)
- quick recovery from a system crash; it shouldn't have to rebuild
  the whole array
- parity check on reads (a crash may have rendered a stripe
  inconsistent)
- must not correct bad parity by rewriting a stripe

Nice to have:
- ability to operate in "degraded" mode, where one of the disks is
  dead
- ability to rebuild the array in the background
- commands to take a disk offline, associate a spare with a
  particular disk
- use a spare drive effectively
- allow a bad parity stripe for future writes
- allow rewriting parity under user control

Thanks!
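P.S. For concreteness, here is a small Python sketch of the parity
arithmetic above.  Block sizes and values are made up for
illustration; a real array works on whole stripes, not 4-byte toys.
The last part also shows why a sub-stripe write can be much more
expensive than a full-stripe one, which may be part of what gvinum
is running into:

```python
# Sketch of RAID5 parity arithmetic for an N+1 disk array
# (illustrative values only; N = 3 data blocks here).

def xor_blocks(*blocks: bytes) -> bytes:
    """XOR equal-sized blocks together, byte by byte."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

# Full-stripe write: XOR the N data blocks once to get the parity,
# then write all N+1 blocks.
data = [bytes([d] * 4) for d in (0x11, 0x22, 0x44)]
parity = xor_blocks(*data)

# Verified read: XOR all N+1 blocks; a consistent stripe XORs to zero.
check = xor_blocks(*data, parity)
assert check == bytes(4)

# Sub-stripe write of a single block is costlier: read the old data
# block and old parity first, then new_parity = old_parity ^ old_data
# ^ new_data (a read-modify-write cycle per small write).
new_block = bytes([0x5A] * 4)
new_parity = xor_blocks(parity, data[0], new_block)
assert xor_blocks(new_block, data[1], data[2], new_parity) == bytes(4)
```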