From owner-freebsd-hardware@FreeBSD.ORG Mon Dec 10 22:27:34 2007
Date: Mon, 10 Dec 2007 17:27:33 -0500
From: Lan Tran <lan@hangwithme.com>
To: Michael Fuckner
Cc: freebsd-hardware@freebsd.org
Subject: Re: large disk > 8 TB

Michael Fuckner wrote:
> Lan Tran wrote:
>
>> I have a Dell PERC 6/E controller connected to an external Dell
>> MD1000 storage array, which I set up as RAID 6. The RAID BIOS
>> reports 8.5 TB. I installed 7.0-BETA4 amd64, and sysinstall and
>> dmesg.boot detect it correctly:
>>
>> mfid1: <MFI Logical Disk> on mfi1
>> mfid1: 8578560MB (17568890880 sectors) RAID volume 'raid6' is optimal
>>
>> However, after I created a ZFS zpool on this device it only shows
>> 185 GB:
>>
>> # zpool create tank /dev/mfid1s1d
>> # zpool list
>> NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
>> tank                    185G    111K    185G     0%  ONLINE     -
>>
>> and likewise with 'df -h':
>>
>> # df -h tank
>> Filesystem    Size    Used   Avail Capacity  Mounted on
>> tank          182G      0B    182G     0%    /tank
>>
>
> The main purpose of ZFS is doing software RAID (which nowadays is
> even faster than hardware RAID).
>
> You should export all the disks separately to the OS; then you won't
> run into the 2 TB limit that wraps the size down to 185 GB.
>
> Regards,
> Michael

Hi Michael,

I took your advice and went with ZFS software RAID. There are 15 x
750 GB SATA2 disks, and I want the most space, easy management, and
reliability. Read/write performance is not a big concern since it's a
mail archiving system. I chose raidz2 plus 2 hot spares. I'm getting
about 65 MB/s write performance from 'dd'.

# zpool status
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz2    ONLINE       0     0     0
            mfid1   ONLINE       0     0     0
            mfid2   ONLINE       0     0     0
            mfid3   ONLINE       0     0     0
            mfid4   ONLINE       0     0     0
            mfid5   ONLINE       0     0     0
            mfid6   ONLINE       0     0     0
            mfid7   ONLINE       0     0     0
            mfid8   ONLINE       0     0     0
            mfid9   ONLINE       0     0     0
            mfid10  ONLINE       0     0     0
            mfid11  ONLINE       0     0     0
            mfid12  ONLINE       0     0     0
            mfid13  ONLINE       0     0     0
        spares
          mfid14    AVAIL
          mfid15    AVAIL

errors: No known data errors

Thanks!
Lan
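
P.S. For the archives: the layout above comes from a single zpool
command, roughly the following (same device names as in the status
output; treat this as a sketch rather than a copy of my shell
history):

# zpool create tank raidz2 mfid1 mfid2 mfid3 mfid4 mfid5 mfid6 mfid7 \
    mfid8 mfid9 mfid10 mfid11 mfid12 mfid13 spare mfid14 mfid15

The 'spare' keyword marks mfid14 and mfid15 as hot spares rather than
making them members of the raidz2 vdev.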
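
P.P.S. The 65 MB/s figure came from a simple sequential write test
along these lines (the exact block size and count here are
illustrative, not my literal command):

# dd if=/dev/zero of=/tank/testfile bs=1m count=8192

That writes 8 GB sequentially; keep in mind that streaming zeros
through the filesystem is only a rough throughput indicator, not a
real workload.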