From: Lan Tran <lan@hangwithme.com>
Date: Wed, 12 Dec 2007 10:41:23 -0500
To: Ivan Voras
Cc: freebsd-hardware@freebsd.org
Subject: Re: large disk > 8 TB

Ivan Voras wrote:
> Michael Fuckner wrote:
>> Lan Tran wrote:
>>> I have a Dell PERC 6/E controller connected to an external Dell MD1000
>>> storage enclosure, on which I set up RAID 6. The RAID BIOS reports
>>> 8.5 TB. I installed 7.0-BETA4 amd64 and sysinstall/dmesg.boot detect
>>> this correctly:
>>>
>>> mfid1: on mfi1
>>> mfid1: 8578560MB (17568890880 sectors) RAID volume 'raid6' is optimal
>>>
>>> However, after I created a ZFS zpool on this device it only shows
>>> 185 GB:
>>>
>>> # zpool create tank /dev/mfid1s1d
>>> # zpool list
>>> NAME    SIZE    USED    AVAIL    CAP  HEALTH   ALTROOT
>>> tank    185G    111K     185G     0%  ONLINE   -
>>>
>>> and also with 'df -h':
>>>
>>> # df -h tank
>>> Filesystem    Size    Used    Avail  Capacity  Mounted on
>>> tank          182G      0B     182G        0%  /tank
>>>
>> The main purpose of ZFS is doing software RAID (which is even faster
>> than HW RAID nowadays).
>>
>> You should export all disks separately to the OS - and then you don't
>> have the 4GB limit wrapping the size to 185GB.
>>
> This is the wrong way around. Why would something wrap drive sizes at a
> 32-bit limit? The driver and the GEOM subsystem are 64-bit clean; if
> this is a problem in ZFS, it's a serious one.
>
> I don't have the drive capacity to create a large array, but I assume
> someone has tested ZFS on large arrays (Pawel?)
>
> Can you run "diskinfo -v" on the large array (the 8.5 TB one) and
> verify the system sees it all?

# diskinfo -v mfid1
mfid1
        512             # sectorsize
        8995272130560   # mediasize in bytes (8.2T)
        17568890880     # mediasize in sectors
        1093612         # Cylinders according to firmware.
        255             # Heads according to firmware.
        63              # Sectors according to firmware.

After some searching I realized that sysinstall cannot handle fdisk/disklabel
partitions larger than 2 TB, so it is not a ZFS issue. I deleted everything
and re-created it directly on the raw device with the 'newfs /dev/mfid1'
command, and I can see the 8 TB slice now.

I went back to hardware RAID because, while testing ZFS raidz2, the hot spare
did not kick in when one of the disks was pulled out of the bay. I think that
is because this is a RAID controller and not a JBOD card, so each disk is
exported to the OS as a RAID 0 (a rough sketch of the pool layout I was
testing is below).
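Something along these lines - this is only a minimal sketch, and the mfid
device names are purely illustrative (I am not sure what the controller
would actually call the fifteen exported single-disk volumes):

# zpool create tank raidz2 mfid1 mfid2 mfid3 mfid4 mfid5 mfid6 mfid7 \
        mfid8 mfid9 mfid10 mfid11 mfid12 mfid13 mfid14 spare mfid15
# zpool status tank

That makes a raidz2 vdev out of fourteen of the single-disk volumes and
keeps the fifteenth as a hot spare. As far as I can tell nothing on FreeBSD
brings the spare in automatically when a disk disappears; it has to be
swapped in by hand after the failure, something like:

# zpool replace tank mfid13 mfid15

(again, those device names are just placeholders).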
The Dell PERC 6/E also reorders the "disk groups" when a disk is missing.
There are 15 disk groups for 15 "virtual disks", labeled disk group 1 to 15,
and disk group 1 is mapped to virtual disk 1, and so on. After pulling out
disk 13, for example, the disk-group-to-virtual-disk mappings change and end
up mismatched. A JBOD card would work nicely with ZFS, but I don't see an
option in the card BIOS to make it act as a JBOD instead.

Thanks for all your responses. I'm a happy camper to see all the space in
one big fat slice :).

Lan
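P.S. For what it's worth, the 185 GB figure is consistent with the sector
count getting truncated to 32 bits somewhere in the fdisk/disklabel path
(the 2 TB limit), rather than with anything inside ZFS. A quick
back-of-the-envelope check with bc, using the numbers from the diskinfo
output above - total sectors modulo 2^32, then the remainder converted from
512-byte sectors to GiB:

# echo '17568890880 % 2^32' | bc
389021696
# echo '389021696 * 512 / 1024^3' | bc
185

which is the size zpool list reported for the pool.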