From: Borja Marcos
To: freebsd-scsi@freebsd.org
Subject: The bloody RAID/JBOD virus
Date: Tue, 11 Feb 2014 16:41:02 +0100

Hello,

We are again evaluating hardware to use FreeBSD with ZFS as a storage
server. And, of course, we are again banging our heads against the bloody
"intelligent" controllers.

This machine has two controllers. The first one is recognized by the mps
driver:

mps0: port 0x3f00-0x3fff mem 0x90ebc000-0x90ebffff,0x912c0000-0x912fffff irq 32 at device 0.0 on pci17
mps0: Firmware: 15.00.00.00, Driver: 16.00.00.00-fbsd
mps0: IOCCapabilities: 185c

and the second one (this is what I don't like at all) by the mfi driver.
mfi0: port 0x4f00-0x4fff mem 0x913f0000-0x913fffff,0x91400000-0x914fffff irq 34 at device 0.0 on pci22
mfi0: Using MSI
mfi0: Megaraid SAS driver Ver 4.23
mfi0: FW MaxCmds = 240, limiting to 128
mfi0: MaxCmd = 240, Drv MaxCmd = 128, MaxSgl = 70, state = 0xb73c00f0

So, again, we have the typical scenario of the RAID card sitting between
ZFS and the disks.

# camcontrol devlist
                  at scbus0 target 1 lun 0 (pass0,da0)

The machine actually has 24 disks.

# zpool status
  pool: clientes
 state: ONLINE
  scan: none requested
config:

        NAME            STATE     READ WRITE CKSUM
        clientes        ONLINE       0     0     0
          raidz2-0      ONLINE       0     0     0
            mfisyspd0   ONLINE       0     0     0
            mfisyspd1   ONLINE       0     0     0
            mfisyspd2   ONLINE       0     0     0
            mfisyspd3   ONLINE       0     0     0
            mfisyspd4   ONLINE       0     0     0
            mfisyspd5   ONLINE       0     0     0
            mfisyspd6   ONLINE       0     0     0
            mfisyspd7   ONLINE       0     0     0
            mfisyspd8   ONLINE       0     0     0
            mfisyspd9   ONLINE       0     0     0
            mfisyspd10  ONLINE       0     0     0
          raidz2-1      ONLINE       0     0     0
            mfisyspd11  ONLINE       0     0     0
            mfisyspd12  ONLINE       0     0     0
            mfisyspd13  ONLINE       0     0     0
            mfisyspd14  ONLINE       0     0     0
            mfisyspd15  ONLINE       0     0     0
            mfisyspd16  ONLINE       0     0     0
            mfisyspd17  ONLINE       0     0     0
            mfisyspd18  ONLINE       0     0     0
            mfisyspd19  ONLINE       0     0     0
            mfisyspd20  ONLINE       0     0     0
            mfisyspd21  ONLINE       0     0     0
        spares
          mfisyspd22    AVAIL

errors: No known data errors

But, again, the disks are defined as "JBOD", which I don't like at all.
In the past this formula has been a proven disaster: in similar
situations I've been unable to hot-swap disks without voodoo macumba
procedures (I consider needing "mfiutil" for a disk swap to be voodoo
macumba) and, of course, the real CAM subsystem has no visibility at all.

# mfiutil show adapter
mfi0 Adapter:
    Product Name: ServeRAID M5210e
   Serial Number: 3CJ0SG
        Firmware: 24.0.2-0013
     RAID Levels: JBOD, RAID0, RAID1, RAID10
  Battery Backup: not present
           NVRAM: 32K
  Onboard Memory: 0M
  Minimum Stripe: 64K
  Maximum Stripe: 64K

Actually, on several machines I use a patched driver that ignores all the
RAID crap and presents the disks as real SAS disks.
But I only use it on machines where a failure would not be a tragedy. It
has been working for years without incident, but OS updates are always
risky.

So, I would like to know:

1) Is this "mfisyspd" REALLY a disk? Will I notice any differences? So
far, I've attached SSDs and ZFS has created a pool with 512-byte blocks.
Note the difference between a "more or less like a disk" JBOD (which I
definitely do not want) and a real disk.

2) Is there a way to bypass all of that, or should I look for a
replacement HBA instead? It seems impossible to get manufacturers to ship
simple HBAs without that "intelligent" RAID thing.

Of course I may be wrong, and this card might be exactly what I want,
with no interference from the RAID functionality. But, so far, every time
I've seen so-called JBODs defined on RAID cards, they were actually
single-disk RAID0 logical volumes, which I don't want.

At least loading the mfip driver gives me access to the pass devices,
which is some progress. But I'm still not sure.

Sorry for the blunt message, but wherever I look I see these cards we
should not use with ZFS.

Thanks!

Borja.
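P.S. For what it's worth, this is how I've been checking the block-size
question from point 1. The pool's ashift is the log2 of the sector size
ZFS decided to write with, so zdb can show whether the mfisyspd devices
reported 512-byte or 4K sectors. This is only a sketch: it assumes the
pool name "clientes" from above and that the pool is imported on the host.

```shell
# On the storage box itself one would run (pool must be imported):
#   zdb -C clientes | grep ashift
# and then translate the reported ashift value, since it is the
# log2 of the logical sector size ZFS uses for the vdev:
for a in 9 12; do
  echo "ashift=$a -> $((1 << a))-byte sectors"
done
```

ashift=9 means ZFS saw plain 512-byte sectors (what a raw disk would
report), while ashift=12 means it detected 4K sectors.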