Date: Thu, 22 Aug 2013 08:19:05 -0700
From: Ravi Pokala <rp_freebsd@mac.com>
To: freebsd-geom@freebsd.org
Subject: Re: Bootable RAID10 on 9.0-RELEASE
Message-ID: <CE3B7737.EE69D%rpokala@mac.com>
In-Reply-To: <mailman.89.1377172803.35175.freebsd-geom@freebsd.org>
-----Original Message-----
>From: freeiron <gul@ironsystems.com>
>To: freebsd-geom@freebsd.org
>Subject: Re: Bootable RAID10 on 9.0-RELEASE
>Message-ID: <1377127128386-5837809.post@n5.nabble.com>
>Content-Type: text/plain; charset=us-ascii
>
>Hi Ravi,
>
>Can you please help with software RAID 10 on LSI 9207-8I adapter with 20
>drives on FreeBSD 9.1. i am new to FreeBSD and will appreciate your help.
>Seems like you have mentioned something using with LSI drivers and then i
>might not have to use a spare drive.
>
>--
>View this message in context:
>http://freebsd.1045724.n5.nabble.com/Bootable-RAID10-on-9-0-RELEASE-tp5437647p5837809.html
>Sent from the freebsd-geom mailing list archive at Nabble.com.

Hi freeiron,

As described in the last post on that thread
(http://freebsd.1045724.n5.nabble.com/Bootable-RAID10-on-9-0-RELEASE-tp5437647p5452951.html),
I ended up just using the LSI controller firmware to create the RAID10:

| When I got a chance to play with the actual hardware, I found that it
| has an LSI SAS controller which is supported by mfi(4). I ended up
| setting up the RAID10 in the pre-boot environment, then just creating
| GPT partitions on mfid0 and going from there. *Much* easier (once I
| dug up the documentation on LSI's website), works fine, and the
| interface offered by `mfiutil' looks pretty reasonable.

FreeBSD recognized my controller as being supported by mfi(4), so the
resulting array showed up as a single drive named 'mfid0', which I then
treated in the usual manner in `bsdinstall' (i.e. partitioned it with GPT,
created filesystems, etc.; a sketch of the gpart steps appears at the end
of this message).

The only thing that gave me trouble was that I accidentally configured one
of the servers with a very small stripe size (8KB?), which led to terrible
performance; nuking the array and reconfiguring it with a 64KB stripe
yielded *much* better performance.

The array can be managed, and its status reported, from a running system
using mfiutil(8):

| [server:~] root% mfiutil show adapter
| mfi0 Adapter:
|     Product Name: Supermicro SMC2108
|    Serial Number:
|         Firmware: 12.12.0-0047
|      RAID Levels: JBOD, RAID0, RAID1, RAID5, RAID6, RAID10, RAID50
|   Battery Backup: not present
|            NVRAM: 32K
|   Onboard Memory: 512M
|   Minimum Stripe: 8k
|   Maximum Stripe: 1M
| [server:~] root% mfiutil show firmware
| mfi0 Firmware Package Version: 12.12.0-0047
| mfi0 Firmware Images:
| Name  Version                        Date         Time      Status
|  APP  2.120.53-1235                  Mar 25 2011  17:37:57  active
| BIOS  3.22.00_4.11.05.00_0x05020000  3/16/2011              active
| PCLI  04.04-017:#%00008              Oct 12 2010  11:32:58  active
| BCON  6.0-37-e_32-Rel                Mar 23 2011  10:30:10  active
| NVDT  2.09.03-0013                   Mar 29 2011  02:35:36  active
| BTBL  2.02.00.00-0000                Sep 16 2009  21:37:06  active
| BOOT  01.250.04.219                  4/28/2009    12:51:38  active
| [server:~] root% mfiutil -de show config
| mfi0 Configuration: 2 arrays, 1 volumes, 0 spares
|     array 0 of 2 drives:
|         drive  8 E1:S0 ( 558G) ONLINE <SEAGATE ST3600057SS 0008 serial=6SL195VK> SAS
|         drive  9 E1:S1 ( 558G) ONLINE <SEAGATE ST3600057SS 0008 serial=6SL0XCHV> SAS
|     array 1 of 2 drives:
|         drive 10 E1:S2 ( 558G) ONLINE <SEAGATE ST3600057SS 0008 serial=6SL1D7SB> SAS
|         drive 11 E1:S3 ( 558G) ONLINE <SEAGATE ST3600057SS 0008 serial=6SL1GFBT> SAS
|     volume mfid0 (1115G) RAID-1 64k OPTIMAL spans:
|         array 0
|         array 1
| [server:~] root% mfiutil show volumes
| mfi0 Volumes:
|    Id     Size    Level   Stripe  State    Cache    Name
|  mfid0 ( 1115G)  RAID-1      64k  OPTIMAL  Enabled

Of course, all that is only applicable if your controller is also
recognized by mfi(4). If not, then I can't really help you.

I hope that helps,

rp
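A minimal sketch of the gpart/newfs steps for a GPT-on-mfid0 layout like the
one described above; the device name, partition sizes, and labels are
illustrative assumptions, not taken from the original setup (bsdinstall does
the equivalent for you):

  # Illustrative only: adjust the device, sizes, and labels for your system.
  gpart create -s gpt mfid0                                 # new GPT on the array
  gpart add -t freebsd-boot -s 512k -i 1 mfid0              # small boot partition
  gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 mfid0  # install GPT+UFS boot code
  gpart add -t freebsd-swap -s 4g -l swap0 mfid0            # swap (size is arbitrary)
  gpart add -t freebsd-ufs -l root0 mfid0                   # rest of the volume for /
  newfs -U /dev/gpt/root0                                   # UFS with soft updates

If the controller is driven by mfi(4), a RAID10 volume can also be created
from a running system with mfiutil(8) instead of the pre-boot utility. A
sketch using the enclosure:slot names from the output above; the drive list
and the 64k stripe are assumptions for illustration, so check mfiutil(8) for
the exact syntax your version accepts:

  # Illustrative only: each comma-separated list is meant to be one mirrored pair.
  mfiutil create raid10 -s 64k E1:S0,E1:S1 E1:S2,E1:S3
  mfiutil show volumes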