Date:      Tue, 13 Nov 2012 17:46:44 -0800
From:      Julian Elischer <julian@freebsd.org>
To:        Jason Keltz <jas@cse.yorku.ca>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: RHEL to FreeBSD file server
Message-ID:  <50A2F804.3010009@freebsd.org>
In-Reply-To: <50A2B95D.4000400@cse.yorku.ca>
References:  <50A130B7.4080604@cse.yorku.ca> <20121113043409.GA70601@neutralgood.org> <alpine.GSO.2.01.1211131132110.14586@freddy.simplesystems.org> <50A2B95D.4000400@cse.yorku.ca>

On 11/13/12 1:19 PM, Jason Keltz wrote:
> On 11/13/2012 12:41 PM, Bob Friesenhahn wrote:
>> On Mon, 12 Nov 2012, kpneal@pobox.com wrote:
>>>
>>> With your setup of 11 mirrors you have a good mixture of read and
>>> write performance, but you've compromised on safety. The reason
>>> RAID 6 (and thus raidz2) and up were invented is that drives that
>>> get used together tend to fail together. If you lose a drive in a
>>> mirror, there is an elevated probability that the replacement
>>> drive will not be in place before the remaining leg of the mirror
>>> fails. If that happens, you've lost the pool. (Drive failures are
>>> _not_ independent.)
>>
>> Do you have a reference to independent data which supports this 
>> claim that drive failures are not independent?  The whole function 
>> of RAID assumes that drive failures are independent.
>>
>> If drives share a chassis, care should be taken to make sure that 
>> redundant drives are not in physical proximity to each other and 
>> that they are supported via a different controller, I/O path, and 
>> power supply.  If the drives are in a different chassis then their 
>> failures should be completely independent outside of a shared event 
>> like power surge, fire, EMP, flood, or sun-spot activity.
>>
>> The idea of raidz2 vdevs of four drives each sounds nice but will
>> suffer from decreased performance and increased time to replace a
>> failed disk. There are always tradeoffs.
>
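
For concreteness, a minimal sketch of such a layout, assuming
hypothetical FreeBSD device names (da0-da3 on one controller, da4-da7
on the other):

    # Two 4-disk raidz2 vdevs; each vdev survives any two drive
    # failures, at the cost of 50% usable capacity and slower
    # resilvers than a mirror.
    zpool create tank \
        raidz2 da0 da1 da2 da3 \
        raidz2 da4 da5 da6 da7
    zpool status tank
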
> Hi Bob.
>
> Initially, I had one storage chassis, split between two LSI 9205-8e
> controllers, with a 22-disk pool composed of 11 mirrored vdevs.
> I'm still slightly uncomfortable with the fact that two disks, all
> purchased at the same time, could die at essentially the same time
> and kill the whole pool. Yet while moving to raidz2 would give
> better redundancy, I'm not sure the raidz2 rebuild time and decrease
> in performance would be worth it; after all, this would be a
> primary file server, without which I'd be in big trouble.
> As a result, I'm considering this approach: buy another MD1220, a
> few more disks, and another 9205-8e card, and use triple-mirrored
> vdevs instead of dual. I only really need about 8 x 900 GB of
> storage, so I can multiply that by three and add a few spares; in
> addition, each set of disks would be on its own controller. I
> should be able to lose a controller and maintain full redundancy,
> and lose an entire disk enclosure and still be up. I believe read
> performance would probably go up, but I suspect write performance
> would suffer a little -- not sure exactly by how much.
>
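
A minimal sketch of the triple-mirror layout described above, assuming
hypothetical device names (da0-da7 in the first enclosure/controller,
da8-da15 in the second, da16-da23 in the third, so each vdev has one
leg per controller):

    # Eight 3-way mirrors: roughly 8 x 900 GB usable. Any two disks
    # in a vdev -- or a whole controller/enclosure -- can fail
    # without losing the pool.
    zpool create tank \
        mirror da0 da8  da16 \
        mirror da1 da9  da17 \
        mirror da2 da10 da18 \
        mirror da3 da11 da19 \
        mirror da4 da12 da20 \
        mirror da5 da13 da21 \
        mirror da6 da14 da22 \
        mirror da7 da15 da23
    # Hot spares (hypothetical additional disks).
    zpool add tank spare da24 da25 da26
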
> When I first specced out the server, the LSI 9205-8e was the best
> choice for a card, since the PCI Express 3 HBAs (which the R720
> supports) weren't out yet. Now there's the LSI 9207-8e, which is
> PCIe 3, but I guess it doesn't make much sense to buy one of those
> now that I already have another two LSI 9205-8e cards (a shame,
> though, since there is less than $50 difference between the cards).
>
> By the way, on another note: what do you or other list members
> think of the new Intel SSD DC S3700 as a ZIL? It sounds very
> promising, once it's finally available. I spent a lot of time
> researching ZILs today, and one thing I can say is that I now have
> a major headache because of it!

ZIL is best served by battery-backed RAM or something similar; it's
tiny and not really a good fit for a whole SSD (maybe just a
partition). L2ARC, on the other hand, is a really good use for an
SSD.
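
For example, something along these lines on FreeBSD, with hypothetical
SSDs ada1 and ada2 (the log only needs a few GB, so a small partition
is plenty; note that on old pool versions, losing an unmirrored log
device could leave the pool unimportable):

    # Carve a small partition out of one SSD for the ZIL (SLOG).
    gpart create -s gpt ada1
    gpart add -t freebsd-zfs -s 8g -l zil0 ada1
    zpool add tank log gpt/zil0
    # Use the other SSD as L2ARC; cache devices need no redundancy,
    # since their contents can be re-read from the pool at any time.
    zpool add tank cache ada2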

>
> Jason.
>



