Date:      Sun, 20 Jan 2019 09:59:25 +0000 (GMT)
From:      andy thomas <andy@time-domain.co.uk>
To:        Maciej Jan Broniarz <gausus@gausus.net>
Cc:        Rich <rincebrain@gmail.com>, freebsd-fs <freebsd-fs@freebsd.org>
Subject:   Re: ZFS on Hardware RAID
Message-ID:  <alpine.BSF.2.21.1901200930550.12592@mail0.time-domain.co.uk>
In-Reply-To: <1691666278.63816.1547976245836.JavaMail.zimbra@gausus.net>
References:  <1180280695.63420.1547910313494.JavaMail.zimbra@gausus.net> <92646202.63422.1547910433715.JavaMail.zimbra@gausus.net> <CAOeNLurgn-ep1e=Lq9kgxXK%2By5xqq4ULnudKZAbye59Ys7q96Q@mail.gmail.com> <alpine.BSF.2.21.1901200834470.12592@mail0.time-domain.co.uk> <1691666278.63816.1547976245836.JavaMail.zimbra@gausus.net>

I don't think h/w RAID controllers do any parity checking, etc, for a 
RAID 0 virtual disk containing only one disk - with a single drive there 
is no parity or mirror to maintain, so the controller should add little 
or no overhead.

I know ZFS on h/w RAID can't possibly be optimal and that JBOD, 
pass-through or a plain HBA is to be preferred at all times, but older 
RAID controllers don't support non-RAID operation. After all, h/w RAID 
controller design was aimed primarily at the Windows Server market, and 
at that time Windows didn't support any kind of software RAID scheme (it 
might do now, I don't know).

All I can say is ZFS on h/w RAID does work and we've been using it in 
production for years. I also have a system at home with FreeBSD 
installed as ZFS on root on an LSI SAS RAID controller with four RAID 0 
virtual disks, again with no problems.
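
For anyone setting up something similar on an mfi(4)-based LSI 
controller, the single-drive RAID 0 volumes can usually be created from 
FreeBSD itself with mfiutil(8). This is a rough sketch only - the drive 
IDs below are examples and will differ on your hardware:

    # list the physical drives attached to the controller
    mfiutil show drives

    # 'jbod' creates one single-drive RAID 0 volume per drive given,
    # so each drive appears to the OS as its own mfidN device
    mfiutil create jbod 4 5 6 7

    # confirm the new virtual disks
    mfiutil show volumes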

Below is the output from one of our servers:

root@penguin14:~ # uname -a
FreeBSD penguin14 9.3-RELEASE FreeBSD 9.3-RELEASE #0 r268512: Thu Jul 10 
23:44:39 UTC 2014     root@snap.freebsd.org:/usr/obj/usr/src/sys/GENERIC 
amd64

root@penguin14:~ # uptime
  9:39AM  up 1575 days, 17:52, 2 users, load averages: 0.27, 0.26, 0.28

root@penguin14:~ # zpool status
   pool: penguin14_tank
  state: ONLINE
   scan: none requested
config:

         NAME         STATE     READ WRITE CKSUM
         penguin14_tank  ONLINE       0     0     0
           raidz1-0   ONLINE       0     0     0
             mfid1p1  ONLINE       0     0     0
             mfid2p1  ONLINE       0     0     0
             mfid3p1  ONLINE       0     0     0
         spares
           mfid4p1    AVAIL

errors: No known data errors
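
For reference, a pool laid out like the one above could have been 
created along these lines (a sketch only, assuming the mfid*p1 GPT 
partitions already exist on the RAID 0 virtual disks):

    # three-disk raidz1 vdev on the single-disk RAID 0 volumes, plus a
    # hot spare - ZFS, not the controller, provides the redundancy
    zpool create penguin14_tank raidz1 mfid1p1 mfid2p1 mfid3p1 spare mfid4p1

    # check the layout and scrub periodically so ZFS can verify checksums
    zpool status penguin14_tank
    zpool scrub penguin14_tank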

root@penguin14:~ # jls -h jid host.hostname
jid host.hostname
19 penguin14web4
20 penguin14web3
21 penguin14web2
23 penguin14ssl
24 p14mysql55
25 noticeboard_3pb

Andy

On Sun, 20 Jan 2019, Maciej Jan Broniarz wrote:

> Hi,
>
> I am thinking about the scenario with ZFS on single disks configured as RAID 0 by h/w RAID.
> Please correct me if I'm wrong, but h/w RAID uses a dedicated unit to process all RAID-related work (e.g. parity checks).
> With ZFS the job is done by the CPU. How significant is the performance loss in that particular case?
>
> mjb
>
>
> ----- Original message -----
> From: "andy thomas" <andy@time-domain.co.uk>
> To: "Rich" <rincebrain@gmail.com>
> Cc: "Maciej Jan Broniarz" <gausus@gausus.net>, "freebsd-fs" <freebsd-fs@freebsd.org>
> Sent: Sunday, 20 January 2019 9:45:21
> Subject: Re: ZFS on Hardware RAID
>
> I have to agree with your comment that hardware RAID controllers add
> another layer of opaque complexity but, for what it's worth, ZFS on
> h/w RAID does work and can work well in practice.
>
> I run a number of very busy webservers (Dell PowerEdge 2950 with LSI
> MegaRAID SAS 1078 controllers) with the first two disks in RAID 1 as the
> FreeBSD system disk and the remaining 4 disks configured as RAID 0 virtual
> disks making up a ZFS RAIDz1 pool with 3 disks plus one hot spare.
> With 6-10 jails running on each server, these have been running for
> years with no problems at all.
>
> Andy
>
> On Sat, 19 Jan 2019, Rich wrote:
>
>> The two caveats I'd offer are:
>> - RAID controllers add an opaque complexity layer if you have problems
>> - e.g. if you're using single-disk RAID0s to make a RAID controller
>> pretend to be an HBA, if the disk starts misbehaving, you have an
>> additional layer of behavior (how the RAID controller interprets
>> drives misbehaving and shows that to the OS) to figure out whether the
>> drive is bad, the connection is loose, the controller is bad, ...
>> - abstracting the redundancy away from ZFS means that ZFS can't
>> recover if it knows there's a problem but the underlying RAID
>> controller doesn't - that is, say you made a RAID-6, and ZFS sees some
>> block fail checksum. There's not a way to say "hey that block was
>> wrong, try that read again with different disks" to the controller, so
>> you're just sad at data loss on your nominally "redundant" array.
>>
>> - Rich
>>
>> On Sat, Jan 19, 2019 at 11:44 AM Maciej Jan Broniarz <gausus@gausus.net> wrote:
>>>
>>> Hi,
>>>
>>> I want to use ZFS on a hardware RAID array. I have no option of making it JBOD. I know it is best to use ZFS on JBOD, but
>>> that is not possible in this particular case. My question is: how bad an idea is it? I have read very different opinions on that subject, but none of them seems conclusive.
>>>
>>> Any comments and especially case studies are most welcome.
>>> All best,
>>> mjb
>>
>
>
> ----------------------------
> Andy Thomas,
> Time Domain Systems
>
> Tel: +44 (0)7866 556626
> Fax: +44 (0)20 8372 2582
> http://www.time-domain.co.uk
>
>


----------------------------
Andy Thomas,
Time Domain Systems

Tel: +44 (0)7866 556626
Fax: +44 (0)20 8372 2582
http://www.time-domain.co.uk


