From: Maciej Jan Broniarz <gausus@gausus.net>
To: andy thomas
Cc: Rich, freebsd-fs
Date: Sun, 20 Jan 2019 10:24:05 +0100 (CET)
Subject: Re: ZFS on Hardware RAID

Hi,

I am thinking about the scenario with ZFS on single disks configured as
RAID0 by the hardware RAID controller.

Please correct me if I'm wrong, but hardware RAID uses a dedicated unit
to process all RAID-related work (e.g. parity checks). With ZFS, that
job is done by the CPU. How significant is the performance loss in that
particular case?

mjb

----- Original message -----
From: "andy thomas"
To: "Rich"
Cc: "Maciej Jan Broniarz", "freebsd-fs"
Sent: Sunday, 20 January 2019, 9:45:21
Subject: Re: ZFS on Hardware RAID

I have to agree with your comment that hardware RAID controllers add
another layer of opaque complexity but, for what it's worth, I have to
admit ZFS on h/w RAID does work and can work well in practice.
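As for the software-parity overhead raised earlier in the thread: the
single-parity case is just a running XOR across the data columns, which a
modern CPU computes at close to memory bandwidth. The toy Python sketch
below illustrates the idea only; it is not ZFS's actual implementation
(and RAIDZ2/RAIDZ3 use Galois-field arithmetic for the extra parity
columns, not plain XOR):

```python
def xor_parity(blocks):
    """Compute the XOR parity block for equal-sized data blocks,
    as a single-parity scheme (RAID5/RAIDZ1-style) would."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

def reconstruct(surviving, parity):
    """Rebuild the one missing data block: XOR of the survivors
    and the parity cancels everything except the lost block."""
    return xor_parity(surviving + [parity])

data = [b"disk0data", b"disk1data", b"disk2data"]  # three "data disks"
p = xor_parity(data)

# Lose "disk1"; rebuild its contents from the other disks plus parity.
rebuilt = reconstruct([data[0], data[2]], p)
assert rebuilt == data[1]
```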
I run a number of very busy webservers (Dell PowerEdge 2950 with LSI
MegaRAID SAS 1078 controllers) with the first two disks in RAID 1 as the
FreeBSD system disk, and the remaining four disks configured as
single-disk RAID 0 virtual disks making up a ZFS RAIDz1 pool of three
disks plus one hot spare.

With 6-10 jails running on each server, these have been running for
years with no problems at all.

Andy

On Sat, 19 Jan 2019, Rich wrote:

> The two caveats I'd offer are:
> - RAID controllers add an opaque complexity layer if you have problems.
>   E.g. if you're using single-disk RAID0s to make a RAID controller
>   pretend to be an HBA and a disk starts misbehaving, you have an
>   additional layer of behaviour (how the RAID controller interprets the
>   misbehaviour and presents it to the OS) to work through before you
>   can tell whether the drive is bad, the connection is loose, the
>   controller is bad, ...
> - abstracting the redundancy away from ZFS means that ZFS can't
>   recover when it knows there's a problem but the underlying RAID
>   controller doesn't. That is, say you made a RAID-6 and ZFS sees some
>   block fail its checksum. There's no way to tell the controller "that
>   block was wrong, retry the read with different disks", so you're
>   left with data loss on your nominally "redundant" array.
>
> - Rich
>
> On Sat, Jan 19, 2019 at 11:44 AM Maciej Jan Broniarz wrote:
>>
>> Hi,
>>
>> I want to use ZFS on a hardware RAID array. I have no option of
>> making it JBOD. I know it is best to use ZFS on JBOD, but that's not
>> possible in this particular case. My question is: how bad an idea is
>> it? I have read very different opinions on the subject, but none of
>> them seems conclusive.
>>
>> Any comments and especially case studies are most welcome.
>> All best,
>> mjb
>> _______________________________________________
>> freebsd-fs@freebsd.org mailing list
>> https://lists.freebsd.org/mailman/listinfo/freebsd-fs
>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"

----------------------------
Andy Thomas, Time Domain Systems

Tel: +44 (0)7866 556626
Fax: +44 (0)20 8372 2582
http://www.time-domain.co.uk
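Rich's self-healing point can be sketched in miniature: ZFS stores a
checksum with each block, and when it controls the redundancy itself it
can retry another copy and repair the bad one; behind a single opaque
hardware-RAID volume it can only report the error. A toy Python model of
that behaviour, not ZFS code:

```python
import hashlib

def checksum(data: bytes) -> bytes:
    # Stand-in for ZFS's per-block checksum (fletcher4/sha256 in ZFS).
    return hashlib.sha256(data).digest()

def read_with_verify(copies):
    """Return the first stored copy whose checksum verifies.
    With real redundancy there is a second copy to fall back on;
    with one hw-RAID volume there is nothing else to try."""
    for data, stored_cksum in copies:
        if checksum(data) == stored_cksum:
            return data
    return None  # every copy failed verification: detected, not repaired

good = b"payload"
stored = checksum(good)

# ZFS-managed mirror: one copy silently corrupted, the other intact.
mirror = [(b"payl0ad", stored), (good, stored)]
assert read_with_verify(mirror) == good      # bad copy skipped

# Single hw-RAID volume: ZFS sees only the one (corrupt) copy.
single = [(b"payl0ad", stored)]
assert read_with_verify(single) is None      # error detected, no recovery
```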