Date: Fri, 3 May 2019 09:09:29 +0200
From: Borja Marcos <borjam@sarenet.es>
To: Michelle Sullivan <michelle@sorbs.net>
Cc: Xin LI <delphij@gmail.com>, owner-freebsd-stable@freebsd.org, Andrea Venturoli <ml@netfence.it>, freebsd-stable <freebsd-stable@freebsd.org>, rainer@ultra-secure.de
Subject: Re: ZFS...
Message-ID: <58DA896C-5312-47BC-8887-7680941A9AF2@sarenet.es>
In-Reply-To: <fe6880bc-d40a-2377-6bea-28bfd8229e9f@sorbs.net>
References: <30506b3d-64fb-b327-94ae-d9da522f3a48@sorbs.net> <CAOtMX2gf3AZr1-QOX_6yYQoqE-H+8MjOWc=eK1tcwt5M3dCzdw@mail.gmail.com> <56833732-2945-4BD3-95A6-7AF55AB87674@sorbs.net> <3d0f6436-f3d7-6fee-ed81-a24d44223f2f@netfence.it> <17B373DA-4AFC-4D25-B776-0D0DED98B320@sorbs.net> <70fac2fe3f23f85dd442d93ffea368e1@ultra-secure.de> <70C87D93-D1F9-458E-9723-19F9777E6F12@sorbs.net> <CAGMYy3tYqvrKgk2c==WTwrH03uTN1xQifPRNxXccMsRE1spaRA@mail.gmail.com> <fe6880bc-d40a-2377-6bea-28bfd8229e9f@sorbs.net>
> On 1 May 2019, at 04:26, Michelle Sullivan <michelle@sorbs.net> wrote:
>
> mfid8 ONLINE 0 0 0

Anyway, I think this is a mistake (mfid). I know, HBA makers have been insisting on having their firmware get in the middle, which is a bad thing.

The right way to use disks is to give ZFS access to the plain CAM devices, not through some so-called JBOD on a RAID controller, which, at least for a long time, has been a *logical* "RAID0" volume on a single disk. That additional layer can completely break the semantics of transaction writes and cache flushes.

With some older cards this can be tricky to achieve, ranging from patching source drivers to enabling a sysctl tunable, or even flashing the card to turn it into a plain HBA with no RAID features (or minimal ones).

If your drives are not called /dev/daX or /dev/adaX you are likely to be in trouble. Unless something has really changed recently, you don't want "mfid" or "mfisyspd".

I have suffered hidden data corruption due to a faulty HBA and failures of old disks, and in all cases ZFS has survived brilliantly. ZFS actually works on somewhat unreliable hardware. The problem is not imperfect hardware, but *evil* hardware with firmware based on assumptions that won't work with ZFS.

But I agree, non-ECC memory can be a problem. In my case all of the servers had ECC.

Borja.
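As an illustration (not part of the original message): one quick way to spot vdevs that sit behind the mfi(4) RAID firmware layer is to scan the device names in `zpool status` output. This is only a sketch; the sample input below stands in for real `zpool status` output on a live system.

```shell
# Sketch: flag pool members whose names indicate the mfi(4) RAID
# driver is in the I/O path. da*/ada* are plain CAM devices;
# mfid*/mfisyspd* mean RAID firmware sits between ZFS and the disk.
# The variable below is sample input standing in for real
# `zpool status` output.
zpool_output='da0      ONLINE
mfid8    ONLINE'
printf '%s\n' "$zpool_output" | \
  awk '$1 ~ /^(mfid|mfisyspd)/ {print $1 ": RAID firmware layer in the path"}'
# prints: mfid8: RAID firmware layer in the path
```

On a real system you would pipe `zpool status <pool>` into the awk filter instead of the sample text.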