Date:      Thu, 27 Dec 2018 08:57:44 -0500
From:      Andrew Heybey <ath@heybey.org>
To:        freebsd-fs@freebsd.org
Subject:   Re: Suggestion for hardware for ZFS fileserver
Message-ID:  <814be2c2-4c09-9f1c-2a99-ed5a4c9bf7e7@heybey.org>
In-Reply-To: <1d76f92c-6665-81ef-1b94-dc1b4b8925d1@denninger.net>
References:  <CAEW%2BogZnWC07OCSuzO7E4TeYGr1E9BARKSKEh9ELCL9Zc4YY3w@mail.gmail.com> <C839431D-628C-4C73-8285-2360FE6FFE88@gmail.com> <CAEW%2BogYWKPL5jLW2H_UWEsCOiz=8fzFcSJ9S5k8k7FXMQjywsw@mail.gmail.com> <4f816be7-79e0-cacb-9502-5fbbe343cfc9@denninger.net> <3160F105-85C1-4CB4-AAD5-D16CF5D6143D@ifm.liu.se> <YQBPR01MB038805DBCCE94383219306E1DDB80@YQBPR01MB0388.CANPRD01.PROD.OUTLOOK.COM> <20181223113031.00005150@Leidinger.net> <YQBPR01MB038868AC3D6BAC5C6FB40C9CDDBB0@YQBPR01MB0388.CANPRD01.PROD.OUTLOOK.COM> <CACpH0Md5y%2BSFTHbRL=OzP9joG60gKStOkoK3GrZqTYHO97k_FA@mail.gmail.com> <1d76f92c-6665-81ef-1b94-dc1b4b8925d1@denninger.net>

On 12/24/18 7:02 PM, Karl Denninger wrote:
> On 12/24/2018 17:13, Zaphod Beeblebrox wrote:
>> [ regarding ZFS hardware thread ]
>>
>> There's another type of server --- the "ghetto" or home storage server.  For
>> this server, I like to optimize for not losing data, not for uptime.
>>
>> Going back a few years, there were consumer motherboards with 10 or 12 SATA
>> ports onboard.  Mostly, this was at the changeover between technologies ... so
>> you had some of one kind of port and some of another.  Used SAS HBAs are
>> another option ... but they have a caveat: many SATA drives will eventually be
>> rejected by them under load.  Good SATA drives won't (but again, we're talking
>> a ghetto system).  If you're talking WD Reds (and not, say, Seagate
>> Barracudas) ... these work well.  On the Seagates, however, I've had drives
>> repeatedly fail ... only to go on working fine in a workstation with a SATA
>> controller.
> 
> I've run "ghetto mode" fileservers with the LSI adapters in IT mode
> (that always just seem to work) with one of their SAS ports connected to
> a SAS expander, and then fanned THAT out to SATA drives.  The only
> constraint is that you can run into problems booting from an expander,
> so don't -- use the ports on the HBA (or even the motherboard) for the
> boot drives.
> 
> Never had a problem doing this with HGST drives, Intel SSDs and most
> others.  The Seagates I've had fail actually physically failed; they
> didn't throw a protocol hissy fit on the bus.  I don't buy Seagates any
> more as I've had too many die out-of-warranty for my taste.  The adapters
> and expanders work fine with WD drives too.  Never had one of the failed
> drives cause a cascade detach event either.  For the last few years (five
> or so), HGST seems to sell the most reliable spinning rust in my
> experience, but YMMV on that.
> 
> Those adapters and expanders are cheap these days.  The expanders used
> to be expensive, but not any more -- there's a ton of them around on the
> secondary market for very little money (not much more than the LSI
> cards.)  Their only downside is they run hot so you need good fan
> coverage in the case.
> 
> Older SuperMicro boards (X8DTL series) that will take the 5600-series
> Westmere Xeon processors can be had for almost nothing (note that you have
> to have the latest BIOS in them, which can be flashed, to run the Westmere
> processors), and the CPUs are a literal $25.  The only "gotcha" is that you
> need ECC memory, but if you can find it used at a decent price you're
> golden.  I would NOT run a ZFS filesystem without ECC memory; the risk
> of undetected data corruption that you don't catch for months or even
> years is material, and if it happens you WILL cry, since your backup
> coverage may have expired by that point.

My current "ghetto" server in the basement is an AMD Phenom 8-core,
using SATA drives plugged into the motherboard, and 32GB ECC RAM.  It is
getting long in the tooth and I have been contemplating replacements.
The existing one replaced an old Intel server chassis ("Jarrell" IIRC)
when I realized that I could pay for the new server in a year with the
power savings.  The basement is cooler now too.

I am considering going back to a "real" server and getting a Dell server a
generation or two old, like the R710, R520 or R720, and putting a couple of
8-core low-power Xeons like the E5-2648L in it.  The Dells on eBay often
seem to come with a RAID controller that can be flashed with the "IT"
firmware for JBOD use with ZFS, and DDR3 ECC server RAM is relatively cheap.
The low-power Xeons are 65W TDP (compared to my current AMD at 125W), so
it shouldn't use too much more power (maybe?).  For $500-$750 I can get
lots of (slow) cores, 64-128GB RAM and 8 or 12 disk slots.
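
For what it's worth, once a controller is crossflashed it is easy to
sanity-check from FreeBSD that the drives really show up as plain disks
rather than RAID volumes.  Roughly like this (the pool name is made up,
and whether it's mpsutil(8) or mprutil(8) depends on which driver the
particular card attaches to):

    mpsutil show adapter      # controller model and firmware revision
    camcontrol devlist        # each drive should appear as an individual disk
    zpool status tank         # "tank" is just an example pool name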

My idea for keeping my RAIDZ2 disk array from crapping out on me is ECC
RAM, scrubbing every week, and replacing the oldest drive every year or
so (or whenever SMART complains about reallocated sectors).  I started
out with 1.5, 2 and 3TB drives, and am up to 3, 4 and 6TB drives.  (I use
partitions for the ZFS array so that I can use the extra space on the
larger drives for other stuff.)  As the older, smaller drives get
replaced with larger ones, the array grows in size to (hopefully) keep
up with my needs.  I also then have a variety of drives (by both
manufacturer and date of manufacture), so hopefully I won't stumble
across the same bug on multiple drives at the same time.
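
In case it's useful to anyone, that routine boils down to a handful of
stock FreeBSD knobs and commands.  This is only a sketch; the pool name
("tank"), the GPT labels, the device names and the sizes are just
examples, and smartctl comes from the smartmontools port:

    # /etc/periodic.conf: have periodic(8) kick off a scrub about weekly
    daily_scrub_zfs_enable="YES"
    daily_scrub_zfs_default_threshold="7"

    # check a drive for reallocated sectors
    smartctl -A /dev/ada3 | grep -i reallocated

    # swap out the oldest drive: partition the new disk, resilver onto it
    gpart create -s gpt ada4
    gpart add -t freebsd-zfs -s 5T -l tank-slot3 ada4   # leave the rest for other stuff
    zpool replace tank gpt/tank-slot2 gpt/tank-slot3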

andrew


