From: Andrew Heybey <ath@heybey.org>
To: freebsd-fs@freebsd.org
Subject: Re: Suggestion for hardware for ZFS fileserver
Date: Thu, 27 Dec 2018 08:57:44 -0500
Message-ID: <814be2c2-4c09-9f1c-2a99-ed5a4c9bf7e7@heybey.org>
In-Reply-To: <1d76f92c-6665-81ef-1b94-dc1b4b8925d1@denninger.net>

On 12/24/18 7:02 PM, Karl Denninger wrote:
> On 12/24/2018 17:13, Zaphod Beeblebrox wrote:
>> [ regarding ZFS hardware thread ]
>>
>> There's another type of server --- the "ghetto" or home storage
>> server.  For this server, I like to optimize for not losing data,
>> not for uptime.
>>
>> Going back a few years, there were consumer motherboards with 10 or
>> 12 SATA ports onboard.  Mostly, this was at the change of
>> technologies ... so you had some of one kind of port and some of
>> another.  Used SAS HBAs are another option ... but they have a
>> caveat: many SATA drives will eventually reject them under load.
>> Good SATA drives won't (but again, we're talking a ghetto system).
>> If you're talking WD Reds (and not, say, Seagate Barracudas) ...
>> these work well.
>> On the Seagates, however, I've had drives repeatedly fail ... only
>> to go on working fine in a workstation with a SATA controller.
>
> I've run "ghetto mode" fileservers with the LSI adapters in IT mode
> (those always just seem to work) with one of their SFP ports
> connected to a SAS expander, and then fanned THAT out to SATA
> drives.  The only constraint is that you can run into problems
> booting from an expander, so don't -- use the ports on the HBA (or
> even the motherboard) for the boot drives.
>
> Never had a problem doing this with HGST drives, Intel SSDs and most
> others.  The Seagates I've had fail actually physically failed; they
> didn't throw a protocol hissy fit on the bus.  I don't buy Seagates
> any more as I've had too many die out of warranty for my taste.  The
> setup works fine with WD drives too.  Never had one of the drives
> that failed cause a cascade detach event either.  For the last few
> years (five or so), HGST seems to sell the most reliable spinning
> rust in my experience, but YMMV on that.
>
> Those adapters and expanders are cheap these days.  The expanders
> used to be expensive, but not any more -- there's a ton of them on
> the secondary market for very little money (not much more than the
> LSI cards).  Their only downside is that they run hot, so you need
> good fan coverage in the case.
>
> Older SuperMicro boards (the X8DTL series) that will take the
> 5600-series Westmere Xeon processors can be had for almost nothing
> (note you have to have the latest BIOS in them, which can be
> flashed, to run the Westmere processors), and the CPUs are a literal
> $25.  The only "gotcha" is that you need ECC memory, but if you can
> find it used at a decent price you're golden.  I would NOT run a ZFS
> filesystem without ECC memory; the risk of undetected data
> corruption that you don't catch for months or even years is
> material, and if it happens you WILL cry, since your backup coverage
> may have expired by that point.

My current "ghetto" server in the basement is an 8-core AMD Phenom
with the SATA drives plugged into the motherboard and 32 GB of ECC
RAM.  It is getting long in the tooth and I have been contemplating
replacements.  It replaced an old Intel server chassis ("Jarrell",
IIRC) when I realized that the power savings would pay for the new
server within a year.  The basement is cooler now, too.

I am considering going back to a "real" server: a generation- or
two-old Dell such as the R710, R520 or R720, with a couple of 8-core
low-power Xeons like the E5-2648L in it.  The Dells on eBay often
come with a RAID controller that can be flashed with "IT" firmware to
present the disks as JBOD for ZFS, and DDR3 ECC server RAM is
relatively cheap.  The low-power Xeons are 65 W TDP (compared to my
current AMD at 125 W), so it shouldn't draw too much more power
(maybe?).  For $500-$750 I can get lots of (slow) cores, 64-128 GB of
RAM and 8 or 12 disk slots.

My plan for keeping my RAIDZ2 array from crapping out on me is ECC
RAM, a scrub every week, and replacing the oldest drive every year or
so (or whenever SMART complains about reallocated sectors).  I
started out with 1.5, 2 and 3 TB drives and am now up to 3, 4 and
6 TB drives.  (I use partitions rather than whole disks for the ZFS
array so that I can use the extra space on the larger drives for
other stuff.)  As the older, smaller drives get replaced with larger
ones, the array grows to (hopefully) keep up with my needs.
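
Concretely, the scrub-and-replace routine is only a couple of knobs
and commands on FreeBSD.  Roughly -- the pool name "tank", the device
name and the GPT labels below are placeholders rather than my actual
layout, and smartctl comes from the sysutils/smartmontools port:

    # /etc/periodic.conf -- let periodic(8) start a scrub about weekly
    daily_scrub_zfs_enable="YES"
    daily_scrub_zfs_default_threshold="7"   # days between scrubs

    # keep an eye on reallocated sectors on each disk
    smartctl -A /dev/ada0 | grep -i reallocated

    # when a drive starts to look tired, swap it out and resilver
    zpool replace tank gpt/bay3-old gpt/bay3-new
    zpool status tank                        # watch the resilver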
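
The partitioning side is nothing fancy either; a sketch of the idea
(again with made-up sizes, labels and device names) looks like this:

    # GPT-partition a new, larger drive: a fixed-size slice for the
    # pool, with the leftover space kept for other stuff
    gpart create -s gpt da4
    gpart add -t freebsd-zfs -s 3T -l zfs-bay4 da4
    gpart add -t freebsd-ufs -l scratch-bay4 da4

    # once every member of the vdev sits on a larger partition,
    # let the pool grow into the new space
    zpool set autoexpand=on tank
    zpool online -e tank gpt/zfs-bay4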
I also then have a variety of drives (by both manufacturer and date
of manufacture), so hopefully I won't stumble across the same bug on
multiple drives at the same time.

andrew