From: hw
To: "Kevin P. Neal"
Cc: Karl Denninger, freebsd-questions@freebsd.org
Subject: Re: dead slow update servers
In-Reply-To: <20190715014129.GA62729@neutralgood.org> (Kevin P. Neal's message of "Sun, 14 Jul 2019 21:41:29 -0400")
Date: Mon, 15 Jul 2019 05:42:25 +0200
Message-ID: <87ftn8otem.fsf@toy.adminart.net>

"Kevin P. Neal" writes:

> On Mon, Jul 15, 2019 at 01:23:43AM +0200, hw wrote:
>> Karl Denninger writes:
>>
>> > On 7/14/2019 00:10, hw wrote:
>> >> "Kevin P. Neal" writes:
>> >>
>> >>> On Sat, Jul 13, 2019 at 05:39:51AM +0200, hw wrote:
>> >>>> ZFS is great when you have JBODs while storage performance is
>> >>>> irrelevant. I do not have JBODs, and in almost all cases, storage
>> >>>> performance is relevant.
>> >>> Huh? Is a _properly_ _designed_ ZFS setup really slower? A raidz
>> >>> setup of N drives gets you the performance of roughly 1 drive, but a
>> >>> mirror gets you the write performance of a titch less than one drive
>> >>> with the read performance of N drives. How does ZFS hurt performance?
>> >> Performance is hurt when you have N disks and only get the performance
>> >> of a single disk from them.
>> >
>> > There's no free lunch.  If you want two copies of the data (or one plus
>> > parity) you must write two copies.  The second one doesn't magically
>> > appear.  If you think it did you were conned by something that is
>> > cheating (e.g. said it had written something when in fact it was sitting
>> > in a DRAM chip) and, at a bad time, you're going to discover it was
>> > cheating.
>> >
>> > Murphy is a SOB.
>>
>> I'm not sure what your point is.  Even RAID5 gives you better
>> performance than raidz because it doesn't limit you to a single disk.
>
> I don't see how this is possible. With either RAID5 or raidz enough
> drives have to be written to recover the data at a minimum. And since
> raidz1 uses the same number of drives as RAID5 it should have similar
> performance characteristics. So read and write performance of raidz1
> should be about the same as RAID5 -- about the speed of a single disk
> since the disks will be returning data roughly in parallel.

Well, if you follow [1], then, in theory, with no more than 4 disks, the
performance could be the same.

[1]: https://blog.storagecraft.com/raid-performance/
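To put rough numbers on the rule of thumb being discussed here (raidz1
behaving about like RAID5, and mirrors scaling reads with the number of
disks), here is a toy model in Python -- only a sketch with made-up
per-disk figures, not a benchmark:

    # Idealized scaling model for the rule of thumb discussed above.  The
    # per-disk numbers are made-up assumptions; real results depend on
    # caches, stripe/record sizes, the controller and the workload.

    DISK_MBPS = 150   # assumed sequential throughput of one disk, MB/s
    DISK_IOPS = 120   # assumed small random write IOPS of one disk

    def model(n_disks):
        """Very rough read/write estimates for n_disks in common layouts."""
        return {
            # RAID5/raidz1: n-1 data disks can stream in parallel, but a
            # small random write involves the whole stripe, so it behaves
            # like roughly one disk for that kind of load.
            "raid5/raidz1": {
                "seq_read_mbps": (n_disks - 1) * DISK_MBPS,
                "rand_write_iops": DISK_IOPS,
            },
            # Striped mirrors (RAID10): reads can use every disk, writes
            # land on both halves of a mirror, so only half the disks count.
            "raid10/mirrors": {
                "seq_read_mbps": n_disks * DISK_MBPS,
                "rand_write_iops": (n_disks // 2) * DISK_IOPS,
            },
        }

    for layout, est in model(4).items():
        print(f"{layout:15s} ~{est['seq_read_mbps']} MB/s seq read, "
              f"~{est['rand_write_iops']} random write IOPS")

With 4 disks this prints roughly 450 MB/s of streaming reads for
RAID5/raidz1 and 600 MB/s plus twice the random write IOPS for striped
mirrors -- toy numbers, but they show the shape of the argument.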
> What have you been testing RAID5 with? Bursty loads with large amounts
> of raid controller cache? Of course that's going to appear faster since
> you are writing to memory and not disk in the very short term. But a
> sustained amount of traffic will show raidz1 and RAID5 about the same.

I have been very happy with the overall system performance since I
switched from software RAID5 (mdraid) to a hardware RAID controller,
using the same disks.  It was a night-and-day difference, and the cache
on the controller was only 512MB.

I suspect that the mainboard I was using had trouble handling concurrent
data transfers to multiple disks and that the CPU wasn't great at it,
either.  That might explain why the system was so sluggish before the
change to hardware RAID: it was used as a desktop with a little bit of
server stuff on the side, and just having it all running seemed to create
sluggishness even without much actual load.

Other than that, I'm seeing that ZFS is disappointingly slow (on entirely
different hardware than above) while hardware RAID has always been nicely
fast.

> Oh, and my Dell machines are old enough that I'm stuck with the hardware
> RAID controller. I use ZFS and have raid0 arrays configured with single
> drives in each. I _hate_ it. When a drive fails the machine reboots and
> the controller hangs the boot until I drive out there and dump the card's
> cache. It's just awful.

That doesn't sound like a good setup; usually, nothing reboots when a
drive fails.  If you want to keep ZFS, would it be a disadvantage to put
all the drives into a single RAID10 (or each half of them into one) and
put ZFS on top of it (or them)?

> Now Dell offers a vanilla HBA on the "same" server as an
> option. *phew*

That's cool.
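Regarding the bursty-loads point above: if you want to see how much of
the hardware RAID's apparent speed is really the 512MB of controller
cache, one crude check (only a sketch -- the path and the sizes are made
up for illustration) is to write far more data than the cache can hold
and fsync along the way, then run the same thing on the mdraid set, the
hardware RAID and the ZFS pool:

    # Crude sustained-write check: writes far more data than a 512MB
    # controller cache can hold and fsyncs periodically, so the number
    # reflects the disks rather than the cache.  Path and sizes are made
    # up for illustration.
    import os
    import time

    PATH = "/tank/writetest.bin"       # hypothetical path on the array under test
    CHUNK = 8 * 1024 * 1024            # 8 MiB per write
    TOTAL = 8 * 1024 * 1024 * 1024     # 8 GiB in total, well past any cache
    SYNC_EVERY = 256 * 1024 * 1024     # fsync every 256 MiB

    buf = os.urandom(CHUNK)
    start = time.time()
    with open(PATH, "wb") as f:
        written = 0
        while written < TOTAL:
            f.write(buf)
            written += CHUNK
            if written % SYNC_EVERY == 0:
                f.flush()
                os.fsync(f.fileno())
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.time() - start
    os.unlink(PATH)
    print(f"sustained write: {TOTAL / elapsed / (1024 * 1024):.1f} MiB/s")

It is nothing like a proper fio run, but it should at least separate
"the cache absorbed it" from "the disks kept up".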