From: Paul Mather <paul@gromit.dlib.vt.edu>
Subject: Re: effect of differing spindle speeds on prospective zfs vdevs
Date: Sat, 5 Dec 2020 08:51:08 -0500
To: freebsd-questions@freebsd.org
Cc: tech-lists@zyxst.net
List-Id: User questions <freebsd-questions@freebsd.org>
On Fri, 4 Dec 2020 23:43:15 +0000, tech-lists wrote:

> Normally when making an array, I'd like to use disks that are all the same
> speed, interface, make, and model, but from different batches. In this
> case I've no choice, so we have multiple 1TB disks, some 7.2k and some
> 5.4k. I've not mixed them like this before.
>
> What effect would this have on the final array? Slower than if all one or
> the other? No effect? I'm expecting the fastest access will be that of the
> slowest vdev.

I believe you are correct in intuiting that the performance of the pool
will be influenced by the slowest devices.

ZFS supports a variety of pool organisations, each with differing I/O
characteristics, so "making an array" could cover a multiplicity of
possibilities. E.g., a "JBOD" (striped) pool would have different I/O
characteristics than a RAIDZ pool. Read access would also differ from write
access, so the use case of the pool (read-intensive or write-intensive)
would affect I/O speeds.
(And, furthermore, small random vs. large sequential I/O will have an
impact.)

IIRC, the write IOPS of a RAIDZ vdev are limited to the IOPS of its slowest
device.

> Similarly, some disks' block size is 512b logical/512b physical, others
> are 512b logical/4096b physical, and still others are 4096/4096. Any
> effect of mixing hardware? I understand zfs sets its own blocksize.

IIRC, ZFS fixes the ashift of each top-level vdev when the vdev is created
(and it cannot be changed afterwards), so you should set it to accommodate
the 4096/4096 devices to avoid performance degradation. I believe it
defaults to that now, and should auto-detect anyway. But in a mixed setup
of devices like you have, you should be using ashift=12.

I believe having ashift=9 on your mixed-drive setup would hurt performance
the most, because every 4096-byte-sector drive would then have to do a
read-modify-write for each sub-sector-sized write.

Cheers,

Paul.
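P.S. In case it's useful, here is a rough sketch of how one might check the
drives' reported sector sizes and force ashift=12 on FreeBSD. The pool name
("tank") and device names below are placeholders, not taken from your
setup, so adjust to taste:

```shell
# Check the logical/physical sector sizes each drive reports
# (look for "sectorsize" and "stripesize" in the verbose output):
diskinfo -v /dev/ada0

# On FreeBSD 12.x, make newly created vdevs use at least ashift=12
# (2^12 = 4096-byte alignment), regardless of what the drives report:
sysctl vfs.zfs.min_auto_ashift=12

# With OpenZFS (FreeBSD 13+), the ashift can also be set explicitly
# at pool creation time:
zpool create -o ashift=12 tank raidz /dev/ada0 /dev/ada1 /dev/ada2

# Verify the ashift the vdevs actually ended up with:
zdb -C tank | grep ashift
```

These need root and real hardware, of course, so treat them as a sketch
rather than a recipe.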