From owner-freebsd-fs@freebsd.org Sat Dec 5 17:22:22 2020
From: Charles Sprickman <spork@bway.net>
Subject: Re: vdevs with different spindle speeds
Date: Sat, 5 Dec 2020 12:22:20 -0500
To: Mel Pilgrim
Cc: freebsd-fs@freebsd.org
In-Reply-To:
References:
Message-Id: <56C3DF95-A5F5-4DFB-BD5D-492126D059BB@bway.net>
> On Dec 5, 2020, at 8:24 AM, Mel Pilgrim wrote:
>
> On 2020-12-05 4:38, tech-lists wrote:
>> Normally when making an array, I'd like to use disks all of the same speed,
>> interface, make and model, but from different batches. In this case, I've no
>> choice, so we have multiple 1TB disks, some 7.2k, some 5.4k. I've not mixed
>> them like this before.
>>
>> What effect would this have on the final array? Slower than if all one or the other?
>> No effect? I'm expecting the fastest access will be that of the slowest vdev.
>>
>> Similarly, some disks' block size is 512b logical/512b physical, others are 512b
>> logical/4096b physical, still others are 4096/4096. Any effect of
>> mixing hardware? I understand ZFS sets its own block size.
>
> Make sure you have ashift=12 for everything and you'll be fine. The marginal increase in latency with the 5400 rpm drives will disappear behind ZFS' heavily-cached, asynchronous operation unless you're hammering the pool with calls for cold data.

This is interesting. I always considered mixing a "no-no", probably due to being told this in the old days of hardware RAID with minimal/dumb caching.
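For what it's worth, checking the drives' reported sector sizes and pinning ashift=12 on FreeBSD looks roughly like this (a sketch; device and pool names are made up, and the exact zdb output format varies by version):

```sh
# Check logical/physical sector sizes a drive reports (ada0 is hypothetical)
diskinfo -v /dev/ada0

# Force ZFS to use at least 4k sectors (ashift=12) for newly created vdevs
sysctl vfs.zfs.min_auto_ashift=12

# Create the pool, then confirm the ashift actually used per vdev
zpool create tank raidz2 ada0 ada1 ada2 ada3
zdb -C tank | grep ashift
```

Note the sysctl only affects vdevs created after it is set; an existing vdev's ashift cannot be changed without destroying and recreating it.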
I was considering something similar for some cheap servers that are not terribly critical, but in my case, mixing SSDs. For example, one standard Samsung EVO and then one of the cheaper Intel datacenter-grade drives (generally about twice the cost of a standard SSD) in a mirror. I figured even if I put them in service at the same time, the first failure should be staggered. But I was not really clear on what effect this would have, or if I'd be confusing ZFS with this mix of drives… Any thoughts on this?

Charles

>
> If you're really worried about it, get a cheap SSD and use it as a cache device.
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
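For reference, the mixed-SSD mirror described above, plus the cheap-SSD cache device Mel suggests, would be set up roughly like this (a sketch; all device names are hypothetical):

```sh
# ada0 = consumer SSD (e.g. Samsung EVO), ada1 = datacenter-grade SSD
zpool create tank mirror ada0 ada1

# Optional: add a cheap SSD as an L2ARC cache device (ada2 hypothetical);
# cache devices hold no pool data, so losing one is harmless
zpool add tank cache ada2

zpool status tank
```

ZFS treats mirror members independently, so mismatched models mostly mean reads are bounded by whichever drive answers first and writes by the slower of the two.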