Subject: Re: [SOLVED] Re: "zpool attach" problem
To: freebsd-questions@freebsd.org
References: <202011212233.0ALMXfvE022876@sdf.org>
From: David Christensen
Date: Sat, 21 Nov 2020 19:50:36 -0800
In-Reply-To: <202011212233.0ALMXfvE022876@sdf.org>
On 2020-11-21 14:33, Scott Bennett via freebsd-questions wrote:
> Hi David,
> Thanks for your reply. I was about to respond to my own message to say
> that the issue has been resolved, but I saw your reply first. However,
> I respond below to your comments and questions, as well as stating what
> the problem turned out to be.

I suspect that we all have similar stories. :-)

It sounds like we both have small SOHO networks. My comments below
reflect such.

Spreading two ZFS pools and one GEOM RAID (?) across six HDD's is not
something that I would do or recommend. I also avoid raidzN. I suggest
that you back up, wipe, create one pool using mirrors, and restore.

Apply the GPT partitioning scheme and create one large partition with
1 MiB alignment on each of the six data drives. When partitioning, some
people recommend leaving a non-trivial amount of unused space at the end
of the drive -- say 2% to 5% -- to facilitate replacing failed drives
with somewhat smaller drives.
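For one of the 2 TB drives, the partitioning might look something like
this (a sketch only -- the device node, label name, and partition size
are assumptions; here sized to leave roughly 2% unused at the end):

```shell
# Hypothetical sketch for a 2 TB drive at /dev/ada1 -- adjust the
# device, label, and size for your hardware.
gpart destroy -F ada1                     # wipe any old partition table
gpart create -s gpt ada1                  # apply the GPT scheme
# One large partition, 1 MiB aligned; -s 1862g leaves ~2% of a 2 TB
# drive unused so a slightly smaller replacement drive still fits.
gpart add -t freebsd-zfs -a 1m -s 1862g -l zmirror0a ada1
```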
I prefer to use 100% and will buy a pair of identical replacements if
faced with that situation.

Label your partitions with names that correlate with your ZFS storage
architecture [3]. Always use the labels for administrative commands;
never use raw device nodes. (I encrypt the partitions and use the
GPT-labeled GELI nodes when creating the pool, below.)

Create a zpool with three mirrors: a first mirror of two 2 TB drive
partitions, a second mirror of the other two 2 TB drive partitions, and
a third mirror of the 3 TB and 4 TB drive partitions. That should give
you about the same available space as your existing raidz2.

Consider buying a spare 4 TB drive (or two) and putting it on the
shelf. Better yet, connect it to the machine and tell ZFS to use it as
a spare. Buy 4 TB drives going forward.

Adding a solid-state cache device or partition can noticeably improve
read responsiveness (both sequential and random latency). After the
initial cache warm-up, both my Samba and CVS services are snappy. I
expect solid-state log devices would similarly help write performance,
but I have not tried them yet.

David

[3] https://b3n.org/zfs-hierarchy/
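P.S. Putting the steps above together, the encryption and pool creation
might look like this. Everything here is a hedged sketch: the pool name
(tank), the GPT labels, the GELI options, and the spare/cache devices
are all assumptions, not your actual configuration.

```shell
# Hypothetical: initialize and attach GELI on each labeled partition.
# geli attach will prompt for the passphrase set by geli init.
for p in zmirror0a zmirror0b zmirror1a zmirror1b zmirror2a zmirror2b; do
    geli init -s 4096 /dev/gpt/$p
    geli attach /dev/gpt/$p
done

# Create the pool from three two-way mirrors of the .eli nodes:
zpool create tank \
    mirror /dev/gpt/zmirror0a.eli /dev/gpt/zmirror0b.eli \
    mirror /dev/gpt/zmirror1a.eli /dev/gpt/zmirror1b.eli \
    mirror /dev/gpt/zmirror2a.eli /dev/gpt/zmirror2b.eli

# Optional: a hot spare and a solid-state read cache (L2ARC):
zpool add tank spare /dev/gpt/zspare0.eli
zpool add tank cache /dev/gpt/zcache0
```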