Date: Tue, 9 Apr 2019 07:28:22 +1000
From: Peter Jeremy <peter@rulingia.com>
To: tech-lists
Cc: freebsd-fs@freebsd.org, freebsd-stable@freebsd.org
Subject: Re: about zfs and ashift and changing ashift on existing zpool
Message-ID: <20190408212822.GD13734@server.rulingia.com>
In-Reply-To: <20190407153639.GA41753@rpi3.zyxst.net>

On 2019-Apr-07 16:36:40 +0100, tech-lists wrote:
>storage            ONLINE       0     0     0
>  raidz1-0         ONLINE       0     0     0
>    replacing-0    ONLINE       0     0 1.65K
>      ada2         ONLINE       0     0     0
>      ada1         ONLINE       0     0     0  block size: 512B configured, 4096B native
>    ada3           ONLINE       0     0     0
>    ada4           ONLINE       0     0     0
>
>What I'd like to know is:
>
>1. is the above situation harmful to data

In general, no.  The only danger is that ZFS updates the uberblock
replicas at the start and end of the volume assuming 512B sectors,
which means you are at a higher risk of losing one of the replica sets
if a power failure occurs during an uberblock update.

>2. given that vfs.zfs.min_auto_ashift=12, why does it still say 512B
>   configured for ada1 which is the new disk, or..

The pool is configured with ashift=9.  ashift is fixed per-vdev when
the vdev is created, so vfs.zfs.min_auto_ashift has no effect on an
existing pool and replacing a disk can't change it.  You can confirm
this with zdb (see the first sketch below).

>3. does "configured" pertain to the pool, the disk, or both

"configured" relates to the pool - all vdevs match the pool's ashift.

>4. what would be involved in making them all 4096B

Rebuild the pool - backup/destroy/create/restore.  An outline of the
procedure is sketched below.
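
For reference, here's how to confirm what the pool and the drives
actually report - a minimal sketch only, assuming the pool is named
"storage" as in your output:

    # ashift recorded in the pool configuration (9 = 512B, 12 = 4096B)
    zdb -C storage | grep ashift

    # what the drive itself reports: sectorsize is the logical size,
    # stripesize the physical/native size
    diskinfo -v ada1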
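
The rebuild itself is conceptually simple.  The following is only a
sketch - the "backup" pool, the snapshot name and the disk layout are
placeholders; adapt them (and verify the backup!) before destroying
anything:

    # ensure any newly-created vdev uses at least 4096B blocks
    sysctl vfs.zfs.min_auto_ashift=12

    # 1. back everything up, preserving snapshots and properties
    zfs snapshot -r storage@migrate
    zfs send -R storage@migrate | zfs receive -F backup/storage

    # 2. destroy and recreate the pool; ashift=12 follows from the
    #    sysctl above
    zpool destroy storage
    zpool create storage raidz1 ada1 ada2 ada3 ada4

    # 3. restore
    zfs send -R backup/storage@migrate | zfs receive -F storage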
>5. does a 512B disk wear out faster than 4096B (all other things being
>   equal)

It shouldn't.  It does mean that the disk is doing a read/modify/write
cycle at the physical sector level, but that should be masked by the
drive's cache.

>Given that the machine and disks were new in 2016, I can't understand why zfs
>didn't default to 4096B on installation

I can't answer that easily.  The current version of ZFS looks at the
native disk blocksize to determine the pool ashift, but I'm not sure
how things were in 2016.  Possibilities include:
* The pool was built explicitly with ashift=9.
* The initial disks reported 512B native (I think this is most likely).
* That version of ZFS was using the logical, rather than the native,
  blocksize.

My guess (given that only ada1 is reporting a blocksize mismatch) is
that your disks reported a 512B native blocksize - "diskinfo -v", as in
the sketch above, will show what each drive claims.  In the absence of
any override, ZFS will then build an ashift=9 pool.

--
Peter Jeremy