Date: Tue, 9 Apr 2019 07:28:22 +1000
From: Peter Jeremy <peter@rulingia.com>
To: tech-lists <tech-lists@zyxst.net>
Cc: freebsd-fs@freebsd.org, freebsd-stable@freebsd.org
Subject: Re: about zfs and ashift and changing ashift on existing zpool
Message-ID: <20190408212822.GD13734@server.rulingia.com>
In-Reply-To: <20190407153639.GA41753@rpi3.zyxst.net>
References: <20190407153639.GA41753@rpi3.zyxst.net>
On 2019-Apr-07 16:36:40 +0100, tech-lists <tech-lists@zyxst.net> wrote:
>storage            ONLINE       0     0     0
>  raidz1-0         ONLINE       0     0     0
>    replacing-0    ONLINE       0     0 1.65K
>      ada2         ONLINE       0     0     0
>      ada1         ONLINE       0     0     0  block size: 512B configured, 4096B native
>    ada3           ONLINE       0     0     0
>    ada4           ONLINE       0     0     0
>
>What I'd like to know is:
>
>1. is the above situation harmful to data

In general, no.  The only danger is that ZFS is updating the uberblock
replicas at the start and end of the volume assuming 512B sectors, which
means you are at a higher risk of losing one of the replica sets if a
power failure occurs during an uberblock update.

>2. given that vfs.zfs.min_auto_ashift=12, why does it still say 512B
>   configured for ada1 which is the new disk, or..

The pool is configured with ashift=9.  vfs.zfs.min_auto_ashift only
influences the ashift chosen when a pool (or top-level vdev) is created;
it does not change the ashift of an existing pool.

>3. does "configured" pertain to the pool, the disk, or both

"configured" relates to the pool - all vdevs match the pool's ashift.

>4. what would be involved in making them all 4096B

Rebuild the pool - backup/destroy/create/restore.  (A rough command
sketch is appended at the end of this message.)

>5. does a 512B disk wear out faster than 4096B (all other things being
>   equal)

It shouldn't.  It does mean that the disk is doing read/modify/write at
the physical sector level, but that should be masked by the drive cache.

>Given that the machine and disks were new in 2016, I can't understand why zfs
>didn't default to 4096B on installation

I can't answer that easily.  The current version of ZFS looks at the
native disk blocksize to determine the pool ashift, but I'm not sure how
things were in 2016.  Possibilities include:
* The pool was built explicitly with ashift=9
* The initial disks reported 512B native (I think this is most likely)
* That version of ZFS was using the logical, rather than native, blocksize.

My guess (given that only ada1 is reporting a blocksize mismatch) is that
your disks reported a 512B native blocksize.  In the absence of any
override, ZFS will then build an ashift=9 pool.  (Commands to check what
the pool and drives report are sketched at the end of this message.)

-- 
Peter Jeremy
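The pool's recorded ashift and a drive's reported sector sizes can be
checked with standard FreeBSD tools; something like the following should
work (the pool and device names are the ones from this thread):

    zdb -C storage | grep ashift        # ashift: 9 = 512B, 12 = 4096B
    diskinfo -v ada1 | grep -E 'sectorsize|stripesize'
    sysctl vfs.zfs.min_auto_ashift

For a plain disk, diskinfo's "sectorsize" is the logical sector size and
"stripesize" normally reflects the physical (native) sector size, so a
512e (4K-physical) drive typically shows 512 and 4096 respectively.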
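And a rough sketch of the rebuild in answer 4, assuming a recursive
snapshot named @migrate, a spare pool (called "backuppool" here purely as
a placeholder) with room for a copy of the data, and the device names from
this thread - illustrative only, not a tested procedure:

    # take a recursive snapshot and replicate everything to the spare pool
    zfs snapshot -r storage@migrate
    zfs send -R storage@migrate | zfs receive -duF backuppool

    # destroy the old pool and recreate it with 4KiB sectors
    zpool destroy storage
    sysctl vfs.zfs.min_auto_ashift=12
    zpool create storage raidz1 ada1 ada3 ada4

    # replicate the data back onto the new ashift=12 pool
    zfs send -R backuppool@migrate | zfs receive -duF storage

"receive -u" leaves the received datasets unmounted so the copy doesn't
fight with the live pool's mountpoints; after the final restore, mount
them with "zfs mount -a".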