From owner-freebsd-current@freebsd.org Tue Dec 26 20:25:01 2017
Date: Tue, 26 Dec 2017 21:24:12 +0100
From: "O. Hartmann" <ohartmann@walstatt.org>
To: "Rodney W. Grimes"
Cc: Alan Somers, "O. Hartmann", Allan Jude, FreeBSD CURRENT <freebsd-current@freebsd.org>
Subject: Re: ZFS: alignment/boundary for partition type freebsd-zfs
Message-ID: <20171226212355.76494782@thor.intern.walstatt.dynvpn.de>
In-Reply-To: <201712261731.vBQHVr7d057227@pdx.rh.CN85.dnsmgr.net>
References: <201712261731.vBQHVr7d057227@pdx.rh.CN85.dnsmgr.net>
List-Id: Discussions about the use of FreeBSD-current
On Tue, 26 Dec 2017 09:31:53 -0800 (PST), "Rodney W. Grimes" wrote:

> > On Tue, Dec 26, 2017 at 10:04 AM, O. Hartmann wrote:
> > > On Tue, 26 Dec 2017 11:44:29 -0500, Allan Jude wrote:
> > > > On 2017-12-26 11:24, O. Hartmann wrote:
> > > > > Running recent CURRENT on most of our lab's boxes, I needed to
> > > > > replace and restore a ZFS RAIDZ pool. Doing so, I needed to
> > > > > partition the disks I was about to replace. The drives in question
> > > > > are 4k-sector drives with 512b emulation - as most of them are
> > > > > today.
> > > > > I created the one and only partition on each 4 TB drive with the
> > > > > command sequence
> > > > >
> > > > >   gpart create -s GPT adaX
> > > > >   gpart add -t freebsd-zfs -a 4k -l nameXX adaX
> > > > >
> > > > > After doing this on all the drives I was about to replace, something
> > > > > drove me to check on the net, and I found a lot of websites giving
> > > > > "advice" on how to prepare large, modern drives for ZFS. I think the
> > > > > GNOP trick is no longer necessary, but many blogs recommend performing
> > > > >
> > > > >   gpart add -t freebsd-zfs -b 1m -a 4k -l nameXX adaX
> > > > >
> > > > > to put the partition boundary at the 1 MiB mark. I didn't do that;
> > > > > my partitions all start at block 40 now.
> > > > >
> > > > > My question is: will this have severe performance consequences, or
> > > > > is that negligible?
> > > > >
> > > > > Since most of the websites I found via "zfs freebsd alignment" are
> > > > > from years ago, I'm a bit confused now, and the thought of repeating
> > > > > this days-long resilvering process is costing me more hair than the
> > > > > usual "fallout" ...
> > > > >
> > > > > Thanks in advance,
> > > > >
> > > > > Oliver
> > > >
> > > > The 1m alignment is not required. It is just what I do to leave room
> > > > for the other partition types before the ZFS partition.
> > > >
> > > > However, the replacement for the GNOP hack is separate. In addition
> > > > to aligning the partitions to 4k, you have to tell ZFS that the drive
> > > > is 4k:
> > > >
> > > >   sysctl vfs.zfs.min_auto_ashift=12
> > > >
> > > > (2^12 = 4096)
> > > >
> > > > Do this before you create the pool, or before you add additional vdevs.
> > >
> > > I didn't do the sysctl vfs.zfs.min_auto_ashift=12 :-(( when I created
> > > the vdev. What is the consequence of that for the pool?
> > > I lived under the impression that this is necessary for "native 4k"
> > > drives.
> > >
> > > How can I check what ashift is in effect for a specific vdev?
> >
> > It's only necessary if your drive stupidly fails to report its physical
> > sector size correctly, and no other FreeBSD developer has already
> > written a quirk for that drive. Do "zdb -l /dev/adaXXXpY" for any one
> > of the partitions in the ZFS raid group in question. It should print
> > either "ashift: 12" or "ashift: 9".
> >
> > -aLAn
>
> And more than likely, if you used bsdinstall from one of the
> distributions to set up the system you created the ZFS pool on, it has
> the sysctl in /boot/loader.conf: the default for all(?) recent
> bsdinstalls is that 4k is used and the sysctl gets written to
> /boot/loader.conf at install time, so from then on all pools you create
> will also be 4k. You have to change a default during the system install
> to change this to 512.

I have never used any installation scripts so far. Before I replaced the
pool's drives, I searched for information on how to do it. This important
little fact must have slipped through - or it is very badly documented. I
didn't find a hint in tuning(7), which is the man page I consulted first.

Luckily, as Allan Jude stated, the disks were recognized correctly (I guess
the stripesize rather than the blocksize is used?).

-- 
O. Hartmann

I object to the use or transfer of my data for advertising purposes or for
market or opinion research (§ 28 Abs. 4 BDSG).
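[Editor's note: the "block 40" alignment question above can be settled with plain arithmetic. A GPT partition starting at LBA 40 on a 512e drive sits at byte offset 40 * 512 = 20480, which is an exact multiple of 4096, so it is 4k-aligned even without `-b 1m`. A minimal sketch; the LBA value 40 is the one reported in the thread, everything else is arithmetic:]

```shell
#!/bin/sh
# A partition start LBA is 4k-aligned when its byte offset
# (LBA * 512 on a 512e drive) is a multiple of 4096, i.e.
# when the LBA itself is a multiple of 8.
start_lba=40                    # default GPT start mentioned in the thread
offset=$((start_lba * 512))     # byte offset on a 512e drive
if [ $((offset % 4096)) -eq 0 ]; then
    echo "LBA ${start_lba} (byte ${offset}) is 4k-aligned"
else
    echo "LBA ${start_lba} (byte ${offset}) is NOT 4k-aligned"
fi
# prints: LBA 40 (byte 20480) is 4k-aligned
```

The `-b 1m` recommendation is therefore about leaving headroom for other partitions (as Allan Jude says above), not about correctness of the alignment itself.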
--Sig_/ByRKJ_/QgjWtRXJGudvkA5U Content-Type: application/pgp-signature Content-Description: OpenPGP digital signature -----BEGIN PGP SIGNATURE----- iLUEARMKAB0WIQQZVZMzAtwC2T/86TrS528fyFhYlAUCWkKwBwAKCRDS528fyFhY lI+tAf4u8+6gJtpqUEWxg8OjpvGQwqAsQpm9pVuCbMqKOxOsI8wc6HPktHoGhmBS eGjLioiS1p+l5z3pcoUEuWbvMrmjAf4rewz64tN4GvwE/6+P+D1F0TbRqIY9suNK Q201KNarBKuZ787hiZOcGADXyKrkVxY/hBXRlTrXzGyZHzL4uJdJ =/oUt -----END PGP SIGNATURE----- --Sig_/ByRKJ_/QgjWtRXJGudvkA5U--
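[Editor's note: as a footnote to the ashift discussion, ashift is the base-2 logarithm of the sector size ZFS assumes, so ashift=12 means 2^12 = 4096-byte sectors and ashift=9 means 512-byte sectors. A small sketch; the sysctl and zdb lines are the commands quoted in the thread, shown as comments because they need root and a live pool:]

```shell
#!/bin/sh
# ashift -> assumed sector size: size = 2^ashift
for ashift in 9 12; do
    echo "ashift=${ashift} -> $((1 << ashift)) byte sectors"
done
# On a live FreeBSD system (commands from the thread):
#   sysctl vfs.zfs.min_auto_ashift=12   # set BEFORE zpool create / zpool add
#   zdb -l /dev/adaXXXpY | grep ashift  # expect "ashift: 12" on 4k media
```

Note that ashift is fixed per vdev at creation time; an existing vdev created with ashift=9 cannot be changed without recreating it.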