From owner-freebsd-fs@freebsd.org Tue May 21 09:29:12 2019
Date: Tue, 21 May 2019 19:28:19 +1000
From: Peter Jeremy <peter@rulingia.com>
To: Peter
Cc: Miroslav Lachman <000.fbsd@quip.cz>, freebsd-fs@freebsd.org
Subject: Re: What is the minimum free space ... (Full Report)
Message-ID: <20190521092819.GB41934@server.rulingia.com>
In-Reply-To: <20190517124937.GA11835@gate.oper.dinoex.org>

On 2019-May-17 14:49:37 +0200, Peter wrote:
>On Fri, May 17, 2019 at 03:30:43PM +1000, Peter Jeremy wrote:
>! On 2019-May-17 03:02:39 +0200, Peter wrote:
>! >The original idea was to check if ZFS can grow a raid5.
>!
>! I've done this (see https://bugs.au.freebsd.org/dokuwiki/zfsraid), though I
>! also migrated from RAIDZ1 to RAIDZ2 in the process. If this process no
>! longer works (that page is 4 years old), it would seem that there has been
>! an unfortunate regression.
>
>You don't mention setting "autoexpand=on" - I suppose it would not
>work without that.

Hmmm... After all this time, I don't recall. I did an export/import, in
which case I don't believe autoexpand is necessary.
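For reference, a minimal sketch of the two ways to pick up the extra space
once every device in a raidz vdev has grown (the pool and device names here
are just placeholders):

    # Option 1: mark the pool auto-expandable, then reopen each device
    zpool set autoexpand=on tank
    zpool online -e tank ada0p3    # repeat for each grown device

    # Option 2: export and re-import, which re-tastes all the devices
    zpool export tank
    zpool import tank

Either way, "zpool list tank" should show the extra capacity afterwards.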
>What we have here is most likely not a problem with the raid or its
>growth, but a kind of "authority conflict" between ZFS and GPT on who
>is going to manage the underlying partitions. (Which doesn't surprise
>me - if I were ZFS, I would be quite frustrated to run under GPT.)

I don't agree with this. GEOM is layered - by definition, a GEOM partition
manager class (e.g. BSD, GPT) is responsible for managing the partition
layout. ZFS can only see exposed device entries - which means either entire
raw disks or the partitions exposed via GEOM. If ZFS tries to access data
outside the partition it is using, it should receive an error back (or
there's a serious bug in GEOM).

When juggling disk partitions, the most common problem is unexpected
metadata: both GEOM and ZFS store metadata at the edges of the containers
they live in (disks or partitions). Using gpart to resize a partition does
not touch any data on the disk (other than GEOM metadata). Whilst this is
good in some circumstances (if you accidentally mis-partition your disk,
you can fix the partitioning and your data will still be there), it can
cause problems if a partitioning change exposes stale metadata.

In particular, ZFS will "taste" every[*] partition it can see, looking for
ZFS metadata. Resizing a partition could result in seemingly valid but
stale ZFS metadata becoming visible, potentially confusing ZFS. If you look
through my procedure, you'll notice that I explicitly write zeroes over
regions that contained ZFS metadata to guard against this.

[*] I'm not sure whether ZFS looks at the partition type.
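For concreteness, ZFS keeps four 256KB labels on each vdev - two in the
first 512KB and two in the last 512KB - so clearing stale metadata from an
old partition looks roughly like this (the device name is only an example,
and the dd variant assumes 512-byte sectors):

    # The supported way to remove old vdev labels:
    zpool labelclear -f /dev/ada0p3

    # Or by hand: zero 1MB at each end of the partition, which covers
    # all four labels; the size comes from diskinfo(8):
    sectors=$(diskinfo /dev/ada0p3 | awk '{print $4}')
    dd if=/dev/zero of=/dev/ada0p3 bs=512 count=2048
    dd if=/dev/zero of=/dev/ada0p3 bs=512 count=2048 oseek=$((sectors - 2048))

-- 
Peter Jeremy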