Date: Thu, 16 May 2019 02:36:07 +0200
From: Peter
To: Miroslav Lachman <000.fbsd@quip.cz>
Cc: freebsd-fs@freebsd.org
Subject: Re: What is the minimum free space between GPT partitions?
Message-ID: <20190516003607.GA93284@gate.oper.dinoex.org>
References: <20190515204243.GA67445@gate.oper.dinoex.org> <60d57363-eb5c-e985-82ad-30f03b06a4c6@quip.cz>
In-Reply-To: <60d57363-eb5c-e985-82ad-30f03b06a4c6@quip.cz>

On Thu, May 16, 2019 at 12:29:16AM +0200, Miroslav Lachman wrote:
! > I found that, if I put partitions directly together (so that another
! > starts immediately after one ends), under certain circumstances the
! > volumes become inaccessible and the system (11.2) crashes. Obviously
! > there is a safety distance required - but how big should it be?
!
! I read your post on the forum
! https://forums.freebsd.org/threads/create-degraded-raid-5-with-2-disks-on-freebsd.70750/#post-426756

Hi,

great, that should explain how to make it happen.

! No problems for years.
Me neither with MBR/packlabels, but I only recently switched to GPT. I
suppose either GPT or ZFS autoexpand goes out of bounds; I couldn't
determine which.

! I think your case is somewhat different if you split the disk into 3
! partitions later used as 3 devices for one ZFS pool, so maybe there is
! some coincidence with expanding ZFS... and then it is a bug which should
! be fixed.

If we could fix it, that would be even better! Agreed, it's an ugly
operation, but I love to do ugly things with ZFS, and usually it
survives them. ;)

! Can you prepare some simple testcase (scriptable) which makes a panic on
! your host? I will try it in some spare VM.

The description in the mentioned forum post is pretty much what I did.
At first I did it on my router, as there was empty space on a disk, and
when that had gone bye-bye, I tried it on the desktop with an (otherwise
empty) USB stick. It takes an eternity to create a ZFS raidz even on a
USB 3 stick - they are not designed for that - but the outcome was the
same.

The procedure is:

1. create a new GPT scheme on the stick
2. add 3x 1G freebsd-zfs partitions with 1G free in between
3. zpool create test raidz da0p1 da0p2 da0p3
4. resize the 3 partitions to 2G each
5. zpool set autoexpand=on test
6. export the pool
7. zpool online

At that point it will start to complain that (some of) the pool isn't
readable. Now resize the partitions back to 1G -> kernel crash.

And after going through all that and having all partitions back at 1G,
the pool works again. :)

I'll try to reproduce it from a script, as soon as my toolchain is done
with building from the recent patches; a first rough sketch is below,
after my signature.

Cheerio,
PMc
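
P.S. Here is that rough sketch - untested, just a transcription of the
steps above. The device name (da0), the partition offsets, the
debugflags hint and the reading of step 7 ("zpool online") as
import + online -e are my assumptions; only run it against a
disposable stick of at least 7G, and expect it to end in a panic.

#!/bin/sh
# Untested sketch of the reproduction steps; destroys all data on $DISK.
set -x
DISK=da0

# 1. create a new GPT scheme on the stick
gpart destroy -F ${DISK}
gpart create -s gpt ${DISK}

# 2. add 3x 1G freebsd-zfs partitions, each with enough free space
#    behind it to grow to 2G later (offsets are my choice)
gpart add -t freebsd-zfs -i 1 -b 1M -s 1G ${DISK}
gpart add -t freebsd-zfs -i 2 -b 3G -s 1G ${DISK}
gpart add -t freebsd-zfs -i 3 -b 5G -s 1G ${DISK}

# 3. create the raidz pool on the three partitions
zpool create test raidz ${DISK}p1 ${DISK}p2 ${DISK}p3

# 4. resize the three partitions to 2G each
#    (may need sysctl kern.geom.debugflags=16 while the providers are open)
gpart resize -i 1 -s 2G ${DISK}
gpart resize -i 2 -s 2G ${DISK}
gpart resize -i 3 -s 2G ${DISK}

# 5. + 6. enable autoexpand and export the pool
zpool set autoexpand=on test
zpool export test

# 7. "zpool online" in the steps above; my reading: import, then online -e
zpool import test
zpool online -e test ${DISK}p1 ${DISK}p2 ${DISK}p3
zpool status test

# Now shrink the partitions back to 1G - this is the step that
# crashed the kernel here.
gpart resize -i 1 -s 1G ${DISK}
gpart resize -i 2 -s 1G ${DISK}
gpart resize -i 3 -s 1G ${DISK}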