From owner-freebsd-fs@freebsd.org Tue Jan 31 21:49:18 2017
Date: Tue, 31 Jan 2017 15:49:17 -0600
From: Larry Rosenman <ler@FreeBSD.org>
To: Steven Hartland
Cc: Freebsd fs <freebsd-fs@freebsd.org>
Subject: Re: 16.0E ExpandSize? -- New Server

revert the other patch and apply this one?

On 01/31/2017 3:47 pm, Steven Hartland wrote:

> Hmm, looks like there's also a bug in the way vdev_min_asize is
> calculated for raidz: the use of unrounded integer division can (and
> here has) produced child min_asize values that don't provide enough
> space for the parent.
>
> 1981411579221 * 6 = 11888469475326 < 11888469475328
>
> You should have vdev_min_asize: 1981411579222 for your children.
>
> Updated patch attached. The calculation still isn't 100% reversible,
> so it may need more work, but it now ensures that the children will
> provide enough capacity for the parent's min_asize even if all of them
> are shrunk to their individual min_asize, which I believe previously
> may not have been the case.
>
> This isn't related to the incorrect EXPANDSZ output, but it would be
> good if you could confirm it doesn't cause any issues for your pool
> given its state.
> On 31/01/2017 21:00, Larry Rosenman wrote:
>
> borg-new /home/ler $ sudo ./vdev-stats.d
> Password:
> vdev_path: n/a, vdev_max_asize: 0, vdev_asize: 0, vdev_min_asize: 0
> vdev_path: n/a, vdev_max_asize: 11947471798272, vdev_asize: 11947478089728, vdev_min_asize: 11888469475328
> vdev_path: /dev/mfid4p4, vdev_max_asize: 1991245299712, vdev_asize: 1991245299712, vdev_min_asize: 1981411579221
> vdev_path: /dev/mfid0p4, vdev_max_asize: 1991246348288, vdev_asize: 1991246348288, vdev_min_asize: 1981411579221
> vdev_path: /dev/mfid1p4, vdev_max_asize: 1991246348288, vdev_asize: 1991246348288, vdev_min_asize: 1981411579221
> vdev_path: /dev/mfid3p4, vdev_max_asize: 1991247921152, vdev_asize: 1991247921152, vdev_min_asize: 1981411579221
> vdev_path: /dev/mfid2p4, vdev_max_asize: 1991246348288, vdev_asize: 1991246348288, vdev_min_asize: 1981411579221
> vdev_path: /dev/mfid5p4, vdev_max_asize: 1991246348288, vdev_asize: 1991246348288, vdev_min_asize: 1981411579221
> ^C
> borg-new /home/ler $
>
> borg-new /home/ler $ sudo zpool list -v
> Password:
> NAME       SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
> zroot     10.8T  94.3G  10.7T     16.0E     0%     0%  1.00x  ONLINE  -
>   raidz1  10.8T  94.3G  10.7T     16.0E     0%     0%
>     mfid4p4   -      -      -         -      -      -
>     mfid0p4   -      -      -         -      -      -
>     mfid1p4   -      -      -         -      -      -
>     mfid3p4   -      -      -         -      -      -
>     mfid2p4   -      -      -         -      -      -
>     mfid5p4   -      -      -         -      -      -
> borg-new /home/ler $
>
> On 01/31/2017 2:37 pm, Steven Hartland wrote:
>
> In that case, based on your zpool history, I suspect that the original
> mfid4p4 was the same size as mfid0p4 (1991246348288) but it has been
> replaced with a drive that is slightly smaller (1991245299712).
>
> This smaller size results in a max_asize of 1991245299712 * 6 instead
> of the original 1991246348288 * 6.
>
> Now, given the way min_asize (the value used to check whether a device
> size is acceptable) is rounded to the nearest metaslab, I believe that
> replace would be allowed.
> https://github.com/freebsd/freebsd/blob/master/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/spa.c#L4947
>
> Now the problem is that, on open, the calculated asize is only updated
> if it is expanding:
> https://github.com/freebsd/freebsd/blob/master/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/vdev.c#L1424
>
> The updated dtrace file outputs vdev_min_asize, which should confirm
> my suspicion about why the replace was allowed.
>
> On 31/01/2017 19:05, Larry Rosenman wrote:
>
> I've replaced some disks due to failure, and some of the partition
> sizes are different.
>
> autoexpand is off:
>
> borg-new /home/ler $ zpool get all zroot
> NAME   PROPERTY                       VALUE                 SOURCE
> zroot  size                           10.8T                 -
> zroot  capacity                       0%                    -
> zroot  altroot                        -                     default
> zroot  health                         ONLINE                -
> zroot  guid                           11945658884309024932  default
> zroot  version                        -                     default
> zroot  bootfs                         zroot/ROOT/default    local
> zroot  delegation                     on                    default
> zroot  autoreplace                    off                   default
> zroot  cachefile                      -                     default
> zroot  failmode                       wait                  default
> zroot  listsnapshots                  off                   default
> zroot  autoexpand                     off                   default
> zroot  dedupditto                     0                     default
> zroot  dedupratio                     1.00x                 -
> zroot  free                           10.7T                 -
> zroot  allocated                      94.3G                 -
> zroot  readonly                       off                   -
> zroot  comment                        -                     default
> zroot  expandsize                     16.0E                 -
> zroot  freeing                        0                     default
> zroot  fragmentation                  0%                    -
> zroot  leaked                         0                     default
> zroot  feature@async_destroy          enabled               local
> zroot  feature@empty_bpobj            active                local
> zroot  feature@lz4_compress           active                local
> zroot  feature@multi_vdev_crash_dump  enabled               local
> zroot  feature@spacemap_histogram     active                local
> zroot  feature@enabled_txg            active                local
> zroot  feature@hole_birth             active                local
> zroot  feature@extensible_dataset     enabled               local
> zroot  feature@embedded_data          active                local
> zroot  feature@bookmarks              enabled               local
> zroot  feature@filesystem_limits      enabled               local
> zroot  feature@large_blocks           enabled               local
> zroot  feature@sha512                 enabled               local
> zroot  feature@skein                  enabled               local
> borg-new /home/ler $
>
> borg-new /home/ler $ gpart show
> =>        40  3905945520  mfid0  GPT  (1.8T)
>           40        1600      1  efi  (800K)
>         1640        1024      2  freebsd-boot  (512K)
>         2664        1432         - free -  (716K)
>         4096    16777216      3  freebsd-swap  (8.0G)
>     16781312  3889162240      4  freebsd-zfs  (1.8T)
>   3905943552        2008         - free -  (1.0M)
>
> =>        40  3905945520  mfid1  GPT  (1.8T)
>           40        1600      1  efi  (800K)
>         1640        1024      2  freebsd-boot  (512K)
>         2664        1432         - free -  (716K)
>         4096    16777216      3  freebsd-swap  (8.0G)
>     16781312  3889162240      4  freebsd-zfs  (1.8T)
>   3905943552        2008         - free -  (1.0M)
>
> =>        40  3905945520  mfid2  GPT  (1.8T)
>           40        1600      1  efi  (800K)
>         1640        1024      2  freebsd-boot  (512K)
>         2664        1432         - free -  (716K)
>         4096    16777216      3  freebsd-swap  (8.0G)
>     16781312  3889162240      4  freebsd-zfs  (1.8T)
>   3905943552        2008         - free -  (1.0M)
>
> =>        40  3905945520  mfid3  GPT  (1.8T)
>           40        1600      1  efi  (800K)
>         1640        1024      2  freebsd-boot  (512K)
>         2664    16777216      3  freebsd-swap  (8.0G)
>     16779880  3889165680      4  freebsd-zfs  (1.8T)
>
> =>        40  3905945520  mfid5  GPT  (1.8T)
>           40        1600      1  efi  (800K)
>         1640        1024      2  freebsd-boot  (512K)
>         2664        1432         - free -  (716K)
>         4096    16777216      3  freebsd-swap  (8.0G)
>     16781312  3889162240      4  freebsd-zfs  (1.8T)
>   3905943552        2008         - free -  (1.0M)
>
> =>        40  3905945520  mfid4  GPT  (1.8T)
>           40        1600      1  efi  (800K)
>         1640        1024      2  freebsd-boot  (512K)
>         2664        1432         - free -  (716K)
>         4096    16777216      3  freebsd-swap  (8.0G)
>     16781312  3889160192      4  freebsd-zfs  (1.8T)
>   3905941504        4056         - free -  (2.0M)
>
> borg-new /home/ler $
>
> this system was built last week, and I **CAN** rebuild it if
> necessary, but I didn't do anything strange (so I thought :) )
>
> On 01/31/2017 12:30 pm, Steven Hartland wrote:
>
> Your issue is that the reported vdev_max_asize is smaller than
> vdev_asize:
>
> vdev_max_asize: 11947471798272
> vdev_asize:     11947478089728
>
> max_asize is smaller than asize by 6291456.
>
> For raidz1, Xsize should be the smallest disk's Xsize * disks, so:
> 1991245299712 * 6 = 11947471798272
>
> So your max_asize looks right, but asize looks too big.
>
> Expand Size is calculated by:
>
>     if (vd->vdev_aux == NULL && tvd != NULL && vd->vdev_max_asize != 0) {
>         vs->vs_esize = P2ALIGN(vd->vdev_max_asize - vd->vdev_asize,
>             1ULL << tvd->vdev_ms_shift);
>     }
>
> So the question is: why is asize too big?
>
> Given you seem to have some random disk sizes, do you have autoexpand
> turned on?
>
> On 31/01/2017 17:39, Larry Rosenman wrote:
>
> vdev_path: n/a, vdev_max_asize: 11947471798272, vdev_asize: 11947478089728

--
Larry Rosenman                  http://people.freebsd.org/~ler
Phone: +1 214-642-9640          E-Mail: ler@FreeBSD.org
US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281

From owner-freebsd-fs@freebsd.org Tue Jan 31 22:02:29 2017
Date: Tue, 31 Jan 2017 22:02:17 +0000
From: Marie Helene Kvello-Aune <marieheleneka@gmail.com>
To: Larry Rosenman <ler@FreeBSD.org>, Steven Hartland
Cc: Freebsd fs <freebsd-fs@freebsd.org>
Subject: Re: 16.0E ExpandSize? -- New Server

On Tue, Jan 31, 2017 at 10:49 PM Larry Rosenman <ler@FreeBSD.org> wrote:

> revert the other patch and apply this one?
> [full quote of the earlier messages trimmed]

I have the same observation on my home file server.
I've not tried the patches (I will try that once I get time next week),
but the output of the dtrace script while doing 'zpool list -v' shows:

# ./dtrace.sh
vdev_path: n/a, vdev_max_asize: 0, vdev_asize: 0
vdev_path: n/a, vdev_max_asize: 23907502915584, vdev_asize: 23907504488448
vdev_path: /dev/gpt/Bay1.eli, vdev_max_asize: 3984583819264, vdev_asize: 3984583819264
vdev_path: /dev/gpt/Bay2.eli, vdev_max_asize: 3984583819264, vdev_asize: 3984583819264
vdev_path: /dev/gpt/Bay3.eli, vdev_max_asize: 3984583819264, vdev_asize: 3984583819264
vdev_path: /dev/gpt/Bay4.eli, vdev_max_asize: 3984583819264, vdev_asize: 3984583819264
vdev_path: /dev/gpt/Bay5.eli, vdev_max_asize: 3984583819264, vdev_asize: 3984583819264
vdev_path: /dev/gpt/Bay6.eli, vdev_max_asize: 3984583819264, vdev_asize: 3984583819264

The second line has the same discrepancy as above. This pool was created
without geli encryption first; then, while the pool was still empty, each
drive was offlined and replaced with its .eli counterpart. IIRC geli
keeps some metadata on the disk, shrinking the available space ever so
slightly, which seems to fit the cause proposed earlier in this thread.

MH