Date:      Thu, 3 Oct 2019 10:51:09 +0200
From:      Peter Eriksson <pen@lysator.liu.se>
To:        Karli Sjöberg via freebsd-fs <freebsd-fs@freebsd.org>
Subject:   Re: zfs_unlinked_drain "forever"?
Message-ID:  <E96592D2-F0D8-4EAA-B592-1598DAC0D45E@lysator.liu.se>
In-Reply-To: <D9E95E33-E91E-4B13-BD8F-FC6A72D05A64@lysator.liu.se>
References:  <D9E95E33-E91E-4B13-BD8F-FC6A72D05A64@lysator.liu.se>

Weee.. I can report that _that_ “zfs mount” of that filesystem took ~18 hours. Now it is continuing with the rest…

# df -h | wc -l
   11563

(It’s currently mounting about 1 filesystem per second, so at that pace it’ll be done in… 12 hours).
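For reference, the back-of-the-envelope estimate works out like this; the 58295-filesystem total comes from the quoted mail below, the ~1 mount/second rate is the observation above, and note that "df | wc -l" slightly overcounts (header line plus non-ZFS mounts), so this is only rough:

```shell
# Rough mount-completion ETA sketch (assumptions: ~1 mount/second,
# 58295 filesystems total as reported in the original mail).
total=58295
mounted=$(df -h | wc -l)       # approximate count of mounted filesystems
remaining=$(( total - mounted ))
echo "~$(( remaining / 3600 )) hours to go"
```

With the 11563 mounts counted above, (58295 - 11563) / 3600 gives roughly the 12 hours mentioned.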

- Peter

> On 3 Oct 2019, at 09:51, Peter Eriksson <pen@lysator.liu.se> wrote:
>
> Just upgraded and rebooted one of our servers from 11.2 to 11.3-RELEASE-p3 and now it seems “stuck” at mounting the filesystems…
>
> “stuck” as in it _is_ doing something:
>
>=20
>> # zpool iostat 10
>>               capacity     operations    bandwidth
>> pool        alloc   free   read  write   read  write
>> ----------  -----  -----  -----  -----  -----  -----
>> DATA2        351T  11.3T      5    517  25.3K  4.97M
>> DATA3       71.4T  73.6T      0     62  1.84K   863K
>> DATA4        115T  12.2T      0      0  1.38K    152
>> zroot       34.2G  78.8G      0     61  5.68K   461K
>> ----------  -----  -----  -----  -----  -----  -----
>> DATA2        351T  11.3T      0    272      0  2.46M
>> DATA3       71.4T  73.6T      0      0      0      0
>> DATA4        115T  12.2T      0      0      0      0
>> zroot       34.2G  78.8G      0     47      0   200K
>> ----------  -----  -----  -----  -----  -----  -----
>
> It’s been doing these 272-300 write IOPS on pool DATA2 since around 15:00 yesterday, and has mounted 3781 filesystems out of 58295...
>
>
> A “procstat -kka” shows one “zfs mount” process currently doing:
>
>> 26508 102901 zfs            -                   mi_switch+0xeb sleepq_wait+0x2c _cv_wait+0x16e txg_wait_synced+0xa5
>> dmu_tx_assign+0x48 zfs_rmnode+0x122 zfs_freebsd_reclaim+0x4e VOP_RECLAIM_APV+0x80 vgonel+0x213
>> vrecycle+0x46 zfs_freebsd_inactive+0xd VOP_INACTIVE_APV+0x80 vinactive+0xf0 vputx+0x2c3
>> zfs_unlinked_drain+0x1b8 zfsvfs_setup+0x5e zfs_mount+0x623 vfs_domount+0x573
>=20
>=20
>> # ps auxwww|egrep zfs
>> root        17    0.0  0.0      0   2864  -  DL   14:53       7:03.54 [zfskern]
>> root       960    0.0  0.0 104716  31900  -  Is   14:55       0:00.05 /usr/sbin/mountd -r -S /etc/exports /etc/zfs/exports
>> root      4390    0.0  0.0   9040   5872  -  Is   14:57       0:00.02 /usr/sbin/zfsd
>> root     20330    0.0  0.0  22652  18388  -  S    15:07       0:48.90 perl /usr/local/bin/parallel --will-cite -j 40 zfs mount {}
>> root     26508    0.0  0.0   7804   5316  -  D    15:09       0:08.58 /sbin/zfs mount  DATA2/filur04.it.liu.se/DATA/staff/nikca89
>> root       101    0.0  0.0  20148  14860 u1- I    14:55       0:00.88 /bin/bash /sbin/zfs-speedmount
>> root       770    0.0  0.0   6732   2700  0  S+   09:45       0:00.00 egrep zfs
>
> (“zfs-speedmount” is a locally developed script that runs multiple “zfs mount” commands in parallel. Normally it speeds up mounting the filesystems on this server _a lot_, since “zfs mount” didn’t use to mount anything in parallel, and rebooting a server used to take multiple hours.)
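A minimal sketch of such a parallel mount helper, based on the GNU parallel invocation visible in the ps output above ("parallel --will-cite -j 40 zfs mount {}"). The filter selecting not-yet-mounted datasets is an assumption for illustration, not the actual zfs-speedmount logic:

```shell
#!/bin/sh
# Hypothetical sketch of a parallel "zfs mount" helper; the real
# zfs-speedmount script is not shown in this thread.
# Select datasets that are mountable (canmount=on, a real mountpoint)
# but not yet mounted, then mount up to 40 of them at a time.
zfs list -H -o name,mounted,canmount,mountpoint \
  | awk '$2 == "no" && $3 == "on" && $4 != "none" && $4 != "legacy" { print $1 }' \
  | parallel --will-cite -j 40 zfs mount {}
```

Note that per-dataset parallelism only helps mounting itself; it does not avoid the single-threaded zfs_unlinked_drain work inside one "zfs mount", which is what the stack trace above is stuck in.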
>
>=20
> A Google search found some work on zfs_unlinked_drain improvements in Nexenta and Linux ZFS:
>
>> https://github.com/zfsonlinux/zfs/pull/8142/commits
>
> Anyone know if this (or a similar) fix is in FreeBSD ZFS (11.3-RELEASE-p3)?
>
>=20
> - Peter
>
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"



