Date:      Thu, 10 Sep 2020 05:58:56 +0700
From:      Eugene Grosbein <eugen@grosbein.net>
To:        "Eugene M. Zheganin" <emz@norma.perm.ru>, freebsd-stable@FreeBSD.org
Subject:   Re: spa_namespace_lock and concurrent zfs commands
Message-ID:  <0244c814-747b-1874-4931-40cd4647d9ee@grosbein.net>
In-Reply-To: <e458ba84-c044-1502-3672-c89e353ef303@norma.perm.ru>
References:  <e458ba84-c044-1502-3672-c89e353ef303@norma.perm.ru>

09.09.2020 19:29, Eugene M. Zheganin wrote:

> I'm using a sort of FreeBSD ZFS appliance with a custom API, and I'm suffering from huge timeouts when large numbers (dozens, actually) of concurrent zfs/zpool commands are issued (mostly get/create/destroy/snapshot/clone).
> 
> Are there any tunables that could help mitigate this?
> 
> I once took part in reporting https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=203906 , but that issue somehow got resolved at the time. Now I have another set of FreeBSD SANs and it's back. I've read https://wiki.freebsd.org/AndriyGapon/AvgZFSLocking and I realize this probably doesn't have a quick solution, but still...

I think this is some kind of bug/misfeature.
As a work-around, try using "zfs destroy -d" instead of plain "zfs destroy".
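
A minimal sketch of that work-around; "tank/vol@snap1" is just a placeholder
name, not something from your setup:

  # plain destroy waits until the snapshot's blocks are actually freed
  # (and trimmed, on SSD), holding things up:
  zfs destroy tank/vol@snap1

  # deferred destroy returns quickly: if the snapshot cannot be removed
  # immediately, it is marked for deferred deletion and the space is
  # reclaimed in the background:
  zfs destroy -d tank/vol@snap1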

I suffered from this too, back when I used a ZFS pool built on SSDs only
instead of HDDs plus an SSD for L2ARC, and the SSDs in use were really bad
at processing BIO_DELETE (TRIM) requests, with very long delays.

Take a look at "gstat -adI3s" output to monitor the number of delete operations and their timings.
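
For example (a hedged sketch; the batch flags and the "da0" device name are
assumptions, adjust them to your disks):

  # interactive view, refreshed every 3 seconds, showing only busy
  # providers (-a) and including the BIO_DELETE columns (-d):
  gstat -ad -I 3s

  # one-shot batch output (-b), filtered to a single device, which is
  # easier to log from a script:
  gstat -abd -f '^da0$' -I 3s

If the d/s and ms/d columns spike while your zfs/zpool commands stall,
the delays are likely coming from TRIM handling on the SSDs rather than
from the commands themselves.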



