From: Scott Bennett <bennett@sdf.org>
To: eugen@grosbein.net
Cc: freebsd-stable@freebsd.org
Date: Mon, 02 Dec 2019 03:27:48 -0600
Subject: Re: Slow zfs destroy

Eugene Grosbein wrote:

> 30.11.2019 0:57, Scott Bennett wrote:
>
> > On Thu, 28 Nov 2019 23:18:37 +0700 Eugene Grosbein wrote:
> >
> >> 28.11.2019 20:34, Steven Hartland wrote:
> >>
> >>> It may well depend on the extent of the deletes occurring.
> >>>
> >>> Have you tried disabling TRIM to see if it eliminates the delay?
> >>
> >> This system used mfi(4) first, and mfi(4) does not support TRIM at all.  Performance was abysmal.
> >> Now it uses mrsas(4); after the switch I ran trim(8) on each SSD one by one and then re-added them to the RAID1.
> >> Disabling TRIM is not an option.
> >>
> >> Almost a year has passed since then, and I suspect the SSDs have few or no spare trimmed cells for some reason.
> >> Is there a documented way to check this?  Maybe some SMART attribute?
> >>
> > You neglected to state whether you used "zfs destroy datasetname" or
> > "zfs destroy -d datasetname".  If you used the former, then ZFS did what
> > you told it to do.  If you want the data set destroyed in the background,
> > you will need to include the "-d" option in the command.  (See the zfs(8)
> > man page at defer_destroy under "Native Properties".)
>
> The manual says "zfs destroy -d" is not for "background" but for "deferred" destruction.
> A plain "zfs destroy" would return EBUSY for a snapshot on hold (zfs hold)
> or bound to a clone, but "zfs destroy -d" would mark the snapshot for later
> destruction at the moment the clone is deleted or the user hold is released.
> Until then the snapshot is still usable and the destruction does not happen.
>
> All my snapshots are free of holds and clones and can be deleted,
> so "zfs destroy -d" is equivalent to "zfs destroy" for them.
>
     What you say is true, and I have seen ZFS accept a "zfs destroy -d" for a
held snapshot but do nothing until the hold is released, whereupon the destroy
begins.  However, that cannot be the whole story.  The vast majority of my
destroy operations are for snapshots, and what I have seen is that, without
the "-d", the command does not return until the disk activity of the destroy
finishes, whereas with the "-d" it returns within a couple of seconds, i.e.,
just long enough to get the operation going; the disk I/Os then continue, and
free space in the pool increases, until the work is done.  Perhaps the man
pages for zfs(8) and zpool-features(7) need some modification or clarification
on this matter.


                                  Scott Bennett, Comm. ASMELG, CFIAG
**********************************************************************
* Internet:   bennett at sdf.org   *xor*   bennett at freeshell.org  *
*--------------------------------------------------------------------*
* "A well regulated and disciplined militia, is at all times a good  *
* objection to the introduction of that bane of all free governments *
* -- a standing army."                                               *
*    -- Gov. John Hancock, New York Journal, 28 January 1790         *
**********************************************************************
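
A minimal sketch of the deferred-destroy behavior discussed above, for anyone
who wants to reproduce it; the pool "tank", the dataset "tank/demo", and the
hold tag "keep" are hypothetical names, not taken from this thread:

    # Create a throwaway dataset and a snapshot of it.
    zfs create tank/demo
    zfs snapshot tank/demo@snap1

    # Place a user hold on the snapshot.
    zfs hold keep tank/demo@snap1

    # Without -d, the destroy fails immediately with EBUSY, as Eugene notes.
    zfs destroy tank/demo@snap1

    # With -d, the command is accepted and the snapshot is merely marked for
    # deferred destruction; it remains usable while the hold is in place.
    zfs destroy -d tank/demo@snap1
    zfs get defer_destroy,userrefs tank/demo@snap1

    # Releasing the hold lets the actual destruction proceed.
    zfs release keep tank/demo@snap1

For a snapshot with no holds and no clones, which is Eugene's case, both forms
destroy the snapshot; the difference Scott describes is only in whether the
command returns before or after the freeing I/O completes.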