Subject: Re: svn commit: r355831 - head/sys/cam/nvme
From: Steven Hartland <steven.hartland@multiplay.co.uk>
To: Warner Losh
Cc: Warner Losh, src-committers, svn-src-all, svn-src-head
Date: Wed, 18 Dec 2019 11:42:31 +0000

Thanks for all the feedback Warner, some more comments in line below;
I'd be interested in your thoughts.

On 17/12/2019 02:53, Warner Losh wrote:
> On Mon, Dec 16, 2019, 5:28 PM Steven Hartland wrote:
>
>     Be aware that ZFS already does a pretty decent job of this
>     already, so the statement about upper layers isn't true for all.
>     It even has different priorities for different request types, so
>     I'm a little concerned that doing it at both layers could cause
>     issues.
>
> ZFS' BIO_DELETE scheduling works well for enterprise drives, but needs
> tuning the further away you get from enterprise performance. I don't
> anticipate any effect on performance here since this is not enabled by
> default, unless I've messed something up (and if I have screwed this
> up, please let me know). I've honestly not tried to enable these
> things on ZFS.
>
>     In addition to this, if it's anything like SSDs, the number of
>     requests is only a small part of the story, with total trim size
>     being the other. In this case you could hit the total desired
>     size with just one BIO_DELETE request.
>
>     With this code, what's the impact of this?
>
> You're correct. It tends to be the number of segments and/or the size
> of the segments. This steers cases where the number of segments
> dominates. For cases where total size dominates, you're often better
> off using the I/O scheduler to rate limit the size of the trims.

This is also one of the reasons I introduced
kern.geom.dev.delete_max_sectors.

It would be worth, at some point, writing up a guide to all the logic
in the various layers with regard to how we treat TRIM requests. There
are quite a few elements now, and I don't believe it's clear where they
all are and what each is trying to achieve, which makes it easy for
them to start fighting against each other.

> This feature is designed to allow a large number of files to be
> deleted at once while doing the trims from them a little at a time to
> even out the load.
That's pretty similar in concept to our current ZFS TRIM code; only
time will tell, once the new upstream gets merged, whether this is
still the case.

    Regards
    Steve
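
P.S. For anyone following this in the archives: the idea behind
kern.geom.dev.delete_max_sectors is simply to carve one large delete
into bounded chunks before it heads further down the stack. A rough
userland sketch of that chunking logic (illustrative names and values
only, not the actual geom_dev code) looks something like this:

    /*
     * Illustrative sketch only -- not the real geom_dev implementation.
     * A single large delete request is split into chunks no bigger than
     * a configured sector cap before being passed further down.
     */
    #include <stdint.h>
    #include <stdio.h>

    #define SECTOR_SIZE     512u
    #define DELETE_MAX_SECT 262144u   /* stand-in for the sysctl value */

    static void
    issue_delete_chunked(uint64_t offset, uint64_t length)
    {
            uint64_t max_bytes = (uint64_t)DELETE_MAX_SECT * SECTOR_SIZE;

            while (length > 0) {
                    uint64_t chunk = length < max_bytes ? length : max_bytes;

                    /*
                     * In the kernel this would be a BIO_DELETE pushed down
                     * the stack; here we just print what would be issued.
                     */
                    printf("BIO_DELETE offset=%ju length=%ju\n",
                        (uintmax_t)offset, (uintmax_t)chunk);

                    offset += chunk;
                    length -= chunk;
            }
    }

    int
    main(void)
    {
            /* One 1 GiB delete ends up as several capped requests. */
            issue_delete_chunked(0, 1ULL << 30);
            return (0);
    }

Capping by size in this way complements capping by number of requests,
which is why both knobs tend to matter in practice.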