Date: Wed, 18 Dec 2019 11:42:31 +0000
From: Steven Hartland <steven.hartland@multiplay.co.uk>
To: Warner Losh <imp@bsdimp.com>
Cc: Warner Losh <imp@freebsd.org>, src-committers <src-committers@freebsd.org>, svn-src-all <svn-src-all@freebsd.org>, svn-src-head <svn-src-head@freebsd.org>
Subject: Re: svn commit: r355831 - head/sys/cam/nvme
Message-ID: <8185819d-aa76-a184-4710-37bfc60c6cd8@multiplay.co.uk>
In-Reply-To: <CANCZdfqWZxdaMejQhxP52eVT3cAuDoFHoSfL6U0w=X6OwCRGiw@mail.gmail.com>
References: <201912170011.xBH0Bm5I088826@repo.freebsd.org> <4c5ce3c8-d074-f907-af03-20f4752f428c@multiplay.co.uk> <CANCZdfqWZxdaMejQhxP52eVT3cAuDoFHoSfL6U0w=X6OwCRGiw@mail.gmail.com>
Thanks for all the feedback Warner; some more comments are inline below, and I'd be interested in your thoughts.

On 17/12/2019 02:53, Warner Losh wrote:
> On Mon, Dec 16, 2019, 5:28 PM Steven Hartland
> <steven.hartland@multiplay.co.uk> wrote:
> > Be aware that ZFS already does a pretty decent job of this, so the
> > statement about upper layers isn't true for all of them. It even has
> > different priorities for different request types, so I'm a little
> > concerned that doing it at both layers could cause issues.
>
> ZFS' BIO_DELETE scheduling works well for enterprise drives, but needs
> tuning the further away you get from enterprise performance. I don't
> anticipate any effect on performance here since this is not enabled by
> default, unless I've messed something up (and if I have screwed this
> up, please let me know). I've honestly not tried to enable these
> things on ZFS.
>
> > In addition to this, if it's anything like SSDs, the number of
> > requests is only a small part of the story, with total trim size
> > being the other one. In this case you could hit the total desired
> > size with just one BIO_DELETE request.
> >
> > With this code, what's the impact of this?
>
> You're correct. It tends to be the number of segments and/or the size
> of the segments. This steers cases where the number of segments
> dominates. For cases where total size dominates, you're often better
> off using the I/O scheduler to rate limit the size of the trims.

This is also one of the reasons I introduced kern.geom.dev.delete_max_sectors.

It would be worth writing up a guide at some point to all the logic in the
various layers with regard to how we treat TRIM requests. There are quite a
few elements now and I don't believe it's clear where they all are and what
they are each trying to achieve, which makes it easy for them to start
fighting against each other.

> This feature is designed to allow a large number of files to be
> deleted at once while doing the trims from them a little at a time to
> even the load out.

That's pretty similar in concept to our current ZFS TRIM code; only time
will tell whether that's still the case once the new upstream TRIM gets
merged.

Regards
Steve
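P.S. For anyone not familiar with kern.geom.dev.delete_max_sectors, here is a
rough userland sketch of the kind of splitting such a cap implies. It is a
simplified illustration with made-up names (split_delete() and its
parameters), not the actual g_dev code; it just shows how one large
BIO_DELETE range ends up carved into several smaller requests.

    /*
     * Simplified illustration only, not the in-kernel g_dev code: shows
     * the arithmetic a delete_max_sectors style cap implies, i.e. how one
     * large BIO_DELETE range gets carved into several smaller requests.
     * split_delete() and its parameters are made-up names for this sketch.
     */
    #include <stdint.h>
    #include <stdio.h>

    #define SECTOR_SIZE	512ULL

    static void
    split_delete(uint64_t offset, uint64_t length, uint64_t max_sectors)
    {
    	uint64_t max_bytes = max_sectors * SECTOR_SIZE;

    	while (length > 0) {
    		uint64_t chunk = (length < max_bytes) ? length : max_bytes;

    		/* In the kernel this would be one BIO_DELETE per chunk. */
    		printf("BIO_DELETE offset=%ju length=%ju\n",
    		    (uintmax_t)offset, (uintmax_t)chunk);
    		offset += chunk;
    		length -= chunk;
    	}
    }

    int
    main(void)
    {
    	/* A single 1 GiB delete, capped at 262144 sectors (128 MiB) each. */
    	split_delete(0, 1ULL << 30, 262144);
    	return (0);
    }

Capping each request like this keeps a single huge delete from hogging the
device queue, which is much the same goal as the pacing discussed above,
just applied at a different layer.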