Date: Mon, 16 Dec 2019 17:54:19 -0700
From: Kevin Bowling <kevin.bowling@kev009.com>
To: Steven Hartland <steven.hartland@multiplay.co.uk>
Cc: Warner Losh <imp@freebsd.org>, src-committers@freebsd.org,
 svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: Re: svn commit: r355837 - head/sys/cam
Message-ID: <CAK7dMtDF1g-36CPp7DJJO-oPTa_qc40BuoxYK8+kJ7O=DqB__A@mail.gmail.com>
In-Reply-To: <5b944742-51ff-861b-10bc-c01d67c02933@multiplay.co.uk>
References: <201912170013.xBH0DjZY090730@repo.freebsd.org>
 <5b944742-51ff-861b-10bc-c01d67c02933@multiplay.co.uk>
On Mon, Dec 16, 2019 at 5:44 PM Steven Hartland
<steven.hartland@multiplay.co.uk> wrote:

> Sticky keyboard there Warner? LOL
>
> On a more serious note: because the controllers lie about the underlying
> location of data, skipping TRIM requests can have a much more serious
> impact than one might think, depending on the drive, so this type of
> optimisation can significantly harm performance instead of increasing it.
>
> This was the main reason we sponsored the initial ZFS TRIM
> implementation; drive performance got so bad without TRIM that SSDs
> performed worse than HDDs.

Have you been able to test the new OpenZFS/ZoF TRIM? I notice the current
FreeBSD one gets quite beleaguered under highly concurrent poudriere as
the snapshots are being reaped, i.e. TRIMs totally swamp r/w ops on the
Plextor PCIe SSD I have. I haven't tried ZoF on this machine yet since it
is my main workstation, but I will do so once it is ready for mainline.

> Now obviously this was some time ago, but I wouldn't be surprised if
> there's bad hardware / firmware like this still being produced.
>
> Given that, it might be a good idea to make this optional, possibly even
> opt-in rather than opt-out?
>
> Regards
> Steve
>
> On 17/12/2019 00:13, Warner Losh wrote:
> > Author: imp
> > Date: Tue Dec 17 00:13:45 2019
> > New Revision: 355837
> > URL: https://svnweb.freebsd.org/changeset/base/355837
> >
> > Log:
> >   Implement bio_speedup
> >
> >   React to the BIO_SPEEDUP command in the cam io scheduler by completing
> >   as successful BIO_DELETE commands that are pending, up to the length
> >   passed down in the BIO_SPEEDUP command. The length passed down is a
> >   hint for how much space on the drive needs to be recovered. By
> >   completing the BIO_DELETE commands, this allows the upper layers to
> >   allocate and write to the blocks that were about to be trimmed. Since
> >   FreeBSD implements TRIMs as advisory, we can eliminate them and
> >   go directly to writing.
> >
> >   The biggest benefit from TRIMs comes from the drive being able to
> >   optimize its free block pool in the long run. There's little to no
> >   benefit in the short term, especially when the trim is followed by
> >   a write. Speedup lets us make this tradeoff.
> >
> >   Reviewed by: kirk, kib
> >   Sponsored by: Netflix
> >   Differential Revision: https://reviews.freebsd.org/D18351
> >
> > Modified:
> >   head/sys/cam/cam_iosched.c
> >
> > Modified: head/sys/cam/cam_iosched.c
> > ==============================================================================
> > --- head/sys/cam/cam_iosched.c	Tue Dec 17 00:13:40 2019	(r355836)
> > +++ head/sys/cam/cam_iosched.c	Tue Dec 17 00:13:45 2019	(r355837)
> > @@ -1534,6 +1534,41 @@ cam_iosched_queue_work(struct cam_iosched_softc *isc,
> >  {
> >
> >  	/*
> > +	 * A BIO_SPEEDUP from the upper layers means that they have a block
> > +	 * shortage. At present, this is only sent when we're trying to
> > +	 * allocate blocks, but have a shortage before giving up. bio_length is
> > +	 * the size of their shortage. We will complete just enough BIO_DELETEs
> > +	 * in the queue to satisfy the need. If bio_length is 0, we'll complete
> > +	 * them all. This allows the scheduler to delay BIO_DELETEs to improve
> > +	 * read/write performance without worrying about the upper layers.
> > +	 * When it's possibly a problem, we respond by pretending the
> > +	 * BIO_DELETEs just worked. We can't do anything about the BIO_DELETEs
> > +	 * in the hardware, though. We have to wait for them to complete.
> > +	 */
> > +	if (bp->bio_cmd == BIO_SPEEDUP) {
> > +		off_t len;
> > +		struct bio *nbp;
> > +
> > +		len = 0;
> > +		while (bioq_first(&isc->trim_queue) &&
> > +		    (bp->bio_length == 0 || len < bp->bio_length)) {
> > +			nbp = bioq_takefirst(&isc->trim_queue);
> > +			len += nbp->bio_length;
> > +			nbp->bio_error = 0;
> > +			biodone(nbp);
> > +		}
> > +		if (bp->bio_length > 0) {
> > +			if (bp->bio_length > len)
> > +				bp->bio_resid = bp->bio_length - len;
> > +			else
> > +				bp->bio_resid = 0;
> > +		}
> > +		bp->bio_error = 0;
> > +		biodone(bp);
> > +		return;
> > +	}
> > +
> > +	/*
> >  	 * If we get a BIO_FLUSH, and we're doing delayed BIO_DELETEs then we
> >  	 * set the last tick time to one less than the current ticks minus the
> >  	 * delay to force the BIO_DELETEs to be presented to the client driver.
> > @@ -1919,8 +1954,8 @@ DB_SHOW_COMMAND(iosched, cam_iosched_db_show)
> >  	db_printf("Trim Q len %d\n", biolen(&isc->trim_queue));
> >  	db_printf("read_bias: %d\n", isc->read_bias);
> >  	db_printf("current_read_bias: %d\n", isc->current_read_bias);
> > -	db_printf("Trims active %d\n", isc->pend_trim);
> > -	db_printf("Max trims active %d\n", isc->max_trim);
> > +	db_printf("Trims active %d\n", isc->pend_trims);
> > +	db_printf("Max trims active %d\n", isc->max_trims);
> >  }
> >  #endif
> >  #endif
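Out of curiosity I sketched what a caller might look like. This is a
minimal, hypothetical sketch, not code from r355837 or the review: the
helper name speedup_shortage, the wmesg string, and the assumption that
the caller already holds an attached GEOM consumer cp are all mine; only
the g_new_bio()/g_io_request()/biowait() plumbing is the standard bio
path, and the eventual in-tree callers may well look different.

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/bio.h>
    #include <geom/geom.h>

    /*
     * Hypothetical helper (not from r355837): ask the lower layers to
     * recover "shortage" bytes by completing queued trims early.
     * Returns how much of the shortage was NOT covered.
     */
    static off_t
    speedup_shortage(struct g_consumer *cp, off_t shortage)
    {
            struct bio *bp;
            off_t resid;

            bp = g_new_bio();
            if (bp == NULL)
                    return (shortage);      /* No bio; nothing recovered. */
            bp->bio_cmd = BIO_SPEEDUP;
            bp->bio_length = shortage;      /* 0 = "complete all queued trims". */
            bp->bio_offset = 0;
            bp->bio_done = NULL;            /* NULL bio_done lets biowait() sleep. */
            g_io_request(bp, cp);
            (void)biowait(bp, "speedup");
            resid = bp->bio_resid;          /* Part the scheduler couldn't cover. */
            g_destroy_bio(bp);
            return (resid);
    }

A filesystem hitting an allocation shortfall could call this with the
number of bytes it still needs and fall back to waiting for the in-flight
trims only for whatever resid is left over.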
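On Steve's opt-in question: one cheap way to prototype that would be an
ordinary loader/sysctl tunable gating the early completion. Again purely
a hypothetical sketch; the knob name, default, and description below are
invented here and are not anything in the commit:

    #include <sys/param.h>
    #include <sys/sysctl.h>

    SYSCTL_DECL(_kern_cam);

    /* Hypothetical knob: default 0 makes the behaviour opt-in. */
    static int cam_iosched_trim_speedup = 0;
    SYSCTL_INT(_kern_cam, OID_AUTO, iosched_trim_speedup, CTLFLAG_RWTUN,
        &cam_iosched_trim_speedup, 0,
        "Complete queued BIO_DELETEs early on BIO_SPEEDUP instead of issuing them");

The handler in cam_iosched_queue_work() would then test the flag, e.g.
if (bp->bio_cmd == BIO_SPEEDUP && cam_iosched_trim_speedup), and when
disabled simply complete the BIO_SPEEDUP with no effect so the queued
trims still reach hardware that genuinely needs them.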