Date: Thu, 10 Mar 2016 11:01:23 -0700
From: Alan Somers <asomers@freebsd.org>
To: Steven Hartland <smh@freebsd.org>
Cc: "src-committers@freebsd.org" <src-committers@freebsd.org>, "svn-src-all@freebsd.org" <svn-src-all@freebsd.org>, "svn-src-head@freebsd.org" <svn-src-head@freebsd.org>
Subject: Re: svn commit: r292074 - in head/sys/dev: nvd nvme
Message-ID: <CAOtMX2gAmt_--_vs6M=be9nShkCpKbwzK-K_N4t1MahMijyoog@mail.gmail.com>
In-Reply-To: <201512110206.tBB264Ad039486@repo.freebsd.org>
References: <201512110206.tBB264Ad039486@repo.freebsd.org>
Are you saying that Intel NVMe controllers perform poorly for all I/Os
that are less than 128KB, or just for I/Os of any size that cross a
128KB boundary?

On Thu, Dec 10, 2015 at 7:06 PM, Steven Hartland <smh@freebsd.org> wrote:
> Author: smh
> Date: Fri Dec 11 02:06:03 2015
> New Revision: 292074
> URL: https://svnweb.freebsd.org/changeset/base/292074
>
> Log:
>   Limit stripesize reported from nvd(4) to 4K
>
>   Intel NVMe controllers have a slow path for I/Os that span a 128KB
>   stripe boundary, but ZFS limits ashift, which is derived from
>   d_stripesize, to 13 (8KB), so we limit the stripesize reported to
>   geom(8) to 4KB.
>
>   This may result in a small number of additional I/Os requiring
>   splitting in nvme(4); however, the NVMe I/O path is very efficient,
>   so these additional I/Os will cause very minimal (if any) difference
>   in performance or CPU utilisation.
>
>   This can be controlled by the new sysctl
>   kern.nvme.max_optimal_sectorsize.
>
>   MFC after:    1 week
>   Sponsored by: Multiplay
>   Differential Revision: https://reviews.freebsd.org/D4446
>
> Modified:
>   head/sys/dev/nvd/nvd.c
>   head/sys/dev/nvme/nvme.h
>   head/sys/dev/nvme/nvme_ns.c
>   head/sys/dev/nvme/nvme_sysctl.c
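For readers following along: the commit log's reasoning (ZFS caps ashift at 13, i.e. 8KB, so reporting the true 128KB stripe gains nothing) can be sketched as follows. This is an illustrative model only, not ZFS source code; the function name `ashift_for` and the clamping details are assumptions based on the commit message.

```python
# Sketch of how ZFS would derive ashift from a device's reported
# stripesize, per the commit log: ashift = log2(stripesize), clamped
# to at most 13 (8 KiB). Hypothetical helper, not the ZFS implementation.
ASHIFT_MIN = 9    # 512-byte sectors, the traditional floor
ASHIFT_MAX = 13   # 8 KiB cap stated in the commit message

def ashift_for(stripesize: int) -> int:
    """Return the ashift ZFS would select for a reported stripesize."""
    ashift = max(stripesize.bit_length() - 1, ASHIFT_MIN)
    return min(ashift, ASHIFT_MAX)

# A true 128 KiB stripe would imply ashift 17, but ZFS clamps to 13,
# so reporting 128 KiB is no better than reporting 4 KiB (ashift 12).
print(ashift_for(128 * 1024))  # -> 13
print(ashift_for(4096))        # -> 12
```

This illustrates why nvd(4) can safely under-report the stripe size: any value at or above 8KB collapses to the same ashift anyway.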