Date: Wed, 20 Sep 2017 21:42:25 +0000 (UTC)
From: Warner Losh <imp@FreeBSD.org>
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: svn commit: r323834 - head/sys/dev/nvme
Message-ID: <201709202142.v8KLgPba066013@repo.freebsd.org>
Author: imp
Date: Wed Sep 20 21:42:25 2017
New Revision: 323834
URL: https://svnweb.freebsd.org/changeset/base/323834

Log:
  Fix queue depth for nda. 1/4 of the number of queues times queue entries
  is too limiting. It works up to about 4k IOPS / 3.0GB/s for hardware that
  can do 4.4k/3.2GB/s with nvd. 3/4 works better, though it highlights
  issues in the fairness of nda's choice of TRIM vs READ. That will be
  fixed separately.

Modified:
  head/sys/dev/nvme/nvme_ctrlr.c

Modified: head/sys/dev/nvme/nvme_ctrlr.c
==============================================================================
--- head/sys/dev/nvme/nvme_ctrlr.c	Wed Sep 20 21:29:54 2017	(r323833)
+++ head/sys/dev/nvme/nvme_ctrlr.c	Wed Sep 20 21:42:25 2017	(r323834)
@@ -151,7 +151,7 @@ nvme_ctrlr_construct_io_qpairs(struct nvme_controller
 	 * not a hard limit and will need to be revisitted when the upper layers
 	 * of the storage system grows multi-queue support.
 	 */
-	ctrlr->max_hw_pend_io = num_trackers * ctrlr->num_io_queues / 4;
+	ctrlr->max_hw_pend_io = num_trackers * ctrlr->num_io_queues * 3 / 4;
 
 	/*
 	 * This was calculated previously when setting up interrupts, but
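For illustration only (not part of the commit), the following standalone C
program mirrors the integer arithmetic changed above: the old cap allowed 1/4
of the total trackers to be pending in hardware, the new cap allows 3/4. The
values chosen for num_trackers and num_io_queues are hypothetical examples;
the real values come from the controller's queue-entry count and the
interrupt/queue setup in nvme_ctrlr_construct_io_qpairs().

/*
 * Minimal userland sketch of the max_hw_pend_io change in r323834.
 * The example inputs below are hypothetical, not taken from real hardware.
 */
#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	uint32_t num_trackers = 256;	/* hypothetical trackers per I/O qpair */
	uint32_t num_io_queues = 4;	/* hypothetical number of I/O queues */

	/* Old cap: 1/4 of total trackers -- too limiting for fast devices. */
	uint32_t old_cap = num_trackers * num_io_queues / 4;

	/* New cap after r323834: 3/4 of total trackers. */
	uint32_t new_cap = num_trackers * num_io_queues * 3 / 4;

	printf("old max_hw_pend_io = %u\n", old_cap);
	printf("new max_hw_pend_io = %u\n", new_cap);
	return (0);
}

With these example inputs the cap rises from 256 to 768 outstanding I/Os,
which is why the larger fraction removes the artificial throughput ceiling
described in the log message.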