Date: Fri, 8 Mar 2019 19:38:52 +0000 (UTC)
From: Alexander Motin <mav@FreeBSD.org>
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: svn commit: r344934 - head/sys/cddl/contrib/opensolaris/uts/common/fs/zfs
Message-ID: <201903081938.x28JcqIB079551@repo.freebsd.org>
Author: mav
Date: Fri Mar 8 19:38:52 2019
New Revision: 344934
URL: https://svnweb.freebsd.org/changeset/base/344934

Log:
  Add a separate aggregation limit for non-rotating media.

  Before the sequential scrub patches, ZFS never aggregated I/Os above
  128KB.  Sequential scrub bumped that to 1MB, which makes sense for
  spinning disks, since it should reduce the number of head seeks.  For
  SSDs it makes much less sense, especially on FreeBSD, where due to the
  MAXPHYS limitation the device will likely still see a bunch of 128KB
  I/Os instead of one large one.  A stricter aggregation limit avoids
  allocating a large memory buffer and the memcpy to/from it, which
  becomes a serious problem when bandwidth reaches a few GB/s.

  MFC after:	1 month
  Sponsored by:	iXsystems, Inc.

Modified:
  head/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_queue.c

Modified: head/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_queue.c
==============================================================================
--- head/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_queue.c	Fri Mar 8 19:20:46 2019	(r344933)
+++ head/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_queue.c	Fri Mar 8 19:38:52 2019	(r344934)
@@ -178,6 +178,7 @@ int zfs_vdev_async_write_active_max_dirty_percent = 60
  * they aren't able to help us aggregate at this level.
  */
 int zfs_vdev_aggregation_limit = 1 << 20;
+int zfs_vdev_aggregation_limit_non_rotating = SPA_OLD_MAXBLOCKSIZE;
 int zfs_vdev_read_gap_limit = 32 << 10;
 int zfs_vdev_write_gap_limit = 4 << 10;
 
@@ -262,6 +263,9 @@ ZFS_VDEV_QUEUE_KNOB_MAX(initializing);
 SYSCTL_INT(_vfs_zfs_vdev, OID_AUTO, aggregation_limit, CTLFLAG_RWTUN,
     &zfs_vdev_aggregation_limit, 0,
     "I/O requests are aggregated up to this size");
+SYSCTL_INT(_vfs_zfs_vdev, OID_AUTO, aggregation_limit_non_rotating, CTLFLAG_RWTUN,
+    &zfs_vdev_aggregation_limit_non_rotating, 0,
+    "I/O requests are aggregated up to this size for non-rotating media");
 SYSCTL_INT(_vfs_zfs_vdev, OID_AUTO, read_gap_limit, CTLFLAG_RWTUN,
     &zfs_vdev_read_gap_limit, 0,
     "Acceptable gap between two reads being aggregated");
@@ -682,9 +686,13 @@ vdev_queue_aggregate(vdev_queue_t *vq, zio_t *zio)
 	ASSERT(MUTEX_HELD(&vq->vq_lock));
 
 	maxblocksize = spa_maxblocksize(vq->vq_vdev->vdev_spa);
-	limit = MAX(MIN(zfs_vdev_aggregation_limit, maxblocksize), 0);
+	if (vq->vq_vdev->vdev_rotation_rate == VDEV_RATE_NON_ROTATING)
+		limit = zfs_vdev_aggregation_limit_non_rotating;
+	else
+		limit = zfs_vdev_aggregation_limit;
+	limit = MAX(MIN(limit, maxblocksize), 0);
 
-	if (zio->io_flags & ZIO_FLAG_DONT_AGGREGATE || limit == 0)
+	if (zio->io_flags & ZIO_FLAG_DONT_AGGREGATE || zio->io_size >= limit)
 		return (NULL);
 
 	first = last = zio;
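For readers who want to experiment with the new knob, here is a minimal
userland sketch (not part of the commit itself) that reads and then lowers
vfs.zfs.vdev.aggregation_limit_non_rotating through sysctlbyname(3).  The
sysctl name follows from the SYSCTL_INT() declaration in the diff above;
the 256KB value used below is purely illustrative, not a recommendation.

#include <sys/types.h>
#include <sys/sysctl.h>
#include <err.h>
#include <stdio.h>

int
main(void)
{
	int cur, new_limit = 256 * 1024;	/* illustrative value only */
	size_t len = sizeof(cur);

	/* Read the current aggregation limit for non-rotating vdevs. */
	if (sysctlbyname("vfs.zfs.vdev.aggregation_limit_non_rotating",
	    &cur, &len, NULL, 0) != 0)
		err(1, "sysctlbyname (get)");
	printf("aggregation_limit_non_rotating: %d bytes\n", cur);

	/* Lower the limit; this requires root privileges. */
	if (sysctlbyname("vfs.zfs.vdev.aggregation_limit_non_rotating",
	    NULL, NULL, &new_limit, sizeof(new_limit)) != 0)
		err(1, "sysctlbyname (set)");
	return (0);
}

Since the knob is declared with CTLFLAG_RWTUN, it can equally be changed
with sysctl(8) at runtime or set as a loader tunable in /boot/loader.conf.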