From owner-svn-src-stable@freebsd.org Wed Oct  3 14:59:40 2018
From: Alexander Motin <mav@FreeBSD.org>
Date: Wed, 3 Oct 2018 14:59:39 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org,
	svn-src-stable@freebsd.org, svn-src-stable-11@freebsd.org
Subject: svn commit: r339147 - in
	stable/11/sys/cddl/contrib/opensolaris/uts/common/fs/zfs: . sys

Author: mav
Date: Wed Oct  3 14:59:39 2018
New Revision: 339147
URL: https://svnweb.freebsd.org/changeset/base/339147

Log:
  MFC r337229: Reduce taskq and context-switch cost of zio pipe

  When doing a read from disk, ZFS creates 3 ZIO's: a zio_null(), the
  logical zio_read(), and then a physical zio.  Currently, each of these
  results in a separate taskq_dispatch(zio_execute).

  On high-read-iops workloads, this causes a significant performance
  impact.  By processing all 3 ZIO's in a single taskq entry, we reduce
  the overhead on taskq locking and context switching.  We accomplish
  this by allowing zio_done() to return a "next zio to execute" to
  zio_execute().

  This results in a ~12% performance increase for random reads, from
  96,000 iops to 108,000 iops (with recordsize=8k, on SSD's).

  Reviewed by:	Pavel Zakharov
  Reviewed-by:	Brian Behlendorf
  Reviewed by:	George Wilson
  Signed-off-by:	Matthew Ahrens
  External-issue:	DLPX-59292
  Closes #7736
  zfsonlinux/zfs@62840030a7dceaee013ddbcc1eebcfc7922edf7c
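Before the diff, here is a minimal user-space sketch of the pattern this
commit adopts: each pipeline stage returns the next zio to execute (or NULL
to stop), so one invocation of zio_execute() can walk a whole parent chain
instead of paying one taskq_dispatch() per zio. The toy_* names and the
single-parent model are illustrative stand-ins, not the kernel structures:

/*
 * Toy model of the new zio_execute() contract.  toy_done() plays the
 * role of a pipeline stage: it "completes" this zio and hands back the
 * parent to run next, the way zio_notify_parent() fills in
 * *next_to_executep instead of dispatching the parent to a taskq.
 */
#include <stddef.h>
#include <stdio.h>

typedef struct toy_zio {
	const char	*name;
	struct toy_zio	*parent;	/* one parent, for brevity */
} toy_zio_t;

static toy_zio_t *
toy_done(toy_zio_t *zio)
{
	printf("zio_done(%s)\n", zio->name);
	return (zio->parent);	/* next zio to execute, or NULL to stop */
}

/* Driver loop shaped like the new zio_execute(). */
static void
toy_execute(toy_zio_t *zio)
{
	while (zio != NULL)
		zio = toy_done(zio);
}

int
main(void)
{
	toy_zio_t root = { "zio_null", NULL };
	toy_zio_t logical = { "zio_read", &root };
	toy_zio_t physical = { "physical zio", &logical };

	/* One call completes all three zios: one task instead of three. */
	toy_execute(&physical);
	return (0);
}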
Modified:
  stable/11/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/sys/zio.h
  stable/11/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c
Directory Properties:
  stable/11/   (props changed)

Modified: stable/11/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/sys/zio.h
==============================================================================
--- stable/11/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/sys/zio.h	Wed Oct  3 14:59:03 2018	(r339146)
+++ stable/11/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/sys/zio.h	Wed Oct  3 14:59:39 2018	(r339147)
@@ -217,7 +217,7 @@ enum zio_child {
 #define	ZIO_CHILD_DDT_BIT		ZIO_CHILD_BIT(ZIO_CHILD_DDT)
 #define	ZIO_CHILD_LOGICAL_BIT		ZIO_CHILD_BIT(ZIO_CHILD_LOGICAL)
 #define	ZIO_CHILD_ALL_BITS					\
-	(ZIO_CHILD_VDEV_BIT | ZIO_CHILD_GANG_BIT | 		\
+	(ZIO_CHILD_VDEV_BIT | ZIO_CHILD_GANG_BIT |		\
 	ZIO_CHILD_DDT_BIT | ZIO_CHILD_LOGICAL_BIT)
 
 enum zio_wait_type {
@@ -356,7 +356,7 @@ typedef struct zio_transform {
 	struct zio_transform	*zt_next;
 } zio_transform_t;
 
-typedef int zio_pipe_stage_t(zio_t *zio);
+typedef zio_t *zio_pipe_stage_t(zio_t *zio);
 
 /*
  * The io_reexecute flags are distinct from io_flags because the child must

Modified: stable/11/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c
==============================================================================
--- stable/11/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c	Wed Oct  3 14:59:03 2018	(r339146)
+++ stable/11/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c	Wed Oct  3 14:59:39 2018	(r339147)
@@ -100,9 +100,6 @@ kmem_cache_t *zio_data_buf_cache[SPA_MAXBLOCKSIZE >> S
 extern vmem_t *zio_alloc_arena;
 #endif
 
-#define	ZIO_PIPELINE_CONTINUE		0x100
-#define	ZIO_PIPELINE_STOP		0x101
-
 #define	BP_SPANB(indblkshift, level) \
 	(((uint64_t)1) << ((level) * ((indblkshift) - SPA_BLKPTRSHIFT)))
 #define	COMPARE_META_LEVEL	0x80000000ul
@@ -539,7 +536,8 @@ zio_wait_for_children(zio_t *zio, uint8_t childbits, e
 }
 
 static void
-zio_notify_parent(zio_t *pio, zio_t *zio, enum zio_wait_type wait)
+zio_notify_parent(zio_t *pio, zio_t *zio, enum zio_wait_type wait,
+    zio_t **next_to_executep)
 {
 	uint64_t *countp = &pio->io_children[zio->io_child_type][wait];
 	int *errorp = &pio->io_child_error[zio->io_child_type];
@@ -558,13 +556,33 @@ zio_notify_parent(zio_t *pio, zio_t *zio, enum zio_wai
 		    ZIO_TASKQ_INTERRUPT;
 		pio->io_stall = NULL;
 		mutex_exit(&pio->io_lock);
+
 		/*
-		 * Dispatch the parent zio in its own taskq so that
-		 * the child can continue to make progress. This also
-		 * prevents overflowing the stack when we have deeply nested
-		 * parent-child relationships.
+		 * If we can tell the caller to execute this parent next, do
+		 * so.  Otherwise dispatch the parent zio as its own task.
+		 *
+		 * Having the caller execute the parent when possible reduces
+		 * locking on the zio taskq's, reduces context switch
+		 * overhead, and has no recursion penalty.  Note that one
+		 * read from disk typically causes at least 3 zio's: a
+		 * zio_null(), the logical zio_read(), and then a physical
+		 * zio.  When the physical ZIO completes, we are able to call
+		 * zio_done() on all 3 of these zio's from one invocation of
+		 * zio_execute() by returning the parent back to
+		 * zio_execute().  Since the parent isn't executed until this
+		 * thread returns back to zio_execute(), the caller should do
+		 * so promptly.
+		 *
+		 * In other cases, dispatching the parent prevents
+		 * overflowing the stack when we have deeply nested
+		 * parent-child relationships, as we do with the "mega zio"
+		 * of writes for spa_sync(), and the chain of ZIL blocks.
 		 */
-		zio_taskq_dispatch(pio, type, B_FALSE);
+		if (next_to_executep != NULL && *next_to_executep == NULL) {
+			*next_to_executep = pio;
+		} else {
+			zio_taskq_dispatch(pio, type, B_FALSE);
+		}
 	} else {
 		mutex_exit(&pio->io_lock);
 	}
@@ -1275,7 +1293,7 @@ zio_shrink(zio_t *zio, uint64_t size)
  * ==========================================================================
 */
 
-static int
+static zio_t *
 zio_read_bp_init(zio_t *zio)
 {
 	blkptr_t *bp = zio->io_bp;
@@ -1312,14 +1330,14 @@ zio_read_bp_init(zio_t *zio)
 	if (BP_GET_DEDUP(bp) && zio->io_child_type == ZIO_CHILD_LOGICAL)
 		zio->io_pipeline = ZIO_DDT_READ_PIPELINE;
 
-	return (ZIO_PIPELINE_CONTINUE);
+	return (zio);
 }
 
-static int
+static zio_t *
 zio_write_bp_init(zio_t *zio)
 {
 	if (!IO_IS_ALLOCATING(zio))
-		return (ZIO_PIPELINE_CONTINUE);
+		return (zio);
 
 	ASSERT(zio->io_child_type != ZIO_CHILD_DDT);
 
@@ -1334,7 +1352,7 @@ zio_write_bp_init(zio_t *zio)
 		zio->io_pipeline = ZIO_INTERLOCK_PIPELINE;
 
 		if (BP_IS_EMBEDDED(bp))
-			return (ZIO_PIPELINE_CONTINUE);
+			return (zio);
 
 		/*
 		 * If we've been overridden and nopwrite is set then
@@ -1345,13 +1363,13 @@ zio_write_bp_init(zio_t *zio)
 			ASSERT(!zp->zp_dedup);
 			ASSERT3U(BP_GET_CHECKSUM(bp), ==, zp->zp_checksum);
 			zio->io_flags |= ZIO_FLAG_NOPWRITE;
-			return (ZIO_PIPELINE_CONTINUE);
+			return (zio);
 		}
 
 		ASSERT(!zp->zp_nopwrite);
 
 		if (BP_IS_HOLE(bp) || !zp->zp_dedup)
-			return (ZIO_PIPELINE_CONTINUE);
+			return (zio);
 
 		ASSERT((zio_checksum_table[zp->zp_checksum].ci_flags &
 		    ZCHECKSUM_FLAG_DEDUP) || zp->zp_dedup_verify);
@@ -1359,7 +1377,7 @@
 		if (BP_GET_CHECKSUM(bp) == zp->zp_checksum) {
 			BP_SET_DEDUP(bp, 1);
 			zio->io_pipeline |= ZIO_STAGE_DDT_WRITE;
-			return (ZIO_PIPELINE_CONTINUE);
+			return (zio);
 		}
 
 		/*
@@ -1371,10 +1389,10 @@
 		zio->io_pipeline = zio->io_orig_pipeline;
 	}
 
-	return (ZIO_PIPELINE_CONTINUE);
+	return (zio);
 }
 
-static int
+static zio_t *
 zio_write_compress(zio_t *zio)
 {
 	spa_t *spa = zio->io_spa;
@@ -1393,11 +1411,11 @@ zio_write_compress(zio_t *zio)
 	 */
 	if (zio_wait_for_children(zio, ZIO_CHILD_LOGICAL_BIT |
 	    ZIO_CHILD_GANG_BIT, ZIO_WAIT_READY)) {
-		return (ZIO_PIPELINE_STOP);
+		return (NULL);
 	}
 
 	if (!IO_IS_ALLOCATING(zio))
-		return (ZIO_PIPELINE_CONTINUE);
+		return (zio);
 
 	if (zio->io_children_ready != NULL) {
 		/*
@@ -1456,7 +1474,7 @@ zio_write_compress(zio_t *zio)
 			zio->io_pipeline = ZIO_INTERLOCK_PIPELINE;
 			ASSERT(spa_feature_is_active(spa,
 			    SPA_FEATURE_EMBEDDED_DATA));
-			return (ZIO_PIPELINE_CONTINUE);
+			return (zio);
 		} else {
 			/*
 			 * Round up compressed size up to the ashift
@@ -1544,10 +1562,10 @@ zio_write_compress(zio_t *zio)
 			zio->io_pipeline |= ZIO_STAGE_NOP_WRITE;
 		}
 	}
-	return (ZIO_PIPELINE_CONTINUE);
+	return (zio);
 }
 
-static int
+static zio_t *
 zio_free_bp_init(zio_t *zio)
 {
 	blkptr_t *bp = zio->io_bp;
@@ -1559,7 +1577,7 @@ zio_free_bp_init(zio_t *zio)
 
 	ASSERT3P(zio->io_bp, ==, &zio->io_bp_copy);
 
-	return (ZIO_PIPELINE_CONTINUE);
+	return (zio);
 }
 
 /*
@@ -1633,12 +1651,12 @@ zio_taskq_member(zio_t *zio, zio_taskq_type_t q)
 	return (B_FALSE);
 }
 
-static int
+static zio_t *
 zio_issue_async(zio_t *zio)
 {
 	zio_taskq_dispatch(zio, ZIO_TASKQ_ISSUE, B_FALSE);
 
-	return (ZIO_PIPELINE_STOP);
+	return (NULL);
 }
 
 void
@@ -1720,15 +1738,14 @@ static zio_pipe_stage_t *zio_pipeline[];
 void
 zio_execute(zio_t *zio)
 {
-	zio->io_executor = curthread;
-
 	ASSERT3U(zio->io_queued_timestamp, >, 0);
 
 	while (zio->io_stage < ZIO_STAGE_DONE) {
 		enum zio_stage pipeline = zio->io_pipeline;
 		enum zio_stage stage = zio->io_stage;
-		int rv;
 
+		zio->io_executor = curthread;
+
 		ASSERT(!MUTEX_HELD(&zio->io_lock));
 		ASSERT(ISP2(stage));
 		ASSERT(zio->io_stall == NULL);
@@ -1758,12 +1775,16 @@ zio_execute(zio_t *zio)
 		zio->io_stage = stage;
 		zio->io_pipeline_trace |= zio->io_stage;
 
-		rv = zio_pipeline[highbit64(stage) - 1](zio);
-		if (rv == ZIO_PIPELINE_STOP)
-			return;
+		/*
+		 * The zio pipeline stage returns the next zio to execute
+		 * (typically the same as this one), or NULL if we should
+		 * stop.
+		 */
+		zio = zio_pipeline[highbit64(stage) - 1](zio);
 
-		ASSERT(rv == ZIO_PIPELINE_CONTINUE);
+		if (zio == NULL)
+			return;
 	}
 }
@@ -2226,7 +2247,7 @@ zio_gang_tree_issue(zio_t *pio, zio_gang_node_t *gn, b
 	zio_nowait(zio);
 }
 
-static int
+static zio_t *
 zio_gang_assemble(zio_t *zio)
 {
 	blkptr_t *bp = zio->io_bp;
@@ -2238,16 +2259,16 @@ zio_gang_assemble(zio_t *zio)
 
 	zio_gang_tree_assemble(zio, bp, &zio->io_gang_tree);
 
-	return (ZIO_PIPELINE_CONTINUE);
+	return (zio);
 }
 
-static int
+static zio_t *
 zio_gang_issue(zio_t *zio)
 {
 	blkptr_t *bp = zio->io_bp;
 
 	if (zio_wait_for_children(zio, ZIO_CHILD_GANG_BIT, ZIO_WAIT_DONE)) {
-		return (ZIO_PIPELINE_STOP);
+		return (NULL);
 	}
 
 	ASSERT(BP_IS_GANG(bp) && zio->io_gang_leader == zio);
@@ -2261,7 +2282,7 @@ zio_gang_issue(zio_t *zio)
 
 	zio->io_pipeline = ZIO_INTERLOCK_PIPELINE;
 
-	return (ZIO_PIPELINE_CONTINUE);
+	return (zio);
 }
 
 static void
@@ -2300,7 +2321,7 @@ zio_write_gang_done(zio_t *zio)
 	abd_put(zio->io_abd);
 }
 
-static int
+static zio_t *
 zio_write_gang_block(zio_t *pio)
 {
 	spa_t *spa = pio->io_spa;
@@ -2359,7 +2380,7 @@ zio_write_gang_block(zio_t *pio)
 			    gbh_copies - copies, pio->io_allocator, pio);
 		}
 		pio->io_error = error;
-		return (ZIO_PIPELINE_CONTINUE);
+		return (pio);
 	}
 
 	if (pio == gio) {
@@ -2426,7 +2447,7 @@ zio_write_gang_block(zio_t *pio)
 
 	zio_nowait(zio);
 
-	return (ZIO_PIPELINE_CONTINUE);
+	return (pio);
 }
 
 /*
@@ -2447,7 +2468,7 @@ zio_write_gang_block(zio_t *pio)
 * used for nopwrite, assuming that the salt and the checksums
 * themselves remain secret.
 */
-static int
+static zio_t *
 zio_nop_write(zio_t *zio)
 {
 	blkptr_t *bp = zio->io_bp;
@@ -2474,7 +2495,7 @@ zio_nop_write(zio_t *zio)
 	    BP_GET_COMPRESS(bp) != BP_GET_COMPRESS(bp_orig) ||
 	    BP_GET_DEDUP(bp) != BP_GET_DEDUP(bp_orig) ||
 	    zp->zp_copies != BP_GET_NDVAS(bp_orig))
-		return (ZIO_PIPELINE_CONTINUE);
+		return (zio);
 
 	/*
 	 * If the checksums match then reset the pipeline so that we
@@ -2494,7 +2515,7 @@ zio_nop_write(zio_t *zio)
 		zio->io_flags |= ZIO_FLAG_NOPWRITE;
 	}
 
-	return (ZIO_PIPELINE_CONTINUE);
+	return (zio);
 }
 
 /*
@@ -2522,7 +2543,7 @@ zio_ddt_child_read_done(zio_t *zio)
 	mutex_exit(&pio->io_lock);
 }
 
-static int
+static zio_t *
 zio_ddt_read_start(zio_t *zio)
 {
 	blkptr_t *bp = zio->io_bp;
@@ -2542,7 +2563,7 @@ zio_ddt_read_start(zio_t *zio)
 		zio->io_vsd = dde;
 
 		if (ddp_self == NULL)
-			return (ZIO_PIPELINE_CONTINUE);
+			return (zio);
 
 		for (int p = 0; p < DDT_PHYS_TYPES; p++, ddp++) {
 			if (ddp->ddp_phys_birth == 0 || ddp == ddp_self)
@@ -2555,23 +2576,23 @@ zio_ddt_read_start(zio_t *zio)
 			    zio->io_priority, ZIO_DDT_CHILD_FLAGS(zio) |
 			    ZIO_FLAG_DONT_PROPAGATE, &zio->io_bookmark));
 		}
-		return (ZIO_PIPELINE_CONTINUE);
+		return (zio);
 	}
 
 	zio_nowait(zio_read(zio, zio->io_spa, bp, zio->io_abd, zio->io_size,
 	    NULL, NULL, zio->io_priority, ZIO_DDT_CHILD_FLAGS(zio),
 	    &zio->io_bookmark));
 
-	return (ZIO_PIPELINE_CONTINUE);
+	return (zio);
 }
 
-static int
+static zio_t *
 zio_ddt_read_done(zio_t *zio)
 {
 	blkptr_t *bp = zio->io_bp;
 
 	if (zio_wait_for_children(zio, ZIO_CHILD_DDT_BIT, ZIO_WAIT_DONE)) {
-		return (ZIO_PIPELINE_STOP);
+		return (NULL);
 	}
 
 	ASSERT(BP_GET_DEDUP(bp));
@@ -2583,12 +2604,12 @@ zio_ddt_read_done(zio_t *zio)
 		ddt_entry_t *dde = zio->io_vsd;
 		if (ddt == NULL) {
 			ASSERT(spa_load_state(zio->io_spa) != SPA_LOAD_NONE);
-			return (ZIO_PIPELINE_CONTINUE);
+			return (zio);
 		}
 		if (dde == NULL) {
 			zio->io_stage = ZIO_STAGE_DDT_READ_START >> 1;
 			zio_taskq_dispatch(zio, ZIO_TASKQ_ISSUE, B_FALSE);
-			return (ZIO_PIPELINE_STOP);
+			return (NULL);
 		}
 		if (dde->dde_repair_abd != NULL) {
 			abd_copy(zio->io_abd, dde->dde_repair_abd,
@@ -2601,7 +2622,7 @@ zio_ddt_read_done(zio_t *zio)
 
 	ASSERT(zio->io_vsd == NULL);
 
-	return (ZIO_PIPELINE_CONTINUE);
+	return (zio);
 }
 
 static boolean_t
@@ -2759,7 +2780,7 @@ zio_ddt_ditto_write_done(zio_t *zio)
 	ddt_exit(ddt);
 }
 
-static int
+static zio_t *
 zio_ddt_write(zio_t *zio)
 {
 	spa_t *spa = zio->io_spa;
@@ -2803,7 +2824,7 @@ zio_ddt_write(zio_t *zio)
 		ASSERT(!BP_GET_DEDUP(bp));
 		zio->io_pipeline = ZIO_WRITE_PIPELINE;
 		ddt_exit(ddt);
-		return (ZIO_PIPELINE_CONTINUE);
+		return (zio);
 	}
 
 	ditto_copies = ddt_ditto_copies_needed(ddt, dde, ddp);
@@ -2829,7 +2850,7 @@ zio_ddt_write(zio_t *zio)
 			zio->io_bp_override = NULL;
 			BP_ZERO(bp);
 			ddt_exit(ddt);
-			return (ZIO_PIPELINE_CONTINUE);
+			return (zio);
 		}
 
 		dio = zio_write(zio, spa, txg, bp, zio->io_orig_abd,
@@ -2871,12 +2892,12 @@ zio_ddt_write(zio_t *zio)
 	if (dio)
 		zio_nowait(dio);
 
-	return (ZIO_PIPELINE_CONTINUE);
+	return (zio);
 }
 
 ddt_entry_t *freedde; /* for debugging */
 
-static int
+static zio_t *
 zio_ddt_free(zio_t *zio)
 {
 	spa_t *spa = zio->io_spa;
@@ -2894,7 +2915,7 @@ zio_ddt_free(zio_t *zio)
 		ddt_phys_decref(ddp);
 	ddt_exit(ddt);
 
-	return (ZIO_PIPELINE_CONTINUE);
+	return (zio);
 }
 
 /*
@@ -2932,7 +2953,7 @@ zio_io_to_allocate(spa_t *spa, int allocator)
 	return (zio);
 }
 
-static int
+static zio_t *
 zio_dva_throttle(zio_t *zio)
 {
 	spa_t *spa = zio->io_spa;
@@ -2942,7 +2963,7 @@ zio_dva_throttle(zio_t *zio)
 	    !spa_normal_class(zio->io_spa)->mc_alloc_throttle_enabled ||
 	    zio->io_child_type == ZIO_CHILD_GANG ||
 	    zio->io_flags & ZIO_FLAG_NODATA) {
-		return (ZIO_PIPELINE_CONTINUE);
+		return (zio);
 	}
 
 	ASSERT(zio->io_child_type > ZIO_CHILD_GANG);
@@ -2968,22 +2989,7 @@ zio_dva_throttle(zio_t *zio)
 	nio = zio_io_to_allocate(zio->io_spa, zio->io_allocator);
 	mutex_exit(&spa->spa_alloc_locks[zio->io_allocator]);
 
-	if (nio == zio)
-		return (ZIO_PIPELINE_CONTINUE);
-
-	if (nio != NULL) {
-		ASSERT(nio->io_stage == ZIO_STAGE_DVA_THROTTLE);
-		/*
-		 * We are passing control to a new zio so make sure that
-		 * it is processed by a different thread. We do this to
-		 * avoid stack overflows that can occur when parents are
-		 * throttled and children are making progress. We allow
-		 * it to go to the head of the taskq since it's already
-		 * been waiting.
-		 */
-		zio_taskq_dispatch(nio, ZIO_TASKQ_ISSUE, B_TRUE);
-	}
-	return (ZIO_PIPELINE_STOP);
+	return (nio);
 }
 
 void
@@ -3002,7 +3008,7 @@ zio_allocate_dispatch(spa_t *spa, int allocator)
 	zio_taskq_dispatch(zio, ZIO_TASKQ_ISSUE, B_TRUE);
 }
 
-static int
+static zio_t *
 zio_dva_allocate(zio_t *zio)
 {
 	spa_t *spa = zio->io_spa;
@@ -3045,18 +3051,18 @@ zio_dva_allocate(zio_t *zio)
 		zio->io_error = error;
 	}
 
-	return (ZIO_PIPELINE_CONTINUE);
+	return (zio);
 }
 
-static int
+static zio_t *
 zio_dva_free(zio_t *zio)
 {
 	metaslab_free(zio->io_spa, zio->io_bp, zio->io_txg, B_FALSE);
 
-	return (ZIO_PIPELINE_CONTINUE);
+	return (zio);
 }
 
-static int
+static zio_t *
 zio_dva_claim(zio_t *zio)
 {
 	int error;
@@ -3065,7 +3071,7 @@ zio_dva_claim(zio_t *zio)
 	if (error)
 		zio->io_error = error;
 
-	return (ZIO_PIPELINE_CONTINUE);
+	return (zio);
 }
 
 /*
@@ -3160,7 +3166,7 @@ zio_alloc_zil(spa_t *spa, uint64_t objset, uint64_t tx
 * force the underlying vdev layers to call either zio_execute() or
 * zio_interrupt() to ensure that the pipeline continues with the correct I/O.
 */
-static int
+static zio_t *
 zio_vdev_io_start(zio_t *zio)
 {
 	vdev_t *vd = zio->io_vd;
@@ -3179,13 +3185,13 @@ zio_vdev_io_start(zio_t *zio)
 		 * The mirror_ops handle multiple DVAs in a single BP.
 		 */
 		vdev_mirror_ops.vdev_op_io_start(zio);
-		return (ZIO_PIPELINE_STOP);
+		return (NULL);
 	}
 
 	if (vd->vdev_ops->vdev_op_leaf && zio->io_type == ZIO_TYPE_FREE &&
 	    zio->io_priority == ZIO_PRIORITY_NOW) {
 		trim_map_free(vd, zio->io_offset, zio->io_size, zio->io_txg);
-		return (ZIO_PIPELINE_CONTINUE);
+		return (zio);
 	}
 
 	ASSERT3P(zio->io_logical, !=, zio);
@@ -3299,24 +3305,24 @@ zio_vdev_io_start(zio_t *zio)
 	    !vdev_dtl_contains(vd, DTL_PARTIAL, zio->io_txg, 1)) {
 		ASSERT(zio->io_type == ZIO_TYPE_WRITE);
 		zio_vdev_io_bypass(zio);
-		return (ZIO_PIPELINE_CONTINUE);
+		return (zio);
 	}
 
 	if (vd->vdev_ops->vdev_op_leaf) {
 		switch (zio->io_type) {
 		case ZIO_TYPE_READ:
 			if (vdev_cache_read(zio))
-				return (ZIO_PIPELINE_CONTINUE);
+				return (zio);
 			/* FALLTHROUGH */
 		case ZIO_TYPE_WRITE:
 		case ZIO_TYPE_FREE:
 			if ((zio = vdev_queue_io(zio)) == NULL)
-				return (ZIO_PIPELINE_STOP);
+				return (NULL);
 
 			if (!vdev_accessible(vd, zio)) {
 				zio->io_error = SET_ERROR(ENXIO);
 				zio_interrupt(zio);
-				return (ZIO_PIPELINE_STOP);
+				return (NULL);
 			}
 			break;
 		}
@@ -3328,14 +3334,14 @@ zio_vdev_io_start(zio_t *zio)
 		if (zio->io_type == ZIO_TYPE_WRITE &&
 		    !(zio->io_flags & ZIO_FLAG_IO_REPAIR) &&
 		    !trim_map_write_start(zio))
-			return (ZIO_PIPELINE_STOP);
+			return (NULL);
 	}
 
 	vd->vdev_ops->vdev_op_io_start(zio);
-	return (ZIO_PIPELINE_STOP);
+	return (NULL);
 }
 
-static int
+static zio_t *
 zio_vdev_io_done(zio_t *zio)
 {
 	vdev_t *vd = zio->io_vd;
@@ -3343,7 +3349,7 @@ zio_vdev_io_done(zio_t *zio)
 	boolean_t unexpected_error = B_FALSE;
 
 	if (zio_wait_for_children(zio, ZIO_CHILD_VDEV_BIT, ZIO_WAIT_DONE)) {
-		return (ZIO_PIPELINE_STOP);
+		return (NULL);
 	}
 
 	ASSERT(zio->io_type == ZIO_TYPE_READ ||
@@ -3386,7 +3392,7 @@ zio_vdev_io_done(zio_t *zio)
 	if (unexpected_error)
 		VERIFY(vdev_probe(vd, zio) == NULL);
 
-	return (ZIO_PIPELINE_CONTINUE);
+	return (zio);
 }
 
 /*
@@ -3444,13 +3450,13 @@ zio_vsd_default_cksum_report(zio_t *zio, zio_cksum_rep
 	zcr->zcr_free = zio_buf_free;
 }
 
-static int
+static zio_t *
 zio_vdev_io_assess(zio_t *zio)
 {
 	vdev_t *vd = zio->io_vd;
 
 	if (zio_wait_for_children(zio, ZIO_CHILD_VDEV_BIT, ZIO_WAIT_DONE)) {
-		return (ZIO_PIPELINE_STOP);
+		return (NULL);
 	}
 
 	if (vd == NULL && !(zio->io_flags & ZIO_FLAG_CONFIG_WRITER))
@@ -3496,7 +3502,7 @@ zio_vdev_io_assess(zio_t *zio)
 		zio->io_stage = ZIO_STAGE_VDEV_IO_START >> 1;
 		zio_taskq_dispatch(zio, ZIO_TASKQ_ISSUE,
 		    zio_requeue_io_start_cut_in_line);
-		return (ZIO_PIPELINE_STOP);
+		return (NULL);
 	}
 
 	/*
@@ -3536,7 +3542,7 @@ zio_vdev_io_assess(zio_t *zio)
 		zio->io_physdone(zio->io_logical);
 	}
 
-	return (ZIO_PIPELINE_CONTINUE);
+	return (zio);
 }
 
 void
@@ -3571,7 +3577,7 @@ zio_vdev_io_bypass(zio_t *zio)
 * Generate and verify checksums
 * ==========================================================================
 */
-static int
+static zio_t *
 zio_checksum_generate(zio_t *zio)
 {
 	blkptr_t *bp = zio->io_bp;
@@ -3585,7 +3591,7 @@ zio_checksum_generate(zio_t *zio)
 		checksum = zio->io_prop.zp_checksum;
 
 		if (checksum == ZIO_CHECKSUM_OFF)
-			return (ZIO_PIPELINE_CONTINUE);
+			return (zio);
 
 		ASSERT(checksum == ZIO_CHECKSUM_LABEL);
 	} else {
@@ -3599,10 +3605,10 @@ zio_checksum_generate(zio_t *zio)
 
 	zio_checksum_compute(zio, checksum, zio->io_abd, zio->io_size);
 
-	return (ZIO_PIPELINE_CONTINUE);
+	return (zio);
 }
 
-static int
+static zio_t *
 zio_checksum_verify(zio_t *zio)
 {
 	zio_bad_cksum_t info;
@@ -3617,7 +3623,7 @@ zio_checksum_verify(zio_t *zio)
 		 * We're either verifying a label checksum, or nothing at all.
 		 */
 		if (zio->io_prop.zp_checksum == ZIO_CHECKSUM_OFF)
-			return (ZIO_PIPELINE_CONTINUE);
+			return (zio);
 
 		ASSERT(zio->io_prop.zp_checksum == ZIO_CHECKSUM_LABEL);
 	}
@@ -3632,7 +3638,7 @@ zio_checksum_verify(zio_t *zio)
 		}
 	}
 
-	return (ZIO_PIPELINE_CONTINUE);
+	return (zio);
 }
 
 /*
@@ -3675,7 +3681,7 @@ zio_worst_error(int e1, int e2)
 * I/O completion
 * ==========================================================================
 */
-static int
+static zio_t *
 zio_ready(zio_t *zio)
 {
 	blkptr_t *bp = zio->io_bp;
@@ -3684,7 +3690,7 @@ zio_ready(zio_t *zio)
 
 	if (zio_wait_for_children(zio, ZIO_CHILD_GANG_BIT | ZIO_CHILD_DDT_BIT,
 	    ZIO_WAIT_READY)) {
-		return (ZIO_PIPELINE_STOP);
+		return (NULL);
 	}
 
 	if (zio->io_ready) {
@@ -3730,7 +3736,7 @@ zio_ready(zio_t *zio)
 	 */
	for (; pio != NULL; pio = pio_next) {
 		pio_next = zio_walk_parents(zio, &zl);
-		zio_notify_parent(pio, zio, ZIO_WAIT_READY);
+		zio_notify_parent(pio, zio, ZIO_WAIT_READY, NULL);
 	}
 
 	if (zio->io_flags & ZIO_FLAG_NODATA) {
@@ -3746,7 +3752,7 @@ zio_ready(zio_t *zio)
 	    zio->io_spa->spa_syncing_txg == zio->io_txg)
 		zio_handle_ignored_writes(zio);
 
-	return (ZIO_PIPELINE_CONTINUE);
+	return (zio);
 }
 
 /*
@@ -3810,7 +3816,7 @@ zio_dva_throttle_done(zio_t *zio)
 	zio_allocate_dispatch(zio->io_spa, pio->io_allocator);
 }
 
-static int
+static zio_t *
 zio_done(zio_t *zio)
 {
 	spa_t *spa = zio->io_spa;
@@ -3827,7 +3833,7 @@ zio_done(zio_t *zio)
 	 * wait for them and then repeat this pipeline stage.
 	 */
 	if (zio_wait_for_children(zio, ZIO_CHILD_ALL_BITS, ZIO_WAIT_DONE)) {
-		return (ZIO_PIPELINE_STOP);
+		return (NULL);
 	}
 
 	/*
@@ -4041,7 +4047,12 @@ zio_done(zio_t *zio)
 			if ((pio->io_flags & ZIO_FLAG_GODFATHER) &&
 			    (zio->io_reexecute & ZIO_REEXECUTE_SUSPEND)) {
 				zio_remove_child(pio, zio, remove_zl);
-				zio_notify_parent(pio, zio, ZIO_WAIT_DONE);
+				/*
+				 * This is a rare code path, so we don't
+				 * bother with "next_to_execute".
+				 */
+				zio_notify_parent(pio, zio, ZIO_WAIT_DONE,
+				    NULL);
 			}
 		}
@@ -4053,7 +4064,11 @@ zio_done(zio_t *zio)
 		 */
 		ASSERT(!(zio->io_flags & ZIO_FLAG_GODFATHER));
 		zio->io_flags |= ZIO_FLAG_DONT_PROPAGATE;
-		zio_notify_parent(pio, zio, ZIO_WAIT_DONE);
+		/*
+		 * This is a rare code path, so we don't bother with
+		 * "next_to_execute".
+		 */
+		zio_notify_parent(pio, zio, ZIO_WAIT_DONE, NULL);
 	} else if (zio->io_reexecute & ZIO_REEXECUTE_SUSPEND) {
 		/*
 		 * We'd fail again if we reexecuted now, so suspend
@@ -4074,7 +4089,7 @@ zio_done(zio_t *zio)
 			    ZIO_TASKQ_ISSUE, (task_func_t *)zio_reexecute,
 			    zio, 0, &zio->io_tqent);
 		}
-		return (ZIO_PIPELINE_STOP);
+		return (NULL);
 	}
 
 	ASSERT(zio->io_child_count == 0);
@@ -4104,12 +4119,17 @@ zio_done(zio_t *zio)
 	zio->io_state[ZIO_WAIT_DONE] = 1;
 	mutex_exit(&zio->io_lock);
 
+	/*
+	 * We are done executing this zio.  We may want to execute a parent
+	 * next.  See the comment in zio_notify_parent().
+	 */
+	zio_t *next_to_execute = NULL;
 	zl = NULL;
 	for (pio = zio_walk_parents(zio, &zl); pio != NULL; pio = pio_next) {
 		zio_link_t *remove_zl = zl;
 		pio_next = zio_walk_parents(zio, &zl);
 		zio_remove_child(pio, zio, remove_zl);
-		zio_notify_parent(pio, zio, ZIO_WAIT_DONE);
+		zio_notify_parent(pio, zio, ZIO_WAIT_DONE, &next_to_execute);
 	}
 
 	if (zio->io_waiter != NULL) {
@@ -4121,7 +4141,7 @@ zio_done(zio_t *zio)
 		zio_destroy(zio);
 	}
 
-	return (ZIO_PIPELINE_STOP);
+	return (next_to_execute);
 }
 
 /*
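For reference, the parent hand-off in zio_notify_parent() reduces to the
following sketch. The toy_* names, the single counter per parent, and the
lack of locking are hypothetical simplifications; the real code walks every
parent under io_lock. The point it illustrates: only the first parent to
become runnable rides back through *next_to_executep, and any further
runnable parents still go through the taskq.

#include <stddef.h>
#include <stdio.h>

typedef struct toy_pio {
	const char	*name;
	int		children_outstanding;
} toy_pio_t;

/* Stand-in for zio_taskq_dispatch(pio, type, B_FALSE). */
static void
toy_dispatch(toy_pio_t *pio)
{
	printf("taskq dispatch: %s\n", pio->name);
}

/*
 * Core of the new zio_notify_parent(): once the last child is done,
 * hand the parent back through *next_to_executep if the caller has a
 * free slot, and dispatch it as its own task otherwise.
 */
static void
toy_notify_parent(toy_pio_t *pio, toy_pio_t **next_to_executep)
{
	if (--pio->children_outstanding != 0)
		return;		/* parent still waits on other children */

	if (next_to_executep != NULL && *next_to_executep == NULL)
		*next_to_executep = pio;	/* caller runs it next */
	else
		toy_dispatch(pio);		/* slot taken: own task */
}

int
main(void)
{
	toy_pio_t a = { "parent A", 1 }, b = { "parent B", 1 };
	toy_pio_t *next = NULL;

	toy_notify_parent(&a, &next);	/* fills the slot */
	toy_notify_parent(&b, &next);	/* slot taken: dispatched */
	printf("execute inline next: %s\n", next ? next->name : "none");
	return (0);
}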