From: Hans Petter Selasky <hselasky@FreeBSD.org>
To: src-committers@freebsd.org, svn-src-projects@freebsd.org
Subject: svn commit: r346093 - in projects/hps_callouts/sys: compat/linuxkpi/common/src dev/nand dev/oce dev/twa kern net netgraph netinet netinet6 netpfil/pf sys tests/callout_test
Date: Wed, 10 Apr 2019 18:17:27 +0000 (UTC)

Author: hselasky
Date: Wed Apr 10 18:17:27 2019
New Revision: 346093
URL: https://svnweb.freebsd.org/changeset/base/346093

Log:
  Define the callout return value as a two-bit state in a structure.
  This forces all clients to audit their use of these return values and
  makes the code in question more readable.

  This change also fixes some return value usage bugs in callout API
  clients where they were encountered.
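
  As a usage sketch (hypothetical client code: "sc", "my_timeout" and
  "obj_release" are placeholders; only callout_ret_t and its bits come
  from the sys/sys/callout.h change below):

	callout_ret_t ret;

	/* Rescheduling: the was_cancelled bit replaces the old "== 1" check. */
	ret = callout_reset(&sc->timer, hz, my_timeout, sc);
	if (ret.was_cancelled)
		obj_release(sc);	/* drop the reference held for the cancelled call */

	/* Stopping: check both state bits instead of the old 1/0/-1 value. */
	ret = callout_stop(&sc->timer);
	if (ret.was_cancelled)
		obj_release(sc);
	else if (ret.is_executing)
		callout_drain(&sc->timer);	/* wait for the running handler */
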
  Sponsored by:	Mellanox Technologies

Modified:
  projects/hps_callouts/sys/compat/linuxkpi/common/src/linux_hrtimer.c
  projects/hps_callouts/sys/compat/linuxkpi/common/src/linux_work.c
  projects/hps_callouts/sys/dev/nand/nandsim_chip.c
  projects/hps_callouts/sys/dev/oce/oce_if.c
  projects/hps_callouts/sys/dev/twa/tw_osl_freebsd.c
  projects/hps_callouts/sys/kern/kern_exit.c
  projects/hps_callouts/sys/kern/kern_timeout.c
  projects/hps_callouts/sys/kern/subr_taskqueue.c
  projects/hps_callouts/sys/net/if_llatbl.c
  projects/hps_callouts/sys/netgraph/ng_base.c
  projects/hps_callouts/sys/netinet/if_ether.c
  projects/hps_callouts/sys/netinet/in.c
  projects/hps_callouts/sys/netinet/tcp_timer.c
  projects/hps_callouts/sys/netinet/tcp_var.h
  projects/hps_callouts/sys/netinet6/nd6.c
  projects/hps_callouts/sys/netpfil/pf/if_pfsync.c
  projects/hps_callouts/sys/sys/callout.h
  projects/hps_callouts/sys/tests/callout_test/callout_test.c

Modified: projects/hps_callouts/sys/compat/linuxkpi/common/src/linux_hrtimer.c
==============================================================================
--- projects/hps_callouts/sys/compat/linuxkpi/common/src/linux_hrtimer.c	Wed Apr 10 18:15:36 2019	(r346092)
+++ projects/hps_callouts/sys/compat/linuxkpi/common/src/linux_hrtimer.c	Wed Apr 10 18:17:27 2019	(r346093)
@@ -73,7 +73,7 @@ int
 linux_hrtimer_cancel(struct hrtimer *hrtimer)
 {
 
-	return (callout_drain(&hrtimer->callout) > 0);
+	return (callout_drain(&hrtimer->callout).was_cancelled);
 }
 
 void

Modified: projects/hps_callouts/sys/compat/linuxkpi/common/src/linux_work.c
==============================================================================
--- projects/hps_callouts/sys/compat/linuxkpi/common/src/linux_work.c	Wed Apr 10 18:15:36 2019	(r346092)
+++ projects/hps_callouts/sys/compat/linuxkpi/common/src/linux_work.c	Wed Apr 10 18:17:27 2019	(r346093)
@@ -356,7 +356,7 @@ linux_cancel_timer(struct delayed_work *dwork, bool dr
 	bool cancelled;
 
 	mtx_lock(&dwork->timer.mtx);
-	cancelled = (callout_stop(&dwork->timer.callout) == 1);
+	cancelled = callout_stop(&dwork->timer.callout).was_cancelled;
 	mtx_unlock(&dwork->timer.mtx);
 
 	/* check if we should drain */

Modified: projects/hps_callouts/sys/dev/nand/nandsim_chip.c
==============================================================================
--- projects/hps_callouts/sys/dev/nand/nandsim_chip.c	Wed Apr 10 18:15:36 2019	(r346092)
+++ projects/hps_callouts/sys/dev/nand/nandsim_chip.c	Wed Apr 10 18:17:27 2019	(r346093)
@@ -403,7 +403,7 @@ nandsim_delay(struct nandsim_chip *chip, int timeout)
 
 	chip->sm_state = NANDSIM_STATE_TIMEOUT;
 	tm = (timeout/10000) * (hz / 100);
-	if (callout_reset(&chip->ns_callout, tm, nandsim_callout_eh, ev))
+	if (callout_reset(&chip->ns_callout, tm, nandsim_callout_eh, ev).was_cancelled)
 		return (-1);
 
 	delay.tv_sec = chip->read_delay / 1000000;

Modified: projects/hps_callouts/sys/dev/oce/oce_if.c
==============================================================================
--- projects/hps_callouts/sys/dev/oce/oce_if.c	Wed Apr 10 18:15:36 2019	(r346092)
+++ projects/hps_callouts/sys/dev/oce/oce_if.c	Wed Apr 10 18:17:27 2019	(r346093)
@@ -370,7 +370,7 @@ oce_attach(device_t dev)
 	oce_add_sysctls(sc);
 
 	callout_init(&sc->timer, CALLOUT_MPSAFE);
-	rc = callout_reset(&sc->timer, 2 * hz, oce_local_timer, sc);
+	rc = callout_reset(&sc->timer, 2 * hz, oce_local_timer, sc).was_cancelled;
 	if (rc)
 		goto stats_free;

Modified: projects/hps_callouts/sys/dev/twa/tw_osl_freebsd.c
==============================================================================
--- projects/hps_callouts/sys/dev/twa/tw_osl_freebsd.c	Wed Apr 10 18:15:36 2019	(r346092)
+++ projects/hps_callouts/sys/dev/twa/tw_osl_freebsd.c	Wed Apr 10 18:17:27 2019	(r346093)
@@ -478,14 +478,14 @@ twa_watchdog(TW_VOID *arg)
 		device_printf((sc)->bus_dev, "Watchdog rescheduled in 70 seconds\n");
 #endif /* TW_OSL_DEBUG */
 		my_watchdog_was_pending =
-			callout_reset(&(sc->watchdog_callout[i]), 70*hz, twa_watchdog, &sc->ctlr_handle);
+			callout_reset(&(sc->watchdog_callout[i]), 70*hz, twa_watchdog, &sc->ctlr_handle).was_cancelled;
 		tw_cl_reset_ctlr(ctlr_handle);
 #ifdef TW_OSL_DEBUG
 		device_printf((sc)->bus_dev, "Watchdog reset completed!\n");
 #endif /* TW_OSL_DEBUG */
 	} else if (driver_is_active) {
 		my_watchdog_was_pending =
-			callout_reset(&(sc->watchdog_callout[i]), 5*hz, twa_watchdog, &sc->ctlr_handle);
+			callout_reset(&(sc->watchdog_callout[i]), 5*hz, twa_watchdog, &sc->ctlr_handle).was_cancelled;
 	}
 #ifdef TW_OSL_DEBUG
 	if (i_need_a_reset || my_watchdog_was_pending)

Modified: projects/hps_callouts/sys/kern/kern_exit.c
==============================================================================
--- projects/hps_callouts/sys/kern/kern_exit.c	Wed Apr 10 18:15:36 2019	(r346092)
+++ projects/hps_callouts/sys/kern/kern_exit.c	Wed Apr 10 18:17:27 2019	(r346093)
@@ -208,6 +208,7 @@ exit1(struct thread *td, int rval, int signo)
 	struct proc *p, *nq, *q, *t;
 	struct thread *tdt;
 	ksiginfo_t *ksi, *ksi1;
+	int drain_callout;
 	int signal_parent;
 
 	mtx_assert(&Giant, MA_NOTOWNED);
@@ -363,15 +364,23 @@ exit1(struct thread *td, int rval, int signo)
 	 * Stop the real interval timer. If the handler is currently
 	 * executing, prevent it from rearming itself and let it finish.
 	 */
-	if (timevalisset(&p->p_realtimer.it_value) &&
-	    _callout_stop_safe(&p->p_itcallout, CS_EXECUTING, NULL) == 0) {
-		timevalclear(&p->p_realtimer.it_interval);
-		msleep(&p->p_itcallout, &p->p_mtx, PWAIT, "ritwait", 0);
-		KASSERT(!timevalisset(&p->p_realtimer.it_value),
-		    ("realtime timer is still armed"));
+	if (timevalisset(&p->p_realtimer.it_value)) {
+		/*
+		 * The p_itcallout is associated with a mutex and
+		 * stopping the callout should be atomic.
+		 */
+		drain_callout = callout_stop(&p->p_itcallout).is_executing;
+	} else {
+		drain_callout = 0;
 	}
-	PROC_UNLOCK(p);
+
+	/*
+	 * The mutex may still be in use after the callout_stop()
+	 * returns, which is handled by callout_drain()
+	 */
+	if (drain_callout)
+		callout_drain(&p->p_itcallout);
 
 	umtx_thread_exit(td);

Modified: projects/hps_callouts/sys/kern/kern_timeout.c
==============================================================================
--- projects/hps_callouts/sys/kern/kern_timeout.c	Wed Apr 10 18:15:36 2019	(r346092)
+++ projects/hps_callouts/sys/kern/kern_timeout.c	Wed Apr 10 18:17:27 2019	(r346093)
@@ -1015,16 +1015,16 @@ callout_when(sbintime_t sbt, sbintime_t precision, int
  * callout_pending() - returns truth if callout is still waiting for timeout
  * callout_deactivate() - marks the callout as having been serviced
  */
-int
+callout_ret_t
 callout_reset_sbt_on(struct callout *c, sbintime_t sbt, sbintime_t prec,
     void (*ftn)(void *), void *arg, int cpu, int flags)
 {
 	sbintime_t to_sbt, precision;
 	struct callout_cpu *cc;
-	int cancelled, direct;
+	callout_ret_t retval = {};
+	int direct;
 	int ignore_cpu=0;
 
-	cancelled = 0;
 	if (cpu == -1) {
 		ignore_cpu = 1;
 	} else if ((cpu >= MAXCPU) ||
@@ -1063,8 +1063,13 @@ callout_reset_sbt_on(struct callout *c, sbintime_t sbt
 		 * currently in progress. If there is a lock then we
 		 * can cancel the callout if it has not really started.
 		 */
-		if (c->c_lock != NULL && !cc_exec_cancel(cc, direct))
-			cancelled = cc_exec_cancel(cc, direct) = true;
+		retval.is_executing = 1;
+
+		if (c->c_lock != NULL && !cc_exec_cancel(cc, direct)) {
+			cc_exec_cancel(cc, direct) = true;
+			retval.was_cancelled = 1;
+		}
+
 		if (cc_exec_waiting(cc, direct) || cc_exec_drain(cc, direct)) {
 			/*
 			 * Someone has called callout_drain to kill this
 			 *
@@ -1073,8 +1078,7 @@ callout_reset_sbt_on(struct callout *c, sbintime_t sbt
 			CTR4(KTR_CALLOUT, "%s %p func %p arg %p",
 			    cancelled ? "cancelled" : "failed to cancel",
 			    c, c->c_func, c->c_arg);
-			CC_UNLOCK(cc);
-			return (cancelled);
+			goto done;
 		}
 #ifdef SMP
 		if (callout_migrating(c)) {
@@ -1090,9 +1094,8 @@ callout_reset_sbt_on(struct callout *c, sbintime_t sbt
 			cc_migration_prec(cc, direct) = precision;
 			cc_migration_func(cc, direct) = ftn;
 			cc_migration_arg(cc, direct) = arg;
-			cancelled = 1;
-			CC_UNLOCK(cc);
-			return (cancelled);
+			retval.was_cancelled = 1;
+			goto done;
 		}
 #endif
 	}
@@ -1104,7 +1107,7 @@ callout_reset_sbt_on(struct callout *c, sbintime_t sbt
 		} else {
 			TAILQ_REMOVE(&cc->cc_expireq, c, c_links.tqe);
 		}
-		cancelled = 1;
+		retval.was_cancelled = 1;
 		c->c_iflags &= ~ CALLOUT_PENDING;
 		c->c_flags &= ~ CALLOUT_ACTIVE;
 	}
@@ -1144,8 +1147,7 @@ callout_reset_sbt_on(struct callout *c, sbintime_t sbt
 			    "migration of %p func %p arg %p in %d.%08x to %u deferred",
 			    c, c->c_func, c->c_arg, (int)(to_sbt >> 32),
 			    (u_int)(to_sbt & 0xffffffff), cpu);
-			CC_UNLOCK(cc);
-			return (cancelled);
+			goto done;
 		}
 		cc = callout_cpu_switch(c, cc, cpu);
 	}
@@ -1155,38 +1157,42 @@ callout_reset_sbt_on(struct callout *c, sbintime_t sbt
 	CTR6(KTR_CALLOUT, "%sscheduled %p func %p arg %p in %d.%08x",
 	    cancelled ? "re" : "", c, c->c_func, c->c_arg,
 	    (int)(to_sbt >> 32), (u_int)(to_sbt & 0xffffffff));
+done:
 	CC_UNLOCK(cc);
-
-	return (cancelled);
+	return (retval);
 }
 
 /*
  * Common idioms that can be optimized in the future.
 */
-int
+callout_ret_t
 callout_schedule_on(struct callout *c, int to_ticks, int cpu)
 {
 	return callout_reset_on(c, to_ticks, c->c_func, c->c_arg, cpu);
 }
 
-int
+callout_ret_t
 callout_schedule(struct callout *c, int to_ticks)
 {
 	return callout_reset_on(c, to_ticks, c->c_func, c->c_arg, c->c_cpu);
 }
 
-int
+callout_ret_t
 _callout_stop_safe(struct callout *c, int flags, void (*drain)(void *))
 {
 	struct callout_cpu *cc, *old_cc;
 	struct lock_class *class;
+	callout_ret_t retval = {};
 	int direct, sq_locked, use_lock;
-	int cancelled, not_on_a_list;
+	int not_on_a_list;
 
 	if ((flags & CS_DRAIN) != 0)
 		WITNESS_WARN(WARN_GIANTOK | WARN_SLEEPOK, c->c_lock,
 		    "calling %s", __func__);
 
+	KASSERT((flags & CS_DRAIN) == 0 || drain == NULL,
+	    ("Cannot set drain callback when CS_DRAIN flag is set"));
+
 	/*
 	 * Some old subsystems don't hold Giant while running a callout_stop(),
 	 * so just discard this check for the moment.
@@ -1348,17 +1354,17 @@ again:
 			cc_migration_arg(cc, direct) = NULL;
 #endif
 		}
-		CC_UNLOCK(cc);
 		KASSERT(!sq_locked, ("sleepqueue chain locked"));
-		return (1);
+		retval.was_cancelled = 1;
+		retval.is_executing = 1;
+		goto done;
 	} else if (callout_migrating(c)) {
 		/*
 		 * The callout is currently being serviced
 		 * and the "next" callout is scheduled at
 		 * its completion with a migration. We remove
 		 * the migration flag so it *won't* get rescheduled,
-		 * but we can't stop the one thats running so
-		 * we return 0.
+		 * but we can't stop the one that's running.
 		 */
 		c->c_iflags &= ~CALLOUT_DFRMIGRATION;
 #ifdef SMP
@@ -1380,18 +1386,18 @@ again:
 			if (drain) {
 				cc_exec_drain(cc, direct) = drain;
 			}
-			CC_UNLOCK(cc);
-			return ((flags & CS_EXECUTING) != 0);
+			retval.is_executing = 1;
+			goto done;
+		} else {
+			CTR3(KTR_CALLOUT, "postponing stop %p func %p arg %p",
+			    c, c->c_func, c->c_arg);
+			if (drain) {
+				cc_exec_drain(cc, direct) = drain;
+			}
+			retval.is_executing = 1;
 		}
-		CTR3(KTR_CALLOUT, "failed to stop %p func %p arg %p",
-		    c, c->c_func, c->c_arg);
-		if (drain) {
-			cc_exec_drain(cc, direct) = drain;
-		}
 		KASSERT(!sq_locked, ("sleepqueue chain still locked"));
-		cancelled = ((flags & CS_EXECUTING) != 0);
-	} else
-		cancelled = 1;
+	}
 
 	if (sq_locked)
 		sleepq_release(&cc_exec_waiting(cc, direct));
@@ -1399,16 +1405,11 @@ again:
 
 	if ((c->c_iflags & CALLOUT_PENDING) == 0) {
 		CTR3(KTR_CALLOUT, "failed to stop %p func %p arg %p",
 		    c, c->c_func, c->c_arg);
-		/*
-		 * For not scheduled and not executing callout return
-		 * negative value.
-		 */
-		if (cc_exec_curr(cc, direct) != c)
-			cancelled = -1;
-		CC_UNLOCK(cc);
-		return (cancelled);
+		goto done;
 	}
 
+	retval.was_cancelled = 1;
+
 	c->c_iflags &= ~CALLOUT_PENDING;
 	c->c_flags &= ~CALLOUT_ACTIVE;
@@ -1424,8 +1425,9 @@ again:
 		}
 	}
 	callout_cc_del(c, cc);
+done:
 	CC_UNLOCK(cc);
-	return (cancelled);
+	return (retval);
 }
 
 void

Modified: projects/hps_callouts/sys/kern/subr_taskqueue.c
==============================================================================
--- projects/hps_callouts/sys/kern/subr_taskqueue.c	Wed Apr 10 18:15:36 2019	(r346092)
+++ projects/hps_callouts/sys/kern/subr_taskqueue.c	Wed Apr 10 18:17:27 2019	(r346093)
@@ -550,7 +550,7 @@ taskqueue_cancel_timeout(struct taskqueue *queue,
 	int error;
 
 	TQ_LOCK(queue);
-	pending = !!(callout_stop(&timeout_task->c) > 0);
+	pending = callout_stop(&timeout_task->c).was_cancelled;
 	error = taskqueue_cancel_locked(queue, &timeout_task->t, &pending1);
 	if ((timeout_task->f & DT_CALLOUT_ARMED) != 0) {
 		timeout_task->f &= ~DT_CALLOUT_ARMED;

Modified: projects/hps_callouts/sys/net/if_llatbl.c
==============================================================================
--- projects/hps_callouts/sys/net/if_llatbl.c	Wed Apr 10 18:15:36 2019	(r346092)
+++ projects/hps_callouts/sys/net/if_llatbl.c	Wed Apr 10 18:17:27 2019	(r346093)
@@ -438,7 +438,7 @@ llentry_free(struct llentry *lle)
 	pkts_dropped = lltable_drop_entry_queue(lle);
 
 	/* cancel timer */
-	if (callout_stop(&lle->lle_timer) > 0)
+	if (callout_stop(&lle->lle_timer).was_cancelled)
 		LLE_REMREF(lle);
 
 	LLE_FREE_LOCKED(lle);

Modified: projects/hps_callouts/sys/netgraph/ng_base.c
==============================================================================
--- projects/hps_callouts/sys/netgraph/ng_base.c	Wed Apr 10 18:15:36 2019	(r346092)
+++ projects/hps_callouts/sys/netgraph/ng_base.c	Wed Apr 10 18:17:27 2019	(r346093)
@@ -3795,7 +3795,7 @@ ng_callout(struct callout *c, node_p node, hook_p hook
 	NGI_ARG1(item) = arg1;
 	NGI_ARG2(item) = arg2;
 	oitem = c->c_arg;
-	if (callout_reset(c, ticks, &ng_callout_trampoline, item) == 1 &&
+	if (callout_reset(c, ticks, &ng_callout_trampoline, item).was_cancelled &&
 	    oitem != NULL)
 		NG_FREE_ITEM(oitem);
 	return (0);
@@ -3811,10 +3811,10 @@ ng_uncallout(struct callout *c, node_p node)
 	KASSERT(c != NULL, ("ng_uncallout: NULL callout"));
 	KASSERT(node != NULL, ("ng_uncallout: NULL node"));
 
-	rval = callout_stop(c);
+	rval = callout_stop(c).was_cancelled;
 	item = c->c_arg;
 	/* Do an extra check */
-	if ((rval > 0) && (c->c_func == &ng_callout_trampoline) &&
+	if ((rval != 0) && (c->c_func == &ng_callout_trampoline) &&
 	    (item != NULL) && (NGI_NODE(item) == node)) {
 		/*
 		 * We successfully removed it from the queue before it ran
@@ -3829,7 +3829,7 @@ ng_uncallout(struct callout *c, node_p node)
 	 * Callers only want to know if the callout was cancelled and
 	 * not draining or stopped.
 	 */
-	return (rval > 0);
+	return (rval);
 }
 
 /*

Modified: projects/hps_callouts/sys/netinet/if_ether.c
==============================================================================
--- projects/hps_callouts/sys/netinet/if_ether.c	Wed Apr 10 18:15:36 2019	(r346092)
+++ projects/hps_callouts/sys/netinet/if_ether.c	Wed Apr 10 18:17:27 2019	(r346093)
@@ -582,7 +582,7 @@ arpresolve_full(struct ifnet *ifp, int is_gw, int flag
 		LLE_ADDREF(la);
 		la->la_expire = time_uptime;
 		canceled = callout_reset(&la->lle_timer, hz * V_arpt_down,
-		    arptimer, la);
+		    arptimer, la).was_cancelled;
 		if (canceled)
 			LLE_REMREF(la);
 		la->la_asked++;
@@ -1272,7 +1272,7 @@ arp_mark_lle_reachable(struct llentry *la)
 		if (wtime < 0)
 			wtime = V_arpt_keep;
 		canceled = callout_reset(&la->lle_timer,
-		    hz * wtime, arptimer, la);
+		    hz * wtime, arptimer, la).was_cancelled;
 		if (canceled)
 			LLE_REMREF(la);
 	}
@@ -1384,7 +1384,7 @@ garp_rexmit(void *arg)
 		IF_ADDR_WLOCK(ia->ia_ifa.ifa_ifp);
 		rescheduled = callout_reset(&ia->ia_garp_timer,
 		    (1 << ia->ia_garp_count) * hz,
-		    garp_rexmit, ia);
+		    garp_rexmit, ia).was_cancelled;
 		IF_ADDR_WUNLOCK(ia->ia_ifa.ifa_ifp);
 		if (rescheduled) {
 			ifa_free(&ia->ia_ifa);
@@ -1420,7 +1420,7 @@ garp_timer_start(struct ifaddr *ifa)
 	IF_ADDR_WLOCK(ia->ia_ifa.ifa_ifp);
 	ia->ia_garp_count = 0;
 	if (callout_reset(&ia->ia_garp_timer, (1 << ia->ia_garp_count) * hz,
-	    garp_rexmit, ia) == 0) {
+	    garp_rexmit, ia).was_cancelled == 0) {
 		ifa_ref(ifa);
 	}
 	IF_ADDR_WUNLOCK(ia->ia_ifa.ifa_ifp);

Modified: projects/hps_callouts/sys/netinet/in.c
==============================================================================
--- projects/hps_callouts/sys/netinet/in.c	Wed Apr 10 18:15:36 2019	(r346092)
+++ projects/hps_callouts/sys/netinet/in.c	Wed Apr 10 18:17:27 2019	(r346093)
@@ -641,7 +641,7 @@ in_difaddr_ioctl(u_long cmd, caddr_t data, struct ifne
 	}
 
 	IF_ADDR_WLOCK(ifp);
-	if (callout_stop(&ia->ia_garp_timer) == 1) {
+	if (callout_stop(&ia->ia_garp_timer).was_cancelled) {
 		ifa_free(&ia->ia_ifa);
 	}
 	IF_ADDR_WUNLOCK(ifp);

Modified: projects/hps_callouts/sys/netinet/tcp_timer.c
==============================================================================
--- projects/hps_callouts/sys/netinet/tcp_timer.c	Wed Apr 10 18:15:36 2019	(r346092)
+++ projects/hps_callouts/sys/netinet/tcp_timer.c	Wed Apr 10 18:17:27 2019	(r346093)
@@ -924,7 +924,7 @@ tcp_timer_active(struct tcpcb *tp, uint32_t timer_type
  * the timer to possibly restart itself (keep and persist
  * especially do this).
 */
-int
+void
 tcp_timer_suspend(struct tcpcb *tp, uint32_t timer_type)
 {
 	struct callout *t_callout;
@@ -955,7 +955,7 @@ tcp_timer_suspend(struct tcpcb *tp, uint32_t timer_typ
 		panic("tp:%p bad timer_type 0x%x", tp, timer_type);
 	}
 	tp->t_timers->tt_flags |= t_flags;
-	return (callout_stop(t_callout));
+	callout_stop(t_callout);
 }
 
 void
@@ -1055,7 +1055,7 @@ tcp_timer_stop(struct tcpcb *tp, uint32_t timer_type)
 		panic("tp %p bad timer_type %#x", tp, timer_type);
 	}
 
-	if (callout_async_drain(t_callout, tcp_timer_discard) == 0) {
+	if (callout_async_drain(t_callout, tcp_timer_discard).is_executing) {
 		/*
 		 * Can't stop the callout, defer tcpcb actual deletion
 		 * to the last one. We do this using the async drain

Modified: projects/hps_callouts/sys/netinet/tcp_var.h
==============================================================================
--- projects/hps_callouts/sys/netinet/tcp_var.h	Wed Apr 10 18:15:36 2019	(r346092)
+++ projects/hps_callouts/sys/netinet/tcp_var.h	Wed Apr 10 18:17:27 2019	(r346093)
@@ -908,7 +908,7 @@ struct tcptemp *
 	 tcpip_maketemplate(struct inpcb *);
 void	 tcpip_fillheaders(struct inpcb *, void *, void *);
 void	 tcp_timer_activate(struct tcpcb *, uint32_t, u_int);
-int	 tcp_timer_suspend(struct tcpcb *, uint32_t);
+void	 tcp_timer_suspend(struct tcpcb *, uint32_t);
 void	 tcp_timers_unsuspend(struct tcpcb *, uint32_t);
 int	 tcp_timer_active(struct tcpcb *, uint32_t);
 void	 tcp_timer_stop(struct tcpcb *, uint32_t);

Modified: projects/hps_callouts/sys/netinet6/nd6.c
==============================================================================
--- projects/hps_callouts/sys/netinet6/nd6.c	Wed Apr 10 18:15:36 2019	(r346092)
+++ projects/hps_callouts/sys/netinet6/nd6.c	Wed Apr 10 18:17:27 2019	(r346093)
@@ -521,21 +521,21 @@ nd6_llinfo_settimer_locked(struct llentry *ln, long ti
 
 	if (tick < 0) {
 		ln->la_expire = 0;
 		ln->ln_ntick = 0;
-		canceled = callout_stop(&ln->lle_timer);
+		canceled = callout_stop(&ln->lle_timer).was_cancelled;
 	} else {
 		ln->la_expire = time_uptime + tick / hz;
 		LLE_ADDREF(ln);
 		if (tick > INT_MAX) {
 			ln->ln_ntick = tick - INT_MAX;
 			canceled = callout_reset(&ln->lle_timer, INT_MAX,
-			    nd6_llinfo_timer, ln);
+			    nd6_llinfo_timer, ln).was_cancelled;
 		} else {
 			ln->ln_ntick = 0;
 			canceled = callout_reset(&ln->lle_timer, tick,
-			    nd6_llinfo_timer, ln);
+			    nd6_llinfo_timer, ln).was_cancelled;
 		}
 	}
-	if (canceled > 0)
+	if (canceled)
 		LLE_REMREF(ln);
 }

Modified: projects/hps_callouts/sys/netpfil/pf/if_pfsync.c
==============================================================================
--- projects/hps_callouts/sys/netpfil/pf/if_pfsync.c	Wed Apr 10 18:15:36 2019	(r346092)
+++ projects/hps_callouts/sys/netpfil/pf/if_pfsync.c	Wed Apr 10 18:17:27 2019	(r346093)
@@ -405,15 +405,21 @@ pfsync_clone_destroy(struct ifnet *ifp)
 		TAILQ_REMOVE(&b->b_deferrals, pd, pd_entry);
 		b->b_deferred--;
-		if (callout_stop(&pd->pd_tmo) > 0) {
+		if (callout_stop(&pd->pd_tmo).was_cancelled) {
 			pf_release_state(pd->pd_st);
 			m_freem(pd->pd_m);
-			free(pd, M_PFSYNC);
 		} else {
 			pd->pd_refs++;
-			callout_drain(&pd->pd_tmo);
-			free(pd, M_PFSYNC);
 		}
+
+		/*
+		 * Must drain in either case.
+		 * The callout associated with the mutex
+		 * may still be in use.
+		 */
+		callout_drain(&pd->pd_tmo);
+
+		free(pd, M_PFSYNC);
 	}
 
 	callout_drain(&b->b_tmo);
@@ -1846,7 +1852,7 @@ pfsync_undefer_state(struct pf_state *st, int drop)
 
 	TAILQ_FOREACH(pd, &b->b_deferrals, pd_entry) {
 		if (pd->pd_st == st) {
-			if (callout_stop(&pd->pd_tmo) > 0)
+			if (callout_stop(&pd->pd_tmo).was_cancelled)
 				pfsync_undefer(pd, drop);
 
 			PFSYNC_BUCKET_UNLOCK(b);

Modified: projects/hps_callouts/sys/sys/callout.h
==============================================================================
--- projects/hps_callouts/sys/sys/callout.h	Wed Apr 10 18:15:36 2019	(r346092)
+++ projects/hps_callouts/sys/sys/callout.h	Wed Apr 10 18:17:27 2019	(r346093)
@@ -62,14 +62,20 @@
 #define	C_PRECALC	0x0400 /* event time is pre-calculated. */
 #define	C_CATCH		0x0800 /* catch signals, used by pause_sbt(9) */
 
+/* return value for all callout_xxx() functions */
+typedef struct callout_ret {
+	unsigned raw_value[0];
+	unsigned was_cancelled : 1;
+	unsigned is_executing : 1;
+	unsigned reserved : 30;
+} callout_ret_t;
+
 struct callout_handle {
 	struct callout *callout;
 };
 
 /* Flags for callout_stop_safe() */
 #define	CS_DRAIN	0x0001 /* callout_drain(), wait allowed */
-#define	CS_EXECUTING	0x0002 /* Positive return value indicates that the callout was executing */
 
 #ifdef _KERNEL
 /*
@@ -103,7 +109,7 @@ void	_callout_init_lock(struct callout *, struct lock_
 	_callout_init_lock((c), ((rw) != NULL) ? &(rw)->lock_object :	\
 	    NULL, (flags))
 #define	callout_pending(c)	((c)->c_iflags & CALLOUT_PENDING)
-int	callout_reset_sbt_on(struct callout *, sbintime_t, sbintime_t,
+callout_ret_t	callout_reset_sbt_on(struct callout *, sbintime_t, sbintime_t,
 	    void (*)(void *), void *, int, int);
 #define	callout_reset_sbt(c, sbt, pr, fn, arg, flags)			\
 	callout_reset_sbt_on((c), (sbt), (pr), (fn), (arg), -1, (flags))
@@ -124,12 +130,12 @@ int	callout_reset_sbt_on(struct callout *, sbintime_t,
 	callout_schedule_sbt_on((c), (sbt), (pr), -1, (flags))
 #define	callout_schedule_sbt_curcpu(c, sbt, pr, flags)			\
 	callout_schedule_sbt_on((c), (sbt), (pr), PCPU_GET(cpuid), (flags))
-int	callout_schedule(struct callout *, int);
-int	callout_schedule_on(struct callout *, int, int);
+callout_ret_t	callout_schedule(struct callout *, int);
+callout_ret_t	callout_schedule_on(struct callout *, int, int);
 #define	callout_schedule_curcpu(c, on_tick)				\
 	callout_schedule_on((c), (on_tick), PCPU_GET(cpuid))
 #define	callout_stop(c)		_callout_stop_safe(c, 0, NULL)
-int	_callout_stop_safe(struct callout *, int, void (*)(void *));
+callout_ret_t	_callout_stop_safe(struct callout *, int, void (*)(void *));
 void	callout_process(sbintime_t now);
 #define	callout_async_drain(c, d)					\
 	_callout_stop_safe(c, 0, d)

Modified: projects/hps_callouts/sys/tests/callout_test/callout_test.c
==============================================================================
--- projects/hps_callouts/sys/tests/callout_test/callout_test.c	Wed Apr 10 18:15:36 2019	(r346092)
+++ projects/hps_callouts/sys/tests/callout_test/callout_test.c	Wed Apr 10 18:17:27 2019	(r346093)
@@ -158,7 +158,7 @@ execute_the_co_test(struct callout_run *rn)
 	}
 	/* OK everyone is waiting and we have the lock */
 	for (i = 0; i < rn->co_number_callouts; i++) {
-		ret = callout_async_drain(&rn->co_array[i], drainit);
+		ret = callout_async_drain(&rn->co_array[i], drainit).is_executing;
 		if (ret) {
 			rn->cnt_one++;
 		} else {