From: Mateusz Guzik <mjg@FreeBSD.org>
Date: Sat, 30 Nov 2019 17:22:10 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: svn commit: r355230 - in head/sys: kern sys
Message-Id: <201911301722.xAUHMAsx054170@repo.freebsd.org>
X-SVN-Group: head
X-SVN-Commit-Author: mjg
X-SVN-Commit-Revision: 355230
X-SVN-Commit-Repository: base
X-SVN-Commit-Paths: in head/sys: kern sys

Author: mjg
Date: Sat Nov 30 17:22:10 2019
New Revision: 355230
URL: https://svnweb.freebsd.org/changeset/base/355230

Log:
  Add a way to inject fences using IPIs
  
  A variant of this facility was already used by rmlocks, where IPIs
  enforce ordering.
  
  This allows fences to be elided where they are rarely needed and the
  cost of an IPI (should one be necessary) is cheaper.
  
  Reviewed by:	kib, jeff (previous version)
  Sponsored by:	The FreeBSD Foundation
  Differential Revision:	https://reviews.freebsd.org/D21740
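
For illustration only (this sketch is not part of the commit): the pattern the
log describes has the frequently executed side replace its explicit fence with
a compiler barrier, while the rarely executed side calls the new
cpus_fence_seq_cst() to force the ordering with an IPI broadcast.  Apart from
cpus_fence_seq_cst() and the existing __compiler_membar() and
atomic_thread_fence_seq_cst() primitives, all names below are hypothetical.

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/smp.h>

/* Hypothetical shared state for a Dekker-style store/load handshake. */
static int fast_flag;
static int slow_flag;

/* Hot path: runs often, so the seq_cst fence is elided. */
static int
fast_path(void)
{

	fast_flag = 1;
	__compiler_membar();	/* compiler barrier only; the CPU fence comes from the IPI */
	return (slow_flag);
}

/* Cold path: runs rarely, so it can afford to fence every CPU. */
static int
slow_path(void)
{

	slow_flag = 1;
	cpus_fence_seq_cst();	/* seq_cst fence on all CPUs, ordering fast_path() */
	return (fast_flag);
}

With this asymmetric arrangement at least one of the two sides should observe
the other's store, which is the guarantee an explicit
atomic_thread_fence_seq_cst() in fast_path() would otherwise have to provide.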

Modified:
  head/sys/kern/subr_smp.c
  head/sys/sys/smp.h

Modified: head/sys/kern/subr_smp.c
==============================================================================
--- head/sys/kern/subr_smp.c	Sat Nov 30 16:59:29 2019	(r355229)
+++ head/sys/kern/subr_smp.c	Sat Nov 30 17:22:10 2019	(r355230)
@@ -929,6 +929,66 @@ quiesce_all_cpus(const char *wmesg, int prio)
 	return quiesce_cpus(all_cpus, wmesg, prio);
 }
 
+/*
+ * Observe all CPUs not executing in a critical section.
+ * We are not in one ourselves, so the check for the current CPU is safe.
+ * If the observed thread changes to something else, we know the section
+ * was exited as well.
+ */
+void
+quiesce_all_critical(void)
+{
+	struct thread *td, *newtd;
+	struct pcpu *pcpu;
+	int cpu;
+
+	MPASS(curthread->td_critnest == 0);
+
+	CPU_FOREACH(cpu) {
+		pcpu = cpuid_to_pcpu[cpu];
+		td = pcpu->pc_curthread;
+		for (;;) {
+			if (td->td_critnest == 0)
+				break;
+			cpu_spinwait();
+			newtd = (struct thread *)
+			    atomic_load_acq_ptr((u_long *)&pcpu->pc_curthread);
+			if (td != newtd)
+				break;
+		}
+	}
+}
+
+static void
+cpus_fence_seq_cst_issue(void *arg __unused)
+{
+
+	atomic_thread_fence_seq_cst();
+}
+
+/*
+ * Send an IPI forcing a sequentially consistent fence.
+ *
+ * Allows replacement of an explicit fence with a compiler barrier.
+ * Trades a speedup during normal execution for a significant slowdown when
+ * the barrier is needed.
+ */
+void
+cpus_fence_seq_cst(void)
+{
+
+#ifdef SMP
+	smp_rendezvous(
+	    smp_no_rendezvous_barrier,
+	    cpus_fence_seq_cst_issue,
+	    smp_no_rendezvous_barrier,
+	    NULL
+	);
+#else
+	cpus_fence_seq_cst_issue(NULL);
+#endif
+}
+
 /* Extra care is taken with this sysctl because the data type is volatile */
 static int
 sysctl_kern_smp_active(SYSCTL_HANDLER_ARGS)

Modified: head/sys/sys/smp.h
==============================================================================
--- head/sys/sys/smp.h	Sat Nov 30 16:59:29 2019	(r355229)
+++ head/sys/sys/smp.h	Sat Nov 30 17:22:10 2019	(r355230)
@@ -264,6 +264,8 @@ extern struct mtx smp_ipi_mtx;
 int	quiesce_all_cpus(const char *, int);
 int	quiesce_cpus(cpuset_t, const char *, int);
+void	quiesce_all_critical(void);
+void	cpus_fence_seq_cst(void);
 void	smp_no_rendezvous_barrier(void *);
 void	smp_rendezvous(void (*)(void *), void (*)(void *),
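
As a further hedged illustration (again not part of the commit), the two new
primitives can be combined on a teardown path: readers touch a shared object
only inside a critical section and without fences, while the rare teardown
makes its stop flag visible everywhere with cpus_fence_seq_cst() and then
waits out in-flight readers with quiesce_all_critical() before freeing.  All
names below other than the two new functions and existing kernel primitives
(critical_enter()/critical_exit(), __compiler_membar(), MALLOC_DEFINE(),
free()) are hypothetical.

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/malloc.h>
#include <sys/smp.h>

static MALLOC_DEFINE(M_DEMO, "demo", "hypothetical demo object");

struct demo_cfg {
	int	value;
};

static struct demo_cfg *demo_cfg;	/* hypothetical shared object */
static bool demo_dying;

/* Hot path: no fences, just a critical section around the dereference. */
static int
demo_read(void)
{
	int v;

	v = 0;
	critical_enter();
	if (!demo_dying) {
		__compiler_membar();	/* keep the dereference after the flag check */
		v = demo_cfg->value;
	}
	critical_exit();
	return (v);
}

/* Teardown: rare, so it can pay for both the IPI broadcast and the spin. */
static void
demo_teardown(void)
{

	demo_dying = true;
	/* Make demo_dying visible on every CPU before going further. */
	cpus_fence_seq_cst();
	/* Wait until no CPU can still be inside a section that saw it clear. */
	quiesce_all_critical();
	free(demo_cfg, M_DEMO);
	demo_cfg = NULL;
}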