From: Jeff Roberson <jeff@FreeBSD.org>
Date: Thu, 6 Feb 2020 20:10:22 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: svn commit: r357637 - head/sys/kern

Author: jeff
Date: Thu Feb  6 20:10:21 2020
New Revision: 357637
URL: https://svnweb.freebsd.org/changeset/base/357637

Log:
  Add some global counters for SMR.  These may eventually become per-smr
  counters.  In my stress test there is only one poll for every 15,000
  frees.  This means we are effectively amortizing the cache coherency
  overhead even with very high write rates (3M/s/core).

  Reviewed by:	markj, rlibby
  Differential Revision:	https://reviews.freebsd.org/D23463
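[A note on observing these counters: they are exported under the new
debug.smr sysctl node, so the amortization described in the log can be
checked from userland.  Below is a minimal sketch; the program name,
layout, and error handling are illustrative and not part of this commit.]

/*
 * smrstat.c: print the debug.smr counters added by r357637.
 * Illustrative only; not part of the commit.
 */
#include <sys/types.h>
#include <sys/sysctl.h>

#include <err.h>
#include <stdint.h>
#include <stdio.h>

static uint64_t
read_counter(const char *name)
{
	uint64_t val;
	size_t len = sizeof(val);

	/* Each SYSCTL_COUNTER_U64 node reads back as a single uint64_t. */
	if (sysctlbyname(name, &val, &len, NULL, 0) != 0)
		err(1, "sysctlbyname(%s)", name);
	return (val);
}

int
main(void)
{

	printf("advance      %ju\n",
	    (uintmax_t)read_counter("debug.smr.advance"));
	printf("advance_wait %ju\n",
	    (uintmax_t)read_counter("debug.smr.advance_wait"));
	printf("poll         %ju\n",
	    (uintmax_t)read_counter("debug.smr.poll"));
	printf("poll_scan    %ju\n",
	    (uintmax_t)read_counter("debug.smr.poll_scan"));
	return (0);
}

[Dividing debug.smr.poll by the workload's free count should recover
roughly the one-poll-per-15,000-frees figure quoted in the log.]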
Modified:
  head/sys/kern/subr_smr.c

Modified: head/sys/kern/subr_smr.c
==============================================================================
--- head/sys/kern/subr_smr.c	Thu Feb  6 18:51:36 2020	(r357636)
+++ head/sys/kern/subr_smr.c	Thu Feb  6 20:10:21 2020	(r357637)
@@ -30,11 +30,13 @@ __FBSDID("$FreeBSD$");
 
 #include <sys/param.h>
 #include <sys/systm.h>
-#include <sys/limits.h>
+#include <sys/counter.h>
 #include <sys/kernel.h>
+#include <sys/limits.h>
 #include <sys/proc.h>
 #include <sys/smp.h>
 #include <sys/smr.h>
+#include <sys/sysctl.h>
 
 #include <vm/uma.h>
 
@@ -162,6 +164,17 @@ static uma_zone_t smr_zone;
 #define	SMR_SEQ_MAX_ADVANCE	SMR_SEQ_MAX_DELTA / 2
 #endif
 
+static SYSCTL_NODE(_debug, OID_AUTO, smr, CTLFLAG_RW, NULL, "SMR Stats");
+static counter_u64_t advance = EARLY_COUNTER;
+SYSCTL_COUNTER_U64(_debug_smr, OID_AUTO, advance, CTLFLAG_RD, &advance, "");
+static counter_u64_t advance_wait = EARLY_COUNTER;
+SYSCTL_COUNTER_U64(_debug_smr, OID_AUTO, advance_wait, CTLFLAG_RD, &advance_wait, "");
+static counter_u64_t poll = EARLY_COUNTER;
+SYSCTL_COUNTER_U64(_debug_smr, OID_AUTO, poll, CTLFLAG_RD, &poll, "");
+static counter_u64_t poll_scan = EARLY_COUNTER;
+SYSCTL_COUNTER_U64(_debug_smr, OID_AUTO, poll_scan, CTLFLAG_RD, &poll_scan, "");
+
+
 /*
  * Advance the write sequence and return the new value for use as the
  * wait goal.  This guarantees that any changes made by the calling
@@ -197,14 +210,17 @@ smr_advance(smr_t smr)
 	 */
 	s = zpcpu_get(smr)->c_shared;
 	goal = atomic_fetchadd_int(&s->s_wr_seq, SMR_SEQ_INCR) + SMR_SEQ_INCR;
+	counter_u64_add(advance, 1);
 
 	/*
 	 * Force a synchronization here if the goal is getting too
 	 * far ahead of the read sequence number.  This keeps the
 	 * wrap detecting arithmetic working in pathological cases.
 	 */
-	if (goal - atomic_load_int(&s->s_rd_seq) >= SMR_SEQ_MAX_DELTA)
+	if (goal - atomic_load_int(&s->s_rd_seq) >= SMR_SEQ_MAX_DELTA) {
+		counter_u64_add(advance_wait, 1);
 		smr_wait(smr, goal - SMR_SEQ_MAX_ADVANCE);
+	}
 
 	return (goal);
 }
@@ -263,6 +279,7 @@ smr_poll(smr_t smr, smr_seq_t goal, bool wait)
 	success = true;
 	critical_enter();
 	s = zpcpu_get(smr)->c_shared;
+	counter_u64_add_protected(poll, 1);
 
 	/*
 	 * Acquire barrier loads s_wr_seq after s_rd_seq so that we can not
@@ -306,6 +323,7 @@ smr_poll(smr_t smr, smr_seq_t goal, bool wait)
 	 * gone inactive.  Keep track of the oldest sequence currently
 	 * active as rd_seq.
 	 */
+	counter_u64_add_protected(poll_scan, 1);
 	rd_seq = s_wr_seq;
 	CPU_FOREACH(i) {
 		c = zpcpu_get_cpu(smr, i);
@@ -366,7 +384,7 @@ smr_poll(smr_t smr, smr_seq_t goal, bool wait)
 	s_rd_seq = atomic_load_int(&s->s_rd_seq);
 	do {
 		if (SMR_SEQ_LEQ(rd_seq, s_rd_seq))
-			break;
+			goto out;
 	} while (atomic_fcmpset_int(&s->s_rd_seq, &s_rd_seq, rd_seq) == 0);
 
 out:
@@ -426,3 +444,14 @@ smr_init(void)
 	smr_zone = uma_zcreate("SMR CPU", sizeof(struct smr),
 	    NULL, NULL, NULL, NULL, (CACHE_LINE_SIZE * 2) - 1, UMA_ZONE_PCPU);
 }
+
+static void
+smr_init_counters(void *unused)
+{
+
+	advance = counter_u64_alloc(M_WAITOK);
+	advance_wait = counter_u64_alloc(M_WAITOK);
+	poll = counter_u64_alloc(M_WAITOK);
+	poll_scan = counter_u64_alloc(M_WAITOK);
+}
+SYSINIT(smr_counters, SI_SUB_CPU, SI_ORDER_ANY, smr_init_counters, NULL);
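[For readers unfamiliar with counter(9), the pattern the diff follows can
be condensed into the standalone fragment below.  The "example" names are
invented for illustration and the include list is a best guess; this is a
sketch of the idiom, not code from the commit.]

/*
 * A condensed sketch of the counter(9) pattern used above.  Static
 * counters start out pointing at EARLY_COUNTER, a boot-time
 * placeholder that makes increments safe before the per-CPU
 * allocator is running; a SYSINIT swaps in a real counter later.
 */
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/counter.h>
#include <sys/kernel.h>
#include <sys/malloc.h>
#include <sys/pcpu.h>		/* EARLY_COUNTER */
#include <sys/sysctl.h>

static SYSCTL_NODE(_debug, OID_AUTO, example, CTLFLAG_RW, NULL,
    "example stats");

static counter_u64_t example_hits = EARLY_COUNTER;
SYSCTL_COUNTER_U64(_debug_example, OID_AUTO, hits, CTLFLAG_RD,
    &example_hits, "");

/* Replace the placeholder once per-CPU allocation works. */
static void
example_init_counters(void *unused __unused)
{

	example_hits = counter_u64_alloc(M_WAITOK);
}
SYSINIT(example_counters, SI_SUB_CPU, SI_ORDER_ANY, example_init_counters,
    NULL);

static void
example_event(void)
{

	/*
	 * counter_u64_add() is safe in any context.  The cheaper
	 * counter_u64_add_protected() requires the caller to already
	 * be in a critical section, which is why the diff uses it for
	 * poll and poll_scan (smr_poll() runs after critical_enter())
	 * but uses the plain form in smr_advance().
	 */
	counter_u64_add(example_hits, 1);
}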