Date: Thu, 29 Aug 2024 16:04:10 -0400
From: Mark Johnston
To: Konstantin Belousov
Cc: src-committers@freebsd.org, dev-commits-src-all@freebsd.org, dev-commits-src-main@freebsd.org
Subject: Re: git: ff1ae3b3639d - main - rangelocks: restore caching of the single rl entry in the struct thread
In-Reply-To: <202408060406.47646dvJ004709@gitrepo.freebsd.org>
References: <202408060406.47646dvJ004709@gitrepo.freebsd.org>
List-Id: Commit messages for all branches of the src repository
List-Archive: https://lists.freebsd.org/archives/dev-commits-src-all

On Tue, Aug 06, 2024 at 04:06:39AM +0000, Konstantin Belousov wrote:
> The branch main has been updated by kib:
>
> URL: https://cgit.FreeBSD.org/src/commit/?id=ff1ae3b3639d39a6485cfc655bf565cd04b9caa6
>
> commit ff1ae3b3639d39a6485cfc655bf565cd04b9caa6
> Author:     Konstantin Belousov
> AuthorDate: 2023-08-23 23:29:50 +0000
> Commit:     Konstantin Belousov
> CommitDate: 2024-08-06 04:05:58 +0000
>
>     rangelocks: restore caching of the single rl entry in the struct thread
>
>     Reviewed by:    markj, Olivier Certner
>     Tested by:      pho
>     Sponsored by:   The FreeBSD Foundation
>     Differential revision: https://reviews.freebsd.org/D41787
> ---
>  sys/kern/kern_rangelock.c | 33 +++++++++++++++++++++++++++------
>  sys/kern/kern_thread.c    |  1 +
>  sys/sys/rangelock.h       |  1 +
>  3 files changed, 29 insertions(+), 6 deletions(-)
>
> diff --git a/sys/kern/kern_rangelock.c b/sys/kern/kern_rangelock.c
> index 355b2125dd9b..2f16569b19aa 100644
> --- a/sys/kern/kern_rangelock.c
> +++ b/sys/kern/kern_rangelock.c
> @@ -82,8 +82,15 @@ static struct rl_q_entry *
>  rlqentry_alloc(vm_ooffset_t start, vm_ooffset_t end, int flags)
>  {
>          struct rl_q_entry *e;
> -
> -        e = uma_zalloc_smr(rl_entry_zone, M_WAITOK);
> +        struct thread *td;
> +
> +        td = curthread;
> +        if (td->td_rlqe != NULL) {
> +                e = td->td_rlqe;
> +                td->td_rlqe = NULL;
> +        } else {
> +                e = uma_zalloc_smr(rl_entry_zone, M_WAITOK);
> +        }
>          e->rl_q_next = NULL;
>          e->rl_q_free = NULL;
>          e->rl_q_start = start;
> @@ -95,6 +102,12 @@ rlqentry_alloc(vm_ooffset_t start, vm_ooffset_t end, int flags)
>          return (e);
>  }
>
> +void
> +rangelock_entry_free(struct rl_q_entry *e)
> +{
> +        uma_zfree_smr(rl_entry_zone, e);
> +}
> +
>  void
>  rangelock_init(struct rangelock *lock)
>  {
> @@ -106,6 +119,7 @@ void
>  rangelock_destroy(struct rangelock *lock)
>  {
>          struct rl_q_entry *e, *ep;
> +        struct thread *td;
>
>          MPASS(!lock->sleepers);
>          for (e = (struct rl_q_entry *)atomic_load_ptr(&lock->head);
> @@ -386,8 +400,10 @@ rangelock_lock_int(struct rangelock *lock, bool trylock, vm_ooffset_t start,
>      vm_ooffset_t end, int locktype)
>  {
>          struct rl_q_entry *e, *free, *x, *xp;
> +        struct thread *td;
>          enum RL_INSERT_RES res;
>
> +        td = curthread;
>          for (res = RL_LOCK_RETRY; res == RL_LOCK_RETRY;) {
>                  free = NULL;
>                  e = rlqentry_alloc(start, end, locktype);
> @@ -401,10 +417,15 @@ rangelock_lock_int(struct rangelock *lock, bool trylock, vm_ooffset_t start,
>                          e = NULL;
>                  }
>                  for (x = free; x != NULL; x = xp) {
> -                        MPASS(!rl_e_is_marked(x));
> -                        xp = x->rl_q_free;
> -                        MPASS(!rl_e_is_marked(xp));
> -                        uma_zfree_smr(rl_entry_zone, x);
> +                        MPASS(!rl_e_is_marked(x));
> +                        xp = x->rl_q_free;
> +                        MPASS(!rl_e_is_marked(xp));
> +                        if (td->td_rlqe == NULL) {
> +                                smr_synchronize(rl_smr);

I think I had the impression that this was a rare case, but empirically it
is not.  As far as I can see, this smr_synchronize() call happens every
time an entry is freed, which could be very frequent.  smr_synchronize()
bumps the global sequence counter and blocks until all CPUs have had a
chance to observe the new value, so it is quite expensive.  I didn't try
benchmarking yet, but pwrite3 from will-it-scale should be a good
candidate.

Why do we maintain this per-thread cache at all?  IMO it would make more
sense to unconditionally free the entry here.  Or perhaps I'm missing
something.

> +                                td->td_rlqe = x;
> +                        } else {
> +                                uma_zfree_smr(rl_entry_zone, x);
> +                        }
>                  }
>          }
>          return (e);
> diff --git a/sys/kern/kern_thread.c b/sys/kern/kern_thread.c
> index c951e7297c89..9c3694feb945 100644
> --- a/sys/kern/kern_thread.c
> +++ b/sys/kern/kern_thread.c
> @@ -480,6 +480,7 @@ thread_fini(void *mem, int size)
>          EVENTHANDLER_DIRECT_INVOKE(thread_fini, td);
>          turnstile_free(td->td_turnstile);
>          sleepq_free(td->td_sleepqueue);
> +        rangelock_entry_free(td->td_rlqe);
>          umtx_thread_fini(td);
>          MPASS(td->td_sel == NULL);
>  }
> diff --git a/sys/sys/rangelock.h b/sys/sys/rangelock.h
> index 310371bef879..60f394b67677 100644
> --- a/sys/sys/rangelock.h
> +++ b/sys/sys/rangelock.h
> @@ -65,6 +65,7 @@ void *rangelock_wlock(struct rangelock *lock, vm_ooffset_t start,
>              vm_ooffset_t end);
>  void *rangelock_trywlock(struct rangelock *lock, vm_ooffset_t start,
>              vm_ooffset_t end);
> +void rangelock_entry_free(struct rl_q_entry *e);
>  #if defined(INVARIANTS) || defined(INVARIANT_SUPPORT)
>  void _rangelock_cookie_assert(void *cookie, int what, const char *file,
>              int line);
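
For concreteness, by "unconditionally free the entry" I mean restoring the
pre-commit body of that loop, roughly as sketched below (untested, just
lifted from the lines this change replaces in rangelock_lock_int()).  As I
understand it, uma_zfree_smr() already defers reuse of the entry until the
SMR grace period has expired, so no explicit smr_synchronize() should be
needed on this path:

        for (x = free; x != NULL; x = xp) {
                /* Entries on the local free list are never marked. */
                MPASS(!rl_e_is_marked(x));
                xp = x->rl_q_free;
                MPASS(!rl_e_is_marked(xp));
                /* Return the entry to the SMR zone; reuse is deferred. */
                uma_zfree_smr(rl_entry_zone, x);
        }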