Date:      Tue, 17 Nov 2020 05:20:51 +0200
From:      Konstantin Belousov <kostikbel@gmail.com>
To:        Mateusz Guzik <mjguzik@gmail.com>
Cc:        src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject:   Re: svn commit: r367713 - head/sys/kern
Message-ID:  <X7NBkwpj9vxm5nvm@kib.kiev.ua>
In-Reply-To: <CAGudoHFh2MBNAg26fVzud-rgd5qPVZi7G-mM-dB-p8ba5=gHwQ@mail.gmail.com>
References:  <202011160309.0AG39JmP067464@repo.freebsd.org> <X7MuQVV49j7ynzdN@kib.kiev.ua> <CAGudoHFh2MBNAg26fVzud-rgd5qPVZi7G-mM-dB-p8ba5=gHwQ@mail.gmail.com>

On Tue, Nov 17, 2020 at 04:15:12AM +0100, Mateusz Guzik wrote:
> On 11/17/20, Konstantin Belousov <kostikbel@gmail.com> wrote:
> > On Mon, Nov 16, 2020 at 03:09:19AM +0000, Mateusz Guzik wrote:
> >> Author: mjg
> >> Date: Mon Nov 16 03:09:18 2020
> >> New Revision: 367713
> >> URL: https://svnweb.freebsd.org/changeset/base/367713
> >>
> >> Log:
> >>   select: replace reference counting with memory barriers in selfd
> >>
> >>   Refcounting was added to combat a race between selfdfree and
> >>   doselwakeup, but it adds avoidable overhead.
> >>
> >>   selfdfree detects it can free the object by ->sf_si == NULL, thus
> >>   we can ensure that the condition only holds after all accesses are
> >>   completed.
> >>
> >> Modified:
> >>   head/sys/kern/sys_generic.c
> >>
> >> Modified: head/sys/kern/sys_generic.c
> >> ==============================================================================
> >> --- head/sys/kern/sys_generic.c	Sun Nov 15 22:49:28 2020	(r367712)
> >> +++ head/sys/kern/sys_generic.c	Mon Nov 16 03:09:18 2020	(r367713)
> >> @@ -156,7 +156,6 @@ struct selfd {
> >>  	struct mtx		*sf_mtx;	/* Pointer to selinfo mtx. */
> >>  	struct seltd		*sf_td;		/* (k) owning seltd. */
> >>  	void			*sf_cookie;	/* (k) fd or pollfd. */
> >> -	u_int			sf_refs;
> >>  };
> >>
> >>  MALLOC_DEFINE(M_SELFD, "selfd", "selfd");
> >> @@ -1704,16 +1703,17 @@ static void
> >>  selfdfree(struct seltd *stp, struct selfd *sfp)
> >>  {
> >>  	STAILQ_REMOVE(&stp->st_selq, sfp, selfd, sf_link);
> >> -	if (sfp->sf_si != NULL) {
> >> +	/*
> >> +	 * Paired with doselwakeup.
> >> +	 */
> >> +	if (atomic_load_acq_ptr((uintptr_t *)&sfp->sf_si) != (uintptr_t)NULL) {
> > This could be != 0.
> >
> >>  		mtx_lock(sfp->sf_mtx);
> >>  		if (sfp->sf_si != NULL) {
> >>  			TAILQ_REMOVE(&sfp->sf_si->si_tdlist, sfp, sf_threads);
> >> -			refcount_release(&sfp->sf_refs);
> >>  		}
> >>  		mtx_unlock(sfp->sf_mtx);
> >>  	}
> >> -	if (refcount_release(&sfp->sf_refs))
> >> -		free(sfp, M_SELFD);
> >> +	free(sfp, M_SELFD);
> > What guarantees that doselwakeup() finished with sfp ?
> >
> 
> Release semantics provided by atomic_store_rel_ptr -- they mean the
> NULL store is the last access: neither the CPU nor the compiler will
> reorder the preceding loads/stores past it.
It only guarantees that if we observed NULL as the result of the load.


