Date:      Wed, 22 Aug 2018 00:09:39 -0700
From:      Matthew Macy <mmacy@freebsd.org>
To:        freebsd-hackers@freebsd.org
Subject:   Re: epoch(9) background information?
Message-ID:  <CAPrugNptSR4dmqAqo82SskP-YNqkVnR+nFxqKjFQv-k_8wrhUQ@mail.gmail.com>
In-Reply-To: <26445c95-17c5-1a05-d290-0741d91b7721@embedded-brains.de>
References:  <db397431-2c4c-64de-634a-20f38ce6a60e@embedded-brains.de> <CALX0vxBAN6nckuAnYR3_mOfwbCjJCjHGuuOFh9njpxO+GUzo3w@mail.gmail.com> <fc088eb4-f306-674c-7404-ebe17a60a5f8@embedded-brains.de> <15e3f080-2f82-a243-80e9-f0a916445828@embedded-brains.de> <CAPrugNpZ5CihCW6hz3ztXAZrNn1qJNRsE=yGCvw1rzqNPQYRvg@mail.gmail.com> <26445c95-17c5-1a05-d290-0741d91b7721@embedded-brains.de>

>

> > Yes. Very. It is generally not permitted to hold a mutex across
> > epoch_wait(); that's why there's the special flag EPOCH_LOCKED. If you
> > have a very limited number of threads, you might want to have each
> > thread register its own record with the epoch. Then you
> > wouldn't need the CPU pinning. The pinning is just a way of providing a
> > limited number of records to an unbounded number of threads.
>
> Thanks for the prompt answer.
>
> Do I need a record per thread and per epoch? Could I use only one (maybe
> dependent on the nest level?) record per thread?
>
>


A record can only be registered with one epoch. And yes, you can have just
a single global epoch. However, the epoch_wait_preempt() time, or the time
until the gc task is run, is then determined by the longest epoch section
globally.
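
To make that concrete, here is a rough sketch of one record per thread
registered with a single global epoch, written against the ck_epoch API
rather than the epoch(9) kernel wrappers. Names like my_record, reader()
and writer_reclaim() are illustrative only, and the exact argument lists
of ck_epoch_register()/ck_epoch_synchronize() have shifted between ck
releases, so treat it as a sketch against current ck master:

#include <stdlib.h>
#include <ck_epoch.h>

/* One global epoch shared by all threads; call epoch_setup() once at startup. */
static ck_epoch_t global_epoch;

/* One record per thread, registered once with the single global epoch. */
static __thread ck_epoch_record_t *my_record;

static void
epoch_setup(void)
{

	ck_epoch_init(&global_epoch);
}

static void
thread_epoch_init(void)
{

	my_record = malloc(sizeof(*my_record));
	/*
	 * Current ck takes an optional per-record context pointer as the
	 * third argument; older releases take only (epoch, record).
	 */
	ck_epoch_register(&global_epoch, my_record, NULL);
}

/*
 * Reader side: keep the section as short as possible, since the longest
 * section anywhere in the system bounds how long writers wait below.
 */
static void
reader(void)
{

	ck_epoch_begin(my_record, NULL);
	/* ... dereference epoch-protected data ... */
	ck_epoch_end(my_record, NULL);
}

/*
 * Writer side: unlink the object, then block until every section that
 * might still see it has drained (the epoch_wait()-style path).
 */
static void
writer_reclaim(void *obj)
{

	/* ... remove obj from the shared structure first ... */
	ck_epoch_synchronize(my_record);
	free(obj);
}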

It may help to look at the ck_epoch man pages and the implementation in ck:
https://www.mankier.com/3/ck_epoch_register

https://github.com/concurrencykit/ck/blob/master/src/ck_epoch.c

https://github.com/concurrencykit/ck/blob/master/include/ck_epoch.h
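
For the deferred ("gc task") path mentioned above, the rough shape in ck is
ck_epoch_call() plus a periodic ck_epoch_poll() from a housekeeping thread.
Again a sketch only, assuming current ck master signatures (older releases
pass the ck_epoch_t explicitly to some of these calls), with node_retire()
and gc_task() as made-up names:

#include <stdlib.h>
#include <ck_epoch.h>

struct node {
	int value;
	ck_epoch_entry_t epoch_entry;
};

/* Defines node_container(): maps a ck_epoch_entry_t back to its node. */
CK_EPOCH_CONTAINER(struct node, epoch_entry, node_container)

static void
node_destroy(ck_epoch_entry_t *e)
{

	free(node_container(e));
}

/*
 * Writer side, deferred flavour: instead of blocking in
 * ck_epoch_synchronize(), queue the free on the record and return.
 */
static void
node_retire(ck_epoch_record_t *rec, struct node *n)
{

	/* ... unlink n from the shared structure first ... */
	ck_epoch_call(rec, &n->epoch_entry, node_destroy);
}

/*
 * The "gc task": periodically try to advance the global epoch and run
 * whatever deferred callbacks have become safe to dispatch.
 */
static void
gc_task(ck_epoch_record_t *rec)
{

	(void)ck_epoch_poll(rec);
}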



