Date: Mon, 05 Nov 2012 11:09:04 -0700
From: Ian Lepore <freebsd@damnhippie.dyndns.org>
To: Warner Losh <imp@bsdimp.com>
Cc: "Rodney W. Grimes" <freebsd@pdx.rh.cn85.chatusa.com>,
    Juli Mallett <juli@clockworksquid.com>,
    "freebsd-mips@FreeBSD.org" <freebsd-mips@freebsd.org>
Subject: Re: CACHE_LINE_SIZE macro.
Message-ID: <1352138944.1120.187.camel@revolution.hippie.lan>
In-Reply-To: <E68E4C16-64E5-460E-B13C-164FDA89436C@bsdimp.com>
References: <CACVs6=_BrwJ19CPj7OqKvV8boHfujVWqn96u3VPUmZ040JpAeQ@mail.gmail.com>
    <201211041828.qA4ISomC076058@pdx.rh.CN85.ChatUSA.com>
    <CAF6rxgn-bNJOuvdiRj_UUGQUTRaeOt54OdzHOioNz5f566hoig@mail.gmail.com>
    <DAE462F0-9D85-4942-8826-C0709E36D3B7@bsdimp.com>
    <CAF6rxg=Et1d6u4RBCB88KibW_uiaRbNdb75v0TQOr-0BrEXV=g@mail.gmail.com>
    <B4225C25-BD43-423C-A1A2-C9FD4AC92ECB@bsdimp.com>
    <1352137087.1120.180.camel@revolution.hippie.lan>
    <E68E4C16-64E5-460E-B13C-164FDA89436C@bsdimp.com>
On Mon, 2012-11-05 at 10:58 -0700, Warner Losh wrote:
> On Nov 5, 2012, at 10:38 AM, Ian Lepore wrote:
>
> > On Mon, 2012-11-05 at 10:11 -0700, Warner Losh wrote:
> >> On Nov 5, 2012, at 10:01 AM, Eitan Adler wrote:
> >>
> >>> On 5 November 2012 11:49, Warner Losh <imp@bsdimp.com> wrote:
> >>>>> There has been some discussion recently about padding lock mutexes
> >>>>> to the cache line size in order to avoid false sharing between
> >>>>> CPUs. Some have claimed to see significant performance increases
> >>>>> as a result.
> >>>>
> >>>> Is that an out-of-kernel interface?
> >>>>
> >>>> If we did that, we'd have to make it run-time settable, because
> >>>> there's no one right answer for arm and MIPS cpus: they are all
> >>>> different.
> >>>
> >>> The discussion ended up with using a special parameter
> >>> CACHE_LINE_SIZE_LOCKS which is different than CACHE_LINE_SIZE. This
> >>> is necessary for other reasons as well (CACHE_LINE_SIZE_LOCKS may
> >>> take into account prefetching of cache lines, but CACHE_LINE_SIZE
> >>> wouldn't).
> >>>
> >>> I think the "correct" thing to do here is choose a reasonable, but
> >>> not-always-correct CACHE_LINE_SIZE_LOCKS and make CACHE_LINE_SIZE a
> >>> per-board constant (or run time setting, or whatever works). You
> >>> can't make it run-time settable as the padding is part of the ABI.
> >>>
> >>> For more details see
> >>> http://comments.gmane.org/gmane.os.freebsd.devel.cvs/483696
> >>> which contains the original discussion.
> >>>
> >>> Note - I was not involved.
> >>
> >> This is a kernel-only interface, so compile-time constants are fine
> >> there. What user-land visible interfaces are affected by this
> >> setting? The answer should be 'none'.
> >>
> >> Warner
> >
> > When I commented on Attilio's recent checkins concerning padding of
> > locks to cache line size, and the fact that the value changes per-cpu
> > and we're not well-positioned to handle that right now, his main
> > concern was modules matching the kernel.
> > I had suggested making the padding conditional on SMP (because
> > apparently there's no benefit to the padding in a UP kernel), but
> > then a module compiled for UP wouldn't work right on an SMP kernel,
> > and vice versa. I'm not sure why that's a problem; my solution to
> > that would be "So then don't do that."
>
> Don't make these structures compile-time aligned, but make them
> run-time aligned, is the only alternative. For arm this isn't currently
> a huge issue: armv4/v5 can be set to 32, and armv6 can be set to 64.
> Since you usually don't mix kernel bits from each, that's fine. It is a
> bigger deal on mips, where the various 64-bit architectures have
> different values. Since we compile mips and mips64 differently, the
> embedded stuff that Adrian is worried about won't necessarily be
> penalized. We can make these compile time.
>
> However, making it compile time makes it more optimal for some members
> of the mips64 family and less efficient for others. I'd have to see
> measurements to see how much.
>
> > What scares me the most is the mushy definition of what
> > CACHE_LINE_SIZE really means. There's nothing about the name that
> > says "This may not be the actual cache line size but it's probably
> > close," but increasingly I see people talking about it as if it had
> > such a malleable meaning. Is that consistent with the existing uses
> > in the code? Is it a good idea?
>
> There are a number of places where CACHE_LINE_SIZE is used, but mostly
> to compile-time align structures. Since these tend to be 64 vs 128 vs
> 256 typically, the effect is only a little increase in memory use. We
> don't flush data in this sized chunk, so we aren't polluting cache more
> by making this number bigger: we just have extra padding. Most of the
> interfaces that I saw aren't exposed and aren't KABI things.
>
> Warner

Right now, according to Attilio, only a few instances of padded lock
structures will exist, so that's not a problem.
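For anyone following along, the kind of padding being discussed can be
sketched like this. This is illustrative only: `toy_mtx` stands in for a
real lock type, and CACHE_LINE_SIZE is hard-wired to 64 here, whereas in
the kernel it is a per-architecture constant from the machine headers.

```c
#include <stddef.h>

/*
 * Illustrative only: a fixed 64-byte line.  In a real kernel this
 * value comes from per-arch headers and differs across arm/mips CPUs,
 * which is exactly the problem being discussed.
 */
#define CACHE_LINE_SIZE 64

/* A toy lock; stands in for a real mutex structure. */
struct toy_mtx {
	volatile unsigned long tm_lock;
};

/*
 * Pad the lock out to a full cache line and align it on a line
 * boundary, so that two adjacent locks never share a line.  Without
 * this, a CPU bouncing one lock's line also invalidates its
 * neighbor's copy of the other lock (false sharing).
 */
struct toy_mtx_padded {
	struct toy_mtx	tmp_mtx;
	char		tmp_pad[CACHE_LINE_SIZE - sizeof(struct toy_mtx)];
} __attribute__((__aligned__(CACHE_LINE_SIZE)));
```

Note that because the padded size is baked in at compile time, a module
built with one CACHE_LINE_SIZE sees different structure offsets than a
kernel built with another, which is the modules-matching-the-kernel
concern above.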
If this padding finds its way into a per-<something> lock instead of a
global lock, that's going to change. How about a per-vnode padded lock?
That could add up to a lot of wasted memory.

This whole padded-lock thing feels like it makes future trouble easy to
cause and hard to fix once it has happened, because the 800 lb gorilla
is amd64, and once something is shown to help that environment it's
going to stay in place regardless of what it does to tier 2.

The other thing that bugs me is that cache is a scarce resource on our
wimpy little platforms, and padding just ensures that we use it even
less effectively in the UP case. It seems more likely to kill
performance than improve it, because one would expect that when a lock
is embedded in a structure, there's going to be access to other data
nearby once the lock is acquired.

-- Ian