Date: Sun, 26 Apr 2015 13:30:51 -0700
From: Adrian Chadd <adrian@freebsd.org>
To: "freebsd-arch@freebsd.org" <freebsd-arch@freebsd.org>
Subject: Re: RFT: numa policy branch
Message-ID: <CAJ-VmonyjDVb_sdKZVdDLy4vYcRWjVKFydsx6gNymmcqpPYKeA@mail.gmail.com>
In-Reply-To: <CAJ-VmonCp7VDWrSXhiQ5PwcCogPM8NG6tDjQRy8osUQw=uUYKQ@mail.gmail.com>
References: <CAJ-VmomL9hZZHPtZ3+TdujHmo5UQfFhm59vQKUbxW++-TGobmg@mail.gmail.com> <CAJ-VmokPd=CUAfqmjWPns+pj6zKbpF55tDn2_u8JPNzaK7F1Pw@mail.gmail.com> <CAJ-VmonCp7VDWrSXhiQ5PwcCogPM8NG6tDjQRy8osUQw=uUYKQ@mail.gmail.com>
Hi!

Another update:

* Updated to recent -HEAD.

* numactl can now set both the memory policy and the cpuset domain
  information, so it's easy to say "this runs in memory domain X and
  CPU domain Y" in one pass. (The CPU half of this is plain
  cpuset_setaffinity(2); see the first sketch in the P.S. below.)

* The locality matrix is now available. Here's an example from
  Scott's 2x Haswell v3, with cluster-on-die enabled:

  vm.phys_locality:
  0: 10 21 31 31
  1: 21 10 31 31
  2: 31 31 10 21
  3: 31 31 21 10

  These are the ACPI SLIT relative distances: 10 means local, and 21
  means roughly 2.1x the local access cost. On the Westmere-EX box,
  which has no SLIT table, there's nothing to report:

  vm.phys_locality:
  0: -1 -1 -1 -1
  1: -1 -1 -1 -1
  2: -1 -1 -1 -1
  3: -1 -1 -1 -1

  (A userland reader for this sysctl is sketched in the P.S. below.)

* I've tested it on Westmere-EX (4 sockets), Sandy Bridge, Ivy
  Bridge, Haswell v3, and Haswell v3 with cluster-on-die.

* I've discovered that our implementation of libgomp (from gcc-4.2)
  is very old and doesn't include some of the thread-control
  environment variables, grr.

* .. and that the gcc libgomp code has no FreeBSD thread-affinity
  routines at all, so I added them to gcc-4.8. (A sketch of what such
  a hook looks like is in the P.S. below.)

Testing with a local copy of STREAM - using gcc-4.9 and the updated
libgomp to support thread pinning - shows that yes, it all works as
expected, and yes, for NUMA workloads it makes quite a big difference.
(A minimal triad-style sketch is in the P.S. as well.)

I'd appreciate any reviews / testing people are able to provide. I'm
about at the point, functionality-wise, where I'd like to submit it
for formal review and try to land it in -HEAD.

-adrian
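P.S. For anyone who wants to poke at this ahead of the formal review,
a few self-contained sketches follow. They're illustrative only, not
the branch's committed code, and I've flagged the assumptions in
each. First, the CPU half of what numactl drives is just the stock
cpuset_setaffinity(2) API:

    /*
     * Minimal sketch: pin the current process to CPUs 0-3 with the
     * stock cpuset_setaffinity(2) API.  Only the CPU half is shown
     * here; the memory-domain half is the new part of the branch.
     */
    #include <sys/param.h>
    #include <sys/cpuset.h>

    #include <err.h>
    #include <stdio.h>

    int
    main(void)
    {
        cpuset_t mask;
        int i;

        CPU_ZERO(&mask);
        for (i = 0; i < 4; i++)
            CPU_SET(i, &mask);

        /* An id of -1 means "the calling process". */
        if (cpuset_setaffinity(CPU_LEVEL_WHICH, CPU_WHICH_PID, -1,
            sizeof(mask), &mask) != 0)
            err(1, "cpuset_setaffinity");

        printf("pinned to CPUs 0-3\n");
        return (0);
    }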
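The locality matrix can be pulled from userland with sysctlbyname(3).
This sketch assumes vm.phys_locality is exported as a plain read-only
string (the usual sbuf pattern for table-style vm sysctls); adjust if
the branch exports it differently:

    #include <sys/types.h>
    #include <sys/sysctl.h>

    #include <err.h>
    #include <stdio.h>
    #include <stdlib.h>

    int
    main(void)
    {
        char *buf;
        size_t len = 0;

        /* First call sizes the buffer, second call fills it. */
        if (sysctlbyname("vm.phys_locality", NULL, &len, NULL, 0) != 0)
            err(1, "sysctlbyname (size)");
        if ((buf = malloc(len)) == NULL)
            err(1, "malloc");
        if (sysctlbyname("vm.phys_locality", buf, &len, NULL, 0) != 0)
            err(1, "sysctlbyname (read)");
        fwrite(buf, 1, len, stdout);
        free(buf);
        return (0);
    }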
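On the libgomp gap: I won't reproduce the gcc-4.8 patch here, but the
shape of the FreeBSD thread-affinity hook libgomp needs is roughly
the following, since libthr already ships pthread_setaffinity_np(3),
operating on cpuset_t rather than Linux's cpu_set_t. Build with
-lpthread:

    #include <sys/param.h>
    #include <sys/cpuset.h>

    #include <err.h>
    #include <pthread.h>
    #include <pthread_np.h>    /* pthread_setaffinity_np() on FreeBSD */
    #include <stdio.h>

    /* Bind the calling thread to a single CPU. */
    static int
    bind_self_to_cpu(int cpu)
    {
        cpuset_t mask;

        CPU_ZERO(&mask);
        CPU_SET(cpu, &mask);
        return (pthread_setaffinity_np(pthread_self(), sizeof(mask),
            &mask));
    }

    int
    main(void)
    {
        int rc;

        if ((rc = bind_self_to_cpu(0)) != 0)
            errc(1, rc, "pthread_setaffinity_np");
        printf("main thread bound to CPU 0\n");
        return (0);
    }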
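Finally, a triad-style loop (not the real STREAM sources) is enough
to eyeball the NUMA difference once thread pinning works. The
first-touch loop matters: each thread faults its slice of the arrays
into its own domain. Thread placement uses libgomp's standard
environment knobs, which only take effect once affinity routines like
the one above exist:

    /*
     * Compile:  gcc49 -O2 -fopenmp triad.c -o triad
     * Run:      env OMP_NUM_THREADS=8 GOMP_CPU_AFFINITY="0-7" ./triad
     */
    #include <omp.h>
    #include <stdio.h>

    #define N 20000000    /* 3 x 160 MB of doubles */

    static double a[N], b[N], c[N];

    int
    main(void)
    {
        double t0, t1;
        long i;

        /* First-touch: each thread faults in its own slice. */
    #pragma omp parallel for
        for (i = 0; i < N; i++) {
            b[i] = 1.0;
            c[i] = 2.0;
        }

        t0 = omp_get_wtime();
    #pragma omp parallel for
        for (i = 0; i < N; i++)
            a[i] = b[i] + 3.0 * c[i];
        t1 = omp_get_wtime();

        /* Triad moves 24 bytes per iteration: two reads, one write. */
        printf("triad: %.3f GB/s\n",
            24.0 * N / (t1 - t0) / 1e9);
        return (0);
    }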