Date: Wed, 23 Jul 2014 00:28:29 +0100 (BST)
From: Robert Watson <rwatson@FreeBSD.org>
To: Shawn Webb <lattera@gmail.com>
Cc: PaX Team <pageexec@freemail.hu>, Pedro Giffuni <pfg@freebsd.org>,
    Oliver Pinter <oliver.pntr@gmail.com>, Bryan Drewery <bdrewery@FreeBSD.org>,
    freebsd-arch@freebsd.org
Subject: Re: [RFC] ASLR Whitepaper and Candidate Final Patch
Message-ID: <alpine.BSF.2.11.1407230017490.88645@fledge.watson.org>
In-Reply-To: <20140720201858.GB29618@pwnie.vrt.sourcefire.com>
References: <96C72773-3239-427E-A90B-D05FF0F5B782@freebsd.org> <20140720201858.GB29618@pwnie.vrt.sourcefire.com>
On Sun, 20 Jul 2014, Shawn Webb wrote:

>> - It is yet undetermined what the performance effect will be, and it is
>> not clear (but seems likely from past measurements) if there will be a
>> performance hit even when ASLR is off. Apparently there are applications
>> that will segfault (?).
>
> So I have an old Dell Latitude E6500 that I bought at Defcon a year or
> so ago that I'm doing testing on. Even though it's quite an underpowered
> laptop, I'm running ZFS on it for BE support (in case one of our changes
> kills it). I'll run unixbench on it a few times to benchmark the ASLR
> patch. I'll test these three scenarios:
>
> 1) ASLR compiled in and enabled;
> 2) ASLR compiled in and disabled;
> 3) ASLR compiled out (GENERIC kernel).
>
> In each of these three scenarios, I'll have the kernel debugging features
> (WITNESS, INVARIANTS, etc.) turned off to better simulate a production
> system and to remove one more variable from the tests.
>
> I'll run unixbench ten times under each scenario and compute averages.
>
> Since this is an older laptop (and it's running ZFS), these tests will
> take a couple of days. I'll have an answer for you soon.

Hi Shawn:

Great news that this work is coming to fruition -- ASLR is long overdue. Are
you having any luck with performance measurements? Unixbench seems like a good
starting point, but I wonder if it would be useful to look, in particular, at
memory-mapping-intensive workloads that might be affected by changes in kernel
VM data-structure use, or by greater fragmentation of the address space. I
don't have a specific application in mind -- in the past I might have pointed
to tools such as ElectricFence that tend to increase fragmentation themselves.

Also, could you say a little more about the effects the change might have on
transparent superpage use -- other than suitable alignment of large mappings,
it's not clear to me what effect it might have.
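(As an aside, the run-and-average step in the quoted plan could be driven by a
small script along these lines. This is only a sketch: the unixbench
invocation and the score-extraction pattern in the comment are assumptions on
my part, not taken from the patch or the plan, so a stand-in score keeps the
sketch self-contained.)

```shell
#!/bin/sh
# Sketch of averaging one scenario's unixbench runs. The commented-out
# invocation is an assumption (path and output format of the byte-unixbench
# "Run" driver); the stand-in score below makes the script runnable as-is.
RUNS=10
total=0
for i in $(seq 1 "$RUNS"); do
    # score=$(./Run | awk '/System Benchmarks Index Score/ {print $NF}')
    score=$((100 + i))          # stand-in score for run $i
    total=$((total + score))
done
# Integer average across the ten runs for this scenario.
echo "average index score: $((total / RUNS))"
```

Repeating this once per scenario (ASLR enabled, disabled, compiled out) and
comparing the three averages would show whether the disabled case still pays
a cost relative to GENERIC.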
I wonder if some equipment in the FreeBSD Netperf cluster might be used to
help with performance characterisation -- in particular, very recent high-end
server hardware, and also lower-end embedded-style systems that have markedly
different virtual-memory implementations in hardware and software. Those two
classes of systems often show markedly different performance-change
characteristics, as greater cache-centrism and instruction-level parallelism
in the higher-end designs can mask increases in instruction count.

I think someone has already commented that Peter Holm's help might be
enlisted; you may have seen his 'stress2' suite, which could help with
stability testing.

Thanks,

Robert