From owner-freebsd-arch@FreeBSD.ORG Thu Oct 25 21:40:15 2012
Date: Thu, 25 Oct 2012 14:40:13 -0700
Subject: Re: CACHE_LINE_SIZE on x86
From: Jim Harris <jim.harris@gmail.com>
To: John Baldwin
Cc: freebsd-arch@freebsd.org
In-Reply-To: <201210251732.31631.jhb@freebsd.org>
References: <201210250918.00602.jhb@freebsd.org> <5089690A.8070503@networx.ch> <201210251732.31631.jhb@freebsd.org>
List-Id: Discussion related to FreeBSD architecture

On Thu, Oct 25, 2012 at 2:32 PM, John Baldwin wrote:
> On Thursday, October 25, 2012 12:30:02 pm Andre Oppermann wrote:
>> On 25.10.2012 15:18, John Baldwin wrote:
>> > On Wednesday, October 24, 2012 3:13:38 pm Jim Harris wrote:
>> >> While investigating padding of the ULE scheduler locks (r242014), I
>> >> recently discovered that CACHE_LINE_SIZE on x86 is defined as 128 (not
>> >> 64).  From what I can tell from the svn logs, this was to account for
>> >> the 128-byte cache "sectors" that existed on the NetBurst
>> >> microarchitecture CPUs.
>> >>
>> >> I'm curious whether there has been any consideration of changing this
>> >> back to 64, perhaps with a kernel config option to modify it?  On 2S
>> >> systems (but not on 1S systems), I see a benefit from using
>> >> CACHE_LINE_SIZE=128 for the scheduler locks.  I suspect this is
>> >> related to data prefetching, but I am still running experiments to
>> >> verify that.
>> >
>> > All the i7 and later systems I've seen (maybe even Penryn?) have a BIOS
>> > option (typically enabled by default) to enable adjacent cache line
>> > prefetching (my understanding is that this only affects the LLC, and it
>> > always seems to fetch an aligned 128 bytes, so if your miss is in the
>> > "second" line it fetches N-1 and N, not always N and N+1).  That is why
>> > I thought we still use 128 bytes on x86.
>>
>> As long as the additionally prefetched cache line has its own MOESI
>> state and gets marked as shared, there is no problem with using only
>> 64B alignment and padding.
>
> It would be good to know, though, whether there are performance benefits
> from avoiding sharing across paired lines in this manner.  Even if each
> line has its own MOESI state, there might still be negative effects from
> sharing the pair.

On 2S systems, I do see further benefits from using 128-byte padding
instead of 64.  On 1S, I see no difference.  I've been meaning to turn off
prefetching on my system to see if it has any effect in the 2S case - I can
give that a shot tomorrow.

> --
> John Baldwin