From owner-svn-src-all@FreeBSD.ORG Wed Oct 24 19:30:45 2012
From: Alexander Motin
Date: Wed, 24 Oct 2012 22:30:36 +0300
To: Andre Oppermann
Cc: svn-src-head@freebsd.org, Adrian Chadd, src-committers@freebsd.org,
    Jim Harris, svn-src-all@freebsd.org
Subject: Re: svn commit: r242014 - head/sys/kern
Message-ID: <508841DC.7040701@FreeBSD.org>
In-Reply-To: <50883EA8.1010308@freebsd.org>

On 24.10.2012 22:16, Andre Oppermann wrote:
> On 24.10.2012 20:56, Jim Harris wrote:
>> On Wed, Oct 24, 2012 at 11:41 AM, Adrian Chadd wrote:
>>> On 24 October 2012 11:36, Jim Harris wrote:
>>>
>>>> Pad tdq_lock to avoid false sharing with tdq_load and tdq_cpu_idle.
>>>
>>> Ok, but..
>>>
>>>>         struct mtx      tdq_lock;       /* run queue lock. */
>>>> +       char            pad[64 - sizeof(struct mtx)];
>>>
>>> .. don't we have an existing compile-time macro for the cache line
>>> size that could be used here?
>>
>> Yes, but I didn't use it for a couple of reasons:
>>
>> 1) struct tdq itself currently uses __aligned(64), so I wanted to
>>    keep the padding consistent with that.
>> 2) CACHE_LINE_SIZE is currently defined as 128 on x86, because the
>>    NetBurst-based processors of a while back had 128-byte cache
>>    sectors.  I had planned to start a separate thread on arch@ today
>>    about whether that is still appropriate.
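For readers without the diff at hand, here is a minimal userland sketch
of the idiom r242014 applies; pthread_mutex_t stands in for the kernel's
struct mtx, and the field names only mirror struct tdq for illustration.
The pad grows the lock's slot to a full 64-byte line so that stores to
the hot fields behind it never invalidate the line the lock lives in.

#include <stddef.h>
#include <pthread.h>

/*
 * Illustrative stand-ins only -- not the real sched_ule.c layout.
 * The idea matches r242014: pad the embedded lock out to one full
 * 64-byte cache line so that updates to the frequently written
 * fields that follow it never touch the lock's line.
 */
struct fake_tdq {
	pthread_mutex_t	tdq_lock;	/* run queue lock */
	char		pad[64 - sizeof(pthread_mutex_t)];
	volatile int	tdq_load;	/* written on every queue change */
	volatile int	tdq_cpu_idle;	/* written on idle transitions */
} __attribute__((__aligned__(64)));

/* The pad expression assumes the lock fits within one 64-byte line. */
_Static_assert(sizeof(pthread_mutex_t) < 64, "lock too large for the pad trick");
_Static_assert(offsetof(struct fake_tdq, tdq_load) == 64,
    "hot fields start on their own cache line");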
> See also the discussion on svn-src-all regarding global struct mtx
> alignment.
>
> Thank you for proving my point. ;)
>
> Let's go back and see how we can do this the sanest way.  These are
> the options I see at the moment:
>
> 1. Sprinkle __aligned(CACHE_LINE_SIZE) all over the place.
> 2. Use a macro like MTX_ALIGN that can be SMP/UP-aware and could in
>    the future be changed to a different compiler-dependent alignment
>    attribute.
> 3. Embed __aligned(CACHE_LINE_SIZE) into struct mtx itself so it
>    automatically gets aligned in all cases, even when dynamically
>    allocated.
>
> Personally I'm undecided between #2 and #3.  #1 is ugly.  In favor of
> #3 is that there may not be any case where you'd actually want a
> mutex to share a cache line with anything else, even its own data
> structure.

I'm sorry, could you point me at some theory here?  I can agree that
cache line sharing can be a problem for spin locks: the waiting thread
constantly tries to access a cache line being modified by another CPU,
which I guess forces that line to be written back over and over.  But
why is it so bad to share a lock with its data in the case of non-spin
locks?  Wouldn't the benefit of effectively prefetching the right data
while grabbing the lock compensate for the penalty of relatively rare
collisions?

-- 
Alexander Motin
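For concreteness, a rough userland sketch of what option #3 above
amounts to (pthread_mutex_t again stands in for struct mtx, and the
wrapper type is invented purely for illustration): putting the
attribute on the lock type both aligns every instance to a line
boundary and pads it out to a whole line, because a struct's size is
rounded up to its alignment, so nothing else can land in the lock's
line.

#include <stddef.h>
#include <pthread.h>

#define CACHE_LINE_SIZE	64	/* assumed here; the kernel takes it from machine/param.h */

/*
 * Option #3, sketched with an invented wrapper type: the alignment
 * lives on the lock type itself, so every global or embedded instance
 * starts on a line boundary and occupies the whole line.
 */
struct padded_lock {
	pthread_mutex_t	m;
} __attribute__((__aligned__(CACHE_LINE_SIZE)));

/* Any structure embedding the lock gets the separation for free. */
struct stats {
	struct padded_lock	lock;
	unsigned long		packets;	/* starts on the next line */
	unsigned long		bytes;
};

_Static_assert(sizeof(struct padded_lock) == CACHE_LINE_SIZE,
    "lock type padded to exactly one line");
_Static_assert(offsetof(struct stats, packets) == CACHE_LINE_SIZE,
    "data no longer shares the lock's line");

One caveat the sketch does not show: a type-level attribute only helps
dynamically allocated locks if the allocator actually honors the
over-alignment; a plain userland malloc() guarantees only its default
alignment, so such instances would still need an aligned allocation to
land on a line boundary.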