From: Attilio Rao
To: Alexander Motin
Cc: Adrian Chadd, src-committers@freebsd.org, Andre Oppermann,
    svn-src-all@freebsd.org, svn-src-head@freebsd.org, Jim Harris
Date: Wed, 24 Oct 2012 21:02:41 +0100
Subject: Re: svn commit: r242014 - head/sys/kern
Reply-To: attilio@FreeBSD.org

On Wed, Oct 24, 2012 at 8:30 PM, Alexander Motin wrote:
> On 24.10.2012 22:16, Andre Oppermann wrote:
>>
>> On 24.10.2012 20:56, Jim Harris wrote:
>>>
>>> On Wed, Oct 24, 2012 at 11:41 AM, Adrian Chadd wrote:
>>>>
>>>> On 24 October 2012 11:36, Jim Harris wrote:
>>>>
>>>>> Pad tdq_lock to avoid false sharing with tdq_load and tdq_cpu_idle.
>>>>
>>>> Ok, but..
>>>>
>>>>>         struct mtx      tdq_lock;       /* run queue lock. */
>>>>> +       char            pad[64 - sizeof(struct mtx)];
>>>>
>>>> .. don't we have an existing compile time macro for the cache line
>>>> size, which can be used here?
>>>
>>> Yes, but I didn't use it for a couple of reasons:
>>>
>>> 1) struct tdq itself is currently using __aligned(64), so I wanted to
>>>    keep it consistent.
>>> 2) CACHE_LINE_SIZE is currently defined as 128 on x86, due to
>>>    NetBurst-based processors having 128-byte cache sectors a while
>>>    back. I had planned to start a separate thread on arch@ about this
>>>    today, on whether this is still appropriate.
>>
>> See also the discussion on svn-src-all regarding global struct mtx
>> alignment.
>>
>> Thank you for proving my point. ;)
>>
>> Let's go back and see how we can do this the sanest way. These are
>> the options I see at the moment:
>>
>> 1. sprinkle __aligned(CACHE_LINE_SIZE) all over the place
>> 2. use a macro like MTX_ALIGN that can be SMP/UP aware and in
>>    the future possibly change to a different compiler-dependent
>>    align attribute
>> 3. embed __aligned(CACHE_LINE_SIZE) into struct mtx itself so it
>>    automatically gets aligned in all cases, even when dynamically
>>    allocated.
>>
>> Personally I'm undecided between #2 and #3. #1 is ugly. In favor
>> of #3 is that there possibly isn't any case where you'd actually
>> want the mutex to share a cache line with anything else, even a data
>> structure.
>
> I'm sorry, could you point me to some theory? I can agree that cache
> line sharing can be a problem in the case of spin locks -- the waiting
> thread will constantly try to access a page modified by the other CPU,
> which I guess will cause cache line writes to RAM. But why is it so bad
> to share a lock with its respective data in the case of non-spin locks?
> Won't the benefit of a free prefetch of the right data while grabbing
> the lock compensate for the penalties of relatively rare collisions?

Yes, but be aware that adaptive spinning imposes the same kind of cache
sharing issues as spinlocks do.

I just found this four-year-old patch that implements back-off
algorithms for spin mutexes and adaptive spinning. I think it would be
interesting if someone could find the time to benchmark and tune it. I
can clean it up and turn it into a commit candidate if there is
interest:
http://lists.freebsd.org/pipermail/freebsd-smp/2008-June/001561.html

Thanks,
Attilio

--
Peace can only be achieved by understanding - A. Einstein
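
[For readers skimming the thread, here is a minimal standalone sketch of
the two padding styles under discussion: the explicit pad member that
r242014 adds to struct tdq, and Andre's option #3 of putting the
alignment on the lock type itself. It is a userspace illustration, not
the kernel code: struct lock, the runq_* names, and the 64-byte line
size are assumptions chosen to mirror the __aligned(64) already used by
struct tdq.]

#include <assert.h>
#include <stddef.h>
#include <stdio.h>

#define CACHE_LINE	64	/* assumed line size, matching __aligned(64) on struct tdq */

struct lock {			/* stand-in for struct mtx */
	unsigned long	l_word;
};

/* Style A: explicit pad member, the approach r242014 takes for tdq_lock. */
struct runq_padded {
	struct lock	rq_lock;
	char		rq_pad[CACHE_LINE - sizeof(struct lock)];
	int		rq_load;	/* now starts on the next cache line */
	int		rq_cpu_idle;
};

/*
 * Style B (Andre's option #3): put the alignment on the lock type itself,
 * so sizeof() rounds up to a full line and every instance -- static or
 * dynamically allocated -- keeps a cache line to itself.
 */
struct lock_aligned {
	unsigned long	l_word;
} __attribute__((__aligned__(CACHE_LINE)));

struct runq_embedded {
	struct lock_aligned	rq_lock;
	int			rq_load;
	int			rq_cpu_idle;
};

int
main(void)
{
	/* Both layouts push rq_load out of the lock's cache line. */
	printf("padded:   rq_load at offset %zu, sizeof(lock) %zu\n",
	    offsetof(struct runq_padded, rq_load), sizeof(struct lock));
	printf("embedded: rq_load at offset %zu, sizeof(lock) %zu\n",
	    offsetof(struct runq_embedded, rq_load),
	    sizeof(struct lock_aligned));
	assert(offsetof(struct runq_padded, rq_load) >= CACHE_LINE);
	assert(offsetof(struct runq_embedded, rq_load) >= CACHE_LINE);
	return (0);
}

The trade-off behind style B shows up in the sizeof: once the alignment
lives on the lock type, every instance rounds up to a full cache line,
which matches Andre's point above that a mutex rarely wants to share a
line with anything else.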