From: Svatopluk Kraus <onwahe@gmail.com>
To: Konstantin Belousov
Cc: freebsd-current@freebsd.org
Date: Mon, 18 Feb 2013 21:27:40 +0100
Subject: Re: [patch] i386 pmap sysmaps_pcpu[] atomic access
In-Reply-To: <20130218170957.GJ2598@kib.kiev.ua>
References: <20130218150809.GG2598@kib.kiev.ua> <20130218170957.GJ2598@kib.kiev.ua>
List-Id: Discussions about the use of FreeBSD-current
On Mon, Feb 18, 2013 at 6:09 PM, Konstantin Belousov wrote:
> On Mon, Feb 18, 2013 at 06:06:42PM +0100, Svatopluk Kraus wrote:
>> On Mon, Feb 18, 2013 at 4:08 PM, Konstantin Belousov wrote:
>> > On Mon, Feb 18, 2013 at 01:44:35PM +0100, Svatopluk Kraus wrote:
>> >> Hi,
>> >>
>> >> the access to sysmaps_pcpu[] should be atomic with respect to
>> >> thread migration. Otherwise, the sysmaps for one CPU can be stolen
>> >> by another CPU and the purpose of per-CPU sysmaps is defeated. A
>> >> patch is enclosed.
>> > And what are the problems caused by the 'otherwise'?
>> > I do not see any.
>>
>> The 'otherwise' issue is the following:
>>
>> 1. A thread running on CPU0 evaluates:
>>
>>        sysmaps = &sysmaps_pcpu[PCPU_GET(cpuid)];
>>
>> 2. The sysmaps variable now points to the CPU0 sysmaps.
>> 3. The thread migrates to CPU1.
>> 4. The sysmaps variable still points to the CPU0 sysmaps when the
>>    thread executes:
>>
>>        mtx_lock(&sysmaps->lock);
>>
>>    So the thread, now running on CPU1, holds the CPU0 sysmaps mutex
>>    and can needlessly block a thread running on CPU0. Maybe that is
>>    not a problem in itself, but it definitely goes against the reason
>>    the per-CPU sysmaps exist.
> So what?

It depends. Do you not understand it, or do you think it is OK? Tell me.

>> > Really, taking the mutex while bound to a CPU could be deadlock-prone
>> > under some situations.
>> >
>> > This was discussed at least one more time. Might be, a comment saying
>> > that there is no issue should be added.
>>
>> I missed the discussion. Can you point me to it, please? A deadlock is
>> not a problem here; however, I could be wrong, as I cannot imagine now
>> how simple pinning could lead to a deadlock at all.
> Because some other load on the bound CPU might prevent the thread from
> being scheduled.

I am afraid I still have no idea. On a single CPU, binding has no
meaning. Thus, if any deadlock exists, it exists without binding too.
Hmm, are you talking about a deadlock caused by heavy CPU load? Is that
a deadlock at all? Anyhow, a mutex is a lock with priority propagation,
isn't it?

>> >> Svata
>> >>
>> >> Index: sys/i386/i386/pmap.c
>> >> ===================================================================
>> >> --- sys/i386/i386/pmap.c	(revision 246831)
>> >> +++ sys/i386/i386/pmap.c	(working copy)
>> >> @@ -4146,11 +4146,11 @@
>> >>  {
>> >>  	struct sysmaps *sysmaps;
>> >>
>> >> +	sched_pin();
>> >>  	sysmaps = &sysmaps_pcpu[PCPU_GET(cpuid)];
>> >>  	mtx_lock(&sysmaps->lock);
>> >>  	if (*sysmaps->CMAP2)
>> >>  		panic("pmap_zero_page: CMAP2 busy");
>> >> -	sched_pin();
>> >>  	*sysmaps->CMAP2 = PG_V | PG_RW | VM_PAGE_TO_PHYS(m) | PG_A | PG_M |
>> >>  	    pmap_cache_bits(m->md.pat_mode, 0);
>> >>  	invlcaddr(sysmaps->CADDR2);
>> >> @@ -4171,11 +4171,11 @@
>> >>  {
>> >>  	struct sysmaps *sysmaps;
>> >>
>> >> +	sched_pin();
>> >>  	sysmaps = &sysmaps_pcpu[PCPU_GET(cpuid)];
>> >>  	mtx_lock(&sysmaps->lock);
>> >>  	if (*sysmaps->CMAP2)
>> >>  		panic("pmap_zero_page_area: CMAP2 busy");
>> >> -	sched_pin();
>> >>  	*sysmaps->CMAP2 = PG_V | PG_RW | VM_PAGE_TO_PHYS(m) | PG_A | PG_M |
>> >>  	    pmap_cache_bits(m->md.pat_mode, 0);
>> >>  	invlcaddr(sysmaps->CADDR2);
>> >> @@ -4220,13 +4220,13 @@
>> >>  {
>> >>  	struct sysmaps *sysmaps;
>> >>
>> >> +	sched_pin();
>> >>  	sysmaps = &sysmaps_pcpu[PCPU_GET(cpuid)];
>> >>  	mtx_lock(&sysmaps->lock);
>> >>  	if (*sysmaps->CMAP1)
>> >>  		panic("pmap_copy_page: CMAP1 busy");
>> >>  	if (*sysmaps->CMAP2)
>> >>  		panic("pmap_copy_page: CMAP2 busy");
>> >> -	sched_pin();
>> >>  	invlpg((u_int)sysmaps->CADDR1);
>> >>  	invlpg((u_int)sysmaps->CADDR2);
>> >>  	*sysmaps->CMAP1 = PG_V | VM_PAGE_TO_PHYS(src) | PG_A |
>> >> @@ -5072,11 +5072,11 @@
>> >>  	vm_offset_t sva, eva;
>> >>
>> >>  	if ((cpu_feature & CPUID_CLFSH) != 0) {
>> >> +		sched_pin();
>> >>  		sysmaps = &sysmaps_pcpu[PCPU_GET(cpuid)];
>> >>  		mtx_lock(&sysmaps->lock);
>> >>  		if (*sysmaps->CMAP2)
>> >>  			panic("pmap_flush_page: CMAP2 busy");
>> >> -		sched_pin();
>> >>  		*sysmaps->CMAP2 = PG_V | PG_RW | VM_PAGE_TO_PHYS(m) |
>> >>  		    PG_A | PG_M | pmap_cache_bits(m->md.pat_mode, 0);
>> >>  		invlcaddr(sysmaps->CADDR2);
>> >> _______________________________________________
>> >> freebsd-current@freebsd.org mailing list
>> >> http://lists.freebsd.org/mailman/listinfo/freebsd-current
>> >> To unsubscribe, send any mail to "freebsd-current-unsubscribe@freebsd.org"