Date:        Mon, 18 Feb 2013 18:06:42 +0100
From:        Svatopluk Kraus <onwahe@gmail.com>
To:          Konstantin Belousov <kostikbel@gmail.com>
Cc:          freebsd-current@freebsd.org
Subject:     Re: [patch] i386 pmap sysmaps_pcpu[] atomic access
Message-ID:  <CAFHCsPVbkwj7fhqax5D5kk89VZgAjW9gT8uJunjevav2eTUbNQ@mail.gmail.com>
In-Reply-To: <20130218150809.GG2598@kib.kiev.ua>
References:  <CAFHCsPUVTM9jfrnzY72YsPszLWkg-UaJcycTR4xXcS+fPzS1Vg@mail.gmail.com>
             <20130218150809.GG2598@kib.kiev.ua>
On Mon, Feb 18, 2013 at 4:08 PM, Konstantin Belousov <kostikbel@gmail.com> wrote:
> On Mon, Feb 18, 2013 at 01:44:35PM +0100, Svatopluk Kraus wrote:
>> Hi,
>>
>> the access to sysmaps_pcpu[] should be atomic with respect to
>> thread migration. Otherwise, a sysmaps for one CPU can be stolen by
>> another CPU and the purpose of per CPU sysmaps is broken. A patch is
>> enclosed.
> And, what are the problem caused by the 'otherwise' ?
> I do not see any.

The 'otherwise' issue is the following:

1. A thread running on CPU0 evaluates

       sysmaps = &sysmaps_pcpu[PCPU_GET(cpuid)];

2. The sysmaps variable now contains a pointer to the 'CPU0' sysmaps.

3. The thread migrates to CPU1.

4. However, the sysmaps variable still contains a pointer to the 'CPU0'
   sysmaps, so

       mtx_lock(&sysmaps->lock);

5. The thread running on CPU1 has locked the 'CPU0' sysmaps mutex, so it
   can uselessly block another thread running on CPU0.

Maybe it's not a problem. However, it definitely goes against the reason
why the per-CPU sysmaps (one for each CPU) exist. A minimal sketch
contrasting the two orderings is appended below the patch.

> Really, taking the mutex while bind to a CPU could be deadlock-prone
> under some situations.
>
> This was discussed at least one more time. Might be, a comment saying that
> there is no issue should be added.

I missed the discussion. Can you point me to it, please? A deadlock is not
a problem here; however, I may be wrong, as I can't imagine how simply
pinning the thread could lead to a deadlock at all.

>>
>> Svata
>>
>> Index: sys/i386/i386/pmap.c
>> ===================================================================
>> --- sys/i386/i386/pmap.c  (revision 246831)
>> +++ sys/i386/i386/pmap.c  (working copy)
>> @@ -4146,11 +4146,11 @@
>>  {
>>          struct sysmaps *sysmaps;
>>
>> +        sched_pin();
>>          sysmaps = &sysmaps_pcpu[PCPU_GET(cpuid)];
>>          mtx_lock(&sysmaps->lock);
>>          if (*sysmaps->CMAP2)
>>                  panic("pmap_zero_page: CMAP2 busy");
>> -        sched_pin();
>>          *sysmaps->CMAP2 = PG_V | PG_RW | VM_PAGE_TO_PHYS(m) | PG_A | PG_M |
>>              pmap_cache_bits(m->md.pat_mode, 0);
>>          invlcaddr(sysmaps->CADDR2);
>> @@ -4171,11 +4171,11 @@
>>  {
>>          struct sysmaps *sysmaps;
>>
>> +        sched_pin();
>>          sysmaps = &sysmaps_pcpu[PCPU_GET(cpuid)];
>>          mtx_lock(&sysmaps->lock);
>>          if (*sysmaps->CMAP2)
>>                  panic("pmap_zero_page_area: CMAP2 busy");
>> -        sched_pin();
>>          *sysmaps->CMAP2 = PG_V | PG_RW | VM_PAGE_TO_PHYS(m) | PG_A | PG_M |
>>              pmap_cache_bits(m->md.pat_mode, 0);
>>          invlcaddr(sysmaps->CADDR2);
>> @@ -4220,13 +4220,13 @@
>>  {
>>          struct sysmaps *sysmaps;
>>
>> +        sched_pin();
>>          sysmaps = &sysmaps_pcpu[PCPU_GET(cpuid)];
>>          mtx_lock(&sysmaps->lock);
>>          if (*sysmaps->CMAP1)
>>                  panic("pmap_copy_page: CMAP1 busy");
>>          if (*sysmaps->CMAP2)
>>                  panic("pmap_copy_page: CMAP2 busy");
>> -        sched_pin();
>>          invlpg((u_int)sysmaps->CADDR1);
>>          invlpg((u_int)sysmaps->CADDR2);
>>          *sysmaps->CMAP1 = PG_V | VM_PAGE_TO_PHYS(src) | PG_A |
>> @@ -5072,11 +5072,11 @@
>>          vm_offset_t sva, eva;
>>
>>          if ((cpu_feature & CPUID_CLFSH) != 0) {
>> +                sched_pin();
>>                  sysmaps = &sysmaps_pcpu[PCPU_GET(cpuid)];
>>                  mtx_lock(&sysmaps->lock);
>>                  if (*sysmaps->CMAP2)
>>                          panic("pmap_flush_page: CMAP2 busy");
>> -                sched_pin();
>>                  *sysmaps->CMAP2 = PG_V | PG_RW | VM_PAGE_TO_PHYS(m) |
>>                      PG_A | PG_M | pmap_cache_bits(m->md.pat_mode, 0);
>>                  invlcaddr(sysmaps->CADDR2);
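To make the ordering issue concrete, here is a minimal sketch contrasting
the two orderings. It is illustrative only, not the actual pmap code: the
helper functions use_sysmaps_racy() and use_sysmaps_pinned() are made up
for this example, while sched_pin()/sched_unpin(), PCPU_GET(),
mtx_lock()/mtx_unlock() and the sysmaps_pcpu[] array are the ones used by
the patch above.

/* Pattern before the patch: pin only after picking the per-CPU slot. */
static void
use_sysmaps_racy(void)
{
        struct sysmaps *sysmaps;

        /* Runs on, say, CPU0 and picks CPU0's sysmaps... */
        sysmaps = &sysmaps_pcpu[PCPU_GET(cpuid)];
        /* ...but the thread may migrate to CPU1 right here... */
        mtx_lock(&sysmaps->lock);   /* ...and then lock CPU0's sysmaps from CPU1. */
        sched_pin();                /* Too late to guarantee the match above. */
        /* ... map, use and unmap the page ... */
        sched_unpin();
        mtx_unlock(&sysmaps->lock);
}

/* Pattern after the patch: pin first, then pick the per-CPU slot. */
static void
use_sysmaps_pinned(void)
{
        struct sysmaps *sysmaps;

        sched_pin();                /* No migration from here on. */
        sysmaps = &sysmaps_pcpu[PCPU_GET(cpuid)];   /* cpuid is stable now. */
        mtx_lock(&sysmaps->lock);   /* The lock matches the CPU we run on. */
        /* ... map, use and unmap the page ... */
        sched_unpin();
        mtx_unlock(&sysmaps->lock);
}

Note that sched_pin() only prevents migration, not preemption, so the mutex
is still needed; pinning first merely guarantees that the sysmaps we lock
belongs to the CPU we are actually running on.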