From: Andrew Turner <andrew@fubar.geek.nz>
Date: Sun, 25 Aug 2013 13:11:34 +0100
To: Grzegorz Bernacki
Cc: svn-src-head@freebsd.org, svn-src-all@freebsd.org, src-committers@freebsd.org, freebsd-arm@freebsd.org
Subject: Re: svn commit: r251370 - head/sys/arm/arm
Message-ID: <20130825131134.11815f5d@bender.Home>
In-Reply-To: <201306040921.r549LI8t021617@svn.freebsd.org>
References: <201306040921.r549LI8t021617@svn.freebsd.org>

On Tue, 4 Jun 2013 09:21:18 +0000 (UTC) Grzegorz Bernacki wrote:

> Author: gber
> Date: Tue Jun  4 09:21:18 2013
> New Revision: 251370
> URL: http://svnweb.freebsd.org/changeset/base/251370
>
> Log:
>   Implement pmap_copy() for ARMv6/v7.
>
>   Copy the given range of mappings from the source map to the
>   destination map, thereby reducing the number of VM faults on fork.
>
>   Submitted by: Zbigniew Bodek
>   Sponsored by: The FreeBSD Foundation, Semihalf
>
> Modified:
>   head/sys/arm/arm/pmap-v6.c

This change leads to a deadlock when I attempt to run make buildworld
on my PandaBoard.

The problem is that pmap_copy() locks pvh_global_lock, src_pmap and
dst_pmap, then calls pmap_alloc_l2_bucket(). That function may drop and
reacquire pvh_global_lock and the dst_pmap lock, but it has no
knowledge of src_pmap, so src_pmap stays locked across the allocation.

Now suppose another thread needs src_pmap in the meantime, for example
the pagedaemon in pmap_clearbit(). It first takes pvh_global_lock,
which succeeds as soon as pmap_alloc_l2_bucket() drops it. It then
attempts to lock src_pmap but, as pmap_copy() still holds it, it has to
wait. Meanwhile pmap_alloc_l2_bucket() attempts to reacquire
pvh_global_lock, which the pagedaemon now holds, so it waits too. At
this point both threads are waiting on each other: a deadlock.

I don't know enough about the pmap or vm code to be able to fix this,
other than by reverting this commit.
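To make the interleaving concrete, here is a toy userland reduction of
the problem. This is only a sketch: pthread mutexes stand in for the
kernel's pvh_global_lock rwlock and the pmap mutexes, and the thread
and variable names below are mine, not anything from pmap-v6.c.

/*
 * Hypothetical reduction of the deadlock: "global" plays
 * pvh_global_lock, "src" plays the src_pmap lock.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t global = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t src = PTHREAD_MUTEX_INITIALIZER;

static void *
copier(void *arg)		/* plays pmap_copy() */
{

	pthread_mutex_lock(&global);
	pthread_mutex_lock(&src);
	/*
	 * pmap_alloc_l2_bucket() may drop the global lock to sleep
	 * for memory, but it knows nothing about src, so src stays
	 * held the whole time.
	 */
	pthread_mutex_unlock(&global);
	sleep(1);			/* window for the other thread */
	pthread_mutex_lock(&global);	/* blocks: daemon holds global */
	printf("copier: never reached\n");
	return (NULL);
}

static void *
daemon_thr(void *arg)		/* plays the pagedaemon in pmap_clearbit() */
{

	usleep(100000);			/* let copier take its locks first */
	pthread_mutex_lock(&global);	/* succeeds once copier drops it */
	pthread_mutex_lock(&src);	/* blocks: copier still holds src */
	printf("daemon: never reached\n");
	return (NULL);
}

int
main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, copier, NULL);
	pthread_create(&b, NULL, daemon_thr, NULL);
	pthread_join(a, NULL);		/* hangs: each waits on the other */
	pthread_join(b, NULL);
	return (0);
}

Run it and both threads wedge on the joins. The kernel case is the same
cycle, just with a narrower window.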
Andrew

>
> Modified: head/sys/arm/arm/pmap-v6.c
> ==============================================================================
> --- head/sys/arm/arm/pmap-v6.c	Tue Jun  4 07:37:06 2013	(r251369)
> +++ head/sys/arm/arm/pmap-v6.c	Tue Jun  4 09:21:18 2013	(r251370)
> @@ -2966,6 +2966,126 @@ void
>  pmap_copy(pmap_t dst_pmap, pmap_t src_pmap, vm_offset_t dst_addr,
>      vm_size_t len, vm_offset_t src_addr)
>  {
> +	struct l2_bucket *l2b_src, *l2b_dst;
> +	struct pv_entry *pve;
> +	vm_offset_t addr;
> +	vm_offset_t end_addr;
> +	vm_offset_t next_bucket;
> +	u_int flags;
> +	boolean_t l2b_alloc;
> +
> +	CTR4(KTR_PMAP, "%s: VA = 0x%08x, len = 0x%08x. Will %s\n", __func__,
> +	    src_addr, len, (dst_addr != src_addr) ? "exit" : "copy");
> +
> +	if (dst_addr != src_addr)
> +		return;
> +
> +	rw_wlock(&pvh_global_lock);
> +	if (dst_pmap < src_pmap) {
> +		PMAP_LOCK(dst_pmap);
> +		PMAP_LOCK(src_pmap);
> +	} else {
> +		PMAP_LOCK(src_pmap);
> +		PMAP_LOCK(dst_pmap);
> +	}
> +
> +	end_addr = src_addr + len;
> +	addr = src_addr;
> +	/*
> +	 * Iterate through all used l2_buckets in a given range.
> +	 */
> +	while (addr < end_addr) {
> +		pt_entry_t *src_ptep, *dst_ptep;
> +		pt_entry_t src_pte;
> +
> +		next_bucket = L2_NEXT_BUCKET(addr);
> +		/*
> +		 * If the next bucket VA is out of the
> +		 * copy range then set it to end_addr in order
> +		 * to copy all mappings until the given limit.
> +		 */
> +		if (next_bucket > end_addr)
> +			next_bucket = end_addr;
> +
> +		l2b_src = pmap_get_l2_bucket(src_pmap, addr);
> +		if (l2b_src == NULL) {
> +			addr = next_bucket;
> +			continue;
> +		}
> +		src_ptep = &l2b_src->l2b_kva[l2pte_index(addr)];
> +
> +		while (addr < next_bucket) {
> +			vm_page_t srcmpte;
> +
> +			src_pte = *src_ptep;
> +			srcmpte = PHYS_TO_VM_PAGE(l2pte_pa(src_pte));
> +			/*
> +			 * We only virtual copy managed pages
> +			 */
> +			if (srcmpte && (srcmpte->oflags & VPO_UNMANAGED) == 0) {
> +				l2b_alloc = FALSE;
> +				l2b_dst = pmap_get_l2_bucket(dst_pmap, addr);
> +				/*
> +				 * Check if the allocation of another
> +				 * l2_bucket is necessary.
> +				 */
> +				if (l2b_dst == NULL) {
> +					l2b_dst = pmap_alloc_l2_bucket(dst_pmap,
> +					    addr);
> +					l2b_alloc = TRUE;
> +				}
> +				if (l2b_dst == NULL)
> +					goto out;
> +
> +				dst_ptep = &l2b_dst->l2b_kva[l2pte_index(addr)];
> +
> +				if (*dst_ptep == 0 &&
> +				    (pve = pmap_get_pv_entry(dst_pmap, TRUE))) {
> +					/*
> +					 * Check whether the source mapping is
> +					 * writable and set the proper flag
> +					 * for a copied mapping so that right
> +					 * permissions could be set on the
> +					 * access fault.
> +					 */
> +					flags = 0;
> +					if ((src_pte & L2_APX) == 0)
> +						flags = PVF_WRITE;
> +					pmap_enter_pv(srcmpte, pve, dst_pmap,
> +					    addr, flags);
> +					/*
> +					 * Clear the modified and
> +					 * accessed (referenced) flags
> +					 * and don't set the wired flag
> +					 * during the copy.
> +					 */
> +					*dst_ptep = src_pte;
> +					*dst_ptep &= ~L2_S_REF;
> +					*dst_ptep |= L2_APX;
> +					/*
> +					 * Update stats
> +					 */
> +					l2b_dst->l2b_occupancy++;
> +					dst_pmap->pm_stats.resident_count++;
> +				} else {
> +					/*
> +					 * If the l2_bucket was acquired as
> +					 * a result of allocation then free it.
> +					 */
> +					if (l2b_alloc)
> +						pmap_free_l2_bucket(dst_pmap,
> +						    l2b_dst, 1);
> +					goto out;
> +				}
> +			}
> +			addr += PAGE_SIZE;
> +			src_ptep++;
> +		}
> +	}
> +out:
> +	rw_wunlock(&pvh_global_lock);
> +	PMAP_UNLOCK(src_pmap);
> +	PMAP_UNLOCK(dst_pmap);
>  }