From owner-freebsd-arm@FreeBSD.ORG Wed Jul  8 15:51:17 2009
Date: Wed, 8 Jul 2009 10:51:16 -0500 (CDT)
From: Mark Tinguely <tinguely@casselton.net>
Message-Id: <200907081551.n68FpFeM073177@casselton.net>
In-Reply-To: <200907081507.n68F7Vsu070524@casselton.net>
To: mih@semihalf.com, tinguely@casselton.net
Cc: freebsd-arm@freebsd.org
Subject: Re: pmap problem in FreeBSD current
List-Id: Porting FreeBSD to the StrongARM Processor

I forgot to CC the rest.

pmap_get_pv_entry() is called from pmap_enter_locked() and can make the
same UMA call that can happen in pmap_kenter_internal().  Move the kernel
pmap lock protection from pmap_enter_pv() to pmap_get_pv_entry():

static void
pmap_enter_pv(struct vm_page *pg, struct pv_entry *pve, pmap_t pm,
    vm_offset_t va, u_int flags)
{
-	int km;

	mtx_assert(&vm_page_queue_mtx, MA_OWNED);

	if (pg->md.pv_kva) {
		/* PMAP_ASSERT_LOCKED(pmap_kernel()); */
		pve->pv_pmap = pmap_kernel();
		pve->pv_va = pg->md.pv_kva;
		pve->pv_flags = PVF_WRITE | PVF_UNMAN;
		pg->md.pv_kva = 0;

		TAILQ_INSERT_HEAD(&pg->md.pv_list, pve, pv_list);
		TAILQ_INSERT_HEAD(&pm->pm_pvlist, pve, pv_plist);
-		if ((km = PMAP_OWNED(pmap_kernel())))
-			PMAP_UNLOCK(pmap_kernel());
		vm_page_unlock_queues();
		if ((pve = pmap_get_pv_entry()) == NULL)
			panic("pmap_kenter_internal: no pv entries");
		vm_page_lock_queues();
-		if (km)
-			PMAP_LOCK(pmap_kernel());
	}

	PMAP_ASSERT_LOCKED(pm);
	pve->pv_pmap = pm;
	pve->pv_va = va;
	pve->pv_flags = flags;

	TAILQ_INSERT_HEAD(&pg->md.pv_list, pve, pv_list);
	TAILQ_INSERT_HEAD(&pm->pm_pvlist, pve, pv_plist);
	pg->md.pvh_attrs |= flags & (PVF_REF | PVF_MOD);
	if (pve->pv_flags & PVF_WIRED)
		++pm->pm_stats.wired_count;
	vm_page_flag_set(pg, PG_REFERENCED);
}

static void
pmap_free_pv_entry(pv_entry_t pv)
{
+	int km;

	pv_entry_count--;
+	if ((km = PMAP_OWNED(pmap_kernel())))
+		PMAP_UNLOCK(pmap_kernel());
	uma_zfree(pvzone, pv);
+	if (km)
+		PMAP_LOCK(pmap_kernel());
}
/*
 * get a new pv_entry, allocating a block from the system
 * when needed.
 * the memory allocation is performed bypassing the malloc code
 * because of the possibility of allocations at interrupt time.
 */
static pv_entry_t
pmap_get_pv_entry(void)
{
	pv_entry_t ret_value;
+	int km;

	pv_entry_count++;
+	if ((km = PMAP_OWNED(pmap_kernel())))
+		PMAP_UNLOCK(pmap_kernel());
	if (pv_entry_count > pv_entry_high_water)
		pagedaemon_wakeup();
	ret_value = uma_zalloc(pvzone, M_NOWAIT);
+	if (km)
+		PMAP_LOCK(pmap_kernel());
	return ret_value;
}
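
The idea, stated in isolation: note whether the current thread already owns
the kernel pmap lock, drop it across the UMA allocation (which may itself
take that lock via pmap_kenter_internal()), and retake it before returning
so the caller sees its lock state unchanged.  Below is a minimal userland
sketch of that pattern; the names (owned_lock, alloc_pv, get_pv_entry_demo)
are hypothetical, and pthread mutexes merely stand in for the kernel's
PMAP_LOCK/PMAP_UNLOCK/PMAP_OWNED and for uma_zalloc(9).  It is an
illustration of the locking pattern, not the kernel code.

/*
 * Sketch: conditionally drop and reacquire a lock around an allocation
 * that may itself need the same lock.  Hypothetical names throughout.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* A mutex that remembers its owner, so a thread can ask "do I hold it?". */
struct owned_lock {
	pthread_mutex_t	mtx;
	pthread_t	owner;
	bool		held;
};

static struct owned_lock kpmap_lock = { .mtx = PTHREAD_MUTEX_INITIALIZER };

static void
lock_acquire(struct owned_lock *l)
{
	pthread_mutex_lock(&l->mtx);
	l->owner = pthread_self();
	l->held = true;
}

static void
lock_release(struct owned_lock *l)
{
	l->held = false;
	pthread_mutex_unlock(&l->mtx);
}

/* Analogue of PMAP_OWNED(pmap_kernel()): true only for the owning thread. */
static bool
lock_owned(struct owned_lock *l)
{
	return (l->held && pthread_equal(l->owner, pthread_self()));
}

/* Stand-in for uma_zalloc(): may internally need kpmap_lock itself. */
static void *
alloc_pv(void)
{
	lock_acquire(&kpmap_lock);	/* would self-deadlock if the caller kept the lock */
	void *p = malloc(32);
	lock_release(&kpmap_lock);
	return (p);
}

/* Analogue of the patched pmap_get_pv_entry(): safe with or without the lock. */
static void *
get_pv_entry_demo(void)
{
	bool km;
	void *ret;

	if ((km = lock_owned(&kpmap_lock)))
		lock_release(&kpmap_lock);	/* drop around the allocation */
	ret = alloc_pv();
	if (km)
		lock_acquire(&kpmap_lock);	/* restore the caller's lock state */
	return (ret);
}

int
main(void)
{
	lock_acquire(&kpmap_lock);	/* caller holds the lock, as pmap_enter_locked() would */
	void *pv = get_pv_entry_demo();
	lock_release(&kpmap_lock);
	printf("allocated pv entry at %p\n", pv);
	free(pv);
	return (0);
}

The ownership test up front matters because pmap_get_pv_entry() is reached
both with and without the kernel pmap lock held; dropping only when it is
actually held keeps the other callers cheap and returns the lock state to
the caller exactly as it was found.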