From owner-svn-src-head@freebsd.org Tue Oct  8 12:52:54 2019
From: Mateusz Guzik <mjguzik@gmail.com>
Date: Tue, 8 Oct 2019 14:52:50 +0200
Subject: Re: svn commit: r353149 - head/sys/amd64/amd64
To: Cy Schubert
Cc: src-committers@freebsd.org, svn-src-all@freebsd.org,
    svn-src-head@freebsd.org, x11@freebsd.org
In-Reply-To: <201910080421.x984LO1D003374@slippy.cwsent.com>
References: <201910062213.x96MDZv3085523@repo.freebsd.org>
    <201910070406.x9746N0U009068@slippy.cwsent.com>
    <201910070419.x974JOkQ020574@slippy.cwsent.com>
    <201910071612.x97GCVx3003714@slippy.cwsent.com>
    <201910080421.x984LO1D003374@slippy.cwsent.com>
Content-Type: text/plain; charset="UTF-8"
List-Id: SVN commit messages for the src tree for head/-current

It's definitely drm; I noted it does not like the sparse array. This one
should do the trick then:

https://people.freebsd.org/~mjg/pmap-nosparse.diff

On 10/8/19, Cy Schubert wrote:
> Still no joy.
>
> I still think drm-current-kmod is involved, because these warnings are
> produced just prior to the panic, whereas the dmesg buffer is clean of
> them without r353149.
>
> Unread portion of the kernel message buffer:
> WARNING !drm_modeset_is_locked(&crtc->mutex) failed at
> /usr/local/sys/modules/drm-current-kmod/drivers/gpu/drm/drm_atomic_helper.c:577
> WARNING !drm_modeset_is_locked(&crtc->mutex) failed at
> /usr/local/sys/modules/drm-current-kmod/drivers/gpu/drm/drm_atomic_helper.c:577
> WARNING !drm_modeset_is_locked(&dev->mode_config.connection_mutex) failed at
> /usr/local/sys/modules/drm-current-kmod/drivers/gpu/drm/drm_atomic_helper.c:622
> WARNING !drm_modeset_is_locked(&dev->mode_config.connection_mutex) failed at
> /usr/local/sys/modules/drm-current-kmod/drivers/gpu/drm/drm_atomic_helper.c:622
> WARNING !drm_modeset_is_locked(&plane->mutex) failed at
> /usr/local/sys/modules/drm-current-kmod/drivers/gpu/drm/drm_atomic_helper.c:821
> WARNING !drm_modeset_is_locked(&plane->mutex) failed at
> /usr/local/sys/modules/drm-current-kmod/drivers/gpu/drm/drm_atomic_helper.c:821
> WARNING !drm_modeset_is_locked(&plane->mutex) failed at
> /usr/local/sys/modules/drm-current-kmod/drivers/gpu/drm/drm_atomic_helper.c:821
> WARNING !drm_modeset_is_locked(&plane->mutex) failed at
> /usr/local/sys/modules/drm-current-kmod/drivers/gpu/drm/drm_atomic_helper.c:821
> WARNING !drm_modeset_is_locked(&plane->mutex) failed at
> /usr/local/sys/modules/drm-current-kmod/drivers/gpu/drm/drm_atomic_helper.c:821
> WARNING !drm_modeset_is_locked(&plane->mutex) failed at
> /usr/local/sys/modules/drm-current-kmod/drivers/gpu/drm/drm_atomic_helper.c:821
> <4>WARN_ON(!mutex_is_locked(&dev->struct_mutex))WARN_ON(!mutex_is_locked(&dev->struct_mutex))
>
> <4>WARN_ON(!mutex_is_locked(&fbc->lock))WARN_ON(!mutex_is_locked(&fbc->lock))
>
> My servers (no X11) work well with this. It's only drm-current-kmod that
> has gas with this rev.
>
> I've cc'd the maintainer of drm-current-kmod (x11@).
>
> --
> Cheers,
> Cy Schubert
> FreeBSD UNIX:  Web: http://www.FreeBSD.org
>
> The need of the many outweighs the greed of the few.
>
> In message <...>, Mateusz Guzik writes:
>> Does this fix it for you?
>>
>> https://people.freebsd.org/~mjg/pmap-fict.diff
>>
>> On 10/7/19, Mateusz Guzik wrote:
>>> Ok, looks like it does not like the sparse array for fictitious
>>> mappings. I'll see about a patch.
>>>
>>> On 10/7/19, Cy Schubert wrote:
>>>> In message <...>, Mateusz Guzik writes:
>>>>> Can you show:
>>>>>
>>>>> sysctl vm.phys_segs
>>>>
>>>> vm.phys_segs:
>>>> SEGMENT 0:
>>>>
>>>> start:     0x10000
>>>> end:       0x9d000
>>>> domain:    0
>>>> free list: 0xffffffff80f31070
>>>>
>>>> SEGMENT 1:
>>>>
>>>> start:     0x100000
>>>> end:       0x1000000
>>>> domain:    0
>>>> free list: 0xffffffff80f31070
>>>>
>>>> SEGMENT 2:
>>>>
>>>> start:     0x1000000
>>>> end:       0x1ca4000
>>>> domain:    0
>>>> free list: 0xffffffff80f30e00
>>>>
>>>> SEGMENT 3:
>>>>
>>>> start:     0x1cb3000
>>>> end:       0x1ce3000
>>>> domain:    0
>>>> free list: 0xffffffff80f30e00
>>>>
>>>> SEGMENT 4:
>>>>
>>>> start:     0x1f00000
>>>> end:       0x20000000
>>>> domain:    0
>>>> free list: 0xffffffff80f30e00
>>>>
>>>> SEGMENT 5:
>>>>
>>>> start:     0x20200000
>>>> end:       0x40000000
>>>> domain:    0
>>>> free list: 0xffffffff80f30e00
>>>>
>>>> SEGMENT 6:
>>>>
>>>> start:     0x40203000
>>>> end:       0xd4993000
>>>> domain:    0
>>>> free list: 0xffffffff80f30e00
>>>>
>>>> SEGMENT 7:
>>>>
>>>> start:     0xd6fff000
>>>> end:       0xd7000000
>>>> domain:    0
>>>> free list: 0xffffffff80f30e00
>>>>
>>>> SEGMENT 8:
>>>>
>>>> start:     0x100001000
>>>> end:       0x211d4d000
>>>> domain:    0
>>>> free list: 0xffffffff80f30e00
>>>>
>>>> SEGMENT 9:
>>>>
>>>> start:     0x21fc00000
>>>> end:       0x21fd44000
>>>> domain:    0
>>>> free list: 0xffffffff80f30e00
>>>>
>>>>> and from the crashdump:
>>>>> p pv_table
>>>>
>>>> $1 = (struct pmap_large_md_page *) 0xfffffe000e000000
>>>>
>>>> (kgdb) p *pv_table
>>>> $1 = {pv_lock = {lock_object = {lo_name = 0xffffffff80b0a9ce "pmap pv list",
>>>>       lo_flags = 623050752, lo_data = 0, lo_witness = 0x800000000201f163},
>>>>     rw_lock = 1}, pv_page = {pv_list = {tqh_first = 0x0,
>>>>       tqh_last = 0xfffffe000e000020}, pv_gen = 0, pat_mode = 0},
>>>>   pv_invl_gen = 0}
>>>> (kgdb)
>>>>
>>>> --
>>>> Cheers,
>>>> Cy Schubert
>>>> FreeBSD UNIX:  Web: http://www.FreeBSD.org
>>>>
>>>> The need of the many outweighs the greed of the few.
>>>>
>>>>> On 10/7/19, Cy Schubert wrote:
>>>>>> In message <201910070406.x9746N0U009068@slippy.cwsent.com>, Cy
>>>>>> Schubert writes:
>>>>>>> In message <201910062213.x96MDZv3085523@repo.freebsd.org>, Mateusz
>>>>>>> Guzik writes:
>>>>>>>> Author: mjg
>>>>>>>> Date: Sun Oct  6 22:13:35 2019
>>>>>>>> New Revision: 353149
>>>>>>>> URL: https://svnweb.freebsd.org/changeset/base/353149
>>>>>>>>
>>>>>>>> Log:
>>>>>>>>   amd64 pmap: implement per-superpage locks
>>>>>>>>
>>>>>>>>   The current 256-lock sized array is a problem in the following ways:
>>>>>>>>   - it's way too small
>>>>>>>>   - there are 2 locks per cacheline
>>>>>>>>   - it is not NUMA-aware
>>>>>>>>
>>>>>>>>   Solve these issues by introducing per-superpage locks backed by pages
>>>>>>>>   allocated from respective domains.
>>>>>>>>
>>>>>>>>   This significantly reduces contention e.g. during poudriere -j 104.
>>>>>>>>   See the review for results.
>>>>>>>>
>>>>>>>>   Reviewed by:	kib
>>>>>>>>   Discussed with:	jeff
>>>>>>>>   Sponsored by:	The FreeBSD Foundation
>>>>>>>>   Differential Revision:	https://reviews.freebsd.org/D21833
>>>>>>>>
>>>>>>>> Modified:
>>>>>>>>   head/sys/amd64/amd64/pmap.c
>>>>>>>>
>>>>>>>> Modified: head/sys/amd64/amd64/pmap.c
>>>>>>>> ==============================================================================
>>>>>>>> --- head/sys/amd64/amd64/pmap.c	Sun Oct  6 20:36:25 2019	(r353148)
>>>>>>>> +++ head/sys/amd64/amd64/pmap.c	Sun Oct  6 22:13:35 2019	(r353149)
>>>>>>>> @@ -316,13 +316,25 @@ pmap_pku_mask_bit(pmap_t pmap)
>>>>>>>>  #define	PV_STAT(x)	do { } while (0)
>>>>>>>>  #endif
>>>>>>>>
>>>>>>>> -#define	pa_index(pa)	((pa) >> PDRSHIFT)
>>>>>>>> +#undef pa_index
>>>>>>>> +#define	pa_index(pa)	({					\
>>>>>>>> +	KASSERT((pa) <= vm_phys_segs[vm_phys_nsegs - 1].end,	\
>>>>>>>> +	    ("address %lx beyond the last segment", (pa)));	\
>>>>>>>> +	(pa) >> PDRSHIFT;					\
>>>>>>>> +})
>>>>>>>> +#if VM_NRESERVLEVEL > 0
>>>>>>>> +#define	pa_to_pmdp(pa)	(&pv_table[pa_index(pa)])
>>>>>>>> +#define	pa_to_pvh(pa)	(&(pa_to_pmdp(pa)->pv_page))
>>>>>>>> +#define	PHYS_TO_PV_LIST_LOCK(pa)	\
>>>>>>>> +			(&(pa_to_pmdp(pa)->pv_lock))
>>>>>>>> +#else
>>>>>>>>  #define	pa_to_pvh(pa)	(&pv_table[pa_index(pa)])
>>>>>>>>
>>>>>>>>  #define	NPV_LIST_LOCKS	MAXCPU
>>>>>>>>
>>>>>>>>  #define	PHYS_TO_PV_LIST_LOCK(pa)	\
>>>>>>>>  			(&pv_list_locks[pa_index(pa) % NPV_LIST_LOCKS])
>>>>>>>> +#endif
>>>>>>>>
>>>>>>>>  #define	CHANGE_PV_LIST_LOCK_TO_PHYS(lockp, pa)	do {	\
>>>>>>>>  	struct rwlock **_lockp = (lockp);		\
>>>>>>>> @@ -400,14 +412,22 @@ static int pmap_initialized;
>>>>>>>>
>>>>>>>>  /*
>>>>>>>>   * Data for the pv entry allocation mechanism.
>>>>>>>> - * Updates to pv_invl_gen are protected by the pv_list_locks[]
>>>>>>>> - * elements, but reads are not.
>>>>>>>> + * Updates to pv_invl_gen are protected by the pv list lock but reads are not.
>>>>>>>>   */
>>>>>>>>  static TAILQ_HEAD(pch, pv_chunk) pv_chunks = TAILQ_HEAD_INITIALIZER(pv_chunks);
>>>>>>>>  static struct mtx __exclusive_cache_line pv_chunks_mutex;
>>>>>>>> +#if VM_NRESERVLEVEL > 0
>>>>>>>> +struct pmap_large_md_page {
>>>>>>>> +	struct rwlock   pv_lock;
>>>>>>>> +	struct md_page  pv_page;
>>>>>>>> +	u_long pv_invl_gen;
>>>>>>>> +};
>>>>>>>> +static struct pmap_large_md_page *pv_table;
>>>>>>>> +#else
>>>>>>>>  static struct rwlock __exclusive_cache_line pv_list_locks[NPV_LIST_LOCKS];
>>>>>>>>  static u_long pv_invl_gen[NPV_LIST_LOCKS];
>>>>>>>>  static struct md_page *pv_table;
>>>>>>>> +#endif
>>>>>>>>  static struct md_page pv_dummy;
>>>>>>>>
>>>>>>>>  /*
>>>>>>>> @@ -918,12 +938,21 @@ SYSCTL_LONG(_vm_pmap, OID_AUTO, invl_wait_slow, CTLFLA
>>>>>>>>      "Number of slow invalidation waits for lockless DI");
>>>>>>>>  #endif
>>>>>>>>
>>>>>>>> +#if VM_NRESERVLEVEL > 0
>>>>>>>>  static u_long *
>>>>>>>>  pmap_delayed_invl_genp(vm_page_t m)
>>>>>>>>  {
>>>>>>>>
>>>>>>>> +	return (&pa_to_pmdp(VM_PAGE_TO_PHYS(m))->pv_invl_gen);
>>>>>>>> +}
>>>>>>>> +#else
>>>>>>>> +static u_long *
>>>>>>>> +pmap_delayed_invl_genp(vm_page_t m)
>>>>>>>> +{
>>>>>>>> +
>>>>>>>>  	return (&pv_invl_gen[pa_index(VM_PAGE_TO_PHYS(m)) % NPV_LIST_LOCKS]);
>>>>>>>>  }
>>>>>>>> +#endif
>>>>>>>>
>>>>>>>>  static void
>>>>>>>>  pmap_delayed_invl_callout_func(void *arg __unused)
>>>>>>>> @@ -1803,6 +1832,112 @@ pmap_page_init(vm_page_t m)
>>>>>>>>  	m->md.pat_mode = PAT_WRITE_BACK;
>>>>>>>>  }
>>>>>>>>
>>>>>>>> +#if VM_NRESERVLEVEL > 0
>>>>>>>> +static void
>>>>>>>> +pmap_init_pv_table(void)
>>>>>>>> +{
>>>>>>>> +	struct pmap_large_md_page *pvd;
>>>>>>>> +	vm_size_t s;
>>>>>>>> +	long start, end, highest, pv_npg;
>>>>>>>> +	int domain, i, j, pages;
>>>>>>>> +
>>>>>>>> +	/*
>>>>>>>> +	 * We strongly depend on the size being a power of two, so the assert
>>>>>>>> +	 * is overzealous. However, should the struct be resized to a
>>>>>>>> +	 * different power of two, the code below needs to be revisited.
>>>>>>>> +	 */
>>>>>>>> +	CTASSERT((sizeof(*pvd) == 64));
>>>>>>>> +
>>>>>>>> +	/*
>>>>>>>> +	 * Calculate the size of the array.
>>>>>>>> +	 */
>>>>>>>> +	pv_npg = howmany(vm_phys_segs[vm_phys_nsegs - 1].end, NBPDR);
>>>>>>>> +	s = (vm_size_t)pv_npg * sizeof(struct pmap_large_md_page);
>>>>>>>> +	s = round_page(s);
>>>>>>>> +	pv_table = (struct pmap_large_md_page *)kva_alloc(s);
>>>>>>>> +	if (pv_table == NULL)
>>>>>>>> +		panic("%s: kva_alloc failed\n", __func__);
>>>>>>>> +
>>>>>>>> +	/*
>>>>>>>> +	 * Iterate physical segments to allocate space for respective pages.
>>>>>>>> +	 */
>>>>>>>> +	highest = -1;
>>>>>>>> +	s = 0;
>>>>>>>> +	for (i = 0; i < vm_phys_nsegs; i++) {
>>>>>>>> +		start = vm_phys_segs[i].start / NBPDR;
>>>>>>>> +		end = vm_phys_segs[i].end / NBPDR;
>>>>>>>> +		domain = vm_phys_segs[i].domain;
>>>>>>>> +
>>>>>>>> +		if (highest >= end)
>>>>>>>> +			continue;
>>>>>>>> +
>>>>>>>> +		if (start < highest) {
>>>>>>>> +			start = highest + 1;
>>>>>>>> +			pvd = &pv_table[start];
>>>>>>>> +		} else {
>>>>>>>> +			/*
>>>>>>>> +			 * The lowest address may land somewhere in the middle
>>>>>>>> +			 * of our page. Simplify the code by pretending it is
>>>>>>>> +			 * at the beginning.
>>>>>>>> +			 */
>>>>>>>> +			pvd = pa_to_pmdp(vm_phys_segs[i].start);
>>>>>>>> +			pvd = (struct pmap_large_md_page *)trunc_page(pvd);
>>>>>>>> +			start = pvd - pv_table;
>>>>>>>> +		}
>>>>>>>> +
>>>>>>>> +		pages = end - start + 1;
>>>>>>>> +		s = round_page(pages * sizeof(*pvd));
>>>>>>>> +		highest = start + (s / sizeof(*pvd)) - 1;
>>>>>>>> +
>>>>>>>> +		for (j = 0; j < s; j += PAGE_SIZE) {
>>>>>>>> +			vm_page_t m = vm_page_alloc_domain(NULL, 0,
>>>>>>>> +			    domain, VM_ALLOC_NORMAL | VM_ALLOC_NOOBJ);
>>>>>>>> +			if (m == NULL)
>>>>>>>> +				panic("vm_page_alloc_domain failed for %lx\n",
>>>>>>>> +				    (vm_offset_t)pvd + j);
>>>>>>>> +			pmap_qenter((vm_offset_t)pvd + j, &m, 1);
>>>>>>>> +		}
>>>>>>>> +
>>>>>>>> +		for (j = 0; j < s / sizeof(*pvd); j++) {
>>>>>>>> +			rw_init_flags(&pvd->pv_lock, "pmap pv list", RW_NEW);
>>>>>>>> +			TAILQ_INIT(&pvd->pv_page.pv_list);
>>>>>>>> +			pvd->pv_page.pv_gen = 0;
>>>>>>>> +			pvd->pv_page.pat_mode = 0;
>>>>>>>> +			pvd->pv_invl_gen = 0;
>>>>>>>> +			pvd++;
>>>>>>>> +		}
>>>>>>>> +	}
>>>>>>>> +	TAILQ_INIT(&pv_dummy.pv_list);
>>>>>>>> +}
>>>>>>>> +#else
>>>>>>>> +static void
>>>>>>>> +pmap_init_pv_table(void)
>>>>>>>> +{
>>>>>>>> +	vm_size_t s;
>>>>>>>> +	long i, pv_npg;
>>>>>>>> +
>>>>>>>> +	/*
>>>>>>>> +	 * Initialize the pool of pv list locks.
>>>>>>>> +	 */
>>>>>>>> +	for (i = 0; i < NPV_LIST_LOCKS; i++)
>>>>>>>> +		rw_init(&pv_list_locks[i], "pmap pv list");
>>>>>>>> +
>>>>>>>> +	/*
>>>>>>>> +	 * Calculate the size of the pv head table for superpages.
>>>>>>>> +	 */
>>>>>>>> +	pv_npg = howmany(vm_phys_segs[vm_phys_nsegs - 1].end, NBPDR);
>>>>>>>> +
>>>>>>>> +	/*
>>>>>>>> +	 * Allocate memory for the pv head table for superpages.
>>>>>>>> +	 */
>>>>>>>> +	s = (vm_size_t)pv_npg * sizeof(struct md_page);
>>>>>>>> +	s = round_page(s);
>>>>>>>> +	pv_table = (struct md_page *)kmem_malloc(s, M_WAITOK | M_ZERO);
>>>>>>>> +	for (i = 0; i < pv_npg; i++)
>>>>>>>> +		TAILQ_INIT(&pv_table[i].pv_list);
>>>>>>>> +	TAILQ_INIT(&pv_dummy.pv_list);
>>>>>>>> +}
>>>>>>>> +#endif
>>>>>>>> +
>>>>>>>>  /*
>>>>>>>>   * Initialize the pmap module.
>>>>>>>>   * Called by vm_init, to initialize any structures that the pmap
>>>>>>>> @@ -1813,8 +1948,7 @@ pmap_init(void)
>>>>>>>>  {
>>>>>>>>  	struct pmap_preinit_mapping *ppim;
>>>>>>>>  	vm_page_t m, mpte;
>>>>>>>> -	vm_size_t s;
>>>>>>>> -	int error, i, pv_npg, ret, skz63;
>>>>>>>> +	int error, i, ret, skz63;
>>>>>>>>
>>>>>>>>  	/* L1TF, reserve page @0 unconditionally */
>>>>>>>>  	vm_page_blacklist_add(0, bootverbose);
>>>>>>>> @@ -1902,26 +2036,7 @@ pmap_init(void)
>>>>>>>>  	 */
>>>>>>>>  	mtx_init(&pv_chunks_mutex, "pmap pv chunk list", NULL, MTX_DEF);
>>>>>>>>
>>>>>>>> -	/*
>>>>>>>> -	 * Initialize the pool of pv list locks.
>>>>>>>> -	 */
>>>>>>>> -	for (i = 0; i < NPV_LIST_LOCKS; i++)
>>>>>>>> -		rw_init(&pv_list_locks[i], "pmap pv list");
>>>>>>>> -
>>>>>>>> -	/*
>>>>>>>> -	 * Calculate the size of the pv head table for superpages.
>>>>>>>> -	 */
>>>>>>>> -	pv_npg = howmany(vm_phys_segs[vm_phys_nsegs - 1].end, NBPDR);
>>>>>>>> -
>>>>>>>> -	/*
>>>>>>>> -	 * Allocate memory for the pv head table for superpages.
>>>>>>>> -	 */
>>>>>>>> -	s = (vm_size_t)(pv_npg * sizeof(struct md_page));
>>>>>>>> -	s = round_page(s);
>>>>>>>> -	pv_table = (struct md_page *)kmem_malloc(s, M_WAITOK | M_ZERO);
>>>>>>>> -	for (i = 0; i < pv_npg; i++)
>>>>>>>> -		TAILQ_INIT(&pv_table[i].pv_list);
>>>>>>>> -	TAILQ_INIT(&pv_dummy.pv_list);
>>>>>>>> +	pmap_init_pv_table();
>>>>>>>>
>>>>>>>>  	pmap_initialized = 1;
>>>>>>>>  	for (i = 0; i < PMAP_PREINIT_MAPPING_COUNT; i++) {
>>>>>>>
>>>>>>> This causes a page fault during X (xdm) startup, which loads
>>>>>>> drm-current-kmod.
>>>>>>>
>>>>>>> db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame 0xfffffe0093e9c260
>>>>>>> vpanic() at vpanic+0x19d/frame 0xfffffe0093e9c2b0
>>>>>>> panic() at panic+0x43/frame 0xfffffe0093e9c310
>>>>>>> vm_fault() at vm_fault+0x2126/frame 0xfffffe0093e9c460
>>>>>>> vm_fault_trap() at vm_fault_trap+0x73/frame 0xfffffe0093e9c4b0
>>>>>>> trap_pfault() at trap_pfault+0x1b6/frame 0xfffffe0093e9c510
>>>>>>> trap() at trap+0x2a1/frame 0xfffffe0093e9c620
>>>>>>> calltrap() at calltrap+0x8/frame 0xfffffe0093e9c620
>>>>>>> --- trap 0xc, rip = 0xffffffff80a054b1, rsp = 0xfffffe0093e9c6f0, rbp = 0xfffffe0093e9c7a0 ---
>>>>>>> pmap_enter() at pmap_enter+0x861/frame 0xfffffe0093e9c7a0
>>>>>>> vm_fault() at vm_fault+0x1c69/frame 0xfffffe0093e9c8f0
>>>>>>> vm_fault_trap() at vm_fault_trap+0x73/frame 0xfffffe0093e9c940
>>>>>>> trap_pfault() at trap_pfault+0x1b6/frame 0xfffffe0093e9c9a0
>>>>>>> trap() at trap+0x438/frame 0xfffffe0093e9cab0
>>>>>>> calltrap() at calltrap+0x8/frame 0xfffffe0093e9cab0
>>>>>>> --- trap 0xc, rip = 0x30e2a9c3, rsp = 0x7fffffffea50, rbp = 0x7fffffffeaa0 ---
>>>>>>> Uptime: 3m33s
>>>>>>> Dumping 945 out of 7974 MB:..2%..11%..21%..31%..41%..51%..61%..72%..82%..92%
>>>>>>>
>>>>>>> (kgdb) bt
>>>>>>> #0  doadump (textdump=1) at pcpu_aux.h:55
>>>>>>> #1  0xffffffff8068c5ed in kern_reboot (howto=260)
>>>>>>>     at /opt/src/svn-current/sys/kern/kern_shutdown.c:479
>>>>>>> #2  0xffffffff8068caa9 in vpanic (fmt=<optimized out>, ap=<optimized out>)
>>>>>>>     at /opt/src/svn-current/sys/kern/kern_shutdown.c:908
>>>>>>> #3  0xffffffff8068c8a3 in panic (fmt=<optimized out>)
>>>>>>>     at /opt/src/svn-current/sys/kern/kern_shutdown.c:835
>>>>>>> #4  0xffffffff8098c966 in vm_fault (map=<optimized out>,
>>>>>>>     vaddr=<optimized out>, fault_type=<optimized out>,
>>>>>>>     fault_flags=<optimized out>, m_hold=<optimized out>)
>>>>>>>     at /opt/src/svn-current/sys/vm/vm_fault.c:672
>>>>>>> #5  0xffffffff8098a723 in vm_fault_trap (map=0xfffff80002001000,
>>>>>>>     vaddr=<optimized out>, fault_type=2 '\002',
>>>>>>>     fault_flags=<optimized out>, signo=0x0, ucode=0x0)
>>>>>>>     at /opt/src/svn-current/sys/vm/vm_fault.c:568
>>>>>>> #6  0xffffffff80a18326 in trap_pfault (frame=0xfffffe0093e9c630,
>>>>>>>     signo=<optimized out>, ucode=<optimized out>)
>>>>>>>     at /opt/src/svn-current/sys/amd64/amd64/trap.c:828
>>>>>>> #7  0xffffffff80a177f1 in trap (frame=0xfffffe0093e9c630)
>>>>>>>     at /opt/src/svn-current/sys/amd64/amd64/trap.c:407
>>>>>>> #8  0xffffffff809f1aac in calltrap ()
>>>>>>>     at /opt/src/svn-current/sys/amd64/amd64/exception.S:289
>>>>>>> ---Type <return> to continue, or q <return> to quit---
>>>>>>> #9  0xffffffff80a054b1 in pmap_enter (pmap=<optimized out>,
>>>>>>>     va=851443712, m=0xfffffe0005b25ce8, prot=<optimized out>,
>>>>>>>     flags=2677542912, psind=<optimized out>) at atomic.h:221
>>>>>>> #10 0xffffffff8098c4a9 in vm_fault (map=<optimized out>,
>>>>>>>     vaddr=<optimized out>, fault_type=232 '\350',
>>>>>>>     fault_flags=<optimized out>, m_hold=0x0)
>>>>>>>     at /opt/src/svn-current/sys/vm/vm_fault.c:489
>>>>>>> #11 0xffffffff8098a723 in vm_fault_trap (map=0xfffff80173eb5000,
>>>>>>>     vaddr=<optimized out>, fault_type=2 '\002',
>>>>>>>     fault_flags=<optimized out>, signo=0xfffffe0093e9ca84,
>>>>>>>     ucode=0xfffffe0093e9ca80) at /opt/src/svn-current/sys/vm/vm_fault.c:568
>>>>>>> #12 0xffffffff80a18326 in trap_pfault (frame=0xfffffe0093e9cac0,
>>>>>>>     signo=<optimized out>, ucode=<optimized out>)
>>>>>>>     at /opt/src/svn-current/sys/amd64/amd64/trap.c:828
>>>>>>> #13 0xffffffff80a17988 in trap (frame=0xfffffe0093e9cac0)
>>>>>>>     at /opt/src/svn-current/sys/amd64/amd64/trap.c:347
>>>>>>> #14 0xffffffff809f1aac in calltrap ()
>>>>>>>     at /opt/src/svn-current/sys/amd64/amd64/exception.S:289
>>>>>>> #15 0x0000000030e2a9c3 in ?? ()
>>>>>>> Previous frame inner to this frame (corrupt stack?)
>>>>>>> Current language:  auto; currently minimal
>>>>>>> (kgdb) frame 9
>>>>>>> #9  0xffffffff80a054b1 in pmap_enter (pmap=<optimized out>,
>>>>>>>     va=851443712, m=0xfffffe0005b25ce8, prot=<optimized out>,
>>>>>>>     flags=2677542912, psind=<optimized out>) at atomic.h:221
>>>>>>> 221	ATOMIC_CMPSET(long);
>>>>>>> (kgdb) l
>>>>>>> 216	}
>>>>>>> 217
>>>>>>> 218	ATOMIC_CMPSET(char);
>>>>>>> 219	ATOMIC_CMPSET(short);
>>>>>>> 220	ATOMIC_CMPSET(int);
>>>>>>> 221	ATOMIC_CMPSET(long);
>>>>>>> 222
>>>>>>> 223	/*
>>>>>>> 224	 * Atomically add the value of v to the integer pointed to by p and return
>>>>>>> 225	 * the previous value of *p.
>>>>>>> (kgdb)
>>>>>>
>>>>>> I should use kgdb from ports instead of /usr/libexec version. Similar
>>>>>> result.
>>>>>>
>>>>>> <4>WARN_ON(!mutex_is_locked(&fbc->lock))WARN_ON(!mutex_is_locked(&fbc->lock))
>>>>>> panic: vm_fault: fault on nofault entry, addr: 0xfffffe000e01c000
>>>>>> cpuid = 1
>>>>>> time = 1570417211
>>>>>> KDB: stack backtrace:
>>>>>> db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame 0xfffffe0093e9c260
>>>>>> vpanic() at vpanic+0x19d/frame 0xfffffe0093e9c2b0
>>>>>> panic() at panic+0x43/frame 0xfffffe0093e9c310
>>>>>> vm_fault() at vm_fault+0x2126/frame 0xfffffe0093e9c460
>>>>>> vm_fault_trap() at vm_fault_trap+0x73/frame 0xfffffe0093e9c4b0
>>>>>> trap_pfault() at trap_pfault+0x1b6/frame 0xfffffe0093e9c510
>>>>>> trap() at trap+0x2a1/frame 0xfffffe0093e9c620
>>>>>> calltrap() at calltrap+0x8/frame 0xfffffe0093e9c620
>>>>>> --- trap 0xc, rip = 0xffffffff80a054b1, rsp = 0xfffffe0093e9c6f0, rbp = 0xfffffe0093e9c7a0 ---
>>>>>> pmap_enter() at pmap_enter+0x861/frame 0xfffffe0093e9c7a0
>>>>>> vm_fault() at vm_fault+0x1c69/frame 0xfffffe0093e9c8f0
>>>>>> vm_fault_trap() at vm_fault_trap+0x73/frame 0xfffffe0093e9c940
>>>>>> trap_pfault() at trap_pfault+0x1b6/frame 0xfffffe0093e9c9a0
>>>>>> trap() at trap+0x438/frame 0xfffffe0093e9cab0
>>>>>> calltrap() at calltrap+0x8/frame 0xfffffe0093e9cab0
>>>>>> --- trap 0xc, rip = 0x30e2a9c3, rsp = 0x7fffffffea50, rbp = 0x7fffffffeaa0 ---
>>>>>> Uptime: 3m33s
>>>>>> Dumping 945 out of 7974 MB:..2%..11%..21%..31%..41%..51%..61%..72%..82%..92%
>>>>>>
>>>>>> __curthread () at /opt/src/svn-current/sys/amd64/include/pcpu_aux.h:55
>>>>>> 55	__asm("movq %%gs:%P1,%0" : "=r" (td) : "n" (offsetof(struct pcpu,
>>>>>> (kgdb)
>>>>>>
>>>>>> Backtrace stopped: Cannot access memory at address 0x7fffffffea50
>>>>>> (kgdb) frame 10
>>>>>> #10 0xffffffff80a054b1 in atomic_fcmpset_long (dst=<optimized out>,
>>>>>>     src=<optimized out>, expect=<optimized out>)
>>>>>>     at /opt/src/svn-current/sys/amd64/include/atomic.h:221
>>>>>> 221	ATOMIC_CMPSET(long);
>>>>>> (kgdb) l
>>>>>> 216	}
>>>>>> 217
>>>>>> 218	ATOMIC_CMPSET(char);
>>>>>> 219	ATOMIC_CMPSET(short);
>>>>>> 220	ATOMIC_CMPSET(int);
>>>>>> 221	ATOMIC_CMPSET(long);
>>>>>> 222
>>>>>> 223	/*
>>>>>> 224	 * Atomically add the value of v to the integer pointed to by p and return
>>>>>> 225	 * the previous value of *p.
>>>>>> (kgdb)
>>>>>>
>>>>>> --
>>>>>> Cheers,
>>>>>> Cy Schubert
>>>>>> FreeBSD UNIX:  Web: http://www.FreeBSD.org
>>>>>>
>>>>>> The need of the many outweighs the greed of the few.
>>>>>
>>>>> --
>>>>> Mateusz Guzik
>>>
>>> --
>>> Mateusz Guzik
>>
>> --
>> Mateusz Guzik

--
Mateusz Guzik