Date: Mon, 15 Feb 2010 09:31:26 -0800
From: Randall Stewart <rrs@lakerest.net>
To: "C. Jayachandran" <c.jayachandran@gmail.com>
Cc: freebsd-mips@freebsd.org
Subject: Re: RMI status
Message-ID: <C602FD3A-246E-4C1D-8A71-E81BF6225C30@lakerest.net>
In-Reply-To: <98a59be81002150751l2871e825ycbc9dda736870e4f@mail.gmail.com>
References: <5709963B-3F83-44FE-991F-A3227A2052DC@lakerest.net>
 <98a59be81002110655y60ab4e8cj473f4b6ecf6f5ae4@mail.gmail.com>
 <A718B150-C815-414E-947D-9FD94830DD7D@lakerest.net>
 <98a59be81002110915l2fa64189g28f13f8ad39c9584@mail.gmail.com>
 <98a59be81002150751l2871e825ycbc9dda736870e4f@mail.gmail.com>
JC:

Yeah, I figure you must be hitting something like that. The original hack was
put in place just to find an easy way to use some of the memory above 512Meg
on Octeon. The intent was always to go to n32 or n64 so we could use the full
address space. I believe you will find comments in the code at one point that
say the kernel will crash if it gets a page above 512Meg...

Now I see two ways forward. Either we do something like you mentioned, i.e.
you set up and pre-reserve a bunch of pages for the kernel ahead of time
(some sort of special allocation), or we ignore this issue until we get to
64 bit.

I think the true answer is being in a mode such as n64, where you have an
xseg that can address all the memory. An overall plan to go to one of the
new ABIs seems better to me than adding more special hacks (a memory
allocator).

R

On Feb 15, 2010, at 7:51 AM, C. Jayachandran wrote:

> On Thu, Feb 11, 2010 at 10:45 PM, C. Jayachandran
> <c.jayachandran@gmail.com> wrote:
>> On Thu, Feb 11, 2010 at 9:08 PM, Randall Stewart <rrs@lakerest.net>
>> wrote:
>>> Ahh.. I don't use a -jN since there is only one core
>>> currently... That would use more memory... maybe running
>>> the kernel out of memory below the magic 512Meg mark. If that
>>> happens things will break...
>>
>> I think you are right - I added the following patch (probably
>> whitespace damaged) to trap this case, and it certainly seems to get
>> pages above 256M before it crashed (on XLR the default bootloader maps
>> physmem from 0-256M; above that is I/O and flash mapping).
>
> The two places where MIPS_PHYS_TO_CACHED(pa) is called without the
> physical address being checked to be less than
> MIPS_KSEG0_LARGEST_PHYS are below:
>
> mips/mips/pmap.c:pmap_pinit
> |	while ((ptdpg = vm_page_alloc(NULL, NUSERPGTBLS, req)) == NULL)
> |		VM_WAIT;
> |
> |	ptdpg->valid = VM_PAGE_BITS_ALL;
> |
> |	pmap->pm_segtab = (pd_entry_t *)
> |	    MIPS_PHYS_TO_CACHED(VM_PAGE_TO_PHYS(ptdpg));
>
> mips/mips/pmap.c:_pmap_allocpte
> |	if ((m = vm_page_alloc(NULL, ptepindex, req)) == NULL) {
> |		if (flags & M_WAITOK) {
> |[...deleted..]
> |	pmap->pm_stats.resident_count++;
> |
> |	ptepa = VM_PAGE_TO_PHYS(m);
> |	pteva = MIPS_PHYS_TO_CACHED(ptepa);
>
> As I wrote earlier, I had added prints here, and we get addresses
> which are outside the direct-mapped area when there is more than
> 512M of memory.
>
> I cannot see how this can work, because vm_page_alloc can return pages
> which can be above the maximum KSEG0 address, and we will crash in
> that case.
>
> I am trying to use 'vm_phys_alloc_contig' to allocate KSEG0 pages,
> but I'm still figuring out how to use it correctly - and what the
> performance penalty will be. Meanwhile, any ideas on how this can be
> fixed (or better, an explanation of why this is not an issue) would be
> very welcome.
>
> Also, in RMI's FreeBSD 6.4 code, we had a platform-specific version of
> uma_small_alloc which would maintain a pool of KSEG0 pages (using a
> kernel thread); I think something like that would be useful here to
> maintain a pool of KSEG0 pages which could be used for page tables too.
>
> Thanks,
> JC.

------------------------------
Randall Stewart
803-317-4952 (cell)
803-345-0391 (direct)
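A minimal sketch of the bounded allocation discussed above, assuming the
vm_phys_alloc_contig(npages, low, high, alignment, boundary) interface of the
head tree from that period; the helper name pmap_alloc_direct_page() is purely
hypothetical, the header list is approximate, and page initialization,
locking, and VM_WAIT/retry handling are left out:

    #include <sys/param.h>
    #include <sys/systm.h>

    #include <vm/vm.h>
    #include <vm/vm_param.h>
    #include <vm/vm_page.h>
    #include <vm/vm_phys.h>

    #include <machine/cpuregs.h>  /* MIPS_PHYS_TO_CACHED, MIPS_KSEG0_LARGEST_PHYS */

    /*
     * Hypothetical helper: allocate a single page whose physical address
     * stays inside the KSEG0 direct-mapped window, so that
     * MIPS_PHYS_TO_CACHED() on it yields a usable kernel virtual address
     * even when the machine has more than 512M of memory.
     */
    static vm_page_t
    pmap_alloc_direct_page(void)
    {
            vm_page_t m;

            /*
             * Constrain the physical allocator to the range
             * [0, MIPS_KSEG0_LARGEST_PHYS) instead of taking whatever
             * vm_page_alloc() returns, which may be above the
             * direct-map limit.
             */
            m = vm_phys_alloc_contig(1, 0, MIPS_KSEG0_LARGEST_PHYS,
                PAGE_SIZE, 0);
            if (m == NULL)
                    return (NULL);

            KASSERT(VM_PAGE_TO_PHYS(m) < MIPS_KSEG0_LARGEST_PHYS,
                ("pmap_alloc_direct_page: page above KSEG0 direct map"));
            return (m);
    }

The same bounded allocation could also sit behind a platform uma_small_alloc()
that keeps a pool of direct-mapped pages, along the lines of the RMI FreeBSD
6.4 approach JC mentions.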