From owner-freebsd-arch Wed Feb 28 07:34:07 2001
Delivered-To: freebsd-arch@freebsd.org
Received: from duke.cs.duke.edu (duke.cs.duke.edu [152.3.140.1])
	by hub.freebsd.org (Postfix) with ESMTP id C69EA37B71A
	for ; Wed, 28 Feb 2001 07:34:01 -0800 (PST)
	(envelope-from gallatin@cs.duke.edu)
Received: from grasshopper.cs.duke.edu (grasshopper.cs.duke.edu [152.3.145.30])
	by duke.cs.duke.edu (8.9.3/8.9.3) with ESMTP id KAA02215
	for ; Wed, 28 Feb 2001 10:34:00 -0500 (EST)
Received: (from gallatin@localhost)
	by grasshopper.cs.duke.edu (8.11.2/8.9.1) id f1SFWjZ12832;
	Wed, 28 Feb 2001 10:32:45 -0500 (EST)
	(envelope-from gallatin@cs.duke.edu)
From: Andrew Gallatin
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <15005.6685.789581.544314@grasshopper.cs.duke.edu>
Date: Wed, 28 Feb 2001 10:32:45 -0500 (EST)
To: freebsd-arch@freebsd.org
Subject: Please review: moving vm_page_array[]
X-Mailer: VM 6.75 under 21.1 (patch 12) "Channel Islands" XEmacs Lucid
Sender: owner-freebsd-arch@FreeBSD.ORG
Precedence: bulk
X-Loop: FreeBSD.ORG

FreeBSD currently panics at boot on alpha UP1x00s with a lot of memory
(> 512MB) with the message:

	"isa_dmainit: unable to create dma map"

The UP1x00 is the only alpha platform to use bounce buffers, and it is
panicking because the contigmalloc() in alloc_bounce_pages() is failing.

Here's why: the vm_page_array[] (and vm_page_buckets[]) are currently
carved out of the front of the largest chunk of physical memory.  This
isn't a problem on a PC, because there is typically 636k in a smaller
chunk (starting at 4k) that the bounce buffers can be allocated from.
There is typically no usable smaller chunk on alphas; in fact, the
UP1000 family has only one chunk.  Given that the vm_page_array[]
entries for an alpha with 1GB of RAM consume 13MB of memory, we need to
allocate this structure at the end of memory, not at the start.

The following patch moves vm_page_array[] and vm_page_buckets[] to the
end of memory.
This fixes the UP1000 isa_dmainit panics & appears to cause no harm on
the other alphas and PCs I've tested it on.

I'd really appreciate it if somebody could review this, as I'd like to
get it committed & MFC'ed in time for 4.3.

Thanks!

Drew

------------------------------------------------------------------------------
Andrew Gallatin, Sr Systems Programmer	http://www.cs.duke.edu/~gallatin
Duke University				Email: gallatin@cs.duke.edu
Department of Computer Science		Phone: (919) 660-6590

Index: vm_page.c
===================================================================
RCS file: /home/ncvs/src/sys/vm/vm_page.c,v
retrieving revision 1.156
diff -u -r1.156 vm_page.c
--- vm_page.c	2000/12/26 19:41:38	1.156
+++ vm_page.c	2001/02/26 16:02:58
@@ -185,14 +185,14 @@
 	register vm_offset_t mapped;
 	register struct vm_page **bucket;
 	vm_size_t npages, page_range;
-	register vm_offset_t new_start;
+	register vm_offset_t new_end;
 	int i;
 	vm_offset_t pa;
 	int nblocks;
-	vm_offset_t first_managed_page;
+	vm_offset_t last_pa;
 	/* the biggest memory array is the second group of pages */
-	vm_offset_t start;
+	vm_offset_t end;
 	vm_offset_t biggestone, biggestsize;
 	vm_offset_t total;
@@ -219,7 +219,7 @@
 		total += size;
 	}
 
-	start = phys_avail[biggestone];
+	end = phys_avail[biggestone+1];
 
 	/*
 	 * Initialize the queue headers for the free queue, the active queue
@@ -255,13 +255,11 @@
 	/*
 	 * Validate these addresses.
 	 */
-
-	new_start = start + vm_page_bucket_count * sizeof(struct vm_page *);
-	new_start = round_page(new_start);
+	new_end = end - vm_page_bucket_count * sizeof(struct vm_page *);
+	new_end = trunc_page(new_end);
 	mapped = round_page(vaddr);
-	vaddr = pmap_map(mapped, start, new_start,
+	vaddr = pmap_map(mapped, new_end, end,
 	    VM_PROT_READ | VM_PROT_WRITE);
-	start = new_start;
 	vaddr = round_page(vaddr);
 	bzero((caddr_t) mapped, vaddr - mapped);
@@ -280,8 +278,9 @@
 	page_range = phys_avail[(nblocks - 1) * 2 + 1] / PAGE_SIZE - first_page;
 	npages = (total - (page_range * sizeof(struct vm_page)) -
-	    (start - phys_avail[biggestone])) / PAGE_SIZE;
+	    (end - new_end)) / PAGE_SIZE;
+	end = new_end;
 
 	/*
 	 * Initialize the mem entry structures now, and put them in the free
 	 * queue.
@@ -292,12 +291,10 @@
 	/*
 	 * Validate these addresses.
 	 */
-	new_start = round_page(start + page_range * sizeof(struct vm_page));
-	mapped = pmap_map(mapped, start, new_start,
-	    VM_PROT_READ | VM_PROT_WRITE);
-	start = new_start;
-	first_managed_page = start / PAGE_SIZE;
+	new_end = trunc_page(end - page_range * sizeof(struct vm_page));
+	mapped = pmap_map(mapped, new_end, end,
+	    VM_PROT_READ | VM_PROT_WRITE);
 
 	/*
 	 * Clear all of the page structures
@@ -314,11 +311,12 @@
 	cnt.v_page_count = 0;
 	cnt.v_free_count = 0;
 	for (i = 0; phys_avail[i + 1] && npages > 0; i += 2) {
+		pa = phys_avail[i];
 		if (i == biggestone)
-			pa = ptoa(first_managed_page);
+			last_pa = new_end;
 		else
-			pa = phys_avail[i];
-		while (pa < phys_avail[i + 1] && npages-- > 0) {
+			last_pa = phys_avail[i + 1];
+		while (pa < last_pa && npages-- > 0) {
 			vm_add_new_page(pa);
 			pa += PAGE_SIZE;
 		}