Date: Wed, 7 Jul 2010 22:50:24 +0530
From: "Jayachandran C." <c.jayachandran@gmail.com>
To: Randall Stewart <rrs@lakerest.net>, Juli Mallett <jmallett@freebsd.org>, "M. Warner Losh" <imp@bsdimp.com>, freebsd-mips@freebsd.org
Subject: Merging 64 bit changes to -HEAD - part 4
Message-ID: <AANLkTikSVi27V2UICgLvKd8Bk7v6tuGty9YX6-C6-21H@mail.gmail.com>
[-- Attachment #1 --]
On Tue, Jun 15, 2010 at 7:06 PM, Jayachandran C. <c.jayachandran@gmail.com> wrote:
> I have volunteered to merge Juli's 64-bit work into HEAD, and
> hopefully get it to work on XLR too. The tree
> (http://svn.freebsd.org/base/user/jmallett/octeon) has quite a bit of
> changes, so I would like to do this over multiple changesets and
> without breaking the current o32 code.

Here's the next installment; it has the next set of Juli's changes and some fixes to get them working on XLR. The patches in this set are:

mips-segtab-macro.patch:
    Change PDE/PTE access to a macro (from Juli's branch).

mips-cache-fix.patch:
    Minor fix for cache code (JC).

rmi-other.patch:
    64 bit compilation fixes for sys/mips/rmi (JC). Fixes to platform
    and driver code for 64 bit compilation, including changes to the
    ethernet driver.

mips-rmi-kx-enable.patch:
    Changes to enable the KX bit for TARGET_XLR_XLS (JC). I have added
    another case to the TARGET_OCTEON #ifdef in exception.S and
    locore.S, but I think this can be moved to a header file later.

pmap-n64.patch:
    The main n64 patch (from Juli's branch). This still uses the old
    2-level page tables, but adds the other pmap code needed to support
    n64. I have re-arranged some of Juli's code to reduce #ifdefs.

runq-64.patch:
    64-bit rqb_word_t for n64 (JC).

ldscript-64.patch:
    64 bit linker script (JC). Linker script for 64 bit compilation,
    and an XLR configuration file.

With these changes, an n64 kernel can be compiled, and it will boot to the 'mountroot>' prompt on XLR. There is more code left to merge from Juli's branch - 32 bit compat code, sfbuf/uio, UMA alloc, and DDB - which I should be able to get in over the next one or two passes.

Let me know your comments. Only two minor changes should affect the existing o32 code paths (moving the check for >512M, and the change in pmap_map to handle KSEG0 addresses), but let me know if something breaks.

Thanks,
JC.
[-- Attachment #2 --] Index: sys/mips/include/pte.h =================================================================== --- sys/mips/include/pte.h (revision 209645) +++ sys/mips/include/pte.h (working copy) @@ -29,6 +29,12 @@ #ifndef _MACHINE_PTE_H_ #define _MACHINE_PTE_H_ +#ifndef _LOCORE +/* pt_entry_t is 32 bit for now, has to be made 64 bit for n64 */ +typedef uint32_t pt_entry_t; +typedef pt_entry_t *pd_entry_t; +#endif + /* * TLB and PTE management. Most things operate within the context of * EntryLo0,1, and begin with TLBLO_. Things which work with EntryHi @@ -65,25 +71,20 @@ #define TLBLO_PTE_TO_PA(pte) (TLBLO_PFN_TO_PA(TLBLO_PTE_TO_PFN((pte)))) /* + * XXX This comment is not correct for anything more modern than R4K. + * * VPN for EntryHi register. Upper two bits select user, supervisor, * or kernel. Bits 61 to 40 copy bit 63. VPN2 is bits 39 and down to * as low as 13, down to PAGE_SHIFT, to index 2 TLB pages*. From bit 12 * to bit 8 there is a 5-bit 0 field. Low byte is ASID. * + * XXX This comment is not correct for FreeBSD. * Note that in FreeBSD, we map 2 TLB pages is equal to 1 VM page. */ #define TLBHI_ASID_MASK (0xff) #define TLBHI_PAGE_MASK (2 * PAGE_SIZE - 1) #define TLBHI_ENTRY(va, asid) (((va) & ~TLBHI_PAGE_MASK) | ((asid) & TLBHI_ASID_MASK)) -#ifndef _LOCORE -typedef uint32_t pt_entry_t; -typedef pt_entry_t *pd_entry_t; -#endif - -#define PDESIZE sizeof(pd_entry_t) /* for assembly files */ -#define PTESIZE sizeof(pt_entry_t) /* for assembly files */ - /* * TLB flags managed in hardware: * C: Cache attribute. 
Index: sys/mips/include/pmap.h =================================================================== --- sys/mips/include/pmap.h (revision 209635) +++ sys/mips/include/pmap.h (working copy) @@ -50,7 +50,6 @@ #include <machine/pte.h> #define NKPT 120 /* actual number of kernel page tables */ -#define NUSERPGTBLS (VM_MAXUSER_ADDRESS >> SEGSHIFT) #ifndef LOCORE @@ -97,7 +96,6 @@ #ifdef _KERNEL pt_entry_t *pmap_pte(pmap_t, vm_offset_t); -pd_entry_t pmap_segmap(pmap_t pmap, vm_offset_t va); vm_offset_t pmap_kextract(vm_offset_t va); #define vtophys(va) pmap_kextract(((vm_offset_t) (va))) Index: sys/mips/mips/pmap.c =================================================================== --- sys/mips/mips/pmap.c (revision 209635) +++ sys/mips/mips/pmap.c (working copy) @@ -118,15 +118,26 @@ /* * Get PDEs and PTEs for user/kernel address space + * + * XXX The & for pmap_segshift() is wrong, as is the fact that it doesn't + * trim off gratuitous bits of the address space. By having the & + * there, we break defining NUSERPGTBLS below because the address space + * is defined such that it ends immediately after NPDEPG*NPTEPG*PAGE_SIZE, + * so we end up getting NUSERPGTBLS of 0. */ -#define pmap_pde(m, v) (&((m)->pm_segtab[(vm_offset_t)(v) >> SEGSHIFT])) -#define segtab_pde(m, v) (m[(vm_offset_t)(v) >> SEGSHIFT]) +#define pmap_segshift(v) (((v) >> SEGSHIFT) & (NPDEPG - 1)) +#define segtab_pde(m, v) ((m)[pmap_segshift((v))]) -#define MIPS_SEGSIZE (1L << SEGSHIFT) -#define mips_segtrunc(va) ((va) & ~(MIPS_SEGSIZE-1)) +#define NUSERPGTBLS (pmap_segshift(VM_MAXUSER_ADDRESS)) +#define mips_segtrunc(va) ((va) & ~SEGOFSET) #define is_kernel_pmap(x) ((x) == kernel_pmap) -#define vad_to_pte_offset(adr) (((adr) >> PAGE_SHIFT) & (NPTEPG -1)) +/* + * Given a virtual address, get the offset of its PTE within its page + * directory page. 
+ */ +#define PDE_OFFSET(va) (((vm_offset_t)(va) >> PAGE_SHIFT) & (NPTEPG - 1)) + struct pmap kernel_pmap_store; pd_entry_t *kernel_segmap; @@ -155,8 +166,6 @@ static void pmap_pvh_free(struct md_page *pvh, pmap_t pmap, vm_offset_t va); static pv_entry_t pmap_pvh_remove(struct md_page *pvh, pmap_t pmap, vm_offset_t va); -static __inline void pmap_changebit(vm_page_t m, int bit, boolean_t setem); - static vm_page_t pmap_enter_quick_locked(pmap_t pmap, vm_offset_t va, vm_page_t m, vm_prot_t prot, vm_page_t mpte); static int pmap_remove_pte(struct pmap *pmap, pt_entry_t *ptq, vm_offset_t va); @@ -246,13 +255,13 @@ sysm->valid2 = 0; \ intr_restore(intr) -pd_entry_t +static inline pt_entry_t * pmap_segmap(pmap_t pmap, vm_offset_t va) { - if (pmap->pm_segtab) - return (pmap->pm_segtab[((vm_offset_t)(va) >> SEGSHIFT)]); + if (pmap->pm_segtab != NULL) + return (segtab_pde(pmap->pm_segtab, va)); else - return ((pd_entry_t)0); + return (NULL); } /* @@ -267,9 +276,9 @@ pt_entry_t *pdeaddr; if (pmap) { - pdeaddr = (pt_entry_t *)pmap_segmap(pmap, va); + pdeaddr = pmap_segmap(pmap, va); if (pdeaddr) { - return pdeaddr + vad_to_pte_offset(va); + return pdeaddr + PDE_OFFSET(va); } } return ((pt_entry_t *)0); @@ -878,12 +887,12 @@ return (0); if (mpte == NULL) { - ptepindex = (va >> SEGSHIFT); + ptepindex = pmap_segshift(va); if (pmap->pm_ptphint && (pmap->pm_ptphint->pindex == ptepindex)) { mpte = pmap->pm_ptphint; } else { - pteva = *pmap_pde(pmap, va); + pteva = pmap_segmap(pmap, va); mpte = PHYS_TO_VM_PAGE(MIPS_KSEG0_TO_PHYS(pteva)); pmap->pm_ptphint = mpte; } @@ -1082,7 +1091,7 @@ /* * Calculate pagetable page index */ - ptepindex = va >> SEGSHIFT; + ptepindex = pmap_segshift(va); retry: /* * Get the page directory entry @@ -1205,7 +1214,7 @@ nkpt++; pte = (pt_entry_t *)pageva; - segtab_pde(kernel_segmap, kernel_vm_end) = (pd_entry_t)pte; + segtab_pde(kernel_segmap, kernel_vm_end) = pte; /* * The R[4-7]?00 stores only one copy of the Global bit in @@ -1529,8 +1538,8 @@ goto 
out; } for (va = sva; va < eva; va = nva) { - if (!*pmap_pde(pmap, va)) { - nva = mips_segtrunc(va + MIPS_SEGSIZE); + if (pmap_segmap(pmap, va) == NULL) { + nva = mips_segtrunc(va + NBSEG); continue; } pmap_remove_page(pmap, va); @@ -1646,8 +1655,8 @@ /* * If segment table entry is empty, skip this segment. */ - if (!*pmap_pde(pmap, sva)) { - sva = mips_segtrunc(sva + MIPS_SEGSIZE); + if (pmap_segmap(pmap, sva) == NULL) { + sva = mips_segtrunc(sva + NBSEG); continue; } /* @@ -1934,7 +1943,7 @@ /* * Calculate pagetable page index */ - ptepindex = va >> SEGSHIFT; + ptepindex = pmap_segshift(va); if (mpte && (mpte->pindex == ptepindex)) { mpte->wire_count++; } else { @@ -2619,7 +2628,7 @@ rv = FALSE; PMAP_LOCK(pmap); - if (*pmap_pde(pmap, addr)) { + if (pmap_segmap(pmap, addr) != NULL) { pte = pmap_pte(pmap, addr); rv = (*pte == 0); } [-- Attachment #3 --] Index: sys/mips/mips/cache_mipsNN.c =================================================================== --- sys/mips/mips/cache_mipsNN.c (revision 209521) +++ sys/mips/mips/cache_mipsNN.c (working copy) @@ -404,7 +404,7 @@ void mipsNN_pdcache_wbinv_range_index_16(vm_offset_t va, vm_size_t size) { - unsigned int eva, tmpva; + vm_offset_t eva, tmpva; int i, stride, loopcount; /* @@ -445,7 +445,7 @@ void mipsNN_pdcache_wbinv_range_index_32(vm_offset_t va, vm_size_t size) { - unsigned int eva, tmpva; + vm_offset_t eva, tmpva; int i, stride, loopcount; /* [-- Attachment #4 --] Index: sys/mips/rmi/xlr_pci.c =================================================================== --- sys/mips/rmi/xlr_pci.c (revision 209756) +++ sys/mips/rmi/xlr_pci.c (working copy) @@ -223,7 +223,7 @@ pci_cfg_read_32bit(uint32_t addr) { uint32_t temp = 0; - uint32_t *p = (uint32_t *) ((uint32_t) xlr_pci_config_base + (addr & ~3)); + uint32_t *p = (uint32_t *)xlr_pci_config_base + addr / sizeof(uint32_t); uint64_t cerr_cpu_log = 0; disable_and_clear_cache_error(); @@ -285,7 +285,7 @@ data = val; } - p = (uint32_t *)((uint32_t) 
xlr_pci_config_base + (cfgaddr & ~3)); + p = (uint32_t *)xlr_pci_config_base + cfgaddr / sizeof(uint32_t); *p = bswap32(data); return; @@ -410,7 +410,7 @@ static void bridge_pcie_ack(void *arg) { - int irq = (int)arg; + int irq = (intptr_t)arg; uint32_t reg; xlr_reg_t *pcie_mmio_le = xlr_io_mmio(XLR_IO_PCIE_1_OFFSET); Index: sys/mips/rmi/iodi.c =================================================================== --- sys/mips/rmi/iodi.c (revision 209756) +++ sys/mips/rmi/iodi.c (working copy) @@ -115,7 +115,7 @@ int irq; /* This is a hack to pass in the irq */ - irq = (int)ires->__r_i; + irq = (intptr_t)ires->__r_i; if (rmi_spin_mutex_safe) mtx_lock_spin(&xlr_pic_lock); reg = xlr_read_reg(mmio, PIC_IRT_1_BASE + irq - PIC_IRQ_BASE); @@ -178,10 +178,10 @@ res->r_bustag = uart_bus_space_mem; } else if (strcmp(device_get_name(child), "ehci") == 0) { - res->r_bushandle = 0xbef24000; + res->r_bushandle = MIPS_PHYS_TO_KSEG1(0x1ef24000); res->r_bustag = rmi_pci_bus_space; } else if (strcmp(device_get_name(child), "cfi") == 0) { - res->r_bushandle = 0xbc000000; + res->r_bushandle = MIPS_PHYS_TO_KSEG1(0x1c000000); res->r_bustag = 0; } /* res->r_start = *rid; */ Index: sys/mips/rmi/dev/xlr/rge.c =================================================================== --- sys/mips/rmi/dev/xlr/rge.c (revision 209756) +++ sys/mips/rmi/dev/xlr/rge.c (working copy) @@ -583,14 +583,14 @@ struct mbuf *m; tx_desc = (struct p2d_tx_desc *)MIPS_PHYS_TO_KSEG0(msg->msg0); - chk_addr = (struct p2d_tx_desc *)(uint32_t) (tx_desc->frag[XLR_MAX_TX_FRAGS] & 0x00000000ffffffff); + chk_addr = (struct p2d_tx_desc *)(intptr_t)tx_desc->frag[XLR_MAX_TX_FRAGS]; if (tx_desc != chk_addr) { printf("Address %p does not match with stored addr %p - we leaked a descriptor\n", tx_desc, chk_addr); return; } if (rel_buf) { - m = (struct mbuf *)(uint32_t) (tx_desc->frag[XLR_MAX_TX_FRAGS + 1] & 0x00000000ffffffff); + m = (struct mbuf *)(intptr_t)tx_desc->frag[XLR_MAX_TX_FRAGS + 1]; m_freem(m); } free_p2d_desc(tx_desc); 
@@ -626,7 +626,7 @@ (u_long)paddr, mag); return; } - m = (struct mbuf *)um; + m = (struct mbuf *)(intptr_t)um; if (m != NULL) m_freem(m); } @@ -644,9 +644,9 @@ if (m_new == NULL) return NULL; - m_adj(m_new, XLR_CACHELINE_SIZE - ((unsigned int)m_new->m_data & 0x1f)); + m_adj(m_new, XLR_CACHELINE_SIZE - ((uintptr_t)m_new->m_data & 0x1f)); md = (unsigned int *)m_new->m_data; - md[0] = (unsigned int)m_new; /* Back Ptr */ + md[0] = (uintptr_t)m_new; /* Back Ptr */ md[1] = 0xf00bad; m_adj(m_new, XLR_CACHELINE_SIZE); @@ -996,7 +996,7 @@ bucket_map |= (1ULL << bucket); } } - printf("rmi_xlr_config_pde: bucket_map=%llx\n", bucket_map); + printf("rmi_xlr_config_pde: bucket_map=%jx\n", (uintmax_t)bucket_map); /* bucket_map = 0x1; */ xlr_write_reg(priv->mmio, R_PDE_CLASS_0, (bucket_map & 0xffffffff)); @@ -1480,8 +1480,8 @@ msgrng_access_disable(mflags); release_tx_desc(&msg, 0); xlr_rge_msg_snd_failed[vcpu]++; - dbg_msg("Failed packet to cpu %d, rv = %d, stid %d, msg0=%llx\n", - vcpu, rv, stid, msg.msg0); + dbg_msg("Failed packet to cpu %d, rv = %d, stid %d, msg0=%jx\n", + vcpu, rv, stid, (uintmax_t)msg.msg0); return MAC_TX_FAIL; } msgrng_access_disable(mflags); @@ -1489,7 +1489,8 @@ } /* Send the packet to MAC */ - dbg_msg("Sent tx packet to stid %d, msg0=%llx, msg1=%llx \n", stid, msg.msg0, msg.msg1); + dbg_msg("Sent tx packet to stid %d, msg0=%jx, msg1=%jx \n", stid, + (uintmax_t)msg.msg0, (uintmax_t)msg.msg1); #ifdef DUMP_PACKETS { int i = 0; @@ -1638,8 +1639,8 @@ int vcpu = xlr_cpu_id(); int cpu = xlr_core_id(); - dbg_msg("mac: bucket=%d, size=%d, code=%d, stid=%d, msg0=%llx msg1=%llx\n", - bucket, size, code, stid, msg->msg0, msg->msg1); + dbg_msg("mac: bucket=%d, size=%d, code=%d, stid=%d, msg0=%jx msg1=%jx\n", + bucket, size, code, stid, (uintmax_t)msg->msg0, (uintmax_t)msg->msg1); phys_addr = (uint64_t) (msg->msg0 & 0xffffffffe0ULL); length = (msg->msg0 >> 40) & 0x3fff; @@ -1670,8 +1671,8 @@ return; priv = &(sc->priv); - dbg_msg("msg0 = %llx, stid = %d, port = %d, 
addr=%lx, length=%d, ctrl=%d\n", - msg->msg0, stid, port, addr, length, ctrl); + dbg_msg("msg0 = %jx, stid = %d, port = %d, addr=%lx, length=%d, ctrl=%d\n", + (uintmax_t)msg->msg0, stid, port, addr, length, ctrl); if (ctrl == CTRL_REG_FREE || ctrl == CTRL_JUMBO_FREE) { xlr_rge_tx_ok_done[vcpu]++; @@ -1698,8 +1699,8 @@ if ((priv->frin_to_be_sent[cpu]) > MAC_FRIN_TO_BE_SENT_THRESHOLD) { mac_frin_replenish(NULL); } - dbg_msg("gmac_%d: rx packet: phys_addr = %llx, length = %x\n", - priv->instance, phys_addr, length); + dbg_msg("gmac_%d: rx packet: phys_addr = %jx, length = %x\n", + priv->instance, (uintmax_t)phys_addr, length); mac_stats_add(priv->stats.rx_packets, 1); mac_stats_add(priv->stats.rx_bytes, length); xlr_inc_counter(NETIF_RX); @@ -1887,7 +1888,7 @@ * note this is a hack to pass the irq to the iodi interrupt setup * routines */ - sc->rge_irq.__r_i = (struct resource_i *)sc->irq; + sc->rge_irq.__r_i = (struct resource_i *)(intptr_t)sc->irq; ret = bus_setup_intr(dev, &sc->rge_irq, INTR_FAST | INTR_TYPE_NET | INTR_MPSAFE, NULL, rge_intr, sc, &sc->rge_intrhand); @@ -2040,7 +2041,7 @@ mag = xlr_paddr_lw(paddr - XLR_CACHELINE_SIZE + sizeof(uint32_t)); mips_wr_status(sr); - m = (struct mbuf *)tm; + m = (struct mbuf *)(intptr_t)tm; if (mag != 0xf00bad) { /* somebody else packet Error - FIXME in intialization */ printf("cpu %d: *ERROR* Not my packet paddr %p\n", xlr_cpu_id(), (void *)paddr); @@ -2453,7 +2454,7 @@ panic("Unable to allocate memory for spill area!\n"); } phys_addr = vtophys(spill); - dbg_msg("Allocate spill %d bytes at %llx\n", size, phys_addr); + dbg_msg("Allocate spill %d bytes at %jx\n", size, (uintmax_t)phys_addr); xlr_write_reg(mmio, reg_start_0, (phys_addr >> 5) & 0xffffffff); xlr_write_reg(mmio, reg_start_1, (phys_addr >> 37) & 0x07); xlr_write_reg(mmio, reg_size, spill_size); Index: sys/mips/rmi/on_chip.c =================================================================== --- sys/mips/rmi/on_chip.c (revision 209756) +++ sys/mips/rmi/on_chip.c 
(working copy) @@ -210,8 +210,8 @@ if (!tx_stn_handlers[tx_stid].action) { printf("[%s]: No Handler for message from stn_id=%d, bucket=%d, " - "size=%d, msg0=%llx, dropping message\n", - __FUNCTION__, tx_stid, bucket, size, msg.msg0); + "size=%d, msg0=%jx, dropping message\n", + __FUNCTION__, tx_stid, bucket, size, (uintmax_t)msg.msg0); } else { //printf("[%s]: rx_stid = %d\n", __FUNCTION__, rx_stid); msgrng_flags_restore(mflags); Index: sys/mips/rmi/xlr_machdep.c =================================================================== --- sys/mips/rmi/xlr_machdep.c (revision 209756) +++ sys/mips/rmi/xlr_machdep.c (working copy) @@ -265,7 +265,6 @@ init_param2(physmem); /* XXX: Catch 22. Something touches the tlb. */ - mips_cpu_init(); pmap_bootstrap(); #ifdef DDB @@ -294,13 +293,13 @@ #endif /* XXX FIXME the code below is not 64 bit clean */ /* Save boot loader and other stuff from scratch regs */ - xlr_boot1_info = *(struct boot1_info *)read_c0_register32(MIPS_COP_0_OSSCRATCH, 0); + xlr_boot1_info = *(struct boot1_info *)(intptr_t)(int)read_c0_register32(MIPS_COP_0_OSSCRATCH, 0); cpu_mask_info = read_c0_register64(MIPS_COP_0_OSSCRATCH, 1); xlr_online_cpumask = read_c0_register32(MIPS_COP_0_OSSCRATCH, 2); xlr_run_mode = read_c0_register32(MIPS_COP_0_OSSCRATCH, 3); xlr_argc = read_c0_register32(MIPS_COP_0_OSSCRATCH, 4); - xlr_argv = (char **)read_c0_register32(MIPS_COP_0_OSSCRATCH, 5); - xlr_envp = (char **)read_c0_register32(MIPS_COP_0_OSSCRATCH, 6); + xlr_argv = (char **)(intptr_t)(int)read_c0_register32(MIPS_COP_0_OSSCRATCH, 5); + xlr_envp = (char **)(intptr_t)(int)read_c0_register32(MIPS_COP_0_OSSCRATCH, 6); /* TODO: Verify the magic number here */ /* FIXMELATER: xlr_boot1_info.magic_number */ @@ -387,9 +386,9 @@ * 64 bit > 4Gig and we are in 32 bit mode. 
*/ phys_avail[j + 1] = 0xfffff000; - printf("boot map size was %llx\n", boot_map->physmem_map[i].size); + printf("boot map size was %jx\n", (intmax_t)boot_map->physmem_map[i].size); boot_map->physmem_map[i].size = phys_avail[j + 1] - phys_avail[j]; - printf("reduced to %llx\n", boot_map->physmem_map[i].size); + printf("reduced to %jx\n", (intmax_t)boot_map->physmem_map[i].size); } printf("Next segment : addr:%p -> %p \n", (void *)phys_avail[j], [-- Attachment #5 --] Index: sys/mips/mips/vm_machdep.c =================================================================== --- sys/mips/mips/vm_machdep.c (revision 209635) +++ sys/mips/mips/vm_machdep.c (working copy) @@ -148,7 +148,7 @@ pcb2->pcb_context[PCB_REG_S0] = (register_t)(intptr_t)fork_return; pcb2->pcb_context[PCB_REG_S1] = (register_t)(intptr_t)td2; pcb2->pcb_context[PCB_REG_S2] = (register_t)(intptr_t)td2->td_frame; - pcb2->pcb_context[PCB_REG_SR] = SR_INT_MASK & mips_rd_status(); + pcb2->pcb_context[PCB_REG_SR] = (MIPS_SR_KX | SR_INT_MASK) & mips_rd_status(); /* * FREEBSD_DEVELOPERS_FIXME: * Setup any other CPU-Specific registers (Not MIPS Standard) @@ -162,7 +162,6 @@ #ifdef TARGET_OCTEON pcb2->pcb_context[PCB_REG_SR] |= MIPS_SR_COP_2_BIT | MIPS32_SR_PX | MIPS_SR_UX | MIPS_SR_KX | MIPS_SR_SX; #endif - } /* @@ -351,7 +350,7 @@ pcb2->pcb_context[PCB_REG_S1] = (register_t)(intptr_t)td; pcb2->pcb_context[PCB_REG_S2] = (register_t)(intptr_t)td->td_frame; /* Dont set IE bit in SR. 
sched lock release will take care of it */ - pcb2->pcb_context[PCB_REG_SR] = SR_INT_MASK & mips_rd_status(); + pcb2->pcb_context[PCB_REG_SR] = (MIPS_SR_KX | SR_INT_MASK) & mips_rd_status(); #ifdef TARGET_OCTEON pcb2->pcb_context[PCB_REG_SR] |= MIPS_SR_COP_2_BIT | MIPS_SR_COP_0_BIT | Index: sys/mips/mips/exception.S =================================================================== --- sys/mips/mips/exception.S (revision 209635) +++ sys/mips/mips/exception.S (working copy) @@ -235,7 +235,7 @@ #define SAVE_REG(reg, offs, base) \ REG_S reg, CALLFRAME_SIZ + (SZREG * offs) (base) -#ifdef TARGET_OCTEON +#if defined(TARGET_OCTEON) #define CLEAR_STATUS \ mfc0 a0, COP_0_STATUS_REG ;\ li a2, (MIPS_SR_KX | MIPS_SR_SX | MIPS_SR_UX) ; \ @@ -244,6 +244,15 @@ and a0, a0, a2 ; \ mtc0 a0, COP_0_STATUS_REG ; \ ITLBNOPFIX +#elif defined(TARGET_XLR_XLS) +#define CLEAR_STATUS \ + mfc0 a0, COP_0_STATUS_REG ;\ + li a2, (MIPS_SR_KX | MIPS_SR_COP_2_BIT) ; \ + or a0, a0, a2 ; \ + li a2, ~(MIPS_SR_INT_IE | MIPS_SR_EXL | SR_KSU_USER) ; \ + and a0, a0, a2 ; \ + mtc0 a0, COP_0_STATUS_REG ; \ + ITLBNOPFIX #else #define CLEAR_STATUS \ mfc0 a0, COP_0_STATUS_REG ;\ @@ -475,8 +484,10 @@ PTR_LA gp, _C_LABEL(_gp) # switch to kernel GP # Turn off fpu and enter kernel mode and t0, a0, ~(SR_COP_1_BIT | SR_EXL | SR_KSU_MASK | SR_INT_ENAB) -#ifdef TARGET_OCTEON +#if defined(TARGET_OCTEON) or t0, t0, (MIPS_SR_KX | MIPS_SR_SX | MIPS_SR_UX | MIPS32_SR_PX) +#elif defined(TARGET_XLR_XLS) + or t0, t0, (MIPS_SR_KX | MIPS_SR_COP_2_BIT) #endif mtc0 t0, COP_0_STATUS_REG PTR_ADDU a0, k1, U_PCB_REGS @@ -693,6 +704,8 @@ and t0, a0, ~(SR_COP_1_BIT | SR_EXL | SR_INT_ENAB | SR_KSU_MASK) #ifdef TARGET_OCTEON or t0, t0, (MIPS_SR_KX | MIPS_SR_SX | MIPS_SR_UX | MIPS32_SR_PX) +#elif defined(TARGET_XLR_XLS) + or t0, t0, (MIPS_SR_KX | MIPS_SR_COP_2_BIT) #endif mtc0 t0, COP_0_STATUS_REG ITLBNOPFIX Index: sys/mips/mips/locore.S =================================================================== --- sys/mips/mips/locore.S 
(revision 209635) +++ sys/mips/mips/locore.S (working copy) @@ -99,7 +99,7 @@ /* Reset these bits */ li t0, ~(MIPS_SR_DE | MIPS_SR_SOFT_RESET | MIPS_SR_ERL | MIPS_SR_EXL | MIPS_SR_INT_IE) -#elif defined (CPU_XLR) +#elif defined (TARGET_XLR_XLS) /* Set these bits */ li t1, (MIPS_SR_COP_2_BIT | MIPS_SR_COP_0_BIT | MIPS_SR_KX) [-- Attachment #6 --] Index: sys/mips/include/cpuregs.h =================================================================== --- sys/mips/include/cpuregs.h (revision 209756) +++ sys/mips/include/cpuregs.h (working copy) @@ -78,6 +78,9 @@ * Caching of mapped addresses is controlled by bits in the TLB entry. */ +#define MIPS_KSEG0_LARGEST_PHYS (0x20000000) +#define MIPS_PHYS_MASK (0x1fffffff) + #if !defined(_LOCORE) #define MIPS_KUSEG_START 0x00000000 #define MIPS_KSEG0_START ((intptr_t)(int32_t)0x80000000) @@ -91,8 +94,19 @@ #define MIPS_KSEG2_START MIPS_KSSEG_START #define MIPS_KSEG2_END MIPS_KSSEG_END -#endif +#define MIPS_PHYS_TO_KSEG0(x) ((uintptr_t)(x) | MIPS_KSEG0_START) +#define MIPS_PHYS_TO_KSEG1(x) ((uintptr_t)(x) | MIPS_KSEG1_START) +#define MIPS_KSEG0_TO_PHYS(x) ((uintptr_t)(x) & MIPS_PHYS_MASK) +#define MIPS_KSEG1_TO_PHYS(x) ((uintptr_t)(x) & MIPS_PHYS_MASK) + +#define MIPS_IS_KSEG0_ADDR(x) \ + (((vm_offset_t)(x) >= MIPS_KSEG0_START) && \ + ((vm_offset_t)(x) <= MIPS_KSEG0_END)) +#define MIPS_IS_KSEG1_ADDR(x) \ + (((vm_offset_t)(x) >= MIPS_KSEG1_START) && \ + ((vm_offset_t)(x) <= MIPS_KSEG1_END)) + #define MIPS_XKPHYS_START 0x8000000000000000 #define MIPS_XKPHYS_END 0xbfffffffffffffff @@ -101,14 +115,21 @@ #define MIPS_PHYS_TO_XKPHYS(cca,x) \ ((0x2ULL << 62) | ((unsigned long long)(cca) << 59) | (x)) -#define MIPS_XKPHYS_TO_PHYS(x) ((x) & 0x07ffffffffffffffULL) +#define MIPS_PHYS_TO_XKPHYS_CACHED(x) \ + ((0x2ULL << 62) | ((unsigned long long)(MIPS_XKPHYS_CCA_CNC) << 59) | (x)) +#define MIPS_PHYS_TO_XKPHYS_UNCACHED(x) \ + ((0x2ULL << 62) | ((unsigned long long)(MIPS_XKPHYS_CCA_UC) << 59) | (x)) +#define MIPS_XKPHYS_TO_PHYS(x) ((x) & 
0x07ffffffffffffffULL) + #define MIPS_XUSEG_START 0x0000000000000000 #define MIPS_XUSEG_END 0x0000010000000000 #define MIPS_XKSEG_START 0xc000000000000000 #define MIPS_XKSEG_END 0xc00000ff80000000 +#endif + /* CPU dependent mtc0 hazard hook */ #ifdef TARGET_OCTEON #define COP0_SYNC nop; nop; nop; nop; nop; Index: sys/mips/include/cpu.h =================================================================== --- sys/mips/include/cpu.h (revision 209756) +++ sys/mips/include/cpu.h (working copy) @@ -49,23 +49,6 @@ #include <machine/endian.h> -#define MIPS_KSEG0_LARGEST_PHYS (0x20000000) -#define MIPS_PHYS_MASK (0x1fffffff) - -#define MIPS_PHYS_TO_KSEG0(x) ((uintptr_t)(x) | MIPS_KSEG0_START) -#define MIPS_PHYS_TO_KSEG1(x) ((uintptr_t)(x) | MIPS_KSEG1_START) -#define MIPS_KSEG0_TO_PHYS(x) ((uintptr_t)(x) & MIPS_PHYS_MASK) -#define MIPS_KSEG1_TO_PHYS(x) ((uintptr_t)(x) & MIPS_PHYS_MASK) - -#define MIPS_IS_KSEG0_ADDR(x) \ - (((vm_offset_t)(x) >= MIPS_KSEG0_START) && \ - ((vm_offset_t)(x) <= MIPS_KSEG0_END)) -#define MIPS_IS_KSEG1_ADDR(x) \ - (((vm_offset_t)(x) >= MIPS_KSEG1_START) && \ - ((vm_offset_t)(x) <= MIPS_KSEG1_END)) -#define MIPS_IS_VALID_PTR(x) (MIPS_IS_KSEG0_ADDR(x) || \ - MIPS_IS_KSEG1_ADDR(x)) - /* * Status register. */ Index: sys/mips/include/pte.h =================================================================== --- sys/mips/include/pte.h (revision 209756) +++ sys/mips/include/pte.h (working copy) @@ -73,8 +73,24 @@ * Note that in FreeBSD, we map 2 TLB pages is equal to 1 VM page. 
*/ #define TLBHI_ASID_MASK (0xff) +#if defined(__mips_n64) +#define TLBHI_R_SHIFT 62 +#define TLBHI_R_USER (0x00UL << TLBHI_R_SHIFT) +#define TLBHI_R_SUPERVISOR (0x01UL << TLBHI_R_SHIFT) +#define TLBHI_R_KERNEL (0x03UL << TLBHI_R_SHIFT) +#define TLBHI_R_MASK (0x03UL << TLBHI_R_SHIFT) +#define TLBHI_VA_R(va) ((va) & TLBHI_R_MASK) +#define TLBHI_FILL_SHIFT 40 +#define TLBHI_VPN2_SHIFT (TLB_PAGE_SHIFT + 1) +#define TLBHI_VPN2_MASK (((~((1UL << TLBHI_VPN2_SHIFT) - 1)) << (63 - TLBHI_FILL_SHIFT)) >> (63 - TLBHI_FILL_SHIFT)) +#define TLBHI_VA_TO_VPN2(va) ((va) & TLBHI_VPN2_MASK) +#define TLBHI_ENTRY(va, asid) ((TLBHI_VA_R((va))) /* Region. */ | \ + (TLBHI_VA_TO_VPN2((va))) /* VPN2. */ | \ + ((asid) & TLBHI_ASID_MASK)) +#else #define TLBHI_PAGE_MASK (2 * PAGE_SIZE - 1) #define TLBHI_ENTRY(va, asid) (((va) & ~TLBHI_PAGE_MASK) | ((asid) & TLBHI_ASID_MASK)) +#endif #ifndef _LOCORE typedef uint32_t pt_entry_t; Index: sys/mips/mips/pmap.c =================================================================== --- sys/mips/mips/pmap.c (revision 209756) +++ sys/mips/mips/pmap.c (working copy) @@ -128,7 +128,11 @@ #define pmap_segshift(v) (((v) >> SEGSHIFT) & (NPDEPG - 1)) #define segtab_pde(m, v) ((m)[pmap_segshift((v))]) +#if defined(__mips_n64) +#define NUSERPGTBLS (NPDEPG) +#else #define NUSERPGTBLS (pmap_segshift(VM_MAXUSER_ADDRESS)) +#endif #define mips_segtrunc(va) ((va) & ~SEGOFSET) #define is_kernel_pmap(x) ((x) == kernel_pmap) @@ -310,7 +310,7 @@ } /* - * Bootstrap the system enough to run with virtual memory. This + * Bootstrap the system enough to run with virtual memory. This * assumes that the phys_avail array has been initialized. 
*/ void @@ -330,14 +330,11 @@ phys_avail[i] = round_page(phys_avail[i]); phys_avail[i + 1] = trunc_page(phys_avail[i + 1]); - if (phys_avail[i + 1] >= MIPS_KSEG0_LARGEST_PHYS) - memory_larger_than_512meg++; if (i < 2) continue; if (phys_avail[i - 2] > phys_avail[i]) { vm_paddr_t ptemp[2]; - ptemp[0] = phys_avail[i + 0]; ptemp[1] = phys_avail[i + 1]; @@ -350,6 +347,11 @@ } } +#if !defined(__mips_n64) + if (phys_avail[i - 1] >= MIPS_KSEG0_LARGEST_PHYS) + memory_larger_than_512meg = 1; +#endif + /* * Copy the phys_avail[] array before we start stealing memory from it. */ @@ -384,7 +386,6 @@ */ kstack0 = pmap_steal_memory(KSTACK_PAGES << PAGE_SHIFT); - virtual_avail = VM_MIN_KERNEL_ADDRESS; virtual_end = VM_MAX_KERNEL_ADDRESS; @@ -758,11 +759,21 @@ * update '*virt' with the first usable address after the mapped * region. */ +#if defined(__mips_n64) vm_offset_t pmap_map(vm_offset_t *virt, vm_offset_t start, vm_offset_t end, int prot) { + return (MIPS_PHYS_TO_XKPHYS_CACHED(start)); +} +#else +vm_offset_t +pmap_map(vm_offset_t *virt, vm_offset_t start, vm_offset_t end, int prot) +{ vm_offset_t va, sva; + if (end <= MIPS_KSEG0_LARGEST_PHYS) + return (MIPS_PHYS_TO_KSEG0(start)); + va = sva = *virt; while (start < end) { pmap_kenter(va, start); @@ -772,6 +783,7 @@ *virt = va; return (sva); } +#endif /* * Add a list of wired pages to the kva @@ -2027,9 +2039,20 @@ * Make a temporary mapping for a physical address. This is only intended * to be used for panic dumps. 
*/ +#if defined(__mips_n64) void * pmap_kenter_temporary(vm_paddr_t pa, int i) { + return ((void *)MIPS_PHYS_TO_XKPHYS_CACHED(pa)); +} +void +pmap_kenter_temporary_free(vm_paddr_t pa) +{ +} +#else +void * +pmap_kenter_temporary(vm_paddr_t pa, int i) +{ vm_offset_t va; register_t intr; if (i != 0) @@ -2087,6 +2110,7 @@ sysm->valid1 = 0; } } +#endif /* * Moved the code to Machine Independent @@ -2193,11 +2217,23 @@ * pmap_zero_page zeros the specified hardware page by mapping * the page into KVM and using bzero to clear its contents. */ +#if defined (__mips_n64) void pmap_zero_page(vm_page_t m) { vm_offset_t va; vm_paddr_t phys = VM_PAGE_TO_PHYS(m); + + va = MIPS_PHYS_TO_XKPHYS_CACHED(phys); + bzero((caddr_t)va, PAGE_SIZE); + mips_dcache_wbinv_range(va, PAGE_SIZE); +} +#else +void +pmap_zero_page(vm_page_t m) +{ + vm_offset_t va; + vm_paddr_t phys = VM_PAGE_TO_PHYS(m); register_t intr; if (phys < MIPS_KSEG0_LARGEST_PHYS) { @@ -2214,18 +2250,30 @@ PMAP_LMEM_UNMAP(); } } - +#endif /* * pmap_zero_page_area zeros the specified hardware page by mapping * the page into KVM and using bzero to clear its contents. * * off and size may not cover an area beyond a single hardware page. 
*/ +#if defined (__mips_n64) void pmap_zero_page_area(vm_page_t m, int off, int size) { vm_offset_t va; vm_paddr_t phys = VM_PAGE_TO_PHYS(m); + + va = MIPS_PHYS_TO_XKPHYS_CACHED(phys); + bzero((char *)(caddr_t)va + off, size); + mips_dcache_wbinv_range(va + off, size); +} +#else +void +pmap_zero_page_area(vm_page_t m, int off, int size) +{ + vm_offset_t va; + vm_paddr_t phys = VM_PAGE_TO_PHYS(m); register_t intr; if (phys < MIPS_KSEG0_LARGEST_PHYS) { @@ -2241,12 +2289,25 @@ PMAP_LMEM_UNMAP(); } } +#endif +#if defined (__mips_n64) void pmap_zero_page_idle(vm_page_t m) { vm_offset_t va; vm_paddr_t phys = VM_PAGE_TO_PHYS(m); + + va = MIPS_PHYS_TO_XKPHYS_CACHED(phys); + bzero((caddr_t)va, PAGE_SIZE); + mips_dcache_wbinv_range(va, PAGE_SIZE); +} +#else +void +pmap_zero_page_idle(vm_page_t m) +{ + vm_offset_t va; + vm_paddr_t phys = VM_PAGE_TO_PHYS(m); register_t intr; if (phys < MIPS_KSEG0_LARGEST_PHYS) { @@ -2262,6 +2323,7 @@ PMAP_LMEM_UNMAP(); } } +#endif /* * pmap_copy_page copies the specified (machine independent) @@ -2269,12 +2331,28 @@ * bcopy to copy the page, one machine dependent page at a * time. 
*/ +#if defined (__mips_n64) void pmap_copy_page(vm_page_t src, vm_page_t dst) { vm_offset_t va_src, va_dst; vm_paddr_t phy_src = VM_PAGE_TO_PHYS(src); vm_paddr_t phy_dst = VM_PAGE_TO_PHYS(dst); + + pmap_flush_pvcache(src); + mips_dcache_wbinv_range_index(MIPS_PHYS_TO_XKPHYS_CACHED(phy_dst), PAGE_SIZE); + va_src = MIPS_PHYS_TO_XKPHYS_CACHED(phy_src); + va_dst = MIPS_PHYS_TO_XKPHYS_CACHED(phy_dst); + bcopy((caddr_t)va_src, (caddr_t)va_dst, PAGE_SIZE); + mips_dcache_wbinv_range(va_dst, PAGE_SIZE); +} +#else +void +pmap_copy_page(vm_page_t src, vm_page_t dst) +{ + vm_offset_t va_src, va_dst; + vm_paddr_t phy_src = VM_PAGE_TO_PHYS(src); + vm_paddr_t phy_dst = VM_PAGE_TO_PHYS(dst); register_t intr; if ((phy_src < MIPS_KSEG0_LARGEST_PHYS) && (phy_dst < MIPS_KSEG0_LARGEST_PHYS)) { @@ -2299,6 +2377,7 @@ PMAP_LMEM_UNMAP(); } } +#endif /* * Returns true if the pmap's pv is one of the first @@ -2705,9 +2784,21 @@ * routine is intended to be used for mapping device memory, * NOT real memory. */ +#if defined(__mips_n64) void * pmap_mapdev(vm_offset_t pa, vm_size_t size) { + return ((void *)MIPS_PHYS_TO_XKPHYS_UNCACHED(pa)); +} + +void +pmap_unmapdev(vm_offset_t va, vm_size_t size) +{ +} +#else +void * +pmap_mapdev(vm_offset_t pa, vm_size_t size) +{ vm_offset_t va, tmpva, offset; /* @@ -2751,6 +2842,7 @@ pmap_kremove(tmpva); kmem_free(kernel_map, base, size); } +#endif /* * perform the pmap work for mincore @@ -3067,6 +3159,7 @@ PHYS_TO_VM_PAGE(pa)->md.pv_flags |= (PV_TABLE_REF | PV_TABLE_MOD); } + /* * Routine: pmap_kextract * Function: @@ -3076,41 +3169,68 @@ /* PMAP_INLINE */ vm_offset_t pmap_kextract(vm_offset_t va) { - vm_offset_t pa = 0; + int mapped; - if (va < MIPS_KSEG0_START) { - /* user virtual address */ + /* + * First, the direct-mapped regions. 
+ */ +#if defined(__mips_n64) + if (va >= MIPS_XKPHYS_START && va < MIPS_XKPHYS_END) + return (MIPS_XKPHYS_TO_PHYS(va)); +#endif + + if (va >= MIPS_KSEG0_START && va < MIPS_KSEG0_END) + return (MIPS_KSEG0_TO_PHYS(va)); + + if (va >= MIPS_KSEG1_START && va < MIPS_KSEG1_END) + return (MIPS_KSEG1_TO_PHYS(va)); + + /* + * User virtual addresses. + */ + if (va < VM_MAXUSER_ADDRESS) { pt_entry_t *ptep; if (curproc && curproc->p_vmspace) { ptep = pmap_pte(&curproc->p_vmspace->vm_pmap, va); - if (ptep) - pa = TLBLO_PTE_TO_PA(*ptep) | - (va & PAGE_MASK); + if (ptep) { + return (TLBLO_PTE_TO_PA(*ptep) | + (va & PAGE_MASK)); + } + return (0); } - } else if (va >= MIPS_KSEG0_START && - va < MIPS_KSEG1_START) - pa = MIPS_KSEG0_TO_PHYS(va); - else if (va >= MIPS_KSEG1_START && - va < MIPS_KSEG2_START) - pa = MIPS_KSEG1_TO_PHYS(va); - else if (va >= MIPS_KSEG2_START && va < VM_MAX_KERNEL_ADDRESS) { + } + + /* + * Should be kernel virtual here, otherwise fail + */ + mapped = (va >= MIPS_KSEG2_START || va < MIPS_KSEG2_END); +#if defined(__mips_n64) + mapped = mapped || (va >= MIPS_XKSEG_START || va < MIPS_XKSEG_END); +#endif + /* + * Kernel virtual. + */ + + if (mapped) { pt_entry_t *ptep; /* Is the kernel pmap initialized? */ if (kernel_pmap->pm_active) { - /* Its inside the virtual address range */ + /* It's inside the virtual address range */ ptep = pmap_pte(kernel_pmap, va); if (ptep) { return (TLBLO_PTE_TO_PA(*ptep) | (va & PAGE_MASK)); } - return (0); } + return (0); } - return pa; + + panic("%s for unknown address space %p.", __func__, (void *)va); } + void pmap_flush_pvcache(vm_page_t m) { [-- Attachment #7 --] Index: sys/mips/include/runq.h =================================================================== --- sys/mips/include/runq.h (revision 209756) +++ sys/mips/include/runq.h (working copy) @@ -30,11 +30,16 @@ #ifndef _MACHINE_RUNQ_H_ #define _MACHINE_RUNQ_H_ +#if defined(__mips_n64) +#define RQB_LEN (1) /* Number of priority status words. 
*/ +#define RQB_L2BPW (6) /* Log2(sizeof(rqb_word_t) * NBBY)). */ +#else #define RQB_LEN (2) /* Number of priority status words. */ #define RQB_L2BPW (5) /* Log2(sizeof(rqb_word_t) * NBBY)). */ +#endif #define RQB_BPW (1<<RQB_L2BPW) /* Bits in an rqb_word_t. */ -#define RQB_BIT(pri) (1 << ((pri) & (RQB_BPW - 1))) +#define RQB_BIT(pri) (1ul << ((pri) & (RQB_BPW - 1))) #define RQB_WORD(pri) ((pri) >> RQB_L2BPW) #define RQB_FFS(word) (ffs(word) - 1) @@ -42,6 +47,10 @@ /* * Type of run queue status word. */ -typedef u_int32_t rqb_word_t; +#if defined(__mips_n64) +typedef u_int64_t rqb_word_t; +#else +typedef u_int32_t rqb_word_t; +#endif #endif [-- Attachment #8 --] Index: sys/conf/ldscript.mips.64 =================================================================== --- sys/conf/ldscript.mips.64 (revision 0) +++ sys/conf/ldscript.mips.64 (revision 0) @@ -0,0 +1,301 @@ +/*- + * Copyright (c) 2001, 2004, 2008, Juniper Networks, Inc. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * 1. Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * 2. Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in the + * documentation and/or other materials provided with the distribution. + * 3. Neither the name of the Juniper Networks, Inc. nor the names of its + * contributors may be used to endorse or promote products derived from + * this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY JUNIPER NETWORKS AND CONTRIBUTORS ``AS IS'' AND + * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE + * ARE DISCLAIMED. 
IN NO EVENT SHALL JUNIPER NETWORKS OR CONTRIBUTORS BE LIABLE + * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF + * SUCH DAMAGE. + * + * JNPR: ldscript.mips,v 1.3 2006/10/11 06:12:04 + * $FreeBSD: head/sys/conf/ldscript.mips.n32 209502 2010-06-24 10:14:31Z jchandra $ + */ + +OUTPUT_FORMAT("elf64-tradbigmips", "elf64-tradbigmips", "elf64-tradlittlemips") +OUTPUT_ARCH(mips) +ENTRY(_start) +SEARCH_DIR(/usr/lib); +/* Do we need any of these for elf? + __DYNAMIC = 0; +PROVIDE (_DYNAMIC = 0); +*/ +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + . = KERNLOADADDR + SIZEOF_HEADERS; + .text : + { + *(.trap) + *(.text) + *(.text.*) + *(.stub) + /* .gnu.warning sections are handled specially by elf32.em. 
*/ + *(.gnu.warning) + *(.gnu.linkonce.t.*) + } =0x1000000 + .fini : + { + KEEP (*(.fini)) + } =0x1000000 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata) *(.rodata.*) *(.gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .interp : { *(.interp) } + .hash : { *(.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rel.init : { *(.rel.init) } + .rela.init : { *(.rela.init) } + .rel.text : + { + *(.rel.text) + *(.rel.text.*) + *(.rel.gnu.linkonce.t.*) + } + .rela.text : + { + *(.rela.text) + *(.rela.text.*) + *(.rela.gnu.linkonce.t.*) + } + .rel.fini : { *(.rel.fini) } + .rela.fini : { *(.rela.fini) } + .rel.rodata : + { + *(.rel.rodata) + *(.rel.rodata.*) + *(.rel.gnu.linkonce.r.*) + } + .rela.rodata : + { + *(.rela.rodata) + *(.rela.rodata.*) + *(.rela.gnu.linkonce.r.*) + } + .rel.data : + { + *(.rel.data) + *(.rel.data.*) + *(.rel.gnu.linkonce.d.*) + } + .rela.data : + { + *(.rela.data) + *(.rela.data.*) + *(.rela.gnu.linkonce.d.*) + } + .rel.ctors : { *(.rel.ctors) } + .rela.ctors : { *(.rela.ctors) } + .rel.dtors : { *(.rel.dtors) } + .rela.dtors : { *(.rela.dtors) } + .rel.got : { *(.rel.got) } + .rela.got : { *(.rela.got) } + .rel.sdata : + { + *(.rel.sdata) + *(.rel.sdata.*) + *(.rel.gnu.linkonce.s.*) + } + .rela.sdata : + { + *(.rela.sdata) + *(.rela.sdata.*) + *(.rela.gnu.linkonce.s.*) + } + .rel.sbss : + { + *(.rel.sbss) + *(.rel.sbss.*) + *(.rel.gnu.linkonce.sb.*) + } + .rela.sbss : + { + *(.rela.sbss) + *(.rela.sbss.*) + *(.rel.gnu.linkonce.sb.*) + } + .rel.sdata2 : + { + *(.rel.sdata2) + *(.rel.sdata2.*) + *(.rel.gnu.linkonce.s2.*) + } + .rela.sdata2 : + { + *(.rela.sdata2) + *(.rela.sdata2.*) + *(.rela.gnu.linkonce.s2.*) + } + .rel.sbss2 : + { + *(.rel.sbss2) + *(.rel.sbss2.*) + *(.rel.gnu.linkonce.sb2.*) + } + .rela.sbss2 : + { + *(.rela.sbss2) + *(.rela.sbss2.*) + 
*(.rela.gnu.linkonce.sb2.*) + } + .rel.bss : + { + *(.rel.bss) + *(.rel.bss.*) + *(.rel.gnu.linkonce.b.*) + } + .rela.bss : + { + *(.rela.bss) + *(.rela.bss.*) + *(.rela.gnu.linkonce.b.*) + } + .rel.plt : { *(.rel.plt) } + .rela.plt : { *(.rela.plt) } + .init : + { + KEEP (*(.init)) + } =0x1000000 + .reginfo : { *(.reginfo) } + .sdata2 : { *(.sdata2) *(.sdata2.*) *(.gnu.linkonce.s2.*) } + .sbss2 : { *(.sbss2) *(.sbss2.*) *(.gnu.linkonce.sb2.*) } + . = ALIGN(0x2000) + (. & (0x2000 - 1)); + .data : + { + *(.data) + *(.data.*) + *(.gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + .eh_frame : { KEEP (*(.eh_frame)) } + .gcc_except_table : { *(.gcc_except_table) } + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + /* We don't want to include the .ctor section from + from the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .plt : { *(.plt) } + _gp = ALIGN(16) + 0x7ff0; + .got : { *(.got.plt) *(.got) } + .dynamic : { *(.dynamic) } + /* We want the small data sections together, so single-instruction offsets + can access them all, and initialized data all before uninitialized, so + we can shorten the on-disk segment size. 
*/ + .sdata : + { + *(.sdata) + *(.sdata.*) + *(.gnu.linkonce.s.*) + } + _edata = .; + PROVIDE (edata = .); + __bss_start = .; + .sbss : + { + PROVIDE (__sbss_start = .); + PROVIDE (___sbss_start = .); + *(.dynsbss) + *(.sbss) + *(.sbss.*) + *(.gnu.linkonce.sb.*) + *(.scommon) + PROVIDE (__sbss_end = .); + PROVIDE (___sbss_end = .); + } + .bss : + { + *(.dynbss) + *(.bss) + *(.bss.*) + *(.gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(64 / 8); + } + . = ALIGN(64 / 8); + _end = .; + PROVIDE (end = .); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info) *(.gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* These must appear regardless of . 
*/ +} Index: sys/mips/conf/XLR64 =================================================================== --- sys/mips/conf/XLR64 (revision 0) +++ sys/mips/conf/XLR64 (revision 0) @@ -0,0 +1,133 @@ +# XLR64 -- Kernel configuration file for N64 kernel on XLR/XLS +# +# For more information on this file, please read the handbook section on +# Kernel Configuration Files: +# +# http://www.FreeBSD.org/doc/en_US.ISO8859-1/books/handbook/kernelconfig-config.html +# +# The handbook is also available locally in /usr/share/doc/handbook +# if you've installed the doc distribution, otherwise always see the +# FreeBSD World Wide Web server (http://www.FreeBSD.org/) for the +# latest information. +# +# An exhaustive list of options and more detailed explanations of the +# device lines is also present in the ../../conf/NOTES and NOTES files. +# If you are in doubt as to the purpose or necessity of a line, check first +# in NOTES. +# +# $FreeBSD: head/sys/mips/conf/XLRN32 209502 2010-06-24 10:14:31Z jchandra $ + +machine mips +cpu CPU_MIPS4KC +ident XLR + +makeoptions MODULES_OVERRIDE="" +makeoptions TARGET_BIG_ENDIAN + +include "../rmi/std.xlr" + +makeoptions DEBUG=-g # Build kernel with gdb(1) debug symbols +makeoptions ARCH_FLAGS="-march=mips64 -mabi=64" +makeoptions LDSCRIPT_NAME=ldscript.mips.64 + +#profile 2 + +options SCHED_ULE # ULE scheduler +#options VERBOSE_SYSINIT +#options SCHED_4BSD # 4BSD scheduler +#options SMP +#options PREEMPTION # Enable kernel thread preemption +#options FULL_PREEMPTION # Enable kernel thread preemption +options INET # InterNETworking +options INET6 # IPv6 communications protocols +options FFS # Berkeley Fast Filesystem +#options SOFTUPDATES # Enable FFS soft updates support +options UFS_ACL # Support for access control lists +options UFS_DIRHASH # Improve performance on big directories +options NFSCLIENT +options NFS_ROOT +# +options BOOTP +options BOOTP_NFSROOT +options BOOTP_NFSV3 +options BOOTP_WIRED_TO=rge0 +options BOOTP_COMPAT +options 
ROOTDEVNAME=\"nfs:10.1.1.8:/usr/extra/nfsroot\" +# +#options MD_ROOT # MD is a potential root device +#options MD_ROOT_SIZE=27000 +#options MD_ROOT_SIZE=5120 +#options ROOTDEVNAME=\"ufs:md0\" +options _KPOSIX_PRIORITY_SCHEDULING # POSIX P1003_1B real-time extensions +options HZ=1000 +options NO_SWAPPING + +#Debugging options +options KTRACE # ktrace(1) support +options DDB +options KDB +options GDB +options ALT_BREAK_TO_DEBUGGER +#options DEADLKRES #Enable the deadlock resolver +options INVARIANTS #Enable calls of extra sanity checking +options INVARIANT_SUPPORT #Extra sanity checks of internal structures, required by INVARIANTS +#options WITNESS #Enable checks to detect deadlocks and cycles +#options WITNESS_SKIPSPIN #Don't run witness on spinlocks for speed +#options KTR # ktr(4) and ktrdump(8) support +#options KTR_COMPILE=(KTR_LOCK|KTR_PROC|KTR_INTR|KTR_CALLOUT|KTR_UMA|KTR_SYSC|KTR_CRITICAL) +#options KTR_ENTRIES=131072 +#options MUTEX_DEBUG +#options MUTEX_PROFILING + +device pci +#device ata +#device atadisk +#options XLR_PERFMON # Enable XLR processor activity monitoring +options BREAK_TO_DEBUGGER +#device genclock +device uart +# Pseudo +device loop +device random +device md +device mem +device pty +device bpf + +# Network +device miibus +device rge +device ether +device re +device msk + +device da +device scbus +#device ohci # OHCI PCI->USB interface +device ehci # EHCI PCI->USB interface (USB 2.0) +device usb # USB Bus (required) +options USB_DEBUG # enable debug msgs +#device udbp # USB Double Bulk Pipe devices +#device ugen # Generic +#device uhid # "Human Interface Devices" +device umass # Disks/Mass storage - Requires scbus and da + +#device cfi + +#i2c +# Not yet +#device ic +#device iic +#device iicbb +#device iicbus +#device xlr_rtc +#device xlr_temperature +#device xlr_eeprom + +#crypto +# Not yet +#device cryptodev +#device crypto +#device rmisec +options ISA_MIPS64 +makeoptions KERNLOADADDR=0xffffffff80100000