From owner-svn-src-all@FreeBSD.ORG Wed Jan  6 23:02:36 2010
From: Pyun YongHyeon <yongari@FreeBSD.org>
Date: Wed, 6 Jan 2010 23:02:35 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org,
	svn-src-stable@freebsd.org, svn-src-stable-8@freebsd.org
Subject: svn commit: r201687 - stable/8/sys/dev/bge

Author: yongari
Date: Wed Jan  6 23:02:35 2010
New Revision: 201687
URL: http://svn.freebsd.org/changeset/base/201687

Log:
  MFC r198967,199009-199011,199014,199020,199035-199036,199054

  r198967:
  Correct the MSI mode register bits.

  r199009:
  bge(4) switched to the UMA-backed page allocator long ago, and the
  local memory allocator for jumbo frames was removed at the same time.
  Remove the macros that are no longer used.

  r199010:
  Do the bus_dmamap_sync call only if the frame size is greater than the
  standard buffer size.  If the controller is not capable of handling
  jumbo frames, the interface MTU cannot be larger than the standard
  MTU, which in turn means every received frame fits in a standard
  buffer.  This fixes the case where the bus_dmamap_sync call for the
  jumbo ring was made even though the interface was configured with the
  standard MTU.  Also, if the total frame size fits in a standard
  buffer, do not use jumbo buffers at all.  (The arithmetic behind this
  threshold is sketched at the end of this message.)

  r199011:
  Reimplement Rx buffer allocation to handle DMA map load failures.
  Introduce two spare DMA maps, one for the standard buffer and one for
  the jumbo buffer.  If loading a DMA map fails, reuse the previously
  loaded DMA map.  This fixes the use of an unloaded DMA map after a map
  load failure.  Also, do not blindly unload the DMA map; defer the sync
  and unload operations until we know the DMA map for the new buffer was
  loaded successfully.  This saves unnecessary DMA load/unload
  operations.  Previously bge(4) tried to reuse an mbuf with an unloaded
  DMA map, which is a really bad thing from a bus_dma(9) perspective.
  While I'm here, update if_iqdrops when we cannot allocate Rx buffers.
  (A simplified sketch of this spare-map pattern follows just below.)
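  For readers unfamiliar with the idiom, the following is a minimal
  user-space analogue of the spare-map scheme, not the driver code:
  resource_map, load_map(), ring_slot, and newbuf() are hypothetical
  stand-ins for bus_dmamap_t, bus_dmamap_load_mbuf_sg(9), and
  bge_newbuf_std(); only the swap-on-success logic mirrors the commit.

	/* Stand-in for a DMA map; "loaded" means it maps some buffer. */
	struct resource_map {
		void	*loaded_buf;
	};

	/* Stand-in for bus_dmamap_load_mbuf_sg(); returns 0 on success. */
	static int
	load_map(struct resource_map *map, void *buf)
	{
		map->loaded_buf = buf;
		return (0);
	}

	struct ring_slot {
		struct resource_map	*map;	/* map backing this slot */
		void			*buf;	/* buffer backing this slot */
	};

	static struct resource_map *sparemap;	/* unloaded between calls */

	static int
	newbuf(struct ring_slot *slot, void *buf)
	{
		struct resource_map *tmp;

		/* Load the new buffer into the spare map first. */
		if (load_map(sparemap, buf) != 0) {
			/*
			 * Load failed: the slot still holds its old,
			 * valid mapping, so the ring keeps working and
			 * the old buffer is simply reused.
			 */
			return (-1);
		}
		/*
		 * Only now is it safe to tear down the old mapping
		 * (sync + unload in the driver) and swap: the freshly
		 * loaded spare becomes the slot's map, and the old map
		 * becomes the spare for the next allocation.
		 */
		tmp = slot->map;
		slot->map = sparemap;
		sparemap = tmp;
		slot->buf = buf;
		return (0);
	}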
  r199014:
  Fix something I missed in r199011: the Rx ring index also has to be
  updated.  If we filled the Rx ring completely instead of half-way we
  could simplify this logic, but that requires more experimentation.

  r199020:
  Tell the upper layer we support long frames.  ether_ifattach()
  initializes if_hdrlen to ETHER_HDR_LEN, so we have to override it
  after calling ether_ifattach().  While I'm here, remove the setting of
  if_mtu; it is initialized in ether_ifattach().

  r199035:
  Do not count input errors twice; we always read input errors from the
  MAC in bge_tick.  Previously the driver reported an inflated number of
  input errors; I noticed actual input errors stayed below 8% even for
  64-byte UDP frames generated by netperf.  Since we always access the
  BGE_RXLP_LOCSTAT_IFIN_DROPS register in bge_tick, remove the useless
  code protected by #ifdef notyet.

  r199036:
  Count inbound packets that the controller chose to discard as input
  errors.  Also count running out of receive BDs as input errors.

  r199054:
  Partially revert r199035.  Revision 1.158 says only the lower ten bits
  of the BGE_RXLP_LOCSTAT_IFIN_DROPS register are valid.  On BCM5761 the
  controller seems to maintain a 16-bit value for the register, but even
  16 bits are too small to count all the packets that can be dropped in
  one second: at gigabit line rate, minimum-size frames arrive at
  roughly 1.48 Mpps, so a 16-bit counter can wrap many times per second.
  To get a correct count we would have to read the register in
  bge_rxeof(), which would be too expensive.

Modified:
  stable/8/sys/dev/bge/if_bge.c
  stable/8/sys/dev/bge/if_bgereg.h

Directory Properties:
  stable/8/sys/   (props changed)
  stable/8/sys/amd64/include/xen/   (props changed)
  stable/8/sys/cddl/contrib/opensolaris/   (props changed)
  stable/8/sys/contrib/dev/acpica/   (props changed)
  stable/8/sys/contrib/pf/   (props changed)
  stable/8/sys/dev/xen/xenpci/   (props changed)

Modified: stable/8/sys/dev/bge/if_bge.c
==============================================================================
--- stable/8/sys/dev/bge/if_bge.c	Wed Jan  6 22:49:10 2010	(r201686)
+++ stable/8/sys/dev/bge/if_bge.c	Wed Jan  6 23:02:35 2010	(r201687)
@@ -393,8 +393,8 @@ static void bge_setpromisc(struct bge_so
 static void bge_setmulti(struct bge_softc *);
 static void bge_setvlan(struct bge_softc *);
 
-static int bge_newbuf_std(struct bge_softc *, int, struct mbuf *);
-static int bge_newbuf_jumbo(struct bge_softc *, int, struct mbuf *);
+static int bge_newbuf_std(struct bge_softc *, int);
+static int bge_newbuf_jumbo(struct bge_softc *, int);
 static int bge_init_rx_ring_std(struct bge_softc *);
 static void bge_free_rx_ring_std(struct bge_softc *);
 static int bge_init_rx_ring_jumbo(struct bge_softc *);
@@ -912,37 +912,38 @@ bge_miibus_statchg(device_t dev)
  * Intialize a standard receive ring descriptor.
  */
 static int
-bge_newbuf_std(struct bge_softc *sc, int i, struct mbuf *m)
+bge_newbuf_std(struct bge_softc *sc, int i)
 {
-	struct mbuf *m_new = NULL;
+	struct mbuf *m;
 	struct bge_rx_bd *r;
 	bus_dma_segment_t segs[1];
+	bus_dmamap_t map;
 	int error, nsegs;
 
-	if (m == NULL) {
-		m_new = m_getcl(M_DONTWAIT, MT_DATA, M_PKTHDR);
-		if (m_new == NULL)
-			return (ENOBUFS);
-		m_new->m_len = m_new->m_pkthdr.len = MCLBYTES;
-	} else {
-		m_new = m;
-		m_new->m_len = m_new->m_pkthdr.len = MCLBYTES;
-		m_new->m_data = m_new->m_ext.ext_buf;
-	}
-
+	m = m_getcl(M_DONTWAIT, MT_DATA, M_PKTHDR);
+	if (m == NULL)
+		return (ENOBUFS);
+	m->m_len = m->m_pkthdr.len = MCLBYTES;
 	if ((sc->bge_flags & BGE_FLAG_RX_ALIGNBUG) == 0)
-		m_adj(m_new, ETHER_ALIGN);
+		m_adj(m, ETHER_ALIGN);
+
 	error = bus_dmamap_load_mbuf_sg(sc->bge_cdata.bge_rx_mtag,
-	    sc->bge_cdata.bge_rx_std_dmamap[i], m_new, segs, &nsegs, 0);
+	    sc->bge_cdata.bge_rx_std_sparemap, m, segs, &nsegs, 0);
 	if (error != 0) {
-		if (m == NULL) {
-			sc->bge_cdata.bge_rx_std_chain[i] = NULL;
-			m_freem(m_new);
-		}
+		m_freem(m);
 		return (error);
 	}
-	sc->bge_cdata.bge_rx_std_chain[i] = m_new;
-	r = &sc->bge_ldata.bge_rx_std_ring[i];
+	if (sc->bge_cdata.bge_rx_std_chain[i] != NULL) {
+		bus_dmamap_sync(sc->bge_cdata.bge_rx_mtag,
+		    sc->bge_cdata.bge_rx_std_dmamap[i], BUS_DMASYNC_POSTREAD);
+		bus_dmamap_unload(sc->bge_cdata.bge_rx_mtag,
+		    sc->bge_cdata.bge_rx_std_dmamap[i]);
+	}
+	map = sc->bge_cdata.bge_rx_std_dmamap[i];
+	sc->bge_cdata.bge_rx_std_dmamap[i] = sc->bge_cdata.bge_rx_std_sparemap;
+	sc->bge_cdata.bge_rx_std_sparemap = map;
+	sc->bge_cdata.bge_rx_std_chain[i] = m;
+	r = &sc->bge_ldata.bge_rx_std_ring[sc->bge_std];
 	r->bge_addr.bge_addr_lo = BGE_ADDR_LO(segs[0].ds_addr);
 	r->bge_addr.bge_addr_hi = BGE_ADDR_HI(segs[0].ds_addr);
 	r->bge_flags = BGE_RXBDFLAG_END;
@@ -950,8 +951,7 @@ bge_newbuf_std(struct bge_softc *sc, int
 	r->bge_idx = i;
 
 	bus_dmamap_sync(sc->bge_cdata.bge_rx_mtag,
-	    sc->bge_cdata.bge_rx_std_dmamap[i],
-	    BUS_DMASYNC_PREREAD);
+	    sc->bge_cdata.bge_rx_std_dmamap[i], BUS_DMASYNC_PREREAD);
 
 	return (0);
 }
@@ -961,48 +961,49 @@ bge_newbuf_std(struct bge_softc *sc, int
  * a jumbo buffer from the pool managed internally by the driver.
  */
 static int
-bge_newbuf_jumbo(struct bge_softc *sc, int i, struct mbuf *m)
+bge_newbuf_jumbo(struct bge_softc *sc, int i)
 {
 	bus_dma_segment_t segs[BGE_NSEG_JUMBO];
+	bus_dmamap_t map;
 	struct bge_extrx_bd *r;
-	struct mbuf *m_new = NULL;
-	int nsegs;
-	int error;
+	struct mbuf *m;
+	int error, nsegs;
 
-	if (m == NULL) {
-		MGETHDR(m_new, M_DONTWAIT, MT_DATA);
-		if (m_new == NULL)
-			return (ENOBUFS);
+	MGETHDR(m, M_DONTWAIT, MT_DATA);
+	if (m == NULL)
+		return (ENOBUFS);
 
-		m_cljget(m_new, M_DONTWAIT, MJUM9BYTES);
-		if (!(m_new->m_flags & M_EXT)) {
-			m_freem(m_new);
-			return (ENOBUFS);
-		}
-		m_new->m_len = m_new->m_pkthdr.len = MJUM9BYTES;
-	} else {
-		m_new = m;
-		m_new->m_len = m_new->m_pkthdr.len = MJUM9BYTES;
-		m_new->m_data = m_new->m_ext.ext_buf;
+	m_cljget(m, M_DONTWAIT, MJUM9BYTES);
+	if (!(m->m_flags & M_EXT)) {
+		m_freem(m);
+		return (ENOBUFS);
 	}
-
+	m->m_len = m->m_pkthdr.len = MJUM9BYTES;
 	if ((sc->bge_flags & BGE_FLAG_RX_ALIGNBUG) == 0)
-		m_adj(m_new, ETHER_ALIGN);
+		m_adj(m, ETHER_ALIGN);
+
 	error = bus_dmamap_load_mbuf_sg(sc->bge_cdata.bge_mtag_jumbo,
-	    sc->bge_cdata.bge_rx_jumbo_dmamap[i],
-	    m_new, segs, &nsegs, BUS_DMA_NOWAIT);
-	if (error) {
-		if (m == NULL)
-			m_freem(m_new);
+	    sc->bge_cdata.bge_rx_jumbo_sparemap, m, segs, &nsegs, 0);
+	if (error != 0) {
+		m_freem(m);
 		return (error);
 	}
-	sc->bge_cdata.bge_rx_jumbo_chain[i] = m_new;
+	if (sc->bge_cdata.bge_rx_jumbo_chain[i] != NULL) {
+		bus_dmamap_sync(sc->bge_cdata.bge_mtag_jumbo,
+		    sc->bge_cdata.bge_rx_jumbo_dmamap[i], BUS_DMASYNC_POSTREAD);
+		bus_dmamap_unload(sc->bge_cdata.bge_mtag_jumbo,
+		    sc->bge_cdata.bge_rx_jumbo_dmamap[i]);
+	}
+	map = sc->bge_cdata.bge_rx_jumbo_dmamap[i];
+	sc->bge_cdata.bge_rx_jumbo_dmamap[i] =
+	    sc->bge_cdata.bge_rx_jumbo_sparemap;
+	sc->bge_cdata.bge_rx_jumbo_sparemap = map;
+	sc->bge_cdata.bge_rx_jumbo_chain[i] = m;
 
 	/*
 	 * Fill in the extended RX buffer descriptor.
 	 */
-	r = &sc->bge_ldata.bge_rx_jumbo_ring[i];
+	r = &sc->bge_ldata.bge_rx_jumbo_ring[sc->bge_jumbo];
 	r->bge_flags = BGE_RXBDFLAG_JUMBO_RING | BGE_RXBDFLAG_END;
 	r->bge_idx = i;
 	r->bge_len3 = r->bge_len2 = r->bge_len1 = 0;
@@ -1029,8 +1030,7 @@ bge_newbuf_jumbo(struct bge_softc *sc, i
 	}
 
 	bus_dmamap_sync(sc->bge_cdata.bge_mtag_jumbo,
-	    sc->bge_cdata.bge_rx_jumbo_dmamap[i],
-	    BUS_DMASYNC_PREREAD);
+	    sc->bge_cdata.bge_rx_jumbo_dmamap[i], BUS_DMASYNC_PREREAD);
 
 	return (0);
 }
@@ -1046,9 +1046,11 @@ bge_init_rx_ring_std(struct bge_softc *s
 {
 	int error, i;
 
+	sc->bge_std = 0;
 	for (i = 0; i < BGE_SSLOTS; i++) {
-		if ((error = bge_newbuf_std(sc, i, NULL)) != 0)
+		if ((error = bge_newbuf_std(sc, i)) != 0)
 			return (error);
+		BGE_INC(sc->bge_std, BGE_STD_RX_RING_CNT);
 	};
 
 	bus_dmamap_sync(sc->bge_cdata.bge_rx_std_ring_tag,
@@ -1087,9 +1089,11 @@ bge_init_rx_ring_jumbo(struct bge_softc
 	struct bge_rcb *rcb;
 	int error, i;
 
+	sc->bge_jumbo = 0;
 	for (i = 0; i < BGE_JUMBO_RX_RING_CNT; i++) {
-		if ((error = bge_newbuf_jumbo(sc, i, NULL)) != 0)
+		if ((error = bge_newbuf_jumbo(sc, i)) != 0)
 			return (error);
+		BGE_INC(sc->bge_jumbo, BGE_JUMBO_RX_RING_CNT);
 	};
 
 	bus_dmamap_sync(sc->bge_cdata.bge_rx_jumbo_ring_tag,
@@ -1979,6 +1983,9 @@ bge_dma_free(struct bge_softc *sc)
 			bus_dmamap_destroy(sc->bge_cdata.bge_rx_mtag,
 			    sc->bge_cdata.bge_rx_std_dmamap[i]);
 	}
+	if (sc->bge_cdata.bge_rx_std_sparemap)
+		bus_dmamap_destroy(sc->bge_cdata.bge_rx_mtag,
+		    sc->bge_cdata.bge_rx_std_sparemap);
 
 	/* Destroy DMA maps for jumbo RX buffers. */
 	for (i = 0; i < BGE_JUMBO_RX_RING_CNT; i++) {
@@ -1986,6 +1993,9 @@ bge_dma_free(struct bge_softc *sc)
 			bus_dmamap_destroy(sc->bge_cdata.bge_mtag_jumbo,
 			    sc->bge_cdata.bge_rx_jumbo_dmamap[i]);
 	}
+	if (sc->bge_cdata.bge_rx_jumbo_sparemap)
+		bus_dmamap_destroy(sc->bge_cdata.bge_mtag_jumbo,
+		    sc->bge_cdata.bge_rx_jumbo_sparemap);
 
 	/* Destroy DMA maps for TX buffers. */
 	for (i = 0; i < BGE_TX_RING_CNT; i++) {
@@ -2133,6 +2143,13 @@ bge_dma_alloc(device_t dev)
 	}
 
 	/* Create DMA maps for RX buffers. */
+	error = bus_dmamap_create(sc->bge_cdata.bge_rx_mtag, 0,
+	    &sc->bge_cdata.bge_rx_std_sparemap);
+	if (error) {
+		device_printf(sc->bge_dev,
+		    "can't create spare DMA map for RX\n");
+		return (ENOMEM);
+	}
 	for (i = 0; i < BGE_STD_RX_RING_CNT; i++) {
 		error = bus_dmamap_create(sc->bge_cdata.bge_rx_mtag, 0,
 		    &sc->bge_cdata.bge_rx_std_dmamap[i]);
@@ -2234,6 +2251,13 @@ bge_dma_alloc(device_t dev)
 	sc->bge_ldata.bge_rx_jumbo_ring_paddr = ctx.bge_busaddr;
 
 	/* Create DMA maps for jumbo RX buffers. */
+	error = bus_dmamap_create(sc->bge_cdata.bge_mtag_jumbo,
+	    0, &sc->bge_cdata.bge_rx_jumbo_sparemap);
+	if (error) {
+		device_printf(sc->bge_dev,
+		    "can't create spare DMA map for jumbo RX\n");
+		return (ENOMEM);
+	}
 	for (i = 0; i < BGE_JUMBO_RX_RING_CNT; i++) {
 		error = bus_dmamap_create(sc->bge_cdata.bge_mtag_jumbo,
 		    0, &sc->bge_cdata.bge_rx_jumbo_dmamap[i]);
@@ -2699,7 +2723,6 @@ bge_attach(device_t dev)
 	ifp->if_ioctl = bge_ioctl;
 	ifp->if_start = bge_start;
 	ifp->if_init = bge_init;
-	ifp->if_mtu = ETHERMTU;
 	ifp->if_snd.ifq_drv_maxlen = BGE_TX_RING_CNT - 1;
 	IFQ_SET_MAXLEN(&ifp->if_snd, ifp->if_snd.ifq_drv_maxlen);
 	IFQ_SET_READY(&ifp->if_snd);
@@ -2814,6 +2837,9 @@ again:
 	ether_ifattach(ifp, eaddr);
 	callout_init_mtx(&sc->bge_stat_ch, &sc->bge_mtx, 0);
 
+	/* Tell upper layer we support long frames. */
+	ifp->if_data.ifi_hdrlen = sizeof(struct ether_vlan_header);
+
 	/*
 	 * Hookup IRQ last.
 	 */
@@ -3134,7 +3160,8 @@ bge_rxeof(struct bge_softc *sc)
 	    sc->bge_cdata.bge_rx_return_ring_map, BUS_DMASYNC_POSTREAD);
 	bus_dmamap_sync(sc->bge_cdata.bge_rx_std_ring_tag,
 	    sc->bge_cdata.bge_rx_std_ring_map, BUS_DMASYNC_POSTWRITE);
-	if (BGE_IS_JUMBO_CAPABLE(sc))
+	if (ifp->if_mtu + ETHER_HDR_LEN + ETHER_CRC_LEN + ETHER_VLAN_ENCAP_LEN >
+	    (MCLBYTES - ETHER_ALIGN))
 		bus_dmamap_sync(sc->bge_cdata.bge_rx_jumbo_ring_tag,
 		    sc->bge_cdata.bge_rx_jumbo_ring_map, BUS_DMASYNC_POSTWRITE);
@@ -3165,45 +3192,31 @@ bge_rxeof(struct bge_softc *sc)
 		}
 
 		if (cur_rx->bge_flags & BGE_RXBDFLAG_JUMBO_RING) {
-			BGE_INC(sc->bge_jumbo, BGE_JUMBO_RX_RING_CNT);
-			bus_dmamap_sync(sc->bge_cdata.bge_mtag_jumbo,
-			    sc->bge_cdata.bge_rx_jumbo_dmamap[rxidx],
-			    BUS_DMASYNC_POSTREAD);
-			bus_dmamap_unload(sc->bge_cdata.bge_mtag_jumbo,
-			    sc->bge_cdata.bge_rx_jumbo_dmamap[rxidx]);
-			m = sc->bge_cdata.bge_rx_jumbo_chain[rxidx];
-			sc->bge_cdata.bge_rx_jumbo_chain[rxidx] = NULL;
 			jumbocnt++;
+			m = sc->bge_cdata.bge_rx_jumbo_chain[rxidx];
 			if (cur_rx->bge_flags & BGE_RXBDFLAG_ERROR) {
-				ifp->if_ierrors++;
-				bge_newbuf_jumbo(sc, sc->bge_jumbo, m);
+				BGE_INC(sc->bge_jumbo, BGE_JUMBO_RX_RING_CNT);
 				continue;
 			}
-			if (bge_newbuf_jumbo(sc, sc->bge_jumbo, NULL) != 0) {
-				ifp->if_ierrors++;
-				bge_newbuf_jumbo(sc, sc->bge_jumbo, m);
+			if (bge_newbuf_jumbo(sc, rxidx) != 0) {
+				BGE_INC(sc->bge_jumbo, BGE_JUMBO_RX_RING_CNT);
+				ifp->if_iqdrops++;
 				continue;
 			}
+			BGE_INC(sc->bge_jumbo, BGE_JUMBO_RX_RING_CNT);
 		} else {
-			BGE_INC(sc->bge_std, BGE_STD_RX_RING_CNT);
-			bus_dmamap_sync(sc->bge_cdata.bge_rx_mtag,
-			    sc->bge_cdata.bge_rx_std_dmamap[rxidx],
-			    BUS_DMASYNC_POSTREAD);
-			bus_dmamap_unload(sc->bge_cdata.bge_rx_mtag,
-			    sc->bge_cdata.bge_rx_std_dmamap[rxidx]);
-			m = sc->bge_cdata.bge_rx_std_chain[rxidx];
-			sc->bge_cdata.bge_rx_std_chain[rxidx] = NULL;
 			stdcnt++;
 			if (cur_rx->bge_flags & BGE_RXBDFLAG_ERROR) {
-				ifp->if_ierrors++;
-				bge_newbuf_std(sc, sc->bge_std, m);
+				BGE_INC(sc->bge_std, BGE_STD_RX_RING_CNT);
 				continue;
 			}
-			if (bge_newbuf_std(sc, sc->bge_std, NULL) != 0) {
-				ifp->if_ierrors++;
-				bge_newbuf_std(sc, sc->bge_std, m);
+			m = sc->bge_cdata.bge_rx_std_chain[rxidx];
+			if (bge_newbuf_std(sc, rxidx) != 0) {
+				BGE_INC(sc->bge_std, BGE_STD_RX_RING_CNT);
+				ifp->if_iqdrops++;
 				continue;
 			}
+			BGE_INC(sc->bge_std, BGE_STD_RX_RING_CNT);
 		}
 
 		ifp->if_ipackets++;
@@ -3266,7 +3279,7 @@ bge_rxeof(struct bge_softc *sc)
 		bus_dmamap_sync(sc->bge_cdata.bge_rx_std_ring_tag,
 		    sc->bge_cdata.bge_rx_std_ring_map, BUS_DMASYNC_PREWRITE);
 
-	if (BGE_IS_JUMBO_CAPABLE(sc) && jumbocnt > 0)
+	if (jumbocnt > 0)
 		bus_dmamap_sync(sc->bge_cdata.bge_rx_jumbo_ring_tag,
 		    sc->bge_cdata.bge_rx_jumbo_ring_map, BUS_DMASYNC_PREWRITE);
@@ -3542,7 +3555,9 @@ bge_stats_update_regs(struct bge_softc *
 	ifp->if_collisions += CSR_READ_4(sc, BGE_MAC_STATS +
 	    offsetof(struct bge_mac_stats_regs, etherStatsCollisions));
 
+	ifp->if_ierrors += CSR_READ_4(sc, BGE_RXLP_LOCSTAT_OUT_OF_BDS);
 	ifp->if_ierrors += CSR_READ_4(sc, BGE_RXLP_LOCSTAT_IFIN_DROPS);
+	ifp->if_ierrors += CSR_READ_4(sc, BGE_RXLP_LOCSTAT_IFIN_ERRORS);
 }
 
 static void
@@ -3920,7 +3935,8 @@ bge_init_locked(struct bge_softc *sc)
 	}
 
 	/* Init jumbo RX ring. */
-	if (ifp->if_mtu > (ETHERMTU + ETHER_HDR_LEN + ETHER_CRC_LEN)) {
+	if (ifp->if_mtu + ETHER_HDR_LEN + ETHER_CRC_LEN + ETHER_VLAN_ENCAP_LEN >
+	    (MCLBYTES - ETHER_ALIGN)) {
 		if (bge_init_rx_ring_jumbo(sc) != 0) {
 			device_printf(sc->bge_dev,
 			    "no memory for std Rx buffers.\n");
 			bge_stop(sc);

Modified: stable/8/sys/dev/bge/if_bgereg.h
==============================================================================
--- stable/8/sys/dev/bge/if_bgereg.h	Wed Jan  6 22:49:10 2010	(r201686)
+++ stable/8/sys/dev/bge/if_bgereg.h	Wed Jan  6 23:02:35 2010	(r201687)
@@ -1705,11 +1705,8 @@
 
 /* MSI mode register */
 #define	BGE_MSIMODE_RESET		0x00000001
 #define	BGE_MSIMODE_ENABLE		0x00000002
-#define	BGE_MSIMODE_PCI_TGT_ABRT_ATTN	0x00000004
-#define	BGE_MSIMODE_PCI_MSTR_ABRT_ATTN	0x00000008
-#define	BGE_MSIMODE_PCI_PERR_ATTN	0x00000010
-#define	BGE_MSIMODE_MSI_FIFOUFLOW_ATTN	0x00000020
-#define	BGE_MSIMODE_MSI_FIFOOFLOW_ATTN	0x00000040
+#define	BGE_MSIMODE_ONE_SHOT_DISABLE	0x00000020
+#define	BGE_MSIMODE_MULTIVEC_ENABLE	0x00000080
 
 /* MSI status register */
 #define	BGE_MSISTAT_PCI_TGT_ABRT_ATTN	0x00000004
@@ -2484,13 +2481,6 @@ struct bge_gib {
 #define	BGE_MSLOTS	256
 #define	BGE_JSLOTS	384
 
-#define	BGE_JRAWLEN (BGE_JUMBO_FRAMELEN + ETHER_ALIGN)
-#define	BGE_JLEN (BGE_JRAWLEN + (sizeof(uint64_t) - \
-	(BGE_JRAWLEN % sizeof(uint64_t))))
-#define	BGE_JPAGESZ PAGE_SIZE
-#define	BGE_RESID (BGE_JPAGESZ - (BGE_JLEN * BGE_JSLOTS) % BGE_JPAGESZ)
-#define	BGE_JMEM ((BGE_JLEN * BGE_JSLOTS) + BGE_RESID)
-
 #define	BGE_NSEG_JUMBO	4
 #define	BGE_NSEG_NEW	32
 
@@ -2547,7 +2537,9 @@ struct bge_chain_data {
 	bus_dma_tag_t		bge_tx_mtag;	/* Tx mbuf mapping tag */
 	bus_dma_tag_t		bge_mtag_jumbo;	/* Jumbo mbuf mapping tag */
 	bus_dmamap_t		bge_tx_dmamap[BGE_TX_RING_CNT];
+	bus_dmamap_t		bge_rx_std_sparemap;
 	bus_dmamap_t		bge_rx_std_dmamap[BGE_STD_RX_RING_CNT];
+	bus_dmamap_t		bge_rx_jumbo_sparemap;
 	bus_dmamap_t		bge_rx_jumbo_dmamap[BGE_JUMBO_RX_RING_CNT];
 	bus_dmamap_t		bge_rx_std_ring_map;
 	bus_dmamap_t		bge_rx_jumbo_ring_map;
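A footnote on the MCLBYTES test used in bge_rxeof() and
bge_init_locked() above: the jumbo ring is needed exactly when the
largest frame implied by the MTU no longer fits in a standard mbuf
cluster.  The following self-contained sketch reproduces the
arithmetic; the constants match the FreeBSD headers of this era, and
frame_fits_std() is an illustrative helper, not a driver function.

	#include <stdbool.h>
	#include <stdio.h>

	#define	MCLBYTES		2048	/* standard mbuf cluster */
	#define	ETHER_ALIGN		2	/* pad so the IP header is aligned */
	#define	ETHER_HDR_LEN		14
	#define	ETHER_CRC_LEN		4
	#define	ETHER_VLAN_ENCAP_LEN	4

	/*
	 * True when every frame for this MTU fits in a standard Rx
	 * buffer, so the jumbo ring is never touched and its
	 * bus_dmamap_sync() calls can be skipped.
	 */
	static bool
	frame_fits_std(unsigned mtu)
	{
		return (mtu + ETHER_HDR_LEN + ETHER_CRC_LEN +
		    ETHER_VLAN_ENCAP_LEN <= MCLBYTES - ETHER_ALIGN);
	}

	int
	main(void)
	{
		/* 1500 + 14 + 4 + 4 = 1522 <= 2046: standard ring only. */
		printf("MTU 1500: %d\n", frame_fits_std(1500));
		/* 2024 + 22 = 2046 is the largest MTU that still fits. */
		printf("MTU 2024: %d\n", frame_fits_std(2024));
		/* 9000 + 22 = 9022 > 2046: jumbo ring required. */
		printf("MTU 9000: %d\n", frame_fits_std(9000));
		return (0);
	}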