From owner-freebsd-net  Thu Sep 14 10:19:45 2000
Delivered-To: freebsd-net@freebsd.org
Received: from rios.sitaranetworks.com (rios.sitaranetworks.com [199.103.141.78])
	by hub.freebsd.org (Postfix) with ESMTP id DAEAC37B423
	for ; Thu, 14 Sep 2000 10:19:37 -0700 (PDT)
Received: by rios.sitaranetworks.com with Internet Mail Service (5.5.2650.21)
	id ; Thu, 14 Sep 2000 13:22:10 -0400
Message-ID: <31269226357BD211979E00A0C9866DABE411F5@rios.sitaranetworks.com>
From: Charles Richmond
To: 'mark tinguely' , bmilekic@dsuper.net, wollman@khavrinen.lcs.mit.edu
Cc: freebsd-net@FreeBSD.ORG
Subject: RE: Clusters larger than PAGE_SIZE and contigmalloc()
Date: Thu, 14 Sep 2000 13:22:09 -0400
MIME-Version: 1.0
X-Mailer: Internet Mail Service (5.5.2650.21)
Content-Type: text/plain; charset="iso-8859-1"
Sender: owner-freebsd-net@FreeBSD.ORG
Precedence: bulk
X-Loop: FreeBSD.org

So maybe I am being blind on this, but... It seems to me that if the
mbuf clusters cross page boundaries in an unaligned fashion, as the
earlier suggestion of 8 x 1.5k would do, or if the cluster size is
privately configurable and thus not guaranteed to align, then the DMA
code is forced into scatter/gather mode, even if the actual pages are
contiguous.

Can someone clear up my blindness?
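To make the concern concrete, here is a back-of-the-envelope sketch
(plain userland C, not driver code; it assumes 4 KB pages and the
1536-byte clusters from the 8 x 1.5k suggestion) of the worst-case
DMA segment count per cluster when physical contiguity of the pages
is not guaranteed:

	/*
	 * With 1536-byte clusters packed back to back into 4 KB pages,
	 * some clusters straddle a page boundary.  Absent a physical
	 * contiguity guarantee, each straddled boundary is an extra
	 * DMA segment.
	 */
	#include <stdio.h>

	#define PGSIZE		4096	/* assumed page size */
	#define CLUSTER_SIZE	1536	/* the suggested 1.5k cluster */

	/* Worst-case DMA segments for a cluster at byte offset `off'. */
	static int
	dma_segments(unsigned long off, unsigned long len)
	{
		unsigned long first = off / PGSIZE;
		unsigned long last = (off + len - 1) / PGSIZE;

		return (int)(last - first + 1);
	}

	int
	main(void)
	{
		int i;

		for (i = 0; i < 8; i++) {
			unsigned long off = (unsigned long)i * CLUSTER_SIZE;

			printf("cluster %d at offset %5lu: %d segment(s)\n",
			    i, off, dma_segments(off, CLUSTER_SIZE));
		}
		return (0);
	}

Of the eight clusters in one 12 KB (three-page) period, clusters 2
and 5 straddle a page boundary, so a driver that cannot assume the
pages are contiguous has to split each of those into two segments.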
Charlie

> -----Original Message-----
> From: mark tinguely [mailto:tinguely@hookie.cs.ndsu.NoDak.edu]
> Sent: Thursday, September 14, 2000 11:19 AM
> To: bmilekic@dsuper.net; wollman@khavrinen.lcs.mit.edu
> Cc: freebsd-net@FreeBSD.ORG
> Subject: Re: Clusters larger than PAGE_SIZE and contigmalloc()
>
> my IDT NICStAR ATM card driver allocates contiguous memory for mbuf
> external buffers. the card can use buffers larger than a physical
> page, but I don't use it that way. there were a couple of problems
> that allocating the buffers contiguously by hand helped with; one is
> that on a couple of occasions, such as raw cell processing, I had
> the physical address of the external buffer from the card but needed
> to use the kernel virtual address.
>
> The ATM card needs to have external buffers programmed into a queue
> to be used when the data arrives. Instead of allocating and
> deallocating mbufs as packets came in and were processed, as an
> experiment I mucked up the mbuf code even more by making the mbuf
> structure and the external buffer permanently connected:
>
> #define M_PERM	0x8000	/* permanently allocated */
>
> /*
>  * MFREE(struct mbuf *m, struct mbuf *n)
>  * Free a single mbuf and associated external storage.
>  * Place the successor, if any, in n.
>  */
> #define MFREE(m, n) MBUFLOCK(						\
> 	struct mbuf *_mm = (m);						\
> 									\
> 	KASSERT(_mm->m_type != MT_FREE, ("freeing free mbuf"));	\
> 	mbstat.m_mtypes[_mm->m_type]--;					\
> 	if (_mm->m_flags & M_EXT)					\
> 		MEXTFREE1(m);						\
> 	(n) = _mm->m_next;						\
> 	if (_mm->m_flags & M_PERM) {					\
> 		_mm->m_next = (struct mbuf *) 0;			\
> 	} else {							\
> 		_mm->m_type = MT_FREE;					\
> 		mbstat.m_mtypes[MT_FREE]++;				\
> 		_mm->m_next = mmbfree;					\
> 		mmbfree = _mm;						\
> 		MMBWAKEUP();						\
> 	}								\
> )
>
> when a packet fills a buffer, I can have it return the kernel
> virtual address of the mbuf holding the external buffer, and link
> the new mbuf onto the chain that has come in so far. I haven't
> actually counted how much this really saves vs. the extra space
> required for the permanently allocated mbufs.
>
> the downside of having your own pool of mbufs is that you are at the
> mercy of other code that may overwrite your ext_free() routine, and
> then you never get your buffers back. I suspect this is happening to
> one person using my driver.
>
> --mark tinguely.

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-net" in the body of the message
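The buffer-recycling idea Mark describes, as a minimal standalone
sketch in plain C. The names here (rxbuf, rxbuf_get(), rxbuf_free())
are hypothetical illustrations, not the NICStAR driver's actual
routines; the sketch only shows the shape of a driver-private pool
whose free routine recycles buffers instead of releasing them, and
why a foreign overwrite of that routine loses buffers for good:
nothing ever pushes them back onto the list.

	#include <stddef.h>
	#include <stdlib.h>

	struct rxbuf {
		struct rxbuf	*next;		/* free-list linkage */
		char		data[2048];	/* contiguous DMA buffer */
	};

	static struct rxbuf *rxbuf_freelist;	/* driver-private pool */

	/* Installed as the external buffer's free routine. */
	static void
	rxbuf_free(void *arg)
	{
		struct rxbuf *b = arg;

		b->next = rxbuf_freelist;	/* recycle, never free() */
		rxbuf_freelist = b;
	}

	/* Take a buffer to program into the card's receive queue. */
	static struct rxbuf *
	rxbuf_get(void)
	{
		struct rxbuf *b = rxbuf_freelist;

		if (b != NULL)
			rxbuf_freelist = b->next;
		else
			b = malloc(sizeof(*b));	/* grow pool on demand */
		return (b);
	}

	int
	main(void)
	{
		struct rxbuf *b = rxbuf_get();

		rxbuf_free(b);		/* goes back on the list */
		b = rxbuf_get();	/* same buffer handed out again */
		rxbuf_free(b);
		return (0);
	}

If other code replaces the free routine with its own, rxbuf_free()
is never called and the pool silently drains, which matches the
symptom Mark suspects one of his users is hitting.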