From owner-freebsd-net  Fri Jul  5 05:09:27 2002
Delivered-To: freebsd-net@freebsd.org
Received: from mx1.FreeBSD.org (mx1.FreeBSD.org [216.136.204.125])
	by hub.freebsd.org (Postfix) with ESMTP id 6308337B407;
	Fri, 5 Jul 2002 05:09:22 -0700 (PDT)
Received: from duke.cs.duke.edu (duke.cs.duke.edu [152.3.140.1])
	by mx1.FreeBSD.org (Postfix) with ESMTP id 239FA43E5E;
	Fri, 5 Jul 2002 05:09:20 -0700 (PDT)
	(envelope-from gallatin@cs.duke.edu)
Received: from grasshopper.cs.duke.edu (grasshopper.cs.duke.edu [152.3.145.30])
	by duke.cs.duke.edu (8.9.3/8.9.3) with ESMTP id IAA20605;
	Fri, 5 Jul 2002 08:09:17 -0400 (EDT)
Received: (from gallatin@localhost)
	by grasshopper.cs.duke.edu (8.11.6/8.9.1) id g65C8l928851;
	Fri, 5 Jul 2002 08:08:47 -0400 (EDT)
	(envelope-from gallatin@cs.duke.edu)
From: Andrew Gallatin
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <15653.35919.24295.698563@grasshopper.cs.duke.edu>
Date: Fri, 5 Jul 2002 08:08:47 -0400 (EDT)
To: Bosko Milekic
Cc: "Kenneth D. Merry", current@FreeBSD.ORG, net@FreeBSD.ORG
Subject: Re: virtually contig jumbo mbufs (was Re: new zero copy sockets snapshot)
In-Reply-To: <20020705002056.A5365@unixdaemons.com>
References: <20020619090046.A2063@panzer.kdm.org> <20020619120641.A18434@unixdaemons.com> <15633.17238.109126.952673@grasshopper.cs.duke.edu> <20020619233721.A30669@unixdaemons.com> <15633.62357.79381.405511@grasshopper.cs.duke.edu> <20020620114511.A22413@unixdaemons.com> <15634.534.696063.241224@grasshopper.cs.duke.edu> <20020620134723.A22954@unixdaemons.com> <15652.46870.463359.853754@grasshopper.cs.duke.edu> <20020705002056.A5365@unixdaemons.com>
X-Mailer: VM 6.75 under 21.1 (patch 12) "Channel Islands" XEmacs Lucid
Sender: owner-freebsd-net@FreeBSD.ORG
Precedence: bulk
List-ID:
List-Archive: (Web Archive)
List-Help: (List Instructions)
List-Subscribe:
List-Unsubscribe:
X-Loop: FreeBSD.org

Bosko Milekic writes:
 >
 >   Yes, it certainly confirms the virtual-based caching assumptions.  I
 > would like to provide virtually contiguous large buffers and believe I
 > can do that via mb_alloc... however, they would be several wired-down
 > pages.  Would this be in line with the requirements that these buffers
 > would have, in your mind?  (wired-down means that your buffers will
 > come out exactly as they would out of malloc(), so if you were using
 > malloc() already, I'm assuming that wired-down is OK).

I'd use these virtually contiguous, physically discontiguous mbufs for
GigE drivers which support jumbo frames and multiple recv descriptors,
but are incapable of doing header-splitting or any other sort of useful
framing (almost all of them, I think).  From that perspective, it
doesn't really matter what the mbufs look like internally.

 >   I think I can allocate the jumbo buffers via mb_alloc from the same map
 > as I allocate clusters from - the clust_map - and keep them in
 > buckets/slabs in per-CPU caches, like I do for mbufs and regular
 > clusters right now.
 > Luigi is in the process of doing some optimisation
 > work around mb_alloc, and I'll probably be doing the SMP-specific stuff
 > after he's done, so once that's taken care of, we can take a stab at
 > this if you think it's worth it.

Would this be easier or harder than simple, physically contiguous
buffers?  I think that it's only worth doing if it's easier to manage at
the system level; otherwise you might as well use physically contiguous
mbufs.

My main goal is to see the per-driver caches of physical memory
disappear ;)

Drew

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-net" in the body of the message