From owner-freebsd-arch@FreeBSD.ORG Mon Nov 24 10:57:31 2008
Date: Mon, 24 Nov 2008 00:54:51 -1000 (HST)
From: Jeff Roberson <jroberson@jroberson.net>
To: Alfred Perlstein
Cc: arch@freebsd.org
Subject: Re: Limiting mbuf memory.
In-Reply-To: <20081124085223.GY28578@elvis.mu.org>
Message-ID: <20081124005404.H971@desktop>
References: <20081123213232.A971@desktop> <20081124085223.GY28578@elvis.mu.org>
List-Id: Discussion related to FreeBSD architecture

On Mon, 24 Nov 2008, Alfred Perlstein wrote:

> * Jeff Roberson [081123 23:48] wrote:
>> I'm developing a patch for an alternate memory layout for mbuf clusters
>> that relies on contigmalloc.  Since this can fail, we'll still have to
>> retain the capability of allocating traditional clusters.  I'll report
>> details on that later.
>> I'm writing this email to address the issue of resource accounting in
>> mbufs.
>>
>> Presently we use a set of limits on individual zones or sizes of mbufs:
>> standard mbufs, clusters, page-size jumbos, 9k jumbos, and 16k jumbos.
>> Each is administered separately.  I think this is getting a bit
>> unwieldy.  In the future, we may have even more sizes.  This also
>> introduces problems because I will have two cluster zones; do they each
>> get their own limit?
>>
>> I would like to consolidate this into a single limit on the total
>> number of pages allocated to networking, with perhaps some fractional
>> reservation for standard mbufs and clusters to make sure they aren't
>> overwhelmed by the larger buffers.
>>
>> This would be implemented by overriding the uma zone page allocator for
>> each of the mbuf zones with one that counts pages for all.  Should we
>> reach the limit, we'll block depending on the wait settings of the
>> requestor.  The limit and sleep will probably be protected by a global
>> lock, which won't be an issue because trips to the back-end allocator
>> are infrequent and protected by their own global lock anyhow.
>>
>> How do people feel about this?  To be clear, this would eliminate
>> nmbclusters, nmbjumbop, nmbjumbo9, nmbjumbo16, and related config
>> settings and sysctls.  They would be replaced by something like
>> 'maxmbufbytes'.  Presently we place no limit on small mbufs.  I could
>> go either way on this; it could be added to the limit or not.
>
> This sounds good, but please take into consideration the possibility
> of deadlock that can happen due to resource allocation from a single
> pool.
>
> It might make sense to keep the small and large mbuf limits separate
> or something like that.

This is what I meant in the third paragraph.

> Might also make sense to retain the limits but set them all to
> "unlimited" (within the global limit) unless configured otherwise
> for various custom set ups.

I think this is a good idea.
> I don't feel too strongly about this, just some points to consider.

I appreciate the feedback.

Jeff

> --
> - Alfred Perlstein