Date: Sat, 10 Nov 2012 20:18:01 +0100
From: Andre Oppermann <andre@freebsd.org>
To: Peter Wemm <peter@wemm.org>
Cc: src-committers@freebsd.org, Eitan Adler <eadler@freebsd.org>,
    Alfred Perlstein <alfred@freebsd.org>, svn-src-all@freebsd.org,
    Alfred Perlstein <bright@mu.org>, svn-src-head@freebsd.org
Subject: Re: svn commit: r242847 - in head/sys: i386/include kern
Message-ID: <509EA869.6030407@freebsd.org>
In-Reply-To: <CAGE5yCoeTXf7x4ZBDXnHJ4dnFi-_2R28kB8HxOB+=Je4aJGYQQ@mail.gmail.com>
References: <201211100208.qAA28e0v004842@svn.freebsd.org>
    <CAF6rxg=HPmQS1T-LFsZ=DuKEqH30iJFpkz+JGhLr4OBL8nohjg@mail.gmail.com>
    <509DC25E.5030306@mu.org> <509E3162.5020702@FreeBSD.org>
    <509E7E7C.9000104@mu.org>
    <CAF6rxgmV8dx-gsQceQKuMQEsJ+GkExcKYxEvQ3kY+5_nSjvA3w@mail.gmail.com>
    <509E830D.5080006@mu.org> <509E847E.30509@mu.org>
    <CAF6rxgnfm4HURYp=O4MY8rB6H1tGiqJ3rdPx0rZ8Swko5mAOZg@mail.gmail.com>
    <509E8930.50800@mu.org>
    <CAF6rxgmabVuR0JoFURRUF+ed0hmT=LF_n5LXSip0ibU0hk6qWw@mail.gmail.com>
    <CAGE5yCouCWr4NKbgnjKfLcjc8EWqG0wRiSmXDDnrnM3+Uc8KVQ@mail.gmail.com>
    <CAF6rxg=ryNEMEidJdgf8-Ab=bD15R1ypcz-bS8183U4JK_Q17g@mail.gmail.com>
    <CAGE5yCoeTXf7x4ZBDXnHJ4dnFi-_2R28kB8HxOB+=Je4aJGYQQ@mail.gmail.com>
On 10.11.2012 19:04, Peter Wemm wrote:
> On Sat, Nov 10, 2012 at 9:48 AM, Eitan Adler <eadler@freebsd.org> wrote:
>> On 10 November 2012 12:45, Peter Wemm <peter@wemm.org> wrote:
>>> On Sat, Nov 10, 2012 at 9:33 AM, Eitan Adler <eadler@freebsd.org> wrote:
>>>> On 10 November 2012 12:04, Alfred Perlstein <bright@mu.org> wrote:
>>>>> Sure, if you'd like you can help me craft that comment now?
>>>>
>>>> I think this is short and clear:
>>>> ===
>>>> Limit the amount of kernel address space used to a fixed cap.
>>>> 384 is an arbitrarily chosen value that leaves 270 MB of KVA available
>>>> of the 2 MB total. On systems with large amount of memory reduce the
>>>> the slope of the function in order to avoiding exhausting KVA.
>>>> ===
>>>
>>> That's actually completely 100% incorrect...
>>
>> okay. I'm going by the log messages posted so far. I have no idea how
>> this works. Can you explain it better?
>
> That's exactly my point..
>
> You get 1 maxuser per 2MB of physical ram.
> If you get more than 384 maxusers (ie: 192GB of ram) we scale it
> differently for the part past 192GB. I have no idea how the hell to

Rather past 768MB of RAM.

> calculate that.
> You get an unlimited number of regular mbufs.
> You get 64 clusters per maxuser (128k)
> Unless I fubared the numbers, this currently works out to be 6%, or 1/16.
>
> Each MD backend gets to provide a cap for maxusers, which is in units
> of 2MB. For an i386 PAE machine you have a finite amount of KVA space
> (1GB, but this is adjustable.. you can easily configure it for 3GB kva
> with one compile option for the kernel). The backends where the
> nmbclusters comes out of KVA should calculate the number of 2MB units
> to avoid running out of KVA.
>
> amd64 does a mixture of direct map and kva allocations. eg: mbufs and
> clusters come from direct map, the jumbo clusters come from kva.
>
> So side effects of nmbclusters for amd64 are more complicated.
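The two-slope scaling Peter describes (and Andre's correction that the knee is at 768MB of RAM, i.e. 384 maxusers x 2MB each) can be sketched as below. This is an illustrative reconstruction, not the actual subr_param.c code; the function names, the floor of 32, and the nmbclusters base of 1024 are assumptions.

```c
#include <assert.h>

/*
 * Illustrative sketch of the maxusers auto-tuning under discussion.
 * One maxuser per 2 MB of physical RAM; past 384 maxusers (i.e. past
 * 768 MB of RAM, not 192 GB) the slope is assumed to flatten to 1/8.
 */
static unsigned long
scale_maxusers(unsigned long physmem_bytes)
{
	unsigned long mu = physmem_bytes / (2UL * 1024 * 1024);

	if (mu < 32)
		mu = 32;			/* assumed floor for tiny machines */
	if (mu > 384)
		mu = 384 + (mu - 384) / 8;	/* flatter slope past 768 MB */
	return (mu);
}

/*
 * 64 clusters per maxuser: 64 x 2 KB = 128 KB of cluster limit per
 * 2 MB of RAM, which is the 1/16 (~6%) ratio Peter mentions.
 */
static unsigned long
clusters_from_maxusers(unsigned long maxusers)
{
	return (1024 + maxusers * 64);
}
```

For example, 768MB of RAM yields exactly 384 maxusers, while 192GB yields 98304 raw maxusers scaled down to 12624 under the assumed 1/8 slope.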
>
> 1/2 of the nmbclusters (which are in physical ram) are allocated as
> jumbo frames (kva)
> 1/4 of nmbclusters (physical) are 9k jumbo frames (kva)
> 1/8 of nmbclusters (physical) are used to set the 16k kva backed jumbo
> frame pool.

The mbufs and clusters of the different types are not allocated at
startup time; rather their total allocation at runtime is *limited* to
that maximal value in UMA.

> amd64 kva is "large enough" now, but my recollection is that sparc64
> has a small kva plus a large direct map. Tuning for amd64 isn't
> relevant for sparc64. mips has direct map, but doesn't have a "large"
> direct map, nor a "large" kva.
>
> This is complicated but we need a simple user visible view of it. It
> really needs to be something like "nmbclusters defaults to 6% of
> physical ram, with machine dependent limits". The MD limits are bad
> enough, and using bogo-units like "maxusers" just makes it worse.

Yes, that would be optimal.

-- 
Andre
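The jumbo-pool fractions above, together with Andre's point that these are UMA zone *limits* rather than boot-time allocations, can be sketched as follows. The struct and function names are hypothetical, chosen to mirror the tunables being discussed.

```c
#include <assert.h>

/*
 * Hypothetical sketch of the jumbo-cluster caps derived from
 * nmbclusters: 1/2 page-size jumbos, 1/4 9k jumbos, 1/8 16k jumbos.
 * These are ceilings enforced by UMA at runtime, not allocations
 * made at startup.
 */
struct jumbo_limits {
	unsigned long nmbjumbop;	/* page-size jumbo clusters */
	unsigned long nmbjumbo9;	/* 9k jumbo clusters */
	unsigned long nmbjumbo16;	/* 16k jumbo clusters */
};

static struct jumbo_limits
jumbo_limits_from(unsigned long nmbclusters)
{
	struct jumbo_limits lim;

	lim.nmbjumbop  = nmbclusters / 2;
	lim.nmbjumbo9  = nmbclusters / 4;
	lim.nmbjumbo16 = nmbclusters / 8;
	return (lim);
}
```

Note that on amd64 the regular clusters counted by nmbclusters come from the direct map, while the jumbo pools sized here consume KVA, which is why the fractions matter on KVA-constrained platforms.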