Date: Sat, 20 Mar 2010 19:13:58 +0100
From: "C. P. Ghost" <cpghost@cordula.ws>
To: Matthew Dillon <dillon@apollo.backplane.com>
Cc: Alexander Motin <mav@freebsd.org>, FreeBSD-Current <freebsd-current@freebsd.org>, freebsd-arch@freebsd.org
Subject: Re: Increasing MAXPHYS
Message-ID: <d74eb87c1003201113q21ddde15nea6dc77be22ce846@mail.gmail.com>
In-Reply-To: <201003201753.o2KHrH5x003946@apollo.backplane.com>
References: <4BA4E7A9.3070502@FreeBSD.org> <201003201753.o2KHrH5x003946@apollo.backplane.com>
On Sat, Mar 20, 2010 at 6:53 PM, Matthew Dillon
<dillon@apollo.backplane.com> wrote:
>
> :All above I have successfully tested last months with MAXPHYS of 1MB on
> :i386 and amd64 platforms.
> :
> :So my questions are:
> :- does somebody know any issues denying increasing MAXPHYS in HEAD?
> :- are there any specific opinions about value? 512K, 1MB, MD?
> :
> :--
> :Alexander Motin
>
> (nswbuf * MAXPHYS) of KVM is reserved for pbufs, so on i386 you
> might hit up against KVM exhaustion issues in unrelated subsystems.
> nswbuf typically maxes out at around 256. For i386 1MB is probably
> too large (256M of reserved KVM is a lot for i386). On amd64 there
> shouldn't be a problem.

Pardon my ignorance, but wouldn't that much KVM make small embedded
devices like Soekris boards with 128 MB of physical RAM totally
unusable then?

On my net4801, running RELENG_8:

vm.kmem_size: 40878080
hw.physmem: 125272064
hw.usermem: 84840448
hw.realmem: 134217728

> Diminishing returns get hit pretty quickly with larger MAXPHYS values.
> As long as the I/O can be pipelined the reduced transaction rate
> becomes less interesting when the transaction rate is less than a
> certain level. Off the cuff I'd say 2000 tps is a good basis for
> considering whether it is an issue or not. 256K is actually quite
> a reasonable value. Even 128K is reasonable.
>
> Nearly all the issues I've come up against in the last few years have
> been related more to pipeline algorithms breaking down and less with
> I/O size. The cluster_read() code is especially vulnerable to
> algorithmic breakdowns when fast media (such as a SSD) is involved.
> e.g. I/Os queued from the previous cluster op can create stall
> conditions in subsequent cluster ops before they can issue new I/Os
> to keep the pipeline hot.

Thanks,
-cpghost.

--
Cordula's Web. http://www.cordula.ws/
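
[Editor's note: the arithmetic behind the "256M of reserved KVM" figure
above can be made concrete with a small userland sketch. This is only an
illustration of (nswbuf * MAXPHYS); the nswbuf value of 256 is the
"typically maxes out at around 256" figure from the mail, an assumption
for illustration, not a value read from a running kernel.]

/*
 * Rough illustration of the pbuf KVM reservation discussed above:
 * approximately (nswbuf * MAXPHYS) bytes of kernel virtual memory
 * are set aside for pbufs.
 */
#include <stdio.h>

int
main(void)
{
	const unsigned long nswbuf = 256;	/* assumed typical maximum */
	const unsigned long maxphys[] = {
		128 * 1024,	/* 128K: default at the time */
		256 * 1024,	/* 256K */
		512 * 1024,	/* 512K */
		1024 * 1024	/* 1M: the proposed value */
	};
	size_t i;

	for (i = 0; i < sizeof(maxphys) / sizeof(maxphys[0]); i++) {
		unsigned long reserved = nswbuf * maxphys[i];
		printf("MAXPHYS = %4luK -> ~%lu MB of KVM reserved for pbufs\n",
		    maxphys[i] / 1024, reserved / (1024 * 1024));
	}
	return (0);
}

With MAXPHYS at 1MB this prints ~256 MB, which is why the figure looks
alarming next to the ~40 MB vm.kmem_size reported for the net4801 above.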
