Date: Sat, 20 Mar 2010 19:13:58 +0100
From: "C. P. Ghost" <cpghost@cordula.ws>
To: Matthew Dillon <dillon@apollo.backplane.com>
Cc: Alexander Motin <mav@freebsd.org>, FreeBSD-Current <freebsd-current@freebsd.org>, freebsd-arch@freebsd.org
Subject: Re: Increasing MAXPHYS
Message-ID: <d74eb87c1003201113q21ddde15nea6dc77be22ce846@mail.gmail.com>
In-Reply-To: <201003201753.o2KHrH5x003946@apollo.backplane.com>
References: <4BA4E7A9.3070502@FreeBSD.org> <201003201753.o2KHrH5x003946@apollo.backplane.com>
On Sat, Mar 20, 2010 at 6:53 PM, Matthew Dillon
<dillon@apollo.backplane.com> wrote:
>
> :All above I have successfully tested last months with MAXPHYS of 1MB on
> :i386 and amd64 platforms.
> :
> :So my questions are:
> :- does somebody know any issues denying increasing MAXPHYS in HEAD?
> :- are there any specific opinions about value? 512K, 1MB, MD?
> :
> :--
> :Alexander Motin
>
>     (nswbuf * MAXPHYS) of KVM is reserved for pbufs, so on i386 you
>     might hit up against KVM exhaustion issues in unrelated subsystems.
>     nswbuf typically maxes out at around 256.  For i386 1MB is probably
>     too large (256M of reserved KVM is a lot for i386).  On amd64 there
>     shouldn't be a problem.

Pardon my ignorance, but wouldn't so much KVM make small embedded
devices like Soekris boards with 128 MB of physical RAM totally
unusable then?

On my net4801, running RELENG_8:

vm.kmem_size: 40878080
hw.physmem: 125272064
hw.usermem: 84840448
hw.realmem: 134217728

>     Diminishing returns get hit pretty quickly with larger MAXPHYS values.
>     As long as the I/O can be pipelined the reduced transaction rate
>     becomes less interesting when the transaction rate is less than a
>     certain level.  Off the cuff I'd say 2000 tps is a good basis for
>     considering whether it is an issue or not.  256K is actually quite
>     a reasonable value.  Even 128K is reasonable.
>
>     Nearly all the issues I've come up against in the last few years have
>     been related more to pipeline algorithms breaking down and less with
>     I/O size.  The cluster_read() code is especially vulnerable to
>     algorithmic breakdowns when fast media (such as an SSD) is involved.
>     e.g.  I/Os queued from the previous cluster op can create stall
>     conditions in subsequent cluster ops before they can issue new I/Os
>     to keep the pipeline hot.

Thanks,
-cpghost.

-- 
Cordula's Web.
http://www.cordula.ws/