Date: Sun, 21 Mar 2010 08:51:24 -0700
From: Julian Elischer <julian@elischer.org>
To: Andriy Gapon <avg@icyb.net.ua>
Cc: Alexander Motin <mav@FreeBSD.org>, freebsd-current@FreeBSD.org, Ivan Voras <ivoras@FreeBSD.org>, freebsd-arch@FreeBSD.org
Subject: Re: Increasing MAXPHYS
Message-ID: <4BA6407C.3020103@elischer.org>
In-Reply-To: <4BA633A0.2090108@icyb.net.ua>
References: <1269109391.00231800.1269099002@10.7.7.3> <1269120182.00231865.1269108002@10.7.7.3> <1269120188.00231888.1269109203@10.7.7.3> <1269123795.00231922.1269113402@10.7.7.3> <1269130981.00231933.1269118202@10.7.7.3> <1269130986.00231939.1269119402@10.7.7.3> <1269134581.00231948.1269121202@10.7.7.3> <1269134585.00231959.1269122405@10.7.7.3> <4BA6279E.3010201@FreeBSD.org> <4BA633A0.2090108@icyb.net.ua>
Andriy Gapon wrote:
> on 21/03/2010 16:05 Alexander Motin said the following:
>> Ivan Voras wrote:
>>> Hmm, it looks like it could be easy to spawn more g_* threads (and,
>>> barring specific class behaviour, it has a fair chance of working out
>>> of the box) but the incoming queue will need to also be broken up for
>>> greater effect.
>> According to "notes", it looks like there is a good chance of hitting
>> races, as some places expect only one up and one down thread.
>
> I haven't given any deep thought to this issue, but I remember us
> discussing it over beer :-)
> I think one idea was making sure (somehow) that requests traveling over
> the same edge of a geom graph (in the same direction) do it using the
> same queue/thread.
> Another idea was to bring in some netgraph-like optimization where some
> (carefully chosen) geom vertices pass requests by a direct call instead
> of requeuing.

yeah, like the 1:1 single provider case (which we and most of our
customers mostly use on our cards), i.e. no slicing or dicing, just the
raw flash card presented as /dev/fio0.
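To make the two ideas above a bit more concrete, here is a rough
userspace sketch of what "one queue/thread per graph edge and direction"
plus "direct call for carefully chosen vertices" could look like. All
names and structures here are made up for illustration; this is not the
real GEOM API or a proposed patch.

	/*
	 * Sketch only: requests crossing a given edge of the graph either
	 * go through that edge's own queue (served by one worker thread,
	 * preserving per-edge ordering without a single global g_down), or,
	 * if the edge is flagged "direct", skip the queue entirely and call
	 * the next vertex's start routine inline, netgraph-style.
	 */
	#include <pthread.h>
	#include <stddef.h>

	struct bio_req {
		struct bio_req	*next;
		void		*data;
	};

	struct node {					/* geom-like vertex */
		const char	*name;
		void		(*start)(struct node *, struct bio_req *);
	};

	struct edge {					/* one direction of an edge */
		struct node	*to;
		int		 direct;		/* pass by direct call? */
		pthread_mutex_t	 lock;
		pthread_cond_t	 cv;
		struct bio_req	*head, *tail;		/* per-edge request queue */
	};

	/* Hand a request to the next vertex over the given edge. */
	static void
	edge_deliver(struct edge *e, struct bio_req *bp)
	{
		if (e->direct) {
			/* Carefully chosen vertices: no requeue, call down. */
			e->to->start(e->to, bp);
			return;
		}
		/* Otherwise queue it for this edge's dedicated worker. */
		pthread_mutex_lock(&e->lock);
		bp->next = NULL;
		if (e->tail != NULL)
			e->tail->next = bp;
		else
			e->head = bp;
		e->tail = bp;
		pthread_cond_signal(&e->cv);
		pthread_mutex_unlock(&e->lock);
	}

	/* Worker loop: one thread per (edge, direction). */
	static void *
	edge_worker(void *arg)
	{
		struct edge *e = arg;
		struct bio_req *bp;

		for (;;) {
			pthread_mutex_lock(&e->lock);
			while (e->head == NULL)
				pthread_cond_wait(&e->cv, &e->lock);
			bp = e->head;
			e->head = bp->next;
			if (e->head == NULL)
				e->tail = NULL;
			pthread_mutex_unlock(&e->lock);
			e->to->start(e->to, bp);
		}
		return (NULL);
	}

In the 1:1 single-provider case mentioned above, the whole path would be
one "direct" edge, so requests never touch a queue at all.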