Date: Fri, 16 Dec 2005 07:36:15 -0800 (PST)
From: Danial Thom <danial_thom@yahoo.com>
To: Sasa Stupar <sasa@stupar.homelinux.net>, freebsd-questions@freebsd.org
Subject: Re: Polling For 100 mbps Connections? (Was Re: Freebsd Theme Song)
Message-ID: <20051216153615.74872.qmail@web33302.mail.mud.yahoo.com>
In-Reply-To: <A15A67EAFAB435A5C96BF972@[192.168.10.249]>
--- Sasa Stupar <sasa@stupar.homelinux.net> wrote:

> --On 15. december 2005 6:33 -0800 Drew Tomlinson
> <drew@mykitchentable.net> wrote:
>
>> On 12/15/2005 12:33 AM Sasa Stupar wrote:
>>
>>> --On 14. december 2005 20:01 -0800 Ted Mittelstaedt
>>> <tedm@toybox.placo.com> wrote:
>>>
>>>>> -----Original Message-----
>>>>> From: Danial Thom [mailto:danial_thom@yahoo.com]
>>>>> Sent: Wednesday, December 14, 2005 11:14 AM
>>>>> To: Ted Mittelstaedt; Drew Tomlinson
>>>>> Cc: freebsd-questions@freebsd.org
>>>>> Subject: RE: Polling For 100 mbps Connections? (Was Re: Freebsd
>>>>> Theme Song)
>>>>>
>>>>>> Well, if polling does no good for fxp, due to the hardware doing
>>>>>> controlled interrupts, then why does the fxp driver even let you
>>>>>> set it as an option? And why have many people who have enabled it
>>>>>> on fxp seen an improvement?
>>>>>
>>>>> They haven't; freebsd accounting doesn't work properly with
>>>>> polling enabled, and "they" don't have the ability to "know" if
>>>>> they are getting better performance, because "they", like you,
>>>>> have no clue what they're doing. How about all the idiots running
>>>>> MP with FreeBSD 4.x, when we know it's just a waste of time?
>>>>> "They" all think they're getting worthwhile performance, because
>>>>> "they" are clueless.
>>>>
>>>> I would call them idiots if they are running MP under FreeBSD and
>>>> assuming that they are getting better performance without actually
>>>> testing for it. But if they are just running MP because they happen
>>>> to be using an MP server, and they want to see if it will work or
>>>> not, who cares?
>>>>
>>>>> Maybe it's tunable because the guy who wrote the driver made it a
>>>>> tunable? Duh.
>>>>> I've yet to see one credible, controlled test that shows polling
>>>>> vs properly tuned interrupt-driven.
>>>>
>>>> Hm, OK, I believe that. As I recall, I asked you earlier to post
>>>> the test setup you used for your own tests "proving" that polling
>>>> is worse, and you haven't done so yet. Now you are saying you have
>>>> never seen a credible, controlled test that shows polling vs
>>>> interrupt-driven. So I guess either you were blind when you ran
>>>> your own tests, or your own tests are not credible, controlled
>>>> polling vs properly tuned interrupt-driven. As I have been saying
>>>> all along. Now you're agreeing with me.
>>>>
>>>>> The only advantage of polling is that it will drop packets instead
>>>>> of going into livelock. The disadvantage is that it will drop
>>>>> packets when you have momentary bursts that would harmlessly put
>>>>> the machine into livelock. That's about it.
>>>>
>>>> Ah, now I think suddenly I see what the chip on your shoulder is.
>>>> You would rather have your router based on FreeBSD go into livelock
>>>> while packets stack up than drop anything. You tested the polling
>>>> code and found that, yipes, it drops packets.
>>>>
>>>> What, may I ask, do you think a Cisco or other router does when you
>>>> shove 10Mbt of traffic into its Ethernet interface destined for a
>>>> host behind a T1 that is plugged into the other end? (And no,
>>>> source-quench is not the correct answer.)
>>>>
>>>> I think the scenario of it being better to momentarily go into
>>>> livelock during an overload is only applicable to one case: where
>>>> the 2 interfaces in the router are the same capacity, as in
>>>> ethernet-to-ethernet routers. Most certainly not Ethernet-to-serial
>>>> routers, like what most routers are that aren't on DSL lines.
>>>> If you have a different understanding then please explain.
>>>>
>>>>>> I've read those datasheets as well, and the thing I don't
>>>>>> understand is that if you are pumping 100Mbt into an Etherexpress
>>>>>> Pro/100, then if the card will not interrupt more than this
>>>>>> throttled rate you keep talking about, the card's interrupt
>>>>>> throttling is going to limit the inbound bandwidth to below
>>>>>> 100Mbt.
>>>>>
>>>>> Wrong again, Ted. It scares me that you consider yourself
>>>>> knowledgeable about this. You can process #interrupts x ring_size
>>>>> packets, not one per interrupt. You're only polling 1000x per
>>>>> second (or whatever you have hz set to), so why do you think that
>>>>> you have to interrupt for every packet to do 100Mb/s?
>>>>
>>>> I never said anything about interrupting for every packet, did I?
>>>> Of course not, since I know what you're talking about. However, it
>>>> is you who are throwing around the numbers - or were in your prior
>>>> post - regarding the fxp driver and hardware. Why should I have to
>>>> do the work digging around in the datasheets and doing the math?
>>>>
>>>> Since you seem to be wanting to argue this from a theory
>>>> standpoint, then your only option is to

=== message truncated ===

<message too large for stupid Yahoo mailer>

Unfortunately your "test" is not controlled, which is pretty typical of
most OS testers. Firstly, "efficiency" is the goal. How many packets you
can pump through a socket interface is not an efficiency measurement.
What was the load on the machine during your test? How many polls per
second were being used? What was the interrupt rate for the non-polling
test? You can't control the test, because iperf is a crappy test. You're
not trying to measure how much tcp traffic you can push.
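For scale, the interrupt and poll rates being argued about can be bounded with a quick back-of-the-envelope script. The 64-entry ring and hz=1000 figures below are assumptions for illustration; real fxp ring sizes, throttle settings, and kern.hz values vary.

```python
# Worst-case packet rates at 100 Mbit/s, and what they imply for
# interrupt coalescing vs. polling. All figures are assumptions for
# illustration, not measurements of any particular fxp card.

LINK_BPS = 100_000_000                    # 100 Mbit/s Ethernet
WIRE_BITS_PER_MIN_FRAME = (64 + 20) * 8   # 64B frame + preamble/IFG = 672 bits
RING_SIZE = 64                            # assumed RX descriptor ring entries
HZ = 1000                                 # assumed kern.hz, i.e. polls/second

max_pps = LINK_BPS // WIRE_BITS_PER_MIN_FRAME  # worst case: min-size frames
per_poll = max_pps / HZ                        # packets arriving per 1 ms tick
full_ring_irq_rate = max_pps / RING_SIZE       # irq/s if each drains a full ring

print(f"worst-case packets/s:    {max_pps}")               # ~148,809
print(f"packets per poll tick:   {per_poll:.0f}")          # ~149
print(f"interrupts/s, full-ring: {full_ring_irq_rate:.0f}")
```

At minimum-size frames, an interrupt that drains a full 64-entry ring needs only a few thousand interrupts per second, not one per packet, which is the point being made about coalescing; conversely, at hz=1000 roughly 149 packets can arrive between polls, more than the assumed ring holds, which is where polling's burst drops come from.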
There are way too many variables. For example, an ethernet card that
iperf might test slower may use 1/2 the CPU of another, which makes it
a much better card. As the load increases, the # of polls/second will
increase. So unless you know what you're testing, your test results
will be wrong. It's too bad you're wasting so much time testing things
that should be obvious if you understood how they worked. It does make
it easier to make money in the world, but it's not doing the project
any good.

dt
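The "efficiency" argument above can be made concrete with a toy calculation: rank two hypothetical cards by packets forwarded per CPU-second instead of by raw throughput. Both cards and all numbers are invented for illustration, not taken from any real test.

```python
# Toy illustration: raw throughput and efficiency can rank NICs
# differently. All figures here are hypothetical.

cards = {
    # name: (packets/s achieved in the test, fraction of CPU consumed)
    "card_a": (95_000, 0.90),  # the raw-throughput "winner", CPU nearly saturated
    "card_b": (90_000, 0.45),  # slightly slower, at half the CPU
}

def efficiency(pps: int, cpu_frac: float) -> float:
    """Packets forwarded per second of CPU time actually spent."""
    return pps / cpu_frac

for name, (pps, cpu) in cards.items():
    print(f"{name}: {pps} pps, {efficiency(pps, cpu):,.0f} pkts per CPU-second")
```

By raw rate card_a wins; by packets per CPU-second card_b is nearly twice as good and has the headroom to absorb more load, which is the sense in which a throughput number alone, with no record of CPU load or poll/interrupt rates, is an uncontrolled test.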
Want to link to this message? Use this URL: <https://mail-archive.FreeBSD.org/cgi/mid.cgi?20051216153615.74872.qmail>