Date: Sun, 18 Dec 2005 01:52:56 -0800
From: "Ted Mittelstaedt" <tedm@toybox.placo.com>
To: <danial_thom@yahoo.com>, "Sasa Stupar" <sasa@stupar.homelinux.net>, <freebsd-questions@freebsd.org>
Subject: RE: Polling For 100 mbps Connections? (Was Re: Freebsd Theme Song)
Message-ID: <LOBBIFDAGNMAMLGJJCKNEEBDFDAA.tedm@toybox.placo.com>
In-Reply-To: <20051216153615.74872.qmail@web33302.mail.mud.yahoo.com>
>-----Original Message-----
>From: owner-freebsd-questions@freebsd.org
>[mailto:owner-freebsd-questions@freebsd.org]On Behalf Of Danial Thom
>Sent: Friday, December 16, 2005 7:36 AM
>To: Sasa Stupar; freebsd-questions@freebsd.org
>Subject: Re: Polling For 100 mbps Connections? (Was Re: Freebsd Theme
>Song)
>
>--- Sasa Stupar <sasa@stupar.homelinux.net> wrote:
>
>> --On 15. december 2005 6:33 -0800 Drew Tomlinson
>> <drew@mykitchentable.net> wrote:
>>
>> > On 12/15/2005 12:33 AM Sasa Stupar wrote:
>> >
>> >> --On 14. december 2005 20:01 -0800 Ted Mittelstaedt
>> >> <tedm@toybox.placo.com> wrote:
>> >>
>> >>>> -----Original Message-----
>> >>>> From: Danial Thom [mailto:danial_thom@yahoo.com]
>> >>>> Sent: Wednesday, December 14, 2005 11:14 AM
>> >>>> To: Ted Mittelstaedt; Drew Tomlinson
>> >>>> Cc: freebsd-questions@freebsd.org
>> >>>> Subject: RE: Polling For 100 mbps Connections? (Was Re: Freebsd
>> >>>> Theme Song)
>> >>>>
>> >>>>> Well, if polling does no good for fxp, due to the hardware
>> >>>>> doing controlled interrupts, then why does the fxp driver even
>> >>>>> let you set it as an option? And why have many people who have
>> >>>>> enabled it on fxp seen an improvement?
>> >>>>
>> >>>> They haven't, freebsd accounting doesn't work properly with
>> >>>> polling enabled, and "they" don't have the ability to "know" if
>> >>>> they are getting better performance, because "they", like you,
>> >>>> have no clue what they're doing. How about all the idiots
>> >>>> running MP with FreeBSD 4.x, when we know its just a waste of
>> >>>> time? "they" all think they're getting worthwhile performance,
>> >>>> because "they" are clueless.
>> >>>
>> >>> I would call them idiots if they are running MP under FreeBSD
>> >>> and assuming that they are getting better performance without
>> >>> actually testing for it. But if they are just running MP because
>> >>> they happen to be using an MP server, and they want to see if it
>> >>> will work or not, who cares?
>> >>>
>> >>>> Maybe its tunable because they guy who wrote the driver made it
>> >>>> a tunable? duh. I've yet to see one credible, controlled test
>> >>>> that shows polling vs properly tuned interrupt-driven.
>> >>>
>> >>> Hm, OK I believe that. As I recall I asked you earlier to post
>> >>> the test setup you used for your own tests "proving" that
>> >>> polling is worse, and you haven't done so yet. Now you are
>> >>> saying you have never seen a credible controlled test that shows
>> >>> polling vs interrupt-driven. So I guess either you were blind
>> >>> when you ran your own tests, or your own tests are not credible,
>> >>> controlled polling vs properly tuned interrupt-driven. As I have
>> >>> been saying all along. Now your agreeing with me.
>> >>>
>> >>>> The only advantage of polling is that it will drop packets
>> >>>> instead of going into livelock. The disadvantage is that it
>> >>>> will drop packets when you have momentary bursts that would
>> >>>> harmlessly put the machine into livelock. Thats about it.
>> >>>
>> >>> Ah, now I think suddenly I see what the chip on your shoulder
>> >>> is. You would rather have your router based on FreeBSD go into
>> >>> livelock while packets stack up, than drop anything. You tested
>> >>> the polling code and found that yipes, it drops packets.
>> >>>
>> >>> What may I ask do you think that a Cisco or other router does
>> >>> when you shove 10Mbt of traffic into it's Ethernet interface
>> >>> destined for a host behind a T1 that is plugged into the other
>> >>> end? (and no, source-quench is not the correct answer)
>> >>>
>> >>> I think the scenario of it being better to momentary go into
>> >>> livelock during an overload is only applicable to one scenario,
>> >>> where the 2 interfaces in the router are the same capacity. As
>> >>> in ethernet-to-ethernet routers. Most certainly not
>> >>> Ethernet-to-serial routers, like what most routers are that
>> >>> aren't on DSL lines.
>> >>>
>> >>> If you have a different understanding then please explain.
>> >>>
>> >>>>> I've read those datasheets as well and the thing I don't
>> >>>>> understand is that if you are pumping 100Mbt into an
>> >>>>> Etherexpress Pro/100 then if the card will not interrupt more
>> >>>>> than this throttled rate you keep talking about, then the
>> >>>>> card's interrupt throttling is going to limit the inbound
>> >>>>> bandwidth to below 100Mbt.
>> >>>>
>> >>>> Wrong again, Ted. It scares me that you consider yourself
>> >>>> knowlegable about this. You can process # interrupts X
>> >>>> ring_size packets; not one per interrupt. You're only polling
>> >>>> 1000x per second (or whatever you have hz set to), so why do
>> >>>> you think that you have to interrupt for every packet to do
>> >>>> 100Mb/s?
>> >>>
>> >>> I never said anything about interrupting for every packet, did
>> >>> I? Of course not since I know what your talking about. However,
>> >>> it is you who are throwing around the numbers - or were in your
>> >>> prior post - regarding the fxp driver and hardware.
>> >>> Why should I have to do the work digging around in the
>> >>> datasheets and doing the math?
>> >>>
>> >>> Since you seem to be wanting to argue this from a theory
>> >>> standpoint, then your only option is to
=== message truncated ===

>
><message too large for stupid Yahoo mailer>
>
>Unfortunately your "test" is not controlled, which is pretty typical
>of most OS testers. Firstly, "efficiency" is the goal. How many
>packets you can pump through a socket interface is not an efficiency
>measurement.

Actually, just about all benchmarks that are used in marketing
routers, including the infamous "xxxx pps" benchmarks, pretty much
only care about how many packets you can pump through the device.
I've never seen a cpu utilization figure included in any of the
Juniper and Cisco marketing literature which claims their product is
faster than the competition. Not that I'm saying this is right, but
it is how the market looks at things.

>What was the load on the machine during your test? how many polls
>per second were being used? What was the interrupt rate for the
>non-polling test? You can't control the test, because iperf is a
>crappy test. You're not trying to measure how much tcp traffic you
>can push. There are way too many variables. For example, an ethernet
>card that iperf might test slower may use 1/2 the cpu of another,
>which makes it a much better card.
>

That only matters if you need that extra CPU power. Assume for a
second that you have 2 ethernet-to-ethernet routers, one with really
efficient nics, and one with really poor nics. Both routers have
enough power to completely saturate the ethernet with a packet stream
going through them, no matter what the size of the packet used. You
have tested and confirmed this. The router with the efficient nics
will probably have a lot lower cpu utilization. No question there.
But, if your need is for a simple router that does no access list
filtering or any of that, on a protected internal network where the
router won't get attacked, then you won't care if the CPU on the
router is running at 90% utilization or 20% utilization. If the
router with the inefficient nics is thousands of bucks cheaper, then
you know what router most companies are going to buy. Once again, as
I said before, not that I'm saying this is right, but it is how the
market looks at things.

>As the load increased, the # of polls/second will increase. So
>unless you know what you're testing, your test results will be
>wrong.
>
>Its too bad you're wasting so much time testing things that should
>be obivious if you understood how they worked. It does make it
>easier to make money in the world, but its not doing the project any
>good.
>

But is it really? Microsoft has answered the same kinds of
inefficiency criticisms for time out of mind with a simple response:
"buy a faster computer". They come out with a new OS like clockwork,
every couple of years, that is at least twice as slow as its
predecessor. You are required to buy the latest, fastest computer
simply to keep the Windows experience at the same speed as the prior
version.

You are arguing from the point that the FreeBSD Project shouldn't
ship until they have completely optimized their "product", even if
doing so delays the newer versions for years and years. The computer
market seems to favor the Microsoft approach. Users will happily
take a more inefficient operating system if it comes with the latest
load of bells and whistles and crap in it. But they won't wait
around for years and years for the FreeBSD project to make the code
as efficient as possible. As I said before, not that I'm saying this
is right, but it is how the market looks at things.

So I question your assertion that "this isn't doing the project any
good": no good from WHAT perspective?

Ted
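[Editor's note: the "# interrupts X ring_size packets" point argued in
the quoted thread can be sanity-checked with a quick back-of-envelope
calculation. This is a sketch, not from the thread itself: the HZ value
of 1000 is the one mentioned above, while the 256-descriptor receive
ring and minimum-frame wire overheads are assumed typical figures, not
numbers taken from the fxp datasheet.]

```shell
#!/bin/sh
# Back-of-envelope check: can polling at HZ=1000 keep up with a
# saturated 100 Mb/s Ethernet without per-packet interrupts?

hz=1000      # polls per second (kern.hz, as mentioned in the thread)
ring=256     # ASSUMED receive-ring size; a plausible fxp-class figure

# Worst case is minimum-size 64-byte frames. On the wire each frame
# also costs 8 bytes of preamble and 12 bytes of inter-frame gap,
# so 84 bytes = 672 bits per frame.
wire_pps=$((100000000 / 672))   # packets/sec at 100 Mb/s line rate

poll_pps=$((hz * ring))         # packets serviceable per second

echo "line rate: $wire_pps pps, polling capacity: $poll_pps pps"
if [ "$poll_pps" -ge "$wire_pps" ]; then
    echo "a full ring per poll at ${hz}Hz outpaces 100Mb/s line rate"
fi
```

With these assumed numbers the ring can deliver 256,000 packets per
second against a line-rate worst case of roughly 148,800, which is the
arithmetic behind the claim that neither polling nor a throttled
interrupt rate needs one event per packet to sustain 100 Mb/s.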