From: Sasa Stupar <sasa@stupar.homelinux.net>
To: Ted Mittelstaedt <tedm@toybox.placo.com>, danial_thom@yahoo.com, Drew Tomlinson <drew@mykitchentable.net>
Cc: freebsd-questions@freebsd.org
Date: Thu, 15 Dec 2005 09:33:43 +0100
Subject: RE: Polling For 100 mbps Connections? (Was Re: Freebsd Theme Song)

--On 14. december 2005 20:01 -0800 Ted Mittelstaedt
<tedm@toybox.placo.com> wrote:

>
>> -----Original Message-----
>> From: Danial Thom [mailto:danial_thom@yahoo.com]
>> Sent: Wednesday, December 14, 2005 11:14 AM
>> To: Ted Mittelstaedt; Drew Tomlinson
>> Cc: freebsd-questions@freebsd.org
>> Subject: RE: Polling For 100 mbps Connections? (Was Re: Freebsd Theme
>> Song)
>>
>
>>> Well, if polling does no good for fxp, due to the hardware doing
>>> controlled interrupts, then why does the fxp driver even let you
>>> set it as an option? And why have many people who have enabled it
>>> on fxp seen an improvement?
>>
>> They haven't. FreeBSD accounting doesn't work properly with polling
>> enabled, and "they" don't have the ability to "know" whether they
>> are getting better performance, because "they", like you, have no
>> clue what they're doing. How about all the idiots running MP with
>> FreeBSD 4.x, when we know it's just a waste of time? "They" all
>> think they're getting worthwhile performance, because "they" are
>> clueless.
>>
>
> I would call them idiots if they are running MP under FreeBSD and
> assuming that they are getting better performance without actually
> testing for it. But if they are just running MP because they happen
> to be using an MP server, and they want to see whether it will work
> or not, who cares?
>
>> Maybe it's tunable because the guy who wrote the driver made it a
>> tunable? Duh. I've yet to see one credible, controlled test of
>> polling vs. properly tuned interrupt-driven operation.
>>
>
> Hm, OK, I believe that. As I recall, I asked you earlier to post the
> test setup you used for your own tests "proving" that polling is
> worse, and you haven't done so yet. Now you are saying you have
> never seen a credible, controlled test of polling vs.
> interrupt-driven. So I guess either you were blind when you ran your
> own tests, or your own tests were not a credible, controlled
> comparison of polling vs. properly tuned interrupt-driven operation.
> As I have been saying all along. Now you're agreeing with me.
>
>> The only advantage of polling is that it will drop packets instead
>> of going into livelock. The disadvantage is that it will drop
>> packets during momentary bursts that would harmlessly put the
>> machine into livelock. That's about it.
>>
>
> Ah, now I think I see what the chip on your shoulder is. You would
> rather have your FreeBSD-based router go into livelock while packets
> stack up than drop anything. You tested the polling code and found
> that, yipes, it drops packets.
>
> What, may I ask, do you think a Cisco or other router does when you
> shove 10 Mbit of traffic into its Ethernet interface destined for a
> host behind a T1 plugged into the other end? (And no, source-quench
> is not the correct answer.)
>
> I think the scenario where it is better to go momentarily into
> livelock during an overload applies to only one case: where the two
> interfaces in the router have the same capacity, as in
> Ethernet-to-Ethernet routers. Most certainly not Ethernet-to-serial
> routers, which is what most routers that aren't on DSL lines are.
>
> If you have a different understanding, then please explain.
>
>>>
>>> I've read those datasheets as well, and the thing I don't
>>> understand is that if you are pumping 100 Mbit into an EtherExpress
>>> Pro/100, and the card will not interrupt at more than this
>>> throttled rate you keep talking about, then the card's interrupt
>>> throttling is going to limit the inbound bandwidth to below
>>> 100 Mbit.
>>
>> Wrong again, Ted. It scares me that you consider yourself
>> knowledgeable about this. You can process (number of interrupts) x
>> (ring size) packets, not one per interrupt. You're only polling
>> 1000x per second (or whatever you have hz set to), so why do you
>> think you have to interrupt for every packet to do 100 Mb/s?
>
> I never said anything about interrupting for every packet, did I?
> Of course not, since I know what you're talking about.
> However, it is you who are throwing around the numbers - or were in
> your prior post - regarding the fxp driver and hardware. Why should
> I have to do the work of digging around in the datasheets and doing
> the math?
>
> Since you seem to want to argue this from a theory standpoint, your
> only option is to do the math. Go ahead, look up the datasheet for
> the 82557. I'm sure it's online somewhere. Tell us what it says
> about throttled interrupts, and run your numbers.
>
>> Do you not understand that packet processing is the same whether
>> it's done on a clock tick or a hardware interrupt? Do you not
>> understand that a clock tick has more overhead (because of other
>> assigned tasks)? Do you not understand that getting exactly 5000
>> hardware interrupts is much more efficient than having 5000 clock
>> tick interrupts per second? What part of this don't you understand?
>>
>
> Well, one part I don't understand is why, when one of those 5000
> clock ticks happens and the fxp driver finds no packets to take off
> the card, it takes the driver the same amount of time to process as
> when it does find packets. At least, that seems to be what you're
> arguing.
>
> As I've stated once before, probably twice, polling is obviously
> less efficient at lower bandwidth. In interrupt-driven mode, to get
> 5000 interrupts per second you most likely have a lot of traffic
> coming in, whereas in polling mode you could get no traffic at all
> in 5000 clock ticks. So clearly, the comparison is always stacked so
> that polling is only a competitor at high bandwidth. Why you insist
> on using low-bandwidth scenarios as examples I cannot understand,
> because nobody in this debate so far has claimed that polling is
> better at low bandwidth.
>
> I am as suspicious of testimonials as the next guy, and it is quite
> true that so far everyone promoting polling in this thread has
> posted no test suites any better than yours - you are basically
> blowing air at each other. But there are a lot of others on the
> Internet who seem to think it works great. I gave you some openings
> to discredit them, and you haven't taken them.
>
> I myself have never tried polling, so I am certainly not going to
> argue against a logical, reasoned explanation of why it's no good at
> high bandwidth. So far, however, you have not posted anything like
> that. And I am still waiting for the test suites behind your claim
> that the networking in 5.4 and later is worse; I don't see why you
> want to diverge into this side issue of polling when the real issue
> is the allegedly worse networking in the newer FreeBSD versions.
>
> Ted

Hmmm, here is a test with iperf that I have done with and without
polling:

**************
------------------------------------------------------------
Client connecting to 192.168.1.200, TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[1816] local 192.168.10.249 port 1088 connected with 192.168.1.200 port 5001
[ ID]  Interval       Transfer     Bandwidth
[1816]  0.0-10.0 sec   108 MBytes   90.1 Mbits/sec

This is when I use the Device polling option on m0n0.
If I disable this option then my transfer is worse:

------------------------------------------------------------
Client connecting to 192.168.1.200, TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[1816] local 192.168.10.249 port 1086 connected with 192.168.1.200 port 5001
[ ID]  Interval       Transfer     Bandwidth
[1816]  0.0-10.0 sec  69.7 MBytes   58.4 Mbits/sec
***************

BTW: my router is m0n0wall (FBSD 4.11).

[Sketches of the polling knobs, the per-tick packet arithmetic, and
the iperf invocation follow below.]

--
Sasa Stupar
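A note on the knob itself, since the thread never spells it out: on a
stock FreeBSD 4.x system (which is what m0n0wall's GUI checkbox is
driving), device polling is a kernel option plus a sysctl. A minimal
sketch, assuming a GENERIC-derived kernel config; HZ=1000 is the value
commonly suggested alongside polling, not a requirement:

   # in the kernel config file, then rebuild and install the kernel
   options HZ=1000           # give polling 1000 chances per second to drain the rings
   options DEVICE_POLLING    # compile in polling support

   # at runtime, flip between the two modes under test
   sysctl kern.polling.enable=1    # 1 = polling, 0 = interrupt-driven (global on 4.x)

The related tunable kern.polling.user_frac (default 50) reserves a
share of each tick for userland, so polling work cannot eat the whole
machine.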
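And since both sides wave at the math without doing it, here is the
rough arithmetic behind the "interrupts x ring size" point, assuming
full-size 1500-byte frames and a 64-descriptor fxp receive ring (both
are assumptions; check the driver and the 82557 datasheet):

   100 Mb/s / (1500 bytes x 8 bits/byte) ~= 8,333 frames/s at line rate
   8,333 frames/s / 1,000 polls/s        ~= 8-9 frames per 1 ms poll

Eight or nine frames per tick fits comfortably in a 64-entry ring, so
at hz=1000 neither polling nor throttled interrupts needs anything
close to one interrupt per packet to carry 100 Mb/s. The stress case
is minimum-size frames (roughly 148,800 frames/s on the wire), where
more frames can arrive between two ticks than the ring can hold.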
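Finally, for anyone wanting to reproduce Sasa's numbers: the output
above does not show the command lines, but with stock iperf defaults
(a 10-second run; the 8.00 KByte TCP window shown is the old default
on Windows clients) a pair of invocations like this produces it, with
192.168.1.200 standing in for the host on the far side of the router:

   receiver$ iperf -s                       # listen on TCP port 5001
   sender$   iperf -c 192.168.1.200 -t 10   # send for 10 seconds, report transfer and bandwidth

Run it once with polling enabled and once with it disabled, with
nothing else on the link, and compare the Bandwidth column as above.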