From owner-freebsd-performance@FreeBSD.ORG Sun Jun 18 00:06:44 2006
Return-Path:
X-Original-To: performance@FreeBSD.org
Delivered-To: freebsd-performance@FreeBSD.ORG
Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 1E2ED16A47A for ; Sun, 18 Jun 2006 00:06:44 +0000 (UTC) (envelope-from danial_thom@yahoo.com)
Received: from web33302.mail.mud.yahoo.com (web33302.mail.mud.yahoo.com [68.142.206.117]) by mx1.FreeBSD.org (Postfix) with SMTP id 4722343D45 for ; Sun, 18 Jun 2006 00:06:43 +0000 (GMT) (envelope-from danial_thom@yahoo.com)
Received: (qmail 68956 invoked by uid 60001); 18 Jun 2006 00:06:42 -0000
DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com; h=Message-ID:Received:Date:From:Reply-To:Subject:To:In-Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding; b=q28EBKUGpapFyXRey24QnWS7qa9AJwi49LGLRxDOJFJ6jh/knhV3+SA07K1Gi55rwsm/Oh5chmIUCMS26zaxxrc0saCrn/K6ZwqZVfugO8mFDXHqbKV8cmkYH9R+5eK/phdowNuqpMIq10a8E/jy/HCiLAQrpoFzeRx0YGSNx3Y= ;
Message-ID: <20060618000642.68954.qmail@web33302.mail.mud.yahoo.com>
Received: from [65.34.182.15] by web33302.mail.mud.yahoo.com via HTTP; Sat, 17 Jun 2006 17:06:42 PDT
Date: Sat, 17 Jun 2006 17:06:42 -0700 (PDT)
From: Danial Thom
To: Robert Watson , performance@FreeBSD.org
In-Reply-To: <20060617134402.O8526@fledge.watson.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 8bit
Cc:
Subject: Re: HZ=100: not necessarily better?
X-BeenThere: freebsd-performance@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
Reply-To: danial_thom@yahoo.com
List-Id: Performance/tuning
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Sun, 18 Jun 2006 00:06:44 -0000

--- Robert Watson wrote:

> Scott asked me if I could take a look at the impact of changing HZ for some simple TCP performance tests. I ran the first couple, and got some results that were surprising, so I thought I'd post about them and ask people who are interested if they could do some investigation also. The short of it is that we had speculated that the increased CPU overhead of a higher HZ would be significant when it came to performance measurement, but in fact, I measure improved performance under high HTTP load with a higher HZ. This was, of course, the reason we first looked at increasing HZ: improving timer granularity helps improve the performance of network protocols, such as TCP. Recent popular opinion has swung in the opposite direction, that higher HZ overhead outweighs this benefit, and I think we should be cautious and do a lot more investigating before assuming that is true.
>
> Simple performance results below. Two boxes on a gig-e network with if_em ethernet cards, one running a simple web server hosting 100 byte pages, and the other downloading them in parallel (netrate/http and netrate/httpd). The performance difference is marginal, but at least in the SMP case, likely more than a measurement error or cache alignment fluke. Results are transactions/second sustained over a 30 second test -- bigger is better; box is a dual xeon p4 with HTT; 'vendor.*' are the default 7-CURRENT HZ setting (1000) and 'hz.*' are the HZ=100 versions of the same kernels. Regardless, there wasn't an obvious performance improvement by reducing HZ from 1000 to 100. Results may vary, use only as directed.
> What we might want to explore is using a programmable timer to set up high precision timeouts, such as TCP timers, while keeping base statistics profiling and context switching at 100hz. I think phk has previously proposed doing this with the HPET timer.
>
> I'll run some more diverse tests today, such as raw bandwidth tests, pps on UDP, and so on, and see where things sit. The reduced overhead should be measurable in cases where the test is CPU-bound and there's no clear benefit to more accurate timing, such as with TCP, but it would be good to confirm that.
>
> Robert N M Watson
> Computer Laboratory
> University of Cambridge
>
> peppercorn:~/tmp/netperf/hz> ministat *SMP
> x hz.SMP
> + vendor.SMP
> [distribution plot elided]
>     N    Min    Max  Median      Avg     Stddev
> x  10  13715  13793   13750  13751.1  29.319883
> +  10  13813  13970   13921  13906.5  47.551726
> Difference at 95.0% confidence
>         155.4 +/- 37.1159
>         1.13009% +/- 0.269913%
>         (Student's t, pooled s = 39.502)
>
> peppercorn:~/tmp/netperf/hz> ministat *UP
> x hz.UP
> + vendor.UP
> [distribution plot elided]
>     N    Min    Max  Median      Avg     Stddev
> x  10  14067  14178   14116  14121.2  31.279386
> +  10  14141  14257   14170  14175.9  33.248058
> Difference at 95.0% confidence
>         54.7 +/- 30.329
>         0.387361% +/- 0.214776%
>         (Student's t, pooled s = 32.2787)
>
> _______________________________________________
> freebsd-performance@freebsd.org mailing list
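[The granularity effect under discussion comes from timeouts being scheduled in whole clock ticks: a tick is 10 ms at HZ=100 and 1 ms at HZ=1000, and a requested timeout rounds up to the next tick boundary. A minimal sketch of that rounding -- illustrative arithmetic only, not the kernel's actual callout code, and the 3 ms TCP retransmit timeout is a made-up figure:]

  #include <stdio.h>

  /* Illustrative only: a timeout cannot fire between ticks, so it is
   * rounded up to a whole number of ticks (minimum one full tick). */
  static double quantize_ms(double ms, int hz)
  {
      double tick_ms = 1000.0 / hz;
      long ticks = (long)(ms / tick_ms);
      if (ticks * tick_ms < ms)
          ticks++;                /* round up to the next tick boundary */
      if (ticks < 1)
          ticks = 1;              /* never less than one full tick */
      return ticks * tick_ms;
  }

  int main(void)
  {
      double rto = 3.0;           /* hypothetical 3 ms retransmit timeout */
      printf("requested %.1f ms: HZ=100 -> %.1f ms, HZ=1000 -> %.1f ms\n",
             rto, quantize_ms(rto, 100), quantize_ms(rto, 1000));
      return 0;
  }

[The same request fires at 10 ms on an HZ=100 kernel but at 3 ms on an HZ=1000 kernel; timer slop of that size is the sort of thing that can show up in the TCP-bound HTTP numbers above.]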
And what was the cost in cpu load to get the extra couple of bytes of throughput?

Machines have to do other things too. That is the entire point of SMP processing. Of course increasing the granularity of your clocks will cause you to process events that are clock-reliant more quickly, so you might see more "throughput", but there is a cost. Weighing (and measuring) those costs is more important than what a single benchmark does.

At some point you're going to have to figure out that there's a reason that every time anyone other than you tests FreeBSD it completely pigs out. Squeezing out some extra bytes in netperf isn't "performance". Performance is everything that a system can do. If you're eating 10% more cpu to get a few more bytes in netperf, you haven't increased the performance of the system.

You need to do things like run 2 benchmarks at once. What happens to the "performance" of one benchmark when you increase the "performance" of the other? Run a database benchmark while you're running a network benchmark, or while you're passing a controlled stream of traffic through the box.

I just finished a couple of simple tests and find that 6.1 has not improved at all since 5.3 in basic interrupt processing and context switching performance (which is the basic building block for all system performance). Bridging 140K pps (a full 100Mb/s load) uses 33% of the cpu(s) in FreeBSD 6.1, and 17% in DragonFly 1.5.3, on a dual-core 1.8GHz Opteron system.
(I finally got vmstat to work properly after getting rid of your stupid 2 second timeout in the MAC learning table). I'll be doing some MySQL benchmarks next week while passing a controlled stream through the system. But since I know that the controlled stream eats up twice as much CPU on FreeBSD, I already know much of the answer, since FreeBSD will have much less CPU left over to work with.

It's unfortunate that you seem to be tuning for one thing while completely unaware of all of the other things you're breaking in the process. The Linux camp understands that in order to scale well they have to sacrifice some network performance. Sadly they've gone too far and now the OS is no longer suitable as a high-end network appliance. I'm not sure what Matt understands because he never answers any questions, but his results are so far quite impressive. One thing for certain is that it's not all about how many packets you can hammer out your socket interface (nor has it ever been). It's about improving the efficiency of the system on an overall basis. That's what SMP processing is all about, and you're never going to get where you want to be using netperf as your guide.

I'd also love to see the results of the exact same test with only 1 cpu enabled, to see how well you scale generally. I'm astounded that no one ever seems to post 1 vs 2 cpu performance, which is the entire point of SMP.

DT

__________________________________________________
Do You Yahoo!?
Tired of spam? Yahoo! Mail has the best spam protection around
http://mail.yahoo.com

From owner-freebsd-performance@FreeBSD.ORG Sun Jun 18 00:17:37 2006
Return-Path:
X-Original-To: performance@freebsd.org
Delivered-To: freebsd-performance@FreeBSD.ORG
Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 9364616A47A; Sun, 18 Jun 2006 00:17:37 +0000 (UTC) (envelope-from scottl@samsco.org)
Received: from pooker.samsco.org (pooker.samsco.org [168.103.85.57]) by mx1.FreeBSD.org (Postfix) with ESMTP id 0337043D5F; Sun, 18 Jun 2006 00:17:30 +0000 (GMT) (envelope-from scottl@samsco.org)
Received: from [192.168.254.14] (imini.samsco.home [192.168.254.14]) (authenticated bits=0) by pooker.samsco.org (8.13.4/8.13.4) with ESMTP id k5I0HMRA057252; Sat, 17 Jun 2006 18:17:27 -0600 (MDT) (envelope-from scottl@samsco.org)
Message-ID: <44949B92.2010500@samsco.org>
Date: Sat, 17 Jun 2006 18:17:22 -0600
From: Scott Long
User-Agent: Mozilla/5.0 (Macintosh; U; PPC Mac OS X Mach-O; en-US; rv:1.7.7) Gecko/20050416
X-Accept-Language: en-us, en
MIME-Version: 1.0
To: danial_thom@yahoo.com
References: <20060618000642.68954.qmail@web33302.mail.mud.yahoo.com>
In-Reply-To: <20060618000642.68954.qmail@web33302.mail.mud.yahoo.com>
Content-Type: text/plain; charset=us-ascii; format=flowed
Content-Transfer-Encoding: 7bit
X-Spam-Status: No, score=-1.4 required=3.8 tests=ALL_TRUSTED autolearn=failed version=3.1.1
X-Spam-Checker-Version: SpamAssassin 3.1.1 (2006-03-10) on pooker.samsco.org
Cc: performance@freebsd.org, Robert Watson
Subject: Re: HZ=100: not necessarily better?
X-BeenThere: freebsd-performance@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Performance/tuning
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Sun, 18 Jun 2006 00:17:37 -0000

Danial Thom wrote:
> [full quote of the previous message snipped]

You have some valid points, but they get lost in your overly abrasive tone. Several of us have watched your behaviour on the DFly lists, and I dearly hope that it doesn't overflow to our lists. It would be a shame to lose your insight and input.

Scott

From owner-freebsd-performance@FreeBSD.ORG Sun Jun 18 00:21:45 2006
Return-Path:
X-Original-To: performance@FreeBSD.org
Delivered-To: freebsd-performance@FreeBSD.ORG
Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 7098416A47C for ; Sun, 18 Jun 2006 00:21:45 +0000 (UTC) (envelope-from rwatson@FreeBSD.org)
Received: from cyrus.watson.org (cyrus.watson.org [209.31.154.42]) by mx1.FreeBSD.org (Postfix) with ESMTP id 1E23743D48 for ; Sun, 18 Jun 2006 00:21:45 +0000 (GMT) (envelope-from rwatson@FreeBSD.org)
Received: from fledge.watson.org (fledge.watson.org [209.31.154.41]) by cyrus.watson.org (Postfix) with ESMTP id 6504946B95; Sat, 17 Jun 2006 20:21:44 -0400 (EDT)
Date: Sun, 18 Jun 2006 01:21:44 +0100 (BST)
From: Robert Watson
X-X-Sender: robert@fledge.watson.org
To: Danial Thom
In-Reply-To: <20060618000642.68954.qmail@web33302.mail.mud.yahoo.com>
Message-ID: <20060618010959.O67789@fledge.watson.org>
References: <20060618000642.68954.qmail@web33302.mail.mud.yahoo.com>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed
Cc: performance@FreeBSD.org
Subject: Re: HZ=100: not necessarily better?
X-BeenThere: freebsd-performance@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Performance/tuning
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Sun, 18 Jun 2006 00:21:45 -0000

On Sat, 17 Jun 2006, Danial Thom wrote:

> At some point you're going to have to figure out that there's a reason that every time anyone other than you tests FreeBSD it completely pigs out. Squeezing out some extra bytes in netperf isn't "performance". Performance is everything that a system can do. If you're eating 10% more cpu to get a few more bytes in netperf, you haven't increased the performance of the system.

This test wasn't netperf, it was a 32-process web server and a 32-process client, doing sendfile on UFS-backed data files. It was definitely a potted benchmark, in that it omits some of the behaviors of web servers (dynamic content, significantly variable data set, etc), but is intended to be more than a simple micro-benchmark involving two sockets and packet blasting.
Specifically, it was intended to validate whether or not there were immediately observable changes in TCP behavior based on adjusting HZ under load. The answer was a qualified yes: there was a small but noticeable negative effect on high load web serving in the test environment by reducing HZ, likely due to reduced timer accuracy. Specifically: simply frobbing HZ isn't a strategy that necessarily results in a performance improvement.

> You need to do things like run 2 benchmarks at once. What happens to the "performance" of one benchmark when you increase the "performance" of the other? Run a database benchmark while you're running a network benchmark, or while you're passing a controlled stream of traffic through the box.

The point of this exercise was to demonstrate the complexity of the issue of adjusting HZ, and to suggest that simply changing the value in the absence of further evidence could have negative effects, and that we might want to investigate a more mature middle ground, such as a modified timer architecture. I'm sorry if that conclusion wasn't clear from my e-mail.

> I'd also love to see the results of the exact same test with only 1 cpu enabled, to see how well you scale generally. I'm astounded that no one ever seems to post 1 vs 2 cpu performance, which is the entire point of SMP.

Single CPU results were included in my e-mail. There are actually a couple of other variations of interest you want to measure in more general benchmarking exercises:

- Kernel compiled without any SMP support. Specifically, without lock prefixes on atomic instructions.

- Kernel compiled with SMP support, but with use of additional CPUs disabled.

- Kernel compiled with SMP support, and with varying numbers of CPUs enabled.

The first two cases are important, because they help identify the difference between the general overhead of compiling in locked instructions (and related issues), and the overheads associated with contention, caches, inter-CPU IPI traffic, scheduling, etc. By failing to compare the top two cases, it might be easy to conclude that a performance difference is due to the additional cost of atomic instructions, whereas in reality it may be the result of a poor scheduling decision, or of data unnecessarily cache missing in both CPUs rather than one because processing of the data is split poorly over available CPUs.
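[For the middle variation, a minimal sketch of keeping the extra CPUs out of play without a rebuild, assuming the stock tunables of the 6.x/7-CURRENT era -- knob names worth verifying on your own system:]

  # /boot/loader.conf: boot the SMP kernel but bring up only the boot CPU
  kern.smp.disabled=1

  # on a running box with HTT, the logical CPUs can be halted instead:
  # sysctl machdep.hlt_logical_cpus=1

[The first variation is simply the same kernel config with "options SMP" removed, which is what drops the lock prefixes.]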
Robert N M Watson
Computer Laboratory
University of Cambridge

From owner-freebsd-performance@FreeBSD.ORG Sun Jun 18 01:30:54 2006
Return-Path:
X-Original-To: performance@freebsd.org
Delivered-To: freebsd-performance@FreeBSD.ORG
Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id E2FCA16A47D for ; Sun, 18 Jun 2006 01:30:54 +0000 (UTC) (envelope-from danial_thom@yahoo.com)
Received: from web33303.mail.mud.yahoo.com (web33303.mail.mud.yahoo.com [68.142.206.118]) by mx1.FreeBSD.org (Postfix) with SMTP id EFE1043D46 for ; Sun, 18 Jun 2006 01:30:53 +0000 (GMT) (envelope-from danial_thom@yahoo.com)
Received: (qmail 45146 invoked by uid 60001); 18 Jun 2006 01:30:53 -0000
DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com; h=Message-ID:Received:Date:From:Reply-To:Subject:To:Cc:In-Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding; b=u+HLcEI5NMow4ORyEQoVQO5NMV5UM/qnYjff5Jcx3TV/1NL6OWGw4oGa+jftqaTp37JnBRo0LxVPUxVzYwRTTJ/7CAu5TlCCFdyFzlcK5BbgPUZ2L4l9Dw1LEIw9HD65aFfStuiY9TM9CfT06bmKOL7oWLfGtvcqs111VsVuFu4= ;
Message-ID: <20060618013053.45144.qmail@web33303.mail.mud.yahoo.com>
Received: from [65.34.182.15] by web33303.mail.mud.yahoo.com via HTTP; Sat, 17 Jun 2006 18:30:53 PDT
Date: Sat, 17 Jun 2006 18:30:53 -0700 (PDT)
From: Danial Thom
To: Scott Long
In-Reply-To: <44949B92.2010500@samsco.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 8bit
Cc: Robert Watson , performance@freebsd.org
Subject: Re: HZ=100: not necessarily better?
X-BeenThere: freebsd-performance@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
Reply-To: danial_thom@yahoo.com
List-Id: Performance/tuning
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Sun, 18 Jun 2006 01:30:55 -0000

-------
You have some valid points, but they get lost in your overly abrasive tone. Several of us have watched your behaviour on the DFly lists, and I dearly hope that it doesn't overflow to our lists. It would be a shame to lose your insight and input.

Scott
-------

Well I only have a few days to play with it, as it seems unlikely I'll be able to use FreeBSD for what I need it for. So the scorched-earth approach is the current choice of action :) If you tell Matt he's wrong he ignores you whether you're a nice guy or a lout, so at least I get to blow off some steam. It's easy enough to create another alias.

DT

__________________________________________________
Do You Yahoo!?
Tired of spam? Yahoo!
Mail has the best spam protection around
http://mail.yahoo.com

From owner-freebsd-performance@FreeBSD.ORG Sun Jun 18 15:31:11 2006
Return-Path:
X-Original-To: performance@FreeBSD.org
Delivered-To: freebsd-performance@FreeBSD.ORG
Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id D518316A47A for ; Sun, 18 Jun 2006 15:31:11 +0000 (UTC) (envelope-from danial_thom@yahoo.com)
Received: from web33304.mail.mud.yahoo.com (web33304.mail.mud.yahoo.com [68.142.206.119]) by mx1.FreeBSD.org (Postfix) with SMTP id 3765D43D4C for ; Sun, 18 Jun 2006 15:31:11 +0000 (GMT) (envelope-from danial_thom@yahoo.com)
Received: (qmail 52760 invoked by uid 60001); 18 Jun 2006 15:31:10 -0000
DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com; h=Message-ID:Received:Date:From:Reply-To:Subject:To:Cc:In-Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding; b=Neqa/n2z2Iw0uLNNf0OU00rm+fljHONsI2xszDZ9FgvDIc0xyvBZF+veAGkiB4eXSjXrhXw3OU7annhm2A9MSYSccxi8f7rZ3/m4rDDFdmj8xu2U11LSepzjama2myTOp9C4bX2lR/BcQpdifIYBk3buP9YPykE6qDf6NKrtt5U= ;
Message-ID: <20060618153110.52758.qmail@web33304.mail.mud.yahoo.com>
Received: from [65.34.182.15] by web33304.mail.mud.yahoo.com via HTTP; Sun, 18 Jun 2006 08:31:10 PDT
Date: Sun, 18 Jun 2006 08:31:10 -0700 (PDT)
From: Danial Thom
To: Robert Watson
In-Reply-To: <20060618010959.O67789@fledge.watson.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 8bit
Cc: performance@FreeBSD.org
Subject: Re: HZ=100: not necessarily better?
X-BeenThere: freebsd-performance@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
Reply-To: danial_thom@yahoo.com
List-Id: Performance/tuning
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Sun, 18 Jun 2006 15:31:11 -0000

--- Robert Watson wrote:
> [full quote of the previous message snipped]
Of course there is a UP test, and now I see that UP wins again. It would be interesting to see some sort of test run at lower contention levels. I'd think that UP would gain an advantage as resources become scarce, as more switching and locking would be required while waiting. As contention for sockets or kernel-level resources grows, SMP would be less and less efficient with the added overhead.

It seems to me that the decision of what the default value of HZ should be for a general purpose OS should take into account what the majority of users are doing. *Most* people aren't running fully loaded web servers. The argument shouldn't be "can you get better performance at the high end with a different setting", it should be "what's the most efficient setting for general use". That's what "GENERIC" is all about. I tried to impress upon Matt (without any response at all, of course) that raising ITR to 10000 for the em driver doesn't make sense, because virtually no one in their camp is pushing enough traffic to make that setting worthwhile. That the performance may be better when pushing 70K pps should make it a tuning note and not a default setting. If you're not on a gigabit network a setting of 10K makes no sense at all.
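[To put rough numbers on the ITR point -- back-of-the-envelope arithmetic, not a measurement; the 10000 ints/sec ceiling is the setting under discussion and the packet rates are illustrative:]

  #include <stdio.h>

  /* Rough model of interrupt moderation: the NIC will not interrupt more
   * than "itr" times per second. Below the ceiling every packet can still
   * take its own interrupt; coalescing only begins above it. */
  int main(void)
  {
      const double itr = 10000.0;  /* moderation ceiling, interrupts/sec */
      const double pps[] = { 8000.0, 70000.0, 140000.0 };
      for (int i = 0; i < 3; i++) {
          double ints = pps[i] < itr ? pps[i] : itr;
          printf("%6.0f pps -> %5.0f ints/sec, %4.1f packets/interrupt\n",
                 pps[i], ints, pps[i] / ints);
      }
      return 0;
  }

[On a 100Mb-class network the ceiling never engages, so the setting changes nothing there; it only starts coalescing once traffic climbs well past 10K pps.]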
DT

__________________________________________________
Do You Yahoo!?
Tired of spam? Yahoo! Mail has the best spam protection around
http://mail.yahoo.com

From owner-freebsd-performance@FreeBSD.ORG Thu Jun 22 10:35:21 2006
Return-Path:
X-Original-To: performance@freebsd.org
Delivered-To: freebsd-performance@FreeBSD.ORG
Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id C4D4716A5C5 for ; Thu, 22 Jun 2006 10:35:14 +0000 (UTC) (envelope-from maamoo@gmail.com)
Received: from wx-out-0102.google.com (wx-out-0102.google.com [66.249.82.193]) by mx1.FreeBSD.org (Postfix) with ESMTP id E5A1F43D53 for ; Thu, 22 Jun 2006 10:35:13 +0000 (GMT) (envelope-from maamoo@gmail.com)
Received: by wx-out-0102.google.com with SMTP id h30so264534wxd for ; Thu, 22 Jun 2006 03:35:13 -0700 (PDT)
DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=beta; d=gmail.com; h=received:message-id:date:from:to:subject:mime-version:content-type; b=dDcJG+iAZMMhMKB8oAEGhEXLsPjtrZY6BiFVJjBlS/qcQwFCbUgV0+Q3vcWXV72cQnYo4bVFsOGGrkDhilraox5FTpcVV7if9T9CkdFjtRnpHYEv4HZ+LlloRaUSQ1BPceLDDuFVsMQzRQfgJaAfuRFrhr9cT7rae5MNQXumEA0=
Received: by 10.70.30.10 with SMTP id d10mr2868510wxd; Thu, 22 Jun 2006 03:35:13 -0700 (PDT)
Received: by 10.70.90.16 with HTTP; Thu, 22 Jun 2006 03:35:12 -0700 (PDT)
Message-ID:
Date: Thu, 22 Jun 2006 18:35:12 +0800
From: "S H A N"
To: performance@freebsd.org
MIME-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Content-Disposition: inline
X-Content-Filtered-By: Mailman/MimeDel 2.1.5
Cc:
Subject: Poor FreeBSD Performance under cacti/snmp usage when using remote shell
X-BeenThere: freebsd-performance@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Performance/tuning
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Thu, 22 Jun 2006 10:35:21 -0000

hi folks,

I am observing that for a machine with the following specs:

Copyright (c) 1992-2005 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
        The Regents of the University of California. All rights reserved.
FreeBSD 6.0-STABLE #3: Thu Jan 26 20:38:29 SGT 2006
    root@ix-nw.ix.singtel.com:/usr/obj/usr/src/sys/IX-NW
Timecounter "i8254" frequency 1193182 Hz quality 0
CPU: Intel(R) Xeon(TM) CPU 2.80GHz (2791.01-MHz 686-class CPU)
  Origin = "GenuineIntel" Id = 0xf29 Stepping = 9
  Features=0xbfebfbff
  Features2=0x4400>
Hyperthreading: 2 logical CPUs
real memory = 4160552960 (3967 MB)
avail memory = 4074123264 (3885 MB)
-- snip --
amr0: mem 0xfebf0000-0xfebfffff irq 72 at device 8.0 on pci8
amr0: Firmware 2.37, BIOS 1.05, 128MB RAM
-- snip --
amrd0: on amr0
amrd0: 104034MB (213061632 sectors) RAID 5 (optimal)
ses0 at amr0 bus 0 target 6 lun 0
ses0: Fixed Processor SCSI-2 device
ses0: SAF-TE Compliant Device

configured with:

em0: flags=8843 mtu 1500
        options=b
        -- snip --
        ether 00:0b:db:94:f8:65
        media: Ethernet autoselect (100baseTX )
        status: active

and running cacti/snmp applications (polling 83 network devices/1831 interfaces in less than 9 secs every 300 secs) is producing this kind of gstat output:

dT: 0.501  flag_I 500000us  sizeof 240  i -1
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w    d/s   kBps   ms/d   %busy Name
   46    315      0      0    0.0    315   5491  138.3      0      0    0.0   99.9| amrd0
   46    315      0      0    0.0    315   5491  139.8      0      0    0.0   99.9| amrd0s1
   46    315      0      0    0.0    315   5491  139.9      0      0    0.0   99.9| amrd0s1f

consistently for a few minutes...
while the memory condition is reported to be:

Active:    288006144 Bytes
Inactive: 3293306880 Bytes
Wired:     218345472 Bytes
Reserved:    5644288 Bytes
Cache:     192761856 Bytes
Kernel:       139264 Bytes
Interrupt:      8192 Bytes
Buffer:    117211136 Bytes
Total:    4078235648 Bytes
Free:       85340160 Bytes

my question would be: is that acceptable behaviour? i.e. should I look for an upgrade, or is there a tuning which can still make something out of it? because when I access this machine for usual tasks of editing files and copying stuff it sometimes hangs for a few minutes..

it is a source compiled box with the following make.conf

--- make.conf ---
# apache
WITH_APACHE_PERF_TUNING=yes
WITH_THREADS_MODULES=yes
# libiconv
WITHOUT_EXTRA_ENCODINGS=yes
# php
WITH_GD=yes
# gd
WITH_XPM=yes
WITH_LZW=yes
# mysql
WITH_CHARSET=latin1
WITH_XCHARSET=latin1
WITH_PROC_SCOPE_PTH=yes
BUILD_OPTIMIZED=yes
BUILD_STATIC=yes
#mtr
WITHOUT_X11=yes
# snmp
NET_SNMP_SYS_CONTACT="shanali@singtel.com"
NET_SNMP_SYS_LOCATION="Telepark"
DEFAULT_SNMP_VERSION=3
NET_SNMP_MIB_MODULES="host smux ucd-snmp/diskio"
NET_SNMP_LOGFILE=/var/log/snmpd.log
NET_SNMP_PERSISTENTDIR=/var/net-snmp
#make
MAKEOPTS=-j4
#noprofile
NO_PROFILE= true
# added by use.perl 2006-01-26 21:28:26
PERL_VER=5.8.7
PERL_VERSION=5.8.7
--- make.conf ---

while cacti is doing pretty fine...

06/22/2006 06:10:09 PM - SYSTEM STATS: Time:7.9338 Method:cactid Processes:1 Threads:30 Hosts:83 HostsPerProcess:83 DataSources:3218 RRDsProcessed:1831
06/22/2006 06:15:12 PM - SYSTEM STATS: Time:9.7663 Method:cactid Processes:1 Threads:30 Hosts:83 HostsPerProcess:83 DataSources:3218 RRDsProcessed:1819
06/22/2006 06:20:09 PM - SYSTEM STATS: Time:7.6809 Method:cactid Processes:1 Threads:30 Hosts:83 HostsPerProcess:83 DataSources:3218 RRDsProcessed:1831
06/22/2006 06:25:10 PM - SYSTEM STATS: Time:8.2421 Method:cactid Processes:1 Threads:30 Hosts:83 HostsPerProcess:83 DataSources:3218 RRDsProcessed:1831
06/22/2006 06:30:10 PM - SYSTEM STATS: Time:8.6010 Method:cactid Processes:1 Threads:30 Hosts:83 HostsPerProcess:83 DataSources:3218 RRDsProcessed:1831

Any assistance on where to look for an appropriate tuning would be much appreciated.

--
Best Regards.
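[For anyone skimming the gstat sample above, the quoted figures work out as follows -- plain arithmetic on the numbers shown, nothing more:]

  #include <stdio.h>

  /* Arithmetic on the gstat sample: 315 writes/sec moving 5491 kB/sec
   * at ~138 ms per write, with 46 requests queued at 99.9% busy. */
  int main(void)
  {
      double wps = 315.0, kbps = 5491.0, queued = 46.0;
      printf("average write size: %.1f kB\n", kbps / wps);                 /* ~17.4 kB */
      printf("time to drain the queue: %.0f ms\n", queued / wps * 1000.0); /* ~146 ms */
      return 0;
  }

[Many small writes with a deep queue and the volume pegged near 100% busy is consistent with the periodic RRD updates saturating the RAID 5 array's write path, which would fit the interactive stalls described above.]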
From owner-freebsd-performance@FreeBSD.ORG Thu Jun 22 13:19:32 2006
Return-Path:
X-Original-To: performance@freebsd.org
Delivered-To: freebsd-performance@FreeBSD.ORG
Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id E527516A47B for ; Thu, 22 Jun 2006 13:19:32 +0000 (UTC) (envelope-from fehwalker@gmail.com)
Received: from ug-out-1314.google.com (ug-out-1314.google.com [66.249.92.172]) by mx1.FreeBSD.org (Postfix) with ESMTP id 4C01C43D62 for ; Thu, 22 Jun 2006 13:19:32 +0000 (GMT) (envelope-from fehwalker@gmail.com)
Received: by ug-out-1314.google.com with SMTP id m3so513000uge for ; Thu, 22 Jun 2006 06:19:31 -0700 (PDT)
DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=beta; d=gmail.com; h=received:message-id:date:from:to:subject:in-reply-to:mime-version:content-type:content-transfer-encoding:content-disposition:references; b=IjuCtPWYM2Uw355k+CPEXXpm3uy9ObwqExWolxfuLBe813QOkBPPnpSLC0ncTttwbywmYwkqfqrscznZV6VrIIg9wg/Seq6lmxdBANVQIJ9LsBKYsQ2YTQL2pY6vbNzq32PTVwcRoybRw3aNCzMQVAa6SEYXIKjQZqlMGqT3qPM=
Received: by 10.66.240.12 with SMTP id n12mr1072140ugh; Thu, 22 Jun 2006 06:13:07 -0700 (PDT)
Received: by 10.67.22.8 with HTTP; Thu, 22 Jun 2006 06:13:06 -0700 (PDT)
Message-ID: <35de0c300606220613n44abe4ccy2d9a1670a1c6ee9c@mail.gmail.com>
Date: Thu, 22 Jun 2006 09:13:06 -0400
From: "Bryan Fullerton"
To: performance@freebsd.org
In-Reply-To:
MIME-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Content-Disposition: inline
References:
Cc:
Subject: Re: Poor FreeBSD Performance under cacti/snmp usage when using remote shell
X-BeenThere: freebsd-performance@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Performance/tuning
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Thu, 22 Jun 2006 13:19:33 -0000

On 6/22/06, S H A N wrote:
> because when I access this machine for usual tasks of editing files and copying stuff it sometimes hangs for a few minutes..

Have you confirmed that there are no issues with the network? Do you see this hanging when using the machine locally?

> it is a source compiled box with the following make.conf

How does the kernel config differ from GENERIC?
Bryan

From owner-freebsd-performance@FreeBSD.ORG Fri Jun 23 01:06:15 2006
Return-Path:
X-Original-To: performance@freebsd.org
Delivered-To: freebsd-performance@FreeBSD.ORG
Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 6650716A4D5 for ; Fri, 23 Jun 2006 01:06:15 +0000 (UTC) (envelope-from maamoo@gmail.com)
Received: from wx-out-0102.google.com (wx-out-0102.google.com [66.249.82.201]) by mx1.FreeBSD.org (Postfix) with ESMTP id 7EEF643D46 for ; Fri, 23 Jun 2006 01:06:14 +0000 (GMT) (envelope-from maamoo@gmail.com)
Received: by wx-out-0102.google.com with SMTP id h30so360599wxd for ; Thu, 22 Jun 2006 18:06:13 -0700 (PDT)
DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=beta; d=gmail.com; h=received:message-id:date:from:to:subject:cc:in-reply-to:mime-version:content-type:references; b=T30JB8MWpEW+VytsSPRU2EGessb+iEunSKAlO2n+TEvttkNBKYfHonH824/AdDUph4aXiXxit/lyLToFulGwo8xQwuZpeYvoo1QggVdu4vntlvIw+OneER8G/jYz5U73wIV4Nn/suNUvRA4zEF5jZmX2zQKpQQOS4/bfSFTV5ZA=
Received: by 10.70.71.9 with SMTP id t9mr3844334wxa; Thu, 22 Jun 2006 18:06:13 -0700 (PDT)
Received: by 10.70.90.16 with HTTP; Thu, 22 Jun 2006 18:06:13 -0700 (PDT)
Message-ID:
Date: Fri, 23 Jun 2006 09:06:13 +0800
From: "S H A N"
To: "Bryan Fullerton"
In-Reply-To: <35de0c300606220613n44abe4ccy2d9a1670a1c6ee9c@mail.gmail.com>
MIME-Version: 1.0
References: <35de0c300606220613n44abe4ccy2d9a1670a1c6ee9c@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Content-Disposition: inline
X-Content-Filtered-By: Mailman/MimeDel 2.1.5
Cc: performance@freebsd.org
Subject: Re: Poor FreeBSD Performance under cacti/snmp usage when using remote shell
X-BeenThere: freebsd-performance@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Performance/tuning
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Fri, 23 Jun 2006 01:06:15 -0000

hi,

console access is not tested, but I do have another FreeBSD machine (same specs, just not doing anything other than secure shell services) and it's on the same network (network path, switch etc), basically sitting next to it on the rack.. and it's always very very responsive and fast...

for the kernel config it differs in the following options (the rest is all GENERIC):

machine         i386
cpu             I686_CPU
ident           IX-NW
device          snp
options         ALTQ
options         ALTQ_CBQ
options         ALTQ_RED
options         ALTQ_RIO
options         ALTQ_HFSC
options         ALTQ_PRIQ
options         ALTQ_NOPCC
options         SMP

best regards!

On 6/22/06, Bryan Fullerton wrote:
>
> On 6/22/06, S H A N wrote:
> > because when I access this machine for usual tasks of editing files and copying stuff it sometimes hangs for a few minutes..
>
> Have you confirmed that there are no issues with the network? Do you
> see this hanging when using the machine locally?
>
> > it is a source compiled box with the following make.conf
>
> How does the kernel config differ from GENERIC?
>
> Bryan
> _______________________________________________
> freebsd-performance@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-performance
> To unsubscribe, send any mail to "freebsd-performance-unsubscribe@freebsd.org"

--
Best Regards.