From owner-freebsd-performance@FreeBSD.ORG Wed Dec 3 11:10:12 2008
From: Vadim Goncharov
Reply-To: vadim_nuclight@mail.ru
To: freebsd-performance@freebsd.org
Date: Wed, 3 Dec 2008 11:09:58 +0000 (UTC)
Subject: Re: hwpmc granularity and 6.4 network performance
Organization: Nuclear Lightning @ Tomsk, TPU AVTF Hostel
User-Agent: slrn/0.9.8.1 (FreeBSD)
List-Id: Performance/tuning

Hi Adrian Chadd!

On Tue, 25 Nov 2008 15:09:19 -0500, Adrian Chadd wrote about
'Re: hwpmc granularity and 6.4 network performance':

> * Since you've changed two things - hwpmc _AND_ the kernel version -
> you can't easily conclude which one (if any!) has any influence on
> Giant showing up in your top output. I suggest recompiling without
> hwpmc and seeing if the behaviour changes.

That is not so easy to do right now :) I may check it in a few weeks.

> * The gprof utility expects something resembling "time" for the
> sampling data, but pmcstat doesn't record time, it records "events".
> The counts you see in gprof are "events", so change "seconds" to
> "events" in your reading of the gprof output.

Of course, I know this, but it doesn't change the percentages.

> * I don't know if the backported pmc to 6.4 handles stack call graphs
> or not. Easy way to check - pmcstat -R sample.out | more ; see if you
> just see "sample" lines or "sample" and "callgraph" lines.

No, only "sample" lines.

> * I bet that ipfw_chk is a big enough hint. How big is your ipfw ruleset? :)

It's not that big in rule count, and not that precise a hint, but it is
of course big as a CPU hog :)

router# ipfw show | wc -l
      70

Not that many, right? So I want to see which parts are more CPU-intensive,
to use as a hint when rewriting the ruleset. I've heard about the
pmcannotate tool on -arch@, and I think it does exactly what I want, but
it requires a patch for pmcstat which didn't apply on my 6.4 - too much
was different :(

>> OK, I can conclude from this that I should optimize my ipfw ruleset, but
>> that's all. I know from the sources that ipfw_chk() is a big function
>> with a bunch of 'case's in a large 'switch'. I want to know which parts
>> of that switch are executed more often. The listing says the granularity
>> is 4 bytes, so I assume there is a sample count for each 4-byte chunk of
>> the binary code, which means the information must be there. My kernel is
>> compiled with:
>>
>> makeoptions DEBUG=-g
>>
>> so kgdb knows where the instructions for each line of source code are.
>> How can I obtain this info from profiling? It would also be useful to
>> know which places call that bcmp() and rn_match().
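Meanwhile, here is the rough plan I want to try by hand, as a poor man's
pmcannotate. This is an untested sketch: the record layout printed by
"pmcstat -R" differs between versions, so the awk field below is a guess
to adjust, and the kernel config name and the sample address are just
example values.

# 1) Get ipfw_chk's address range: with "nm -n" the next symbol in the
#    sorted output is the function's upper bound. (Assumes ipfw is
#    compiled into the kernel; for ipfw.ko you'd need the module file
#    plus its load address from "kldstat -v".)
router# nm -n /boot/kernel/kernel | grep -A1 ' ipfw_chk$'

# 2) Histogram the sampled PCs and keep the hottest ones, then filter by
#    hand for addresses inside the range from step 1. ($2 as the PC field
#    is an assumption - eyeball a few lines of the raw dump first.)
router# pmcstat -R sample.out | awk '$1 == "sample" { n[$2]++ }
    END { for (pc in n) print n[pc], pc }' | sort -rn | head -20

# 3) Map a hot address back to file:line using the debug kernel that
#    "makeoptions DEBUG=-g" produced (example address and config name):
router# addr2line -e /usr/obj/usr/src/sys/MYKERNEL/kernel.debug 0xc06f1234

or the same thing from inside kgdb:

(kgdb) info line *0xc06f1234

That should at least show which 'case's of the big switch eat the CPU.
Finding out who calls bcmp() and rn_match() really needs the callgraph
records, though, so that part has to wait until a hwpmc with callchain
support works here.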
-- 
WBR, Vadim Goncharov. ICQ#166852181 mailto:vadim_nuclight@mail.ru
[Moderator of RU.ANTI-ECOLOGY][FreeBSD][http://antigreen.org][LJ:/nuclight]