Date:      Sun, 26 Dec 2004 00:19:33 -0200
From:      João Carlos Mendes Luís <jonny@jonny.eng.br>
To:        Robert Watson <rwatson@freebsd.org>
Cc:        freebsd-net@freebsd.org
Subject:   Re: %cpu in system - squid performance in FreeBSD 5.3
Message-ID:  <41CE1FB5.4080401@jonny.eng.br>
In-Reply-To: <Pine.NEB.3.96L.1041225121903.27724E-100000@fledge.watson.org>
References:  <Pine.NEB.3.96L.1041225121903.27724E-100000@fledge.watson.org>

Robert Watson wrote:
> On Thu, 23 Dec 2004, Jeff Behl wrote:
> 
>>As a follow up to the below (original message at the very bottom), I
>>installed a load balancer in front of the machines which terminates the
>>tcp connections from clients and opens up a few, persistent connections
>>to each server over which requests are pipelined.  In this scenario
>>everything is copasetic: 
> 
> I'm not very familiar with Squid's architecture, but I would anticipate
> that what you're seeing is that the cost of additional connections served
> in parallel is pretty high due to the use of processes.  Specifically: if
> each TCP connection being served gets its own process, and there are a lot
> of TCP connections, you'll be doing a lot of process forking, context
> switching, exceeding cache sizes, etc.  With just a couple of connections,
> even if they're doing the same "work", the overhead is much lower. 
> Depending on how much time you're willing to invest in this, we can
> probably do quite a bit to diagnose where the cost is coming from and look
> for any specific problems or areas we could optimize.

     It must not be that.  Squid is mostly a single-process system, with
scheduling based on descriptors and select/poll.  Recent versions added
some parallelism in other processes, but only for file reading/writing
(diskd) and regular-expression processing for ACLs.  Even DNS, which
previously used blocking I/O in secondary processes, now runs internally
in the select/poll scheduler.
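
     To make that concrete, here is a minimal sketch of the kind of
single-process, descriptor-driven loop I mean.  This is not Squid's
actual code; the handler table and the MAX_FD constant are invented
purely for illustration:

/*
 * Minimal sketch of a single-process select() event loop in the
 * style described above -- illustration only, not Squid's code.
 */
#include <sys/select.h>
#include <unistd.h>

#define MAX_FD 1024

typedef void (*read_handler)(int fd);

static read_handler handlers[MAX_FD];   /* per-descriptor callbacks */

void
event_loop(void)
{
    fd_set rfds;
    int fd, maxfd;

    for (;;) {
        /* Rebuild the descriptor set on every pass. */
        FD_ZERO(&rfds);
        maxfd = -1;
        for (fd = 0; fd < MAX_FD; fd++) {
            if (handlers[fd] != NULL) {
                FD_SET(fd, &rfds);
                if (fd > maxfd)
                    maxfd = fd;
            }
        }
        if (select(maxfd + 1, &rfds, NULL, NULL, NULL) <= 0)
            continue;
        /* Linear scan over all registered descriptors, every time. */
        for (fd = 0; fd <= maxfd; fd++)
            if (handlers[fd] != NULL && FD_ISSET(fd, &rfds))
                handlers[fd](fd);
    }
}

     All the "parallelism" lives inside that one process, which is why
extra CPUs by themselves do not help the main Squid process much.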

     I also have some experience with older versions of Squid, where the
same machine, running the same Squid version, supported a higher limit of
simultaneous connections after switching from Linux to FreeBSD.

> I might start by turning on kernel profiling and doing a profile dump
> under load.  Be aware that turning on profiling uses up a lot of CPU
> itself, so will reduce the capacity of the system.  There's probably
> documentation elsewhere, but the process I use to set up profiling is
> here:

     I have not run any tests on this, but I would expect profiling not to
show much, since every step of the scheduler is very small and deals only
with whatever small amount of I/O is available at that moment.

     Indeed, based on the original report I would look for some
optimization of descriptor scanning in poll or select, whichever Squid
has chosen to use on FreeBSD (probably select, judging from the top
output).  This is one of the crucial points for Squid performance.  The
other one is disk access, for sure, but the experiment described would
not change the disk access patterns, would it?
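
     For what it's worth, the per-call rebuild and rescan of the
descriptor set is exactly what FreeBSD's kqueue(2) interface avoids.  I
do not know offhand whether the Squid build in question can use it, but
a minimal sketch of the same loop on top of kqueue (illustration only;
handle_read is assumed to be supplied elsewhere) would look roughly
like this:

/*
 * Minimal sketch of an event loop on kqueue(2), which registers
 * interest once and only returns descriptors that are ready.
 * Illustration only; most error handling omitted.
 */
#include <sys/types.h>
#include <sys/event.h>
#include <sys/time.h>
#include <err.h>

void handle_read(int fd);               /* supplied elsewhere */

void
kqueue_loop(int *fds, int nfds)
{
    struct kevent ev, events[64];
    int kq, i, n;

    if ((kq = kqueue()) == -1)
        err(1, "kqueue");

    /* Register interest once, not on every pass through the loop. */
    for (i = 0; i < nfds; i++) {
        EV_SET(&ev, fds[i], EVFILT_READ, EV_ADD, 0, 0, NULL);
        if (kevent(kq, &ev, 1, NULL, 0, NULL) == -1)
            err(1, "kevent register");
    }

    for (;;) {
        /* Only descriptors that are actually ready come back. */
        n = kevent(kq, NULL, 0, events, 64, NULL);
        for (i = 0; i < n; i++)
            handle_read((int)events[i].ident);
    }
}
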

>   http://www.watson.org/~robert/freebsd/netperf/profile/
> 
> Note that it warns that some results may be incorrect on SMP.  I think it
> would be useful to give it a try anyway just to see if we get something
> useful.

     As I said before, being a single-process scheduler, Squid does not
gain much from SMP.  The secondary processes would benefit from the
extra CPU, though.  Maybe interrupt processing would too, as long as the
Giant lock does not interfere with any part of the processing path.

> As a final question: other than CPU consumption, do you have a reliable
> way to measure how efficiently the system is operating -- in particular,
> how fast it is able to serve data?  Having some sort of metric for
> performance can be quite useful in optimizing, as it can tell us whether

     One thing I have failed to measure in FreeBSD is the reason for
delays in disk access times.  How can I prove that the delay is in the
disk, and determine how to optimize it?  systat -v is very useful, but
does not give me all the answers.

>>last pid:  3377;  load averages:  0.12,  0.09,  0.08   up 0+17:24:53  10:02:13
>>31 processes:  1 running, 30 sleeping
>>CPU states:  5.1% user,  0.0% nice,  1.8% system,  1.2% interrupt, 92.0% idle
>>Mem: 75M Active, 187M Inact, 168M Wired, 40K Cache, 214M Buf, 1482M Free
>>Swap: 4069M Total, 4069M Free
>>
>>  PID USERNAME PRI NICE   SIZE    RES STATE  C   TIME   WCPU    CPU COMMAND
>>  474 squid     96    0 68276K 62480K select 0  53:38 16.80% 16.80% squid
>>  311 bind      20    0 10628K  6016K kserel 0  12:28  0.00%  0.00% named


                                         Jonny

-- 
João Carlos Mendes Luís - Networking Engineer - jonny@jonny.eng.br


