Date:      Wed, 26 Nov 2008 19:18:43 +0800
From:      "Archimedes Gaviola" <archimedes.gaviola@gmail.com>
To:        ivoras@freebsd.org, "John Baldwin" <jhb@freebsd.org>
Cc:        freebsd-smp@freebsd.org
Subject:   Re: CPU affinity with ULE scheduler
Message-ID:  <42e3d810811260318j2656ac57k465c56d1c2b0dcf2@mail.gmail.com>
In-Reply-To: <200811171609.54527.jhb@freebsd.org>
References:  <42e3d810811100033w172e90dbl209ecbab640cc24f@mail.gmail.com> <42e3d810811170311uddc77daj176bc285722a0c8@mail.gmail.com> <42e3d810811170336rf0a0357sf32035e8bd1489e9@mail.gmail.com> <200811171609.54527.jhb@freebsd.org>


> Is there a tool that we can use to trace this process, just to be
> able to know which part of the kernel internals is the bottleneck,
> especially when net.isr.direct=1? By the way, with device polling
> enabled the system experienced packet errors and the interface
> throughput was worse, so I avoid using it.
>

I was really looking for a tool to see how packets are processed from
the interface up through the network stack to the applications, but I
haven't found one for that. What I did find is LOCK_PROFILING.
Although I'm sure it doesn't directly answer my question, I tried it
anyway because I need to learn something about the locks FreeBSD uses.
Some people consider that many factors and variables affect network
performance in FreeBSD, so I gave this tool a try. I also got valuable
info from this link:
http://markmail.org/message/3uqxi4pipvvoy6jx#query:lock%20profiling%20freebsd+page:1+mid:ymqgrxqf4min54zd+state:results.
Instead of the IBM machine with Broadcom NICs, I used another machine
with 4 x Quad-Core AMD64 CPUs, also with Broadcom NICs, running
FreeBSD 7.1-BETA2. I took data both with and without traffic. With
traffic, I used both TCP and UDP to generate load: UDP for upload and
TCP for download in a back-to-back setup.
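In case it helps anyone reproduce this, here is roughly how I
collected the data. This is only a sketch based on the
LOCK_PROFILING(9) man page; the output file name is mine, and the
sysctl names may differ on other releases:

```shell
# Kernel must be rebuilt with lock profiling support (kernel config):
#   options LOCK_PROFILING
# Then, at runtime (sysctl names per LOCK_PROFILING(9)):
sysctl debug.lock.prof.reset=1      # clear previously collected stats
sysctl debug.lock.prof.enable=1     # start collecting
# ... run the TCP/UDP traffic for the measurement window ...
sysctl debug.lock.prof.enable=0     # stop collecting
sysctl debug.lock.prof.stats > lockprof.txt   # dump per-lock-site stats
```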

What I found is that some of the following lock sites show a high
wait_total when there's traffic:

  max     total  wait_total    count  avg  wait_avg  cnt_hold  cnt_lock  name
  517  24761291     6165864  4460995    5         1    552124   1558183  net/route.c:293 (sleep mutex:radix node head)
  277   1427082      140797   354220    4         0     14476     20674  amd64/amd64/io_apic.c:212 (spin mutex:icu)
   33     25275       20744     5401    4         3         0      5400  amd64/amd64/mp_machdep.c:974 (spin mutex:sched lock 4)
17283   3346679      104214   107262   31         0      4545      4072  kern/kern_sysctl.c:1334 (sleep mutex:Giant)
  257     28599         386     1302   21         0        35        30  vm/vm_fault.c:667 (sleep mutex:vm object)
  282   2821743        2673   977635    2         0       926       552  net/if_ethersubr.c:405 (sleep mutex:bce1)
   22    743637      157239   256274    2         0      5304     48357  dev/random/randomdev_soft.c:308 (spin mutex:entropy harvest mutex)
  301  16301894      881827  1255534   12         0    241491     45973  dev/bce/if_bce.c:5016 (sleep mutex:bce0)
  273   1228787       55458   103863   11         0      3733      4736  kern/subr_sleepqueue.c:232 (spin mutex:sleepq chain)
  624   4682305     1339783  1251253    3         1     32664    254211  dev/bce/if_bce.c:4320 (sleep mutex:bce1)

With lock profiling, how do we tell that a particular kernel structure
or function is causing contention? I have only a little knowledge of
mutexes; can someone elaborate on these, especially sleep and spin
mutexes?

Unfortunately, the full log is too big for the mailing list, so I have
attached it in compressed format only.

Thanks,
Archimedes



