From: "Archimedes Gaviola" <archimedes.gaviola@gmail.com>
To: ivoras@freebsd.org, "John Baldwin"
Date: Wed, 26 Nov 2008 19:18:43 +0800
In-Reply-To: <200811171609.54527.jhb@freebsd.org>
Cc: freebsd-smp@freebsd.org
Subject: Re: CPU affinity with ULE scheduler
List-Id: FreeBSD SMP implementation group

> Is there a tool that can be used to trace this process, just to be
> able to know which part of the kernel internals is the bottleneck,
> especially when net.isr.direct=1? By the way, with device polling
> enabled the system experienced packet errors and the interface
> throughput was worse, so I avoid using it.

I was really looking for a tool that would show how packets are
processed from the interface up through the network stack to the
applications, but I haven't found one. What I did find is
LOCK_PROFILING. I'm sure this does not fully answer my question, but I
tried it anyway because I need to learn something about the locks
FreeBSD uses. Several people have pointed out that many factors and
variables affect network performance in FreeBSD, so I gave this tool a
try. I also got valuable information from this link:
http://markmail.org/message/3uqxi4pipvvoy6jx#query:lock%20profiling%20freebsd+page:1+mid:ymqgrxqf4min54zd+state:results

Instead of the IBM machine with Broadcom NICs, I used another machine
with 4 x Quad-Core AMD64 CPUs, also with Broadcom NICs, running
FreeBSD 7.1-BETA2. I collected results both with and without traffic.
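For anyone who wants to reproduce this, the setup I followed is roughly
the one described in the LOCK_PROFILING(9) man page; the option and
sysctl names below are from that page and should be double-checked
against your release:

```
# Kernel config: rebuild the kernel with profiling support.
options LOCK_PROFILING

# At runtime, around the traffic test:
sysctl debug.lock.prof.reset=1     # clear any previous counters
sysctl debug.lock.prof.enable=1    # start collecting
# ... generate traffic ...
sysctl debug.lock.prof.enable=0    # stop collecting
sysctl debug.lock.prof.stats       # dump the per-lock statistics
```

Note that profiling adds overhead of its own, so it is best left
disabled except while capturing a run.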
With traffic, I used both TCP and UDP to generate load: UDP for upload
and TCP for download, in a back-to-back setup. What I found is a high
wait_total on some of the following locks when there is traffic:

max    total     wait_total  count    avg  wait_avg  cnt_hold  cnt_lock  name
517    24761291  6165864     4460995  5    1         552124    1558183   net/route.c:293 (sleep mutex:radix node head)
277    1427082   140797      354220   4    0         14476     20674     amd64/amd64/io_apic.c:212 (spin mutex:icu)
33     25275     20744       5401     4    3         0         5400      amd64/amd64/mp_machdep.c:974 (spin mutex:sched lock 4)
17283  3346679   104214      107262   31   0         4545      4072      kern/kern_sysctl.c:1334 (sleep mutex:Giant)
257    28599     386         1302     21   0         35        30        vm/vm_fault.c:667 (sleep mutex:vm object)
282    2821743   2673        977635   2    0         926       552       net/if_ethersubr.c:405 (sleep mutex:bce1)
22     743637    157239      256274   2    0         5304      48357     dev/random/randomdev_soft.c:308 (spin mutex:entropy harvest mutex)
301    16301894  881827      1255534  12   0         241491    45973     dev/bce/if_bce.c:5016 (sleep mutex:bce0)
273    1228787   55458       103863   11   0         3733      4736      kern/subr_sleepqueue.c:232 (spin mutex:sleepq chain)
624    4682305   1339783     1251253  3    1         32664     254211    dev/bce/if_bce.c:4320 (sleep mutex:bce1)

With lock profiling, how do we know that a particular kernel structure
or function is causing contention? I have only a little knowledge of
mutexes; can someone elaborate on these, especially sleep and spin
mutexes? Unfortunately the full log is too big for the mailing list, so
I have attached only the complete log in compressed format.

Thanks,
Archimedes
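P.S. For what it's worth, this is how I picked out the rows shown above
from the full log: a plain sort(1) on the wait_total column. The column
position assumes the stats format shown earlier; the heredoc just
recreates a few rows of the log as sample input.

```shell
# Recreate a few rows of the LOCK_PROFILING output (header stripped).
cat > lockprof.txt <<'EOF'
517 24761291 6165864 4460995 5 1 552124 1558183 net/route.c:293 (sleep mutex:radix node head)
301 16301894 881827 1255534 12 0 241491 45973 dev/bce/if_bce.c:5016 (sleep mutex:bce0)
624 4682305 1339783 1251253 3 1 32664 254211 dev/bce/if_bce.c:4320 (sleep mutex:bce1)
EOF

# wait_total is the 3rd whitespace-separated field; a reverse numeric
# sort on it puts the most contended lock first.
sort -rn -k3,3 lockprof.txt | head -1
# -> the net/route.c:293 radix node head mutex (wait_total 6165864)
```

A high wait_total with a large cnt_lock, as on that route.c line, is
what made me suspect the routing table lock in the first place.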