From owner-freebsd-net@FreeBSD.ORG Fri Apr 26 16:28:39 2013
Message-ID: <517AAB58.6020703@gmail.com>
Date: Fri, 26 Apr 2013 18:29:12 +0200
From: "Clément Hermann (nodens)" <nodens2099@gmail.com>
To: freebsd-net@freebsd.org
Subject: Re: pf performance?
References: <5176E5C1.9090601@soe.ucsc.edu> <20130426134224.GV76816@FreeBSD.org> <517A93FE.7020209@soe.ucsc.edu> <517AA337.8050505@freebsd.org>
In-Reply-To: <517AA337.8050505@freebsd.org>

Hi,

this thread seems to be related to my problem (see "High CPU interrupt load on Intel i350T4 with igb on 8.3"), so let me jump in ;)

On 26/04/2013 17:54, Andre Oppermann wrote:
> On 26.04.2013 16:49, Erich Weiler wrote:
>>> The pf isn't a process, so you can't see it in top. pf has some helper
>>> threads however, but packet processing isn't performed by any of them.
>>
>> But the work pf does would show up in 'system' in top, right? So if I
>> see all my CPUs tied up 100% in 'interrupts' and very little 'system',
>> would it be a reasonable assumption that if I got more CPU cores to
>> handle the interrupts, I would eventually see 'system' load increase
>> as the interrupt load became faster to handle? And thus increase my
>> bandwidth?
>
> Having the work of pf show up in 'interrupts' or 'system' depends on the
> network driver and how it handles sending packets up the stack. In most
> cases drivers deliver packets from interrupt context.

That is very interesting. Do you think my high CPU interrupt problem could be related to the fact that we use pf + altq?

>> In other words, until I see like 100% system usage on one core, I
>> would have room to grow?
>
> You have room to grow if 'idle' is more than 0% and the interrupts of
> the network cards are running on different cores. If one core gets
> all the interrupts, a second idle core doesn't get the chance to help
> out.
> IIRC the interrupt allocation to cores is done at interrupt
> registration time or driver attach time. It can be re-distributed
> at run time on most architectures, but I'm not sure we have an easily
> accessible API for that.

Would it be possible to use CPU affinity to reserve a core for pf? For instance, use only 7 queues per card and keep a core available for pf? Does that make sense, or would the pf lock impact the interrupts of the interface queues? I see a lot of "WAIT" state on the queues when I do a top -CHSIz.

Cheers,

--
Clément (nodens)
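PS: regarding moving interrupts between cores at run time, it looks like cpuset(1) can already do it on FreeBSD with its -x flag, which takes an IRQ number as shown by vmstat -i. A sketch of what I would try (the IRQ numbers below are made-up examples, they have to be read off your own vmstat -i output):

```shell
# List interrupt sources and their rates; the per-queue igb
# interrupts show up as lines like "irq264: igb0:que 0".
vmstat -i

# Bind igb0 queue 0's interrupt to CPU 0 and queue 1's to CPU 1.
# cpuset -x operates on an IRQ number taken from vmstat -i;
# 264 and 265 here are placeholders, not real values.
cpuset -l 0 -x 264
cpuset -l 1 -x 265
```

No idea yet whether spreading them by hand like this actually helps versus the driver's default placement, but at least it would show whether one saturated core is the bottleneck.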
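PPS: for the "7 queues per card" idea, the igb(4) driver has a loader tunable to cap the number of queues, so something along these lines might be worth testing (the value 7 and the core numbering are just a sketch for an 8-core box, and again the IRQ number is a placeholder to be taken from vmstat -i):

```shell
# In /boot/loader.conf (takes effect after a reboot): limit igb
# to 7 RX/TX queue pairs per port instead of one per core.
echo 'hw.igb.num_queues=7' >> /boot/loader.conf

# After the reboot, confine each queue interrupt to cores 0-6,
# leaving core 7 free for everything else (repeat per queue IRQ).
cpuset -l 0-6 -x 264
```

Whether the freed core actually gets used for pf work is exactly my question above, since pf seems to run in the interrupt context of whichever core took the packet.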