From owner-freebsd-net@FreeBSD.ORG Fri Apr 25 12:01:38 2014
Date: Fri, 25 Apr 2014 14:01:23 +0200
From: Gerrit Kühn <gerrit.kuehn@aei.mpg.de>
To: Marek Salwerowicz
Cc: freebsd-net@freebsd.org
Subject: Re: NFS over LAGG / lacp poor performance
Message-Id: <20140425140123.a76c18f9.gerrit.kuehn@aei.mpg.de>
In-Reply-To: <535A482E.1030106@wp.pl>
References: <535A1354.2040309@wp.pl>
	<20140425113711.e7c7d1c2.gerrit.kuehn@aei.mpg.de>
	<535A482E.1030106@wp.pl>
Organization: Max Planck Gesellschaft
List-Id: Networking and TCP/IP with FreeBSD

On Fri, 25 Apr 2014 13:34:06 +0200
Marek Salwerowicz wrote about Re: NFS over LAGG / lacp poor performance:

GK> irq256: igb0:que 0                99396134         64
GK> irq257: igb0:que 1                61496018         39
GK> irq258: igb0:que 2               101687742         66
GK> irq259: igb0:que 3               100824264         65
GK> irq260: igb0:link                        2          0
GK> irq261: igb1:que 0                 1666960          1
GK> irq262: igb1:que 1              2325576555       1510
GK> irq263: igb1:que 2                 1563283          1
GK> irq264: igb1:que 3                 1897428          1
GK> irq265: igb1:link                        2          0

MS> For me on storage1 (9.1-RELEASE) it looks like:
MS> irq265: igb0:que 0              2307223482        323
MS> irq266: igb0:link                        4          0
MS> irq267: igb1:que 0               271641638         38
MS> irq268: igb1:link                        6          0
MS> irq269: igb2:que 0                91665104         12
MS> irq270: igb2:link                        6          0
MS> irq271: igb3:que 0               628139928         88
MS> irq272: igb3:link                        5          0
MS>
MS> But in my case all igb links are aggregated using LACP (lagg0), then
MS> there are two vlans over lagg0 (vlan14 and vlan900), and vlan900 is
MS> the one dedicated to NFS.
MS> I also don't have more than one queue per interface.

Thanks for your input. As far as I understand, the igb driver should
create one queue per CPU core in the system by default, and this is
what I see on my system. But the interrupt rate looks quite high to me,
and it is concentrated on only one of these queues. Maybe I'll try
reducing this to a single queue and see what happens.

Does anybody else in here happen to know something about this?

cu
  Gerrit
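
PS, mostly for the archives: a minimal rc.conf sketch of the kind of
setup you describe (an LACP lagg over the igb ports, with vlan14 and
vlan900 on top). The addresses and the exact list of laggports are made
up here for illustration, not taken from your mail:

  # /etc/rc.conf (sketch): LACP aggregation with two vlans on top
  ifconfig_igb0="up"
  ifconfig_igb1="up"
  ifconfig_igb2="up"
  ifconfig_igb3="up"
  cloned_interfaces="lagg0 vlan14 vlan900"
  ifconfig_lagg0="laggproto lacp laggport igb0 laggport igb1 laggport igb2 laggport igb3"
  # example addresses only
  ifconfig_vlan14="inet 192.0.2.10/24 vlan 14 vlandev lagg0"
  ifconfig_vlan900="inet 198.51.100.10/24 vlan 900 vlandev lagg0"

With LACP, each TCP flow hashes onto a single laggport, which would be
consistent with one port (and hence one queue) taking most of the NFS
interrupts.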
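PPS: if I go ahead with the single-queue experiment, my understanding
is that the queue count is set with a loader tunable, assuming
hw.igb.num_queues is still the right knob for this driver version:

  # /boot/loader.conf: limit igb to a single queue per port
  # (takes effect on reboot)
  hw.igb.num_queues="1"

The per-queue interrupt counters quoted above come from vmstat -i, so
that is an easy way to verify the change after rebooting.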