Date: Tue, 23 Sep 2014 17:46:10 +0200
From: Luigi Rizzo
To: "Alexander V. Chernikov"
Cc: "freebsd-net@freebsd.org", Adrian Chadd, Elof Ofel
Subject: Re: How do I balance bandwidth over several virtual NICs?
Message-ID: <20140923154610.GD84074@onelab2.iet.unipi.it>
In-Reply-To: <54218EF4.6090102@FreeBSD.org>

On Tue, Sep 23, 2014 at 07:17:08PM +0400, Alexander V. Chernikov wrote:
> On 23.09.2014 18:44, Luigi Rizzo wrote:
...
> However, in addition to non-symmetric RSS (which is hopefully being
> addressed), there is another usual "producer - multiple consumers"
> problem: one snort process can start processing packets very slowly,
> or hang, or crash. In that case the host RX ring fills up, the NIC
> fails to push packets to that queue, and starts storing them in its
> internal skid buffer (512 KB for Niantic, AFAIR). Once that buffer
> becomes full, all traffic and all processing stop.

interesting. Actually, scary!

Do you have any reference to the data sheets documenting that
behaviour? I have indeed received reports saying something similar,
but always suspected user error. The fact that a starved queue can
consume the entire internal buffer seems like a really bad bug.

At least you can overcome this one by having the demux done in
software.

cheers
luigi
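
p.s. to make "demux done in software" concrete, here is a rough,
untested sketch (helper names like rx_next() are made up, this is
not code from netmap or any driver). One thread drains the single
hardware RX ring and hashes each frame to a per-consumer queue; the
key point is that a full queue causes a drop rather than a stall, so
a slow or hung snort process only loses its own traffic and cannot
back-pressure the NIC into blocking everyone:

    #include <stdint.h>
    #include <string.h>

    #define NCONS   4          /* number of consumer processes     */
    #define QLEN    1024       /* slots per consumer queue (pow 2) */
    #define MAX_PKT 2048

    struct pkt {
        uint16_t len;
        uint8_t  data[MAX_PKT];
    };

    /* single-producer single-consumer ring, one per consumer.
     * volatile is a placeholder: real code needs atomics or
     * memory barriers on head/tail updates. */
    struct spsc_ring {
        volatile uint32_t head;   /* written by demux thread    */
        volatile uint32_t tail;   /* written by consumer thread */
        struct pkt slot[QLEN];
    };

    static struct spsc_ring q[NCONS];
    static uint64_t drops[NCONS];

    /* hypothetical: pulls the next frame off the single hardware
     * RX ring (e.g. via netmap); returns its length, 0 if empty */
    extern uint16_t rx_next(uint8_t *buf, uint16_t maxlen);

    /* toy flow hash over the IPv4 src+dst of an untagged Ethernet
     * frame; real code would use the full 5-tuple, or simply reuse
     * the RSS hash the NIC has already computed for the packet */
    static uint32_t
    flow_hash(const uint8_t *buf, uint16_t len)
    {
        uint32_t h = 2166136261u;            /* FNV-1a */
        for (int i = 26; i < 34 && i < len; i++)
            h = (h ^ buf[i]) * 16777619u;
        return h;
    }

    /* enqueue to consumer c; on a full queue, count a drop and
     * return instead of blocking the demux thread */
    static void
    enqueue(int c, const uint8_t *buf, uint16_t len)
    {
        struct spsc_ring *r = &q[c];
        uint32_t next = (r->head + 1) & (QLEN - 1);

        if (next == r->tail) {       /* full: drop, don't block */
            drops[c]++;
            return;
        }
        r->slot[r->head].len = len;
        memcpy(r->slot[r->head].data, buf, len);
        r->head = next;              /* publish the slot */
    }

    void
    demux_loop(void)
    {
        uint8_t buf[MAX_PKT];
        uint16_t len;

        for (;;) {
            len = rx_next(buf, sizeof(buf));
            if (len == 0)
                continue;            /* or poll()/sleep here */
            enqueue(flow_hash(buf, len) % NCONS, buf, len);
        }
    }

the drop-on-full policy is the whole point: the hardware ring gets
drained at line rate no matter what the consumers do, so the NIC's
internal buffer can never be consumed by one starved queue.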