From owner-freebsd-virtualization@freebsd.org Mon Feb 19 11:02:28 2018
Return-Path:
Delivered-To: freebsd-virtualization@mailman.ysv.freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2610:1c1:1:606c::19:1])
 by mailman.ysv.freebsd.org (Postfix) with ESMTP id 093E8F1A800;
 Mon, 19 Feb 2018 11:02:28 +0000 (UTC)
 (envelope-from prvs=5814e79b2=roger.pau@citrix.com)
Received: from SMTP.EU.CITRIX.COM (smtp.eu.citrix.com [185.25.65.24])
 (using TLSv1.2 with cipher RC4-SHA (128/128 bits))
 (Client CN "mail.citrix.com", Issuer "DigiCert SHA2 Secure Server CA" (verified OK))
 by mx1.freebsd.org (Postfix) with ESMTPS id 4F9C96E80D;
 Mon, 19 Feb 2018 11:02:27 +0000 (UTC)
 (envelope-from prvs=5814e79b2=roger.pau@citrix.com)
X-IronPort-AV: E=Sophos;i="5.46,534,1511827200"; d="scan'208";a="68152684"
Date: Mon, 19 Feb 2018 11:02:19 +0000
From: Roger Pau Monné
To: Laurence Pawling
CC: "freebsd-xen@freebsd.org", "freebsd-virtualization@freebsd.org",
 "freebsd-net@freebsd.org", David King, Vlad Galu
Subject: Re: multi-vCPU networking issues as client OS under Xen
Message-ID: <20180219110219.r4yrgbc4yomb3gly@MacBook-Pro-de-Roger.local>
References: <20180219100558.adgb6m5ukdfvxehp@MacBook-Pro-de-Roger.local>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To:
User-Agent: NeoMutt/20171208
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: freebsd-virtualization@freebsd.org
X-Mailman-Version: 2.1.25
Precedence: list
List-Id: "Discussion of various virtualization techniques FreeBSD supports."
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Mon, 19 Feb 2018 11:02:28 -0000

On Mon, Feb 19, 2018 at 10:42:08AM +0000, Laurence Pawling wrote:
> > When using >1 vCPUs can you set hw.xn.num_queues=1 on
> > /boot/loader.conf and try to reproduce the issue?
> >
> > I'm afraid this is rather related to multiqueue (which is only used
> > if >1 vCPUs).
> >
> > Thanks, Roger.
>
> Roger - thanks for your quick reply, this is confirmed. Setting
> hw.xn.num_queues=1 on the server VM when vCPUs > 1 prevents the issue.

I've also been told that, in order to rule out this being a
XenServer-specific issue, you should execute the following on Dom0 and
reboot the server:

# xe-switch-network-backend bridge

And then try to reproduce the issue again with >1 vCPUs (and of course
removing the queue limit in loader.conf).

> For reference, please can you comment on the performance impact of this?

I'm afraid I don't have any numbers.

Roger.
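[Editor's note: the two workarounds discussed in this exchange can be
summarized as the fragment below. This is a sketch based only on what the
thread states: hw.xn.num_queues is the loader tunable for FreeBSD's Xen
netfront (xn) driver named above, and xe-switch-network-backend is run on
the XenServer Dom0. Both steps require a reboot of the affected machine.]

```shell
# On the FreeBSD guest: cap the xn(4) netfront driver at a single queue,
# disabling multiqueue (which the thread identifies as the trigger when
# the VM has >1 vCPUs). Add this line to /boot/loader.conf and reboot:
hw.xn.num_queues=1

# On the XenServer Dom0: switch the network backend to the Linux bridge
# to rule out a backend-specific problem, then reboot the host. When
# testing this path, remove the hw.xn.num_queues line from loader.conf
# again so multiqueue is back in use:
xe-switch-network-backend bridge
```

As Roger notes, no figures were given for the performance cost of the
single-queue limit; since multiqueue exists to spread network processing
across vCPUs, some throughput reduction under load is plausible.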