From: "Peter Ross" <Peter.Ross@bogen.in-berlin.de>
To: "Adam Vande More"
Date: Mon, 18 Jul 2011 15:30:26 +1000
Message-ID: <20110718153026.10384ps0jqajxrle@webmail.in-berlin.de>
References: <20110714095717.35581xj4rdju1pel@webmail.in-berlin.de> <20110714115504.20182xr8y5z7o3ug@webmail.in-berlin.de>
Cc: freebsd-emulation@freebsd.org
Subject: Re: Network problems while running VirtualBox

Quoting "Adam Vande More":

> On Wed, Jul 13, 2011 at 10:02 PM, Adam Vande More wrote:
>
>> I suspect this has less to do with actual memory and more to do with some
>> other buffer-like bottleneck. Does tuning any of the network buffers make
>> any difference? A couple to try:
>>
>> net.inet.ip.intr_queue_maxlen
>> net.link.ifqmaxlen
>> kern.ipc.nmbclusters
>>
>> If possible, does changing the VM from bridged to NAT (or vice versa)
>> result in any change in behaviour?
>
> Also check vmstat -z; net.graph.maxdata may be a candidate as well.

I tried FTP (to have something completely different), and it fails as well:

(ftp: netout: Cannot allocate memory)

I watched vmstat -z, and every time a transfer fails, another failure is
reported for "NetGraph data items".

Regards
Peter
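The vmstat -z check described above can be sketched roughly as follows, assuming the comma-separated `vmstat -z` column layout of FreeBSD releases of that era (the last field being the FAILURES count; check your release's vmstat(8)). The helper name `failed_zones` and the sample limit value are illustrative only:

```shell
# failed_zones: print the `vmstat -z` lines whose FAILURES column is
# nonzero, i.e. UMA zones that have refused allocations.
# Assumes comma-separated output: ITEM, SIZE, LIMIT, USED, FREE, REQ, FAIL.
failed_zones() {
    awk -F, 'NR > 1 && $NF + 0 > 0'
}

# On a FreeBSD host you would pipe live output through it:
#   vmstat -z | failed_zones
#
# If "NetGraph data items" shows up, the netgraph item limit can be
# inspected and raised (4096 is an example value only, not a recommendation):
#   sysctl net.graph.maxdata
#   sysctl net.graph.maxdata=4096
```

Running the filter in a loop while repeating the failing transfer should show the FAILURES count climbing for the zone that is the actual bottleneck.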