From: "Ed Mandy" <emandy@triticom.com>
To: <freebsd-net@freebsd.org>
Date: Sat, 10 Nov 2007 17:15:25 -0600
Subject: System Freezes When MBufClust Usage Rises

We are using FreeBSD to run the Dante SOCKS proxy server to accelerate a high-latency (approximately 1-second round-trip) network link. We need to support many concurrent transfers of large files. To do this, we have set the machine up with the following parameters.

We compiled Dante with the following setting in include/config.h:

    SOCKD_BUFSIZETCP = (1024*1000)

/etc/sysctl.conf :

    kern.ipc.maxsockbuf=4194304
    net.inet.tcp.sendspace=2097152
    net.inet.tcp.recvspace=2097152

/boot/loader.conf :

    kern.ipc.maxsockets="0"    (also tried 25600, 51200, 102400, and 409600)
    kern.ipc.nmbclusters="0"   (also tried 102400 and 409600)

(Looking at the code, it seems that 0 means no limit for the above two controls.)

If kern.ipc.nmbclusters is set to 25600, the system hard-freezes when "vmstat -z" shows the cluster count reaching 25600. If kern.ipc.nmbclusters is set to 0 (or 102400), the system hard-freezes when "vmstat -z" shows around 66000 clusters. Either way, when it freezes, the amount of memory allocated to the network (as shown by "netstat -m") is roughly 160,000 KB (160 MB).

For a while we thought there might be a hard limit of 65536 mbuf clusters, so we tried building the kernel with MCLSHIFT=12, which makes each mbuf cluster 4096 bytes. With that configuration, the cluster count only reached about 33000 before the system froze, and the memory allocated to the network (per "netstat -m") still topped out around 160,000 KB. Note that 33000 * 4096 bytes and 66000 * 2048 bytes both come to roughly 135 MB, so the freeze appears to be tied to total network memory rather than to the number of clusters.

It now seems that we are running into some other memory limitation that occurs when our network allocation gets close to 160 MB. We have tried tuning parameters such as KVA_PAGES, vm.kmem_size, and vm.kmem_size_max, but we are not sure whether the changes we made there helped in any way.

This is all being done on Celeron 2.8 GHz machines with 3+ GB of RAM running FreeBSD 5.3. We are very much tied to this platform at the moment, and upgrading is not a realistic option for us. We would like to tune the systems so that they do not lock up.
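For reference, the KVA_PAGES / vm.kmem_size experiments look roughly like the following. The specific numbers are only examples of the kind of values we have been trying, not a known-good configuration:

In the i386 kernel configuration file (requires a kernel rebuild):

    # enlarge the kernel virtual address space
    # (the i386 default of KVA_PAGES=256 gives 1 GB of KVA)
    options KVA_PAGES=512

In /boot/loader.conf:

    # enlarge the kernel malloc/UMA arena; values are in bytes,
    # and the 400 MB shown here is purely illustrative
    vm.kmem_size="419430400"
    vm.kmem_size_max="419430400"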
We can currently work around the problem (by using smaller buffers and such), but that comes at the expense of network throughput, which is less than ideal. Are there any other parameters that would help us allocate more memory to kernel networking? What other options should we look into?

Thanks,
Ed Mandy
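P.S. For completeness, these are the commands we watch to get the numbers quoted above. The grep is only a convenience filter; it assumes the relevant zone names contain the string "mbuf":

    netstat -m
    vmstat -z | grep -i mbuf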