From owner-freebsd-net@FreeBSD.ORG Thu Jun  5 11:19:59 2008
Message-ID: <4847CC58.4060104@serebryakov.spb.ru>
Date: Thu, 05 Jun 2008 15:22:00 +0400
From: "Lev A. Serebryakov" <lev@serebryakov.spb.ru>
To: Adrian Chadd
Cc: net@freebsd.org
Subject: Re: samba performance on 1Gig link: how to replace black magic with science? And why TCP window scaling is not in play?

Adrian Chadd wrote:
> Figure out why window scaling isn't working - look at the options
> being negotiated (use tcpdump) and try to figure out which side isn't
> offering or is rejecting window size scaling negotiation.
  FreeBSD offers a scale factor of 9, Windows offers 0. FreeBSD does use
scaling after that, but the window it advertises towards Windows is always
49152 (scaled: 0x0060 in the header!) because of SO_RCVBUF=49152. Without
this option the window is 130560, but the throughput is MUCH worse!

> CIFS isn't the same profile as iperf/etc - its not just shovelling raw
> data down the socket, there's a whole protocol involved in scheduling
> what to transfer. Latency in handling commands screws your
> performance..
  But how can these "magic values" for the socket buffers be explained?
As far as I know, CIFS has "big read"/"big write" commands, which allow
more than 64K to be transferred in a single operation.

-- 
// Lev Serebryakov
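
P.S. To make the SO_RCVBUF point concrete, here is a minimal sketch. It is
my own illustration of the plain sockets API, not Samba's code (Samba ends
up at the same setsockopt() via its "socket options" parameter), and the
peer address, port and interface below are placeholders. Setting the
receive buffer before connect() caps the largest window the kernel will
advertise on that connection, so with 49152 bytes and a scale factor of 9
the raw window field stays at 0x0060 (96 * 512 = 49152):

/* sketch: SO_RCVBUF must be set before connect(), because the window
 * scale factor is negotiated in the SYN based on the buffer size then. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    if (s < 0) {
        perror("socket");
        return 1;
    }

    /* Fix the receive buffer (and thus the maximum advertised window)
     * before the handshake. */
    int rcvbuf = 49152;
    if (setsockopt(s, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf)) < 0)
        perror("setsockopt(SO_RCVBUF)");

    struct sockaddr_in sin;
    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_port = htons(445);                        /* CIFS over TCP */
    inet_pton(AF_INET, "192.168.0.2", &sin.sin_addr); /* placeholder peer */

    if (connect(s, (struct sockaddr *)&sin, sizeof(sin)) < 0)
        perror("connect");

    /* Read back what the kernel actually granted (it may round it up). */
    socklen_t len = sizeof(rcvbuf);
    if (getsockopt(s, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &len) == 0)
        printf("effective SO_RCVBUF: %d bytes\n", rcvbuf);

    close(s);
    return 0;
}

Running something like that while capturing the handshake with

  tcpdump -ni em0 'tcp[tcpflags] & tcp-syn != 0'

(em0 being whatever your interface is) shows the window each side
advertises and the wscale option it offers, which is the quickest way to
see which end refuses to scale.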