From owner-freebsd-hackers@FreeBSD.ORG Sat Oct 20 19:53:36 2012
Subject: Re: NFS server bottlenecks
From: Nikolay Denev <ndenev@gmail.com>
Date: Sat, 20 Oct 2012 22:53:31 +0300
To: Outback Dingo
Cc: "freebsd-hackers@freebsd.org Hackers", Rick Macklem, Ivan Voras
References: <191784842.2570110.1350737132305.JavaMail.root@erie.cs.uoguelph.ca>

On Oct 20, 2012, at 10:45 PM, Outback Dingo wrote:

> On Sat, Oct 20, 2012 at 3:28 PM, Ivan Voras wrote:
>> On 20 October 2012 14:45, Rick Macklem wrote:
>>> Ivan Voras wrote:
>>
>>>> I don't know how to interpret the rise in context switches; as this is
>>>> kernel code, I'd expect no context switches. I hope someone else can
>>>> explain.
>>>>
>>> Don't the mtx_lock() calls spin for a little while and then context
>>> switch if another thread still has it locked?
>>
>> Yes, but are in-kernel context switches also counted? I was assuming
>> they are lightweight enough not to count.
>>
>>> Hmm, I didn't look, but were there any tests using UDP mounts?
>>> (I would have thought that your patch would mainly affect UDP mounts,
>>> since that is when my version still has the single LRU queue/mutex.
>>
>> Another assumption - I thought UDP was the default.
>>
>>> As I think you know, my concern with your patch would be correctness
>>> for UDP, not performance.)
>>
>> Yes.
>
> I've got a similar box config here, with 2x 10G Intel NICs and 24 2TB
> drives on an LSI controller. I'm watching the thread patiently, looking
> for results and answers, though I'm also tempted to run benchmarks on my
> system to see if I get similar results. I also considered that netmap
> might be an option, but I'm not sure it would help NFS, since it's hard
> to tell whether this is a network bottleneck, though it does appear to
> be network related.

It doesn't look like a network issue to me.
From my observations it's more like some overhead in NFS and ARC. The
boxes easily push 10G with a simple iperf test: running two iperf tests,
one over each port of the dual-ported 10G NICs, gives 960MB/s regardless
of which machine is the server. Also, I've seen over 960MB/s over NFS
with this setup, but I can't figure out what kind of workload was able
to do this. At one point I could do it with a simple dd; then after a
reboot I was no longer able to push that much traffic. I'm thinking
something like ARC/kmem fragmentation might be the issue?
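Rick's point about mtx_lock() spinning before sleeping can be sketched in
userland terms. This is a hypothetical illustration of the spin-then-block
pattern, not the kernel's actual implementation: the SPIN_BUDGET value, the
helper names, and the pthread fallback are all assumptions made for the
example.

```c
#include <pthread.h>
#include <stdio.h>

/* Sketch of an "adaptive" lock acquire: spin for a bounded number of
 * attempts hoping the current holder releases soon, and only then fall
 * back to a blocking wait.  The blocking path is where the waiter is
 * context-switched off the CPU -- which would show up in system-wide
 * context-switch counters even for purely in-kernel contention. */

#define SPIN_BUDGET 1000
#define ITERS 100000

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long counter;

static void
adaptive_lock(pthread_mutex_t *m)
{
	/* Fast path: spin on trylock, hoping the holder drops the lock. */
	for (int i = 0; i < SPIN_BUDGET; i++)
		if (pthread_mutex_trylock(m) == 0)
			return;
	/* Slow path: sleep until woken -- this is the context switch. */
	pthread_mutex_lock(m);
}

static void *
worker(void *arg)
{
	(void)arg;
	for (int i = 0; i < ITERS; i++) {
		adaptive_lock(&lock);
		counter++;
		pthread_mutex_unlock(&lock);
	}
	return NULL;
}

int
main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, worker, NULL);
	pthread_create(&t2, NULL, worker, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	printf("%ld\n", counter);	/* prints 200000 */
	return 0;
}
```

Under heavy contention the slow path dominates, which would explain a rise
in context switches even though no user/kernel boundary is crossed.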
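To separate NFS/ARC overhead from a network limit, it may help to compare a
few standard FreeBSD counters before and after the slowdown appears. These
commands are a sketch to be run on the server: the hostname is a
placeholder, the arcstats OIDs assume ZFS is loaded, and the iperf line
uses iperf2 syntax.

```shell
# Raw network ceiling, bypassing NFS entirely ("nfsserver" is a
# placeholder hostname).
iperf -c nfsserver -P 2 -t 30

# NFS server-side RPC counters; rising cache misses or retransmits
# would point away from the network.
nfsstat -s

# ARC size and hit/miss counts; an ARC that stays small after the
# reboot would fit the fragmentation theory.
sysctl kstat.zfs.misc.arcstats.size \
       kstat.zfs.misc.arcstats.hits \
       kstat.zfs.misc.arcstats.misses

# UMA zone usage; large FAIL counts can indicate kmem pressure or
# fragmentation.
vmstat -z
```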