From: Claus Guttesen <kometen@gmail.com>
Date: Tue, 19 Apr 2005 13:18:54 +0200
To: freebsd-stable@freebsd.org, freebsd-performance@freebsd.org
Subject: some simple nfs-benchmarks on 5.4 RC2

Hi.

Sorry for x-posting, but the thread was originally meant for freebsd-stable
and then a performance-related question slowly emerged from the message ;-)

Inspired by the nfs-benchmarks by Willem Jan Withagen, I ran some simple
benchmarks against a FreeBSD 5.4 RC2 server.
My seven clients run RC1 and are a mix of i386 and amd64. The purpose of
this test was *not* to measure throughput using various r/w-sizes, so all
clients were mounted with r/w-sizes of 32768; the only difference was the
use of udp- or tcp-mounts. I only ran the test once.

The server has net.isr.enable set to 1 (active), and the gbit-nic is em.
I used 'systat -ifstat 1' to measure throughput. The storage is ide->fiber
using a QLogic 2310 hba. The server is a dual PIII at 1.3 GHz.

I'm rsyncing to and from the nfs-server; the files range from a few KB
(thumbnails) to at most 1 MB (the image itself). The folder is approx.
1.8 GB. The mix of files very much reflects our load.

        *to* nfs-server    *from* nfs-server
tcp     41 MB/s            100 MB/s
udp     30 MB/s            74 MB/s

In my environment tcp is (quite a bit) faster than udp, so I'll stick to
that for the near future. Even though I only made one run, the tcp times
are so much faster, and utilized the cpu more, that I believe doing more
runs would only level the score a bit.

Q: Will I get better performance upgrading the server from dual PIII to
dual Xeon?
A:

regards
Claus
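For anyone wanting to reproduce this, a rough sketch of the setup described
above (the hostname "nfsserver" and the paths are placeholders, not the
ones actually used):

```shell
# Hypothetical hostname/paths -- adjust to your own environment.

# On the server: enable direct dispatch of inbound packets
sysctl net.isr.enable=1

# On a client: tcp-mount (-T) with 32768-byte read/write sizes
mount_nfs -T -r 32768 -w 32768 nfsserver:/export/images /mnt/images

# ...or the udp variant (udp is the default transport, so just drop -T)
mount_nfs -r 32768 -w 32768 nfsserver:/export/images /mnt/images

# Benchmark: rsync the ~1.8 GB image folder to and from the server,
# watching interface throughput with 'systat -ifstat 1' in another terminal
rsync -a /local/images/ /mnt/images/    # *to* the nfs-server
rsync -a /mnt/images/ /local/images/    # *from* the nfs-server
```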