From: Dan Naumov <dan.naumov@gmail.com>
To: Andrew Snow
Cc: freebsd-fs@freebsd.org, freebsd-geom@freebsd.org
Date: Sun, 28 Jun 2009 13:30:26 +0300
Subject: Re: read/write benchmarking: UFS2 vs ZFS vs EXT3 vs ZFS RAIDZ vs Linux MDRAID
In-Reply-To: <4A4725FA.80505@modulus.org>
> What confuses me about these results is that the '5 disk' performance was
> barely higher than the 'single disk' performance. All figures are also
> lower than I get from a single modern SATA disk.
>
> My own testing with dd from /dev/zero with FreeBSD ZFS on an Intel ICH10
> chipset motherboard with a Core2Duo 2.66GHz showed RAIDZ performance
> scaling linearly with the number of disks:
>
> What               Write   Read
> --------------------------------
> 7 disk RAIDZ2      220     305
> 6 disk RAIDZ2      173     260
> 5 disk RAIDZ2      120     213

What's confusing is that your results are actually out of place with how ZFS numbers are supposed to look, not mine :) When using ZFS RAIDZ, due to the way parity checking works in ZFS, your pool is SUPPOSED to have roughly the throughput of the average single disk in that pool, not numbers growing sky-high in a linear fashion.

The numbers that surprised me the most were actually the gmirror reads (results posted earlier to this list): a geom gmirror is consistently SLOWER at reading than a single disk (and it only gets progressively worse the more disks you have in your gmirror). Read performance of virtually every other mirroring implementation scales up pretty much linearly with the number of disks present in the mirror.

- Sincerely,
Dan Naumov
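For reference, a single-stream dd throughput test of the kind discussed in this thread can be sketched roughly as follows. The file name and the 64 MiB size are arbitrary placeholders, not the values used by either poster; on a real system you would point the test file at the filesystem under test, and note that streaming zeroes can overstate write throughput on ZFS if compression is enabled.

```shell
#!/bin/sh
# Rough sketch of a sequential read/write test with dd.
# TESTFILE and the sizes below are illustrative assumptions only.
TESTFILE="$(mktemp)"
BS=1048576          # 1 MiB blocks; numeric bs works on both BSD and GNU dd
COUNT=64            # 64 MiB total test file

# Sequential write: dd prints its own throughput summary on stderr.
dd if=/dev/zero of="$TESTFILE" bs=$BS count=$COUNT 2>&1 | tail -n 1

# Sequential read: stream the file back into /dev/null.
dd if="$TESTFILE" of=/dev/null bs=$BS 2>&1 | tail -n 1

rm -f "$TESTFILE"
```

For read numbers to mean anything on ZFS, the test file should be well larger than RAM (or the pool exported and re-imported first); otherwise the figures mostly reflect the ARC cache rather than the disks.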