From: Dan Naumov <dan.naumov@gmail.com>
Date: Sun, 24 Jan 2010 20:29:52 +0200
To: Bob Friesenhahn, sub.mesa@gmail.com
Cc: freebsd-fs@freebsd.org, FreeBSD-STABLE Mailing List, freebsd-questions@freebsd.org
Subject: Re: 8.0-RELEASE/amd64 - full ZFS install - low read and write disk performance

On Sun, Jan 24, 2010 at 8:12 PM, Bob Friesenhahn wrote:
> On Sun, 24 Jan 2010, Dan Naumov wrote:
>>
>> This works out to 1 GB in 36.2 seconds (28.2 MB/s) in the first test
>> and 4 GB in 143.8 seconds (28.4 MB/s) in the second, roughly
>> consistent with the bonnie results. It also sadly seems to confirm
>> the very slow speed :( The disks are attached to a 4-port Sil3124
>> controller and again, my Windows benchmarks showing 65 MB/s+ were
>> done on the exact same machine, with the same disks attached to the
>> same controller. The only difference was that in Windows the disks
>> weren't in a mirror configuration but were tested individually. I do
>> understand that a mirror setup offers roughly the same write speed
>> as an individual disk, while the read speed usually varies from
>> "equal to an individual disk" to "nearly the combined throughput of
>> both disks" depending on the implementation, but I see no obvious
>> reason why my setup should offer both read and write speeds at
>> roughly 1/3 to 1/2 of what the individual disks are capable of.
>> Dmesg shows:
>
> There is a misstatement in the above, namely that a "mirror setup
> offers roughly the same write speed as an individual disk". It is
> possible for a mirror setup to offer a similar write speed to an
> individual disk, but it is also quite possible to get 1/2 (or even
> 1/3) of that speed. A ZFS write to a mirror pair requires two
> independent writes. If these writes go down independent I/O paths,
> then there is hardly any overhead from the 2nd write. If the writes
> go through a bandwidth-limited shared path, then they will contend
> for that bandwidth and you will see much less write performance.
>
> As a simple test, you can temporarily remove the mirror device from
> the pool and see if the write performance dramatically improves.
> Before doing that, it is useful to look at the output of 'iostat -x 30'
> while under heavy write load to see if one device shows a much
> higher svc_t value than the other.
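(For anyone following along, Bob's suggested check boils down to a
short command sequence like the sketch below. The pool name "tank" and
the device "ad8s1a" are from my setup; the dd target path is just an
example, and the mirror resilvers on its own once the disk is brought
back online.)

# 1. Under heavy write load, watch per-device service times; one disk
#    showing a much higher svc_t than the other points at that disk or
#    its I/O path as the bottleneck:
iostat -x 30

# 2. Temporarily detach one half of the mirror, then repeat the write test:
zpool offline tank ad8s1a
dd if=/dev/zero of=/home/jago/test3 bs=1M count=1024

# 3. Bring the disk back and let ZFS resilver the writes it missed:
zpool online tank ad8s1a
zpool status tank

And here is what actually happened when I tried the offline test: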
Ow, ow, WHOA:

atombsd# zpool offline tank ad8s1a

[jago@atombsd ~]$ dd if=/dev/zero of=/home/jago/test3 bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 16.826016 secs (63814382 bytes/sec)

Offlining one half of the mirror bumps the dd write speed from 28 MB/s
to 64 MB/s! Let's see how the Bonnie results change:

Mirror with both halves attached:

              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
         8192 18235 46.7 23137 19.9 13927 13.6 24818 49.3 44919 17.3 134.3  2.1

Mirror with one half offline:

              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
         1024 22888 58.0 41832 35.1 22764 22.0 26775 52.3 54233 18.3 166.0  1.6

OK, the Bonnie results have improved, but only a little.

- Sincerely,
Dan Naumov