From owner-freebsd-stable@FreeBSD.ORG Mon Oct 4 17:31:08 2010
From: "Dan Langille" <dan@langille.org>
To: "Martin Matuska"
Cc: freebsd-stable, Artem Belevich, Dan Langille
Date: Mon, 4 Oct 2010 13:31:07 -0400
Subject: Re: zfs send/receive: is this slow?
Message-ID: <283dbba8841ab6da40c1d72b05fda618.squirrel@nyi.unixathome.org>
In-Reply-To: <4CA981FC.80405@FreeBSD.org>

On Mon, October 4, 2010 3:27 am, Martin Matuska wrote:
> Try using zfs receive with the -v flag (it gives you some stats at the end):
> # zfs send storage/bacula@transfer | zfs receive -v storage/compressed/bacula
>
> And use the following sysctl (you may set it in /boot/loader.conf, too):
> # sysctl vfs.zfs.txg.write_limit_override=805306368
>
> I have good results with the 768MB write limit on systems with at least
> 8GB RAM. With 4GB RAM, you might want to try a lower TXG write limit
> (e.g. 256MB):
> # sysctl vfs.zfs.txg.write_limit_override=268435456
>
> You can experiment with that setting to get the best results on your
> system. A value of 0 means using the calculated default (which is very high).

I will experiment with the above. In the meantime:

> During the operation you can observe what your disks actually do:
> a) via ZFS pool I/O statistics:
> # zpool iostat -v 1
> b) via GEOM:
> # gstat -a
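For the write-limit experiment itself, something along these lines is what I
have in mind. This is only a sketch and untested as written; the
bacula-<limit> target dataset names are made up for the test, and the values
are the two you suggested plus 0 for the calculated default:

  #!/bin/sh
  # Sketch: try each suggested TXG write limit and time a full send/receive.
  # 268435456 = 256MB, 805306368 = 768MB, 0 = calculated default.
  for limit in 268435456 805306368 0; do
      echo "=== vfs.zfs.txg.write_limit_override=${limit} ==="
      sysctl vfs.zfs.txg.write_limit_override=${limit}

      start=$(date +%s)
      zfs send storage/bacula@transfer | zfs receive -v storage/compressed/bacula-${limit}
      end=$(date +%s)
      echo "elapsed: $((end - start)) seconds"

      # discard this copy before the next run
      zfs destroy -r storage/compressed/bacula-${limit}
  done

While each copy runs I'll watch the disks from another terminal with
"zpool iostat -v 1" and "gstat -a", as you describe.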
The following output was produced while the original copy was underway.

$ sudo gstat -a -b -I 20s
dT: 20.002s  w: 20.000s
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
    7    452    387  24801    9.5     64   2128    7.1   79.4  ada0
    7    452    387  24801    9.5     64   2128    7.2   79.4  ada0p1
    4    492    427  24655    6.7     64   2128    6.6   63.0  ada1
    4    494    428  24691    6.9     65   2127    6.6   66.9  ada2
    8    379    313  24798   13.5     65   2127    7.5   78.6  ada3
    5    372    306  24774   14.2     64   2127    7.5   77.6  ada4
   10    355    291  24741   15.9     63   2127    7.4   79.6  ada5
    4    380    316  24807   13.2     64   2128    7.7   77.0  ada6
    7    452    387  24801    9.5     64   2128    7.4   79.7  gpt/disk06-live
    4    492    427  24655    6.7     64   2128    6.7   63.1  ada1p1
    4    494    428  24691    6.9     65   2127    6.6   66.9  ada2p1
    8    379    313  24798   13.5     65   2127    7.6   78.6  ada3p1
    5    372    306  24774   14.2     64   2127    7.6   77.6  ada4p1
   10    355    291  24741   15.9     63   2127    7.5   79.6  ada5p1
    4    380    316  24807   13.2     64   2128    7.8   77.0  ada6p1
    4    492    427  24655    6.8     64   2128    6.9   63.4  gpt/disk01-live
    4    494    428  24691    6.9     65   2127    6.8   67.2  gpt/disk02-live
    8    379    313  24798   13.5     65   2127    7.7   78.8  gpt/disk03-live
    5    372    306  24774   14.2     64   2127    7.8   77.8  gpt/disk04-live
   10    355    291  24741   15.9     63   2127    7.7   79.8  gpt/disk05-live
    4    380    316  24807   13.2     64   2128    8.0   77.2  gpt/disk07-live

$ zpool iostat 10
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
storage     8.08T  4.60T    364    161  41.7M  7.94M
storage     8.08T  4.60T    926    133   112M  5.91M
storage     8.08T  4.60T    738    164  89.0M  9.75M
storage     8.08T  4.60T  1.18K    179   146M  8.10M
storage     8.08T  4.60T  1.09K    193   135M  9.94M
storage     8.08T  4.60T   1010    185   122M  8.68M
storage     8.08T  4.60T  1.06K    184   131M  9.65M
storage     8.08T  4.60T    867    178   105M  11.8M
storage     8.08T  4.60T  1.06K    198   131M  12.0M
storage     8.08T  4.60T  1.06K    185   131M  12.4M

Yesterday's write bandwidth was more like 80-90M. It's down, a lot. I'll look
closer this evening.

> mm
>
> On 4. 10. 2010 4:06, Artem Belevich wrote:
>> On Sun, Oct 3, 2010 at 6:11 PM, Dan Langille wrote:
>>> I'm rerunning my test after I had a drive go offline[1]. But I'm not
>>> getting anything like the previous test:
>>>
>>> time zfs send storage/bacula@transfer | mbuffer | zfs receive
>>> storage/compressed/bacula-buffer
>>>
>>> $ zpool iostat 10 10
>>>                capacity     operations    bandwidth
>>> pool         used  avail   read  write   read  write
>>> ----------  -----  -----  -----  -----  -----  -----
>>> storage     6.83T  5.86T      8     31  1.00M  2.11M
>>> storage     6.83T  5.86T    207    481  25.7M  17.8M
>>
>> It may be worth checking individual disk activity using gstat -f 'da.$'
>>
>> Some time back I had one drive that was noticeably slower than the
>> rest of the drives in a RAID-Z2 vdev and was holding everything back.
>> SMART looked OK, there were no obvious errors, and yet performance was
>> much worse than what I'd expect. gstat clearly showed that one drive
>> was almost constantly busy, with a much lower number of reads and writes
>> per second than its peers.
>>
>> Perhaps the previously fast transfer rates were due to caching effects.
>> I.e. if all the metadata already made it into ARC, subsequent "zfs send"
>> commands would avoid a lot of random seeks and would show much better
>> throughput.
>>
>> --Artem
>> _______________________________________________
>> freebsd-stable@freebsd.org mailing list
>> http://lists.freebsd.org/mailman/listinfo/freebsd-stable
>> To unsubscribe, send any mail to
>> "freebsd-stable-unsubscribe@freebsd.org"
>

-- 
Dan Langille -- http://langille.org/