Date:      Mon, 04 Oct 2010 09:27:56 +0200
From:      Martin Matuska <mm@FreeBSD.org>
To:        Artem Belevich <fbsdlist@src.cx>
Cc:        freebsd-stable <freebsd-stable@freebsd.org>, Dan Langille <dan@langille.org>
Subject:   Re: zfs send/receive: is this slow?
Message-ID:  <4CA981FC.80405@FreeBSD.org>
In-Reply-To: <AANLkTi=-JcAXW3wfJZoQMoQX3885GFpDAJ2Pa3OLKSUE@mail.gmail.com>
References:  <a263c3beaeb0fa3acd82650775e31ee3.squirrel@nyi.unixathome.org> <45cfd27021fb93f9b0877a1596089776.squirrel@nyi.unixathome.org> <AANLkTik0aTDDSNRUBvfX5sMfhW+-nfSV9Q89v+eJo0ov@mail.gmail.com> <4C511EF8-591C-4BB9-B7AA-30D5C3DDC0FF@langille.org> <AANLkTinyHZ1r39AYrV_Wwc2H3B=xMv3vbeDLY2Gc+kez@mail.gmail.com> <4CA68BBD.6060601@langille.org> <4CA929A8.6000708@langille.org> <AANLkTi=-JcAXW3wfJZoQMoQX3885GFpDAJ2Pa3OLKSUE@mail.gmail.com>

Try using zfs receive with the -v flag (it gives you some stats at the end):
# zfs send storage/bacula@transfer | zfs receive -v storage/compressed/bacula
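
When the receive completes, -v prints a one-line summary; with made-up
numbers the output looks something like:

received 1.50TB stream in 9000 seconds (175MB/sec)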

And use the following sysctl (you may set that in /boot/loader.conf, too):
# sysctl vfs.zfs.txg.write_limit_override=805306368
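
If you want the setting to persist across reboots, the equivalent
/boot/loader.conf line is:

vfs.zfs.txg.write_limit_override="805306368"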

I have had good results with a 768MB write limit on systems with at
least 8GB RAM. With 4GB of RAM, you might want to set the TXG write
limit to a lower threshold (e.g. 256MB):
# sysctl vfs.zfs.txg.write_limit_override=268435456
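
Both values are just megabytes expressed in bytes; you can verify them
in the shell:

# echo $((768 * 1024 * 1024))
805306368
# echo $((256 * 1024 * 1024))
268435456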

You can experiment with this setting to get the best results on your
system. A value of 0 means the calculated default is used (which is very high).

During the operation you can observe what your disks actually do:
a) via ZFS pool I/O statistics:
# zpool iostat -v 1
b) via GEOM:
# gstat -a
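
gstat also accepts a regex filter (-f) if you only want to watch the
disks backing the pool; assuming da* devices, something like:

# gstat -f 'da.$'

makes a single slow or overloaded drive easy to spot.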

mm

On 4. 10. 2010 4:06, Artem Belevich wrote:
> On Sun, Oct 3, 2010 at 6:11 PM, Dan Langille <dan@langille.org> wrote:
>> I'm rerunning my test after I had a drive go offline[1].  But I'm not
>> getting anything like the previous test:
>>
>> time zfs send storage/bacula@transfer | mbuffer | zfs receive
>> storage/compressed/bacula-buffer
>>
>> $ zpool iostat 10 10
>>               capacity     operations    bandwidth
>> pool         used  avail   read  write   read  write
>> ----------  -----  -----  -----  -----  -----  -----
>> storage     6.83T  5.86T      8     31  1.00M  2.11M
>> storage     6.83T  5.86T    207    481  25.7M  17.8M
> 
> It may be worth checking individual disk activity using gstat -f 'da.$'
> 
> Some time back I had one drive that was noticeably slower than the
> rest of the drives in a RAID-Z2 vdev and was holding everything back.
> SMART looked OK, there were no obvious errors, and yet performance was
> much worse than what I'd expect. gstat clearly showed that one drive
> was almost constantly busy with a much lower number of reads and
> writes per second than its peers.
> 
> Perhaps the previously fast transfer rates were due to caching
> effects. That is, if all the metadata had already made it into the
> ARC, subsequent "zfs send" commands would avoid a lot of random seeks
> and would show much better throughput.
> 
> --Artem


