Date:      Mon, 4 Oct 2010 13:31:07 -0400
From:      "Dan Langille" <dan@langille.org>
To:        "Martin Matuska" <mm@FreeBSD.org>
Cc:        freebsd-stable <freebsd-stable@freebsd.org>, Artem Belevich <fbsdlist@src.cx>, Dan Langille <dan@langille.org>
Subject:   Re: zfs send/receive: is this slow?
Message-ID:  <283dbba8841ab6da40c1d72b05fda618.squirrel@nyi.unixathome.org>
In-Reply-To: <4CA981FC.80405@FreeBSD.org>
References:  <a263c3beaeb0fa3acd82650775e31ee3.squirrel@nyi.unixathome.org> <45cfd27021fb93f9b0877a1596089776.squirrel@nyi.unixathome.org> <AANLkTik0aTDDSNRUBvfX5sMfhW%2B-nfSV9Q89v%2BeJo0ov@mail.gmail.com> <4C511EF8-591C-4BB9-B7AA-30D5C3DDC0FF@langille.org> <AANLkTinyHZ1r39AYrV_Wwc2H3B=xMv3vbeDLY2Gc%2Bkez@mail.gmail.com> <4CA68BBD.6060601@langille.org> <4CA929A8.6000708@langille.org> <AANLkTi=-JcAXW3wfJZoQMoQX3885GFpDAJ2Pa3OLKSUE@mail.gmail.com> <4CA981FC.80405@FreeBSD.org>


On Mon, October 4, 2010 3:27 am, Martin Matuska wrote:
> Try using zfs receive with the -v flag (gives you some stats at the end):
> # zfs send storage/bacula@transfer | zfs receive -v
> storage/compressed/bacula
>
> And use the following sysctl (you may set that in /boot/loader.conf, too):
> # sysctl vfs.zfs.txg.write_limit_override=805306368
>
> I have good results with the 768MB write limit on systems with at least
> 8GB RAM. With 4GB RAM, you might want to set the TXG write limit
> to a lower threshold (e.g. 256MB):
> # sysctl vfs.zfs.txg.write_limit_override=268435456
>
> You can experiment with that setting to get the best results on your
> system. A value of 0 means using the calculated default (which is very high).
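
If I keep the setting, I believe the persistent equivalent would be a tunable
in /boot/loader.conf along these lines (untested here so far; the values are
taken straight from your examples, and dropping the line should restore the
calculated default):

# /boot/loader.conf -- sketch only, not yet tested on this box
vfs.zfs.txg.write_limit_override="805306368"  # 768MB; 268435456 (256MB) for a 4GB RAM box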

I will experiment with the above.  In the meantime:

> During the operation you can observe what your disks actually do:
> a) via ZFS pool I/O statistics:
> # zpool iostat -v 1
> b) via GEOM:
> # gstat -a

The following output was produced while the original copy was underway.

$ sudo gstat -a -b -I 20s
dT: 20.002s  w: 20.000s
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
    7    452    387  24801    9.5     64   2128    7.1   79.4  ada0
    7    452    387  24801    9.5     64   2128    7.2   79.4  ada0p1
    4    492    427  24655    6.7     64   2128    6.6   63.0  ada1
    4    494    428  24691    6.9     65   2127    6.6   66.9  ada2
    8    379    313  24798   13.5     65   2127    7.5   78.6  ada3
    5    372    306  24774   14.2     64   2127    7.5   77.6  ada4
   10    355    291  24741   15.9     63   2127    7.4   79.6  ada5
    4    380    316  24807   13.2     64   2128    7.7   77.0  ada6
    7    452    387  24801    9.5     64   2128    7.4   79.7  gpt/disk06-live
    4    492    427  24655    6.7     64   2128    6.7   63.1  ada1p1
    4    494    428  24691    6.9     65   2127    6.6   66.9  ada2p1
    8    379    313  24798   13.5     65   2127    7.6   78.6  ada3p1
    5    372    306  24774   14.2     64   2127    7.6   77.6  ada4p1
   10    355    291  24741   15.9     63   2127    7.5   79.6  ada5p1
    4    380    316  24807   13.2     64   2128    7.8   77.0  ada6p1
    4    492    427  24655    6.8     64   2128    6.9   63.4  gpt/disk01-live
    4    494    428  24691    6.9     65   2127    6.8   67.2  gpt/disk02-live
    8    379    313  24798   13.5     65   2127    7.7   78.8  gpt/disk03-live
    5    372    306  24774   14.2     64   2127    7.8   77.8  gpt/disk04-live
   10    355    291  24741   15.9     63   2127    7.7   79.8  gpt/disk05-live
    4    380    316  24807   13.2     64   2128    8.0   77.2  gpt/disk07-live


$ zpool iostat 10
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
storage     8.08T  4.60T    364    161  41.7M  7.94M
storage     8.08T  4.60T    926    133   112M  5.91M
storage     8.08T  4.60T    738    164  89.0M  9.75M
storage     8.08T  4.60T  1.18K    179   146M  8.10M
storage     8.08T  4.60T  1.09K    193   135M  9.94M
storage     8.08T  4.60T   1010    185   122M  8.68M
storage     8.08T  4.60T  1.06K    184   131M  9.65M
storage     8.08T  4.60T    867    178   105M  11.8M
storage     8.08T  4.60T  1.06K    198   131M  12.0M
storage     8.08T  4.60T  1.06K    185   131M  12.4M

Yesterday's write bandwidth was more like 80-90M.  It's down, a lot.

I'll look closer this evening.
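
When I do, I'll probably start with the per-disk check Artem suggested,
adapted for the ada devices in this pool (the filter regex is my guess at
matching the whole disks rather than the gpt partitions):

$ gstat -a -b -I 20s -f 'ada[0-6]$'

If one of the seven drives consistently shows higher ms/r and %busy with
fewer ops/s than its peers, that would point at a single slow disk holding
the vdev back.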


>
> mm
>
> On 4. 10. 2010 4:06, Artem Belevich wrote:
>> On Sun, Oct 3, 2010 at 6:11 PM, Dan Langille <dan@langille.org> wrote:
>>> I'm rerunning my test after I had a drive go offline[1].  But I'm not
>>> getting anything like the previous test:
>>>
>>> time zfs send storage/bacula@transfer | mbuffer | zfs receive
>>> storage/compressed/bacula-buffer
>>>
>>> $ zpool iostat 10 10
>>>               capacity     operations    bandwidth
>>> pool         used  avail   read  write   read  write
>>> ----------  -----  -----  -----  -----  -----  -----
>>> storage     6.83T  5.86T      8     31  1.00M  2.11M
>>> storage     6.83T  5.86T    207    481  25.7M  17.8M
>>
>> It may be worth checking individual disk activity using gstat -f 'da.$'
>>
>> Some time back I had one drive that was noticeably slower than the
>> rest of the drives in the RAID-Z2 vdev and was holding everything back.
>> SMART looked OK, there were no obvious errors and yet performance was
>> much worse than what I'd expect. gstat clearly showed that one drive
>> was almost constantly busy with a much lower number of reads and writes
>> per second than its peers.
>>
>> Perhaps previously fast transfer rates were due to caching effects.
>> I.e. if all metadata already made it into ARC, subsequent "zfs send"
>> commands would avoid a lot of random seeks and would show much better
>> throughput.
>>
>> --Artem
>> _______________________________________________
>> freebsd-stable@freebsd.org mailing list
>> http://lists.freebsd.org/mailman/listinfo/freebsd-stable
>> To unsubscribe, send any mail to
>> "freebsd-stable-unsubscribe@freebsd.org"
>
>


-- 
Dan Langille -- http://langille.org/



