Date:      Thu, 13 Jun 2013 11:56:27 -0400
From:      Jona Schuman <jonaschuman@gmail.com>
To:        Ivailo Tanusheff <Ivailo.Tanusheff@skrill.com>
Cc:        "freebsd-fs@freebsd.org" <freebsd-fs@freebsd.org>
Subject:   Re: zfs send/recv dies when transferring large-ish dataset
Message-ID:  <CAC-LZTajW0SO_dH9ZtUH80zX628vcosL_vOzwkwB1JF1PZy0qA@mail.gmail.com>
In-Reply-To: <57e0551229684b69bc27476b8a08fb91@DB3PR07MB059.eurprd07.prod.outlook.com>
References:  <CAC-LZTYLzFPTvA6S4CN0xTd-E_x9c3kxYwQoFed5LkVBrwVk0Q@mail.gmail.com> <57e0551229684b69bc27476b8a08fb91@DB3PR07MB059.eurprd07.prod.outlook.com>

machine2# nc -d -l 9999 | zfs receive -v -F -d storagepool
machine1# zfs send -v -R dataset@snap | nc machine2 9999

machine1-output: sending from @ to dataset@snap
machine2-output: receiving full stream of dataset@snap into
storagepool/dataset@snap
machine1-output: warning: cannot send 'dataset@snap': Broken pipe
machine1-output: Broken pipe
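
For what it's worth, the "Broken pipe" warning above only means the reader
side of the pipe went away first; the writer is killed by SIGPIPE, and the
shell reports only the rightmost command's exit status. A minimal sketch
(generic shell, no ZFS involved) of why such a death can be silent:

```shell
# Sketch: 'head' exits after reading one line, so 'yes' is then killed by
# SIGPIPE when it next writes. The pipeline's exit status is head's (0),
# so the writer's death is invisible unless the writer itself prints a
# warning -- as zfs send does here when run with -v.
yes | head -n 1 > /dev/null
echo "pipeline exit status: $?"
# prints: pipeline exit status: 0
```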


On Thu, Jun 13, 2013 at 3:42 AM, Ivailo Tanusheff
<Ivailo.Tanusheff@skrill.com> wrote:
> Hi,
>
> Can you try send/recv with the -v or with -vP switches, so you can see more
> verbose information?
>
> Regards,
> Ivailo Tanusheff
>
> -----Original Message-----
> From: owner-freebsd-fs@freebsd.org [mailto:owner-freebsd-fs@freebsd.org] =
On Behalf Of Jona Schuman
> Sent: Thursday, June 13, 2013 2:41 AM
> To: freebsd-fs@freebsd.org
> Subject: zfs send/recv dies when transferring large-ish dataset
>
> Hi,
>
> I'm getting some strange behavior from zfs send/recv and I'm hoping
> someone may be able to provide some insight. I have two identical
> machines running 9.0-RELEASE-p3, each having a ZFS pool (zfs 5, zpool 28)
> for storage. I want to use zfs send/recv for replication between the two
> machines. For the most part, this has worked as expected. However,
> send/recv fails when transferring the largest dataset (both in actual
> size and in terms of number of files) on either machine.
> With these datasets, issuing:
>
> machine2# nc -d -l 9999 | zfs recv -d storagepool
> machine1# zfs send dataset@snap | nc machine2 9999
>
> terminates early on the sending side without any error messages. The
> receiving end continues on as expected, cleaning up the partial data
> received so far and reverting to its initial state. (I've tried using
> mbuffer instead of nc, or just using ssh, both with similar results.)
> Oddly, zfs send dies slightly differently depending on how the two
> machines are connected. When connected through the rack-top switch, zfs
> send dies quietly without any indication that the transfer has failed.
> When connected directly using a crossover cable, zfs send dies quietly
> and machine1 becomes unresponsive (no network, no keyboard, hard reset
> required). In both cases, no messages are printed to screen or to
> anything in /var/log/.
>
>
> I can transfer the same datasets successfully if I send/recv to/from file:
>
> machine1# zfs send dataset@snap > /tmp/dump
> machine1# scp /tmp/dump machine2:/tmp/dump
> machine2# zfs recv -d storagepool < /tmp/dump
>
> so I don't think the datasets themselves are the issue. I've also
> successfully tried send/recv over the network using different network
> interfaces (10GbE ixgbe cards instead of the 1GbE igb links), which
> would suggest the issue is with the 1GbE links.
>
> Might there be some buffering parameter that I'm neglecting to tune,
> which is essential on the 1GbE links but may be less important on the
> faster links? Are there any known issues with the igb driver that might
> be the culprit here? Any other suggestions?
>
> Thanks,
> Jona
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
>
>


