Date:      Fri, 7 Sep 2012 15:53:52 GMT
From:      Martin Birgmeier <Martin.Birgmeier@aon.at>
To:        freebsd-gnats-submit@FreeBSD.org
Subject:   kern/171415: [zfs] zfs recv fails with "cannot receive incremental stream: invalid backup stream"
Message-ID:  <201209071553.q87Frqrj065306@red.freebsd.org>
Resent-Message-ID: <201209071600.q87G0Bcg098646@freefall.freebsd.org>


>Number:         171415
>Category:       kern
>Synopsis:       [zfs] zfs recv fails with "cannot receive incremental stream: invalid backup stream"
>Confidential:   no
>Severity:       non-critical
>Priority:       low
>Responsible:    freebsd-bugs
>State:          open
>Quarter:        
>Keywords:       
>Date-Required:
>Class:          sw-bug
>Submitter-Id:   current-users
>Arrival-Date:   Fri Sep 07 16:00:10 UTC 2012
>Closed-Date:
>Last-Modified:
>Originator:     Martin Birgmeier
>Release:        8.2.0 + head as of 2012-09-01
>Organization:
MBi at home
>Environment:
FreeBSD hal.xyzzy 8.2-RELEASE FreeBSD 8.2-RELEASE #4: Sat Aug 27 09:30:11 CEST 2011     root@hal.xyzzy:/z/OBJ/FreeBSD/amd64/RELENG_8_2_0_RELEASE/src/sys/XYZZY_SMP  amd64

FreeBSD v903.xyzzy 10.0-CURRENT FreeBSD 10.0-CURRENT #1: Sat Sep  1 17:30:01 CEST 2012     root@v903.xyzzy:/usr/obj/.../hal/z/SRC/FreeBSD/head/sys/XYZZY_SMP  amd64

>Description:
I have a machine "hal", running release/8.2.0, which is equipped with six 2 TB disks, labeled disk31..disk36. Each disk is more or less split in half using gpart(8), yielding partitions disk31p3..disk36p3 and disk31p4..disk36p4, each slightly less than 1 TB in size.
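For reference, the two halves on each disk could have been created with gpart commands roughly like the following (illustrative only -- the device name ada1, the 931G size, and the use of GPT labels are assumptions; p1/p2 are presumed to hold boot/swap or similar):

gpart add -t freebsd-zfs -l disk31p3 -s 931G ada1    # first half, later combined into hal.1
gpart add -t freebsd-zfs -l disk31p4 ada1            # second half, later exported via iSCSI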

The first set of these halves, disk31p3..disk36p3, is combined into a raidz2 zpool "hal.1", yielding approximately 4 TB. This pool is used for production. It was created on 2010-10-22, with zpool version 14 and zfs version 3 ("zpool get version hal.1", "zfs get version hal.1").
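The pool itself would have been created with something along these lines (sketch only; the gpt/ label paths are an assumption -- at the time this produced a version 14 pool):

zpool create hal.1 raidz2 gpt/disk31p3 gpt/disk32p3 gpt/disk33p3 \
    gpt/disk34p3 gpt/disk35p3 gpt/disk36p3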

The second set of these halves, disk31p4..disk36p4, is exported as iSCSI targets using net/istgt. These are used for experimental purposes.
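The corresponding istgt configuration is not reproduced here; one such export looks roughly like the following istgt.conf fragment (the target name, group names, and device path are assumptions):

[LogicalUnit1]
  TargetName   disk31p4
  Mapping      PortalGroup1 InitiatorGroup1
  UnitType     Disk
  LUN0 Storage /dev/gpt/disk31p4 Auto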

On hal, using emulators/virtualbox-ose, I run a virtual machine v903, which I usually keep quite close to head. In v903, using iscontrol(8), I import the iSCSI targets, yielding (in order) da0..da5.
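iscontrol -n relies on nickname entries in /etc/iscsi.conf on v903; these look roughly as follows (the target IQN shown is an assumption based on istgt's default naming):

disk31p4 {
    targetaddress = hal
    targetname    = iqn.2007-09.jp.ne.peach.istgt:disk31p4
}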

I want to see what happens when I create a zpool under head and try to duplicate the file systems from hal.1 into it. To test this with the file system "hal.1/backup/databases", I start v903 and run the following on it:

[0]# kldload iscsi_initiator.ko
iscsi: version 2.3.1
[0]# for i in disk3{1..6}p4
do
echo "*** $i ***"
iscontrol -n ${i} || break 
sleep 1
done
*** disk31p4 ***
iscontrol[1257]: running
iscontrol[1257]: (pass2:iscsi0:0:0:0):  tagged openings now 0
*** disk32p4 ***
iscontrol[1262]: running
iscontrol[1262]: (pass3:iscsi1:0:0:0):  tagged openings now 0
*** disk33p4 ***
iscontrol[1267]: running
iscontrol[1267]: (pass4:iscsi2:0:0:0):  tagged openings now 0
*** disk34p4 ***
iscontrol[1272]: running
iscontrol[1272]: (pass5:iscsi3:0:0:0):  tagged openings now 0
*** disk35p4 ***
iscontrol[1277]: running
iscontrol[1277]: (pass6:iscsi4:0:0:0):  tagged openings now 0
*** disk36p4 ***
iscontrol[1282]: running
iscontrol[1282]: (pass7:iscsi5:0:0:0):  tagged openings now 0
[0]# zpool create v903.2 raidz2 da0 da1 da2 da3 da4 da5
ZFS NOTICE: Prefetch is disabled by default if less than 4GB of RAM is present;
            to enable, add "vfs.zfs.prefetch_disable=0" to /boot/loader.conf.
ZFS filesystem version: 5
ZFS storage pool version: features support (5000)
[0]# zfs create v903.2/backup
[0]# rsh -n hal "zfs send -vR hal.1/backup/databases@2012-07-21.14:23:36" | zfs receive -v v903.2/backup/databases
sending from @ to hal.1/backup/databases@2011-05-01.08:34:45
receiving full stream of hal.1/backup/databases@2011-05-01.08:34:45 into v903.2/backup/databases@2011-05-01.08:34:45
sending from @2011-05-01.08:34:45 to hal.1/backup/databases@2011-06-02.20:42:45
received 48.2MB stream in 13 seconds (3.71MB/sec)
receiving incremental stream of hal.1/backup/databases@2011-06-02.20:42:45 into v903.2/backup/databases@2011-06-02.20:42:45
sending from @2011-06-02.20:42:45 to hal.1/backup/databases@2011-10-15.13:56:24
received 254MB stream in 70 seconds (3.63MB/sec)
receiving incremental stream of hal.1/backup/databases@2011-10-15.13:56:24 into v903.2/backup/databases@2011-10-15.13:56:24
sending from @2011-10-15.13:56:24 to hal.1/backup/databases@2011-10-27.19:21:00
received 268MB stream in 75 seconds (3.57MB/sec)
receiving incremental stream of hal.1/backup/databases@2011-10-27.19:21:00 into v903.2/backup/databases@2011-10-27.19:21:00
sending from @2011-10-27.19:21:00 to hal.1/backup/databases@2012-03-19.21:39:06
received 305MB stream in 82 seconds (3.72MB/sec)
receiving incremental stream of hal.1/backup/databases@2012-03-19.21:39:06 into v903.2/backup/databases@2012-03-19.21:39:06
sending from @2012-03-19.21:39:06 to hal.1/backup/databases@2012-07-21.14:23:36
received 345MB stream in 101 seconds (3.42MB/sec)
receiving incremental stream of hal.1/backup/databases@2012-07-21.14:23:36 into v903.2/backup/databases@2012-07-21.14:23:36
cannot receive incremental stream: invalid backup stream
rsh -n hal "zfs send -vR hal.1/backup/databases@2012-07-21.14:23:36"  0.22s user 21.94s system 5% cpu 6:33.67 total
zfs receive -v v903.2/backup/databases  0.02s user 5.46s system 1% cpu 6:37.24 total
[1]# 

As can be seen, receiving the incremental stream for the snapshot created on 2012-07-21 fails.
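A possible way to narrow this down would be to send just the failing increment outside the -R replication stream, e.g. (sketch only):

rsh -n hal "zfs send -v -i hal.1/backup/databases@2012-03-19.21:39:06 \
    hal.1/backup/databases@2012-07-21.14:23:36" | zfs receive -v v903.2/backup/databases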

On hal, I have the following:

[0]# zpool history | grep backup/databases
2010-12-04.10:37:37 zfs create hal.1/backup/databases
2011-05-01.08:34:45 zfs snapshot hal.1/backup/databases@2011-05-01.08:34:45
2011-06-02.20:42:46 zfs snapshot hal.1/backup/databases@2011-06-02.20:42:45
2011-10-15.13:56:24 zfs snapshot hal.1/backup/databases@2011-10-15.13:56:24
2011-10-27.19:21:00 zfs snapshot hal.1/backup/databases@2011-10-27.19:21:00
2012-03-19.21:39:07 zfs snapshot hal.1/backup/databases@2012-03-19.21:39:06
2012-07-21.14:23:37 zfs snapshot hal.1/backup/databases@2012-07-21.14:23:36
[0]# 

Googling reveals very little information about this problem. One possibility might be http://wesunsolve.net/bugid/id/7002362, but as the pool history above shows, this file system has never been renamed.

I have also tried a similar command with another file system in hal.1; that one already fails when receiving the first incremental stream.
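The failing increment could also be inspected with zstreamdump(8), if available, to see whether the stream headers or checksums are already bad before zfs receive rejects it, e.g. (sketch only):

rsh -n hal "zfs send -i hal.1/backup/databases@2012-03-19.21:39:06 \
    hal.1/backup/databases@2012-07-21.14:23:36" | zstreamdump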
>How-To-Repeat:

>Fix:


>Release-Note:
>Audit-Trail:
>Unformatted:


