Date:      Mon, 05 Jan 2015 04:03:26 +0000
From:      bugzilla-noreply@freebsd.org
To:        freebsd-bugs@FreeBSD.org
Subject:   [Bug 196498] zpool create panic with file-backed pool
Message-ID:  <bug-196498-8@https.bugs.freebsd.org/bugzilla/>

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=196498

            Bug ID: 196498
           Summary: zpool create panic with file-backed pool
           Product: Base System
           Version: 10.1-STABLE
          Hardware: amd64
                OS: Any
            Status: New
          Severity: Affects Some People
          Priority: ---
         Component: kern
          Assignee: freebsd-bugs@FreeBSD.org
          Reporter: editor@callfortesting.org

File-backed zpools appear to have experienced a regression in 10.1 that results in a
kernel panic. The bug appears to be absent in 9.3 and 10-STABLE.

To reproduce it:

# truncate -s 300M foo.img
# zpool create foo /dev/foo.img

Fatal trap 12: page fault while in kernel mode
cpuid = 2; apic id = 02
fault virtual address    = 0x0
fault code        = supervisor read data, page not present
instruction pointer    = 0x20:0xffffffff80d3e2d6
stack pointer            = 0x28:0xfffffe011efbce50
frame pointer            = 0x28:0xfffffe011efbceb0
code segment        = base 0x0, limit 0xfffff, type 0x1b
            = DPL 0, pres 1, long 1, def32 0, gran 1
processor eflags    = interrupt enabled, resume, IOPL = 0
current process        = 1040 (zpool)
[ thread pid 1040 tid 100401 ]
Stopped at      bcopy+0x16:     repe movsq      (%rsi),%es:(%rdi)
db> bt
Tracing pid 1040 tid 100401 td 0xfffff8000b976920
bcopy() at bcopy+0x16/frame 0xfffffe011efbceb0
dmu_write_uio_dnode() at dmu_write_uio_dnode+0xcc/frame 0xfffffe011efbcf30
dmu_write_uio_dbuf() at dmu_write_uio_dbuf+0x3b/frame 0xfffffe011efbcf60
zfs_freebsd_write() at zfs_freebsd_write+0x5e2/frame 0xfffffe011efbd190
VOP_WRITE_APV() at VOP_WRITE_APV+0x145/frame 0xfffffe011efbd2a0
vn_rdwr() at vn_rdwr+0x299/frame 0xfffffe011efbd380
vdev_file_io_start() at vdev_file_io_start+0x165/frame 0xfffffe011efbd400
zio_vdev_io_start() at zio_vdev_io_start+0x326/frame 0xfffffe011efbd460
zio_execute() at zio_execute+0x162/frame 0xfffffe011efbd4c0
zio_wait() at zio_wait+0x23/frame 0xfffffe011efbd4f0
vdev_label_init() at vdev_label_init+0x22d/frame 0xfffffe011efbd5c0
vdev_label_init() at vdev_label_init+0x57/frame 0xfffffe011efbd690
vdev_create() at vdev_create+0x54/frame 0xfffffe011efbd6c0
spa_create() at spa_create+0x217/frame 0xfffffe011efbd750
zfs_ioc_pool_create() at zfs_ioc_pool_create+0x25d/frame 0xfffffe011efbd7d0
zfsdev_ioctl() at zfsdev_ioctl+0x6f0/frame 0xfffffe011efbd890
devfs_ioctl_f() at devfs_ioctl_f+0x114/frame 0xfffffe011efbd8e0
kern_ioctl() at kern_ioctl+0x255/frame 0xfffffe011efbd950
sys_ioctl() at sys_ioctl+0x13c/frame 0xfffffe011efbd9a0
amd64_syscall() at amd64_syscall+0x351/frame 0xfffffe011efbdab0
Xfast_syscall() at Xfast_syscall+0xfb/frame 0xfffffe011efbdab0
--- syscall (54, FreeBSD ELF64, sys_ioctl), rip = 0x8019f9b9a, rsp =
0x7fffffffb9e8, rbp = 0x7fffffffba60 ---
db>

I have tested this with backing files from 64M (the ZFS minimum vdev size) through
300M.
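For reference, a minimal sketch of preparing backing files at the boundary sizes
tested above (the filenames here are illustrative, not the ones from the original
report). truncate(1) creates sparse files, so they consume almost no disk space
until written; it is the subsequent `zpool create` on such a file that triggers
the panic.

```shell
# Create sparse backing files at the smallest and largest sizes tested.
# 64M is the minimum vdev size ZFS accepts.
truncate -s 64M  zfs-min.img
truncate -s 300M zfs-big.img

# Confirm the logical sizes: 67108864 and 314572800 bytes respectively.
wc -c zfs-min.img zfs-big.img
```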

Confirmed by grehan@

Removing this functionality is not a valid solution.

-- 
You are receiving this mail because:
You are the assignee for the bug.
