From: bugzilla-noreply@freebsd.org
To: bugs@FreeBSD.org
Subject: [Bug 237637] ZFS kernel panic after removing a vdev
Date: Mon, 29 Apr 2019 05:52:36 +0000

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=237637

            Bug ID: 237637
           Summary: ZFS kernel panic after removing a vdev
           Product: Base System
           Version: 11.2-RELEASE
          Hardware: Any
                OS: Any
            Status: New
          Severity: Affects Only Me
          Priority: ---
         Component: kern
          Assignee: bugs@FreeBSD.org
          Reporter: thalunil@kallisti.at

Hi,

on FreeBSD 11.2-RELEASE-p9 I removed a vdev from a ZFS pool. ZFS then
started "evacuating" the data on this device and proceeded for about 40
minutes. When zpool status reported 100%, the system panicked and
rebooted. According to man 7 zpool-features, device_removal is supported.

Current zpool status (after invoking it, the kernel immediately crashes):

  pool: zfspool
 state: ONLINE
  scan: scrub repaired 0 in 4h10m with 0 errors on Sat Mar 16 23:39:35 2019
remove: Removal of vdev 4 copied 49.9G in 0h40m, completed on Sun Apr 21 21:01:31 2019
        1.49M memory used for removed device mappings
config:

        NAME        STATE     READ WRITE CKSUM
        zfspool     ONLINE       0     0     0
          da1       ONLINE       0     0     0
          da2       ONLINE       0     0     0
          da0       ONLINE       0     0     0

errors: No known data errors

Example kernel panic:

ZFS filesystem version: 5
ZFS storage pool version: features support (5000)

Fatal trap 12: page fault while in kernel mode
cpuid = 0; apic id = 00
fault virtual address   = 0x0
fault code              = supervisor read data, page not present
instruction pointer     = 0x20:0xffffffff8246e994
stack pointer           = 0x28:0xfffffe02384547e0
frame pointer           = 0x28:0xfffffe0238454810
code segment            = base 0x0, limit 0xfffff, type 0x1b
                        = DPL 0, pres 1, long 1, def32 0, gran 1
processor eflags        = interrupt enabled, resume, IOPL = 0
current process         = 0 (zio_free_issue_6_6)
trap number             = 12
panic: page fault
cpuid = 0
KDB: stack backtrace:
#0 0xffffffff80b3d5b7 at kdb_backtrace+0x67
#1 0xffffffff80af6b57 at vpanic+0x177
#2 0xffffffff80af69d3 at panic+0x43
#3 0xffffffff80f77fdf at trap_fatal+0x35f
#4 0xffffffff80f78039 at trap_pfault+0x49
#5 0xffffffff80f77807 at trap+0x2c7
#6 0xffffffff80f580cc at calltrap+0x8
#7 0xffffffff824e81d7 at vdev_indirect_io_start_cb+0x37
#8 0xffffffff824e7e58 at vdev_indirect_remap+0x2f8
#9 0xffffffff824e7b3d at vdev_indirect_io_start+0x2d
#10 0xffffffff82512cae at zio_vdev_io_start+0x2ae
#11 0xffffffff8250f75c at zio_execute+0xac
#12 0xffffffff8250f07b at zio_nowait+0xcb
#13 0xffffffff824eb8ef at vdev_mirror_io_start+0x3ff
#14 0xffffffff82512b62 at zio_vdev_io_start+0x162
#15 0xffffffff8250f75c at zio_execute+0xac
#16 0xffffffff80b4edc4 at taskqueue_run_locked+0x154
#17 0xffffffff80b4ff28 at taskqueue_thread_loop+0x98
Uptime: 5d9h32m23s
Dumping 719 out of 8157 MB:..3%..12%..21%..32%..41%..52%..61%..72%..81%..92%
Dump complete
Automatic reboot in 15 seconds - press a key on the console to abort
Rebooting...

Expected behaviour after device removal would be to have a usable, albeit
reduced-size, ZFS pool.

Thanks,
thal

-- 
You are receiving this mail because:
You are the assignee for the bug.
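
For reference, a minimal sketch of the command sequence that appears to have
led to this state. The removed device's name is not given in the report (it
is only identified as "vdev 4"), so da3 below is an assumption:

  # Pool assumed to consist of four single-disk top-level vdevs (da0-da3).
  zpool status zfspool

  # Start top-level vdev removal (requires the device_removal pool feature);
  # ZFS begins evacuating the vdev's data onto the remaining vdevs.
  zpool remove zfspool da3

  # Watch the evacuation: the "remove:" line in the output reports copy
  # progress. The panic described above occurred when the copy reached 100%.
  zpool status zfspool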