From: Bob Bishop <rb@gid.co.uk>
To: FreeBSD Stable <freebsd-stable@freebsd.org>
Date: Mon, 2 Apr 2018 14:47:50 +0100
Subject: ZFS panic, ARC compression?

Hi,

Can anyone offer any suggestions about this?
kernel: panic: solaris assert: arc_decompress(buf) == 0 (0x5 == 0x0), file: /usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c, line: 4923
kernel: cpuid = 1
kernel: KDB: stack backtrace:
kernel: #0 0xffffffff80aadac7 at kdb_backtrace+0x67
kernel: #1 0xffffffff80a6bba6 at vpanic+0x186
kernel: #2 0xffffffff80a6ba13 at panic+0x43
kernel: #3 0xffffffff8248023c at assfail3+0x2c
kernel: #4 0xffffffff8218e2e0 at arc_read+0x9f0
kernel: #5 0xffffffff82198e5e at dbuf_read+0x69e
kernel: #6 0xffffffff821b3db4 at dnode_hold_impl+0x194
kernel: #7 0xffffffff821a11dd at dmu_bonus_hold+0x1d
kernel: #8 0xffffffff8220fb05 at zfs_zget+0x65
kernel: #9 0xffffffff82227d42 at zfs_dirent_lookup+0x162
kernel: #10 0xffffffff82227e07 at zfs_dirlook+0x77
kernel: #11 0xffffffff8223fcea at zfs_lookup+0x44a
kernel: #12 0xffffffff822403fd at zfs_freebsd_lookup+0x6d
kernel: #13 0xffffffff8104b963 at VOP_CACHEDLOOKUP_APV+0x83
kernel: #14 0xffffffff80b13816 at vfs_cache_lookup+0xd6
kernel: #15 0xffffffff8104b853 at VOP_LOOKUP_APV+0x83
kernel: #16 0xffffffff80b1d151 at lookup+0x701
kernel: #17 0xffffffff80b1c606 at namei+0x486

Roughly 24 hours earlier (during the scrub), there was:

ZFS: vdev state changed, pool_guid=11921811386284628759 vdev_guid=1644286782598989949
ZFS: vdev state changed, pool_guid=11921811386284628759 vdev_guid=17800276530669255627

% uname -a
FreeBSD xxxxxxxxxxx 11.1-RELEASE-p4 FreeBSD 11.1-RELEASE-p4 #0: Tue Nov 14 06:12:40 UTC 2017 root@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC amd64
%

% zpool status
  pool: zroot
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: scrub repaired 15.7M in 2h37m with 1 errors on Sun Apr  1 09:44:39 2018
config:

        NAME        STATE     READ WRITE CKSUM
        zroot       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada0p4  ONLINE       0     0     0
            ada1p4  ONLINE       0     0     0

errors: 1 data errors, use '-v' for a list
%

The affected file (in a snapshot) is unimportant. This pool is a daily rsync backup and contains about 120 snapshots. No device or SMART errors were logged.

--
Bob Bishop
rb@gid.co.uk
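[Since the damaged file lives only in a snapshot, one common cleanup path is to list the flagged path, destroy the snapshot that holds it, clear the error counters, and re-scrub. A sketch only; the snapshot name zroot/backup@2018-03-31 below is a made-up placeholder, not taken from this report:]

```shell
# List the path(s) flagged by the scrub; paths inside snapshots are
# reported as dataset@snapshot:/path/to/file.
zpool status -v zroot

# Hypothetical snapshot name -- substitute whatever 'zpool status -v'
# actually reports for the affected file.
zfs destroy zroot/backup@2018-03-31

# Clear the persistent error log, then scrub again to confirm the
# pool comes back clean.
zpool clear zroot
zpool scrub zroot
```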