From owner-freebsd-bugs@freebsd.org Thu Aug 31 06:47:20 2017
From: bugzilla-noreply@freebsd.org
To: freebsd-bugs@FreeBSD.org
Subject: [Bug 219935] Kernel panic in getnewvnode (possibly ZFS related)
Date: Thu, 31 Aug 2017 06:47:19 +0000
X-Bugzilla-Who: raimo+freebsd@erix.ericsson.se
X-Bugzilla-Product: Base System
X-Bugzilla-Component: kern
X-Bugzilla-Version: 10.3-STABLE
X-Bugzilla-Severity: Affects Some People
X-Bugzilla-Status: New
X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=219935

--- Comment #13 from Raimo Niskanen ---

It seems this time it was executing a backup script that did a 'zfs send',
most probably:

    zfs send -R -I weekly-2017-08-26_04.25.43--1m zroot@daily-2017-08-31_03.38.14--1w

This does not seem to be one of my typical panics, though.  The last time
this zfs send failed was June 20th.

#0  doadump (textdump=<value optimized out>) at pcpu.h:219
219     pcpu.h: No such file or directory.
        in pcpu.h
(kgdb) bt
#0  doadump (textdump=<value optimized out>) at pcpu.h:219
#1  0xffffffff80951142 in kern_reboot (howto=260)
    at /usr/src/sys/kern/kern_shutdown.c:486
#2  0xffffffff80951525 in vpanic (fmt=<value optimized out>,
    ap=<value optimized out>) at /usr/src/sys/kern/kern_shutdown.c:889
#3  0xffffffff809513b3 in panic (fmt=0x0)
    at /usr/src/sys/kern/kern_shutdown.c:818
#4  0xffffffff809f77e5 in vholdl (vp=<value optimized out>)
    at /usr/src/sys/kern/vfs_subr.c:2453
#5  0xffffffff809f0f40 in dounmount (mp=0xfffff8003414a660, flags=524288,
    td=0xfffff8002ca3b000) at /usr/src/sys/kern/vfs_mount.c:1223
#6  0xffffffff81a6dfe4 in zfs_unmount_snap (snapname=<value optimized out>)
    at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ioctl.c:3485
#7  0xffffffff81a10663 in dsl_dataset_user_release_impl
    (holds=0xfffff801d2054740, errlist=0x0, tmpdp=0xfffff8002c5cb000)
    at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_userhold.c:581
#8  0xffffffff81a10f2c in dsl_dataset_user_release_onexit
    (arg=0xfffff80034930600)
    at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_userhold.c:629
#9  0xffffffff81a79fb6 in zfs_onexit_destroy (zo=0xfffff8020ee82140)
    at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_onexit.c:93
#10 0xffffffff81a70072 in zfsdev_close (data=0x2)
    at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ioctl.c:5995
#11 0xffffffff80833e19 in devfs_fpdrop (fp=<value optimized out>)
    at /usr/src/sys/fs/devfs/devfs_vnops.c:186
#12 0xffffffff80836585 in devfs_close_f (fp=<value optimized out>,
    td=<value optimized out>) at /usr/src/sys/fs/devfs/devfs_vnops.c:646
#13 0xffffffff80905fa9 in _fdrop (fp=0xfffff8002cf50050, td=0x0) at file.h:344
#14 0xffffffff8090884e in closef (fp=<value optimized out>,
    td=<value optimized out>) at /usr/src/sys/kern/kern_descrip.c:2339
#15 0xffffffff80906358 in closefp (fdp=0xfffff801c76f1800,
    fd=<value optimized out>, fp=0xfffff8002cf50050, td=0xfffff8002ca3b000,
    holdleaders=<value optimized out>) at /usr/src/sys/kern/kern_descrip.c:1195
#16 0xffffffff80d56e9f in amd64_syscall (td=0xfffff8002ca3b000, traced=0)
    at subr_syscall.c:141
#17 0xffffffff80d3c0fb in Xfast_syscall ()
    at /usr/src/sys/amd64/amd64/exception.S:396
#18 0x0000000801a05f3a in ?? ()
Previous frame inner to this frame (corrupt stack?)
Current language:  auto; currently minimal
(kgdb) fr 5
#5  0xffffffff809f0f40 in dounmount (mp=0xfffff8003414a660, flags=524288,
    td=0xfffff8002ca3b000) at /usr/src/sys/kern/vfs_mount.c:1223
1223                    vholdl(coveredvp);
(kgdb) set print pretty
(kgdb) p *mp
$1 = {
  mnt_mtx = {
    lock_object = {
      lo_name = 0xffffffff80fe41c1 "struct mount mtx",
      lo_flags = 16973824,
      lo_data = 0,
      lo_witness = 0x0
    },
    mtx_lock = 4
  },
  mnt_gen = 1,
  mnt_list = {
    tqe_next = 0xfffff8003414a330,
    tqe_prev = 0xfffff8003414a9b8
  },
  mnt_op = 0xffffffff81b047c8,
  mnt_vfc = 0xffffffff81b04780,
  mnt_vnodecovered = 0xfffff8017144e588,
  mnt_syncer = 0x0,
  mnt_ref = 1,
  mnt_nvnodelist = {
    tqh_first = 0x0,
    tqh_last = 0xfffff8003414a6c0
  },
  mnt_nvnodelistsize = 0,
  mnt_activevnodelist = {
    tqh_first = 0x0,
    tqh_last = 0xfffff8003414a6d8
  },
  mnt_activevnodelistsize = 0,
  mnt_writeopcount = 0,
  mnt_kern_flag = 1073742016,
  mnt_flag = 276828185,
  mnt_opt = 0xfffff800842bc9a0,
  mnt_optnew = 0x0,
  mnt_maxsymlinklen = 0,
  mnt_stat = {
    f_version = 537068824,
    f_type = 222,
    f_flags = 0,
    f_bsize = 512,
    f_iosize = 131072,
    f_blocks = 17666176,
    f_bfree = 7687944,
    f_bavail = 7687944,
    f_files = 7962179,
    f_ffree = 7687944,
    f_syncwrites = 0,
    f_asyncwrites = 0,
    f_syncreads = 0,
    f_asyncreads = 0,
    f_spare = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
    f_namemax = 255,
    f_owner = 0,
    f_fsid = {
      val = {1516185005, -21157410}
    },
    f_charspare = '\0' <repeats ... times>,
    f_fstypename = "zfs", '\0' <repeats ... times>,
    f_mntfromname = "zroot/export/otp_support@weekly-2017-08-26_04.25.43--1m", '\0' <repeats ... times>,
    f_mntonname = "/export/otp_support/.zfs/snapshot/weekly-2017-08-26_04.25.43--1m", '\0' <repeats ... times>
  },
  mnt_cred = 0xfffff80181c3a500,
  mnt_data = 0xfffff800aa443000,
  mnt_time = 0,
  mnt_iosize_max = 65536,
  mnt_export = 0x0,
  mnt_label = 0x0,
  mnt_hashseed = 72392620,
  mnt_lockref = 0,
  mnt_secondary_writes = 0,
  mnt_secondary_accwrites = 0,
  mnt_susp_owner = 0x0,
  mnt_gjprovider = 0x0,
  mnt_explock = {
    lock_object = {
      lo_name = 0xffffffff80fc9c44 "explock",
      lo_flags = 108199936,
      lo_data = 0,
      lo_witness = 0x0
    },
    lk_lock = 1,
    lk_exslpfail = 0,
    lk_timo = 0,
    lk_pri = 96
  },
  mnt_upper_link = {
    tqe_next = 0x0,
    tqe_prev = 0x0
  },
  mnt_uppers = {
    tqh_first = 0x0,
    tqh_last = 0xfffff8003414a980
  }
}

-- 
You are receiving this mail because:
You are the assignee for the bug.
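
[Editorial note, not part of the archived message.] For readers unfamiliar with the snapshot naming above: the panic occurred while an incremental, replicated 'zfs send' between two dated snapshots was being closed. A minimal sketch of how such a command line is assembled from the naming scheme visible in this report (the helper names and the Python reconstruction are hypothetical, not the reporter's actual backup script):

```python
from datetime import datetime

def snapshot_name(prefix, when, retention):
    # Names follow the pattern seen in the report,
    # e.g. "daily-2017-08-31_03.38.14--1w": prefix, timestamp, retention tag.
    return "%s-%s--%s" % (prefix, when.strftime("%Y-%m-%d_%H.%M.%S"), retention)

def send_command(dataset, from_snap, to_snap):
    # -R: replicated stream of the dataset and descendants;
    # -I: incremental stream covering all snapshots between the two names.
    return "zfs send -R -I %s %s@%s" % (from_snap, dataset, to_snap)

cmd = send_command(
    "zroot",
    snapshot_name("weekly", datetime(2017, 8, 26, 4, 25, 43), "1m"),
    snapshot_name("daily", datetime(2017, 8, 31, 3, 38, 14), "1w"),
)
print(cmd)
# -> zfs send -R -I weekly-2017-08-26_04.25.43--1m zroot@daily-2017-08-31_03.38.14--1w
```

This reproduces, character for character, the command quoted in comment #13; the backtrace shows the crash in zfs_unmount_snap() while releasing the temporary user holds such a send takes on the snapshots it streams.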