From owner-freebsd-current@FreeBSD.ORG Sat May 31 21:49:33 2014
Date: Sat, 31 May 2014 16:41:10 -0500 (CDT)
From: Dan Mack
To: freebsd-current@freebsd.org
Subject: LOR / ZFS on current 266923

FYI: Just saw this today after a fresh install of 266923 - GENERIC kernel:

root@darkstor:/ # uname -a
FreeBSD darkstor 11.0-CURRENT FreeBSD 11.0-CURRENT #0 r266923: Sat May 31 10:21:54 CDT 2014     root@darkstor:/usr/obj/usr/src/sys/GENERIC  amd64

lock order reversal:
 1st 0xfffff8029cfe79a0 syncer (syncer) @ /usr/src/sys/kern/vfs_subr.c:1720
 2nd 0xfffff8029b2a69a0 zfs (zfs) @ /usr/src/sys/kern/vfs_subr.c:2101
KDB: stack backtrace:
db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame 0xfffffe085efe55a0
kdb_backtrace() at kdb_backtrace+0x39/frame 0xfffffe085efe5650
witness_checkorder() at witness_checkorder+0xdc2/frame 0xfffffe085efe56e0
__lockmgr_args() at __lockmgr_args+0x9ca/frame 0xfffffe085efe5810
vop_stdlock() at vop_stdlock+0x3c/frame 0xfffffe085efe5830
VOP_LOCK1_APV() at VOP_LOCK1_APV+0xfc/frame 0xfffffe085efe5860
_vn_lock() at _vn_lock+0xaa/frame 0xfffffe085efe58d0
vget() at vget+0x67/frame 0xfffffe085efe5910
vfs_msync() at vfs_msync+0xa7/frame 0xfffffe085efe5970
sync_fsync() at sync_fsync+0xff/frame 0xfffffe085efe59a0
VOP_FSYNC_APV() at VOP_FSYNC_APV+0xf7/frame 0xfffffe085efe59d0
sched_sync() at sched_sync+0x34b/frame 0xfffffe085efe5a70
fork_exit() at fork_exit+0x84/frame 0xfffffe085efe5ab0
fork_trampoline() at fork_trampoline+0xe/frame 0xfffffe085efe5ab0
--- trap 0, rip = 0, rsp = 0xfffffe085efe5b70, rbp = 0 ---

A scrub of the pool is in progress; some mongodb usage/testing was occurring at the same time.

Pool info:

root@darkstor:~ # zpool status -v
  pool: tank
 state: ONLINE
  scan: scrub in progress since Sat May 31 13:00:39 2014
        1.97T scanned out of 4.67T at 157M/s, 5h0m to go
        0 repaired, 42.16% done
config:

        NAME           STATE     READ WRITE CKSUM
        tank           ONLINE       0     0     0
          raidz1-0     ONLINE       0     0     0
            gpt/disk0  ONLINE       0     0     0
            gpt/disk1  ONLINE       0     0     0
            gpt/disk2  ONLINE       0     0     0
            gpt/disk3  ONLINE       0     0     0
            gpt/disk4  ONLINE       0     0     0
        cache
          gpt/larc5    ONLINE       0     0     0

errors: No known data errors

Hope this helps,

dan

--
Dan Mack