From owner-freebsd-current@FreeBSD.ORG Wed Apr 25 22:08:40 2007
Message-ID: <462FCE2B.50601@nl.clara.net>
Date: Wed, 25 Apr 2007 23:54:51 +0200
From: Axel Scheepers <ascheepers@nl.clara.net>
User-Agent: Thunderbird 2.0.0.0 (X11/20070423)
To: freebsd-current@freebsd.org
Subject: zfs lor, panic

Hello current list,

I've just installed -current on my amd64 (SMP) workstation and tried ZFS a bit. I set up my ports and src directories as gzip-compressed ZFS mounts and used them without any problems. Then I moved my home directory to ZFS, and all went fine for a while... until I tried to use rtorrent on a ZFS volume. Suddenly ZFS would take all of vm.kmem_size and eventually panicked, as described on this list before.
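For reference, the tuning I mention below lives in /boot/loader.conf; a sketch of what I used (the ARC tunable name, vfs.zfs.arc_max, is as I recall it from the ZFS bits in -current at the time, and 64MB+1 works out to 67108865 bytes):

```
# /boot/loader.conf -- kmem/ARC tuning for ZFS (names/values as described in this mail)
vm.kmem_size="512M"          # cap on the kernel memory map ZFS allocates from
vm.kmem_size_max="512M"
vfs.zfs.arc_max="67108865"   # restrict the ARC to 64MB+1 bytes
```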
I've set it to 512MB in /boot/loader.conf and restricted the ARC to 64MB+1. That didn't panic, but it gave me a lock order reversal and a system that spends all its time in system time (100% according to top, with 640MB kmem), slowing it to nearly a halt. I've captured one:

Apr 25 16:38:00 ceridwen kernel: lock order reversal:
Apr 25 16:38:00 ceridwen kernel: 1st 0xffffff0024654d70 user map (user map) @ /usr/src/sys/vm/vm_map.c:2172
Apr 25 16:38:00 ceridwen kernel: 2nd 0xffffff0027704dd8 zfs:&zp->z_lock (zfs:&zp->z_lock) @ /usr/src/sys/modules/zfs/../../contrib/opensolaris/uts/common/fs/zfs/zfs_znode.c:751
Apr 25 16:38:00 ceridwen kernel: KDB: stack backtrace:
Apr 25 16:38:00 ceridwen kernel: db_trace_self_wrapper() at db_trace_self_wrapper+0x3a
Apr 25 16:38:00 ceridwen kernel: witness_checkorder() at witness_checkorder+0x4f9
Apr 25 16:38:00 ceridwen kernel: _sx_xlock() at _sx_xlock+0x3a
Apr 25 16:38:00 ceridwen kernel: zfs_time_stamper() at zfs_time_stamper+0x3c
Apr 25 16:38:00 ceridwen kernel: zfs_freebsd_write() at zfs_freebsd_write+0x780
Apr 25 16:38:00 ceridwen kernel: VOP_WRITE_APV() at VOP_WRITE_APV+0xa4
Apr 25 16:38:00 ceridwen kernel: vnode_pager_generic_putpages() at vnode_pager_generic_putpages+0x218
Apr 25 16:38:00 ceridwen kernel: VOP_PUTPAGES_APV() at VOP_PUTPAGES_APV+0x77
Apr 25 16:38:00 ceridwen kernel: vnode_pager_putpages() at vnode_pager_putpages+0x97
Apr 25 16:38:00 ceridwen kernel: vm_pageout_flush() at vm_pageout_flush+0x136
Apr 25 16:38:00 ceridwen kernel: vm_object_page_collect_flush() at vm_object_page_collect_flush+0x2d1
Apr 25 16:38:00 ceridwen kernel: vm_object_page_clean() at vm_object_page_clean+0x18e
Apr 25 16:38:00 ceridwen kernel: vm_object_sync() at vm_object_sync+0x239
Apr 25 16:38:00 ceridwen kernel: vm_map_sync() at vm_map_sync+0x107
Apr 25 16:38:00 ceridwen kernel: msync() at msync+0x66
Apr 25 16:38:00 ceridwen kernel: syscall() at syscall+0x1f0
Apr 25 16:38:00 ceridwen kernel: Xfast_syscall() at Xfast_syscall+0xab
Apr 25 16:38:00 ceridwen kernel: --- syscall (65, FreeBSD ELF64, msync), rip = 0x8015a474c, rsp = 0x7fffffffe508, rbp = 0x801afd740 ---
Apr 25 16:43:35 ceridwen su: axel to root on /dev/ttyp0

I'm very pleased to see ZFS coming to FreeBSD; I've been using it on Solaris for a while now with excellent results so far. I believe this lock order reversal is described as harmless, but from what I've seen its side effects can be severe and can lead to panics.

Kind regards,
Axel Scheepers