From owner-freebsd-fs@FreeBSD.ORG Thu Mar 8 06:02:38 2012
Date: Thu, 8 Mar 2012 01:02:37 -0500 (EST)
From: Garrett Wollman <wollman@hergotha.csail.mit.edu>
To: freebsd-fs@freebsd.org, freebsd-stable@freebsd.org
Message-ID: <20312.19325.130822.520853@hergotha.csail.mit.edu>
Subject: Deadlock (?) with ZFS, NFS

This is unfortunately a very difficult issue to report (particularly because I don't have console access to the machine until I get into the office to reboot it).
My server was happily serving NFS on top of a huge ZFS pool when it ground to a halt -- but only partially. All of the nfsd threads got stuck in ZFS, and there were a large number of pending I/Os, but there was nothing apparently wrong with the storage system itself. (At a minimum, smartctl could talk to the drives, so CAM and the mps driver were working well enough to get commands to them.)

ssh logins worked fine, but anything that required writing to that zpool (such as zpool scrub, sync, and reboot) would get stuck somewhere in ZFS. zpool status and zfs-stats reported no issues; netstat -p tcp reported many NFS connections with large unhandled receive buffers (owing to the nfsds being unable to complete the requests they were working on). There was nothing in the kernel message buffer to indicate a problem. Eventually, the machine stopped responding to network requests as well, although for a while after sshd stopped working, it still responded to pings.

Here's a snapshot of top(1). Note that the zfskern{txg_thread_enter} thread is getting some CPU, although I can't tell whether it's making any progress or just spinning. zfskern{l2arc_feed_thread} would occasionally get some CPU as well, but appeared to do nothing.
  PID USERNAME  PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
   11 root      155 ki31     0K   128K CPU6    6  45.7H 100.00% idle{idle: cpu6}
   11 root      155 ki31     0K   128K CPU3    3  45.7H 100.00% idle{idle: cpu3}
   11 root      155 ki31     0K   128K CPU5    5  44.9H 100.00% idle{idle: cpu5}
   11 root      155 ki31     0K   128K CPU7    7  44.7H 100.00% idle{idle: cpu7}
   11 root      155 ki31     0K   128K CPU2    2  44.3H 100.00% idle{idle: cpu2}
   11 root      155 ki31     0K   128K CPU1    1  44.1H 100.00% idle{idle: cpu1}
   11 root      155 ki31     0K   128K RUN     4  43.7H 100.00% idle{idle: cpu4}
   11 root      155 ki31     0K   128K CPU0    0  43.1H  99.46% idle{idle: cpu0}
    5 root       -8    -     0K   128K zio->i  5  67:25   0.98% zfskern{txg_thread_enter}
   12 root      -92    -     0K   800K WAIT    7 297:15   0.00% intr{irq264: ix0:que }
    0 root      -16    0     0K  6144K -       1 297:11   0.00% kernel{zio_write_issue_}
    0 root      -16    0     0K  6144K -       2 297:09   0.00% kernel{zio_write_issue_}
    0 root      -16    0     0K  6144K -       3 297:07   0.00% kernel{zio_write_issue_}
    0 root      -16    0     0K  6144K -       7 296:58   0.00% kernel{zio_write_issue_}
    0 root      -16    0     0K  6144K -       6 296:57   0.00% kernel{zio_write_issue_}
    0 root      -16    0     0K  6144K -       5 296:54   0.00% kernel{zio_write_issue_}
    0 root      -16    0     0K  6144K -       0 296:53   0.00% kernel{zio_write_issue_}
    0 root      -16    0     0K  6144K -       4 296:40   0.00% kernel{zio_write_issue_}
    5 root       -8    -     0K   128K l2arc_  4 220:18   0.00% zfskern{l2arc_feed_threa}
   12 root      -92    -     0K   800K WAIT    2 163:56   0.00% intr{irq259: ix0:que }
   13 root       -8    -     0K    48K -       2  93:35   0.00% geom{g_down}
   12 root      -92    -     0K   800K WAIT    3  85:26   0.00% intr{irq260: ix0:que }
    0 root      -92    0     0K  6144K -       0  77:53   0.00% kernel{ix0 que}
    8 root      -16    -     0K    16K ipmire  5  74:03   0.00% ipmi1: kcs
 1815 root       20    0 10052K   456K zfs     4  72:07   0.00% nfsd{nfsd: master}
 1815 root       20    0 10052K   456K zfs     0  71:51   0.00% nfsd{nfsd: service}
 1815 root       20    0 10052K   456K tx->tx  5  71:45   0.00% nfsd{nfsd: service}
 1815 root       20    0 10052K   456K zfs     2  71:43   0.00% nfsd{nfsd: service}
 1815 root       20    0 10052K   456K zfs     1  71:31   0.00% nfsd{nfsd: service}
 1815 root       20    0 10052K   456K zfs     0  71:25   0.00% nfsd{nfsd: service}
 1815 root       20    0 10052K   456K zfs     6  71:23   0.00% nfsd{nfsd: service}
 1815 root       20    0 10052K   456K zfs     1  71:18   0.00% nfsd{nfsd: service}
 1815 root       20    0 10052K   456K zfs     4  71:15   0.00% nfsd{nfsd: service}
 1815 root       20    0 10052K   456K zfs     1  71:13   0.00% nfsd{nfsd: service}
 1815 root       20    0 10052K   456K zfs     7  71:10   0.00% nfsd{nfsd: service}
 1815 root       20    0 10052K   456K zfs     0  71:10   0.00% nfsd{nfsd: service}
 1815 root       20    0 10052K   456K zfs     6  71:07   0.00% nfsd{nfsd: service}
 1815 root       20    0 10052K   456K zfs     2  71:02   0.00% nfsd{nfsd: service}
 1815 root       20    0 10052K   456K zfs     5  70:58   0.00% nfsd{nfsd: service}
   12 root      -68    -     0K   800K WAIT    2  70:50   0.00% intr{swi2: cambio}
 1815 root       20    0 10052K   456K zfs     7  70:33   0.00% nfsd{nfsd: service}
    0 root      -16    0     0K  6144K -       3  67:16   0.00% kernel{zio_write_intr_7}
    0 root      -16    0     0K  6144K -       3  67:14   0.00% kernel{zio_write_intr_4}
    0 root      -16    0     0K  6144K -       7  67:13   0.00% kernel{zio_write_intr_0}
    0 root      -16    0     0K  6144K -       4  67:13   0.00% kernel{zio_write_intr_3}
    0 root      -16    0     0K  6144K -       5  67:12   0.00% kernel{zio_write_intr_6}
    0 root      -16    0     0K  6144K -       6  67:11   0.00% kernel{zio_write_intr_1}
    0 root      -16    0     0K  6144K -       2  67:11   0.00% kernel{zio_write_intr_5}
    0 root      -16    0     0K  6144K -       1  67:10   0.00% kernel{zio_write_intr_2}
    0 root      -16    0     0K  6144K -       6  63:38   0.00% kernel{zio_write_issue_}
    0 root      -16    0     0K  6144K -       5  63:37   0.00% kernel{zio_write_issue_}
    0 root      -16    0     0K  6144K -       2  63:36   0.00% kernel{zio_write_issue_}
    0 root      -16    0     0K  6144K -       3  63:32   0.00% kernel{zio_write_issue_}
    0 root      -16    0     0K  6144K -       0  63:31   0.00% kernel{zio_write_issue_}
   13 root       -8    -     0K    48K -       6  62:49   0.00% geom{g_up}
   12 root      -88    -     0K   800K WAIT    7  52:19   0.00% intr{irq266: mps0}
    0 root      -92    0     0K  6144K -       0  46:25   0.00% kernel{ix0 que}
   12 root      -92    -     0K   800K WAIT    5  42:43   0.00% intr{irq262: ix0:que }

This is a 9.0-RELEASE system with the mps driver backported from 9-stable. Hourly and daily snapshots were enabled. It had been working extremely well up to this point, and we were looking at possibly replacing our existing NFS servers with this architecture.

-GAWollman
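P.S. For anyone trying to reproduce the diagnosis, the checks described above were along these lines (reconstructed from memory, so treat the exact invocations as approximate; the da0 device name is just illustrative):

```shell
# Confirm the drives still answer commands, i.e. CAM and mps are alive.
# (Device name illustrative; repeat for each member of the pool.)
smartctl -a /dev/da0

# Pool-level health checks -- none of these reported any problem.
zpool status -v
zfs-stats -a          # from the sysutils/zfs-stats port

# Look for NFS connections with data piling up in the receive buffers.
netstat -p tcp

# Anything that writes to the pool wedges somewhere in ZFS, e.g.:
sync                  # never returned
```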