From: rondzierwa@comcast.net
To: freebsd-fs@freebsd.org
Date: Fri, 4 Sep 2009 22:55:06 +0000 (UTC)
Subject: zfs:lo lockup

I'm running ZFS on FreeBSD 7.2-RELEASE-p2; I csup'ed the sys sources four
weeks ago. The box has a 3ware RAID card with eight 1 TB drives in RAID 5.
I put the whole array into ZFS as a single pool ("tank") and created
several file systems from it (a rough sketch of the layout is at the end
of this mail). These are served to Windows clients via Samba 3.0.34.

Several times a week I have to forcibly reset and reboot the system
because one of the smbd processes is stuck in the "zfs:lo" state (as
displayed by top) and cannot be killed. Typically this happens when more
than one client machine is accessing the ZFS shares; it happens whether
they are on the same share or on different ones.

Is this something that has already been fixed? If so, what do I have to
upgrade to get the fix? If not, what information can I provide to help
somebody find one? (I've sketched below what I could capture the next
time it hangs.)

thanks much,
ron.
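P.S. For clarity, the layout is essentially the following. This is a
sketch, not the exact commands I ran; the da0 device name and the
file-system/share names are made up for illustration:

    # the whole 3ware array shows up as one disk and becomes the pool
    zpool create tank da0

    # several file systems carved out of it, e.g.:
    zfs create tank/projects
    zfs create tank/scans

Each one is then exported in smb.conf along these lines:

    [projects]
        path = /tank/projects
        writable = yes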
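P.P.S. Here is what I could capture the next time it wedges, based on
the generic FreeBSD deadlock-reporting advice. It assumes a kernel built
with "options KDB" and "options DDB", and pid 12345 stands in for the
stuck smbd process:

    # show the full wait channel (top truncates it to "zfs:lo")
    ps -axl | grep smbd

    # drop into the kernel debugger from the console
    sysctl debug.kdb.enter=1

Then, at the DDB prompt:

    trace 12345          # kernel stack of the stuck smbd
    show lockedvnods     # list vnodes with held locks
    call doadump         # write a crash dump before rebooting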