From owner-freebsd-fs@FreeBSD.ORG Sun Dec 28 09:19:58 2014
Date: Sun, 28 Dec 2014 11:19:47 +0200
From: Konstantin Belousov
To: Rick Macklem
Cc: FreeBSD Filesystems
Subject: Re: RFC: new NFS mount option or restore old behaviour for Solaris server bug?
Message-ID: <20141228091947.GB98945@kib.kiev.ua>
In-Reply-To: <1190766207.2826601.1419690496079.JavaMail.root@uoguelph.ca>

On Sat, Dec 27, 2014 at 09:28:16AM -0500, Rick Macklem wrote:
> Hi,
>
> The FreeBSD 9.1 and earlier NFS clients almost always (unless the
> time-of-day clock ticked over to the next second while the operation
> was in progress) set the mtime to the server's time (xx_TOSERVER) for
> an exclusive open.  Starting with FreeBSD 9.2, the mtime would be set
> to the client's time due to r245508, which fixed the code for utimes()
> to use VA_UTIMES_NULL.
>
> This change tickled a bug in recent Solaris servers, which return
> NFS_OK to the Setattr RPC but don't actually set the file's mode bits.
> (The bug isn't tickled when the mtime is set to the server's time.)
> I have patches to work around this in two ways:
> 1 - Add a new "useservertime" mount option that forces xx_TOSERVER.
>     (This patch would force xx_TOSERVER for exclusive open.)
>     It also permits the man page to document why the option is
>     needed: broken Solaris servers.
> 2 - Always use xx_TOSERVER for exclusive open.  Since this was the
>     normal behaviour until FreeBSD 9.2, I don't think this would cause
>     problems or be a POLA violation, but I can't be sure.
>
> I am leaning towards #2, since it avoids yet another mount option.
> However, I'd like other people's opinions on which option is better,
> or any other suggestions?
I still do not quite understand the reasoning.
What are the drawbacks of using #2, compared with #1?  #1 requires
manual configuration, and worse, it is not known which Solaris NFS
servers require the workaround, so the arguments against #1 and for #2
are clear.  But what are the arguments against #2, if any?  At least
for me, #2 looks obviously better.

> Thanks in advance for your comments, rick
> ps: The trivial patch for #2 is attached, in case you are interested.

> --- fs/nfsclient/nfs_clport.c.sav	2014-12-25 12:54:25.000000000 -0500
> +++ fs/nfsclient/nfs_clport.c	2014-12-25 12:55:49.000000000 -0500
> @@ -1096,9 +1096,16 @@ nfscl_checksattr(struct vattr *vap, stru
>  	 * us to do a SETATTR RPC. FreeBSD servers store the verifier
>  	 * in atime, but we can't really assume that all servers will
>  	 * so we ensure that our SETATTR sets both atime and mtime.
> +	 * Set the VA_UTIMES_NULL flag for this case, so that
> +	 * the server's time will be used.  This is needed to
> +	 * work around a bug in some Solaris servers, where
> +	 * setting the time TOCLIENT causes the Setattr RPC
> +	 * to return NFS_OK, but not set va_mode.
>  	 */
> -	if (vap->va_mtime.tv_sec == VNOVAL)
> +	if (vap->va_mtime.tv_sec == VNOVAL) {
>  		vfs_timestamp(&vap->va_mtime);
> +		vap->va_vaflags |= VA_UTIMES_NULL;
> +	}
>  	if (vap->va_atime.tv_sec == VNOVAL)
>  		vap->va_atime = vap->va_mtime;
>  	return (1);
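For context on the xx_TOSERVER/xx_TOCLIENT names above: they correspond
to the time_how discriminant in the NFSv3 sattr3 structure (RFC 1813),
which tells the server whether to stamp a file time itself or to use a
timestamp supplied by the client.  The sketch below shows how a client
attribute builder could pick the discriminant once VA_UTIMES_NULL is
set; it is illustrative only, and nfs_pick_time_how() is a made-up
name, not a function in the FreeBSD sources.

	#include <stdbool.h>

	/* From RFC 1813: how the server sets a file time in SETATTR. */
	enum time_how {
		DONT_CHANGE        = 0,
		SET_TO_SERVER_TIME = 1,	/* the xx_TOSERVER case */
		SET_TO_CLIENT_TIME = 2	/* the xx_TOCLIENT case */
	};

	/*
	 * Hypothetical helper: with the patch above, VA_UTIMES_NULL marks
	 * an mtime the client filled in itself, so the server's clock is
	 * used instead of sending the client's timestamp over the wire,
	 * which is what sidesteps the Solaris server bug.
	 */
	static enum time_how
	nfs_pick_time_how(bool time_requested, bool va_utimes_null)
	{
		if (!time_requested)
			return (DONT_CHANGE);
		return (va_utimes_null ?
		    SET_TO_SERVER_TIME : SET_TO_CLIENT_TIME);
	}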
From owner-freebsd-fs@FreeBSD.ORG Sun Dec 28 12:49:40 2014
Date: Sun, 28 Dec 2014 07:49:38 -0500 (EST)
From: Rick Macklem
To: Konstantin Belousov
Cc: FreeBSD Filesystems
Subject: Re: RFC: new NFS mount option or restore old behaviour for Solaris server bug?
Message-ID: <1766916330.3067344.1419770978659.JavaMail.root@uoguelph.ca>
In-Reply-To: <20141228091947.GB98945@kib.kiev.ua>

Konstantin Belousov wrote:
> On Sat, Dec 27, 2014 at 09:28:16AM -0500, Rick Macklem wrote:
> > [...]
> I still do not quite understand the reasoning.
>
> What are the drawbacks of using #2, compared with #1?  #1 requires
> manual configuration, and worse, it is not known which Solaris NFS
> servers require the workaround, so the arguments against #1 and for
> #2 are clear.  But what are the arguments against #2, if any?
>
The only risk with #2 that I can think of is that some post-FreeBSD 9.1
change to the system breaks when the xx_TOSERVER is done, due to clock
skew.  (There was recently a separate email thread on the resolution of
vfs_timestamp(), but with the default 1 sec resolution, it doesn't seem
likely to me that this will happen.)

I originally proposed #1 because I didn't realize the behaviour had
been xx_TOSERVER prior to FreeBSD 9.2 and thought it was just a broken
Solaris server.

Thanks for the comments.  It looks like #2 is preferred unless someone
comes up with a good reason for #1 over it.  rick

> At least for me, #2 looks obviously better.
From owner-freebsd-fs@FreeBSD.ORG Sun Dec 28 16:19:29 2014
Date: Sun, 28 Dec 2014 11:19:14 -0500
From: John Baldwin
To: Rick Macklem
Cc: FreeBSD Filesystems, Konstantin Belousov
Subject: Re: RFC: new NFS mount option or restore old behaviour for Solaris server bug?
Message-ID: <3494907.XSdoyu9NPX@ralph.baldwin.cx>
In-Reply-To: <1190766207.2826601.1419690496079.JavaMail.root@uoguelph.ca>

On Saturday, December 27, 2014 09:28:16 AM Rick Macklem wrote:
> [...]
> I am leaning towards #2, since it avoids yet another mount option.
> However, I'd like other people's opinions on which option is better,
> or any other suggestions?

Definitely prefer #2.
In general I think we want to use TOSERVER timestamps aside from an
explicit call to utimes() with a non-NULL argument (i.e. if userland
wants to force a specific timestamp).  I think any implicit timestamps
should be TOSERVER.

-- 
John Baldwin

From owner-freebsd-fs@FreeBSD.ORG Sun Dec 28 17:53:05 2014
Date: Sun, 28 Dec 2014 12:53:04 -0500
From: kirk russell
Reply-To: kirk@ba23.org
To: freebsd-fs@freebsd.org
Subject: help using mount_smbfs with apple time capsule server

Hi,

I cannot get FreeBSD 10's smbfs client to work with my server -- an
Apple Time Capsule, 4th generation, version 7.6.4.

Here are the commands I ran to reproduce the issue:

# uname -a
FreeBSD 10.1-RELEASE FreeBSD 10.1-RELEASE #0 r274401: Tue Nov 11
21:02:49 UTC 2014
root@releng1.nyi.freebsd.org:/usr/obj/usr/src/sys/GENERIC  amd64
# mount_smbfs -R 16 //lamb@Meganium/Data /mnt
Password:
# dd if=/mnt/afile bs=1 count=1 of=/dev/null
dd: /mnt/afile: No such file or directory

For the FreeBSD 10 session, I captured the raw packets with tcpdump in
the file bad.tcpdump.

This works with FreeBSD 9.  For this working session, I captured the
raw packets with tcpdump in the file good.tcpdump.

# uname -a
FreeBSD 9.3-STABLE FreeBSD 9.3-STABLE #0: Wed Dec 24 16:16:05 EST 2014
kirk@freenas:/usr/obj/usr/src/sys/GENERIC  amd64
# mount_smbfs -R 16 //lamb@Meganium/Data /mnt
Password:
# dd if=/mnt/afile bs=1 count=1 of=/dev/null
1+0 records in
1+0 records out
1 bytes transferred in 0.000345 secs (2899 bytes/sec)

The two raw packet dumps are in this archive:
http://www.employees.org/~kirk/bstgbugs/smbfs.tar.gz

Any pointers on how to get this working?
The server appears to be returning an ERRbadpath error.

-- 
Kirk Russell
http://www.ba23.org/

From owner-freebsd-fs@FreeBSD.ORG Sun Dec 28 18:15:27 2014
Date: Sun, 28 Dec 2014 18:15:22 +0000
From: Paul Chakravarti
To: freebsd-fs@freebsd.org
Subject: Re: ZFS: Mount partition from ZVOL with volmode=dev
Message-Id: <53312533-1A45-4BCF-9841-B9CA2E48578B@gmail.com>
In-Reply-To: <32BEFAB7-936E-42F0-AE75-FB978C13885C@gmail.com>

> > Sorry - I should have been clearer.  The zvol shows up on the host
> > system but the partitions aren't exposed to geom.
>
> That's exactly what volmode=dev does; if you want geom ones, don't
> specify volmode.

OK, that makes sense.

> > Using volmode=default (geom - vfs.zfs.vol.mode=1) causes the
> > installer to fail when you try to create a UFS filesystem under
> > bhyve - it is possible to get round this by creating the partitions
> > manually, but my preference would be to use volmode=dev.
>
> What error do you get?
If you try a UFS install on a device with volmode=geom, you get the
following error message (a ZFS install, or creating the partitions
manually outside the VM, works fine):

Error mounting partition /mnt:
mount: /dev/vtbd0p2: Invalid argument

This has been discussed on freebsd-virtualization
(https://lists.freebsd.org/pipermail/freebsd-virtualization/2014-August/002748.html),
where they recommend setting volmode=dev.

I actually think I have solved my immediate problem (though it is a bit
cumbersome): if I create my template zvol with volmode=dev, I can
send/recv a snapshot to create a clone with volmode=geom, mount the
partition on the host system and edit the contents, and then send/recv
a second clone with volmode=dev.

Thanks, Paul

From owner-freebsd-fs@FreeBSD.ORG Sun Dec 28 21:00:04 2014
Date: Sun, 28 Dec 2014 21:00:04 +0000
From: bugzilla-noreply@FreeBSD.org
To: freebsd-fs@FreeBSD.org
Subject: Problem reports for freebsd-fs@FreeBSD.org that need special attention
Message-Id: <201412282100.sBSL04Nx073165@kenobi.freebsd.org>

To view an individual PR, use:
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=(Bug Id).

The following is a listing of current problems submitted by FreeBSD
users, which need special attention.  These represent problem reports
covering all versions, including experimental development code and
obsolete releases.

Status      | Bug Id  | Description
------------+---------+--------------------------------------------------
Open        | 136470  | [nfs] Cannot mount / in read-only, over NFS
Open        | 139651  | [nfs] mount(8): read-only remount of NFS volume d
Open        | 144447  | [zfs] sharenfs fsunshare() & fsshare_main() non f

3 problems total for which you should take action.
From owner-freebsd-fs@FreeBSD.ORG Sun Dec 28 23:12:09 2014
Date: Sun, 28 Dec 2014 18:12:07 -0500 (EST)
From: Rick Macklem
To: kirk@ba23.org, Kevin Lo
Cc: freebsd-fs@freebsd.org
Subject: Re: help using mount_smbfs with apple time capsule server
Message-ID: <1027320326.3203438.1419808327617.JavaMail.root@uoguelph.ca>

Kirk Russell wrote:
> Hi,
>
> I cannot get FreeBSD 10's smbfs client to work with my server -- an
> Apple Time Capsule, 4th generation, version 7.6.4.
> [...]
> Any pointers on how to get this working?
> The server appears to be returning an ERRbadpath error.
>
Well, my guess is that it has something to do with the Unicode changes
added to smbfs about three years ago by kevlo@ (r227650 and friends in
head).  These changes are not in FreeBSD 9.
It appears that it now sends "\\afile" instead of "\afile".  I know
nothing about the code/protocol, but r227650 added changes like this,
where:

	error = mb_put_uint8(mbp, '\\');

was replaced with:

	if (SMB_UNICODE_STRINGS(vcp))
		error = mb_put_uint16le(mbp, '\\');
	else
		error = mb_put_uint8(mbp, '\\');

(The '\\' here is the C escape for a single backslash character.)

Hopefully someone knows enough about Unicode or how SMB uses it to
make sense of this?

rick

> --
> Kirk Russell
> http://www.ba23.org/
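Rick's observation suggests the path separator is being emitted twice
on the Unicode code path: once by the caller and once again inside the
name-marshalling routine.  A minimal, self-contained illustration of
that failure mode follows; put_sep() and put_name_unicode() are
hypothetical stand-ins, not the actual smbfs functions.

	#include <stdio.h>
	#include <string.h>

	static void
	put_sep(char *buf)
	{
		strcat(buf, "\\");	/* one backslash on the wire */
	}

	static void
	put_name_unicode(char *buf, const char *name)
	{
		put_sep(buf);		/* bug: separator emitted again */
		strcat(buf, name);
	}

	int
	main(void)
	{
		char buf[64] = "";

		put_sep(buf);			/* caller emits the separator... */
		put_name_unicode(buf, "afile");	/* ...and so does the converter */
		printf("%s\n", buf);		/* prints \\afile, not \afile */
		return (0);
	}

If something like this is happening, the server would be asked to look
up a path with an empty leading component, which matches the ERRbadpath
Kirk sees on the wire.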
From owner-freebsd-fs@FreeBSD.ORG Mon Dec 29 05:31:22 2014
Date: Sun, 28 Dec 2014 23:31:35 -0600
From: "James R. Van Artsdalen"
To: Steven Hartland
Cc: freebsd-fs@FreeBSD.ORG, d@delphij.net
Subject: Re: ZFS: FreeBSD 10.1 can't import/mount FreeBSD 9 pool
Message-ID: <54A0E737.6000101@jrv.org>
In-Reply-To: <549EC457.6010509@multiplay.co.uk>

It's not clear what upstream had in mind.  This actually breaks pools
that worked.  Worse, it doesn't seem to have occurred to anyone that
archival pools perform just fine at 99%+ capacity: I often went to
99.9%, leaving no more than 20 GB free.  The performance analysis was
incomplete.  Moreover, the solution *already existed* - not a line of
new code was needed.

For myself I'm setting the tunable to 63 (effectively off), since my
write pools already have a filesystem with these properties:

# zfs get used,readonly,reservation BIGTEX/UNIX/var/empty
NAME                   PROPERTY     VALUE   SOURCE
BIGTEX/UNIX/var/empty  used         628K    -
BIGTEX/UNIX/var/empty  readonly     on      local
BIGTEX/UNIX/var/empty  reservation  25G     local
#

PS. I consider the upstream change actually broken.  My pool that did
not import properly due to lack of free space was not exactly full:

# zpool list SAS01
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
SAS01  43.5T  42.6T   948G         -     0%    97%  1.00x  ONLINE  -
#

PPS. If the tunable is changed, be sure *every* pool is protected with
a reservation, since it really does happen that full pools can jam
themselves into a permanent read-only state that can't be recovered.

On 12/27/2014 8:38 AM, Steven Hartland wrote:
> This was an upstream change, so I couldn't comment on the details.
>
> The intention is clear: to prevent a pool getting into a state where
> it has no space and from which it can't recover.  But why so much of
> the pool is reserved is unclear, particularly for large pools, e.g.
> on an 8 TB pool 256GB is reserved.
>
> I can see the benefit of making this configurable on a pool by pool
> basis, or at least capping it to a reasonable value, as well as
> making it backwards compatible so scenarios like this don't occur,
> but there may be implementation details which prevent this.  I'm not
> sure, as I've not looked into the details.
>
> If this is where your mind is too, then I would suggest raising the
> issue upstream.
>
> On 27/12/2014 05:04, Craig Yoshioka wrote:
>> I brought this up before, but I will again.  This was not a great
>> change.  I also have archival drives which can now give me problems.
>> Why was this reserved space not implemented as a user-configurable
>> filesystem option?
>>
>>> On Dec 26, 2014, at 8:41 PM, Steven Hartland wrote:
>>>
>>> It was introduced by:
>>> https://svnweb.freebsd.org/base?view=revision&revision=268473
>>>
>>> Tuning of it was added by:
>>> https://svnweb.freebsd.org/base?view=revision&revision=274674
>>>
>>> Hope this helps.
>>>
>>> Regards
>>> Steve
>>>
>>>> On 27/12/2014 02:25, James R. Van Artsdalen wrote:
>>>> Oops - this will break every single one of my archival pools.
>>>>
>>>> If there is no userland ability to enable backwards compatibility,
>>>> can you tell me where it is in the source or about when it was
>>>> added?
>>>>
>>>>> On 12/26/2014 6:06 PM, Steven Hartland wrote:
>>>>> Later versions reserve space for deletions etc, so if your volume
>>>>> is too full it could fail in this manner.
>>>>>
>>>>> The fix would be to clear down space so this is no longer an
>>>>> issue.
>>>>>
>>>>>> On 26/12/2014 23:43, James R. Van Artsdalen wrote:
>>>>>> FreeBSD bigtex.housenet.jrv 10.1-PRERELEASE FreeBSD
>>>>>> 10.1-PRERELEASE #2 r273476M: Thu Oct 23 20:39:40 CDT 2014
>>>>>> james@bigtex.housenet.jrv:/usr/obj/usr/src/sys/GENERIC  amd64
>>>>>>
>>>>>> A pool created by a FreeBSD 9 system was imported into FreeBSD
>>>>>> 10.1 but failed to create the recursive mountpoints as shown
>>>>>> below.
>>>>>>
>>>>>> What's especially interesting is that the free space reported by
>>>>>> zpool(1) and zfs(1) are wildly different, even though there are
>>>>>> no reservations.
>>>>>>
>>>>>> Note that I was able to do a zpool upgrade, but that zfs upgrade
>>>>>> failed on the children datasets.
>>>>>> # zpool import SAS01
>>>>>> cannot mount '/SAS01/t03': failed to create mountpoint
>>>>>> cannot mount '/SAS01/t04': failed to create mountpoint
>>>>>> cannot mount '/SAS01/t05': failed to create mountpoint
>>>>>> cannot mount '/SAS01/t06': failed to create mountpoint
>>>>>> cannot mount '/SAS01/t07': failed to create mountpoint
>>>>>> cannot mount '/SAS01/t08': failed to create mountpoint
>>>>>> cannot mount '/SAS01/t12': failed to create mountpoint
>>>>>> cannot mount '/SAS01/t13': failed to create mountpoint
>>>>>> # zpool list SAS01
>>>>>> NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
>>>>>> SAS01  43.5T  42.6T   948G         -     0%    97%  1.00x  ONLINE  -
>>>>>> # zfs list -p SAS01
>>>>>> NAME            USED  AVAIL   REFER  MOUNTPOINT
>>>>>> SAS01  33279222543840      0  314496  /SAS01
>>>>>> # zpool get all SAS01
>>>>>> NAME   PROPERTY                       VALUE       SOURCE
>>>>>> SAS01  size                           43.5T       -
>>>>>> SAS01  capacity                       97%         -
>>>>>> SAS01  altroot                        -           default
>>>>>> SAS01  health                         ONLINE      -
>>>>>> SAS01  guid                           1341452135  default
>>>>>> SAS01  version                        -           default
>>>>>> SAS01  bootfs                         -           default
>>>>>> SAS01  delegation                     on          default
>>>>>> SAS01  autoreplace                    off         default
>>>>>> SAS01  cachefile                      -           default
>>>>>> SAS01  failmode                       wait        default
>>>>>> SAS01  listsnapshots                  off         default
>>>>>> SAS01  autoexpand                     off         default
>>>>>> SAS01  dedupditto                     0           default
>>>>>> SAS01  dedupratio                     1.00x       -
>>>>>> SAS01  free                           948G        -
>>>>>> SAS01  allocated                      42.6T       -
>>>>>> SAS01  readonly                       off         -
>>>>>> SAS01  comment                        -           default
>>>>>> SAS01  expandsize                     -           -
>>>>>> SAS01  freeing                        0           default
>>>>>> SAS01  fragmentation                  0%          -
>>>>>> SAS01  leaked                         0           default
>>>>>> SAS01  feature@async_destroy          enabled     local
>>>>>> SAS01  feature@empty_bpobj            active      local
>>>>>> SAS01  feature@lz4_compress           active      local
>>>>>> SAS01  feature@multi_vdev_crash_dump  enabled     local
>>>>>> SAS01  feature@spacemap_histogram     active      local
>>>>>> SAS01  feature@enabled_txg            active      local
>>>>>> SAS01  feature@hole_birth             active      local
>>>>>> SAS01  feature@extensible_dataset     enabled     local
>>>>>> SAS01  feature@embedded_data          active      local
>>>>>> SAS01  feature@bookmarks              enabled     local
>>>>>> SAS01  feature@filesystem_limits      enabled     local
>>>>>> # zfs get all SAS01
>>>>>> NAME   PROPERTY              VALUE                  SOURCE
>>>>>> SAS01  type                  filesystem             -
>>>>>> SAS01  creation              Tue Dec 23  2:51 2014  -
>>>>>> SAS01  used                  30.3T                  -
>>>>>> SAS01  available             0                      -
>>>>>> SAS01  referenced            307K                   -
>>>>>> SAS01  compressratio         1.00x                  -
>>>>>> SAS01  mounted               yes                    -
>>>>>> SAS01  quota                 none                   default
>>>>>> SAS01  reservation           none                   default
>>>>>> SAS01  recordsize            128K                   default
>>>>>> SAS01  mountpoint            /SAS01                 default
>>>>>> SAS01  sharenfs              off                    default
>>>>>> SAS01  checksum              on                     default
>>>>>> SAS01  compression           off                    default
>>>>>> SAS01  atime                 on                     default
>>>>>> SAS01  devices               on                     default
>>>>>> SAS01  exec                  on                     default
>>>>>> SAS01  setuid                on                     default
>>>>>> SAS01  readonly              off                    default
>>>>>> SAS01  jailed                off                    default
>>>>>> SAS01  snapdir               hidden                 default
>>>>>> SAS01  aclmode               discard                default
>>>>>> SAS01  aclinherit            restricted             default
>>>>>> SAS01  canmount              on                     default
>>>>>> SAS01  xattr                 off                    temporary
>>>>>> SAS01  copies                1                      default
>>>>>> SAS01  version               5                      -
>>>>>> SAS01  utf8only              off                    -
>>>>>> SAS01  normalization         none                   -
>>>>>> SAS01  casesensitivity       sensitive              -
>>>>>> SAS01  vscan                 off                    default
>>>>>> SAS01  nbmand                off                    default
>>>>>> SAS01  sharesmb              off                    default
>>>>>> SAS01  refquota              none                   default
>>>>>> SAS01  refreservation        none                   default
>>>>>> SAS01  primarycache          all                    default
>>>>>> SAS01  secondarycache        all                    default
>>>>>> SAS01  usedbysnapshots       0                      -
>>>>>> SAS01  usedbydataset         307K                   -
>>>>>> SAS01  usedbychildren        30.3T                  -
>>>>>> SAS01  usedbyrefreservation  0                      -
>>>>>> SAS01  logbias               latency                default
>>>>>> SAS01  dedup                 off                    default
>>>>>> SAS01  mlslabel              -
>>>>>> SAS01  sync                  standard               default
>>>>>> SAS01  refcompressratio      1.00x                  -
>>>>>> SAS01  written               307K                   -
>>>>>> SAS01  logicalused           30.2T                  -
>>>>>> SAS01  logicalreferenced     12K                    -
>>>>>> SAS01  volmode               default                default
>>>>>> SAS01  filesystem_limit      none                   default
>>>>>> SAS01  snapshot_limit        none                   default
>>>>>> SAS01  filesystem_count      none                   default
>>>>>> SAS01  snapshot_count        none                   default
>>>>>> SAS01  redundant_metadata    all                    default
>>>>>> # zpool status SAS01
>>>>>>   pool: SAS01
>>>>>>  state: ONLINE
>>>>>>   scan: scrub repaired 0 in 20h26m with 0 errors on Thu Dec 25 20:57:34 2014
>>>>>> config:
>>>>>>
>>>>>> 	NAME        STATE     READ WRITE CKSUM
>>>>>> 	SAS01       ONLINE       0     0     0
>>>>>> 	  raidz2-0  ONLINE       0     0     0
>>>>>> 	    da45    ONLINE       0     0     0
>>>>>> 	    da44    ONLINE       0     0     0
>>>>>> 	    da47    ONLINE       0     0     0
>>>>>> 	    da43    ONLINE       0     0     0
>>>>>> 	    da42    ONLINE       0     0     0
>>>>>> 	    da46    ONLINE       0     0     0
>>>>>> 	    da41    ONLINE       0     0     0
>>>>>> 	    da40    ONLINE       0     0     0
>>>>>>
>>>>>> errors: No known data errors
>>>>>> # zfs upgrade -r SAS01
>>>>>> cannot set property for 'SAS01/t03': out of space
>>>>>> cannot set property for 'SAS01/t04': out of space
>>>>>> cannot set property for 'SAS01/t05': out of space
>>>>>> cannot set property for 'SAS01/t06': out of space
>>>>>> cannot set property for 'SAS01/t07': out of space
>>>>>> cannot set property for 'SAS01/t08': out of space
>>>>>> cannot set property for 'SAS01/t12': out of space
>>>>>> cannot set property for 'SAS01/t13': out of space
>>>>>> 0 filesystems upgraded
>>>>>> 1 filesystems already at this version
>>>>>> #
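The numbers in the quoted report are consistent with the slop-space
reservation introduced by r268473 (illumos 4951): with what I take to
be the default spa_slop_shift of 5, 1/32 of the pool is withheld from
datasets, which on this 43.5T pool is about 1.36T -- more than the 948G
zpool reports free, hence "available 0" from zfs while zpool still
shows free space.  The sketch below shows the arithmetic only; the real
code also applies a minimum-slop floor, which is omitted here.

	#include <stdint.h>
	#include <stdio.h>

	static uint64_t spa_slop_shift = 5;	/* default: reserve 1/32 of pool */

	/* Simplified model of the space withheld from datasets. */
	static uint64_t
	slop_space(uint64_t pool_size)
	{
		return (pool_size >> spa_slop_shift);
	}

	int
	main(void)
	{
		uint64_t tib = 1ULL << 40;
		uint64_t size = (uint64_t)(43.5 * tib);	/* the 43.5T pool above */
		uint64_t freesp = 948ULL << 30;		/* 948G free per zpool(8) */
		uint64_t slop = slop_space(size);	/* ~1.36T at shift 5 */

		printf("slop  = %.2f TiB\n", (double)slop / tib);
		printf("avail = %jd GiB\n", freesp > slop ?
		    (intmax_t)((freesp - slop) >> 30) : (intmax_t)0);
		return (0);
	}

Setting the shift to 63, as James does, makes pool_size >> 63 zero for
any realistic pool size, which is why he calls it "effectively off".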
From owner-freebsd-fs@FreeBSD.ORG Mon Dec 29 05:39:22 2014
Date: Mon, 29 Dec 2014 13:37:14 +0800
From: Kevin Lo
To: Rick Macklem
Cc: freebsd-fs@freebsd.org, kirk@ba23.org
Subject: Re: help using mount_smbfs with apple time capsule server
Message-ID: <20141229053714.GA66793@ns.kevlo.org>
In-Reply-To: <1027320326.3203438.1419808327617.JavaMail.root@uoguelph.ca>

On Sun, Dec 28, 2014 at 06:12:07PM -0500, Rick Macklem wrote:
> Kirk Russell wrote:
> > I cannot get FreeBSD 10's smbfs client to work with my server -- an
> > Apple Time Capsule, 4th generation, version 7.6.4.
> > [...]
> > # mount_smbfs -R 16 //lamb@Meganium/Data /mnt
> > Password:
> > # dd if=/mnt/afile bs=1 count=1 of=/dev/null
> > dd: /mnt/afile: No such file or directory

Are you sure the file "afile" really exists?

> Well, my guess is that it has something to do with the Unicode
> changes added to smbfs about three years ago by kevlo@ (r227650
> and friends in head).  These changes are not in FreeBSD 9.

Hmm, it was MFC'ed to stable/9 (r230196).

> [...]
> Hopefully someone knows enough about Unicode or how SMB
> uses it to make sense of this?

I tested it under FreeBSD -current [1] and 9.3-STABLE [2], and it works
perfectly...
[1] ftp://ftp.freebsd.org/pub/FreeBSD/snapshots/ISO-IMAGES/11.0/FreeBSD-11.0-CURRENT-amd64-20141222-r276066-memstick.img
[2] ftp://ftp.freebsd.org/pub/FreeBSD/snapshots/ISO-IMAGES/9.3/FreeBSD-9.3-STABLE-amd64-20141222-r276041-memstick.img

> rick

Kevin

From owner-freebsd-fs@FreeBSD.ORG Mon Dec 29 07:02:21 2014
Date: Sun, 28 Dec 2014 23:02:18 -0800
From: Kirk McKusick
To: Rick Macklem
Cc: FreeBSD Filesystems, Gleb Kurtsou, Konstantin Belousov
Subject: Re: patch that makes d_fileno 64bits
Message-Id: <201412290702.sBT72I8G087361@chez.mckusick.com>
In-Reply-To: <1966344327.2961798.1419723168645.JavaMail.root@uoguelph.ca>

> Date: Sat, 27 Dec 2014 18:32:48 -0500 (EST)
> From: Rick Macklem
> To: FreeBSD Filesystems, Kirk McKusick, Gleb Kurtsou, Konstantin Belousov
> Subject: patch that makes d_fileno 64bits
>
> Hi,
>
> Kirk and Gleb Kurtsou (plus some others) are working through the
> difficult job of changing ino_t to 64 bits (changes to syscalls,
> libraries, etc.).
>
> This patch:
> http://people.freebsd.org/~rmacklem/64bitfileno.patch
>
> is somewhat tangential to the above, in that it changes the d_fileno
> field of "struct dirent" and va_fileid to uint64_t.  It also adds a
> field called d_cookie to "struct dirent", which is the position of
> the next directory entry in the underlying file system.  The majority
> of this patch is changes to the NFS code, but it includes a simple
> "new struct dirent"->"old struct dirent32" copy routine for
> getdirentries(2) and small changes to all the file systems so they
> fill in the "new struct dirent".
>
> This patch can be applied to head/current and the resultant kernel
> should work fine, although I've only been able to test some of the
> file systems.  However, DO NOT propagate the changes to
> sys/sys/dirent.h out to userland (/usr/include/sys/dirent.h) and
> build a userland from it, or things will get badly broken.
>
> I don't know if Kirk and/or Gleb will find some of this useful for
> their updates to projects/ino64, but it will allow people to test
> these changes.  (It modifies the NFS server so that it no longer uses
> the "cookie" args to VOP_READDIR(), but that part can easily be
> removed from the patch.)
>
> If folks can test this patch, I think it would be helpful for the
> effort of changing ino_t to 64 bits.
>
> Have fun with it, rick

Thanks Rick, this does look useful.
Since Gleb is leading the charge with changing ino_t to 64 bits, I will
let him have final say on when it would be helpful to have this go into
HEAD.  But it does seem that it should be possible to do it before the
other changes and independently of them, since it only changes the
internal kernel interfaces.  But perhaps I am missing something that
Gleb or kib can point out.

It seems to me that the cookies calculation could be taken out of the
VOP_GETDIRENTRIES interface, since NFS is the only client of it.

In looking through your patch, I did not see anything that looked
wrong.  But as you point out, more testing is needed :-)

	Kirk McKusick
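For reference, the cookie-carrying interface Kirk is talking about:
VOP_READDIR() takes two optional out-parameters that only the NFS
server consumes.  The declaration below is reconstructed from the
vnode interface as I remember it (the generated generic-args member is
omitted), so treat the details as an approximation rather than a
verbatim copy of sys/kern/vnode_if.src.

	struct vnode;
	struct uio;
	struct ucred;

	struct vop_readdir_args {
		struct vnode *a_vp;	/* directory being read */
		struct uio *a_uio;	/* destination buffer for entries */
		struct ucred *a_cred;
		int *a_eofflag;
		int *a_ncookies;	/* out: number of cookies returned */
		unsigned long **a_cookies; /* out: one seek offset per entry */
	};

Carrying the offset inside each entry instead (Rick's d_cookie field)
is what would let a_ncookies/a_cookies, and the allocation dance around
them in every file system, disappear.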
From owner-freebsd-fs@FreeBSD.ORG Mon Dec 29 13:42:30 2014
Date: Mon, 29 Dec 2014 08:42:22 -0500 (EST)
From: Rick Macklem
To: Kirk McKusick
Cc: FreeBSD Filesystems, Gleb Kurtsou, Konstantin Belousov
Subject: Re: patch that makes d_fileno 64bits
Message-ID: <293906398.3337213.1419860542696.JavaMail.root@uoguelph.ca>
In-Reply-To: <201412290702.sBT72I8G087361@chez.mckusick.com>

Kirk wrote:
> > [...]
> > If folks can test this patch, I think it would be helpful for the
> > effort of changing ino_t to 64 bits.
> >
> > Have fun with it, rick
>
> Thanks Rick, this does look useful.  Since Gleb is leading the charge
> with changing ino_t to 64 bits, I will let him have final say on when
> it would be helpful to have this go into HEAD.  But it does seem that
> it should be possible to do it before the other changes and
> independently of them, since it only changes the internal kernel
> interfaces.  But perhaps I am missing something that Gleb or kib can
> point out.
>
Well, I don't think it can go into head before the rest as it stands,
because it changes "struct dirent", which then propagates out to
/usr/include/sys and breaks userland.  However, if the patch were
changed so that the old one remained "struct dirent" and the new one
were called something else, like "struct dirent64", then I think it
could go into HEAD.  The patch would then include a whole bunch of
"struct dirent"->"struct dirent64" changes in the kernel.  (Not
difficult to do.  That was the way I started out doing it, and then I
switched simply to keep the diff smaller.)

Btw, ZFS has exactly that already.  It calls it "struct dirent64", and
there are lines in its dirent.h that read:

	typedef struct dirent	dirent64_t;
	#define	dirent64	dirent

so for ZFS it is just a matter of changing these lines.  Oh, and UFS
mostly uses "struct direct" except for one little chunk of code, so
it's easy, too.

Doing this might be worth considering, since it would help flush out
any bugs in this area before the big ino_t patch goes in?  (I can
easily grind out a version of the patch using "struct dirent64", if
that would be useful.)

I'll admit I never got to look at Gleb's changes that weren't yet in
projects/ino64 (I couldn't get git to work, and the URL to github
wouldn't work for me either for some reason ;-), so I suspect he'll
want to merge with what he has.  (C code is simple, but these
new-fangled things like git... scary for an old guy like me ;-)

> It seems to me that the cookies calculation could be taken out of the
> VOP_GETDIRENTRIES interface, since NFS is the only client of it.
>
I think this would be nice, but if you guys want to delay that, the
part of the patch that makes nfsd use "d_cookie" (or whatever you
choose to call it) instead of the cookie args can just be reverted.
(Most of the file systems' code will look cleaner and simpler without
the cookies stuff.  In particular, unionfs gets kinda weird with the
cookie stuff.)

> In looking through your patch, I did not see anything that looked
> wrong.
Yea, it was pretty straightforward, and I did spot a couple of weird
bits that Gleb might want to know about.

fuse - Has a bogus line of code (I think it was meant to null-terminate
  the name) that actually writes 1 byte past the "struct dirent".
  Fortunately, the fuse code that calculates the buffer size is bogus
  too and makes the buffer larger than necessary, so the assignment is
  harmless.
  - I just took out the bogus assignment (since the buffer is already
  bzero()'d) and fixed the buffer sizing.

fdesc - There is a constant UIO_MX, which is the size used for all
  "struct dirent"s, that had to be increased.

For va_fileid, there were a few file systems that assigned it a value
from a variable declared "int" or "long".  I decided to put a "(u_int)"
or "(u_long)" cast in front of them, just to make sure they didn't sign
extend on 32-bit arches if the high-order bit was somehow set.  (I
didn't do this for fdesc, because I was pretty sure it would never have
more than 2**31 nodes, but I think it is harmless to do so.)
Alternately, you could put a KASSERT() in to make sure the value isn't
negative before the assignment.

Good to see progress on the 64-bit ino_t front, rick

> But as you point out, more testing is needed :-)
>
> Kirk McKusick
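The "new struct dirent"->"old struct dirent32" copy routine Rick
mentions for getdirentries(2) would look roughly like the sketch below.
The dirent32 layout matches the pre-ino64 <sys/dirent.h>; the widened
dirent64 layout and convert_dirent32() are reconstructions from his
description, not code taken from the patch.

	#include <stdint.h>
	#include <string.h>

	/* The old layout: 32-bit d_fileno, no cookie. */
	struct dirent32 {
		uint32_t d_fileno;
		uint16_t d_reclen;
		uint8_t	 d_type;
		uint8_t	 d_namlen;
		char	 d_name[255 + 1];
	};

	/* Sketch of the widened entry: 64-bit d_fileno plus a d_cookie
	 * holding the position of the next entry in the directory. */
	struct dirent64 {
		uint64_t d_fileno;
		uint64_t d_cookie;
		uint16_t d_reclen;
		uint8_t	 d_type;
		uint8_t	 d_namlen;
		char	 d_name[255 + 1];
	};

	/*
	 * Hypothetical compat shim: narrow a new-style entry for an old
	 * userland.  The inode number is truncated and the cookie is
	 * dropped, since old callers never saw either.
	 */
	static void
	convert_dirent32(const struct dirent64 *dp64, struct dirent32 *dp32)
	{
		dp32->d_fileno = (uint32_t)dp64->d_fileno;
		dp32->d_type = dp64->d_type;
		dp32->d_namlen = dp64->d_namlen;
		memcpy(dp32->d_name, dp64->d_name, dp64->d_namlen);
		dp32->d_name[dp64->d_namlen] = '\0';
		dp32->d_reclen = sizeof(*dp32);	/* fixed-size records for simplicity */
	}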
Good to see progress on the 64-bit ino_t front, rick

> But as you point out, more testing is needed :-)
>
> Kirk McKusick

From owner-freebsd-fs@FreeBSD.ORG Mon Dec 29 14:38:54 2014
Date: Mon, 29 Dec 2014 09:38:52 -0500 (EST)
From: Rick Macklem
To: Kevin Lo
Cc: freebsd-fs@freebsd.org, kirk@ba23.org
Subject: Re: help using mount_smbfs with apple time capsule server

Kevin Lo wrote:
> On Sun, Dec 28, 2014 at 06:12:07PM -0500, Rick Macklem wrote:
> >
> > Kirk Russell wrote:
> > > Hi,
> > >
> > > I cannot get FreeBSD 10's smbfs client to work with my server --
> > > an Apple time capsule, 4th generation, version 7.6.4.
> > >
> > > Here are the commands I ran to reproduce the issue:
> > > # uname -a
> > > FreeBSD 10.1-RELEASE FreeBSD 10.1-RELEASE #0 r274401: Tue Nov 11
> > > 21:02:49 UTC 2014
> > > root@releng1.nyi.freebsd.org:/usr/obj/usr/src/sys/GENERIC amd64
> > > # mount_smbfs -R 16 //lamb@Meganium/Data /mnt
> > > Password:
> > > # dd if=/mnt/afile bs=1 count=1 of=/dev/null
> > > dd: /mnt/afile: No such file or directory
> > Are you sure the file "afile" really exists?
> > > For the FreeBSD 10 session, I tried to capture the raw packets,
> > > using tcpdump, in file bad.tcpdump.
> > >
> > > This works with FreeBSD 9. For this working session, I tried to
> > > capture the raw packets, using tcpdump, in file good.tcpdump.
> > >
> > > # uname -a
> > > FreeBSD 9.3-STABLE FreeBSD 9.3-STABLE #0: Wed Dec 24 16:16:05 EST
> > > 2014 kirk@freenas:/usr/obj/usr/src/sys/GENERIC amd64
> > > # mount_smbfs -R 16 //lamb@Meganium/Data /mnt
> > > Password:
> > > # dd if=/mnt/afile bs=1 count=1 of=/dev/null
> > > 1+0 records in
> > > 1+0 records out
> > > 1 bytes transferred in 0.000345 secs (2899 bytes/sec)
> > >
> > > The two raw packet dumps are in this archive:
> > > http://www.employees.org/~kirk/bstgbugs/smbfs.tar.gz
> > >
> > > Any pointers on how to get this working?
> > > The server appears to be returning an ERRbadpath error.
> > >
> > Well, my guess is that it has something to do with the Unicode
> > changes added to smbfs about three years ago by kevlo@ (r227650
> > and friends in head). These changes are not in FreeBSD9.
>
> Hmm, it was MFC'ed to stable/9 (r230196).
>
Oops, my mistake. When I looked at svnweb, I saw the first log entry
for smbfs_subr.c listing "MFC: r228796", but didn't notice the second
"MFC: r227650", and thought it hadn't been MFC'd.

All I can tell you is that wireshark shows "\file" for the good.tcpdump
vs "\\afile" for bad.tcpdump, so I guessed that was why the Mac didn't
find it?

I'll leave now, since I know nothing about SMB, rick

> > It appears that it now sends "\\afile" instead of "\afile", and
> > I know nothing about the code/protocol, but r227650 added changes
> > like:
> >   error = mb_put_uint8(mbp, '\\');
> > replaced with:
> >   if (SMB_UNICODE_STRINGS(vcp))
> >           error = mb_put_uint16le(mbp, '\\');
> >   else
> >           error = mb_put_uint8(mbp, '\\');
> > Note that the '\\' is actually 4 \ characters.
> >
> > Hopefully someone knows enough about Unicode or how SMB
> > uses it to make sense of this?
>
> I tested it under FreeBSD -current [1] and 9.3-STABLE [2], and it
> works perfectly...
>
> [1] ftp://ftp.freebsd.org/pub/FreeBSD/snapshots/ISO-IMAGES/11.0/FreeBSD-11.0-CURRENT-amd64-20141222-r276066-memstick.img
>
> [2] ftp://ftp.freebsd.org/pub/FreeBSD/snapshots/ISO-IMAGES/9.3/FreeBSD-9.3-STABLE-amd64-20141222-r276041-memstick.img
>
> > rick
>
> > > --
> > > Kirk Russell
> > > http://www.ba23.org/
>
> Kevin

From owner-freebsd-fs@FreeBSD.ORG Tue Dec 30 02:06:48 2014
Date: Tue, 30 Dec 2014 10:05:10 +0800
From: Kevin Lo
To: Rick Macklem
Cc: freebsd-fs@freebsd.org, kirk@ba23.org
Subject: Re: help using mount_smbfs with apple time capsule server

On Mon, Dec 29, 2014 at 09:38:52AM -0500, Rick Macklem wrote:
>
> [... full re-quote of the earlier exchange trimmed ...]
>
> Oops, my mistake. When I looked at svnweb, I saw the first log
> entry for smbfs_subr.c listing "MFC: r228796", but didn't notice
> the second "MFC: r227650", and thought it hadn't been MFC'd.

No worries :-)

> All I can tell you is that wireshark shows "\file" for the
> good.tcpdump vs "\\afile" for bad.tcpdump, so I guessed that was
> why the Mac didn't find it?

I don't have machines running samba on OS X; I'll try to borrow my
colleague's laptop and test it out, thanks.

> I'll leave now, since I know nothing about SMB, rick
>
> [... remainder of the quoted exchange trimmed ...]

From owner-freebsd-fs@FreeBSD.ORG Tue Dec 30 15:16:00 2014
Date: Tue, 30 Dec 2014 15:07:37 +0000
From: Loïc Blot
To: Rick Macklem
Cc: freebsd-fs@freebsd.org
Subject: Re: ZFS vnode lock deadlock in zfs_fhtovp was: High Kernel Load with nfsv4

Hi Rick,
I upgraded my jail host from FreeBSD 9.3 to 10.1 to use NFSv4.1
(mount options: rw,rsize=32768,wsize=32768,tcp,nfsv4,minorversion=1).

Performance is quite stable, but it's slow. Not as slow as before,
but slow... The services were launched, but no clients are using them,
and system CPU was 10-50%.

I don't see anything wrong on the NFSv4.1 server; it's perfectly
stable and functional.

Regards,

Loïc Blot,
UNIX Systems, Network and Security Engineer
http://www.unix-experience.fr
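(For reference, the figures quoted below in this thread were gathered
with commands along these lines; the nfsd pids, 918 and 921 in the
quoted output, come from the process list:)

    nfsstat -s          # server-side RPC and cache counters
    nfsstat -c          # client-side RPC and cache counters
    procstat -kk 918    # kernel stacks of the nfsd threads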
On December 23, 2014 at 00:20, "Rick Macklem" wrote:

> Loic Blot wrote:
>
>> Hi,
>>
>> To clarify, because of our exchanges, here are the current sysctl
>> options for the server:
>>
>> vfs.nfsd.enable_nobodycheck=0
>> vfs.nfsd.enable_nogroupcheck=0
>>
>> vfs.nfsd.maxthreads=200
>> vfs.nfsd.tcphighwater=10000
>> vfs.nfsd.tcpcachetimeo=300
>> vfs.nfsd.server_min_nfsvers=4
>>
>> kern.maxvnodes=10000000
>> kern.ipc.maxsockbuf=4194304
>> net.inet.tcp.sendbuf_max=4194304
>> net.inet.tcp.recvbuf_max=4194304
>>
>> vfs.lookup_shared=0
>>
>> Regards,
>>
>> Loïc Blot,
>> UNIX Systems, Network and Security Engineer
>> http://www.unix-experience.fr
>>
>> On December 22, 2014 at 09:42, "Loïc Blot" wrote:
>>
>> Hi Rick,
>> my 5 jails ran this weekend, and now I have some stats on this
>> Monday.
>>
>> Hopefully the deadlock was fixed, yeah, but everything isn't
>> good :(
>>
>> On the NFSv4 server (FreeBSD 10.1), the system uses 35% CPU.
>>
>> As I can see, this is because of nfsd:
>>
>> 918 root 96 20 0 12352K 3372K rpcsvc 6 51.4H 273.68% nfsd: server (nfsd)
>>
>> If I look at dmesg I see:
>> nfsd server cache flooded, try increasing vfs.nfsd.tcphighwater
>
> Well, you have a couple of choices:
> 1 - Use NFSv4.1 (add "minorversion=1" to your mount options).
>     (NFSv4.1 avoids use of the DRC and instead uses something
>     called sessions. See below.)
> OR
>
>> vfs.nfsd.tcphighwater was set to 10000, I increased it to 15000
>
> 2 - Bump vfs.nfsd.tcphighwater way up, until you no longer see
>     "nfs server cache flooded" messages. (I think Garrett Wollman
>     uses 100000.) You may still see quite a bit of CPU overhead.
> OR
>
> 3 - Set vfs.nfsd.cachetcp=0 (which disables the DRC and gets rid
>     of the CPU overhead). However, there is a risk of data
>     corruption if you have a client->server network partitioning of
>     moderate duration, because a non-idempotent RPC may get redone
>     when the client times out waiting for a reply. If a
>     non-idempotent RPC gets done twice on the server, data
>     corruption can happen. (The DRC provides improved correctness,
>     but does add overhead.)
>
> If #1 works for you, it is the preferred solution, since sessions
> in NFSv4.1 solve the correctness problem in a good, space-bound
> way. A session basically has N (usually 32 or 64) slots and only
> allows one outstanding RPC per slot. As such, it can cache the
> previous reply for each slot (32 or 64 of them) and guarantee
> "exactly once" RPC semantics.
>
> rick
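(A consolidated recap of the three alternatives above, for reference:
the client mount syntax varies by OS, "server:/export /mnt" is a
placeholder, and 100000 is just the figure cited above:)

    # Option 1 (preferred): use NFSv4.1 sessions -- on the client:
    mount -t nfs -o rw,tcp,nfsv4,minorversion=1 server:/export /mnt
    # Option 2: raise the DRC flood threshold -- on the server:
    sysctl vfs.nfsd.tcphighwater=100000
    # Option 3: disable the DRC for TCP (risks redone non-idempotent
    # RPCs after a network partition) -- on the server:
    sysctl vfs.nfsd.cachetcp=0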
>> Here is 'nfsstat -s' output:
>>
>> Server Info:
>>   Getattr   Setattr    Lookup  Readlink      Read     Write    Create    Remove
>>  12600652      1812   2501097       156   1386423   1983729       123    162067
>>    Rename      Link   Symlink     Mkdir     Rmdir   Readdir  RdirPlus    Access
>>     36762         9         0         0         0      3147         0    623524
>>     Mknod    Fsstat    Fsinfo  PathConf    Commit
>>         0         0         0         0    328117
>> Server Ret-Failed
>>                 0
>> Server Faults
>>             0
>> Server Cache Stats:
>>    Inprog      Idem  Non-idem    Misses
>>         0         0         0  12635512
>> Server Write Gathering:
>>  WriteOps  WriteRPC   Opsaved
>>   1983729   1983729         0
>>
>> And here is 'procstat -kk' for nfsd (server):
>>
>> 918 100528 nfsd nfsd: master mi_switch+0xe1
>> sleepq_catch_signals+0xab sleepq_timedwait_sig+0x10
>> _cv_timedwait_sig_sbt+0x18b svc_run_internal+0x4a1 svc_run+0x1de
>> nfsrvd_nfsd+0x1ca nfssvc_nfsd+0x107 sys_nfssvc+0x9c
>> amd64_syscall+0x351 Xfast_syscall+0xfb
>> 918 100568 nfsd nfsd: service mi_switch+0xe1
>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a
>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a
>> fork_trampoline+0xe
>> [... TIDs 100569 through 100662: the identical idle "nfsd: service"
>> stack, trimmed ...]
>> ---
>>
>> Now if we look at the client (FreeBSD 9.3):
>>
>> We see the system was very busy and doing many, many interrupts:
>>
>> CPU: 0.0% user, 0.0% nice, 37.8% system, 51.2% interrupt, 11.0% idle
>>
>> A look at the process list shows that there are many sendmail
>> processes in state nfstry:
>>
>> nfstry 18 32:27 0.88% sendmail: Queue runner@00:30:00 for /var/spool/clientm
>>
>> Here is 'nfsstat -c' output:
>>
>> Client Info:
>> Rpc Counts:
>>   Getattr   Setattr    Lookup  Readlink      Read     Write    Create    Remove
>>   1051347      1724   2494481       118    903902   1901285    162676    161899
>>    Rename      Link   Symlink     Mkdir     Rmdir   Readdir  RdirPlus    Access
>>     36744         2         0       114        40      3131         0    544136
>>     Mknod    Fsstat    Fsinfo  PathConf    Commit
>>         9         0         0         0    245821
>> Rpc Info:
>>  TimedOut   Invalid X Replies   Retries  Requests
>>         0         0         0         0   8356557
>> Cache Info:
>> Attr Hits    Misses Lkup Hits    Misses BioR Hits    Misses BioW Hits    Misses
>> 108754455    491475  54229224   2437229  46814561    821723   5132123   1871871
>> BioRLHits    Misses BioD Hits    Misses DirE Hits    Misses Accs Hits    Misses
>>    144035       118     53736      2753     27813         1  57238839    544205
>>
>> If you need more things, tell me; I've left the PoC in this state.
>>
>> Thanks
>>
>> Regards,
>>
>> Loïc Blot,
>> UNIX Systems, Network and Security Engineer
>> http://www.unix-experience.fr
>>
>> On December 21, 2014 at 01:33, "Rick Macklem" wrote:
>>
>> Loic Blot wrote:
>>
>>> Hi Rick,
>>> ok, I don't need locallocks; I hadn't understood that the option
>>> was for that usage, so I removed it.
>>> I'll do more tests on Monday.
>>> Thanks for the deadlock fix, for other people :)
>>
>> Good. Please let us know if running with vfs.nfsd.enable_locallocks=0
>> gets rid of the deadlocks? (I think it fixes the one you saw.)
>>
>> On the performance side, you might also want to try different values
>> of readahead, if the Linux client has such a mount option. (With the
>> NFSv4-ZFS sequential vs random I/O heuristic, I have no idea what
>> the optimal readahead value would be.)
>>
>> Good luck with it and please let us know how it goes, rick
>> ps: I now have a patch to fix the deadlock when
>> vfs.nfsd.enable_locallocks=1 is set. I'll post it for anyone who is
>> interested after I put it through some testing.
>>
>> --
>> Best regards,
>> Loïc BLOT,
>> UNIX systems, security and network engineer
>> http://www.unix-experience.fr
>>
>> On Thursday, December 18, 2014 at 19:46 -0500, Rick Macklem wrote:
>>
>> Loic Blot wrote:
>>> Hi Rick,
>>> I tried to start an LXC container on Debian Squeeze from my FreeBSD
>>> ZFS+NFSv4 server, and I also got a deadlock on nfsd
>>> (vfs.lookup_shared=0). The procs deadlock each time I launch a
>>> squeeze container, it seems (3 tries, 3 fails).
>>
>> Well, I'll take a look at this procstat -kk, but the only thing
>> I've seen posted w.r.t. avoiding deadlocks in ZFS is to not use
>> nullfs.
(I have no idea if y= ou are using any nullfs mounts, but=0A>> if so, try getting rid of them.)= =0A>> =0A>> Here`s a high level post about the ZFS and vnode locking prob= lem,=0A>> but there is no patch available, as far as I know.=0A>> =0A>> h= ttp://docs.FreeBSD.org/cgi/mid.cgi?54739F41.8030407=0A>> =0A>> rick=0A>> = =0A>> 921 - D 0:00.02 nfsd: server (nfsd)=0A>> =0A>> Here is the p= rocstat -kk=0A>> =0A>> PID TID COMM TDNAME KSTAC= K=0A>> 921 100538 nfsd nfsd: master mi_switch+0xe1=0A>> s= leepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0xc9e=0A>> vop_stdlock+0x3c = VOP_LOCK1_APV+0xab _vn_lock+0x43=0A>> nfsvno_advlock+0x119 nfsrv_dolocal+= 0x84 nfsrv_lockctrl+0x14ad=0A>> nfsrvd_locku+0x283 nfsrvd_dorpc+0xec6 nfs= svc_program+0x554=0A>> svc_run_internal+0xc77 svc_run+0x1de nfsrvd_nfsd+0= x1ca=0A>> nfssvc_nfsd+0x107 sys_nfssvc+0x9c=0A>> 921 100572 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_catch_signals+0xab sleepq= _wait_sig+0xf=0A>> _cv_wait_sig+0x16a=0A>> svc_run_internal+0x87e svc_thr= ead_start+0xb fork_exit+0x9a=0A>> fork_trampoline+0xe=0A>> 921 100573 nfs= d nfsd: service mi_switch+0xe1=0A>> sleepq_catch_signals+0= xab sleepq_wait_sig+0xf=0A>> _cv_wait_sig+0x16a=0A>> svc_run_internal+0x8= 7e svc_thread_start+0xb fork_exit+0x9a=0A>> fork_trampoline+0xe=0A>> 921 = 100574 nfsd nfsd: service mi_switch+0xe1=0A>> sleepq_catch= _signals+0xab sleepq_wait_sig+0xf=0A>> _cv_wait_sig+0x16a=0A>> svc_run_in= ternal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>> fork_trampoline+0xe= =0A>> 921 100575 nfsd nfsd: service mi_switch+0xe1=0A>> sl= eepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>> _cv_wait_sig+0x16a=0A>> = svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>> fork_tram= poline+0xe=0A>> 921 100576 nfsd nfsd: service mi_switch+0x= e1=0A>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>> _cv_wait_sig+0= x16a=0A>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>= fork_trampoline+0xe=0A>> 921 100577 nfsd nfsd: service mi= _switch+0xe1=0A>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>> _cv_= wait_sig+0x16a=0A>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit= +0x9a=0A>> fork_trampoline+0xe=0A>> 921 100578 nfsd nfsd: ser= vice mi_switch+0xe1=0A>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf= =0A>> _cv_wait_sig+0x16a=0A>> svc_run_internal+0x87e svc_thread_start+0xb= fork_exit+0x9a=0A>> fork_trampoline+0xe=0A>> 921 100579 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_catch_signals+0xab sleepq_wa= it_sig+0xf=0A>> _cv_wait_sig+0x16a=0A>> svc_run_internal+0x87e svc_thread= _start+0xb fork_exit+0x9a=0A>> fork_trampoline+0xe=0A>> 921 100580 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_catch_signals+0xab= sleepq_wait_sig+0xf=0A>> _cv_wait_sig+0x16a=0A>> svc_run_internal+0x87e = svc_thread_start+0xb fork_exit+0x9a=0A>> fork_trampoline+0xe=0A>> 921 100= 581 nfsd nfsd: service mi_switch+0xe1=0A>> sleepq_catch_si= gnals+0xab sleepq_wait_sig+0xf=0A>> _cv_wait_sig+0x16a=0A>> svc_run_inter= nal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>> fork_trampoline+0xe=0A= >> 921 100582 nfsd nfsd: service mi_switch+0xe1=0A>> sleep= q_catch_signals+0xab sleepq_wait_sig+0xf=0A>> _cv_wait_sig+0x16a=0A>> svc= _run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>> fork_trampol= ine+0xe=0A>> 921 100583 nfsd nfsd: service mi_switch+0xe1= =0A>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>> _cv_wait_sig+0x1= 6a=0A>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>> f= ork_trampoline+0xe=0A>> 921 100584 nfsd nfsd: service mi_s= witch+0xe1=0A>> 
sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>> _cv_wa= it_sig+0x16a=0A>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0= x9a=0A>> fork_trampoline+0xe=0A>> 921 100585 nfsd nfsd: servi= ce mi_switch+0xe1=0A>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf= =0A>> _cv_wait_sig+0x16a=0A>> svc_run_internal+0x87e svc_thread_start+0xb= fork_exit+0x9a=0A>> fork_trampoline+0xe=0A>> 921 100586 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_catch_signals+0xab sleepq_wa= it_sig+0xf=0A>> _cv_wait_sig+0x16a=0A>> svc_run_internal+0x87e svc_thread= _start+0xb fork_exit+0x9a=0A>> fork_trampoline+0xe=0A>> 921 100587 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_catch_signals+0xab= sleepq_wait_sig+0xf=0A>> _cv_wait_sig+0x16a=0A>> svc_run_internal+0x87e = svc_thread_start+0xb fork_exit+0x9a=0A>> fork_trampoline+0xe=0A>> 921 100= 588 nfsd nfsd: service mi_switch+0xe1=0A>> sleepq_catch_si= gnals+0xab sleepq_wait_sig+0xf=0A>> _cv_wait_sig+0x16a=0A>> svc_run_inter= nal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>> fork_trampoline+0xe=0A= >> 921 100589 nfsd nfsd: service mi_switch+0xe1=0A>> sleep= q_catch_signals+0xab sleepq_wait_sig+0xf=0A>> _cv_wait_sig+0x16a=0A>> svc= _run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>> fork_trampol= ine+0xe=0A>> 921 100590 nfsd nfsd: service mi_switch+0xe1= =0A>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>> _cv_wait_sig+0x1= 6a=0A>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>> f= ork_trampoline+0xe=0A>> 921 100591 nfsd nfsd: service mi_s= witch+0xe1=0A>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>> _cv_wa= it_sig+0x16a=0A>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0= x9a=0A>> fork_trampoline+0xe=0A>> 921 100592 nfsd nfsd: servi= ce mi_switch+0xe1=0A>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf= =0A>> _cv_wait_sig+0x16a=0A>> svc_run_internal+0x87e svc_thread_start+0xb= fork_exit+0x9a=0A>> fork_trampoline+0xe=0A>> 921 100593 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_catch_signals+0xab sleepq_wa= it_sig+0xf=0A>> _cv_wait_sig+0x16a=0A>> svc_run_internal+0x87e svc_thread= _start+0xb fork_exit+0x9a=0A>> fork_trampoline+0xe=0A>> 921 100594 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_catch_signals+0xab= sleepq_wait_sig+0xf=0A>> _cv_wait_sig+0x16a=0A>> svc_run_internal+0x87e = svc_thread_start+0xb fork_exit+0x9a=0A>> fork_trampoline+0xe=0A>> 921 100= 595 nfsd nfsd: service mi_switch+0xe1=0A>> sleepq_catch_si= gnals+0xab sleepq_wait_sig+0xf=0A>> _cv_wait_sig+0x16a=0A>> svc_run_inter= nal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>> fork_trampoline+0xe=0A= >> 921 100596 nfsd nfsd: service mi_switch+0xe1=0A>> sleep= q_catch_signals+0xab sleepq_wait_sig+0xf=0A>> _cv_wait_sig+0x16a=0A>> svc= _run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>> fork_trampol= ine+0xe=0A>> 921 100597 nfsd nfsd: service mi_switch+0xe1= =0A>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>> _cv_wait_sig+0x1= 6a=0A>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>> f= ork_trampoline+0xe=0A>> 921 100598 nfsd nfsd: service mi_s= witch+0xe1=0A>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>> _cv_wa= it_sig+0x16a=0A>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0= x9a=0A>> fork_trampoline+0xe=0A>> 921 100599 nfsd nfsd: servi= ce mi_switch+0xe1=0A>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf= =0A>> _cv_wait_sig+0x16a=0A>> svc_run_internal+0x87e svc_thread_start+0xb= fork_exit+0x9a=0A>> fork_trampoline+0xe=0A>> 921 100600 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_catch_signals+0xab 
sleepq_wa= it_sig+0xf=0A>> _cv_wait_sig+0x16a=0A>> svc_run_internal+0x87e svc_thread= _start+0xb fork_exit+0x9a=0A>> fork_trampoline+0xe=0A>> 921 100601 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_catch_signals+0xab= sleepq_wait_sig+0xf=0A>> _cv_wait_sig+0x16a=0A>> svc_run_internal+0x87e = svc_thread_start+0xb fork_exit+0x9a=0A>> fork_trampoline+0xe=0A>> 921 100= 602 nfsd nfsd: service mi_switch+0xe1=0A>> sleepq_catch_si= gnals+0xab sleepq_wait_sig+0xf=0A>> _cv_wait_sig+0x16a=0A>> svc_run_inter= nal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>> fork_trampoline+0xe=0A= >> 921 100603 nfsd nfsd: service mi_switch+0xe1=0A>> sleep= q_catch_signals+0xab sleepq_wait_sig+0xf=0A>> _cv_wait_sig+0x16a=0A>> svc= _run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>> fork_trampol= ine+0xe=0A>> 921 100604 nfsd nfsd: service mi_switch+0xe1= =0A>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>> _cv_wait_sig+0x1= 6a=0A>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>> f= ork_trampoline+0xe=0A>> 921 100605 nfsd nfsd: service mi_s= witch+0xe1=0A>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>> _cv_wa= it_sig+0x16a=0A>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0= x9a=0A>> fork_trampoline+0xe=0A>> 921 100606 nfsd nfsd: servi= ce mi_switch+0xe1=0A>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf= =0A>> _cv_wait_sig+0x16a=0A>> svc_run_internal+0x87e svc_thread_start+0xb= fork_exit+0x9a=0A>> fork_trampoline+0xe=0A>> 921 100607 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_catch_signals+0xab sleepq_wa= it_sig+0xf=0A>> _cv_wait_sig+0x16a=0A>> svc_run_internal+0x87e svc_thread= _start+0xb fork_exit+0x9a=0A>> fork_trampoline+0xe=0A>> 921 100608 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_catch_signals+0xab= sleepq_wait_sig+0xf=0A>> _cv_wait_sig+0x16a=0A>> svc_run_internal+0x87e = svc_thread_start+0xb fork_exit+0x9a=0A>> fork_trampoline+0xe=0A>> 921 100= 609 nfsd nfsd: service mi_switch+0xe1=0A>> sleepq_catch_si= gnals+0xab sleepq_wait_sig+0xf=0A>> _cv_wait_sig+0x16a=0A>> svc_run_inter= nal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>> fork_trampoline+0xe=0A= >> 921 100610 nfsd nfsd: service mi_switch+0xe1=0A>> sleep= q_catch_signals+0xab sleepq_wait_sig+0xf=0A>> _cv_wait_sig+0x16a=0A>> svc= _run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>> fork_trampol= ine+0xe=0A>> 921 100611 nfsd nfsd: service mi_switch+0xe1= =0A>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>> _cv_wait_sig+0x1= 6a=0A>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>> f= ork_trampoline+0xe=0A>> 921 100612 nfsd nfsd: service mi_s= witch+0xe1=0A>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>> _cv_wa= it_sig+0x16a=0A>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0= x9a=0A>> fork_trampoline+0xe=0A>> 921 100613 nfsd nfsd: servi= ce mi_switch+0xe1=0A>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf= =0A>> _cv_wait_sig+0x16a=0A>> svc_run_internal+0x87e svc_thread_start+0xb= fork_exit+0x9a=0A>> fork_trampoline+0xe=0A>> 921 100614 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_catch_signals+0xab sleepq_wa= it_sig+0xf=0A>> _cv_wait_sig+0x16a=0A>> svc_run_internal+0x87e svc_thread= _start+0xb fork_exit+0x9a=0A>> fork_trampoline+0xe=0A>> 921 100615 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_catch_signals+0xab= sleepq_wait_sig+0xf=0A>> _cv_wait_sig+0x16a=0A>> svc_run_internal+0x87e = svc_thread_start+0xb fork_exit+0x9a=0A>> fork_trampoline+0xe=0A>> 921 100= 616 nfsd nfsd: service mi_switch+0xe1=0A>> sleepq_wait+0x3= a _sleep+0x287 nfsmsleep+0x66 
nfsv4_lock+0x9b=0A>> nfsrv_getlockfile+0x17= 9 nfsrv_lockctrl+0x21f nfsrvd_lock+0x5b1=0A>> nfsrvd_dorpc+0xec6 nfssvc_p= rogram+0x554 svc_run_internal+0xc77=0A>> svc_thread_start+0xb fork_exit+0= x9a fork_trampoline+0xe=0A>> 921 100617 nfsd nfsd: service = mi_switch+0xe1=0A>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>> _= cv_wait_sig+0x16a=0A>> svc_run_internal+0x87e svc_thread_start+0xb fork_e= xit+0x9a=0A>> fork_trampoline+0xe=0A>> 921 100618 nfsd nfsd: = service mi_switch+0xe1=0A>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x= 66 nfsv4_lock+0x9b=0A>> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_i= nternal+0xc77=0A>> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0x= e=0A>> 921 100619 nfsd nfsd: service mi_switch+0xe1=0A>> s= leepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>> _cv_wait_sig+0x16a=0A>>= svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>> fork_tra= mpoline+0xe=0A>> 921 100620 nfsd nfsd: service mi_switch+0= xe1=0A>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>> _cv_wait_sig+= 0x16a=0A>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>= > fork_trampoline+0xe=0A>> 921 100621 nfsd nfsd: service m= i_switch+0xe1=0A>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>> _cv= _wait_sig+0x16a=0A>> svc_run_internal+0x87e svc_thread_start+0xb fork_exi= t+0x9a=0A>> fork_trampoline+0xe=0A>> 921 100622 nfsd nfsd: se= rvice mi_switch+0xe1=0A>> sleepq_catch_signals+0xab sleepq_wait_sig+0x= f=0A>> _cv_wait_sig+0x16a=0A>> svc_run_internal+0x87e svc_thread_start+0x= b fork_exit+0x9a=0A>> fork_trampoline+0xe=0A>> 921 100623 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_catch_signals+0xab sleepq_w= ait_sig+0xf=0A>> _cv_wait_sig+0x16a=0A>> svc_run_internal+0x87e svc_threa= d_start+0xb fork_exit+0x9a=0A>> fork_trampoline+0xe=0A>> 921 100624 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_catch_signals+0xa= b sleepq_wait_sig+0xf=0A>> _cv_wait_sig+0x16a=0A>> svc_run_internal+0x87e= svc_thread_start+0xb fork_exit+0x9a=0A>> fork_trampoline+0xe=0A>> 921 10= 0625 nfsd nfsd: service mi_switch+0xe1=0A>> sleepq_catch_s= ignals+0xab sleepq_wait_sig+0xf=0A>> _cv_wait_sig+0x16a=0A>> svc_run_inte= rnal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>> fork_trampoline+0xe= =0A>> 921 100626 nfsd nfsd: service mi_switch+0xe1=0A>> sl= eepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>> _cv_wait_sig+0x16a=0A>> = svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>> fork_tram= poline+0xe=0A>> 921 100627 nfsd nfsd: service mi_switch+0x= e1=0A>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>> _cv_wait_sig+0= x16a=0A>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>= fork_trampoline+0xe=0A>> 921 100628 nfsd nfsd: service=20=20= mi_switch+0xe1=0A>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>> = _cv_wait_sig+0x16a=0A>> svc_run_internal+0x87e svc_thread_start+0xb fork_= exit+0x9a=0A>> fork_trampoline+0xe=0A>> 921 100629 nfsd nfsd:= service mi_switch+0xe1=0A>> sleepq_catch_signals+0xab sleepq_wait_sig= +0xf=0A>> _cv_wait_sig+0x16a=0A>> svc_run_internal+0x87e svc_thread_start= +0xb fork_exit+0x9a=0A>> fork_trampoline+0xe=0A>> 921 100630 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_catch_signals+0xab sleep= q_wait_sig+0xf=0A>> _cv_wait_sig+0x16a=0A>> svc_run_internal+0x87e svc_th= read_start+0xb fork_exit+0x9a=0A>> fork_trampoline+0xe=0A>> 921 100631 nf= sd nfsd: service mi_switch+0xe1=0A>> sleepq_catch_signals+= 0xab sleepq_wait_sig+0xf=0A>> _cv_wait_sig+0x16a=0A>> svc_run_internal+0x= 87e svc_thread_start+0xb fork_exit+0x9a=0A>> 
fork_trampoline+0xe=0A>> 921= 100632 nfsd nfsd: service mi_switch+0xe1=0A>> sleepq_catc= h_signals+0xab sleepq_wait_sig+0xf=0A>> _cv_wait_sig+0x16a=0A>> svc_run_i= nternal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>> fork_trampoline+0x= e=0A>> 921 100633 nfsd nfsd: service mi_switch+0xe1=0A>> s= leepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>> _cv_wait_sig+0x16a=0A>>= svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>> fork_tra= mpoline+0xe=0A>> 921 100634 nfsd nfsd: service mi_switch+0= xe1=0A>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>> _cv_wait_sig+= 0x16a=0A>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>= > fork_trampoline+0xe=0A>> 921 100635 nfsd nfsd: service m= i_switch+0xe1=0A>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>> _cv= _wait_sig+0x16a=0A>> svc_run_internal+0x87e svc_thread_start+0xb fork_exi= t+0x9a=0A>> fork_trampoline+0xe=0A>> 921 100636 nfsd nfsd: se= rvice mi_switch+0xe1=0A>> sleepq_catch_signals+0xab sleepq_wait_sig+0x= f=0A>> _cv_wait_sig+0x16a=0A>> svc_run_internal+0x87e svc_thread_start+0x= b fork_exit+0x9a=0A>> fork_trampoline+0xe=0A>> 921 100637 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_catch_signals+0xab sleepq_w= ait_sig+0xf=0A>> _cv_wait_sig+0x16a=0A>> svc_run_internal+0x87e svc_threa= d_start+0xb fork_exit+0x9a=0A>> fork_trampoline+0xe=0A>> 921 100638 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_catch_signals+0xa= b sleepq_wait_sig+0xf=0A>> _cv_wait_sig+0x16a=0A>> svc_run_internal+0x87e= svc_thread_start+0xb fork_exit+0x9a=0A>> fork_trampoline+0xe=0A>> 921 10= 0639 nfsd nfsd: service mi_switch+0xe1=0A>> sleepq_catch_s= ignals+0xab sleepq_wait_sig+0xf=0A>> _cv_wait_sig+0x16a=0A>> svc_run_inte= rnal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>> fork_trampoline+0xe= =0A>> 921 100640 nfsd nfsd: service mi_switch+0xe1=0A>> sl= eepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>> _cv_wait_sig+0x16a=0A>> = svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>> fork_tram= poline+0xe=0A>> 921 100641 nfsd nfsd: service mi_switch+0x= e1=0A>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>> _cv_wait_sig+0= x16a=0A>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>= fork_trampoline+0xe=0A>> 921 100642 nfsd nfsd: service mi= _switch+0xe1=0A>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>> _cv_= wait_sig+0x16a=0A>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit= +0x9a=0A>> fork_trampoline+0xe=0A>> 921 100643 nfsd nfsd: ser= vice mi_switch+0xe1=0A>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf= =0A>> _cv_wait_sig+0x16a=0A>> svc_run_internal+0x87e svc_thread_start+0xb= fork_exit+0x9a=0A>> fork_trampoline+0xe=0A>> 921 100644 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_catch_signals+0xab sleepq_wa= it_sig+0xf=0A>> _cv_wait_sig+0x16a=0A>> svc_run_internal+0x87e svc_thread= _start+0xb fork_exit+0x9a=0A>> fork_trampoline+0xe=0A>> 921 100645 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_catch_signals+0xab= sleepq_wait_sig+0xf=0A>> _cv_wait_sig+0x16a=0A>> svc_run_internal+0x87e = svc_thread_start+0xb fork_exit+0x9a=0A>> fork_trampoline+0xe=0A>> 921 100= 646 nfsd nfsd: service mi_switch+0xe1=0A>> sleepq_catch_si= gnals+0xab sleepq_wait_sig+0xf=0A>> _cv_wait_sig+0x16a=0A>> svc_run_inter= nal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>> fork_trampoline+0xe=0A= >> 921 100647 nfsd nfsd: service mi_switch+0xe1=0A>> sleep= q_catch_signals+0xab sleepq_wait_sig+0xf=0A>> _cv_wait_sig+0x16a=0A>> svc= _run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>> fork_trampol= ine+0xe=0A>> 921 
100648 nfsd nfsd: service mi_switch+0xe1= =0A>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>> _cv_wait_sig+0x1= 6a=0A>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>> f= ork_trampoline+0xe=0A>> 921 100649 nfsd nfsd: service mi_s= witch+0xe1=0A>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>> _cv_wa= it_sig+0x16a=0A>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0= x9a=0A>> fork_trampoline+0xe=0A>> 921 100650 nfsd nfsd: servi= ce mi_switch+0xe1=0A>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf= =0A>> _cv_wait_sig+0x16a=0A>> svc_run_internal+0x87e svc_thread_start+0xb= fork_exit+0x9a=0A>> fork_trampoline+0xe=0A>> 921 100651 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_catch_signals+0xab sleepq_wa= it_sig+0xf=0A>> _cv_wait_sig+0x16a=0A>> svc_run_internal+0x87e svc_thread= _start+0xb fork_exit+0x9a=0A>> fork_trampoline+0xe=0A>> 921 100652 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_catch_signals+0xab= sleepq_wait_sig+0xf=0A>> _cv_wait_sig+0x16a=0A>> svc_run_internal+0x87e = svc_thread_start+0xb fork_exit+0x9a=0A>> fork_trampoline+0xe=0A>> 921 100= 653 nfsd nfsd: service mi_switch+0xe1=0A>> sleepq_catch_si= gnals+0xab sleepq_wait_sig+0xf=0A>> _cv_wait_sig+0x16a=0A>> svc_run_inter= nal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>> fork_trampoline+0xe=0A= >> 921 100654 nfsd nfsd: service mi_switch+0xe1=0A>> sleep= q_catch_signals+0xab sleepq_wait_sig+0xf=0A>> _cv_wait_sig+0x16a=0A>> svc= _run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>> fork_trampol= ine+0xe=0A>> 921 100655 nfsd nfsd: service mi_switch+0xe1= =0A>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>> _cv_wait_sig+0x1= 6a=0A>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>> f= ork_trampoline+0xe=0A>> 921 100656 nfsd nfsd: service mi_s= witch+0xe1=0A>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>> _cv_wa= it_sig+0x16a=0A>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0= x9a=0A>> fork_trampoline+0xe=0A>> 921 100657 nfsd nfsd: servi= ce mi_switch+0xe1=0A>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf= =0A>> _cv_wait_sig+0x16a=0A>> svc_run_internal+0x87e svc_thread_start+0xb= fork_exit+0x9a=0A>> fork_trampoline+0xe=0A>> 921 100658 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_catch_signals+0xab sleepq_wa= it_sig+0xf=0A>> _cv_wait_sig+0x16a=0A>> svc_run_internal+0x87e svc_thread= _start+0xb fork_exit+0x9a=0A>> fork_trampoline+0xe=0A>> 921 100659 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_catch_signals+0xab= sleepq_wait_sig+0xf=0A>> _cv_wait_sig+0x16a=0A>> svc_run_internal+0x87e = svc_thread_start+0xb fork_exit+0x9a=0A>> fork_trampoline+0xe=0A>> 921 100= 660 nfsd nfsd: service mi_switch+0xe1=0A>> sleepq_catch_si= gnals+0xab sleepq_wait_sig+0xf=0A>> _cv_wait_sig+0x16a=0A>> svc_run_inter= nal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>> fork_trampoline+0xe=0A= >> 921 100661 nfsd nfsd: service mi_switch+0xe1=0A>> sleep= q_catch_signals+0xab sleepq_wait_sig+0xf=0A>> _cv_wait_sig+0x16a=0A>> svc= _run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>> fork_trampol= ine+0xe=0A>> 921 100662 nfsd nfsd: service mi_switch+0xe1= =0A>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>> _cv_wait_sig+0x1= 6a=0A>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>> f= ork_trampoline+0xe=0A>> 921 100663 nfsd nfsd: service mi_s= witch+0xe1=0A>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>> _cv_wa= it_sig+0x16a=0A>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0= x9a=0A>> fork_trampoline+0xe=0A>> 921 100664 nfsd nfsd: servi= ce 
>> 921 100665 nfsd     nfsd: service   [same idle stack as above]
>> 921 100666 nfsd     nfsd: service   mi_switch+0xe1
>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b
>> nfsrv_setclient+0xbd nfsrvd_setclientid+0x3c8 nfsrvd_dorpc+0xc76
>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb
>> fork_exit+0x9a fork_trampoline+0xe
>>
>> Regards,
>>
>> Loïc Blot,
>> UNIX Systems, Network and Security Engineer
>> http://www.unix-experience.fr
>>
>> 15 December 2014 15:18 "Rick Macklem" wrote:
>>
>> Loic Blot wrote:
>>
>>> For more information, here is procstat -kk on nfsd; if you need
>>> more hot data, tell me.
>>>
>>> Regards,
>>>
>>>  PID    TID COMM             TDNAME           KSTACK
>>>  918 100529 nfsd             nfsd: master     mi_switch+0xe1
>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902
>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d
>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917
>>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_run+0x1de
>>> nfsrvd_nfsd+0x1ca nfssvc_nfsd+0x107 sys_nfssvc+0x9c
>>> amd64_syscall+0x351
>>
>> Well, most of the threads are stuck like this one, waiting for a
>> vnode lock in ZFS. All of them appear to be in zfs_fhtovp().
>> I'm not a ZFS guy, so I can't help much. I'll try changing the
>> subject line to include ZFS vnode lock, so maybe the ZFS guys will
>> take a look.
>>
>> The only thing I've seen suggested is trying:
>>   sysctl vfs.lookup_shared=0
>> to disable shared vop_lookup()s. Apparently zfs_lookup() doesn't
>> obey the vnode locking rules for lookup and rename, according to
>> the posting I saw.
>>
>> I've added a couple of comments about the other threads below, but
>> they are all either waiting for an RPC request or waiting for the
>> threads stuck on the ZFS vnode lock to complete.
>>
>> rick
>>
>>> 918 100564 nfsd             nfsd: service   mi_switch+0xe1
>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a
>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a
>>> fork_trampoline+0xe
>>
>> Fyi, this thread is just waiting for an RPC to arrive. (Normal)
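To picture the pile-up these stacks describe, here is a toy userspace
illustration (pthreads, not kernel code; the comments map names onto
the dump above): a single thread that sleeps while holding an
exclusive lock stalls every peer that needs the same file-handle to
vnode translation.

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static pthread_mutex_t vnode_lock = PTHREAD_MUTEX_INITIALIZER;

    static void *
    nfsd_service(void *arg)
    {
            (void)arg;
            pthread_mutex_lock(&vnode_lock);   /* _vn_lock() in the dump */
            /* ... the zfs_fhtovp() work would happen here ... */
            pthread_mutex_unlock(&vnode_lock);
            return (NULL);
    }

    int
    main(void)
    {
            pthread_t tid[8];
            int i;

            pthread_mutex_lock(&vnode_lock);   /* the one stuck holder */
            for (i = 0; i < 8; i++)
                    pthread_create(&tid[i], NULL, nfsd_service, NULL);
            sleep(1);   /* all eight now sleep in the lock, like the
                           "nfsd: service" threads above */
            pthread_mutex_unlock(&vnode_lock); /* release -> queue drains */
            for (i = 0; i < 8; i++)
                    pthread_join(tid[i], NULL);
            printf("all service threads drained\n");
            return (0);
    }

In the dump, the stuck holder is whichever thread is sleeping inside
ZFS with a vnode lock held; until it releases, every other thread
queues behind it, which is why the whole server appears wedged.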
>>
>>> 918 100565-100570 nfsd     nfsd: service   [same idle stack]
>>> 918 100571 nfsd            nfsd: service   mi_switch+0xe1
>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b
>>> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77
>>> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
>>> 918 100572 nfsd            nfsd: service   mi_switch+0xe1
>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b
>>> nfsrv_setclient+0xbd nfsrvd_setclientid+0x3c8 nfsrvd_dorpc+0xc76
>>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb
>>> fork_exit+0x9a fork_trampoline+0xe
>>
>> This one (and a few others) are waiting for the nfsv4_lock. This
>> happens because other threads are stuck with RPCs in progress (i.e.
>> the ones waiting on the vnode lock in zfs_fhtovp()). For these,
>> the RPC needs to lock out other threads to do the operation, so it
>> waits for the nfsv4_lock(), which can exclusively lock the NFSv4
>> data structures once all other nfsd threads complete their RPCs in
>> progress.
>>
>>> 918 100573 nfsd            nfsd: service   mi_switch+0xe1
>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b
>>> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77
>>> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
>>
>> Same as above.
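The drain-then-exclude behaviour described here can be sketched in a
few lines of pthread-style C. This is only a schematic of the pattern
(the real nfsv4_lock() lives in the kernel NFS code and differs in
detail): RPCs count themselves in and out, and an exclusive operation
first locks out newcomers, then waits for the in-progress count to
drain to zero.

    #include <pthread.h>

    struct nfsv4_state_lock {
            pthread_mutex_t mtx;
            pthread_cond_t  cv;
            int             inprog;  /* RPCs currently in progress */
            int             excl;    /* exclusive holder present   */
    };

    void rpc_enter(struct nfsv4_state_lock *l)
    {
            pthread_mutex_lock(&l->mtx);
            while (l->excl)          /* new RPCs queue here (the
                                        nfsv4_lock sleeps above) */
                    pthread_cond_wait(&l->cv, &l->mtx);
            l->inprog++;
            pthread_mutex_unlock(&l->mtx);
    }

    void rpc_exit(struct nfsv4_state_lock *l)
    {
            pthread_mutex_lock(&l->mtx);
            if (--l->inprog == 0)    /* last one out wakes the   */
                    pthread_cond_broadcast(&l->cv);  /* drainer  */
            pthread_mutex_unlock(&l->mtx);
    }

    void state_lock_excl(struct nfsv4_state_lock *l)
    {
            pthread_mutex_lock(&l->mtx);
            while (l->excl)          /* one exclusive holder at a time */
                    pthread_cond_wait(&l->cv, &l->mtx);
            l->excl = 1;             /* lock out new RPCs ...          */
            while (l->inprog > 0)    /* ... then wait for the drain    */
                    pthread_cond_wait(&l->cv, &l->mtx);
            pthread_mutex_unlock(&l->mtx);
    }

    void state_unlock_excl(struct nfsv4_state_lock *l)
    {
            pthread_mutex_lock(&l->mtx);
            l->excl = 0;
            pthread_cond_broadcast(&l->cv);
            pthread_mutex_unlock(&l->mtx);
    }

The failure mode in this dump follows directly from the pattern: the
threads stuck in zfs_fhtovp() never reach the exit path, so the
exclusive waiter sleeps forever in nfsv4_lock, and every new RPC then
queues behind it.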
>>> 918 100574-100600 nfsd    nfsd: service   mi_switch+0xe1
>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902
>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d
>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917
>>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb
>>> fork_exit+0x9a fork_trampoline+0xe
>>> [27 threads, all blocked in zfs_fhtovp() with this same stack]
>>> 918 100601-100607 nfsd    nfsd: service   [same zfs_fhtovp stack]
>>
>> Lots more waiting for the ZFS vnode lock in zfs_fhtovp().
>>
>> 918 100608 nfsd            nfsd: service   mi_switch+0xe1
>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b
>> nfsrv_getlockfile+0x179 nfsrv_lockctrl+0x21f nfsrvd_lock+0x5b1
>> nfsrvd_dorpc+0xec6 nfssvc_program+0x554 svc_run_internal+0xc77
>> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
>> 918 100609 nfsd            nfsd: service   [same zfs_fhtovp stack]
>> 918 100610 nfsd            nfsd: service   mi_switch+0xe1
>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0xc9e
>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43
>> nfsvno_advlock+0x119 nfsrv_dolocal+0x84 nfsrv_lockctrl+0x14ad
>> nfsrvd_locku+0x283 nfsrvd_dorpc+0xec6 nfssvc_program+0x554
>> svc_run_internal+0xc77 svc_thread_start+0xb fork_exit+0x9a
>> fork_trampoline+0xe
>> 918 100611-100618 nfsd     nfsd: service   [nfsv4_lock stack, as
>> for 100571 above]
>> 918 100619-100620 nfsd     nfsd: service   [zfs_fhtovp stack]
>> 918 100621 nfsd            nfsd: service   [nfsv4_lock stack]
>> 918 100622 nfsd            nfsd: service   [zfs_fhtovp stack]
>> 918 100623 nfsd            nfsd: service   [nfsv4_lock stack]
>> 918 100624-100651 nfsd     nfsd: service   [zfs_fhtovp stack]
>> 918 100652-100658 nfsd     nfsd: service   [zfs_fhtovp stack]
>>
>> Loïc Blot,
>> UNIX Systems, Network and Security Engineer
>> http://www.unix-experience.fr
>>
>> 15 December 2014 13:29 "Loïc Blot" wrote:
>>
>> Hmmm...
>> now I'm experiencing a deadlock.
>>
>> 0 918 915 0 21 0 12352 3372 zfs D - 1:48.64 nfsd: server (nfsd)
>>
>> The only fix was to reboot the server, but after rebooting, the
>> deadlock arrives a second time when I start my jails over NFS.
>>
>> Regards,
>>
>> Loïc Blot,
>> UNIX Systems, Network and Security Engineer
>> http://www.unix-experience.fr
>>
>> 15 December 2014 10:07 "Loïc Blot" wrote:
>>
>> Hi Rick,
>> after talking with my N+1,
>> NFSv4 is required on our infrastructure. I tried to upgrade the
>> NFSv4+ZFS server from 9.3 to 10.1; I hope this will resolve some
>> issues...
>>
>> Regards,
>>
>> Loïc Blot,
>> UNIX Systems, Network and Security Engineer
>> http://www.unix-experience.fr
>>
>> 10 December 2014 15:36 "Loïc Blot" wrote:
>>
>> Hi Rick,
>> thanks for your suggestion.
>> For my locking bug, rpc.lockd is stuck in the rpcrecv state on the
>> server. kill -9 doesn't affect the process; it's blocked....
>> (State: Ds)
>>
>> For the performance:
>>
>> NFSv3: 60Mbps
>> NFSv4: 45Mbps
>>
>> Regards,
>>
>> Loïc Blot,
>> UNIX Systems, Network and Security Engineer
>> http://www.unix-experience.fr
>>
>> 10 December 2014 13:56 "Rick Macklem" wrote:
>>
>> Loic Blot wrote:
>>
>>> Hi Rick,
>>> I'm trying NFSv3.
>>> Some jails are starting very well, but now I have an issue with
>>> lockd after some minutes:
>>>
>>> nfs server 10.10.X.8:/jails: lockd not responding
>>> nfs server 10.10.X.8:/jails lockd is alive again
>>>
>>> I looked at mbuf, but it seems there is no problem.
>>
>> Well, if you need locks to be visible across multiple clients,
>> then I'm afraid you are stuck with using NFSv4 and the performance
>> you get from it. (There is no way to do file handle affinity for
>> NFSv4 because the read and write ops are buried in the compound
>> RPC and not easily recognized.)
>>
>> If the locks don't need to be visible across multiple clients, I'd
>> suggest trying the "nolockd" option with nfsv3.
>>
>>> Here is my rc.conf on the server:
>>>
>>> nfs_server_enable="YES"
>>> nfsv4_server_enable="YES"
>>> nfsuserd_enable="YES"
>>> nfsd_server_flags="-u -t -n 256"
>>> mountd_enable="YES"
>>> mountd_flags="-r"
>>> nfsuserd_flags="-usertimeout 0 -force 20"
>>> rpcbind_enable="YES"
>>> rpc_lockd_enable="YES"
>>> rpc_statd_enable="YES"
>>>
>>> Here is the client:
>>>
>>> nfsuserd_enable="YES"
>>> nfsuserd_flags="-usertimeout 0 -force 20"
>>> nfscbd_enable="YES"
>>> rpc_lockd_enable="YES"
>>> rpc_statd_enable="YES"
>>>
>>> Have you got an idea?
>>>
>>> Regards,
>>>
>>> Loïc Blot,
>>> UNIX Systems, Network and Security Engineer
>>> http://www.unix-experience.fr
>>>
>>> 9 December 2014 04:31 "Rick Macklem" wrote:
>>>> Loic Blot wrote:
>>>>
>>>>> Hi rick,
>>>>>
>>>>> I waited 3 hours (no lag at jail launch) and now I do:
>>>>> sysrc memcached_flags="-v -m 512"
>>>>> The command was very very slow...
>>>>>
>>>>> Here is a dd over NFS:
>>>>>
>>>>> 601062912 bytes transferred in 21.060679 secs (28539579
>>>>> bytes/sec)
>>>>
>>>> Can you try the same read using an NFSv3 mount?
>>>> (If it runs much faster, you have probably been bitten by the
>>>> ZFS "sequential vs random" read heuristic, which I've been told
>>>> thinks NFS is doing "random" reads without file handle affinity.
>>>> File handle affinity is very hard to do for NFSv4, so it isn't
>>>> done.)
>>
>> I was actually suggesting that you try the "dd" over nfsv3 to see
>> how the performance compared with nfsv4.
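For readers unfamiliar with the heuristic being blamed here, the
sketch below shows the general shape of a sequential-read detector
(illustrative only; ZFS's actual prefetch logic is more elaborate):
a read counts as sequential when it starts exactly where the previous
one ended, and the prefetch hint grows with the streak.

    #include <stdint.h>
    #include <stdio.h>

    struct read_stream {
            uint64_t next_off;   /* offset one past the last read */
            unsigned streak;     /* consecutive sequential reads  */
    };

    /* How many bytes to prefetch after a read of [off, off+len). */
    static uint64_t
    prefetch_hint(struct read_stream *rs, uint64_t off, uint64_t len)
    {
            if (off == rs->next_off)
                    rs->streak++;        /* looks sequential       */
            else
                    rs->streak = 0;      /* looks random: reset    */
            rs->next_off = off + len;
            return ((rs->streak < 64 ? rs->streak : 64) * len);
    }

    int
    main(void)
    {
            struct read_stream a = { 0, 0 }, b = { 0, 0 };

            /* In-order stream: the hint grows. */
            printf("in-order:  %ju\n", (uintmax_t)prefetch_hint(&a, 0, 64));
            printf("in-order:  %ju\n", (uintmax_t)prefetch_hint(&a, 64, 64));
            printf("in-order:  %ju\n", (uintmax_t)prefetch_hint(&a, 128, 64));

            /* The same reads arriving out of order: the hint stays 0. */
            printf("reordered: %ju\n", (uintmax_t)prefetch_hint(&b, 64, 64));
            printf("reordered: %ju\n", (uintmax_t)prefetch_hint(&b, 0, 64));
            printf("reordered: %ju\n", (uintmax_t)prefetch_hint(&b, 128, 64));
            return (0);
    }

With several nfsd threads picking read RPCs off a shared queue, a
client's in-order stream can reach the filesystem out of order, so
the streak keeps resetting and readahead never engages; file handle
affinity restores order by steering one file's reads to one thread,
which per Rick is only feasible for NFSv3.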
>> If you do that, please post the comparable results.
>>
>> Someday I would like to try and get ZFS's sequential vs random
>> read heuristic modified, and any info on what difference in
>> performance that might make for NFS would be useful.
>>
>> rick
>>
>> This is quite slow...
>>
>> You can find some nfsstat below (the command isn't finished yet):
>>
>> nfsstat -c -w 1
>>
>> GtAttr Lookup Rdlink Read Write Rename Access Rddir
>> 0 0 0 0 0 0 0 0
>> 4 0 0 0 0 0 16 0
>> 2 0 0 0 0 0 17 0
>> [mostly-idle one-second samples omitted]
>> 37 10 0 8 0 0 14 1
>> 18 16 0 4 1 2 4 0
>> 78 91 0 82 6 12 30 0
>> 19 18 0 2 2 4 2 0
>> [more idle samples omitted]
>> 98 54 0 86 11 0 25 0
>> 36 24 0 39 25 0 10 1
>> 67 8 0 63 63 0 41 0
>> 34 0 0 35 34 0 0 0
>> 75 0 0 75 77 0 0 0
>> 34 0 0 35 35 0 0 0
>> 75 0 0 74 76 0 0 0
>> 33 0 0 34 33 0 0 0
>> [the remaining samples follow the same pattern: long idle
>> stretches with occasional bursts of Read/Write activity up to
>> roughly 100 ops/s]
>>
>> Regards,
>>
>> Loïc Blot,
>> UNIX Systems, Network and Security Engineer
>> http://www.unix-experience.fr
>>
>> 8 December 2014 09:36 "Loïc Blot" wrote:
>>> Hi Rick,
>>> I stopped the jails this week-end and started them this morning;
>>> I'll give you some stats this week.
>>>
>>> Here is my nfsstat -m output (with your rsize/wsize tweaks):
>>
>> nfsv4,tcp,resvport,hard,cto,sec=sys,acdirmin=3,acdirmax=60,
>> acregmin=5,acregmax=60,nametimeo=60,negnametimeo=60,rsize=32768,
>> wsize=32768,readdirsize=32768,readahead=1,wcommitsize=773136,
>> timeout=120,retrans=2147483647
>>
>> On the server side my disks are behind a RAID controller which
>> shows a 512b volume, and write performance is very honest
>> (dd if=/dev/zero of=/jails/test.dd bs=4096 count=100000000
>> => 450MBps)
>>
>> Regards,
>>
>> Loïc Blot,
>> UNIX Systems, Network and Security Engineer
>> http://www.unix-experience.fr
>>
>> 5 December 2014 15:14 "Rick Macklem" wrote:
>>
>> Loic Blot wrote:
>>
>> Hi,
>> I'm trying to create a virtualisation environment based on jails.
>> The jails are stored under a big ZFS pool on a FreeBSD 9.3 server
>> which exports an NFSv4 volume. This NFSv4 volume is mounted on a
>> big hypervisor (2 Xeon E5v3 + 128GB memory and 8 ports, but only 1
>> was used at this time).
>>
>> The problem is simple: my hypervisor runs 6 jails (using about 1%
>> cpu and 10GB RAM, and less than 1MB bandwidth) and works fine at
>> the start, but the system slows down, and after 2-3 days it
>> becomes unusable. When I look at the top command I see 80-100%
>> system time, and commands are very very slow.
>> Many processes are tagged with nfs_cl*.
>>
>> To be honest, I would expect the slowness to be because of slow
>> response from the NFSv4 server, but if you do:
>>   # ps axHl
>> on a client when it is slow and post that, it would give us some
>> more information on where the client side processes are sitting.
>> If you also do something like:
>>   # nfsstat -c -w 1
>> and let it run for a while, that should show you how many RPCs are
>> being done and which ones.
>>
>>   # nfsstat -m
>> will show you what your mount is actually using.
>> The only mount option I can suggest trying is
>> "rsize=32768,wsize=32768", since some network environments have
>> difficulties with 64K.
>>
>> There are a few things you can try on the NFSv4 server side, if it
>> appears that the clients are generating a large RPC load.
>> - disabling the DRC cache for TCP by setting vfs.nfsd.cachetcp=0
>> - If the server is seeing a large write RPC load, then
>>   "sync=disabled" might help, although it does run a risk of data
>>   loss when the server crashes.
>> Then there are a couple of other ZFS related things (I'm not a ZFS
>> guy, but these have shown up on the mailing lists).
>> - make sure your volumes are 4K aligned and ashift=12 (in case a
>>   drive that uses 4K sectors is pretending to be 512byte sectored)
>> - never run over 70-80% full if write performance is an issue
>> - use a zil on an SSD with good write performance
>>
>> The only NFSv4 thing I can tell you is that it is known that ZFS's
>> algorithm for determining sequential vs random I/O fails for NFSv4
>> during writing, and this can be a performance hit. The only
>> workaround is to use NFSv3 mounts, since file handle affinity
>> apparently fixes the problem, and this is only done for NFSv3.
>>
>> rick
>>
>> I saw that there are TSO issues with igb, so I'm trying to disable
>> it with sysctl, but the situation wasn't solved.
>>
>> Has someone got ideas? I can give you more information if you
>> need.
>>
>> Thanks in advance.
>> Regards,
>>
>> Loïc Blot,
>> UNIX Systems, Network and Security Engineer
>> http://www.unix-experience.fr
>> _______________________________________________
>> freebsd-fs@freebsd.org mailing list
>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"

From owner-freebsd-fs@FreeBSD.ORG Tue Dec 30 18:07:50 2014
Date: Tue, 30 Dec 2014 10:08:54 -0800
From: Gleb Kurtsou
To: Kirk McKusick
Cc: FreeBSD Filesystems, Konstantin Belousov
Subject: Re: patch that makes d_fileno 64bits
Message-ID: <20141230180854.GA964@reks>
References: <1966344327.2961798.1419723168645.JavaMail.root@uoguelph.ca>
 <201412290702.sBT72I8G087361@chez.mckusick.com>
In-Reply-To: <201412290702.sBT72I8G087361@chez.mckusick.com>
On (28/12/2014 23:02), Kirk McKusick wrote:
> > Date: Sat, 27 Dec 2014 18:32:48 -0500 (EST)
> > From: Rick Macklem
> > To: FreeBSD Filesystems, Kirk McKusick, Gleb Kurtsou,
> >     Konstantin Belousov
> > Subject: patch that makes d_fileno 64bits
> >
> > Hi,
> >
> > Kirk and Gleb Kurtsou (plus some others) are working through the
> > difficult job of changing ino_t to 64bits. (Changes to syscalls,
> > libraries, etc.)
> >
> > This patch:
> >   http://people.freebsd.org/~rmacklem/64bitfileno.patch
> > is somewhat tangential to the above, in that it changes the
> > d_fileno field of "struct dirent" and va_fileid to uint64_t.
> > It also includes adding a field called d_cookie to "struct
> > dirent", which is the position of the next directory entry in the
> > underlying file system. The majority of this patch is changes to
> > the NFS code, but it includes a simple "new struct dirent"->"old
> > struct dirent32" copy routine for getdirentries(2) and small
> > changes to all the file systems so they fill in the "new struct
> > dirent".
> >
> > This patch can be applied to head/current and the resultant
> > kernel should work fine, although I've only been able to test
> > some of the file systems. However, DO NOT propagate the changes
> > to sys/sys/dirent.h out to userland (/usr/include/sys/dirent.h)
> > and build a userland from it, or things will get badly broken.
> >
> > I don't know if Kirk and/or Gleb will find some of this useful
> > for their updates to project/ino64, but it will allow people to
> > test these changes. (It modifies the NFS server so that it no
> > longer uses the "cookie" args to VOP_READDIR(), but that part can
> > easily be removed from the patch.)
> >
> > If folks can test this patch, I think it would be helpful for the
> > effort of changing ino_t to 64bits.
> >
> > Have fun with it, rick
>
> Thanks Rick, this does look useful. Since Gleb is leading the
> charge with changing ino_t to 64 bits, I will let him have final
> say on when it would be helpful to have this go into HEAD. But it
> does seem that it should be possible to do it before the other
> changes and independently of them, since it only changes the
> internal kernel interfaces. But perhaps I am missing something that
> Gleb or kib can point out.

Full 64-bit ino_t patch is now committed to projects/ino64. Reviews
are welcome and encouraged. It would be interesting to compare Rick's
changes to VOP_READDIR implementations to the code I had in 2011.
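As a reading aid, here is roughly what the change described above
amounts to in C. The layouts are illustrative only, taken from the
prose of Rick's description rather than from the patch or from what
projects/ino64 finally committed:

    #include <stdint.h>

    struct dirent_new {             /* hypothetical widened form   */
            uint64_t d_fileno;      /* file number, now 64 bits    */
            uint64_t d_cookie;      /* position of the next entry
                                       in the underlying fs        */
            uint16_t d_reclen;      /* record length               */
            uint8_t  d_type;        /* DT_* file type              */
            uint8_t  d_namlen;      /* name length                 */
            char     d_name[256];   /* entry name                  */
    };

    struct dirent32 {               /* legacy layout, kept for the
                                       getdirentries(2) compat
                                       copy-out                    */
            uint32_t d_fileno;
            uint16_t d_reclen;
            uint8_t  d_type;
            uint8_t  d_namlen;
            char     d_name[256];
    };

The "copy routine" mentioned above would then walk entries in the new
form and emit dirent32 records for old callers, truncating d_fileno
and dropping d_cookie.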
As far as I remember, I didn't get to the point of using d_off in
NFS. I intend to get back to it after we are done with ino64.

Preferably those changes should be committed to CURRENT in several
chunks, with at least a couple of weeks between them; otherwise IMHO
it would be too much of a change for the interfaces virtually every
program uses:

1. Merge projects/ino64
2. Commit VOP_READDIR changes to populate d_off / d_cookie
3. Use d_off in NFS

On the projects/ino64 branch:

- No problems found so far with the kernel compat shims. I'm running
  tests with a new kernel and old userland, mostly things like make
  universe, /usr/tests, pjdfstest, blogbench, dbench.

- I had another look at changing mode_t to 32-bit. At this point I
  would suggest keeping mode_t as 16-bit. Compat shims won't be
  trivial to implement and test, mostly because of the usage in ipc
  subsystems: struct ipc_perm, ksem, shmfd. kinfo_file and
  libprocstat are already extended to use 32-bit for the mode_t
  equivalent, so we may keep them as is.

My current TODO:

- Double check ABI with shlib-compat (including lib32)
- Bump majors for shlibs without symbol versioning. It is my
  understanding that we should bump them all. kib? ABI changes for at
  least libutil, libufs and libarchive
- Retest with new libc.so only
- Install a complete ino64 userland and retest
- Ask for help with running a complete ports build

> It seems to me that the cookies calculation could be taken out of
> the VOP_GETDIRENTRIES interface since NFS is the only client of it.

Do you mean the cookies and ncookies arguments? That would be nice.
And while there, replace uio with a handler function argument for
dirent processing (as was mentioned elsewhere). E.g. NFS doesn't need
entries to be stored in the buffer and can process them directly.
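One possible shape of the handler-function idea, as a self-contained
toy (names are invented for illustration; this is not a proposal for
the actual VOP interface): the directory-walking side invokes a
callback per entry instead of packing entries into a uio buffer, so a
consumer like the NFS server could encode each entry directly.

    #include <stdint.h>
    #include <stdio.h>

    struct xdirent {                /* stand-in for the new dirent */
            uint64_t d_fileno;
            uint64_t d_cookie;      /* position of the next entry  */
            char     d_name[32];
    };

    typedef int dirent_cb_t(void *arg, const struct xdirent *dp);

    /* Toy "filesystem": iterate entries from the cookie onward,
     * handing each one to the callback rather than copying it out. */
    static int
    readdir_cb(const struct xdirent *dir, size_t n, uint64_t *cookiep,
        dirent_cb_t *cb, void *arg)
    {
            for (size_t i = (size_t)*cookiep; i < n; i++) {
                    int error = cb(arg, &dir[i]);
                    if (error)      /* the handler may stop early */
                            return (error);
                    *cookiep = dir[i].d_cookie;
            }
            return (0);
    }

    static int
    print_entry(void *arg, const struct xdirent *dp)
    {
            (void)arg;              /* NFS would XDR-encode here */
            printf("%ju %s\n", (uintmax_t)dp->d_fileno, dp->d_name);
            return (0);
    }

    int
    main(void)
    {
            struct xdirent dir[] = {
                    { 2, 1, "." }, { 1, 2, ".." }, { 57, 3, "afile" },
            };
            uint64_t cookie = 0;

            return (readdir_cb(dir, 3, &cookie, print_entry, NULL));
    }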
From owner-freebsd-fs@FreeBSD.ORG Tue Dec 30 18:49:05 2014
Return-Path: <owner-freebsd-fs@FreeBSD.ORG>
Delivered-To: freebsd-fs@freebsd.org
Date: Tue, 30 Dec 2014 20:48:59 +0200
From: Konstantin Belousov
To: Kirk McKusick, Rick Macklem, FreeBSD Filesystems
Subject: Re: patch that makes d_fileno 64bits
Message-ID: <20141230184859.GL42409@kib.kiev.ua>
References: <1966344327.2961798.1419723168645.JavaMail.root@uoguelph.ca> <201412290702.sBT72I8G087361@chez.mckusick.com> <20141230180854.GA964@reks>
In-Reply-To: <20141230180854.GA964@reks>

On Tue, Dec 30, 2014 at 10:08:54AM -0800, Gleb Kurtsou wrote:
> - Bump majors for shlibs without symbol versioning. It is my
>   understanding that we should bump them all. kib? There are ABI
>   changes for at least libutil, libufs and libarchive.
Yes, they must all be bumped. The hope of libutil someday getting
symbol versioning has already died.
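For readers wondering why the lack of symbol versioning forces a major
bump: with versioning, an ABI-changing commit can leave a compat symbol
behind so old binaries keep working, and the major can stay put. A
hedged sketch of the mechanism using the GNU assembler's .symver
directive; the function names and version nodes below are invented,
and a matching version script is assumed at link time:

#include <stdint.h>

/*
 * Old ABI: takes a 32-bit inode number. Binaries linked against the
 * old library keep binding to example_getino@FBSD_1.3.
 */
int example_getino_old(uint32_t ino);
__asm__(".symver example_getino_old, example_getino@FBSD_1.3");

int
example_getino_old(uint32_t ino)
{
	return (ino != 0);
}

/*
 * New ABI: takes the widened 64-bit inode number. Newly linked
 * binaries bind to the default version, example_getino@@FBSD_1.4.
 */
int example_getino_new(uint64_t ino);
__asm__(".symver example_getino_new, example_getino@@FBSD_1.4");

int
example_getino_new(uint64_t ino)
{
	return (ino != 0);
}

Libraries without any version nodes (libutil, libufs, libarchive in
the thread) have no such escape hatch, hence the bump.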
From owner-freebsd-fs@FreeBSD.ORG Tue Dec 30 21:52:49 2014
Return-Path: <owner-freebsd-fs@FreeBSD.ORG>
Delivered-To: freebsd-fs@freebsd.org
Date: Tue, 30 Dec 2014 16:52:48 -0500
From: kirk russell
Sender: kirk.j.russell@gmail.com
Reply-To: kirk@ba23.org
To: Kevin Lo
Cc: freebsd-fs@freebsd.org
Subject: Re: help using mount_smbfs with apple time capsule server
References: <20141229053714.GA66793@ns.kevlo.org> <2050662882.3355858.1419863932660.JavaMail.root@uoguelph.ca> <20141230020510.GA73120@ns.kevlo.org>
In-Reply-To: <20141230020510.GA73120@ns.kevlo.org>

On 29 December 2014 at 21:05, Kevin Lo wrote:
> On Mon, Dec 29, 2014 at 09:38:52AM -0500, Rick Macklem wrote:
>>
>> Kevin Lo wrote:
>> > On Sun, Dec 28, 2014 at 06:12:07PM -0500, Rick Macklem wrote:
>> > >
>> > > Kirk Russell wrote:
>> > > > Hi,
>> > > >
>> > > > I cannot get FreeBSD 10's smbfs client to work with my server --
>> > > > an Apple time capsule, 4th generation, version 7.6.4.
>> > > >
>> > > > Here are the commands I ran to reproduce the issue:
>> > > > # uname -a
>> > > > FreeBSD 10.1-RELEASE FreeBSD 10.1-RELEASE #0 r274401: Tue Nov 11
>> > > > 21:02:49 UTC 2014
>> > > > root@releng1.nyi.freebsd.org:/usr/obj/usr/src/sys/GENERIC amd64
>> > > > # mount_smbfs -R 16 //lamb@Meganium/Data /mnt
>> > > > Password:
>> > > > # dd if=/mnt/afile bs=1 count=1 of=/dev/null
>> > > > dd: /mnt/afile: No such file or directory
>> >
>> > Are you sure the file "afile" really exists?

I checked again, and the file "afile" exists on the server. Using
FreeBSD 10, 'ls /mnt' comes back empty. If there is more debugging
that I can do from here, please let me know.

>> >
>> > > > For the FreeBSD 10 session, I tried to capture the raw packets,
>> > > > using tcpdump, in file bad.tcpdump.
>> > > >
>> > > > This works with FreeBSD 9. For this working session, I tried to
>> > > > capture the raw packets, using tcpdump, in file good.tcpdump.
>> > > >
>> > > > # uname -a
>> > > > FreeBSD 9.3-STABLE FreeBSD 9.3-STABLE #0: Wed Dec 24 16:16:05 EST
>> > > > 2014 kirk@freenas:/usr/obj/usr/src/sys/GENERIC amd64
>> > > > # mount_smbfs -R 16 //lamb@Meganium/Data /mnt
>> > > > Password:
>> > > > # dd if=/mnt/afile bs=1 count=1 of=/dev/null
>> > > > 1+0 records in
>> > > > 1+0 records out
>> > > > 1 bytes transferred in 0.000345 secs (2899 bytes/sec)
>> > > >
>> > > > The two raw packet dumps are in this archive:
>> > > > http://www.employees.org/~kirk/bstgbugs/smbfs.tar.gz
>> > > >
>> > > > Any pointers on how to get this working?
>> > > > The server appears to be returning an ERRbadpath error.
>> > >
>> > > Well, my guess is that it has something to do with the Unicode
>> > > changes added to smbfs about three years ago by kevlo@ (r227650
>> > > and friends in head). These changes are not in FreeBSD 9.
>> >
>> > Hmm, it was MFC'ed to stable/9 (r230196).
>>
>> Oops, my mistake. When I looked at svnweb, I saw the first log
>> entry for smbfs_subr.c listing "MFC: r228796", but didn't notice
>> the second "MFC: r227650", and thought it hadn't been MFC'd.
>
> No worries :-)
>
>> All I can tell you is that wireshark shows "\afile" for good.tcpdump
>> vs "\\afile" for bad.tcpdump, so I guessed that was why the Mac
>> didn't find it?
>
> I don't have machines running samba on OS X; I'll try to borrow my
> colleague's laptop and test it out, thanks.

The SMB/CIFS server is one of these kinds of boxes:
http://en.wikipedia.org/wiki/AirPort_Time_Capsule

>
>> I'll leave now, since I know nothing about SMB, rick
>>
>> > > It appears that it now sends "\\afile" instead of "\afile".
>> > > I know nothing about the code/protocol, but r227650 added
>> > > changes like:
>> > >     error = mb_put_uint8(mbp, '\\');
>> > > replaced with:
>> > >     if (SMB_UNICODE_STRINGS(vcp))
>> > >         error = mb_put_uint16le(mbp, '\\');
>> > >     else
>> > >         error = mb_put_uint8(mbp, '\\');
>> > > (Here '\\' is the usual C escape for a single backslash
>> > > character.)
>> > >
>> > > Hopefully someone knows enough about Unicode or how SMB
>> > > uses it to make sense of this?
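To illustrate the kind of bug being guessed at here -- this is a
hypothetical userland model, not the actual sys/netsmb marshalling
code -- a path-building helper that unconditionally prepends the SMB
separator will emit "\\afile" whenever the caller's name already
starts with one:

#include <stdio.h>
#include <string.h>

/* Toy path marshaller: always prepends the SMB path separator. */
static void
put_smb_path(char *dst, size_t dstlen, const char *name)
{
	size_t off = 0;

	if (off < dstlen - 1)
		dst[off++] = '\\';	/* separator is always added */
	/* If name already begins with '\\', the result has two. */
	strlcpy(dst + off, name, dstlen - off);
}

int
main(void)
{
	char buf[64];

	put_smb_path(buf, sizeof(buf), "afile");	/* "\afile"  - good */
	printf("%s\n", buf);
	put_smb_path(buf, sizeof(buf), "\\afile");	/* "\\afile" - bad  */
	printf("%s\n", buf);
	return (0);
}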
>> > I tested it under FreeBSD -current [1] and 9.3-STABLE [2], and it
>> > works perfectly...
>> >
>> > [1]
>> > ftp://ftp.freebsd.org/pub/FreeBSD/snapshots/ISO-IMAGES/11.0/FreeBSD-11.0-CURRENT-amd64-20141222-r276066-memstick.img
>> >
>> > [2]
>> > ftp://ftp.freebsd.org/pub/FreeBSD/snapshots/ISO-IMAGES/9.3/FreeBSD-9.3-STABLE-amd64-20141222-r276041-memstick.img
>> >
>> > > rick
>> > >
>> > > > --
>> > > > Kirk Russell
>> > > > http://www.ba23.org/
>
> Kevin

--
Kirk Russell
http://www.ba23.org/