From: Rick Macklem <rmacklem@uoguelph.ca>
To: freebsd-fs@freebsd.org
Date: Mon, 2 May 2011 16:58:15 -0400 (EDT)
Subject: Re: RFC: NFS server handling of negative f_bavail?
Message-ID: <924130649.898737.1304369895239.JavaMail.root@erie.cs.uoguelph.ca>
List-Id: Filesystems <freebsd-fs@freebsd.org>

I just ran a little test where I ran an FFS volume on a FreeBSD-current
server out of space, so that it showed negative avail, and then mounted
it on a Solaris 10 client. Here are the df outputs for the server and
the client.
FreeBSD server (nfsv4-newlap):

Filesystem   1K-blocks     Used    Avail Capacity  Mounted on
/dev/ad4s3a    2026030   671492  1192456    36%    /
devfs                1        1        0   100%    /dev
/dev/ad4s3e    4697030  4544054  -222786   105%    /sub1
/dev/ad4s3d    5077038   641462  4029414    14%    /usr

and for the Solaris 10 client:

Filesystem             kbytes    used    avail capacity  Mounted on
/dev/dsk/c0d0s0       3870110 2790938  1040471    73%    /
/devices                    0       0        0     0%    /devices
ctfs                        0       0        0     0%    /system/contract
proc                        0       0        0     0%    /proc
mnttab                      0       0        0     0%    /etc/mnttab
swap                   975736     624   975112     1%    /etc/svc/volatile
objfs                       0       0        0     0%    /system/object
/usr/lib/libc/libc_hwcap1.so.1
                      3870110 2790938  1040471    73%    /lib/libc.so.1
fd                          0       0        0     0%    /dev/fd
swap                   975112       0   975112     0%    /tmp
swap                   975140      28   975112     1%    /var/run
/dev/dsk/c0d0s7       5608190 4118091  1434018    75%    /export/home
nfsv4-newlap:/sub1    4697030 4544054 18014398509259198  1%    /mnt

You can see that the Solaris 10 client thinks there is lots of avail.
I think sending the field as 0 over the wire would provide better
interoperability. rick
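To illustrate the failure mode being discussed, here is a minimal C sketch (not the actual FreeBSD nfsd code; the function names are hypothetical): f_bavail is signed on FreeBSD, and copying a negative value into the unsigned space_avail attribute of an NFS reply wraps it around to a huge number, which is what the Solaris 10 client then prints. The proposed fix is to clamp negative values to 0 before they go over the wire.

```c
#include <stdint.h>

/*
 * Hypothetical sketch of the problem and the proposed fix.
 * f_bavail is a signed 64-bit quantity on FreeBSD; the NFS
 * space_avail attribute on the wire is unsigned.
 */

/* Naive encoding: a negative avail wraps to a huge unsigned value. */
uint64_t
raw_space_avail(int64_t bavail, uint64_t bsize)
{
	return ((uint64_t)bavail * bsize);
}

/* Proposed behavior: send 0 when the filesystem is over capacity. */
uint64_t
clamped_space_avail(int64_t bavail, uint64_t bsize)
{
	return (bavail < 0 ? 0 : (uint64_t)bavail * bsize);
}
```

With the /sub1 numbers above (f_bavail of -222786 1K blocks), raw_space_avail(-222786, 1024) divided back into kbytes comes out to 18014398509259198, exactly the "avail" the Solaris 10 client shows for /mnt, while the clamped version yields 0.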