From owner-freebsd-fs@FreeBSD.ORG Sun Nov 29 01:00:16 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 0C20E1065676 for ; Sun, 29 Nov 2009 01:00:16 +0000 (UTC) (envelope-from dimitry@andric.com) Received: from tensor.andric.com (cl-327.ede-01.nl.sixxs.net [IPv6:2001:7b8:2ff:146::2]) by mx1.freebsd.org (Postfix) with ESMTP id C7B3B8FC14 for ; Sun, 29 Nov 2009 01:00:15 +0000 (UTC) Received: from [IPv6:2001:7b8:3a7:0:319f:9c16:d53d:1cec] (unknown [IPv6:2001:7b8:3a7:0:319f:9c16:d53d:1cec]) (using TLSv1 with cipher DHE-RSA-CAMELLIA256-SHA (256/256 bits)) (No client certificate requested) by tensor.andric.com (Postfix) with ESMTPSA id 25C4E5C43; Sun, 29 Nov 2009 02:00:14 +0100 (CET) Message-ID: <4B11C7A1.1040801@andric.com> Date: Sun, 29 Nov 2009 02:00:17 +0100 From: Dimitry Andric User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.2; en-US; rv:1.9.1.5) Gecko/20091126 Shredder/3.0.1pre MIME-Version: 1.0 To: Wes Morgan References: In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org Subject: Re: raidz configuration X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 29 Nov 2009 01:00:16 -0000

On 2009-11-28 23:22, Wes Morgan wrote:
> Simple question:
>
> 8 devices in a raidz2
> or
> 4 devices in a raidz x 2

With the first configuration, any two drives can fail, and all data is still preserved.

With the second configuration, if two drives fail within the same RAID set, you are screwed.

So, if safety is your concern, I would definitely choose the first configuration. :)

From owner-freebsd-fs@FreeBSD.ORG Sun Nov 29 02:03:10 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 5366D106566C for ; Sun, 29 Nov 2009 02:03:10 +0000 (UTC) (envelope-from filipe@wsbr.com.br) Received: from fg-out-1718.google.com (fg-out-1718.google.com [72.14.220.155]) by mx1.freebsd.org (Postfix) with ESMTP id D08B38FC08 for ; Sun, 29 Nov 2009 02:03:09 +0000 (UTC) Received: by fg-out-1718.google.com with SMTP id l26so983584fgb.13 for ; Sat, 28 Nov 2009 18:03:08 -0800 (PST) MIME-Version: 1.0 Received: by 10.239.138.13 with SMTP id n13mr322389hbn.9.1259458581062; Sat, 28 Nov 2009 17:36:21 -0800 (PST) Date: Sat, 28 Nov 2009 23:36:21 -0200 Message-ID: <6bd9b7940911281736j12ba1b3fs544cae3274d8d4cb@mail.gmail.com> From: Filipe Paternot To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Subject: Issues with ZFS under remote NFS directory X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 29 Nov 2009 02:03:10 -0000

Hello there,

I've got a pretty weird issue with a remote filesystem which is mounted on a given server.
Renovatio, the NFS server, is running FreeBSD 7.2-RELEASE (32-bit).
The Neo server is running FreeBSD 8.0-RELEASE (64-bit).

The problem is this:
I mount the pool root dir at /mnt/renovatio2. Then, when I list one of my dirs, it shows some content that was saved a couple of months ago for that same folder (the snapshot it came from has since been deleted). That is weird, since I'm not accessing the hidden .zfs dir, and the folder should be empty anyway, as the snapshot was already removed.
If I mount the specific dir instead, it works fine. I'm not using the zfs sharenfs option.

I am a little bit out of ideas, and not sure whether it's a bug, but if anyone can assist me, I'd be very grateful. Thank you.

[root@neo /mnt]# mount_nfs -3 -b -o tcp -o nolockd -o intr -o soft -o retrycnt=1 -o retrans=3 renovatio:/pool/filipe /mnt/renovatio2
[root@neo /mnt]# ls renovatio2/
brutalforce/ eggfacil/ patches/ psyx/ pub/ root/ scripts/ src/ sup/ vds/
[root@neo /mnt]# mount_nfs -3 -b -o tcp -o nolockd -o intr -o soft -o retrycnt=1 -o retrans=3 renovatio:/pool /mnt/renovatio3
[root@neo /mnt]# df | grep renovatio
Filesystem                  Size  Used  Avail  Capacity  Mounted on
renovatio:/usr/home/filipe  447G  109G   302G       27%  /mnt/renovatio1
renovatio:/pool/filipe      417G   94G   323G       23%  /mnt/renovatio2
renovatio:/pool             348G   25G   323G        7%  /mnt/renovatio3
[root@neo /mnt]# ls renovatio2
brutalforce/ eggfacil/ patches/ psyx/ pub/ root/ scripts/ src/ sup/ vds/
[root@neo /mnt]# ls renovatio3/
filipe/
[root@neo /mnt]# ls renovatio3/filipe/
docs/ jail/ patches/ scripts/ sup/ webmin-1.480.tar.gz ircds/ mem_jail.pl* psyx/ src/ vds/
[root@neo /mnt]#

[root@renovatio /pool]# ls filipe/
obj.tgz ports.tgz src.tgz vds/
[root@renovatio /pool]# ls filipe/
brutalforce/ eggfacil/ patches/ psyx/ pub/ root/ scripts/ src/ sup/ vds/
[root@renovatio /pool]# zfs list -t all | egrep -i '(pool |pool/filipe)'
NAME          USED  AVAIL  REFER  MOUNTPOINT
pool          134G   323G  25.0G  /pool
pool/filipe  94.4G   323G  94.4G  /pool/filipe
[root@renovatio /pool]#

From owner-freebsd-fs@FreeBSD.ORG Sun Nov 29 17:20:27 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 5B7CC106568F for ; Sun, 29 Nov 2009 17:20:27 +0000 (UTC) (envelope-from olivier@gid0.org) Received: from mail-gx0-f214.google.com (mail-gx0-f214.google.com [209.85.217.214]) by mx1.freebsd.org (Postfix) with ESMTP id 2469D8FC13 for ; Sun, 29 Nov 2009 17:20:26 +0000 (UTC) Received: by gxk6 with SMTP id 6so669403gxk.13 for ; Sun, 29 Nov 2009 09:20:26 -0800 (PST) MIME-Version: 1.0 Received: by 10.101.175.39 with SMTP id c39mr1248362anp.87.1259515226034; Sun, 29 Nov 2009 09:20:26 -0800 (PST) In-Reply-To: <4B11C7A1.1040801@andric.com> References: <4B11C7A1.1040801@andric.com> Date: Sun, 29 Nov 2009 18:20:26 +0100 Message-ID: <367b2c980911290920x570a3164o54fb1b61a65c8189@mail.gmail.com> From: Olivier Smedts To: Dimitry Andric Content-Type: text/plain; charset=ISO-8859-1 Cc: freebsd-fs@freebsd.org Subject: Re: raidz configuration X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 29 Nov 2009 17:20:27 -0000

2009/11/29 Dimitry Andric :
> On 2009-11-28 23:22, Wes Morgan wrote:
>> Simple question:
>>
>> 8 devices in a raidz2
>> or
>> 4 devices in a raidz x 2
>
> With the first configuration, any two drives can fail, and all data is
> still preserved.
>
> With the second configuration, if two drives fail within the same RAID
> set, you are screwed.

A raidz on top of four zfs mirrors would be better (i.e. RAID 1+0 vs RAID 0+1).

>
> So, if safety is your concern, I would definitely choose the first
> configuration. :)
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
>

--
Olivier Smedts                                           _
                                        ASCII ribbon campaign ( )
e-mail: olivier@gid0.org        - against HTML email & vCards  X
www: http://www.gid0.org        - against proprietary attachments / \

"There are only 10 kinds of people in the world:
those who understand binary,
and those who don't."

From owner-freebsd-fs@FreeBSD.ORG Sun Nov 29 21:28:14 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id C284A1065672 for ; Sun, 29 Nov 2009 21:28:14 +0000 (UTC) (envelope-from pjd@garage.freebsd.pl) Received: from mail.garage.freebsd.pl (chello089077043238.chello.pl [89.77.43.238]) by mx1.freebsd.org (Postfix) with ESMTP id 1E95A8FC19 for ; Sun, 29 Nov 2009 21:28:13 +0000 (UTC) Received: by mail.garage.freebsd.pl (Postfix, from userid 65534) id 37E9045D8D; Sun, 29 Nov 2009 20:27:33 +0100 (CET) Received: from localhost (pdawidek.wheel.pl [10.0.1.1]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mail.garage.freebsd.pl (Postfix) with ESMTP id 7476045685; Sun, 29 Nov 2009 20:27:28 +0100 (CET) Date: Sun, 29 Nov 2009 20:27:26 +0100 From: Pawel Jakub Dawidek To: Olivier Smedts Message-ID: <20091129192726.GX1567@garage.freebsd.pl> References: <4B11C7A1.1040801@andric.com> <367b2c980911290920x570a3164o54fb1b61a65c8189@mail.gmail.com> Mime-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="+te1M04ZWIfilseV" Content-Disposition: inline In-Reply-To: <367b2c980911290920x570a3164o54fb1b61a65c8189@mail.gmail.com> User-Agent: Mutt/1.4.2.3i X-PGP-Key-URL: http://people.freebsd.org/~pjd/pjd.asc X-OS: FreeBSD 9.0-CURRENT i386 X-Spam-Checker-Version: SpamAssassin 3.0.4 (2005-06-05) on mail.garage.freebsd.pl X-Spam-Level: X-Spam-Status: No, score=-5.9 required=4.5 tests=ALL_TRUSTED,BAYES_00 autolearn=ham version=3.0.4 Cc: freebsd-fs@freebsd.org Subject: Re: raidz configuration X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 29 Nov 2009 21:28:14 -0000

--+te1M04ZWIfilseV Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable

On Sun, Nov 29, 2009 at 06:20:26PM +0100, Olivier Smedts wrote:
> 2009/11/29 Dimitry Andric :
> > On 2009-11-28 23:22, Wes Morgan wrote:
> >> Simple question:
> >>
> >> 8 devices in a raidz2
> >> or
> >> 4 devices in a raidz x 2
> >
> > With the first configuration, any two drives can fail, and all data is
> > still preserved.
> >
> > With the second configuration, if two drives fail within the same RAID
> > set, you are screwed.
>
> A raidz on top of four zfs mirrors would be better (i.e. RAID 1+0 vs RAID 0+1).

This can't be configured. And raidz is the equivalent of RAID5, not RAID0.

--
Pawel Jakub Dawidek                       http://www.wheel.pl
pjd@FreeBSD.org                           http://www.FreeBSD.org
FreeBSD committer                         Am I Evil? Yes, I Am!
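For reference, the layouts discussed in this thread would be created roughly like this. This is only a sketch: the pool name "tank" and the devices da0-da7 are placeholders, and the last command is the stripe-of-mirrors (RAID 1+0-style) setup that Olivier presumably had in mind, since a raidz of mirrors cannot be expressed with zpool(8):

# zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7
  (one 8-disk raidz2 vdev: any two drives may fail)
# zpool create tank raidz da0 da1 da2 da3 raidz da4 da5 da6 da7
  (two striped 4-disk raidz vdevs: at most one failure per vdev)
# zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7
  (four striped 2-way mirrors: at most one failure per mirror pair)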
--+te1M04ZWIfilseV Content-Type: application/pgp-signature Content-Disposition: inline

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.4 (FreeBSD)

iD8DBQFLEsseForvXbEpPzQRAq6zAKCMhvjBvCvsti4fPREfB7PFNRytTACbB/SZ
YxAtG7FSjNfYkFGRe4BJHGE=
=fOxc
-----END PGP SIGNATURE-----

--+te1M04ZWIfilseV--

From owner-freebsd-fs@FreeBSD.ORG Sun Nov 29 21:44:52 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 9EB2710656A3 for ; Sun, 29 Nov 2009 21:44:52 +0000 (UTC) (envelope-from pjd@garage.freebsd.pl) Received: from mail.garage.freebsd.pl (chello089077043238.chello.pl [89.77.43.238]) by mx1.freebsd.org (Postfix) with ESMTP id DD0AA8FC13 for ; Sun, 29 Nov 2009 21:44:51 +0000 (UTC) Received: by mail.garage.freebsd.pl (Postfix, from userid 65534) id 9E13545CDD; Sun, 29 Nov 2009 20:35:41 +0100 (CET) Received: from localhost (pdawidek.wheel.pl [10.0.1.1]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mail.garage.freebsd.pl (Postfix) with ESMTP id 5A51745C89; Sun, 29 Nov 2009 20:35:36 +0100 (CET) Date: Sun, 29 Nov 2009 20:35:34 +0100 From: Pawel Jakub Dawidek To: Filipe Paternot Message-ID: <20091129193534.GA34378@garage.freebsd.pl> References: <6bd9b7940911281736j12ba1b3fs544cae3274d8d4cb@mail.gmail.com> Mime-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="u3/rZRmxL6MmkK24" Content-Disposition: inline In-Reply-To: <6bd9b7940911281736j12ba1b3fs544cae3274d8d4cb@mail.gmail.com> User-Agent: Mutt/1.4.2.3i X-PGP-Key-URL: http://people.freebsd.org/~pjd/pjd.asc X-OS: FreeBSD 9.0-CURRENT i386 X-Spam-Checker-Version: SpamAssassin 3.0.4 (2005-06-05) on mail.garage.freebsd.pl X-Spam-Level: X-Spam-Status: No, score=-5.9 required=4.5 tests=ALL_TRUSTED,BAYES_00 autolearn=ham version=3.0.4 Cc: freebsd-fs@freebsd.org Subject: Re: Issues with ZFS under remote NFS directory X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 29 Nov 2009 21:44:52 -0000

--u3/rZRmxL6MmkK24 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable

On Sat, Nov 28, 2009 at 11:36:21PM -0200, Filipe Paternot wrote:
> Hello there,
>
> I've got a pretty weird issue with a remote filesystem which is mounted on
> a given server.
> Renovatio, the NFS server, is running FreeBSD 7.2-RELEASE (32-bit).
> The Neo server is running FreeBSD 8.0-RELEASE (64-bit).
>
> The problem is this:
> I mount the pool root dir at /mnt/renovatio2. Then, when I list one of my
> dirs, it shows some content that was saved a couple of months ago for that
> same folder (the snapshot it came from has since been deleted). That is
> weird, since I'm not accessing the hidden .zfs dir, and the folder should
> be empty anyway, as the snapshot was already removed.
> If I mount the specific dir instead, it works fine.

Could you try the following on the renovatio server:

# zfs unmount pool/filipe
# ls /pool/

My guess is that the content you see when you NFS-mount /pool/ is covered by the pool/filipe file system on renovatio; that's why you can't see it there. The commands above will tell.

--
Pawel Jakub Dawidek                       http://www.wheel.pl
pjd@FreeBSD.org                           http://www.FreeBSD.org
FreeBSD committer                         Am I Evil? Yes, I Am!
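A sketch of the effect Pawel describes, using the paths from the thread: while the child file system pool/filipe is mounted, it covers whatever happens to be stored in the /pool/filipe directory of the parent dataset, and an NFS mount of /pool can end up showing that covered, stale content instead:

# zfs unmount pool/filipe
# ls /pool/filipe
  (shows the old files stored directly on the parent dataset, if any)
# zfs mount pool/filipe
  (the child file system covers those files again)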
--u3/rZRmxL6MmkK24 Content-Type: application/pgp-signature Content-Disposition: inline -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.4 (FreeBSD) iD8DBQFLEs0GForvXbEpPzQRAl/gAJkBjF/BFTRXUxXgRiCt1uDsCqqbDACg30xk 4v8RtWJ4LYMtjbXxcFurb18= =3PN5 -----END PGP SIGNATURE----- --u3/rZRmxL6MmkK24-- From owner-freebsd-fs@FreeBSD.ORG Mon Nov 30 02:18:52 2009 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 642B31065679; Mon, 30 Nov 2009 02:18:52 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 39AB08FC12; Mon, 30 Nov 2009 02:18:52 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.3/8.14.3) with ESMTP id nAU2Iq6E049034; Mon, 30 Nov 2009 02:18:52 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.3/8.14.3/Submit) id nAU2Iq85049030; Mon, 30 Nov 2009 02:18:52 GMT (envelope-from linimon) Date: Mon, 30 Nov 2009 02:18:52 GMT Message-Id: <200911300218.nAU2Iq85049030@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Cc: Subject: Re: kern/141010: [zfs] "zfs scrub" fails when backed by files in UFS2 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 30 Nov 2009 02:18:52 -0000 Old Synopsis: "zfs scrub" fails when backed by files in UFS2 New Synopsis: [zfs] "zfs scrub" fails when backed by files in UFS2 Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Mon Nov 30 02:18:31 UTC 2009 Responsible-Changed-Why: Over to maintainer(s). 
http://www.freebsd.org/cgi/query-pr.cgi?pr=141010

From owner-freebsd-fs@FreeBSD.ORG Mon Nov 30 09:45:13 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 5374B1065672 for ; Mon, 30 Nov 2009 09:45:13 +0000 (UTC) (envelope-from google@vink.pl) Received: from fg-out-1718.google.com (fg-out-1718.google.com [72.14.220.153]) by mx1.freebsd.org (Postfix) with ESMTP id E4EF78FC18 for ; Mon, 30 Nov 2009 09:45:12 +0000 (UTC) Received: by fg-out-1718.google.com with SMTP id e12so852053fga.13 for ; Mon, 30 Nov 2009 01:45:11 -0800 (PST) Received: by 10.86.198.20 with SMTP id v20mr3866258fgf.54.1259572801317; Mon, 30 Nov 2009 01:20:01 -0800 (PST) Received: from mail-fx0-f218.google.com (mail-fx0-f218.google.com [209.85.220.218]) by mx.google.com with ESMTPS id e20sm13538394fga.17.2009.11.30.01.20.00 (version=SSLv3 cipher=RC4-MD5); Mon, 30 Nov 2009 01:20:01 -0800 (PST) Received: by fxm10 with SMTP id 10so2792564fxm.14 for ; Mon, 30 Nov 2009 01:20:00 -0800 (PST) MIME-Version: 1.0 Received: by 10.223.161.215 with SMTP id s23mr669459fax.44.1259572800407; Mon, 30 Nov 2009 01:20:00 -0800 (PST) Date: Mon, 30 Nov 2009 10:20:00 +0100 Message-ID: <2ae8edf30911300120x627e42a9ha2cf003e847d4fbd@mail.gmail.com> From: Wiktor Niesiobedzki To: freebsd-fs Content-Type: text/plain; charset=UTF-8 Subject: ZFS guidelines - preparing for future storage expansion X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 30 Nov 2009 09:45:13 -0000

Hi,

I'm planning to set up a ZFS pool and am trying to get the best practices for that, so that the layout I propose will be future-proof. As far as I've read in the ZFS Best Practices guide, it is advised to use a whole disk as a vdev instead of slices/partitions, to ease administration.

I'm trying to set up a RAIDZ pool that I may want to grow some day (by replacing all disks in the pool). So I made a quick check on 8.0-RELEASE and did the following: create 3 disks, each of size 256M (da0, da1, da2), then

# zpool create tank raidz da0 da1 da2

It gave a tank of size ~750M. Then I created another 3 disks, each of size 512M (da3, da4, da5), and

# zpool replace tank da0 da3
# zpool replace tank da1 da4
# zpool replace tank da2 da5

But the size of the tank hasn't changed. It was mentioned a few times on the list that you only need to replace all of the disks to get the new storage, so do I need to do something more to get the new space used by RAIDZ?

Because otherwise I'd suggest that, on FreeBSD, it should be advised to always use a slice/partition, so that when you replace a disk with a bigger one, you can create two partitions and stripe two raidzs. But this in turn, after a few such exercises, may lead to quite a complicated layout.

Of course, the easiest way would be to set up a whole new pool and move the data, but that might be hard to do at this time, as I might be lacking a controller to connect all six devices at the same time, so I was rather thinking about replacing the disks one by one.

Is my use case the one described here: http://blogs.sun.com/ahl/entry/expand_o_matic_raid_z , as a "home user to want to increase his total storage capacity by a disk or two at a time"? Any hints for me?
Cheers,
Wiktor Niesiobedzki

From owner-freebsd-fs@FreeBSD.ORG Mon Nov 30 10:14:08 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 7C445106566B for ; Mon, 30 Nov 2009 10:14:08 +0000 (UTC) (envelope-from james-freebsd-fs2@jrv.org) Received: from mail.jrv.org (adsl-70-243-84-13.dsl.austtx.swbell.net [70.243.84.13]) by mx1.freebsd.org (Postfix) with ESMTP id C3F2F8FC15 for ; Mon, 30 Nov 2009 10:14:07 +0000 (UTC) Received: from kremvax.housenet.jrv (kremvax.housenet.jrv [192.168.3.124]) by mail.jrv.org (8.14.3/8.14.3) with ESMTP id nAUAE3Qd051566; Mon, 30 Nov 2009 04:14:04 -0600 (CST) (envelope-from james-freebsd-fs2@jrv.org) Authentication-Results: mail.jrv.org; domainkeys=pass (testing) header.from=james-freebsd-fs2@jrv.org DomainKey-Signature: a=rsa-sha1; s=enigma; d=jrv.org; c=nofws; q=dns; h=message-id:date:from:user-agent:mime-version:cc:subject: references:in-reply-to:content-type:content-transfer-encoding; b=ch2f/9HxvKxULJpn/G/e+7vns8WnbtFnWeufl83SIDdrAIStyLYkbnQ2JQNpB09tU quMxPqb3uiL8KEGVKgmzB6UGrza8SNEw+lRwbFvK//0obe7jVAKZ7cGdX9vUSKU5PSt X0CyY8K7NgT8LPf3CB0+V7NsvPYTeCBFQwpFbHo= Message-ID: <4B139AEB.8060900@jrv.org> Date: Mon, 30 Nov 2009 04:14:03 -0600 From: "James R. Van Artsdalen" User-Agent: Thunderbird 2.0.0.23 (Macintosh/20090812) MIME-Version: 1.0 References: <2ae8edf30911300120x627e42a9ha2cf003e847d4fbd@mail.gmail.com> In-Reply-To: <2ae8edf30911300120x627e42a9ha2cf003e847d4fbd@mail.gmail.com> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Cc: freebsd-fs Subject: Re: ZFS guidelines - preparing for future storage expansion X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 30 Nov 2009 10:14:08 -0000

Wiktor Niesiobedzki wrote:
> do I need to do something more to get the new space used by
> RAIDZ?

Export the pool, then import it. Adding a vdev is done right away, but increasing the size of a vdev (as you did) only happens on import.
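Applied to the test case from the original mail, the whole sequence would look like this (a sketch, reusing Wiktor's pool name "tank" and disks da0-da5):

# zpool replace tank da0 da3
# zpool replace tank da1 da4
# zpool replace tank da2 da5
# zpool export tank
# zpool import tank
# zpool list tank
  (the capacity should now reflect the larger disks)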
From owner-freebsd-fs@FreeBSD.ORG Mon Nov 30 11:06:52 2009 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 3CC9E1065676 for ; Mon, 30 Nov 2009 11:06:52 +0000 (UTC) (envelope-from owner-bugmaster@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 0647E8FC08 for ; Mon, 30 Nov 2009 11:06:52 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.3/8.14.3) with ESMTP id nAUB6pNE043408 for ; Mon, 30 Nov 2009 11:06:51 GMT (envelope-from owner-bugmaster@FreeBSD.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.3/8.14.3/Submit) id nAUB6pWd043406 for freebsd-fs@FreeBSD.org; Mon, 30 Nov 2009 11:06:51 GMT (envelope-from owner-bugmaster@FreeBSD.org) Date: Mon, 30 Nov 2009 11:06:51 GMT Message-Id: <200911301106.nAUB6pWd043406@freefall.freebsd.org> X-Authentication-Warning: freefall.freebsd.org: gnats set sender to owner-bugmaster@FreeBSD.org using -f From: FreeBSD bugmaster To: freebsd-fs@FreeBSD.org Cc: Subject: Current problem reports assigned to freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 30 Nov 2009 11:06:52 -0000

Note: to view an individual PR, use: http://www.freebsd.org/cgi/query-pr.cgi?pr=(number).

The following is a listing of current problems submitted by FreeBSD users. These represent problem reports covering all versions including experimental development code and obsolete releases.

S Tracker      Resp. Description
--------------------------------------------------------------------------------
o kern/141010  fs    [zfs] "zfs scrub" fails when backed by files in UFS2
o kern/140888  fs    [zfs] boot fail from zfs root while the pool resilveri
o kern/140853  fs    [nfs] [patch] NFSv2 remove calls fail to send error re
o sparc/140797 fs    [nfs] [panic] panic on 8.0-RC3/sparc64 as an NFS serve
o kern/140682  fs    [netgraph] [panic] random panic in netgraph
o kern/140661  fs    [zfs] /boot/loader fails to work on a GPT/ZFS-only sys
o kern/140640  fs    [zfs] snapshot crash
o kern/140433  fs    [zfs] [panic] panic while replaying ZIL after crash
o kern/140134  fs    [msdosfs] write and fsck destroy filesystem integrity
o kern/140068  fs    [smbfs] [patch] smbfs does not allow semicolon in file
o kern/139725  fs    [zfs] zdb(1) dumps core on i386 when examining zpool c
o kern/139715  fs    [zfs] vfs.numvnodes leak on busy zfs
o bin/139651   fs    [nfs] mount(8): read-only remount of NFS volume does n
o kern/139597  fs    [patch] [tmpfs] tmpfs initializes va_gen but doesn't u
o kern/139564  fs    [zfs] [panic] 8.0-RC1 - Fatal trap 12 at end of shutdo
o kern/139407  fs    [smbfs] [panic] smb mount causes system crash if remot
o kern/139363  fs    [nfs] diskless root nfs mount from non FreeBSD server
o kern/138790  fs    [zfs] ZFS ceases caching when mem demand is high
o kern/138524  fs    [msdosfs] disks and usb flashes/cards with Russian lab
o kern/138421  fs    [ufs] [patch] remove UFS label limitations
o kern/138367  fs    [tmpfs] [panic] 'panic: Assertion pages > 0 failed' wh
o kern/138202  fs    mount_msdosfs(1) see only 2Gb
o kern/138109  fs    [extfs] [patch] Minor cleanups to the sys/gnu/fs/ext2f
f kern/137037  fs    [zfs] [hang] zfs rollback on root causes FreeBSD to fr
o kern/136968  fs    [ufs] [lor] ufs/bufwait/ufs (open)
o kern/136945  fs    [ufs] [lor] filedesc structure/ufs (poll)
o kern/136944  fs    [ffs] [lor] bufwait/snaplk (fsync)
o kern/136873  fs    [ntfs] Missing directories/files on NTFS volume
o kern/136865  fs    [nfs] [patch] NFS exports atomic and on-the-fly atomic
o kern/136470  fs    [nfs] Cannot mount / in read-only, over NFS
o kern/135594  fs    [zfs] Single dataset unresponsive with Samba
o kern/135546  fs    [zfs] zfs.ko module doesn't ignore zpool.cache filenam
o kern/135469  fs    [ufs] [panic] kernel crash on md operation in ufs_dirb
o kern/135050  fs    [zfs] ZFS clears/hides disk errors on reboot
o kern/134491  fs    [zfs] Hot spares are rather cold...
o kern/133980  fs    [panic] [ffs] panic: ffs_valloc: dup alloc
o kern/133676  fs    [smbfs] [panic] umount -f'ing a vnode-based memory dis
o kern/133614  fs    [panic] panic: ffs_truncate: read-only filesystem
o kern/133174  fs    [msdosfs] [patch] msdosfs must support utf-encoded int
f kern/133150  fs    [zfs] Page fault with ZFS on 7.1-RELEASE/amd64 while w
o kern/132960  fs    [ufs] [panic] panic:ffs_blkfree: freeing free frag
o kern/132597  fs    [tmpfs] [panic] tmpfs-related panic while interrupting
o kern/132397  fs    reboot causes filesystem corruption (failure to sync b
o kern/132331  fs    [ufs] [lor] LOR ufs and syncer
o kern/132237  fs    [msdosfs] msdosfs has problems to read MSDOS Floppy
o kern/132145  fs    [panic] File System Hard Crashes
o kern/131995  fs    [nfs] Failure to mount NFSv4 server
o kern/131441  fs    [unionfs] [nullfs] unionfs and/or nullfs not combineab
o kern/131360  fs    [nfs] poor scaling behavior of the NFS server under lo
o kern/131342  fs    [nfs] mounting/unmounting of disks causes NFS to fail
o bin/131341   fs    makefs: error "Bad file descriptor" on the mount poin
o kern/130979  fs    [smbfs] [panic] boot/kernel/smbfs.ko
o kern/130920  fs    [msdosfs] cp(1) takes 100% CPU time while copying file
o kern/130229  fs    [iconv] usermount fails on fs that need iconv
o kern/130210  fs    [nullfs] Error by check nullfs
o kern/129760  fs    [nfs] after 'umount -f' of a stale NFS share FreeBSD l
o kern/129488  fs    [smbfs] Kernel "bug" when using smbfs in smbfs_smb.c:
o kern/129231  fs    [ufs] [patch] New UFS mount (norandom) option - mostly
o kern/129152  fs    [panic] non-userfriendly panic when trying to mount(8)
o kern/129059  fs    [zfs] [patch] ZFS bootloader whitelistable via WITHOUT
f kern/128829  fs    smbd(8) causes periodic panic on 7-RELEASE
o kern/127659  fs    [tmpfs] tmpfs memory leak
o kern/127420  fs    [gjournal] [panic] Journal overflow on gmirrored gjour
o kern/127029  fs    [panic] mount(8): trying to mount a write protected zi
o kern/126287  fs    [ufs] [panic] Kernel panics while mounting an UFS file
s kern/125738  fs    [zfs] [request] SHA256 acceleration in ZFS
p kern/124621  fs    [ext3] [patch] Cannot mount ext2fs partition
f bin/124424   fs    [zfs] zfs(8): zfs list -r shows strange snapshots' siz
o kern/123939  fs    [msdosfs] corrupts new files
o kern/122380  fs    [ffs] ffs_valloc:dup alloc (Soekris 4801/7.0/USB Flash
o bin/122172   fs    [fs]: amd(8) automount daemon dies on 6.3-STABLE i386,
o bin/121898   fs    [nullfs] pwd(1)/getcwd(2) fails with Permission denied
o bin/121779   fs    [ufs] snapinfo(8) (and related tools?) only work for t
o bin/121366   fs    [zfs] [patch] Automatic disk scrubbing from periodic(8
o bin/121072   fs    [smbfs] mount_smbfs(8) cannot normally convert the cha
f kern/120991  fs    [panic] [fs] [snapshot] System crashes when manipulati
o kern/120483  fs    [ntfs] [patch] NTFS filesystem locking changes
o kern/120482  fs    [ntfs] [patch] Sync style changes between NetBSD and F
f kern/119735  fs    [zfs] geli + ZFS + samba starting on boot panics 7.0-B
o kern/118912  fs    [2tb] disk sizing/geometry problem with large array
o kern/118713  fs    [minidump] [patch] Display media size required for a k
o bin/118249   fs    mv(1): moving a directory changes its mtime
o kern/118107  fs    [ntfs] [panic] Kernel panic when accessing a file at N
o bin/117315   fs    [smbfs] mount_smbfs(8) and related options can't mount
o kern/117314  fs    [ntfs] Long-filename only NTFS fs'es cause kernel pani
o kern/117158  fs    [zfs] zpool scrub causes panic if geli vdevs detach on
o bin/116980   fs    [msdosfs] [patch] mount_msdosfs(8) resets some flags f
o kern/116913  fs    [ffs] [panic] ffs_blkfree: freeing free block
p kern/116608  fs    [msdosfs] [patch] msdosfs fails to check mount options
o kern/116583  fs    [ffs] [hang] System freezes for short time when using
o kern/116170  fs    [panic] Kernel panic when mounting /tmp
o kern/115645  fs    [snapshots] [panic] lockmgr: thread 0xc4c00d80, not ex
o bin/115361   fs    [zfs] mount(8) gets into a state where it won't set/un
o kern/114955  fs    [cd9660] [patch] [request] support for mask,dirmask,ui
o kern/114847  fs    [ntfs] [patch] [request] dirmask support for NTFS ala
o kern/114676  fs    [ufs] snapshot creation panics: snapacct_ufs2: bad blo
o bin/114468   fs    [patch] [request] add -d option to umount(8) to detach
o kern/113852  fs    [smbfs] smbfs does not properly implement DFS referral
o bin/113838   fs    [patch] [request] mount(8): add support for relative p
o bin/113049   fs    [patch] [request] make quot(8) use getopt(3) and show
o kern/112658  fs    [smbfs] [patch] smbfs and caching problems (resolves b
o kern/111843  fs    [msdosfs] Long Names of files are incorrectly created
o kern/111782  fs    [ufs] dump(8) fails horribly for large filesystems
s bin/111146   fs    [2tb] fsck(8) fails on 6T filesystem
o kern/109024  fs    [msdosfs] mount_msdosfs: msdosfs_iconv: Operation not
o kern/109010  fs    [msdosfs] can't mv directory within fat32 file system
o bin/107829   fs    [2TB] fdisk(8): invalid boundary checking in fdisk / w
o kern/106030  fs    [ufs] [panic] panic in ufs from geom when a dead disk
o kern/104406  fs    [ufs] Processes get stuck in "ufs" state under persist
o kern/104133  fs    [ext2fs] EXT2FS module corrupts EXT2/3 filesystems
o kern/103035  fs    [ntfs] Directories in NTFS mounted disc images appear
o kern/101324  fs    [smbfs] smbfs sometimes not case sensitive when it's s
o kern/99290   fs    [ntfs] mount_ntfs ignorant of cluster sizes
o kern/97377   fs    [ntfs] [patch] syntax cleanup for ntfs_ihash.c
o kern/95222   fs    [iso9660] File sections on ISO9660 level 3 CDs ignored
o kern/94849   fs    [ufs] rename on UFS filesystem is not atomic
o kern/94769   fs    [ufs] Multiple file deletions on multi-snapshotted fil
o kern/94733   fs    [smbfs] smbfs may cause double unlock
o kern/93942   fs    [vfs] [patch] panic: ufs_dirbad: bad dir (patch from D
o kern/92272   fs    [ffs] [hang] Filling a filesystem while creating a sna
f kern/91568   fs    [ufs] [panic] writing to UFS/softupdates DVD media in
o kern/91134   fs    [smbfs] [patch] Preserve access and modification time
a kern/90815   fs    [smbfs] [patch] SMBFS with character conversions somet
o kern/88657   fs    [smbfs] windows client hang when browsing a samba shar
o kern/88266   fs    [smbfs] smbfs does not implement UIO_NOCOPY and sendfi
o kern/87859   fs    [smbfs] System reboot while umount smbfs.
o kern/86587   fs    [msdosfs] rm -r /PATH fails with lots of small files
o kern/85326   fs    [smbfs] [panic] saving a file via samba to an overquot
o kern/84589   fs    [2TB] 5.4-STABLE unresponsive during background fsck 2
o kern/80088   fs    [smbfs] Incorrect file time setting on NTFS mounted vi
o kern/73484   fs    [ntfs] Kernel panic when doing `ls` from the client si
o bin/73019    fs    [ufs] fsck_ufs(8) cannot alloc 607016868 bytes for ino
o kern/71774   fs    [ntfs] NTFS cannot "see" files on a WinXP filesystem
o kern/68978   fs    [panic] [ufs] crashes with failing hard disk, loose po
o kern/65920   fs    [nwfs] Mounted Netware filesystem behaves strange
o kern/65901   fs    [smbfs] [patch] smbfs fails fsx write/truncate-down/tr
o kern/61503   fs    [smbfs] mount_smbfs does not work as non-root
o kern/55617   fs    [smbfs] Accessing an nsmb-mounted drive via a smb expo
o kern/51685   fs    [hang] Unbounded inode allocation causes kernel to loc
o kern/51583   fs    [nullfs] [patch] allow to work with devices and socket
o kern/36566   fs    [smbfs] System reboot with dead smb mount and umount
o kern/18874   fs    [2TB] 32bit NFS servers export wrong negative values t

142 problems total.

From owner-freebsd-fs@FreeBSD.ORG Mon Nov 30 12:25:34 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 08EF11065692 for ; Mon, 30 Nov 2009 12:25:34 +0000 (UTC) (envelope-from google@vink.pl) Received: from mail-bw0-f213.google.com (mail-bw0-f213.google.com [209.85.218.213]) by mx1.freebsd.org (Postfix) with ESMTP id 91C698FC12 for ; Mon, 30 Nov 2009 12:25:32 +0000 (UTC) Received: by bwz5 with SMTP id 5so2463669bwz.3 for ; Mon, 30 Nov 2009 04:25:32 -0800 (PST) Received: by 10.204.26.131 with SMTP id e3mr1421462bkc.27.1259583931687; Mon, 30 Nov 2009 04:25:31 -0800 (PST) Received: from mail-fx0-f218.google.com (mail-fx0-f218.google.com [209.85.220.218]) by mx.google.com with ESMTPS id 16sm1123840bwz.11.2009.11.30.04.25.31 (version=SSLv3 cipher=RC4-MD5); Mon, 30 Nov 2009 04:25:31 -0800 (PST) Received: by fxm10 with SMTP id 10so2932702fxm.14 for ; Mon, 30 Nov 2009 04:25:30 -0800 (PST) MIME-Version: 1.0 Received: by 10.223.73.20 with SMTP id o20mr647567faj.71.1259583930435; Mon, 30 Nov 2009 04:25:30 -0800 (PST) In-Reply-To: <4B139AEB.8060900@jrv.org> References: <2ae8edf30911300120x627e42a9ha2cf003e847d4fbd@mail.gmail.com> <4B139AEB.8060900@jrv.org> Date: Mon, 30 Nov 2009 13:25:30 +0100 Message-ID: <2ae8edf30911300425g4026909bm9262f6abcf82ddcd@mail.gmail.com> From: Wiktor Niesiobedzki To: "James R. Van Artsdalen" Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Cc: freebsd-fs Subject: Re: ZFS guidelines - preparing for future storage expansion X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 30 Nov 2009 12:25:34 -0000

2009/11/30 James R. Van Artsdalen :
> Wiktor Niesiobedzki wrote:
>> do I need to do something more to get the new space used by
>> RAIDZ?
>
> Export the pool, then import it. Adding a vdev is done right away, but
> increasing the size of a vdev (as you did) only happens on import.
>

That did the trick, thanks :-) Cool thing!
:-) Cheers, Wiktor From owner-freebsd-fs@FreeBSD.ORG Mon Nov 30 16:43:49 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 46366106566B; Mon, 30 Nov 2009 16:43:49 +0000 (UTC) (envelope-from jhb@freebsd.org) Received: from cyrus.watson.org (cyrus.watson.org [65.122.17.42]) by mx1.freebsd.org (Postfix) with ESMTP id 1A8438FC13; Mon, 30 Nov 2009 16:43:49 +0000 (UTC) Received: from bigwig.baldwin.cx (66.111.2.69.static.nyinternet.net [66.111.2.69]) by cyrus.watson.org (Postfix) with ESMTPSA id CBBE846B06; Mon, 30 Nov 2009 11:43:48 -0500 (EST) Received: from jhbbsd.localnet (unknown [209.249.190.9]) by bigwig.baldwin.cx (Postfix) with ESMTPA id 272EB8A024; Mon, 30 Nov 2009 11:43:48 -0500 (EST) From: John Baldwin To: freebsd-fs@freebsd.org Date: Mon, 30 Nov 2009 11:37:08 -0500 User-Agent: KMail/1.12.1 (FreeBSD/7.2-CBSD-20091103; KDE/4.3.1; amd64; ; ) References: <200911250340.nAP3e5ud052278@freefall.freebsd.org> In-Reply-To: <200911250340.nAP3e5ud052278@freefall.freebsd.org> MIME-Version: 1.0 Content-Type: Text/Plain; charset="iso-8859-15" Content-Transfer-Encoding: 7bit Message-Id: <200911301137.08967.jhb@freebsd.org> X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.0.1 (bigwig.baldwin.cx); Mon, 30 Nov 2009 11:43:48 -0500 (EST) X-Virus-Scanned: clamav-milter 0.95.1 at bigwig.baldwin.cx X-Virus-Status: Clean X-Spam-Status: No, score=-2.5 required=4.2 tests=AWL,BAYES_00,RDNS_NONE autolearn=no version=3.2.5 X-Spam-Checker-Version: SpamAssassin 3.2.5 (2008-06-10) on bigwig.baldwin.cx Cc: Rick Macklem Subject: Re: kern/140853: [nfs] [patch] NFSv2 remove calls fail to send error replies (memory leak!) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 30 Nov 2009 16:43:49 -0000 On Tuesday 24 November 2009 10:40:05 pm linimon@freebsd.org wrote: > Old Synopsis: NFSv2 remove calls fail to send error replies (memory leak!) > New Synopsis: [nfs] [patch] NFSv2 remove calls fail to send error replies (memory leak!) > > Responsible-Changed-From-To: freebsd-bugs->freebsd-fs > Responsible-Changed-By: linimon > Responsible-Changed-When: Wed Nov 25 03:39:42 UTC 2009 > Responsible-Changed-Why: > Over to maintainer(s). I think nfsrv_link() has the same leak as well. Rick, does this look ok to you? 
Index: nfs_serv.c
===================================================================
--- nfs_serv.c  (revision 199529)
+++ nfs_serv.c  (working copy)
@@ -1810,10 +1810,9 @@
 	}
 ereply:
 	nfsm_reply(NFSX_WCCDATA(v3));
-	if (v3) {
+	if (v3)
 		nfsm_srvwcc_data(dirfor_ret, &dirfor, diraft_ret, &diraft);
-		error = 0;
-	}
+	error = 0;
 nfsmout:
 	NDFREE(&nd, NDF_ONLY_PNBUF);
 	if (nd.ni_dvp) {
@@ -2187,8 +2186,8 @@
 	if (v3) {
 		nfsm_srvpostop_attr(getret, &at);
 		nfsm_srvwcc_data(dirfor_ret, &dirfor, diraft_ret, &diraft);
-		error = 0;
 	}
+	error = 0;
 	/* fall through */

 nfsmout:

--
John Baldwin

From owner-freebsd-fs@FreeBSD.ORG Mon Nov 30 20:33:10 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 062AD106566B for ; Mon, 30 Nov 2009 20:33:10 +0000 (UTC) (envelope-from zbeeble@gmail.com) Received: from fg-out-1718.google.com (fg-out-1718.google.com [72.14.220.156]) by mx1.freebsd.org (Postfix) with ESMTP id 87E8E8FC0A for ; Mon, 30 Nov 2009 20:33:09 +0000 (UTC) Received: by fg-out-1718.google.com with SMTP id l26so1505691fgb.13 for ; Mon, 30 Nov 2009 12:33:08 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:in-reply-to:references :date:message-id:subject:from:to:cc:content-type; bh=x2yyCHCgUdA3rHLKwzpXneexlDnzeBDNJw04wt8uRYI=; b=Wm3FjDfZMXEHwcz2I1fDrtGbF20dRiXBqXIp/t33l2aQz2lA+rAJgZXTBYL1BCnM6g gawDtXRqjm0QKmIuV4nvFdT/BYZUEOBL+Ff47B4QHOawr1+3mVyaqfYfP37B8cWB7QdS QiA1F/cIL4xz8RAFoLrtP7qG7xqt9Ahq7RPfk= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; b=mcl/JRyhrFsBIqon3XAYnw9km2ZK8MNorBUNdPb3T6jBYvVuQlErqQ+xSLXry7Yrfm Io3STrbjQV9m9+fNQ3xLaPGnxYHtPMItFwAq2LnXzTi9UQktn9CZUSv31/DO2RxYoFsB RubfF+OpaooutildQxZmrTCQM13y1m5+/FK6I= MIME-Version: 1.0 Received: by 10.216.89.5 with SMTP id b5mr1528725wef.143.1259613188404; Mon, 30 Nov 2009 12:33:08 -0800 (PST) In-Reply-To: <2ae8edf30911300425g4026909bm9262f6abcf82ddcd@mail.gmail.com> References: <2ae8edf30911300120x627e42a9ha2cf003e847d4fbd@mail.gmail.com> <4B139AEB.8060900@jrv.org> <2ae8edf30911300425g4026909bm9262f6abcf82ddcd@mail.gmail.com> Date: Mon, 30 Nov 2009 15:33:08 -0500 Message-ID: <5f67a8c40911301233s46a2818at9051c4ebbacf7e25@mail.gmail.com> From: Zaphod Beeblebrox To: Wiktor Niesiobedzki Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Cc: freebsd-fs Subject: Re: ZFS guidelines - preparing for future storage expansion X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 30 Nov 2009 20:33:10 -0000

On Mon, Nov 30, 2009 at 7:25 AM, Wiktor Niesiobedzki wrote:
> 2009/11/30 James R. Van Artsdalen :
> > Wiktor Niesiobedzki wrote:
> >> do I need to do something more to get the new space used by
> >> RAIDZ?
> >
> > Export the pool, then import it. Adding a vdev is done right away, but
> > increasing the size of a vdev (as you did) only happens on import.
> >
>
> That did the trick, thanks :-) Cool thing! :-)
>

Wasn't a reboot also an option (which might disturb active NFS mounts less)?

I moved from 5x 750G to 5x 1.5T disks this way earlier this year. It takes a _long_ time. Resilvering 750G (they were about 98% full when I did this) onto the 1.5T disks took about 12 hours each.
With work and sleep and other distractions, it took most of a week to perform the upgrade. And keep in mind that while you're upgrading, you're vulnerable to data loss (no more replicas). I suppose RAIDZ2 would make that safer, but more costly. This form of upgrade is a cool feature --- in the end the "cost" of running a home RAID array is the cost of the electricity ... and I'm pretty sure the draw of the 1.5T drives is very similar to the 750G drives. I'm not sure I see this feature being used a lot in production, tho. It's a pretty high stress on the array for a long time. From owner-freebsd-fs@FreeBSD.ORG Mon Nov 30 21:08:37 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 4076C106566C for ; Mon, 30 Nov 2009 21:08:37 +0000 (UTC) (envelope-from james-freebsd-fs2@jrv.org) Received: from mail.jrv.org (adsl-70-243-84-13.dsl.austtx.swbell.net [70.243.84.13]) by mx1.freebsd.org (Postfix) with ESMTP id D202E8FC12 for ; Mon, 30 Nov 2009 21:08:36 +0000 (UTC) Received: from kremvax.housenet.jrv (kremvax.housenet.jrv [192.168.3.124]) by mail.jrv.org (8.14.3/8.14.3) with ESMTP id nAUL8ZlE067299; Mon, 30 Nov 2009 15:08:35 -0600 (CST) (envelope-from james-freebsd-fs2@jrv.org) Authentication-Results: mail.jrv.org; domainkeys=pass (testing) header.from=james-freebsd-fs2@jrv.org DomainKey-Signature: a=rsa-sha1; s=enigma; d=jrv.org; c=nofws; q=dns; h=message-id:date:from:user-agent:mime-version:to:cc:subject: references:in-reply-to:content-type:content-transfer-encoding; b=S2YirZhpEB9E/rjUIdQ+DESfdmo59TbAj6zBAxJihQ5fbT5FYKKZ1vhyBzVHP7914 ELS/v6Rpu+lSbB/2DlSoCA4o5NsmgnWFezjyMnN9iwGlrQ7NuNLeS/gCE3BFFAbZ/cg 6iBSwQdjaO3uQbm42s1wgBsL6OL9P9fW+E0I9KI= Message-ID: <4B143453.5090603@jrv.org> Date: Mon, 30 Nov 2009 15:08:35 -0600 From: "James R. Van Artsdalen" User-Agent: Thunderbird 2.0.0.23 (Macintosh/20090812) MIME-Version: 1.0 To: Zaphod Beeblebrox References: <2ae8edf30911300120x627e42a9ha2cf003e847d4fbd@mail.gmail.com> <4B139AEB.8060900@jrv.org> <2ae8edf30911300425g4026909bm9262f6abcf82ddcd@mail.gmail.com> <5f67a8c40911301233s46a2818at9051c4ebbacf7e25@mail.gmail.com> In-Reply-To: <5f67a8c40911301233s46a2818at9051c4ebbacf7e25@mail.gmail.com> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Cc: freebsd-fs Subject: Re: ZFS guidelines - preparing for future storage expansion X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 30 Nov 2009 21:08:37 -0000 Zaphod Beeblebrox wrote: > I moved from 5x 750G to 5x 1.5T disks this way earlier this year. > [...] And keep in mind that while you're upgrading, you're vulnerable > to data loss (no more replicas). This is one of the (many) reasons I prefer mirrors rather than parity (RAID-5). You can "attach" the new drive, wait for the resilver to complete, then detach the old drive - never having fewer than two drives in the mirror. And of course you can gain space in the pool as each mirror is upgraded whereas a parity group (RAIDZ) usually involves more drives. Note that the zpool(1) man page says of the "Replace" command: "Replaces old_device with new_device. This is equivalent to attaching new_device, waiting for it to resilver, and then detaching old_device". 
This is not quite true: the reads for the resilver come from all available devices if you do attach/detach, but do not come from old_device if you do "replace". This is for MIRRORs; I'm not sure how RAIDZ behaves.

From owner-freebsd-fs@FreeBSD.ORG Mon Nov 30 21:20:03 2009 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 367271065676 for ; Mon, 30 Nov 2009 21:20:03 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 0D87E8FC08 for ; Mon, 30 Nov 2009 21:20:03 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.3/8.14.3) with ESMTP id nAULK2fi075406 for ; Mon, 30 Nov 2009 21:20:02 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.3/8.14.3/Submit) id nAULK2gs075405; Mon, 30 Nov 2009 21:20:02 GMT (envelope-from gnats) Date: Mon, 30 Nov 2009 21:20:02 GMT Message-Id: <200911302120.nAULK2gs075405@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org From: Alex Keda Cc: Subject: Re: kern/134491: [zfs] Hot spares are rather cold... X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: Alex Keda List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 30 Nov 2009 21:20:03 -0000

The following reply was made to PR kern/134491; it has been noted by GNATS.

From: Alex Keda To: bug-followup@FreeBSD.org Cc: Subject: Re: kern/134491: [zfs] Hot spares are rather cold... Date: Tue, 01 Dec 2009 00:11:55 +0300

Maybe some ZFS developers can comment on this?
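Spelled out, the attach/resilver/detach mirror upgrade James recommends above would go something like this (a sketch; pool and device names are placeholders), and the mirror never drops below two intact copies:

# zpool attach tank ada0 ada8
  (adds ada8 as an extra side of the mirror that contains ada0)
# zpool status tank
  (repeat until the resilver is reported complete)
# zpool detach tank ada0
  (only now remove the old, smaller disk)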
From owner-freebsd-fs@FreeBSD.ORG Mon Nov 30 22:40:40 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id C42301065694 for ; Mon, 30 Nov 2009 22:40:40 +0000 (UTC) (envelope-from andrew@modulus.org) Received: from email.octopus.com.au (email.octopus.com.au [122.100.2.232]) by mx1.freebsd.org (Postfix) with ESMTP id D54B98FC1D for ; Mon, 30 Nov 2009 22:40:39 +0000 (UTC) Received: by email.octopus.com.au (Postfix, from userid 1002) id 8D4355CB94D; Tue, 1 Dec 2009 09:17:58 +1100 (EST) X-Spam-Checker-Version: SpamAssassin 3.2.3 (2007-08-08) on email.octopus.com.au X-Spam-Level: X-Spam-Status: No, score=-1.4 required=10.0 tests=ALL_TRUSTED autolearn=failed version=3.2.3 Received: from [220.233.52.14] (14.52.233.220.static.exetel.com.au [220.233.52.14]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) (Authenticated sender: admin@email.octopus.com.au) by email.octopus.com.au (Postfix) with ESMTP id 273385CB8BB; Tue, 1 Dec 2009 09:17:54 +1100 (EST) Message-ID: <4B14495E.7050306@modulus.org> Date: Tue, 01 Dec 2009 09:38:22 +1100 From: Andrew Snow User-Agent: Thunderbird 2.0.0.6 (X11/20070926) MIME-Version: 1.0 To: Zaphod Beeblebrox , freebsd-fs@freebsd.org References: <2ae8edf30911300120x627e42a9ha2cf003e847d4fbd@mail.gmail.com> <4B139AEB.8060900@jrv.org> <2ae8edf30911300425g4026909bm9262f6abcf82ddcd@mail.gmail.com> <5f67a8c40911301233s46a2818at9051c4ebbacf7e25@mail.gmail.com> In-Reply-To: <5f67a8c40911301233s46a2818at9051c4ebbacf7e25@mail.gmail.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Cc: Subject: Re: ZFS guidelines - preparing for future storage expansion X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 30 Nov 2009 22:40:40 -0000

Zaphod Beeblebrox wrote:
> I moved from 5x 750G to 5x 1.5T disks this way earlier this year. It takes
> a _long_ time. Resilvering 750G (they were about 98% full when I did this)
> onto the 1.5T disks took about 12 hours each.

Currently there is no "read-ahead" for scrubbing and resilvering, so it only talks to one disk at a time and proceeds using only about half the I/O capacity of your disks (or less). Read-ahead is one of the planned features for ZFS next year.

Also, when your disks are 98% or more full and you are doing any writes at all, ZFS spends a long time looking for free blocks with an inefficient algorithm. An improved "disk full" algorithm is also planned for next year.
- Andrew From owner-freebsd-fs@FreeBSD.ORG Mon Nov 30 22:56:03 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 264F0106566B for ; Mon, 30 Nov 2009 22:56:03 +0000 (UTC) (envelope-from fbsd@dannysplace.net) Received: from mail.dannysplace.net (mail.dannysplace.net [80.69.71.124]) by mx1.freebsd.org (Postfix) with ESMTP id CEFA08FC0C for ; Mon, 30 Nov 2009 22:56:02 +0000 (UTC) Received: from nas.lan ([203.206.171.212] helo=[192.168.10.10]) by mail.dannysplace.net with esmtpsa (TLSv1:AES256-SHA:256) (Exim 4.69 (FreeBSD)) (envelope-from ) id 1NFFA1-000Lbs-IM; Tue, 01 Dec 2009 08:56:02 +1000 Message-ID: <4B144D79.90806@dannysplace.net> Date: Tue, 01 Dec 2009 08:55:53 +1000 From: Danny Carroll User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv:1.9.1.1) Gecko/20090715 Thunderbird/3.0b3 MIME-Version: 1.0 To: Zaphod Beeblebrox References: <2ae8edf30911300120x627e42a9ha2cf003e847d4fbd@mail.gmail.com> <4B139AEB.8060900@jrv.org> <2ae8edf30911300425g4026909bm9262f6abcf82ddcd@mail.gmail.com> <5f67a8c40911301233s46a2818at9051c4ebbacf7e25@mail.gmail.com> In-Reply-To: <5f67a8c40911301233s46a2818at9051c4ebbacf7e25@mail.gmail.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Authenticated-User: danny X-Authenticator: plain X-Sender-Verify: SUCCEEDED (sender exists & accepts mail) X-Exim-Version: 4.69 (build at 13-Aug-2009 20:22:24) X-Date: 2009-12-01 08:56:02 X-Connected-IP: 203.206.171.212:59473 X-Message-Linecount: 36 X-Body-Linecount: 22 X-Message-Size: 1843 X-Body-Size: 1019 X-Received-Count: 1 X-Recipient-Count: 2 X-Local-Recipient-Count: 2 X-Local-Recipient-Defer-Count: 0 X-Local-Recipient-Fail-Count: 0 X-SA-Exim-Connect-IP: 203.206.171.212 X-SA-Exim-Rcpt-To: zbeeble@gmail.com, freebsd-fs@freebsd.org X-SA-Exim-Mail-From: fbsd@dannysplace.net X-Spam-Checker-Version: SpamAssassin 3.2.5 (2008-06-10) on ferrari.dannysplace.net X-Spam-Level: X-Spam-Status: No, score=-1.3 required=8.0 tests=ALL_TRUSTED,AWL autolearn=disabled version=3.2.5 X-SA-Exim-Version: 4.2 X-SA-Exim-Scanned: Yes (on mail.dannysplace.net) Cc: freebsd-fs Subject: Re: ZFS guidelines - preparing for future storage expansion X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: fbsd@dannysplace.net List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 30 Nov 2009 22:56:03 -0000 On 1/12/2009 6:33 AM, Zaphod Beeblebrox wrote: > I moved from 5x 750G to 5x 1.5T disks this way earlier this year. It takes > a _long_ time. resilvering 750g (they were about 98% full when I did this) > onto the 1.5T disks took about 12 hours each. With work and sleep and other > distractions, it took most of a week to perform the upgrade. And keep in > mind that while you're upgrading, you're vulnerable to data loss (no more > replicas). I suppose RAIDZ2 would make that safer, but more costly. > Would it be possible to mitigate the risk of data loss by marking all ZFS volumes read only? That way if you lose a disk, you could simply put back the old disk. I have no idea if this is possible, I'd imagine ZFS may not be happy to see a drive again that it was told to replace. Also, it might not be appropriate for most production systems, but if you can afford the inconvenience of not being able to write while the array is resilvering then it may be suitable for some. Just a thought.... 
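Spelled out, this untested idea might look like the following sketch (the pool name "tank" and the disks are placeholders, and whether ZFS would really accept the old disk back afterwards is exactly the open question raised above):

# zfs set readonly=on tank
  (the readonly property is inherited by tank's child datasets)
# zpool replace tank da0 da3
# zpool status tank
  (wait for the resilver to finish)
# zfs set readonly=off tank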
-D From owner-freebsd-fs@FreeBSD.ORG Tue Dec 1 00:31:52 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 207B4106566B for ; Tue, 1 Dec 2009 00:31:52 +0000 (UTC) (envelope-from james-freebsd-fs2@jrv.org) Received: from mail.jrv.org (rrcs-24-73-246-106.sw.biz.rr.com [24.73.246.106]) by mx1.freebsd.org (Postfix) with ESMTP id D2B2F8FC08 for ; Tue, 1 Dec 2009 00:31:51 +0000 (UTC) Received: from kremvax.housenet.jrv (kremvax.housenet.jrv [192.168.3.124]) by mail.jrv.org (8.14.3/8.14.3) with ESMTP id nB10Vns1071345; Mon, 30 Nov 2009 18:31:50 -0600 (CST) (envelope-from james-freebsd-fs2@jrv.org) Authentication-Results: mail.jrv.org; domainkeys=pass (testing) header.from=james-freebsd-fs2@jrv.org DomainKey-Signature: a=rsa-sha1; s=enigma; d=jrv.org; c=nofws; q=dns; h=message-id:date:from:user-agent:mime-version:to:cc:subject: references:in-reply-to:content-type:content-transfer-encoding; b=Emr7usYPCV6u2RIM4lpLVfaMo8B5bWBgB7bcDIxjQs4LcBYeTEjYzpN6IYheBigwD ZMc1hwq/cgNkR9zsyuPcci0FJJoKwyzm8W2n7K3beDU0p3eXz8n9kG1oO7dbzYnNWr/ +mOHFcmL8KwEIQ4fK+5mt0JkzGYAF3XBOKy4TVo= Message-ID: <4B1463F5.5020403@jrv.org> Date: Mon, 30 Nov 2009 18:31:49 -0600 From: "James R. Van Artsdalen" User-Agent: Thunderbird 2.0.0.23 (Macintosh/20090812) MIME-Version: 1.0 To: Andrew Snow References: <2ae8edf30911300120x627e42a9ha2cf003e847d4fbd@mail.gmail.com> <4B139AEB.8060900@jrv.org> <2ae8edf30911300425g4026909bm9262f6abcf82ddcd@mail.gmail.com> <5f67a8c40911301233s46a2818at9051c4ebbacf7e25@mail.gmail.com> <4B14495E.7050306@modulus.org> In-Reply-To: <4B14495E.7050306@modulus.org> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org Subject: Re: ZFS guidelines - preparing for future storage expansion X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 01 Dec 2009 00:31:52 -0000 Andrew Snow wrote: > Currently there is no "read-ahead" for scrubbing and resilvering, so > it only talks to one disk and at a time and proceeds using only about > half the I/O capacity of your disks (or less). Read-ahead is one of > the planned features for ZFS next year. gstat sometimes shows multiple outstanding I/O requests to a drive during a scrub. All of the disks are lit up at the same time: there's no one-disk-at-a-time. I see roughly 500 MB/sec during a scrub, which is around 50% of the theoretical bandwidth of both the disk-to-HBA links and the HBA-to-system slot in my case. I hope to be able to fix both this spring and see if I can reach gigabyte-per-second levels, especially for userland reads (I've seen 420 MB/s so far). (each vdev in my case is a 2-way mirror so 500 MB/s of disk is 250 MB/s of user data) > Also, when your disks are 98% or more full and you are doing any > writes at all ZFS spends a long time looking for free blocks with an > inefficient algorithm. An improved "disk full" algorithm is also > planned for next year. As the disk approaches 100% capacity the free space list(s) become shorter, not longer. It's fragmentation, or the need to search a long time for a large block in the right area, that is likely the problem. If you can accept the block at the head of the list there is no search at all. 
A quick snapshot during a scrub:

dT: 1.006s  w: 1.000s
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
    0    238    238  30525    4.4      0      0    0.0   34.8| ada2
    0    235    235  30016    3.1      0      0    0.0   26.3| ada3
    0    274    274  35104    3.7      0      0    0.0   35.4| ada4
    0    277    277  35485    4.0      0      0    0.0   40.5| ada5
    0    273    273  34976    2.9      0      0    0.0   29.4| ada6
    4    270    270  34474    7.2      0      0    0.0   53.4| ada7
    0    271    271  34722    3.2      0      0    0.0   32.4| ada8
    5    270    270  34410    3.4      0      0    0.0   34.0| ada9
    7    268    268  34277    5.8      0      0    0.0   43.6| ada10
    0    267    267  34213    4.3      0      0    0.0   32.1| ada11
    4    269    269  34468    5.7      0      0    0.0   41.6| ada12
    7    268    268  34277    4.7      0      0    0.0   33.4| ada13
    0    277    277  35421    5.1      0      0    0.0   36.5| ada14
    4    270    270  34595    5.4      0      0    0.0   37.5| ada15
    0    269    269  34468    6.3      0      0    0.0   43.9| ada16
    0    275    275  35167    6.2      0      0    0.0   44.9| ada17

From owner-freebsd-fs@FreeBSD.ORG Tue Dec 1 00:42:03 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id BBC3E106566C for ; Tue, 1 Dec 2009 00:42:03 +0000 (UTC) (envelope-from andrew@modulus.org) Received: from email.octopus.com.au (email.octopus.com.au [122.100.2.232]) by mx1.freebsd.org (Postfix) with ESMTP id 7921B8FC08 for ; Tue, 1 Dec 2009 00:42:03 +0000 (UTC) Received: by email.octopus.com.au (Postfix, from userid 1002) id DE2BE5CBA95; Tue, 1 Dec 2009 11:19:21 +1100 (EST) X-Spam-Checker-Version: SpamAssassin 3.2.3 (2007-08-08) on email.octopus.com.au X-Spam-Level: X-Spam-Status: No, score=-1.4 required=10.0 tests=ALL_TRUSTED autolearn=failed version=3.2.3 Received: from [10.1.50.60] (ppp121-45-161-121.lns20.syd6.internode.on.net [121.45.161.121]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) (Authenticated sender: admin@email.octopus.com.au) by email.octopus.com.au (Postfix) with ESMTP id D83CE5CB8C1; Tue, 1 Dec 2009 11:19:17 +1100 (EST) Message-ID: <4B146631.8040305@modulus.org> Date: Tue, 01 Dec 2009 11:41:21 +1100 From: Andrew Snow User-Agent: Thunderbird 2.0.0.14 (X11/20080523) MIME-Version: 1.0 To: "James R. Van Artsdalen" References: <2ae8edf30911300120x627e42a9ha2cf003e847d4fbd@mail.gmail.com> <4B139AEB.8060900@jrv.org> <2ae8edf30911300425g4026909bm9262f6abcf82ddcd@mail.gmail.com> <5f67a8c40911301233s46a2818at9051c4ebbacf7e25@mail.gmail.com> <4B14495E.7050306@modulus.org> <4B1463F5.5020403@jrv.org> In-Reply-To: <4B1463F5.5020403@jrv.org> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Cc: freebsd-fs Subject: Re: ZFS guidelines - preparing for future storage expansion X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 01 Dec 2009 00:42:03 -0000

James R. Van Artsdalen wrote:
> All of the disks are lit up at the same time: there's no
> one-disk-at-a-time.

Yeah, that is the case for large files, but I have seen it slow down a lot for highly fragmented filesystems with lots of small files.
- Andrew From owner-freebsd-fs@FreeBSD.ORG Tue Dec 1 02:27:20 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id E3152106566C for ; Tue, 1 Dec 2009 02:27:20 +0000 (UTC) (envelope-from josh@multipart-mixed.com) Received: from joshcarter.com (67-207-137-80.slicehost.net [67.207.137.80]) by mx1.freebsd.org (Postfix) with ESMTP id C12588FC17 for ; Tue, 1 Dec 2009 02:27:20 +0000 (UTC) Received: from [192.168.1.141] (dsl081-096-235.den1.dsl.speakeasy.net [64.81.96.235]) by joshcarter.com (Postfix) with ESMTPSA id 42B7FC858A; Tue, 1 Dec 2009 02:27:17 +0000 (UTC) Mime-Version: 1.0 (Apple Message framework v1077) Content-Type: text/plain; charset=us-ascii From: Josh Carter In-Reply-To: <5f67a8c40911301233s46a2818at9051c4ebbacf7e25@mail.gmail.com> Date: Mon, 30 Nov 2009 19:27:14 -0700 Content-Transfer-Encoding: quoted-printable Message-Id: <459EBAB9-483C-49B3-8B87-B3F3AEA3A03E@multipart-mixed.com> References: <2ae8edf30911300120x627e42a9ha2cf003e847d4fbd@mail.gmail.com> <4B139AEB.8060900@jrv.org> <2ae8edf30911300425g4026909bm9262f6abcf82ddcd@mail.gmail.com> <5f67a8c40911301233s46a2818at9051c4ebbacf7e25@mail.gmail.com> To: Zaphod Beeblebrox X-Mailer: Apple Mail (2.1077) Cc: freebsd-fs Subject: Re: ZFS guidelines - preparing for future storage expansion X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 01 Dec 2009 02:27:21 -0000 On Nov 30, 2009, at 1:33 PM, Zaphod Beeblebrox wrote: > This form of upgrade is a cool feature --- in the end the "cost" of running > a home RAID array is the cost of the electricity ... and I'm pretty sure the > draw of the 1.5T drives is very similar to the 750G drives. I'm not sure I > see this feature being used a lot in production, tho. It's a pretty high > stress on the array for a long time. The best solution for production arrays hinges on this not-yet-done feature, allowing the removal of a top-level vdev when there's sufficient space on other vdevs: http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=4852783 Assuming you've got several top-level vdevs, which is reasonable when you've got a lot of drives, you'd want to migrate data off a vdev, pull all the vdev's drives, and add new drives in their place. This both saves hands-on time and avoids any additional window of vulnerability to data loss. FWIW, I asked the bug's owner about the status of this feature back in April and he said "ETA into OpenSolaris this calendar year", but given that the "commit to fix" field of the bug is still blank, I consider it unlikely. One can still wish for a holiday gift.
;) -Josh From owner-freebsd-fs@FreeBSD.ORG Tue Dec 1 03:06:04 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id B02FE106566B for ; Tue, 1 Dec 2009 03:06:04 +0000 (UTC) (envelope-from rincebrain@gmail.com) Received: from mail-fx0-f218.google.com (mail-fx0-f218.google.com [209.85.220.218]) by mx1.freebsd.org (Postfix) with ESMTP id 440408FC1A for ; Tue, 1 Dec 2009 03:06:03 +0000 (UTC) Received: by fxm10 with SMTP id 10so3714367fxm.14 for ; Mon, 30 Nov 2009 19:06:03 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:in-reply-to:references :date:message-id:subject:from:to:cc:content-type; bh=sSYWAXW+CqZiNzgRi1ZjRZAft2jZiwPIwcI67x7dOCU=; b=Z3l8kwSeGE4riv8NU5EjoMelEz4ZSJRzH7gTOo1kwnnIwP0wROILazZpA/nN9Js87w 6JD/aqz0itE9Bx/4XlMRrounA20gQdZ6gL8dL1yTz+n66ZJCvxbt+3rMT4901Md11JZ/ PuO8yJtN2H9mup9ZpENDaoopPDiH7o1gNgySg= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; b=pR1ZNmQZZt4vxWvUeDAvcZlnEp6oiSeyRFeCwTMud9SDCjgBPqWWTDA5eC0t3DwRj8 TFzSaNDzOeutQ+t3cJml44K6X9aHrB5ZRMey0kOIRdskv/osb2TtLFO2I4vYc2qRu1vy 8Qxa8do3vCTKqrHFzak1LVDz8Qoc1u6F/u6N8= MIME-Version: 1.0 Received: by 10.239.144.129 with SMTP id o1mr620487hba.62.1259636763136; Mon, 30 Nov 2009 19:06:03 -0800 (PST) In-Reply-To: <459EBAB9-483C-49B3-8B87-B3F3AEA3A03E@multipart-mixed.com> References: <2ae8edf30911300120x627e42a9ha2cf003e847d4fbd@mail.gmail.com> <4B139AEB.8060900@jrv.org> <2ae8edf30911300425g4026909bm9262f6abcf82ddcd@mail.gmail.com> <5f67a8c40911301233s46a2818at9051c4ebbacf7e25@mail.gmail.com> <459EBAB9-483C-49B3-8B87-B3F3AEA3A03E@multipart-mixed.com> Date: Mon, 30 Nov 2009 22:06:03 -0500 Message-ID: <5da0588e0911301906h7d0c4867of86f6855ef44e35a@mail.gmail.com> From: Rich To: Josh Carter Content-Type: text/plain; charset=ISO-8859-1 Cc: freebsd-fs Subject: Re: ZFS guidelines - preparing for future storage expansion X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 01 Dec 2009 03:06:04 -0000 Apparently Bonwick said something about it being putback into Solaris during Q4 2009. 
http://blogs.sun.com/video/entry/kernel_conference_australia_2009_jeff - Rich From owner-freebsd-fs@FreeBSD.ORG Tue Dec 1 12:32:10 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 270BF106566C for ; Tue, 1 Dec 2009 12:32:10 +0000 (UTC) (envelope-from gerrit@pmp.uni-hannover.de) Received: from mrelay1.uni-hannover.de (mrelay1.uni-hannover.de [130.75.2.106]) by mx1.freebsd.org (Postfix) with ESMTP id 930808FC0A for ; Tue, 1 Dec 2009 12:32:09 +0000 (UTC) Received: from www.pmp.uni-hannover.de (www.pmp.uni-hannover.de [130.75.117.2]) by mrelay1.uni-hannover.de (8.14.2/8.14.2) with ESMTP id nB1CW6s1009506 for ; Tue, 1 Dec 2009 13:32:07 +0100 Received: from pmp.uni-hannover.de (arc.pmp.uni-hannover.de [130.75.117.1]) by www.pmp.uni-hannover.de (Postfix) with SMTP id 960C324 for ; Tue, 1 Dec 2009 13:32:06 +0100 (CET) Date: Tue, 1 Dec 2009 13:32:06 +0100 From: Gerrit Kühn To: freebsd-fs@freebsd.org Message-Id: <20091201133206.20754b2b.gerrit@pmp.uni-hannover.de> Organization: Albert-Einstein-Institut (MPI für Gravitationsphysik & IGP Universität Hannover) X-Mailer: Sylpheed 2.7.1 (GTK+ 2.12.11; i386-portbld-freebsd7.0) Mime-Version: 1.0 Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: 7bit X-PMX-Version: 5.5.5.374460, Antispam-Engine: 2.7.1.369594, Antispam-Data: 2009.12.1.122120 Subject: zfs/zpool upgrade X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 01 Dec 2009 12:32:10 -0000 Hi all, I have one question concerning zfs and zpool upgrades: Some time ago I saw some postings here with procedures to upgrade to newer zpool and zfs versions. These contained exporting and importing the pool (and upgrading in-between, of course :-). Is the export strictly necessary? I have no machine where I could try it right now, but I think I did use zpool upgrade on imported and online pools before without noticing any problems.
cu Gerrit From owner-freebsd-fs@FreeBSD.ORG Tue Dec 1 16:20:05 2009 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id EC5771065670 for ; Tue, 1 Dec 2009 16:20:05 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id A5E5D8FC1C for ; Tue, 1 Dec 2009 16:20:05 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.3/8.14.3) with ESMTP id nB1GK5e7096830 for ; Tue, 1 Dec 2009 16:20:05 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.3/8.14.3/Submit) id nB1GK5fR096829; Tue, 1 Dec 2009 16:20:05 GMT (envelope-from gnats) Date: Tue, 1 Dec 2009 16:20:05 GMT Message-Id: <200912011620.nB1GK5fR096829@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org From: Andrey Simonenko Cc: Subject: Re: kern/136865: NFS exports atomic and on-the-fly atomic updates X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: Andrey Simonenko List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 01 Dec 2009 16:20:06 -0000 The following reply was made to PR kern/136865; it has been noted by GNATS. From: Andrey Simonenko To: bug-followup@FreeBSD.org Cc: Subject: Re: kern/136865: NFS exports atomic and on-the-fly atomic updates Date: Tue, 1 Dec 2009 18:18:57 +0200 There were several updates with improvements and corrections. The most noticeable change is the addition of three new options, -no_nfsv2, -no_nfsv3 and -no_nfsv4, that allow disabling a particular NFS version globally, per file system and/or per address specification. The -no_nfsv2 and -no_nfsv3 options do not completely disable the MOUNT protocol, so there are also -no_mnt_dump and -no_mnt_export options that allow disabling the MOUNT protocol's DUMP and EXPORT procedures respectively.
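For illustration, a hypothetical /etc/exports entry using these options -- assuming the patch keeps the usual one-export-per-line exports(5) syntax; the path and host below are made up:

  /export/data -maproot=root -no_nfsv2 -no_mnt_dump client.example.org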
From owner-freebsd-fs@FreeBSD.ORG Tue Dec 1 16:29:37 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id D36171065789 for ; Tue, 1 Dec 2009 16:29:37 +0000 (UTC) (envelope-from zbeeble@gmail.com) Received: from mail-fx0-f218.google.com (mail-fx0-f218.google.com [209.85.220.218]) by mx1.freebsd.org (Postfix) with ESMTP id 6878F8FC0A for ; Tue, 1 Dec 2009 16:29:37 +0000 (UTC) Received: by fxm10 with SMTP id 10so4249285fxm.14 for ; Tue, 01 Dec 2009 08:29:36 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:in-reply-to:references :date:message-id:subject:from:to:cc:content-type; bh=h/csBBOHDelq2FZWyo6q7ShTnILnbctQDG0CcR4uLwE=; b=hVYrZ8AaQgGUFat5kMc6rrDBM1nqVEzZQAV8Gpgc1JsojZfb3BiJIcacXpwPUnFGzf hvn+2W8DAC/EEPniAU87/aOgKxQ10y96OZlkem4P9Jl1qdYOAo9fib0c9N4NSD2cUMyC ZvecevV/RE4At+h/kOqVSk1aOG7ExvOqZAfKw= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; b=Xh6URbG5oEAyQ3YvzErGkv2D8CJJvfsimk0F4vE7vn2KABywPO+iJ9d7Vij1UAvPl1 002XW904p+173MBgZQSRX4Zg1mAX7LAhdPIVi+2IUXLPNQJCXUFr7bTTbOk4BoG4qhlc Alln6/HTmpHaY3AU/K44CgyISEYQnR4zGEpfE= MIME-Version: 1.0 Received: by 10.216.88.18 with SMTP id z18mr24631wee.78.1259684975521; Tue, 01 Dec 2009 08:29:35 -0800 (PST) In-Reply-To: <20091201133206.20754b2b.gerrit@pmp.uni-hannover.de> References: <20091201133206.20754b2b.gerrit@pmp.uni-hannover.de> Date: Tue, 1 Dec 2009 11:29:35 -0500 Message-ID: <5f67a8c40912010829u1cc99b57ubde368413a8e53dd@mail.gmail.com> From: Zaphod Beeblebrox To: Gerrit Kühn Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Cc: freebsd-fs@freebsd.org Subject: Re: zfs/zpool upgrade X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 01 Dec 2009 16:29:37 -0000 2009/12/1 Gerrit Kühn > > I have one question concerning zfs and zpool upgrades: > > Some time ago I saw some postings here with procedures to upgrade to newer > zpool and zfs versions. These contained exporting and importing the pool > (and upgrading in-between, of course :-). > Is the export strictly necessary? I have no machine where I could try it > right now, but I think I did use zpool upgrade on imported and online > pools before without noticing any problems. > I upgraded my ZFS pool and all that was required was "zpool upgrade" followed by one "zfs upgrade" for each filesystem on the pool. No import/export was required.
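For reference, the in-place sequence is minimal (a sketch; the pool name is hypothetical):

  # zpool upgrade tank
  # zfs upgrade -r tank

The -r flag upgrades every filesystem under the named one, so the per-filesystem step collapses into a single command.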
From owner-freebsd-fs@FreeBSD.ORG Tue Dec 1 17:10:03 2009 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 10FC51065670 for ; Tue, 1 Dec 2009 17:10:03 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id DAEDF8FC12 for ; Tue, 1 Dec 2009 17:10:02 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.3/8.14.3) with ESMTP id nB1HA2PL039275 for ; Tue, 1 Dec 2009 17:10:02 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.3/8.14.3/Submit) id nB1HA2uL039274; Tue, 1 Dec 2009 17:10:02 GMT (envelope-from gnats) Date: Tue, 1 Dec 2009 17:10:02 GMT Message-Id: <200912011710.nB1HA2uL039274@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org From: Jaakko Heinonen Cc: Subject: Re: kern/133980: [panic] [ffs] panic: ffs_valloc: dup alloc X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: Jaakko Heinonen List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 01 Dec 2009 17:10:03 -0000 The following reply was made to PR kern/133980; it has been noted by GNATS. From: Jaakko Heinonen To: bug-followup@FreeBSD.org Cc: Subject: Re: kern/133980: [panic] [ffs] panic: ffs_valloc: dup alloc Date: Tue, 1 Dec 2009 19:01:11 +0200 Here is a link to Bruce Evans' follow-up which didn't make it to the audit-trail: http://docs.freebsd.org/cgi/mid.cgi?20090508120355.S1497 From owner-freebsd-fs@FreeBSD.ORG Wed Dec 2 04:55:39 2009 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 43764106566B; Wed, 2 Dec 2009 04:55:39 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 1AC318FC08; Wed, 2 Dec 2009 04:55:39 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.3/8.14.3) with ESMTP id nB24tcsU052979; Wed, 2 Dec 2009 04:55:38 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.3/8.14.3/Submit) id nB24tcT7052975; Wed, 2 Dec 2009 04:55:38 GMT (envelope-from linimon) Date: Wed, 2 Dec 2009 04:55:38 GMT Message-Id: <200912020455.nB24tcT7052975@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Cc: Subject: Re: kern/141091: [patch] [nullfs] fix panics with DIAGNOSTIC enabled X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 02 Dec 2009 04:55:39 -0000 Synopsis: [patch] [nullfs] fix panics with DIAGNOSTIC enabled Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Wed Dec 2 04:55:31 UTC 2009 Responsible-Changed-Why: Over to maintainer(s). 
http://www.freebsd.org/cgi/query-pr.cgi?pr=141091 From owner-freebsd-fs@FreeBSD.ORG Wed Dec 2 10:23:43 2009 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 10F31106566C; Wed, 2 Dec 2009 10:23:43 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id DC4A38FC0C; Wed, 2 Dec 2009 10:23:42 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.3/8.14.3) with ESMTP id nB2ANgQk079277; Wed, 2 Dec 2009 10:23:42 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.3/8.14.3/Submit) id nB2ANgZX079273; Wed, 2 Dec 2009 10:23:42 GMT (envelope-from linimon) Date: Wed, 2 Dec 2009 10:23:42 GMT Message-Id: <200912021023.nB2ANgZX079273@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Cc: Subject: Re: kern/141086: [nfs] [panic] panic("nfs: bioread, not dir") on FreeBSD 7.2-STABLE X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 02 Dec 2009 10:23:43 -0000 Old Synopsis: panic("nfs: bioread, not dir") on FreeBSD 7.2-STABLE New Synopsis: [nfs] [panic] panic("nfs: bioread, not dir") on FreeBSD 7.2-STABLE Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Wed Dec 2 10:23:29 UTC 2009 Responsible-Changed-Why: Over to maintainer(s). http://www.freebsd.org/cgi/query-pr.cgi?pr=141086 From owner-freebsd-fs@FreeBSD.ORG Wed Dec 2 16:53:07 2009 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id A757B106566B for ; Wed, 2 Dec 2009 16:53:07 +0000 (UTC) (envelope-from lists@mschuette.name) Received: from mail.asta.uni-potsdam.de (mail.asta.uni-potsdam.de [IPv6:2001:638:807:3a:20d:56ff:fefd:1183]) by mx1.freebsd.org (Postfix) with ESMTP id 37E398FC16 for ; Wed, 2 Dec 2009 16:53:07 +0000 (UTC) Received: from localhost (mail.asta.uni-potsdam.de [141.89.58.198]) by mail.asta.uni-potsdam.de (Postfix) with ESMTP id 35751502448 for ; Wed, 2 Dec 2009 17:53:06 +0100 (CET) X-Virus-Scanned: on mail at asta.uni-potsdam.de Received: from mail.asta.uni-potsdam.de ([141.89.58.198]) by localhost (mail.asta.uni-potsdam.de [141.89.58.198]) (amavisd-new, port 10024) with ESMTP id YPhDuIXdN6Ij for ; Wed, 2 Dec 2009 17:52:40 +0100 (CET) Received: from dagny.mschuette.name (cl-485.dus-01.de.sixxs.net [IPv6:2a01:198:200:1e4::2]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client CN "Martin Schuette", Issuer "AStA-CA" (verified OK)) by mail.asta.uni-potsdam.de (Postfix) with ESMTPSA id B5017502437 for ; Wed, 2 Dec 2009 17:52:40 +0100 (CET) Message-ID: <4B169B57.1070603@mschuette.name> Date: Wed, 02 Dec 2009 17:52:39 +0100 From: =?UTF-8?B?TWFydGluIFNjaMO8dHRl?= User-Agent: Thunderbird 2.0.0.23 (X11/20090908) MIME-Version: 1.0 To: freebsd-fs@FreeBSD.org References: <4B09EDB2.7020002@mschuette.name> <20091124132357.GA1941@tops.skynet.lt> In-Reply-To: <20091124132357.GA1941@tops.skynet.lt> X-Enigmail-Version: 0.95.7 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 7bit Cc: Subject: 
Re: [nullfs] [panic] null with unref'ed lowervp X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 02 Dec 2009 16:53:07 -0000 Gleb Kurtsou wrote: > In my understanding null_checkvp assumptions doesn't hold in null_lock > and null_unlock. So I'd suggest you running without DIAGNOSTIC or try > attached patch instead. Thanks. I installed the patch now. Because I already reduced the number of nullmounts I cannot really 'test' it but I will write in case I see the issue again. -- Martin From owner-freebsd-fs@FreeBSD.ORG Wed Dec 2 21:11:32 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id F09171065679 for ; Wed, 2 Dec 2009 21:11:31 +0000 (UTC) (envelope-from kevin@your.org) Received: from mail.your.org (chi02.mail.your.org [204.9.55.23]) by mx1.freebsd.org (Postfix) with ESMTP id B71F88FC14 for ; Wed, 2 Dec 2009 21:11:31 +0000 (UTC) Received: from mail.your.org (chi02.mail.your.org [204.9.55.23]) by mail.your.org (Postfix) with ESMTP id 221991806A6F for ; Wed, 2 Dec 2009 20:55:24 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha1; c=relaxed; d=your.org; h=from :content-type:content-transfer-encoding:subject:date:message-id :to:mime-version; s=selector1; bh=7tg1p5XR2M9gxbTVFs2SrAYQ+Ls=; b=o6oceVUY/VIJqJzl3BvSru9D0wSXIU0ykIDWQ4xMKK5nWdaWbZ/ApqRyEKUVX 2GY8EVfocvTRN5FjGPUPKOssSja/2RvzKuhx6b3hF5b6huPFLN9RN5EbKEdlf3i4 5mtgRTSGpO48+8IikTtK2y9xrcJg7gOoNvN0DXebrtW78I= DomainKey-Signature: a=rsa-sha1; c=nofws; d=your.org; h=from:content-type :content-transfer-encoding:subject:date:message-id:to: mime-version; q=dns; s=selector1; b=b881oy/Z5C6Y63i+7nmK3GJhWuYP 1M3PMZBmO4FLQAO3ymHY3Qh1OxkaWGAUzTqjMO3bv5vfPP6iTZ2LPjbzQcYWd7D2 ACaRyki/9D8jm/twJaMJYtYsBjKZHovb2GNAQw4l86YW3PxiEpEG702qaLAkrP2R dHSY1KayP9xwwWI= Received: from vpn177.ord02.your.org (vpn177.ord02.your.org [204.9.55.177]) by mail.your.org (Postfix) with ESMTPA id BCB8A1806A6D for ; Wed, 2 Dec 2009 20:55:23 +0000 (UTC) From: Kevin Content-Type: text/plain; charset=us-ascii; format=flowed; delsp=yes Content-Transfer-Encoding: 7bit Date: Wed, 2 Dec 2009 14:55:23 -0600 Message-Id: <30582AF2-B1E8-4C07-A487-C220845963D2@your.org> To: freebsd-fs@freebsd.org Mime-Version: 1.0 (Apple Message framework v1076) X-Mailer: Apple Mail (2.1076) Subject: "zfs receive" lock time X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 02 Dec 2009 21:11:32 -0000 I have two very very fast systems (12-disk 15krpm raid array, 16 cores, etc). I'm using zfs send/receive to replicate a zfs volume from the "master" box to the "slave" box. Every minute, the master takes a new snapshot, then uses "send -i" to send an incremental snapshot to the slave. Normally, no files are changed during the minute so the operation is very fast (<1 second, and most of that is ssh negotiation time). If the slave is completely idle, "zfs receive" takes a fraction of a second. If the slave has been very busy (lots of read activity, no writes - the slave has everything mounted read only), suddenly "zfs receive" can take 30 seconds or more to complete, the whole time it has the filesystem locked. 
For example, I'd see:

 49345 root   1  76   0 13600K  1956K zio->i   9   0:01  1.37% zfs
 48910 www    1  46   0 36700K 21932K rrl->r   3   0:24  0.00% lighttpd
 48913 www    1  46   0 41820K 26108K rrl->r   2   0:24  0.00% lighttpd
 48912 www    1  46   0 37724K 23484K rrl->r   0   0:24  0.00% lighttpd
 48911 www    1  46   0 41820K 26460K rrl->r  10   0:23  0.00% lighttpd
 48909 www    1  46   0 39772K 24488K rrl->r   5   0:22  0.00% lighttpd
 48908 www    1  46   0 36700K 21460K rrl->r  14   0:19  0.00% lighttpd
 48907 www    1  45   0 30556K 16216K rrl->r  13   0:14  0.00% lighttpd
 48906 www    1  44   0 26460K 11452K rrl->r   6   0:06  0.00% lighttpd

At first, I thought it was possibly cache pressure... when the system was busy, whatever data necessary to create a new snapshot was getting pushed out of the cache so it had to be re-read. I increased arc_max and arc_meta_limit to very high values, and it seemed to have no effect, even when arc_meta_used was far below arc_meta_limit. Disabling cache flushes had no impact. Disabling zil cut the time in half, but it's still too long for this application. ktrace on the "zfs receive" shows:

 1062 zfs      0.000024 CALL  ioctl(0x3,0xcc285a11 ,0x7fffffffa320)
 1062 zfs      0.000081 RET   ioctl 0
 1062 zfs      0.000058 CALL  ioctl(0x3,0xcc285a05 ,0x7fffffffa2f0)
 1062 zfs      0.000037 RET   ioctl 0
 1062 zfs      0.000019 CALL  ioctl(0x3,0xcc285a11 ,0x7fffffffa320)
 1062 zfs      0.000055 RET   ioctl 0
 1062 zfs      0.000031 CALL  ioctl(0x3,0xcc285a11 ,0x7fffffff9f00)
 1062 zfs      0.000053 RET   ioctl 0
 1062 zfs      0.000020 CALL  ioctl(0x3,0xcc285a1c ,0x7fffffffc930)
 1062 zfs     24.837084 RET   ioctl 0
 1062 zfs      0.000028 CALL  ioctl(0x3,0xcc285a11 ,0x7fffffff9f00)
 1062 zfs      0.000074 RET   ioctl 0
 1062 zfs      0.000037 CALL  close(0x6)
 1062 zfs      0.000006 RET   close 0
 1062 zfs      0.000007 CALL  close(0x3)
 1062 zfs      0.000005 RET   close 0

The 24 second call to 0xcc285a1c is ZFS_IOC_RECV, so whatever is going on is in the kernel, not a delay in getting the kernel any data. "systat" is showing that the drives are 100% busy during the operation, so it's obviously doing something. :) Does anyone know what "zfs receive" is doing while it has everything locked like this, and why a lot of read activity beforehand would drastically affect the performance of doing this?
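A trace like the one above can be captured along these lines (a sketch; the dataset and file names are hypothetical):

  # ktrace -f zfs.ktr zfs receive -d tank < incr.snap
  # kdump -R -f zfs.ktr | egrep 'CALL|RET'

kdump -R prints each timestamp relative to the previous entry, which is what makes the long ZFS_IOC_RECV ioctl stand out.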
-- Kevin From owner-freebsd-fs@FreeBSD.ORG Thu Dec 3 08:38:18 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 5FB3E1065672 for ; Thu, 3 Dec 2009 08:38:18 +0000 (UTC) (envelope-from gallasch@free.de) Received: from smtp.free.de (smtp.free.de [91.204.6.103]) by mx1.freebsd.org (Postfix) with ESMTP id AA4788FC0A for ; Thu, 3 Dec 2009 08:38:17 +0000 (UTC) Received: (qmail 95512 invoked from network); 3 Dec 2009 09:38:15 +0100 Received: from smtp.free.de (HELO orwell.free.de) (gallasch@free.de@[91.204.4.103]) (envelope-sender ) by smtp.free.de (qmail-ldap-1.03) with AES128-SHA encrypted SMTP for ; 3 Dec 2009 09:38:15 +0100 Date: Thu, 3 Dec 2009 09:38:09 +0100 From: Kai Gallasch To: freebsd-fs@freebsd.org Message-ID: <20091203093809.3d54ea2e@orwell.free.de> X-Mailer: Claws Mail 3.7.0 (GTK+ 2.18.2; powerpc-apple-darwin9.7.0) X-Face: 7"x0zA5=*cXGZw-xjU<">'+!3(KXTUXZVLD42KVN{'go[UQr"Mc.e(XW92N8plZ(9x.{x; I<|95e+b&GH-36\15F~L$YD*Y +u}o&KV?6.%"mJIkaY3G>BKNt`1|Y+%K1P4t; 47D65&(Y7h5Ll-[ltkhamx.-; ,jggK'}oMpUgEHFG YQ"9oXKAl>!d,J}T{)@uxvfu?YFWC*\~h+,^f Mime-Version: 1.0 Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: 7bit Subject: questions using zfs on raid controllers without jbod option X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 03 Dec 2009 08:38:18 -0000 Hi list. What's the best way to deploy zfs on a server with builtin raid controller and missing JBOD functionality? I am currently testing a hp/compaq proliant server with Battery Backed SmartArray P400 controller (ciss) and 5 sas disks which I use for a raidz1 pool. What I did was to create a raid0 array on the controller for each disk, with raid0 chunksize set to 32K (Those raid0 drives show up as da2-da6 in FreeBSD) and used them for a raidz1 pool. Following zpool iostat I can see that most of the time there are no continuous writes; most of the copied data is written in spikes of write operations. My guess is that this behaviour is caching related and that it might be caused by zfs-arc and raid-controller cache not playing too well together. questions: "raid0 drives": - What's the best chunksize for a single raid0 drive that is used as a device for a pool? (I use 32K) - Should the write cache on the physical disks that are used as raid0 drives for zfs be enabled, if the raid controller has a battery backup unit? (I enabled the disk write cache for all disks) raid controller cache: My current settings for the raid controller cache are: "cache 50% reads and 50% writes" - Does it make sense to have caching of read- and write-ops enabled with this setup? I wonder: Shouldn't it be the job of the zfs arc to do the caching? - Does zfs prefetch make any sense if your raid controller already caches read operations? Cheers, Kai.
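For reference, once the controller exposes each disk as its own single-drive raid0 volume, the pool itself is created the usual way -- a sketch reusing the da2-da6 device names, with a hypothetical pool name:

  # zpool create tank raidz1 da2 da3 da4 da5 da6
  # zpool status tank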
From owner-freebsd-fs@FreeBSD.ORG Thu Dec 3 09:18:35 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 3EE551065694 for ; Thu, 3 Dec 2009 09:18:35 +0000 (UTC) (envelope-from phoemix@harmless.hu) Received: from marvin.harmless.hu (marvin.harmless.hu [195.56.55.204]) by mx1.freebsd.org (Postfix) with ESMTP id 004818FC13 for ; Thu, 3 Dec 2009 09:18:34 +0000 (UTC) Received: from [217.150.130.134] (helo=unknown) by marvin.harmless.hu with esmtpsa (TLSv1:AES128-SHA:128) (Exim 4.69 (FreeBSD)) (envelope-from ) id 1NG7ZV-000LXx-TP; Thu, 03 Dec 2009 10:01:57 +0100 Date: Thu, 3 Dec 2009 10:01:52 +0100 From: Gergely CZUCZY To: Kai Gallasch Message-ID: <20091203100152.00006e1f@unknown> In-Reply-To: <20091203093809.3d54ea2e@orwell.free.de> References: <20091203093809.3d54ea2e@orwell.free.de> Organization: Harmless Digital Bt X-Mailer: Claws Mail 3.7.1 (GTK+ 2.16.0; i586-pc-mingw32msvc) Mime-Version: 1.0 Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org Subject: Re: questions using zfs on raid controllers without jbod option X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 03 Dec 2009 09:18:35 -0000 On Thu, 3 Dec 2009 09:38:09 +0100 Kai Gallasch wrote: > > Hi list. Hello, > > What's the best way to deploy zfs on a server with builtin raid > controller and missing JBOD functionality? > > I am currently testing a hp/compaq proliant server with Battery Backed > SmartArray P400 controller (ciss) and 5 sas disks which I use for a > raidz1 pool. > > What I did was to create a raid0 array on the controller for each > disk, with raid0 chunksize set to 32K (Those raid0 drives show up as > da2-da6 in FreeBSD) and used them for a raidz1 pool. > > Following zpool iostat I can see that most of the time there are no > continuous writes; most of the copied data is written in spikes of > write operations. My guess is that this behaviour is caching related > and that it might be caused by zfs-arc and raid-controller cache not > playing too well together. I also have such a controller, and I created a raid0 for each of my disks as well, so I've got 14 raid0s on the SmartArray. Thinking about it, the stripe size of a raid0 speaks for itself: it's the size in which data is striped across the drives. When you've got a single drive in a raid0, the data is "striped" across that single drive only; that is, the data is laid out contiguously on the drive and no effective striping is done. In my opinion this makes the parameter pointless for a single-drive raid0 array. > > questions: > > "raid0 drives": > > - What's the best chunksize for a single raid0 drive that is used as a > device for a pool? (I use 32K) I don't really think this makes any difference in this configuration, as I have noted above. > > - Should the write cache on the physical disks that are used as raid0 > drives for zfs be enabled, if the raid controller has a battery > backup unit? (I enabled the disk write cache for all disks) Both caches are working. A Hungarian guy did some testing on such a controller as well, and next to the ARC the controller's own cache also had a nice effect on performance.
> > raid controller cache: > > My current settings for the raid controller cache are: "cache 50% > reads and 50% writes" > > - Does it make sense to have caching of read- and write-ops enabled > with this setup? I wonder: Shouldn't it be the job of the zfs arc to > do the caching? > > - Does zfs prefetch make any sense if your raid controller already > caches read operations? > > > Cheers, > Kai. > > > > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" -- Sincerely, Gergely CZUCZY Harmless Digital Bt +36-30-9702963 From owner-freebsd-fs@FreeBSD.ORG Thu Dec 3 17:00:23 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 0DA1D106568F for ; Thu, 3 Dec 2009 17:00:23 +0000 (UTC) (envelope-from josh@multipart-mixed.com) Received: from joshcarter.com (67-207-137-80.slicehost.net [67.207.137.80]) by mx1.freebsd.org (Postfix) with ESMTP id D6B248FC1B for ; Thu, 3 Dec 2009 17:00:22 +0000 (UTC) Received: from [192.168.3.53] (unknown [63.172.79.253]) by joshcarter.com (Postfix) with ESMTPSA id 893B6C85F4; Thu, 3 Dec 2009 17:00:22 +0000 (UTC) Content-Type: text/plain; charset=us-ascii Mime-Version: 1.0 (Apple Message framework v1077) From: Josh Carter In-Reply-To: <20091203093809.3d54ea2e@orwell.free.de> Date: Thu, 3 Dec 2009 10:00:21 -0700 Content-Transfer-Encoding: quoted-printable Message-Id: <661F4A80-846F-44B4-9FA9-E0E630B984B3@multipart-mixed.com> References: <20091203093809.3d54ea2e@orwell.free.de> To: Kai Gallasch , freebsd-fs X-Mailer: Apple Mail (2.1077) Cc: Subject: Re: questions using zfs on raid controllers without jbod option X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 03 Dec 2009 17:00:23 -0000 Kai, Does your controller have the option of creating a "volume" rather than a RAID0? On Adaptec and LSI cards I've tested, they've had the option of creating a simple catenated volume of disks, thus bypassing any re-chunking of data. I created one volume per drive and performance was on-par with using a non-RAID card. (As a side note, ZFS could push the drives harder as separate volumes than the RAID card could push the drives using the hardware's RAID controller.) The spikes you see in write performance are normal. ZFS gathers up individual writes and commits them to disk as transactions; when a transaction flushes you see the spike in iostat. As for caching, I'd go ahead and turn on write caching on the RAID card if you've got a battery. To use write caching in ZFS effectively (i.e. with the ZIL) you need a very fast write device or you'll slow the system down. STEC Zeus solid-state drives make good ZIL devices but they're super-expensive. I would let ZFS do its own caching on the read side. Best regards, Josh On Dec 3, 2009, at 1:38 AM, Kai Gallasch wrote: > > Hi list. > > What's the best way to deploy zfs on a server with builtin raid > controller and missing JBOD functionality? > > I am currently testing a hp/compaq proliant server with Battery Backed > SmartArray P400 controller (ciss) and 5 sas disks which I use for a > raidz1 pool. > > What I did was to create a raid0 array on the controller for each disk, > with raid0 chunksize set to 32K (Those raid0 drives show up as da2-da6 > in FreeBSD) and used them for a raidz1 pool. > > Following zpool iostat I can see that most of the time there are no > continuous writes; most of the copied data is written in spikes of > write operations. My guess is that this behaviour is caching related > and that it might be caused by zfs-arc and raid-controller cache not > playing too well together. > > questions: > > "raid0 drives": > > - What's the best chunksize for a single raid0 drive that is used as a > device for a pool? (I use 32K) > > - Should the write cache on the physical disks that are used as raid0 > drives for zfs be enabled, if the raid controller has a battery > backup unit? (I enabled the disk write cache for all disks) > > raid controller cache: > > My current settings for the raid controller cache are: "cache 50% reads > and 50% writes" > > - Does it make sense to have caching of read- and write-ops enabled > with this setup? I wonder: Shouldn't it be the job of the zfs arc to > do the caching? > > - Does zfs prefetch make any sense if your raid controller already > caches read operations? > > > Cheers, > Kai. > > > > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Thu Dec 3 21:00:06 2009 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id E7ACA106568D for ; Thu, 3 Dec 2009 21:00:06 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id D6F938FC13 for ; Thu, 3 Dec 2009 21:00:06 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.3/8.14.3) with ESMTP id nB3L06LF070536 for ; Thu, 3 Dec 2009 21:00:06 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.3/8.14.3/Submit) id nB3L06Ct070535; Thu, 3 Dec 2009 21:00:06 GMT (envelope-from gnats) Date: Thu, 3 Dec 2009 21:00:06 GMT Message-Id: <200912032100.nB3L06Ct070535@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org From: dfilter@FreeBSD.ORG (dfilter service) Cc: Subject: Re: kern/140853: commit references a PR X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: dfilter service List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 03 Dec 2009 21:00:07 -0000 The following reply was made to PR kern/140853; it has been noted by GNATS. From: dfilter@FreeBSD.ORG (dfilter service) To: bug-followup@FreeBSD.org Cc: Subject: Re: kern/140853: commit references a PR Date: Thu, 3 Dec 2009 20:59:36 +0000 (UTC) Author: jhb Date: Thu Dec 3 20:59:28 2009 New Revision: 200084 URL: http://svn.freebsd.org/changeset/base/200084 Log: Properly return an error reply if an NFS remove or link operation fails.
Previously the failing operation would allocate an mbuf and construct an error reply, but because the function did not return 0, the NFS server assumed it had failed to generate a reply and would leak the reply mbuf as well as not sending the reply to the NFS client.

 PR:            kern/140853
 Submitted by:  Ted Faber faber at isi edu (remove)
 Reviewed by:   rmacklem (remove)
 MFC after:     1 week

Modified: head/sys/nfsserver/nfs_serv.c

Modified: head/sys/nfsserver/nfs_serv.c
==============================================================================
--- head/sys/nfsserver/nfs_serv.c	Thu Dec  3 20:55:09 2009	(r200083)
+++ head/sys/nfsserver/nfs_serv.c	Thu Dec  3 20:59:28 2009	(r200084)
@@ -1810,10 +1810,9 @@ out:
 	}
 ereply:
 	nfsm_reply(NFSX_WCCDATA(v3));
-	if (v3) {
+	if (v3)
 		nfsm_srvwcc_data(dirfor_ret, &dirfor, diraft_ret, &diraft);
-		error = 0;
-	}
+	error = 0;
 nfsmout:
 	NDFREE(&nd, NDF_ONLY_PNBUF);
 	if (nd.ni_dvp) {
@@ -2187,8 +2186,8 @@ ereply:
 	if (v3) {
 		nfsm_srvpostop_attr(getret, &at);
 		nfsm_srvwcc_data(dirfor_ret, &dirfor, diraft_ret, &diraft);
-		error = 0;
 	}
+	error = 0;
 	/* fall through */
 nfsmout:

_______________________________________________ svn-src-all@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/svn-src-all To unsubscribe, send any mail to "svn-src-all-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Fri Dec 4 02:20:02 2009 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id F0C14106566B for ; Fri, 4 Dec 2009 02:20:02 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id C5A498FC14 for ; Fri, 4 Dec 2009 02:20:02 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.3/8.14.3) with ESMTP id nB42K2p9050044 for ; Fri, 4 Dec 2009 02:20:02 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.3/8.14.3/Submit) id nB42K21G050043; Fri, 4 Dec 2009 02:20:02 GMT (envelope-from gnats) Date: Fri, 4 Dec 2009 02:20:02 GMT Message-Id: <200912040220.nB42K21G050043@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org From: John Hein Cc: Subject: Re: kern/135412: [zfs] [nfs] zfs(v13)+nfs and open(..., O_WRONLY|O_CREAT|O_EXCL, ...) returns io error X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: John Hein List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 04 Dec 2009 02:20:03 -0000 The following reply was made to PR kern/135412; it has been noted by GNATS. From: John Hein To: bug-followup@FreeBSD.org, danny@cs.huji.ac.il Cc: jilles@FreeBSD.org Subject: Re: kern/135412: [zfs] [nfs] zfs(v13)+nfs and open(..., O_WRONLY|O_CREAT|O_EXCL, ...) returns io error Date: Thu, 3 Dec 2009 19:18:39 -0700 I still get this. But it happens when the nfs client is a FreeBSD 4.x machine or a linux machine (tested with Fedora 10 and 11). And it does not seem to happen with nfs v2, just nfs v3. This breaks xauth and various programs but the common thread is "open(..., O_WRONLY|O_CREAT|O_EXCL, ...)".
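A quick way to exercise exactly that open(2) pattern from a client shell is mktemp(1), since mkstemp(3) opens with O_CREAT|O_EXCL (a sketch; the server name and export path are hypothetical):

  client# mount_nfs -3 server:/tank/export /mnt
  client# mktemp /mnt/excl-test.XXXXXX

On a combination that exhibits the bug, mktemp reports an input/output error instead of printing the name of the created file.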
From owner-freebsd-fs@FreeBSD.ORG Fri Dec 4 06:30:06 2009 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 2CECD106566B for ; Fri, 4 Dec 2009 06:30:06 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 1C0238FC16 for ; Fri, 4 Dec 2009 06:30:06 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.3/8.14.3) with ESMTP id nB46U56s073149 for ; Fri, 4 Dec 2009 06:30:05 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.3/8.14.3/Submit) id nB46U583073148; Fri, 4 Dec 2009 06:30:05 GMT (envelope-from gnats) Date: Fri, 4 Dec 2009 06:30:05 GMT Message-Id: <200912040630.nB46U583073148@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org From: Jaakko Heinonen Cc: Subject: Re: kern/135412: [zfs] [nfs] zfs(v13)+nfs and open(..., O_WRONLY|O_CREAT|O_EXCL, ...) returns io error X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: Jaakko Heinonen List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 04 Dec 2009 06:30:06 -0000 The following reply was made to PR kern/135412; it has been noted by GNATS. From: Jaakko Heinonen To: John Hein Cc: bug-followup@FreeBSD.org, danny@cs.huji.ac.il, jilles@FreeBSD.org Subject: Re: kern/135412: [zfs] [nfs] zfs(v13)+nfs and open(..., O_WRONLY|O_CREAT|O_EXCL, ...) returns io error Date: Fri, 4 Dec 2009 08:26:17 +0200 On 2009-12-04, John Hein wrote: > But it happens when the nfs client is a FreeBSD 4.x machine or a linux > machine (tested with Fedora 10 and 11). And it does not seem to > happen with nfs v2, just nfs v3. Which FreeBSD version is your server running? There was an additional fix (r197525) but it hasn't been MFCd to stable/7. Here's the patch against stable/7.

%%%
Index: sys/nfsserver/nfs_serv.c
===================================================================
--- sys/nfsserver/nfs_serv.c	(revision 200062)
+++ sys/nfsserver/nfs_serv.c	(working copy)
@@ -1743,7 +1743,7 @@ nfsrv_create(struct nfsrv_descript *nfsd
 			tl = nfsm_dissect_nonblock(u_int32_t *,
 			    NFSX_V3CREATEVERF);
 			/* Unique bytes, endianness is not important. */
-			cverf.tv_sec = tl[0];
+			cverf.tv_sec = (int32_t)tl[0];
 			cverf.tv_nsec = tl[1];
 			exclusive_flag = 1;
 			break;
%%%

-- Jaakko From owner-freebsd-fs@FreeBSD.ORG Fri Dec 4 16:40:04 2009 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id B60871065697 for ; Fri, 4 Dec 2009 16:40:04 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id A4B258FC17 for ; Fri, 4 Dec 2009 16:40:04 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.3/8.14.3) with ESMTP id nB4Ge3vn009909 for ; Fri, 4 Dec 2009 16:40:03 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.3/8.14.3/Submit) id nB4Ge3xE009908; Fri, 4 Dec 2009 16:40:03 GMT (envelope-from gnats) Date: Fri, 4 Dec 2009 16:40:03 GMT Message-Id: <200912041640.nB4Ge3xE009908@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org From: John Hein Cc: Subject: Re: kern/135412: [zfs] [nfs] zfs(v13)+nfs and open(..., O_WRONLY|O_CREAT|O_EXCL, ...) returns io error X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: John Hein List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 04 Dec 2009 16:40:04 -0000 The following reply was made to PR kern/135412; it has been noted by GNATS. From: John Hein To: Jaakko Heinonen Cc: bug-followup@FreeBSD.org, danny@cs.huji.ac.il, jilles@FreeBSD.org Subject: Re: kern/135412: [zfs] [nfs] zfs(v13)+nfs and open(..., O_WRONLY|O_CREAT|O_EXCL, ...) returns io error Date: Fri, 4 Dec 2009 09:37:10 -0700 Jaakko Heinonen wrote at 08:26 +0200 on Dec 4, 2009: > On 2009-12-04, John Hein wrote: > > But it happens when the nfs client is a FreeBSD 4.x machine or a linux > > machine (tested with Fedora 10 and 11). And it does not seem to > > happen with nfs v2, just nfs v3. > > Which FreeBSD version is your server running? There was an additional > fix (r197525) but it hasn't been MFCd to stable/7. > > Here's the patch against stable/7. >
> %%%
> Index: sys/nfsserver/nfs_serv.c
> ===================================================================
> --- sys/nfsserver/nfs_serv.c	(revision 200062)
> +++ sys/nfsserver/nfs_serv.c	(working copy)
> @@ -1743,7 +1743,7 @@ nfsrv_create(struct nfsrv_descript *nfsd
>  			tl = nfsm_dissect_nonblock(u_int32_t *,
>  			    NFSX_V3CREATEVERF);
>  			/* Unique bytes, endianness is not important. */
> -			cverf.tv_sec = tl[0];
> +			cverf.tv_sec = (int32_t)tl[0];
>  			cverf.tv_nsec = tl[1];
>  			exclusive_flag = 1;
>  			break;
> %%%
Yes, I saw the same thing, built a new kernel with that change last night. I tested this morning, and it fixes the problem (and causes no new ones that I have seen). +1 for MFC to 7. Incidentally, is this going to have to be reworked because of the 32-bit time_t rollover (admittedly, still quite a ways away)?
From owner-freebsd-fs@FreeBSD.ORG Fri Dec 4 20:21:42 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id DF4591065670 for ; Fri, 4 Dec 2009 20:21:42 +0000 (UTC) (envelope-from pjd@garage.freebsd.pl) Received: from mail.garage.freebsd.pl (chello089077043238.chello.pl [89.77.43.238]) by mx1.freebsd.org (Postfix) with ESMTP id 5E6C08FC17 for ; Fri, 4 Dec 2009 20:21:42 +0000 (UTC) Received: by mail.garage.freebsd.pl (Postfix, from userid 65534) id 8734245E13; Fri, 4 Dec 2009 21:21:40 +0100 (CET) Received: from localhost (chello089077043238.chello.pl [89.77.43.238]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mail.garage.freebsd.pl (Postfix) with ESMTP id E381945CD9; Fri, 4 Dec 2009 21:21:34 +0100 (CET) Date: Fri, 4 Dec 2009 21:21:34 +0100 From: Pawel Jakub Dawidek To: Kevin Message-ID: <20091204202134.GA1716@garage.freebsd.pl> References: <30582AF2-B1E8-4C07-A487-C220845963D2@your.org> Mime-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="9jxsPFA5p3P2qPhR" Content-Disposition: inline In-Reply-To: <30582AF2-B1E8-4C07-A487-C220845963D2@your.org> User-Agent: Mutt/1.4.2.3i X-PGP-Key-URL: http://people.freebsd.org/~pjd/pjd.asc X-OS: FreeBSD 9.0-CURRENT i386 X-Spam-Checker-Version: SpamAssassin 3.0.4 (2005-06-05) on mail.garage.freebsd.pl X-Spam-Level: X-Spam-Status: No, score=-0.6 required=4.5 tests=BAYES_00,RCVD_IN_SORBS_DUL autolearn=no version=3.0.4 Cc: freebsd-fs@freebsd.org Subject: Re: "zfs receive" lock time X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 04 Dec 2009 20:21:43 -0000 --9jxsPFA5p3P2qPhR Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On Wed, Dec 02, 2009 at 02:55:23PM -0600, Kevin wrote: > > I have two very very fast systems (12-disk 15krpm raid array, 16 cores, etc). I'm using zfs send/receive to replicate a zfs volume from the "master" box to the "slave" box. > > Every minute, the master takes a new snapshot, then uses "send -i" to send an incremental snapshot to the slave. Normally, no files are changed during the minute so the operation is very fast (<1 second, and most of that is ssh negotiation time). > > If the slave is completely idle, "zfs receive" takes a fraction of a second. If the slave has been very busy (lots of read activity, no writes - the slave has everything mounted read only), suddenly "zfs receive" can take 30 seconds or more to complete, the whole time it has the filesystem locked. > For example, I'd see:
>
> 49345 root   1  76   0 13600K  1956K zio->i   9   0:01  1.37% zfs
> 48910 www    1  46   0 36700K 21932K rrl->r   3   0:24  0.00% lighttpd
> 48913 www    1  46   0 41820K 26108K rrl->r   2   0:24  0.00% lighttpd
> 48912 www    1  46   0 37724K 23484K rrl->r   0   0:24  0.00% lighttpd
> 48911 www    1  46   0 41820K 26460K rrl->r  10   0:23  0.00% lighttpd
> 48909 www    1  46   0 39772K 24488K rrl->r   5   0:22  0.00% lighttpd
> 48908 www    1  46   0 36700K 21460K rrl->r  14   0:19  0.00% lighttpd
> 48907 www    1  45   0 30556K 16216K rrl->r  13   0:14  0.00% lighttpd
> 48906 www    1  44   0 26460K 11452K rrl->r   6   0:06  0.00% lighttpd
>
> At first, I thought it was possibly cache pressure... when the system was busy, whatever data necessary to create a new snapshot was getting pushed out of the cache so it had to be re-read. I increased arc_max and arc_meta_limit to very high values, and it seemed to have no effect, even when arc_meta_used was far below arc_meta_limit. > > Disabling cache flushes had no impact. Disabling zil cut the time in half, but it's still too long for this application. > > ktrace on the "zfs receive" shows:
>
> 1062 zfs      0.000024 CALL  ioctl(0x3,0xcc285a11 ,0x7fffffffa320)
> 1062 zfs      0.000081 RET   ioctl 0
> 1062 zfs      0.000058 CALL  ioctl(0x3,0xcc285a05 ,0x7fffffffa2f0)
> 1062 zfs      0.000037 RET   ioctl 0
> 1062 zfs      0.000019 CALL  ioctl(0x3,0xcc285a11 ,0x7fffffffa320)
> 1062 zfs      0.000055 RET   ioctl 0
> 1062 zfs      0.000031 CALL  ioctl(0x3,0xcc285a11 ,0x7fffffff9f00)
> 1062 zfs      0.000053 RET   ioctl 0
> 1062 zfs      0.000020 CALL  ioctl(0x3,0xcc285a1c ,0x7fffffffc930)
> 1062 zfs     24.837084 RET   ioctl 0
> 1062 zfs      0.000028 CALL  ioctl(0x3,0xcc285a11 ,0x7fffffff9f00)
> 1062 zfs      0.000074 RET   ioctl 0
> 1062 zfs      0.000037 CALL  close(0x6)
> 1062 zfs      0.000006 RET   close 0
> 1062 zfs      0.000007 CALL  close(0x3)
> 1062 zfs      0.000005 RET   close 0
>
> The 24 second call to 0xcc285a1c is ZFS_IOC_RECV, so whatever is going on is in the kernel, not a delay in getting the kernel any data. "systat" is showing that the drives are 100% busy during the operation, so it's obviously doing something. :) > > Does anyone know what "zfs receive" is doing while it has everything locked like this, and why a lot of read activity beforehand would drastically affect the performance of doing this? The read activity is on the dataset on the slave that is being received? Is that right? There are two operations that can suspend your file system this way: rollback and receive. The suspend is done by acquiring a write lock on the given file system, where every other operation acquires a read lock. In the end, for receive to acquire the write lock it has to wait for all read operations to finish. I'm not sure how your applications use it, but if files are open for a short period of time only and then closed, you could do something like this:

master# curtime=`date "+%Y%m%d%H%M%S"`
master# zfs snapshot pool/fs@${curtime}
master# zfs send -i pool/fs@${oldtime} pool/fs@${curtime} | \
        ssh slave zfs recv pool/fs
slave# zfs clone pool/fs@${curtime} pool/fs_${curtime}
slave# ln -fs /pool/fs_${curtime} /pool/usethis

Then point your application at the directory /pool/usethis/ (the clone, instead of the received file system), and clean up clones as you wish. Read activity on clones shouldn't affect the received file system. -- Pawel Jakub Dawidek http://www.wheel.pl pjd@FreeBSD.org http://www.FreeBSD.org FreeBSD committer Am I Evil? Yes, I Am!
--9jxsPFA5p3P2qPhR Content-Type: application/pgp-signature Content-Disposition: inline -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.4 (FreeBSD) iD8DBQFLGW9NForvXbEpPzQRAsmsAKC8g4p8SCaM9ZFBTDy0fYUvFs5OdACdG4FW OvTYOfKYu6x+7Mk3gJVCbFY= =WtXy -----END PGP SIGNATURE----- --9jxsPFA5p3P2qPhR-- From owner-freebsd-fs@FreeBSD.ORG Fri Dec 4 23:07:28 2009 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 26DAA1065670; Fri, 4 Dec 2009 23:07:28 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id F25C38FC08; Fri, 4 Dec 2009 23:07:27 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.3/8.14.3) with ESMTP id nB4N7RIv044458; Fri, 4 Dec 2009 23:07:27 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.3/8.14.3/Submit) id nB4N7RoJ044454; Fri, 4 Dec 2009 23:07:27 GMT (envelope-from linimon) Date: Fri, 4 Dec 2009 23:07:27 GMT Message-Id: <200912042307.nB4N7RoJ044454@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Cc: Subject: Re: kern/141177: [zfs] fsync() on FIFO causes panic() on zfs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 04 Dec 2009 23:07:28 -0000 Old Synopsis: fsync() on FIFO causes panic() on zfs New Synopsis: [zfs] fsync() on FIFO causes panic() on zfs Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Fri Dec 4 23:07:15 UTC 2009 Responsible-Changed-Why: Over to maintainer(s). http://www.freebsd.org/cgi/query-pr.cgi?pr=141177 From owner-freebsd-fs@FreeBSD.ORG Fri Dec 4 23:50:04 2009 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 218A9106566B for ; Fri, 4 Dec 2009 23:50:04 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 10D268FC12 for ; Fri, 4 Dec 2009 23:50:04 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.3/8.14.3) with ESMTP id nB4No3BM080011 for ; Fri, 4 Dec 2009 23:50:03 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.3/8.14.3/Submit) id nB4No3eo080010; Fri, 4 Dec 2009 23:50:03 GMT (envelope-from gnats) Date: Fri, 4 Dec 2009 23:50:03 GMT Message-Id: <200912042350.nB4No3eo080010@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org From: Kostik Belousov Cc: Subject: Re: kern/141177: [zfs] fsync() on FIFO causes panic() on zfs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: Kostik Belousov List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 04 Dec 2009 23:50:04 -0000 The following reply was made to PR kern/141177; it has been noted by GNATS. 
From: Kostik Belousov
To: Dominik Ernst
Cc: bug-followup@freebsd.org
Subject: Re: kern/141177: [zfs] fsync() on FIFO causes panic() on zfs
Date: Sat, 5 Dec 2009 01:41:31 +0200

ZFS explicitly puts VOP_PANIC as the fsync vop for fifos. I think the
following patch fixes it.

diff --git a/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c b/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c
index 7608d76..4f61f5f 100644
--- a/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c
+++ b/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c
@@ -5009,7 +5009,7 @@ struct vop_vector zfs_vnodeops = {
 struct vop_vector zfs_fifoops = {
 	.vop_default =	&fifo_specops,
-	.vop_fsync =	VOP_PANIC,
+	.vop_fsync =	zfs_freebsd_fsync,
 	.vop_access =	zfs_freebsd_access,
 	.vop_getattr =	zfs_freebsd_getattr,
 	.vop_inactive =	zfs_freebsd_inactive,

From owner-freebsd-fs@FreeBSD.ORG Sat Dec 5 15:43:47 2009
Return-Path:
Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 7E3E6106566B for ; Sat, 5 Dec 2009 15:43:47 +0000 (UTC) (envelope-from baldur@foo.is)
Received: from gremlin.foo.is (gremlin.foo.is [194.105.250.10]) by mx1.freebsd.org (Postfix) with ESMTP id 46ACF8FC0A for ; Sat, 5 Dec 2009 15:43:47 +0000 (UTC)
Received: by gremlin.foo.is (Postfix, from userid 1000) id B66ABDA85D; Sat, 5 Dec 2009 15:27:57 +0000 (GMT)
Date: Sat, 5 Dec 2009 15:27:57 +0000
From: Baldur Gislason
To: freebsd-fs@freebsd.org
Message-ID: <20091205152757.GK73250@gremlin.foo.is>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
User-Agent: Mutt/1.5.18 (2008-05-17)
Subject: ZFS and reordering drives
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Filesystems
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Sat, 05 Dec 2009 15:43:47 -0000

I have a machine that has been running 8.0-STABLE amd64 since last week;
it was upgraded from 7.2. Today I had a problem with the system hard
drive and had to unplug all the drives to get to it. When I plugged them
back in they didn't go in the right order and now both of my pools are
broken. I rearranged the cables in what I think is the right order, but
still no go. One pool says all drives are online but the pool is broken.
The other lists ad13 twice, ignores ad12, and is running in degraded
mode. What should I do?

Baldur

root@enigma:~# zpool status
  pool: zirconium
 state: UNAVAIL
status: The pool is formatted using an older on-disk format. The pool
        can still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
        pool will no longer be accessible on older software versions.
 scrub: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        zirconium    UNAVAIL      0     0     0  insufficient replicas
          raidz1     UNAVAIL      0     0     0  corrupted data
            ad4      ONLINE       0     0     0
            ad6      ONLINE       0     0     0
            ad18     ONLINE       0     0     0
            ad20     ONLINE       0     0     0

  pool: zorglub
 state: DEGRADED
status: One or more devices could not be used because the label is
        missing or invalid. Sufficient replicas exist for the pool to
        continue functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-4J
 scrub: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        zorglub      DEGRADED     0     0     0
          raidz1     DEGRADED     0     0     0
            ad15     ONLINE       0     0     0
            ad11     ONLINE       0     0     0
            ad13     ONLINE       0     0     0
            ad17     ONLINE       0     0     0
            ad13     FAULTED      0   169     0  corrupted data

errors: No known data errors

from dmesg:
ad4: 953869MB at ata2-master SATA300
ad6: 953869MB at ata3-master SATA300
ad11: 381554MB at ata5-slave UDMA133
ad12: 381554MB at ata6-master UDMA133
ad13: 381554MB at ata6-slave UDMA100
ad14: 152626MB at ata7-master SATA300
ad15: 381554MB at ata7-slave SATA150
ad16: 476940MB at ata8-master SATA300
ad17: 381554MB at ata8-slave SATA150
ad18: 953868MB at ata9-master SATA300
ad20: 953869MB at ata10-master SATA300

From owner-freebsd-fs@FreeBSD.ORG Sat Dec 5 16:01:18 2009
Return-Path:
Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 5C868106566C for ; Sat, 5 Dec 2009 16:01:18 +0000 (UTC) (envelope-from james-freebsd-fs2@jrv.org)
Received: from mail.jrv.org (rrcs-24-73-246-106.sw.biz.rr.com [24.73.246.106]) by mx1.freebsd.org (Postfix) with ESMTP id D91368FC12 for ; Sat, 5 Dec 2009 16:01:17 +0000 (UTC)
Received: from kremvax.housenet.jrv (kremvax.housenet.jrv [192.168.3.124]) by bigtex.housenet.jrv (8.14.3/8.14.3) with ESMTP id nB5Fw5Sl098798; Sat, 5 Dec 2009 09:58:05 -0600 (CST) (envelope-from james-freebsd-fs2@jrv.org)
Authentication-Results: bigtex.housenet.jrv; domainkeys=pass (testing) header.from=james-freebsd-fs2@jrv.org
DomainKey-Signature: a=rsa-sha1; s=enigma; d=jrv.org; c=nofws; q=dns; h=message-id:date:from:user-agent:mime-version:cc:subject: references:in-reply-to:content-type:content-transfer-encoding; b=Ccv43ct6A8zm11ZeK3Mb+tMd0yzZFZY8ABM/8C17RKwbJ9OIcol+oSSSyZESTPazh fYOkveSXqUSaOkj45PSjbuJBSI5kiuWVQ/SkKRElukftOUuMAMSwRZ2K3ZLrLyjSK3X 5fDzipEDdVFYiKsYPlYRXBLSNFfy1zivi7AH8MM=
Message-ID: <4B1A830D.3090900@jrv.org>
Date: Sat, 05 Dec 2009 09:58:05 -0600
From: "James R. Van Artsdalen"
User-Agent: Thunderbird 2.0.0.23 (Macintosh/20090812)
MIME-Version: 1.0
References: <20091205152757.GK73250@gremlin.foo.is>
In-Reply-To: <20091205152757.GK73250@gremlin.foo.is>
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit
Cc: freebsd-fs@freebsd.org
Subject: Re: ZFS and reordering drives
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Filesystems
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Sat, 05 Dec 2009 16:01:18 -0000

Baldur Gislason wrote:
> When I plugged them back in they didn't go in the right order
> and now both of my pools are broken.

zpool.cache is broken.  Rename /boot/zfs/zpool.cache so that ZFS won't
load it, then import the pools manually.  (a reboot might be needed
before the import; not sure).

The problem is that ZFS is recording the boot-time assigned name
(/dev/ad0) in the cache.  I'm hoping to get GEOM to put the disk serial
number in /dev, i.e., /dev/serialnum/5LZ958QL.  If you created the pool
using serial numbers then the cache would always work right.
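In shell terms, the recovery James describes is roughly the sketch below
(a sketch only: it assumes the stock /boot/zfs/zpool.cache location, and
zirconium and zorglub are the pool names from this thread):

# Get the stale cache out of the way so ZFS stops trusting the old
# boot-time device names recorded in it:
mv /boot/zfs/zpool.cache /boot/zfs/zpool.cache.stale

# (a reboot may be needed at this point before the import succeeds)

# Scan all attached disks for ZFS labels and list the pools found:
zpool import

# Import each pool by name; the member disks are located by their
# on-disk labels rather than by the cached /dev/adN names, so the new
# drive order no longer matters:
zpool import zirconium
zpool import zorglub

A successful import should also write a fresh zpool.cache reflecting the
devices' current names.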
From owner-freebsd-fs@FreeBSD.ORG Sat Dec 5 16:31:24 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 46909106566C for ; Sat, 5 Dec 2009 16:31:24 +0000 (UTC) (envelope-from gcorcoran@rcn.com) Received: from smtp02.lnh.mail.rcn.net (smtp02.lnh.mail.rcn.net [207.172.157.102]) by mx1.freebsd.org (Postfix) with ESMTP id 07E4B8FC15 for ; Sat, 5 Dec 2009 16:31:23 +0000 (UTC) Received: from mr02.lnh.mail.rcn.net ([207.172.157.22]) by smtp02.lnh.mail.rcn.net with ESMTP; 05 Dec 2009 11:31:23 -0500 Received: from smtp01.lnh.mail.rcn.net (smtp01.lnh.mail.rcn.net [207.172.4.11]) by mr02.lnh.mail.rcn.net (MOS 3.10.7-GA) with ESMTP id QIX89029; Sat, 5 Dec 2009 11:31:15 -0500 (EST) X-Auth-ID: gcorcoran Received: from 216-164-180-100.c3-0.tlg-ubr8.atw-tlg.pa.cable.rcn.com (HELO [10.56.78.161]) ([216.164.180.100]) by smtp01.lnh.mail.rcn.net with ESMTP; 05 Dec 2009 11:31:15 -0500 Message-ID: <4B1A8B5D.6050808@rcn.com> Date: Sat, 05 Dec 2009 11:33:33 -0500 From: Gary Corcoran User-Agent: Thunderbird 2.0.0.23 (Windows/20090812) MIME-Version: 1.0 To: "James R. Van Artsdalen" References: <20091205152757.GK73250@gremlin.foo.is> <4B1A830D.3090900@jrv.org> In-Reply-To: <4B1A830D.3090900@jrv.org> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Junkmail-Whitelist: YES (by domain whitelist at mr02.lnh.mail.rcn.net) Cc: freebsd-fs@freebsd.org Subject: Re: ZFS and reordering drives X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 05 Dec 2009 16:31:24 -0000 James R. Van Artsdalen wrote: > Baldur Gislason wrote: >> When I plugged them back in they didn't go in the right order >> and now both of my pools are broken. > zpool.cache is broken. Rename /boot/zfs/zpool.cache so that ZFS won't > load it, then import the pools manually. (a reboot might be needed > before the import; not sure). If one were booting from ZFS, would you be out of luck (since you wouldn't be able to access the zpool.cache before booting), or is there a way around this problem? Just wondering, I've avoided booting from ZFS so far. > The problem is that ZFS is recording the boot-time assigned name > (/dev/ad0) in the cache. I'm hoping to get GEOM to put the disk serial > number in /dev, i.e., /dev/serialnum/5LZ958QL. If you created the pool > using serial numbers then the cache would always work right. Is there any way today, to avoid using the boot assigned drive name (e.g. /dev/ad2) when creating the zpool? Again just wondering, I don't need a solution this year... 
Thanks,
Gary

From owner-freebsd-fs@FreeBSD.ORG Sat Dec 5 16:39:45 2009
Return-Path:
Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 4B7001065670 for ; Sat, 5 Dec 2009 16:39:45 +0000 (UTC) (envelope-from baldur@foo.is)
Received: from gremlin.foo.is (gremlin.foo.is [194.105.250.10]) by mx1.freebsd.org (Postfix) with ESMTP id 11A618FC2E for ; Sat, 5 Dec 2009 16:39:44 +0000 (UTC)
Received: by gremlin.foo.is (Postfix, from userid 1000) id 22365DA85D; Sat, 5 Dec 2009 16:39:44 +0000 (GMT)
Date: Sat, 5 Dec 2009 16:39:44 +0000
From: Baldur Gislason
To: freebsd-fs@freebsd.org
Message-ID: <20091205163943.GL73250@gremlin.foo.is>
References: <20091205152757.GK73250@gremlin.foo.is> <4B1A830D.3090900@jrv.org> <4B1A8B5D.6050808@rcn.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <4B1A8B5D.6050808@rcn.com>
User-Agent: Mutt/1.5.18 (2008-05-17)
Subject: Re: ZFS and reordering drives
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Filesystems
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Sat, 05 Dec 2009 16:39:45 -0000

Ok. The pool that was degraded imported cleanly but the pool that went
unavailable won't import.
If it is of any significance, I did change the BIOS disk controller settings
from IDE to AHCI and then back to IDE before I noticed this pool was gone.

root@enigma:~# zpool import zirconium
cannot import 'zirconium': invalid vdev configuration

  pool: zirconium
    id: 16708799643457239163
 state: UNAVAIL
status: The pool is formatted using an older on-disk version.
action: The pool cannot be imported due to damaged devices or data.
config:

        zirconium   UNAVAIL  insufficient replicas
          raidz1    UNAVAIL  corrupted data
            ad4     ONLINE
            ad6     ONLINE
            ad18    ONLINE
            ad20    ONLINE

How do I go about debugging this?

Baldur

On Sat, Dec 05, 2009 at 11:33:33AM -0500, Gary Corcoran wrote:
> James R. Van Artsdalen wrote:
> > Baldur Gislason wrote:
> >> When I plugged them back in they didn't go in the right order
> >> and now both of my pools are broken.
> > zpool.cache is broken.  Rename /boot/zfs/zpool.cache so that ZFS won't
> > load it, then import the pools manually.  (a reboot might be needed
> > before the import; not sure).
>
> If one were booting from ZFS, would you be out of luck (since you wouldn't
> be able to access the zpool.cache before booting), or is there a way
> around this problem?  Just wondering, I've avoided booting from ZFS so far.
>
> > The problem is that ZFS is recording the boot-time assigned name
> > (/dev/ad0) in the cache.  I'm hoping to get GEOM to put the disk serial
> > number in /dev, i.e., /dev/serialnum/5LZ958QL.  If you created the pool
> > using serial numbers then the cache would always work right.
>
> Is there any way today, to avoid using the boot assigned drive name (e.g.
> /dev/ad2) when creating the zpool?  Again just wondering, I don't need
> a solution this year...
>
> Thanks,
> Gary
>
>
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"

From owner-freebsd-fs@FreeBSD.ORG Sat Dec 5 17:02:38 2009
Return-Path:
Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 00F7A106566C for ; Sat, 5 Dec 2009 17:02:38 +0000 (UTC) (envelope-from james-freebsd-fs2@jrv.org)
Received: from mail.jrv.org (adsl-70-243-84-13.dsl.austtx.swbell.net [70.243.84.13]) by mx1.freebsd.org (Postfix) with ESMTP id 9E1E98FC13 for ; Sat, 5 Dec 2009 17:02:37 +0000 (UTC)
Received: from kremvax.housenet.jrv (kremvax.housenet.jrv [192.168.3.124]) by mail.jrv.org (8.14.3/8.14.3) with ESMTP id nB5H2ajq000361; Sat, 5 Dec 2009 11:02:36 -0600 (CST) (envelope-from james-freebsd-fs2@jrv.org)
Authentication-Results: mail.jrv.org; domainkeys=pass (testing) header.from=james-freebsd-fs2@jrv.org
DomainKey-Signature: a=rsa-sha1; s=enigma; d=jrv.org; c=nofws; q=dns; h=message-id:date:from:user-agent:mime-version:cc:subject: references:in-reply-to:content-type:content-transfer-encoding; b=k5gsjsFVD98zMbSNaKBpDLhOKR5FeGrdgdy81NR04CZgUMzaHocWXJJcAKiddNjJ4 Fb2qIXClI1CO3oH+mm+jQANON+/Kq8HY1DUFIKYRIuu19XXwBMbEikCaOypfF94ijK7 c7/kjhyAHrVAj7EcDdoYfl74SEFIehUsLPXPMf0=
Message-ID: <4B1A922C.7000909@jrv.org>
Date: Sat, 05 Dec 2009 11:02:36 -0600
From: "James R. Van Artsdalen"
User-Agent: Thunderbird 2.0.0.23 (Macintosh/20090812)
MIME-Version: 1.0
References: <20091205152757.GK73250@gremlin.foo.is> <4B1A830D.3090900@jrv.org> <4B1A8B5D.6050808@rcn.com>
In-Reply-To: <4B1A8B5D.6050808@rcn.com>
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit
Cc: freebsd-fs
Subject: Re: ZFS and reordering drives
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Filesystems
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Sat, 05 Dec 2009 17:02:38 -0000

Gary Corcoran wrote:
> If one were booting from ZFS, would you be out of luck (since you
> wouldn't
> be able to access the zpool.cache before booting), or is there a way
> around this problem?

Boot the CD, run fixit mode, mkdir -p /boot/zfs, import the pool, copy
the resulting /boot/zfs/zpool.cache file into the pool.

In fixit, the import will likely mount filesystems that have the
"mountpoint" property set, which may be a nuisance: some zfs unmounts
may be needed in practice.

> Is there any way today, to avoid using the boot assigned drive name (e.g.
> /dev/ad2) when creating the zpool?

Partition the disk GPT with gpart. Create one partition covering the
entire disk and give that partition a label. Use that label when
creating the pool:
gpart add -b 34 -s 9999 -l zfs-label -t freebsd-zfs ad0
zpool create zpool /dev/gpt/zfs-label

From owner-freebsd-fs@FreeBSD.ORG Sat Dec 5 17:04:01 2009
Return-Path:
Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id D621B106568D for ; Sat, 5 Dec 2009 17:04:01 +0000 (UTC) (envelope-from baldur@foo.is)
Received: from gremlin.foo.is (gremlin.foo.is [194.105.250.10]) by mx1.freebsd.org (Postfix) with ESMTP id 645BB8FC13 for ; Sat, 5 Dec 2009 17:04:01 +0000 (UTC)
Received: by gremlin.foo.is (Postfix, from userid 1000) id 5425DDA85D; Sat, 5 Dec 2009 17:04:00 +0000 (GMT)
Date: Sat, 5 Dec 2009 17:04:00 +0000
From: Baldur Gislason
To: freebsd-fs@freebsd.org
Message-ID: <20091205170400.GM73250@gremlin.foo.is>
References: <20091205152757.GK73250@gremlin.foo.is> <4B1A830D.3090900@jrv.org> <4B1A8B5D.6050808@rcn.com> <20091205163943.GL73250@gremlin.foo.is>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20091205163943.GL73250@gremlin.foo.is>
User-Agent: Mutt/1.5.18 (2008-05-17)
Subject: Re: ZFS and reordering drives
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Filesystems
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Sat, 05 Dec 2009 17:04:02 -0000

Ok. Running zdb -l on the four drives seems to indicate that one of them
has some label issues.
http://foo.is/~baldur/brokenzfs/
ad4, ad6 and ad20 all have identical labels; the only differences are
the guids of the disk holding the label, as expected.

root@enigma:~# diff ad4.label ad6.label
12c12
<     guid=12923783381249452341
---
>     guid=972519640617937764
61c61
<     guid=12923783381249452341
---
>     guid=972519640617937764
110c110
<     guid=12923783381249452341
---
>     guid=972519640617937764
159c159
<     guid=12923783381249452341
---
>     guid=972519640617937764
root@enigma:~# diff ad4.label ad20.label
12c12
<     guid=12923783381249452341
---
>     guid=10715749107930065182
61c61
<     guid=12923783381249452341
---
>     guid=10715749107930065182
110c110
<     guid=12923783381249452341
---
>     guid=10715749107930065182
159c159
<     guid=12923783381249452341
---
>     guid=10715749107930065182

ad18 has a somewhat broken label: labels 0 and 1 exist and are identical
to the labels on the rest, but labels 2 and 3 are broken or nonexistent.

--------------------------------------------
LABEL 2
--------------------------------------------
failed to unpack label 2
--------------------------------------------
LABEL 3
--------------------------------------------
failed to unpack label 3

How should I go about recovering this?

Baldur

On Sat, Dec 05, 2009 at 04:39:44PM +0000, Baldur Gislason wrote:
> Ok. The pool that was degraded imported cleanly but the pool that went
> unavailable won't import.
> If it is of any significance, I did change the BIOS disk controller settings
> from IDE to AHCI and then back to IDE before I noticed this pool was gone.
>
> root@enigma:~# zpool import zirconium
> cannot import 'zirconium': invalid vdev configuration
>
>   pool: zirconium
>     id: 16708799643457239163
>  state: UNAVAIL
> status: The pool is formatted using an older on-disk version.
> action: The pool cannot be imported due to damaged devices or data.
> config:
>
>         zirconium   UNAVAIL  insufficient replicas
>           raidz1    UNAVAIL  corrupted data
>             ad4     ONLINE
>             ad6     ONLINE
>             ad18    ONLINE
>             ad20    ONLINE
>
> How do I go about debugging this?
> > Baldur > > > On Sat, Dec 05, 2009 at 11:33:33AM -0500, Gary Corcoran wrote: > > James R. Van Artsdalen wrote: > > > Baldur Gislason wrote: > > >> When I plugged them back in they didn't go in the right order > > >> and now both of my pools are broken. > > > zpool.cache is broken. Rename /boot/zfs/zpool.cache so that ZFS won't > > > load it, then import the pools manually. (a reboot might be needed > > > before the import; not sure). > > > > If one were booting from ZFS, would you be out of luck (since you wouldn't > > be able to access the zpool.cache before booting), or is there a way > > around this problem? Just wondering, I've avoided booting from ZFS so far. > > > > > The problem is that ZFS is recording the boot-time assigned name > > > (/dev/ad0) in the cache. I'm hoping to get GEOM to put the disk serial > > > number in /dev, i.e., /dev/serialnum/5LZ958QL. If you created the pool > > > using serial numbers then the cache would always work right. > > > > Is there any way today, to avoid using the boot assigned drive name (e.g. > > /dev/ad2) when creating the zpool? Again just wondering, I don't need > > a solution this year... > > > > Thanks, > > Gary > > > > > > _______________________________________________ > > freebsd-fs@freebsd.org mailing list > > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Sat Dec 5 17:13:10 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id D52CB106566B for ; Sat, 5 Dec 2009 17:13:10 +0000 (UTC) (envelope-from baldur@foo.is) Received: from gremlin.foo.is (gremlin.foo.is [194.105.250.10]) by mx1.freebsd.org (Postfix) with ESMTP id 632308FC08 for ; Sat, 5 Dec 2009 17:13:10 +0000 (UTC) Received: by gremlin.foo.is (Postfix, from userid 1000) id 8FFC1DA85D; Sat, 5 Dec 2009 17:13:09 +0000 (GMT) Date: Sat, 5 Dec 2009 17:13:09 +0000 From: Baldur Gislason To: freebsd-fs@freebsd.org Message-ID: <20091205171309.GN73250@gremlin.foo.is> References: <20091205152757.GK73250@gremlin.foo.is> <4B1A830D.3090900@jrv.org> <4B1A8B5D.6050808@rcn.com> <20091205163943.GL73250@gremlin.foo.is> <20091205170400.GM73250@gremlin.foo.is> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20091205170400.GM73250@gremlin.foo.is> User-Agent: Mutt/1.5.18 (2008-05-17) Subject: Re: ZFS and reordering drives X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 05 Dec 2009 17:13:10 -0000 Ok. Managed to import the pool by using atacontrol to detach ad18 from the system first. I guess I'll just do a replace then to rebuild. Baldur On Sat, Dec 05, 2009 at 05:04:00PM +0000, Baldur Gislason wrote: > Ok. Running zdb -l on the four drives seems to indicate that one of them > has some label issues. > http://foo.is/~baldur/brokenzfs/ > ad4, ad6 and ad20 all have identical labels, only differences are the ids of the disk > holding the label, as expected. 
> root@enigma:~# diff ad4.label ad6.label > 12c12 > < guid=12923783381249452341 > --- > > guid=972519640617937764 > 61c61 > < guid=12923783381249452341 > --- > > guid=972519640617937764 > 110c110 > < guid=12923783381249452341 > --- > > guid=972519640617937764 > 159c159 > < guid=12923783381249452341 > --- > > guid=972519640617937764 > root@enigma:~# diff ad4.label ad20.label > 12c12 > < guid=12923783381249452341 > --- > > guid=10715749107930065182 > 61c61 > < guid=12923783381249452341 > --- > > guid=10715749107930065182 > 110c110 > < guid=12923783381249452341 > --- > > guid=10715749107930065182 > 159c159 > < guid=12923783381249452341 > --- > > guid=10715749107930065182 > > ad18 has a somewhat broken label. Label 0 and 1 exist identical to the labels on the rest > label 2 and 3 are broken or nonexistant. > -------------------------------------------- > LABEL 2 > -------------------------------------------- > failed to unpack label 2 > -------------------------------------------- > LABEL 3 > -------------------------------------------- > failed to unpack label 3 > > How should I go about recovering this? > > Baldur > > On Sat, Dec 05, 2009 at 04:39:44PM +0000, Baldur Gislason wrote: > > Ok. The pool that was degraded imported cleanly but the pool that went > > unavailable won't import. > > If it is of any significance, I did change the BIOS disk controller settings > > from IDE to AHCI and then back to IDE before I noticed this pool was gone. > > > > root@enigma:~# zpool import zirconium > > cannot import 'zirconium': invalid vdev configuration > > > > pool: zirconium > > id: 16708799643457239163 > > state: UNAVAIL > > status: The pool is formatted using an older on-disk version. > > action: The pool cannot be imported due to damaged devices or data. > > config: > > > > zirconium UNAVAIL insufficient replicas > > raidz1 UNAVAIL corrupted data > > ad4 ONLINE > > ad6 ONLINE > > ad18 ONLINE > > ad20 ONLINE > > > > How do I go about debugging this? > > > > Baldur > > > > > > On Sat, Dec 05, 2009 at 11:33:33AM -0500, Gary Corcoran wrote: > > > James R. Van Artsdalen wrote: > > > > Baldur Gislason wrote: > > > >> When I plugged them back in they didn't go in the right order > > > >> and now both of my pools are broken. > > > > zpool.cache is broken. Rename /boot/zfs/zpool.cache so that ZFS won't > > > > load it, then import the pools manually. (a reboot might be needed > > > > before the import; not sure). > > > > > > If one were booting from ZFS, would you be out of luck (since you wouldn't > > > be able to access the zpool.cache before booting), or is there a way > > > around this problem? Just wondering, I've avoided booting from ZFS so far. > > > > > > > The problem is that ZFS is recording the boot-time assigned name > > > > (/dev/ad0) in the cache. I'm hoping to get GEOM to put the disk serial > > > > number in /dev, i.e., /dev/serialnum/5LZ958QL. If you created the pool > > > > using serial numbers then the cache would always work right. > > > > > > Is there any way today, to avoid using the boot assigned drive name (e.g. > > > /dev/ad2) when creating the zpool? Again just wondering, I don't need > > > a solution this year... 
> > > > > > Thanks, > > > Gary > > > > > > > > > _______________________________________________ > > > freebsd-fs@freebsd.org mailing list > > > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > > > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > > _______________________________________________ > > freebsd-fs@freebsd.org mailing list > > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Sat Dec 5 17:31:18 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 7498D106566C for ; Sat, 5 Dec 2009 17:31:18 +0000 (UTC) (envelope-from baldur@foo.is) Received: from gremlin.foo.is (gremlin.foo.is [194.105.250.10]) by mx1.freebsd.org (Postfix) with ESMTP id 193F98FC14 for ; Sat, 5 Dec 2009 17:31:18 +0000 (UTC) Received: by gremlin.foo.is (Postfix, from userid 1000) id 1A66EDA85D; Sat, 5 Dec 2009 17:31:17 +0000 (GMT) Date: Sat, 5 Dec 2009 17:31:17 +0000 From: Baldur Gislason To: freebsd-fs@freebsd.org Message-ID: <20091205173116.GO73250@gremlin.foo.is> References: <20091205152757.GK73250@gremlin.foo.is> <4B1A830D.3090900@jrv.org> <4B1A8B5D.6050808@rcn.com> <20091205163943.GL73250@gremlin.foo.is> <20091205170400.GM73250@gremlin.foo.is> <20091205171309.GN73250@gremlin.foo.is> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20091205171309.GN73250@gremlin.foo.is> User-Agent: Mutt/1.5.18 (2008-05-17) Subject: Re: ZFS and reordering drives X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 05 Dec 2009 17:31:18 -0000 Minor problem here. If I detach ata9 with atacontrol I'm able to connect the pool in degraded mode. However atacontrol won't let me attach ata9 again so I have to reboot to get ad18 back online. After reboot, ZFS will figure out that oh, there's a drive that belongs to this pool, let's connect it even if it breaks the pool. Any suggestions? I'm thinking maybe manually overwrite the label on ad18 to make it look uninitialized. Baldur On Sat, Dec 05, 2009 at 05:13:09PM +0000, Baldur Gislason wrote: > Ok. Managed to import the pool by using atacontrol to detach ad18 from the > system first. I guess I'll just do a replace then to rebuild. > > Baldur > > On Sat, Dec 05, 2009 at 05:04:00PM +0000, Baldur Gislason wrote: > > Ok. Running zdb -l on the four drives seems to indicate that one of them > > has some label issues. > > http://foo.is/~baldur/brokenzfs/ > > ad4, ad6 and ad20 all have identical labels, only differences are the ids of the disk > > holding the label, as expected. 
> > root@enigma:~# diff ad4.label ad6.label > > 12c12 > > < guid=12923783381249452341 > > --- > > > guid=972519640617937764 > > 61c61 > > < guid=12923783381249452341 > > --- > > > guid=972519640617937764 > > 110c110 > > < guid=12923783381249452341 > > --- > > > guid=972519640617937764 > > 159c159 > > < guid=12923783381249452341 > > --- > > > guid=972519640617937764 > > root@enigma:~# diff ad4.label ad20.label > > 12c12 > > < guid=12923783381249452341 > > --- > > > guid=10715749107930065182 > > 61c61 > > < guid=12923783381249452341 > > --- > > > guid=10715749107930065182 > > 110c110 > > < guid=12923783381249452341 > > --- > > > guid=10715749107930065182 > > 159c159 > > < guid=12923783381249452341 > > --- > > > guid=10715749107930065182 > > > > ad18 has a somewhat broken label. Label 0 and 1 exist identical to the labels on the rest > > label 2 and 3 are broken or nonexistant. > > -------------------------------------------- > > LABEL 2 > > -------------------------------------------- > > failed to unpack label 2 > > -------------------------------------------- > > LABEL 3 > > -------------------------------------------- > > failed to unpack label 3 > > > > How should I go about recovering this? > > > > Baldur > > > > On Sat, Dec 05, 2009 at 04:39:44PM +0000, Baldur Gislason wrote: > > > Ok. The pool that was degraded imported cleanly but the pool that went > > > unavailable won't import. > > > If it is of any significance, I did change the BIOS disk controller settings > > > from IDE to AHCI and then back to IDE before I noticed this pool was gone. > > > > > > root@enigma:~# zpool import zirconium > > > cannot import 'zirconium': invalid vdev configuration > > > > > > pool: zirconium > > > id: 16708799643457239163 > > > state: UNAVAIL > > > status: The pool is formatted using an older on-disk version. > > > action: The pool cannot be imported due to damaged devices or data. > > > config: > > > > > > zirconium UNAVAIL insufficient replicas > > > raidz1 UNAVAIL corrupted data > > > ad4 ONLINE > > > ad6 ONLINE > > > ad18 ONLINE > > > ad20 ONLINE > > > > > > How do I go about debugging this? > > > > > > Baldur > > > > > > > > > On Sat, Dec 05, 2009 at 11:33:33AM -0500, Gary Corcoran wrote: > > > > James R. Van Artsdalen wrote: > > > > > Baldur Gislason wrote: > > > > >> When I plugged them back in they didn't go in the right order > > > > >> and now both of my pools are broken. > > > > > zpool.cache is broken. Rename /boot/zfs/zpool.cache so that ZFS won't > > > > > load it, then import the pools manually. (a reboot might be needed > > > > > before the import; not sure). > > > > > > > > If one were booting from ZFS, would you be out of luck (since you wouldn't > > > > be able to access the zpool.cache before booting), or is there a way > > > > around this problem? Just wondering, I've avoided booting from ZFS so far. > > > > > > > > > The problem is that ZFS is recording the boot-time assigned name > > > > > (/dev/ad0) in the cache. I'm hoping to get GEOM to put the disk serial > > > > > number in /dev, i.e., /dev/serialnum/5LZ958QL. If you created the pool > > > > > using serial numbers then the cache would always work right. > > > > > > > > Is there any way today, to avoid using the boot assigned drive name (e.g. > > > > /dev/ad2) when creating the zpool? Again just wondering, I don't need > > > > a solution this year... 
> > > > Thanks,
> > > > Gary
> > > >
> > > >
> > > > _______________________________________________
> > > > freebsd-fs@freebsd.org mailing list
> > > > http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> > > > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
> > > _______________________________________________
> > > freebsd-fs@freebsd.org mailing list
> > > http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> > > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
> > _______________________________________________
> > freebsd-fs@freebsd.org mailing list
> > http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"

From owner-freebsd-fs@FreeBSD.ORG Sat Dec 5 18:41:13 2009
Return-Path:
Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id A43A61065679 for ; Sat, 5 Dec 2009 18:41:13 +0000 (UTC) (envelope-from baldur@foo.is)
Received: from gremlin.foo.is (gremlin.foo.is [194.105.250.10]) by mx1.freebsd.org (Postfix) with ESMTP id 492DD8FC1F for ; Sat, 5 Dec 2009 18:41:13 +0000 (UTC)
Received: by gremlin.foo.is (Postfix, from userid 1000) id 50A1FDA85D; Sat, 5 Dec 2009 18:41:12 +0000 (GMT)
Date: Sat, 5 Dec 2009 18:41:12 +0000
From: Baldur Gislason
To: freebsd-fs@freebsd.org
Message-ID: <20091205184112.GP73250@gremlin.foo.is>
References: <20091205170400.GM73250@gremlin.foo.is> <8555674.871260033069220.JavaMail.root@zimbra>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <8555674.871260033069220.JavaMail.root@zimbra>
User-Agent: Mutt/1.5.18 (2008-05-17)
Subject: Re: ZFS and reordering drives
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Filesystems
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Sat, 05 Dec 2009 18:41:13 -0000

I have managed to import the pool in degraded mode, however I am having
problems getting it back into normal operating mode.
I can detach the drive either by pulling the sata cable or by using
atacontrol, but I don't have any way of reattaching the drive to a
running system. Running atacontrol attach on the ata channel after
detaching it just gives an error, and running atacontrol reinit on the
ata channel after reconnecting the physically disconnected drive doesn't
find the drive.
There also doesn't seem to be any option in zpool to forcefully destroy
a device in a zpool and forcefully degrade the pool, and the pool
refuses to cooperate when the drive is attached.
I even tried putting the drive on a USB->SATA controller, but zpool
wouldn't let me replace it that way, saying it was too small.
What to do, what to do?

Baldur

On Sat, Dec 05, 2009 at 11:11:09AM -0600, James R. Van Artsdalen wrote:
> This is beyond what I know - someone else will need to step in.
>
> If it's a raidz1 with one bad disk you can probably just unplug the bad disk and import the pool DEGRADED (due to the missing disk).
>
> ----- Original Message -----
> From: "Baldur Gislason"
> To: freebsd-fs@freebsd.org
> Sent: Saturday, December 5, 2009 11:04:00 AM
> Subject: Re: ZFS and reordering drives
>
> Ok.
Running zdb -l on the four drives seems to indicate that one of them > has some label issues. > http://foo.is/~baldur/brokenzfs/ > ad4, ad6 and ad20 all have identical labels, only differences are the ids of the disk > holding the label, as expected. > root@enigma:~# diff ad4.label ad6.label > 12c12 > < guid=12923783381249452341 > --- > > guid=972519640617937764 > 61c61 > < guid=12923783381249452341 > --- > > guid=972519640617937764 > 110c110 > < guid=12923783381249452341 > --- > > guid=972519640617937764 > 159c159 > < guid=12923783381249452341 > --- > > guid=972519640617937764 > root@enigma:~# diff ad4.label ad20.label > 12c12 > < guid=12923783381249452341 > --- > > guid=10715749107930065182 > 61c61 > < guid=12923783381249452341 > --- > > guid=10715749107930065182 > 110c110 > < guid=12923783381249452341 > --- > > guid=10715749107930065182 > 159c159 > < guid=12923783381249452341 > --- > > guid=10715749107930065182 > > ad18 has a somewhat broken label. Label 0 and 1 exist identical to the labels on the rest > label 2 and 3 are broken or nonexistant. > -------------------------------------------- > LABEL 2 > -------------------------------------------- > failed to unpack label 2 > -------------------------------------------- > LABEL 3 > -------------------------------------------- > failed to unpack label 3 > > How should I go about recovering this? > > Baldur > > On Sat, Dec 05, 2009 at 04:39:44PM +0000, Baldur Gislason wrote: > > Ok. The pool that was degraded imported cleanly but the pool that went > > unavailable won't import. > > If it is of any significance, I did change the BIOS disk controller settings > > from IDE to AHCI and then back to IDE before I noticed this pool was gone. > > > > root@enigma:~# zpool import zirconium > > cannot import 'zirconium': invalid vdev configuration > > > > pool: zirconium > > id: 16708799643457239163 > > state: UNAVAIL > > status: The pool is formatted using an older on-disk version. > > action: The pool cannot be imported due to damaged devices or data. > > config: > > > > zirconium UNAVAIL insufficient replicas > > raidz1 UNAVAIL corrupted data > > ad4 ONLINE > > ad6 ONLINE > > ad18 ONLINE > > ad20 ONLINE > > > > How do I go about debugging this? > > > > Baldur > > > > > > On Sat, Dec 05, 2009 at 11:33:33AM -0500, Gary Corcoran wrote: > > > James R. Van Artsdalen wrote: > > > > Baldur Gislason wrote: > > > >> When I plugged them back in they didn't go in the right order > > > >> and now both of my pools are broken. > > > > zpool.cache is broken. Rename /boot/zfs/zpool.cache so that ZFS won't > > > > load it, then import the pools manually. (a reboot might be needed > > > > before the import; not sure). > > > > > > If one were booting from ZFS, would you be out of luck (since you wouldn't > > > be able to access the zpool.cache before booting), or is there a way > > > around this problem? Just wondering, I've avoided booting from ZFS so far. > > > > > > > The problem is that ZFS is recording the boot-time assigned name > > > > (/dev/ad0) in the cache. I'm hoping to get GEOM to put the disk serial > > > > number in /dev, i.e., /dev/serialnum/5LZ958QL. If you created the pool > > > > using serial numbers then the cache would always work right. > > > > > > Is there any way today, to avoid using the boot assigned drive name (e.g. > > > /dev/ad2) when creating the zpool? Again just wondering, I don't need > > > a solution this year... 
> > >
> > > Thanks,
> > > Gary
> > >
> > >
> > > _______________________________________________
> > > freebsd-fs@freebsd.org mailing list
> > > http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> > > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
> > _______________________________________________
> > freebsd-fs@freebsd.org mailing list
> > http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"

From owner-freebsd-fs@FreeBSD.ORG Sat Dec 5 18:58:01 2009
Return-Path:
Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 8735E1065693 for ; Sat, 5 Dec 2009 18:58:01 +0000 (UTC) (envelope-from rincebrain@gmail.com)
Received: from mail-fx0-f209.google.com (mail-fx0-f209.google.com [209.85.220.209]) by mx1.freebsd.org (Postfix) with ESMTP id 108BA8FC0A for ; Sat, 5 Dec 2009 18:58:00 +0000 (UTC)
Received: by fxm2 with SMTP id 2so1005006fxm.13 for ; Sat, 05 Dec 2009 10:58:00 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:in-reply-to:references :date:message-id:subject:from:to:cc:content-type :content-transfer-encoding; bh=n3e9RPrWefHyTW50wvzYsRHlnS0Cogaln4NS00BArtM=; b=M5b1BiY92bSZqRF4MjcWD/VN6RPDiI8StDnoYtFIhXsaw752cm6u6/cdHROzeMaKFb aip585+bP5IcpewTb/UxFcT5YZNjl+R30rq5xM3tfmcEjrL2bu+aaavjMwtGiHeojS1S fzpUXwhezSCHDlGYZtGp1vKxupZClwYpInLKk=
DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type:content-transfer-encoding; b=U8viq85IifmVeLd97yRaH6o7KTTvNbFconKNsoA35vd3f0O8Dq7ae6yYItNXG33kgD gkZ7HhCJuIXgzKGfNrw8Vjpyc2jYYBneurHbojODk98HvvMkJTIR354VJwDXch1D2/mQ ao2mwruUMltOY4+FQbJIb8Du5TKmxBVQugInY=
MIME-Version: 1.0
Received: by 10.239.185.77 with SMTP id b13mr454568hbh.158.1260039129048; Sat, 05 Dec 2009 10:52:09 -0800 (PST)
In-Reply-To: <20091205184112.GP73250@gremlin.foo.is>
References: <20091205170400.GM73250@gremlin.foo.is> <8555674.871260033069220.JavaMail.root@zimbra> <20091205184112.GP73250@gremlin.foo.is>
Date: Sat, 5 Dec 2009 13:52:08 -0500
Message-ID: <5da0588e0912051052p25fb743ele098ed9cb9de8fa0@mail.gmail.com>
From: Rich
To: Baldur Gislason
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable
Cc: freebsd-fs@freebsd.org
Subject: Re: ZFS and reordering drives
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Filesystems
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Sat, 05 Dec 2009 18:58:01 -0000

My suggestion would be to plug in the faulty drive in another machine
(or this machine with the pool not imported, it's mostly academic),
zero the entire drive, and then reboot with all the drives in.

It can hardly attempt to import a label it can't find.

- Rich

On Sat, Dec 5, 2009 at 1:41 PM, Baldur Gislason wrote:
> I have managed to import the pool in degraded mode, however I am having problems
> getting it back into normal operating mode.
> I can detach the drive either by pulling the sata cable or by using atacontrol
> but I don't have any ways of reattaching the drive to a running system.
> Running atacontrol attach on the ata channel after detaching it just gives an error,
> running atacontrol reinit on the ata channel after reconnecting the physically disconnected
> drive doesn't find the drive.
> And there doesn't seem to be any option in zpool to forcefully destroy a device in a zpool1,
> forcefully degrading the pool, and the pool refuses to cooperate when the drive is attached.
> I even tried putting the drive on a USB->SATA controller but zpool wouldn't let me replace
> it that way, saying it was too small.
> What to do, what to do?
>
> Baldur
>
> On Sat, Dec 05, 2009 at 11:11:09AM -0600, James R. Van Artsdalen wrote:
>> This is beyond what I know - someone else will need to step in.
>>
>> If it's a raidz1 with one bad disk you can probably just unplug the bad disk and import the pool DEGRADED (due to the missing disk).
>>
>> ----- Original Message -----
>> From: "Baldur Gislason"
>> To: freebsd-fs@freebsd.org
>> Sent: Saturday, December 5, 2009 11:04:00 AM
>> Subject: Re: ZFS and reordering drives
>>
>> Ok. Running zdb -l on the four drives seems to indicate that one of them
>> has some label issues.
>> http://foo.is/~baldur/brokenzfs/
>> ad4, ad6 and ad20 all have identical labels, only differences are the ids of the disk
>> holding the label, as expected.
>> root@enigma:~# diff ad4.label ad6.label
>> 12c12
>> <     guid=12923783381249452341
>> ---
>> >     guid=972519640617937764
>> 61c61
>> <     guid=12923783381249452341
>> ---
>> >     guid=972519640617937764
>> 110c110
>> <     guid=12923783381249452341
>> ---
>> >     guid=972519640617937764
>> 159c159
>> <     guid=12923783381249452341
>> ---
>> >     guid=972519640617937764
>> root@enigma:~# diff ad4.label ad20.label
>> 12c12
>> <     guid=12923783381249452341
>> ---
>> >     guid=10715749107930065182
>> 61c61
>> <     guid=12923783381249452341
>> ---
>> >     guid=10715749107930065182
>> 110c110
>> <     guid=12923783381249452341
>> ---
>> >     guid=10715749107930065182
>> 159c159
>> <     guid=12923783381249452341
>> ---
>> >     guid=10715749107930065182
>>
>> ad18 has a somewhat broken label. Label 0 and 1 exist identical to the labels on the rest
>> label 2 and 3 are broken or nonexistant.
>> --------------------------------------------
>> LABEL 2
>> --------------------------------------------
>> failed to unpack label 2
>> --------------------------------------------
>> LABEL 3
>> --------------------------------------------
>> failed to unpack label 3
>>
>> How should I go about recovering this?
>>
>> Baldur
>>
>> On Sat, Dec 05, 2009 at 04:39:44PM +0000, Baldur Gislason wrote:
>> > Ok. The pool that was degraded imported cleanly but the pool that went
>> > unavailable won't import.
>> > If it is of any significance, I did change the BIOS disk controller settings
>> > from IDE to AHCI and then back to IDE before I noticed this pool was gone.
>> >
>> > root@enigma:~# zpool import zirconium
>> > cannot import 'zirconium': invalid vdev configuration
>> >
>> >   pool: zirconium
>> >     id: 16708799643457239163
>> >  state: UNAVAIL
>> > status: The pool is formatted using an older on-disk version.
>> > action: The pool cannot be imported due to damaged devices or data.
>> > config:
>> >
>> >         zirconium   UNAVAIL  insufficient replicas
>> >           raidz1    UNAVAIL  corrupted data
>> >             ad4     ONLINE
>> >             ad6     ONLINE
>> >             ad18    ONLINE
>> >             ad20    ONLINE
>> >
>> > How do I go about debugging this?
>> >
>> > Baldur
>> >
>> >
>> > On Sat, Dec 05, 2009 at 11:33:33AM -0500, Gary Corcoran wrote:
>> > > James R. Van Artsdalen wrote:
>> > > > Baldur Gislason wrote:
>> > > >> When I plugged them back in they didn't go in the right order
>> > > >> and now both of my pools are broken.
>> > > > zpool.cache is broken.  Rename /boot/zfs/zpool.cache so that ZFS won't
>> > > > load it, then import the pools manually.  (a reboot might be needed
>> > > > before the import; not sure).
>> > >
>> > > If one were booting from ZFS, would you be out of luck (since you wouldn't
>> > > be able to access the zpool.cache before booting), or is there a way
>> > > around this problem?  Just wondering, I've avoided booting from ZFS so far.
>> > >
>> > > > The problem is that ZFS is recording the boot-time assigned name
>> > > > (/dev/ad0) in the cache.  I'm hoping to get GEOM to put the disk serial
>> > > > number in /dev, i.e., /dev/serialnum/5LZ958QL.  If you created the pool
>> > > > using serial numbers then the cache would always work right.
>> > >
>> > > Is there any way today, to avoid using the boot assigned drive name (e.g.
>> > > /dev/ad2) when creating the zpool?  Again just wondering, I don't need
>> > > a solution this year...
>> > >
>> > > Thanks,
>> > > Gary
>> > >
>> > >
>> > > _______________________________________________
>> > > freebsd-fs@freebsd.org mailing list
>> > > http://lists.freebsd.org/mailman/listinfo/freebsd-fs
>> > > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
>> > _______________________________________________
>> > freebsd-fs@freebsd.org mailing list
>> > http://lists.freebsd.org/mailman/listinfo/freebsd-fs
>> > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
>> _______________________________________________
>> freebsd-fs@freebsd.org mailing list
>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"

--
my US geograpy is lousy...lol so's mine and I live here
Make no little plans; they have no ma...
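As a concrete sketch of Rich's zero-the-drive suggestion: the commands
below wipe the ZFS labels from the suspect disk so that a later import
cannot find them. The device name ad18 comes from this thread, the
diskinfo arithmetic assumes 512-byte sectors, and the whole thing is
destructive, so it is only appropriate for a drive that will be
resilvered from scratch afterwards.

# Option 1, exactly what is suggested above: zero the entire drive.
# Slow on a ~400 GB disk, but leaves nothing for 'zpool import' to find.
dd if=/dev/zero of=/dev/ad18 bs=1m

# Option 2: clear only the vdev labels. ZFS keeps four copies:
# two in the first 512 kB of the device and two in the last 512 kB.
dd if=/dev/zero of=/dev/ad18 bs=1m count=1              # labels 0 and 1
total=$(diskinfo /dev/ad18 | awk '{print $3 / $2}')     # media size in sectors
dd if=/dev/zero of=/dev/ad18 bs=512 oseek=$((total - 4096)) count=4096  # last 2 MB, labels 2 and 3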
From owner-freebsd-fs@FreeBSD.ORG Sat Dec 5 19:06:43 2009
Return-Path:
Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 849E31065670 for ; Sat, 5 Dec 2009 19:06:43 +0000 (UTC) (envelope-from baldur@foo.is)
Received: from gremlin.foo.is (gremlin.foo.is [194.105.250.10]) by mx1.freebsd.org (Postfix) with ESMTP id F033F8FC17 for ; Sat, 5 Dec 2009 19:06:42 +0000 (UTC)
Received: by gremlin.foo.is (Postfix, from userid 1000) id 3714EDA85D; Sat, 5 Dec 2009 19:06:41 +0000 (GMT)
Date: Sat, 5 Dec 2009 19:06:41 +0000
From: Baldur Gislason
To: freebsd-fs@freebsd.org
Message-ID: <20091205190641.GQ73250@gremlin.foo.is>
References: <20091205170400.GM73250@gremlin.foo.is> <8555674.871260033069220.JavaMail.root@zimbra> <20091205184112.GP73250@gremlin.foo.is> <5da0588e0912051052p25fb743ele098ed9cb9de8fa0@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <5da0588e0912051052p25fb743ele098ed9cb9de8fa0@mail.gmail.com>
User-Agent: Mutt/1.5.18 (2008-05-17)
Subject: Re: ZFS and reordering drives
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Filesystems
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Sat, 05 Dec 2009 19:06:43 -0000

I did that already; I figured out that the labels are found at the
beginning and the end of the disk. Overwriting the first and last
megabytes of the disk means that zdb -l now finds no label on the disk,
but even after removing the cache file and rebooting, zpool still
insists the disk is a part of the pool and won't import the pool when
it's attached.

Baldur

On Sat, Dec 05, 2009 at 01:52:08PM -0500, Rich wrote:
> My suggestion would be to plug in the faulty drive in another machine
> (or this machine with the pool not imported, it's mostly academic),
> zero the entire drive, and then reboot with all the drives in.
>
> It can hardly attempt to import a label it can't find.
>
> - Rich
>
> On Sat, Dec 5, 2009 at 1:41 PM, Baldur Gislason wrote:
> > I have managed to import the pool in degraded mode, however I am having problems
> > getting it back into normal operating mode.
> > I can detach the drive either by pulling the sata cable or by using atacontrol
> > but I don't have any ways of reattaching the drive to a running system.
> > Running atacontrol attach on the ata channel after detaching it just gives an error,
> > running atacontrol reinit on the ata channel after reconnecting the physically disconnected
> > drive doesn't find the drive.
> > And there doesn't seem to be any option in zpool to forcefully destroy a device in a zpool1,
> > forcefully degrading the pool, and the pool refuses to cooperate when the drive is attached.
> > I even tried putting the drive on a USB->SATA controller but zpool wouldn't let me replace
> > it that way, saying it was too small.
> > What to do, what to do?
> >
> > Baldur
> >
> > On Sat, Dec 05, 2009 at 11:11:09AM -0600, James R. Van Artsdalen wrote:
> >> This is beyond what I know - someone else will need to step in.
> >>
> >> If it's a raidz1 with one bad disk you can probably just unplug the bad disk and import the pool DEGRADED (due to the missing disk).
> >> > >> ----- Original Message ----- > >> From: "Baldur Gislason" > >> To: freebsd-fs@freebsd.org > >> Sent: Saturday, December 5, 2009 11:04:00 AM > >> Subject: Re: ZFS and reordering drives > >> > >> Ok. Running zdb -l on the four drives seems to indicate that one of them > >> has some label issues. > >> http://foo.is/~baldur/brokenzfs/ > >> ad4, ad6 and ad20 all have identical labels, only differences are the ids of the disk > >> holding the label, as expected. > >> root@enigma:~# diff ad4.label ad6.label > >> 12c12 > >> <     guid=12923783381249452341 > >> --- > >> >     guid=972519640617937764 > >> 61c61 > >> <     guid=12923783381249452341 > >> --- > >> >     guid=972519640617937764 > >> 110c110 > >> <     guid=12923783381249452341 > >> --- > >> >     guid=972519640617937764 > >> 159c159 > >> <     guid=12923783381249452341 > >> --- > >> >     guid=972519640617937764 > >> root@enigma:~# diff ad4.label ad20.label > >> 12c12 > >> <     guid=12923783381249452341 > >> --- > >> >     guid=10715749107930065182 > >> 61c61 > >> <     guid=12923783381249452341 > >> --- > >> >     guid=10715749107930065182 > >> 110c110 > >> <     guid=12923783381249452341 > >> --- > >> >     guid=10715749107930065182 > >> 159c159 > >> <     guid=12923783381249452341 > >> --- > >> >     guid=10715749107930065182 > >> > >> ad18 has a somewhat broken label. Label 0 and 1 exist identical to the labels on the rest > >> label 2 and 3 are broken or nonexistant. > >> -------------------------------------------- > >> LABEL 2 > >> -------------------------------------------- > >> failed to unpack label 2 > >> -------------------------------------------- > >> LABEL 3 > >> -------------------------------------------- > >> failed to unpack label 3 > >> > >> How should I go about recovering this? > >> > >> Baldur > >> > >> On Sat, Dec 05, 2009 at 04:39:44PM +0000, Baldur Gislason wrote: > >> > Ok. The pool that was degraded imported cleanly but the pool that went > >> > unavailable won't import. > >> > If it is of any significance, I did change the BIOS disk controller settings > >> > from IDE to AHCI and then back to IDE before I noticed this pool was gone. > >> > > >> > root@enigma:~# zpool import zirconium > >> > cannot import 'zirconium': invalid vdev configuration > >> > > >> >   pool: zirconium > >> >     id: 16708799643457239163 > >> >  state: UNAVAIL > >> > status: The pool is formatted using an older on-disk version. > >> > action: The pool cannot be imported due to damaged devices or data. > >> > config: > >> > > >> >         zirconium   UNAVAIL  insufficient replicas > >> >           raidz1    UNAVAIL  corrupted data > >> >             ad4     ONLINE > >> >             ad6     ONLINE > >> >             ad18    ONLINE > >> >             ad20    ONLINE > >> > > >> > How do I go about debugging this? > >> > > >> > Baldur > >> > > >> > > >> > On Sat, Dec 05, 2009 at 11:33:33AM -0500, Gary Corcoran wrote: > >> > > James R. Van Artsdalen wrote: > >> > > > Baldur Gislason wrote: > >> > > >> When I plugged them back in they didn't go in the right order > >> > > >> and now both of my pools are broken. > >> > > > zpool.cache is broken.  Rename /boot/zfs/zpool.cache so that ZFS won't > >> > > > load it, then import the pools manually.  (a reboot might be needed > >> > > > before the import; not sure). > >> > > > >> > > If one were booting from ZFS, would you be out of luck (since you wouldn't > >> > > be able to access the zpool.cache before booting), or is there a way > >> > > around this problem?  
Just wondering, I've avoided booting from ZFS so far. > >> > > > >> > > > The problem is that ZFS is recording the boot-time assigned name > >> > > > (/dev/ad0) in the cache.  I'm hoping to get GEOM to put the disk serial > >> > > > number in /dev, i.e., /dev/serialnum/5LZ958QL.  If you created the pool > >> > > > using serial numbers then the cache would always work right. > >> > > > >> > > Is there any way today, to avoid using the boot assigned drive name (e.g. > >> > > /dev/ad2) when creating the zpool?  Again just wondering, I don't need > >> > > a solution this year... > >> > > > >> > > Thanks, > >> > > Gary > >> > > > >> > > > >> > > _______________________________________________ > >> > > freebsd-fs@freebsd.org mailing list > >> > > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > >> > > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > >> > _______________________________________________ > >> > freebsd-fs@freebsd.org mailing list > >> > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > >> > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > >> _______________________________________________ > >> freebsd-fs@freebsd.org mailing list > >> http://lists.freebsd.org/mailman/listinfo/freebsd-fs > >> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > > _______________________________________________ > > freebsd-fs@freebsd.org mailing list > > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > > > > > > -- > > my US geograpy is lousy...lol so's mine and I live here Make no little > plans; they have no ma... From owner-freebsd-fs@FreeBSD.ORG Sat Dec 5 19:15:28 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 553451065679 for ; Sat, 5 Dec 2009 19:15:28 +0000 (UTC) (envelope-from baldur@foo.is) Received: from gremlin.foo.is (gremlin.foo.is [194.105.250.10]) by mx1.freebsd.org (Postfix) with ESMTP id D71078FC46 for ; Sat, 5 Dec 2009 19:15:27 +0000 (UTC) Received: by gremlin.foo.is (Postfix, from userid 1000) id 08D88DA85D; Sat, 5 Dec 2009 19:15:27 +0000 (GMT) Date: Sat, 5 Dec 2009 19:15:26 +0000 From: Baldur Gislason To: freebsd-fs@freebsd.org Message-ID: <20091205191526.GR73250@gremlin.foo.is> References: <20091205170400.GM73250@gremlin.foo.is> <8555674.871260033069220.JavaMail.root@zimbra> <20091205184112.GP73250@gremlin.foo.is> <5da0588e0912051052p25fb743ele098ed9cb9de8fa0@mail.gmail.com> <20091205190641.GQ73250@gremlin.foo.is> MIME-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 Content-Disposition: inline Content-Transfer-Encoding: 8bit In-Reply-To: <20091205190641.GQ73250@gremlin.foo.is> User-Agent: Mutt/1.5.18 (2008-05-17) Subject: Re: ZFS and reordering drives X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 05 Dec 2009 19:15:28 -0000 Ok. I'm starting to think that all hope is lost and I just have to rebuild the pool. I moved the drive to a different sata port, which let me start the system with the drive attached and import the pool in degraded mode, but when I try to do a replace, no luck. 
root@enigma:~# zpool replace zirconium 10805584276324217146 ad16
cannot replace 10805584276324217146 with ad16: device is too small

Baldur

On Sat, Dec 05, 2009 at 07:06:41PM +0000, Baldur Gislason wrote:
> I did that already, I figured out that the labels are
> found at the beginnin and the end of the disk. Overwriting the first
> and last megabytes of the disk has given the result that zdb -l finds no label
> on the disk but even after removing the cache file and rebooting, zpool still
> insists the disk is a part of the pool and won't import the pool when it's attached.
>
> Baldur
>
>
> On Sat, Dec 05, 2009 at 01:52:08PM -0500, Rich wrote:
> > My suggestion would be to plug in the faulty drive in another machine
> > (or this machine with the pool not imported, it's mostly academic),
> > zero the entire drive, and then reboot with all the drives in.
> >
> > It can hardly attempt to import a label it can't find.
> >
> > - Rich
> >
> > On Sat, Dec 5, 2009 at 1:41 PM, Baldur Gislason wrote:
> > > I have managed to import the pool in degraded mode, however I am having problems
> > > getting it back into normal operating mode.
> > > I can detach the drive either by pulling the sata cable or by using atacontrol
> > > but I don't have any ways of reattaching the drive to a running system.
> > > Running atacontrol attach on the ata channel after detaching it just gives an error,
> > > running atacontrol reinit on the ata channel after reconnecting the physically disconnected
> > > drive doesn't find the drive.
> > > And there doesn't seem to be any option in zpool to forcefully destroy a device in a zpool1,
> > > forcefully degrading the pool, and the pool refuses to cooperate when the drive is attached.
> > > I even tried putting the drive on a USB->SATA controller but zpool wouldn't let me replace
> > > it that way, saying it was too small.
> > > What to do, what to do?
> > >
> > > Baldur
> > >
> > > On Sat, Dec 05, 2009 at 11:11:09AM -0600, James R. Van Artsdalen wrote:
> > >> This is beyond what I know - someone else will need to step in.
> > >>
> > >> If it's a raidz1 with one bad disk you can probably just unplug the bad disk and import the pool DEGRADED (due to the missing disk).
> > >>
> > >> ----- Original Message -----
> > >> From: "Baldur Gislason"
> > >> To: freebsd-fs@freebsd.org
> > >> Sent: Saturday, December 5, 2009 11:04:00 AM
> > >> Subject: Re: ZFS and reordering drives
> > >>
> > >> Ok. Running zdb -l on the four drives seems to indicate that one of them
> > >> has some label issues.
> > >> http://foo.is/~baldur/brokenzfs/
> > >> ad4, ad6 and ad20 all have identical labels, only differences are the ids of the disk
> > >> holding the label, as expected.
> > >>
> > >> root@enigma:~# diff ad4.label ad6.label
> > >> 12c12
> > >> <     guid=12923783381249452341
> > >> ---
> > >> >     guid=972519640617937764
> > >> 61c61
> > >> <     guid=12923783381249452341
> > >> ---
> > >> >     guid=972519640617937764
> > >> 110c110
> > >> <     guid=12923783381249452341
> > >> ---
> > >> >     guid=972519640617937764
> > >> 159c159
> > >> <     guid=12923783381249452341
> > >> ---
> > >> >     guid=972519640617937764
> > >> root@enigma:~# diff ad4.label ad20.label
> > >> 12c12
> > >> <     guid=12923783381249452341
> > >> ---
> > >> >     guid=10715749107930065182
> > >> 61c61
> > >> <     guid=12923783381249452341
> > >> ---
> > >> >     guid=10715749107930065182
> > >> 110c110
> > >> <     guid=12923783381249452341
> > >> ---
> > >> >     guid=10715749107930065182
> > >> 159c159
> > >> <     guid=12923783381249452341
> > >> ---
> > >> >     guid=10715749107930065182
> > >>
> > >> ad18 has a somewhat broken label. Labels 0 and 1 exist, identical to the
> > >> labels on the rest; labels 2 and 3 are broken or nonexistent.
> > >> --------------------------------------------
> > >> LABEL 2
> > >> --------------------------------------------
> > >> failed to unpack label 2
> > >> --------------------------------------------
> > >> LABEL 3
> > >> --------------------------------------------
> > >> failed to unpack label 3
> > >>
> > >> How should I go about recovering this?
> > >>
> > >> Baldur
> > >>
> > >> On Sat, Dec 05, 2009 at 04:39:44PM +0000, Baldur Gislason wrote:
> > >> > Ok. The pool that was degraded imported cleanly, but the pool that went
> > >> > unavailable won't import.
> > >> > If it is of any significance, I did change the BIOS disk controller settings
> > >> > from IDE to AHCI and then back to IDE before I noticed this pool was gone.
> > >> >
> > >> > root@enigma:~# zpool import zirconium
> > >> > cannot import 'zirconium': invalid vdev configuration
> > >> >
> > >> >   pool: zirconium
> > >> >     id: 16708799643457239163
> > >> >  state: UNAVAIL
> > >> > status: The pool is formatted using an older on-disk version.
> > >> > action: The pool cannot be imported due to damaged devices or data.
> > >> > config:
> > >> >
> > >> >         zirconium   UNAVAIL  insufficient replicas
> > >> >           raidz1    UNAVAIL  corrupted data
> > >> >             ad4     ONLINE
> > >> >             ad6     ONLINE
> > >> >             ad18    ONLINE
> > >> >             ad20    ONLINE
> > >> >
> > >> > How do I go about debugging this?
> > >> >
> > >> > Baldur
> > >> >
> > >> > On Sat, Dec 05, 2009 at 11:33:33AM -0500, Gary Corcoran wrote:
> > >> > > James R. Van Artsdalen wrote:
> > >> > > > Baldur Gislason wrote:
> > >> > > >> When I plugged them back in they didn't go in the right order
> > >> > > >> and now both of my pools are broken.
> > >> > > > zpool.cache is broken.  Rename /boot/zfs/zpool.cache so that ZFS won't
> > >> > > > load it, then import the pools manually.  (a reboot might be needed
> > >> > > > before the import; not sure).
> > >> > > >
> > >> > > If one were booting from ZFS, would you be out of luck (since you wouldn't
> > >> > > be able to access the zpool.cache before booting), or is there a way
> > >> > > around this problem?  Just wondering, I've avoided booting from ZFS so far.
> > >> > >
> > >> > > > The problem is that ZFS is recording the boot-time assigned name
> > >> > > > (/dev/ad0) in the cache.  I'm hoping to get GEOM to put the disk serial
> > >> > > > number in /dev, i.e., /dev/serialnum/5LZ958QL.  If you created the pool
> > >> > > > using serial numbers then the cache would always work right.
> > >> > > >
> > >> > > Is there any way today to avoid using the boot-assigned drive name (e.g.
> > >> > > /dev/ad2) when creating the zpool?  Again just wondering, I don't need
> > >> > > a solution this year...
> > >> > >
> > >> > > Thanks,
> > >> > > Gary
>
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
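[A note on the question above: on FreeBSD, glabel(8) already offers a way to
create a pool that does not depend on boot-assigned adN names. glabel writes
its metadata to the provider's last sector and exports a /dev/label/<name>
node that is one sector smaller, so the disks must be labeled before the pool
is created, never afterwards. A minimal sketch, assuming two spare disks ad2
and ad3; the label and pool names here are only illustrative:

# write a permanent label to each whole disk (empty disks only; glabel
# claims the last sector, which ZFS would otherwise use for labels 2/3)
glabel label disk0 ad2
glabel label disk1 ad3
# build the pool on the stable label nodes instead of the raw devices
zpool create tank mirror label/disk0 label/disk1

Because ZFS then records /dev/label/disk0 and /dev/label/disk1 in
zpool.cache, reshuffling SATA ports should not invalidate the cached
configuration.]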
From owner-freebsd-fs@FreeBSD.ORG Sat Dec 5 19:21:11 2009
Return-Path:
Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 987011065672 for ; Sat, 5 Dec 2009 19:21:11 +0000 (UTC) (envelope-from rincebrain@gmail.com)
Received: from mail-fx0-f209.google.com (mail-fx0-f209.google.com [209.85.220.209]) by mx1.freebsd.org (Postfix) with ESMTP id EC5608FC0A for ; Sat, 5 Dec 2009 19:21:10 +0000 (UTC)
Received: by fxm2 with SMTP id 2so1014325fxm.13 for ; Sat, 05 Dec 2009 11:21:10 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:in-reply-to:references :date:message-id:subject:from:to:cc:content-type; bh=kYkVpC0L4qo5OMnzWBiqR7V7s02FaBiMj/S430haKwE=; b=cro3XoiP2gxwLNO7SyNj8sNIxdNMKp0f26pWbOXy5Pk991mPMKehRI3pOgPZLYRNSk CzH6izjrrpX3RfmJn+I5v32xEujqMmP8Y/KR9E0FrBxAE28TeWqFvjRncmB+2duaioC7 WVKf2uVqt8TRRduMUDk/yEgBv+cu2LWga/bLE=
DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; b=wPPs8nkerk8o9NfN8W04C3RcEXQB9nBTVO2O3wISHu4SEDoqwhxrcMgfuobtTKTr0O qg7EO15i1N+zsjeiqAIAvt36t/NxV69OSuuMWFd6iUnZb0VMNSgf5afm1fTm5FAhfWjU HQ5XQQJq0UpMnPrjp9BE+LfPjqx7TCimRvXos=
MIME-Version: 1.0
Received: by 10.239.145.15 with SMTP id q15mr462957hba.121.1260040869799; Sat, 05 Dec 2009 11:21:09 -0800 (PST)
In-Reply-To: <20091205191526.GR73250@gremlin.foo.is>
References: <20091205170400.GM73250@gremlin.foo.is> <8555674.871260033069220.JavaMail.root@zimbra> <20091205184112.GP73250@gremlin.foo.is> <5da0588e0912051052p25fb743ele098ed9cb9de8fa0@mail.gmail.com> <20091205190641.GQ73250@gremlin.foo.is> <20091205191526.GR73250@gremlin.foo.is>
Date: Sat, 5 Dec 2009 14:21:09 -0500
Message-ID: <5da0588e0912051121i32d09b37xe44057add4c052f3@mail.gmail.com>
From: Rich
To: Baldur Gislason
Content-Type: text/plain; charset=ISO-8859-1
Cc: freebsd-fs@freebsd.org
Subject: Re: ZFS and reordering drives
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Filesystems
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Sat, 05 Dec 2009 19:21:11 -0000

That's quite fascinating.

What are the details on the drives?

[relatedly, use smartctl to tell you if there are any unrecoverable
sectors? It may have shrunk the visible drive size, though I find that
unlikely.]

- Rich

From owner-freebsd-fs@FreeBSD.ORG Sat Dec 5 20:06:35 2009
Return-Path:
Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 9E367106566C for ; Sat, 5 Dec 2009 20:06:35 +0000 (UTC) (envelope-from baldur@foo.is)
Received: from gremlin.foo.is (gremlin.foo.is [194.105.250.10]) by mx1.freebsd.org (Postfix) with ESMTP id 5F74C8FC0C for ; Sat, 5 Dec 2009 20:06:35 +0000 (UTC)
Received: by gremlin.foo.is (Postfix, from userid 1000) id 7BB69DA889; Sat, 5 Dec 2009 20:06:34 +0000 (GMT)
Date: Sat, 5 Dec 2009 20:06:34 +0000
From: Baldur Gislason
To: Rich
Message-ID: <20091205200634.GS73250@gremlin.foo.is>
References: <20091205170400.GM73250@gremlin.foo.is> <8555674.871260033069220.JavaMail.root@zimbra> <20091205184112.GP73250@gremlin.foo.is> <5da0588e0912051052p25fb743ele098ed9cb9de8fa0@mail.gmail.com> <20091205190641.GQ73250@gremlin.foo.is> <20091205191526.GR73250@gremlin.foo.is> <5da0588e0912051121i32d09b37xe44057add4c052f3@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <5da0588e0912051121i32d09b37xe44057add4c052f3@mail.gmail.com>
User-Agent: Mutt/1.5.18 (2008-05-17)
Cc: freebsd-fs@freebsd.org
Subject: Re: ZFS and reordering drives
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Filesystems
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Sat, 05 Dec 2009 20:06:35 -0000

It is indeed.

root@enigma:~# smartctl -a /dev/ad4 | grep Capa
User Capacity:    1,000,204,886,016 bytes
root@enigma:~# smartctl -a /dev/ad6 | grep Capa
User Capacity:    1,000,204,886,016 bytes
root@enigma:~# smartctl -a /dev/ad16 | grep Capa
User Capacity:    1,000,203,804,160 bytes
root@enigma:~# smartctl -a /dev/ad20 | grep Capa
User Capacity:    1,000,204,886,016 bytes

The problematic drive now seems to have 1 MB less capacity than the rest.
I uploaded the output from smartctl -a for all the drives to
http://foo.is/~baldur/brokenzfs
They're all identical drives, bought at the same time and the same place.

root@enigma:~# smartctl -a /dev/ad4 | grep WD
Device Model:     WDC WD10EADS-00L5B1
Serial Number:    WD-WCAU48443245
root@enigma:~# smartctl -a /dev/ad6 | grep WD
Device Model:     WDC WD10EADS-00L5B1
Serial Number:    WD-WCAU48472608
root@enigma:~# smartctl -a /dev/ad16 | grep WD
Device Model:     WDC WD10EADS-00L5B1
Serial Number:    WD-WCAU48509212
root@enigma:~# smartctl -a /dev/ad20 | grep WD
Device Model:     WDC WD10EADS-00L5B1
Serial Number:    WD-WCAU48410170

The fact that it has shrunk in size explains why the labels at the end of
the drive got lost.

Baldur

On Sat, Dec 05, 2009 at 02:21:09PM -0500, Rich wrote:
> That's quite fascinating.
>
> What are the details on the drives?
>
> [relatedly, use smartctl to tell you if there are any unrecoverable
> sectors? It may have shrunk the visible drive size, though I find that
> unlikely.]
>
> - Rich
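[The capacities above differ by 1,000,204,886,016 - 1,000,203,804,160 =
1,081,856 bytes, exactly 2,113 sectors of 512 bytes, which is why the ZFS
labels at the end of ad16 vanished. A shrink like this on an otherwise
healthy drive is what a host protected area being enabled would look like,
rather than media failure, though that is only a guess from the numbers.
GEOM's view of the sizes can be compared with diskinfo(8) from the base
system; device names as in the thread:

# compare what GEOM reports for each disk; ad16 is the odd one out
for d in ad4 ad6 ad16 ad20; do echo "== $d"; diskinfo -v $d | grep mediasize; done
# the shortfall expressed in 512-byte sectors
echo $(( (1000204886016 - 1000203804160) / 512 ))

If an HPA is the cause, restoring the native capacity (and with it labels 2
and 3 at the true end of the disk) might make the replace unnecessary.]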
From owner-freebsd-fs@FreeBSD.ORG Sat Dec 5 20:40:03 2009
Return-Path:
Delivered-To: freebsd-fs@hub.freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 945D31065670 for ; Sat, 5 Dec 2009 20:40:03 +0000 (UTC) (envelope-from gnats@FreeBSD.org)
Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 83EC48FC08 for ; Sat, 5 Dec 2009 20:40:03 +0000 (UTC)
Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.3/8.14.3) with ESMTP id nB5Ke30l097302 for ; Sat, 5 Dec 2009 20:40:03 GMT (envelope-from gnats@freefall.freebsd.org)
Received: (from gnats@localhost) by freefall.freebsd.org (8.14.3/8.14.3/Submit) id nB5Ke3Qd097301; Sat, 5 Dec 2009 20:40:03 GMT (envelope-from gnats)
Date: Sat, 5 Dec 2009 20:40:03 GMT
Message-Id: <200912052040.nB5Ke3Qd097301@freefall.freebsd.org>
To: freebsd-fs@FreeBSD.org
From: dfilter@FreeBSD.ORG (dfilter service)
Cc:
Subject: Re: kern/141177: commit references a PR
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
Reply-To: dfilter service
List-Id: Filesystems
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Sat, 05 Dec 2009 20:40:03 -0000

The following reply was made to PR kern/141177; it has been noted by GNATS.

From: dfilter@FreeBSD.ORG (dfilter service)
To: bug-followup@FreeBSD.org
Cc:
Subject: Re: kern/141177: commit references a PR
Date: Sat, 5 Dec 2009 20:36:54 +0000 (UTC)

 Author: kib
 Date: Sat Dec 5 20:36:42 2009
 New Revision: 200162
 URL: http://svn.freebsd.org/changeset/base/200162

 Log:
   Change VOP_FSYNC for zfs vnode from VOP_PANIC to zfs_freebsd_fsync(),
   both to not panic when fsync(2) is called for fifo on zfs filedescriptor,
   and to actually fsync fifo inode to permanent storage.

   PR:          kern/141177
   Reviewed by: pjd
   MFC after:   1 week

 Modified:
   head/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c

 Modified: head/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c
 ==============================================================================
 --- head/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c	Sat Dec 5 20:26:55 2009	(r200161)
 +++ head/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c	Sat Dec 5 20:36:42 2009	(r200162)
 @@ -5009,7 +5009,7 @@ struct vop_vector zfs_vnodeops = {

 struct vop_vector zfs_fifoops = {
 	.vop_default =	&fifo_specops,
 -	.vop_fsync =	VOP_PANIC,
 +	.vop_fsync =	zfs_freebsd_fsync,
 	.vop_access =	zfs_freebsd_access,
 	.vop_getattr =	zfs_freebsd_getattr,
 	.vop_inactive =	zfs_freebsd_inactive,
 _______________________________________________
 svn-src-all@freebsd.org mailing list
 http://lists.freebsd.org/mailman/listinfo/svn-src-all
 To unsubscribe, send any mail to "svn-src-all-unsubscribe@freebsd.org"
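[For readers following the PR: before r200162, zfs_fifoops mapped .vop_fsync
to VOP_PANIC, so any fsync(2) on a file descriptor referring to a fifo that
lives on a ZFS dataset would panic the kernel. A minimal userland sketch of
such a trigger; the /tank/fifo path is hypothetical, and O_RDWR is used so
the open() does not block waiting for a peer:

#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int
main(void)
{
	const char *path = "/tank/fifo";	/* hypothetical fifo on a ZFS dataset */
	int fd;

	if (mkfifo(path, 0600) == -1) {
		perror("mkfifo");
		return (1);
	}
	/* O_RDWR keeps open() from blocking until a peer appears. */
	if ((fd = open(path, O_RDWR)) == -1) {
		perror("open");
		return (1);
	}
	/* Before r200162 this reached VOP_PANIC; now it syncs the fifo's inode. */
	if (fsync(fd) == -1)
		perror("fsync");
	close(fd);
	unlink(path);
	return (0);
}

]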
From owner-freebsd-fs@FreeBSD.ORG Sat Dec 5 20:47:59 2009
Return-Path:
Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 8C4D7106566C for ; Sat, 5 Dec 2009 20:47:59 +0000 (UTC) (envelope-from rincebrain@gmail.com)
Received: from fg-out-1718.google.com (fg-out-1718.google.com [72.14.220.152]) by mx1.freebsd.org (Postfix) with ESMTP id 1F1738FC17 for ; Sat, 5 Dec 2009 20:47:58 +0000 (UTC)
Received: by fg-out-1718.google.com with SMTP id 19so274181fgg.13 for ; Sat, 05 Dec 2009 12:47:58 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:in-reply-to:references :date:message-id:subject:from:to:cc:content-type; bh=y4kirvFeqHbFE0d8TqD+KyhOC/7WyAkw30y4WJwNOSM=; b=VZDi2zLOU+qXdrfbp+PT9EnXsej9yRjeA+fDp2ECl7WJfB0agdear4mrvPPALDvRjH mEExQteGlFdiCuMgTqTyKjgnRuci1U4yMGG+ccHY1T6wPXo/bDnDFHqgc9Vy+d6UCs6d XNy7vUzxzPzhZEhhTGh2tpwFBLiMdjky8nOa4=
DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; b=pV7mO5h1YdIcsbTmxvzg9WxxdxouF1r8HlpHcdTEAAYaahOcf9t2Z0LI9GEukBY2M4 DSVXZjvuSe1Y+fnWcvGV7lSguSxXkXL4ZOfyBTVPlzEqD708bfL35k/tKHqrjpdM1MY0 Sy2gCPX27e2LCXYDdegQQbkf9ioJEtWqG+5c4=
MIME-Version: 1.0
Received: by 10.239.197.135 with SMTP id z7mr464282hbi.211.1260046078112; Sat, 05 Dec 2009 12:47:58 -0800 (PST)
In-Reply-To: <20091205200634.GS73250@gremlin.foo.is>
References: <20091205170400.GM73250@gremlin.foo.is> <8555674.871260033069220.JavaMail.root@zimbra> <20091205184112.GP73250@gremlin.foo.is> <5da0588e0912051052p25fb743ele098ed9cb9de8fa0@mail.gmail.com> <20091205190641.GQ73250@gremlin.foo.is> <20091205191526.GR73250@gremlin.foo.is> <5da0588e0912051121i32d09b37xe44057add4c052f3@mail.gmail.com> <20091205200634.GS73250@gremlin.foo.is>
Date: Sat, 5 Dec 2009 15:47:58 -0500
Message-ID: <5da0588e0912051247u14eb92afm5a7e8edeec299f6f@mail.gmail.com>
From: Rich
To: Baldur Gislason
Content-Type: text/plain; charset=ISO-8859-1
Cc: freebsd-fs@freebsd.org
Subject: Re: ZFS and reordering drives
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Filesystems
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Sat, 05 Dec 2009 20:47:59 -0000

I'm simultaneously amused and confused, as I didn't expect to be right...

RMA the drive?

- Rich
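[Before shipping the drive back, it may be worth letting it grade itself;
smartmontools can run the firmware's extended self-test and report the
result. Device name as in the thread:

# start the drive's extended (long) self-test; smartctl prints the
# estimated completion time
smartctl -t long /dev/ad16
# after it finishes, read the self-test log and the overall health verdict
smartctl -l selftest /dev/ad16
smartctl -H /dev/ad16

Note that an accidentally enabled host protected area would pass all of
these checks; if the missing ~1 MB of capacity can be restored, the drive
may not need an RMA at all.]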