From: JoaoBR <joao@matik.com.br>
Organization: Infomatik
To: freebsd-bugs@freebsd.org
Cc: Torfinn Ingolfsen, Jeremy Chadwick, freebsd-stable, Greg Byshenk
Date: Sun, 18 May 2008 15:11:55 -0300
Subject: Re: possible zfs bug? lost all pools
Message-Id: <200805181511.56646.joao@matik.com.br>
In-Reply-To: <20080518153911.GA22300@eos.sc1.parodius.com>

On Sunday 18 May 2008 12:39:11 Jeremy Chadwick wrote:
> On Sun, May 18, 2008 at 12:20:33PM -0300, JoaoBR wrote:
> > On Sunday 18 May 2008 11:11:38 Greg Byshenk wrote:
> > > On Sun, May 18, 2008 at 09:56:17AM -0300, JoaoBR wrote:
> > > > after trying to mount my zfs pools in single user mode I got the
> > > > following message for each:
> > > >
> > > > May 18 09:09:36 gw kernel: ZFS: WARNING: pool 'cache1' could not be
> > > > loaded as it was last accessed by another system (host:
> > > > gw.bb1.matik.com.br hostid: 0xbefb4a0f). See:
> > > > http://www.sun.com/msg/ZFS-8000-EY
> > > >
> > > > any zpool command returned nothing other than that the zfs did not
> > > > exist; it seems the zfs info on the disks was gone
> > > >
> > > > to double-check I recreated them, rebooted in single user mode and
> > > > repeated the story, same thing: /etc/rc.d/zfs start returns the above
> > > > message and the pools are gone ...
> > > >
> > > > I guess this is kind of wrong
> > >
> > > I think the problem is related to the absence of a hostid when in
> > > single-user mode. Try running '/etc/rc.d/hostid start' before mounting.
> >
> > well, obviously that came to my mind after seeing the message ...
> >
> > anyway, the pools should not vanish, don't you agree?
> >
> > and if necessary /etc/rc.d/zfs should start hostid, or at least set its
> > REQUIRE differently and warn
>
> I've been in the same boat you are, and I was told the same thing. I've
> documented the situation on my Wiki, and the necessary workarounds.
>
> http://wiki.freebsd.org/JeremyChadwick/Commonly_reported_issues

nice work on this page, thanks

> This sort of thing needs to get hammered out before ZFS can be
> considered "usable" from a system administration perspective. Expecting
> people to remember to run an rc.d startup script before they can use any
> of their filesystems borders on unrealistic.
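(For reference, a minimal sketch of the workaround from the quoted discussion,
run from single-user mode; the pool name 'cache1' comes from the warning above
and will differ on other systems, and the forced import is only needed if a
pool still refuses to show up:)

  /etc/rc.d/hostid start    # set kern.hostid before ZFS touches the pools
  /etc/rc.d/zfs start       # the pools should now be found and mounted again
  zpool import -f cache1    # last resort for a pool that still does not appear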
yes, but on the other side we know it is new stuff, and sometimes the price is
what happened to me this morning; but then it also helps to make things better

anyway, a little fix to rc.d/zfs like

  if [ "`sysctl -n kern.hostid 2>/dev/null`" = "0" ]; then echo "zfs needs hostid first"; exit 0; fi

or something like it, as precmd or first in zfs_start_main, should fix this
issue (a rough sketch follows at the end of this message)

while we are talking about it, there are more things I found that still do not
work

swapon|off from the rc.d/zfs script does not work either; not sure what it is,
because the same part of the script run as root works, and adding a dash to
#!/bin/sh does not help either; from rc.d/zfs the state returns a dash

I do not see the sense in rc.d/zfs running `zfs share`, since it is the
default when the sharenfs property is enabled

the man page says swap -a ..., not swapon -a

the subcommands volinit and volfini are not in the manual at all

the man page says that zfs can not be a dump device; not sure if I understand
that as it was meant, but I can dump to zfs very well, and fast, as long as
recordsize=128

but in the end, for the short time zfs has been there, it gives me respectable
performance results, and it is stable for me as well

--
João
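A rough sketch, in standard rc.subr style, of how the check suggested above
could be hooked into rc.d/zfs as a start_precmd; this is only a fragment, not
the actual script, and the function name zfs_precmd is made up for
illustration:

  # in /etc/rc.d/zfs, before run_rc_command is called
  start_precmd="zfs_precmd"

  zfs_precmd()
  {
          # kern.hostid stays at 0 until /etc/rc.d/hostid has run
          if [ "`sysctl -n kern.hostid 2>/dev/null`" = "0" ]; then
                  warn "zfs needs a hostid; run /etc/rc.d/hostid start first"
                  return 1
          fi
  }

A non-zero return from a start_precmd makes run_rc_command skip the start
method, so ZFS is never started without a hostid in place; adding hostid to
the script's REQUIRE line would take care of the ordering at normal boot as
well.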