Date: Sun, 18 May 2008 15:11:55 -0300
From: JoaoBR <joao@matik.com.br>
To: freebsd-bugs@freebsd.org
Cc: Torfinn Ingolfsen <torfinn.ingolfsen@broadpark.no>, Jeremy Chadwick <koitsu@freebsd.org>, freebsd-stable <freebsd-stable@freebsd.org>, Greg Byshenk <freebsd@byshenk.net>
Subject: Re: possible zfs bug? lost all pools
Message-ID: <200805181511.56646.joao@matik.com.br>
In-Reply-To: <20080518153911.GA22300@eos.sc1.parodius.com>
References: <200805180956.18211.joao@matik.com.br> <200805181220.33599.joao@matik.com.br> <20080518153911.GA22300@eos.sc1.parodius.com>
On Sunday 18 May 2008 12:39:11 Jeremy Chadwick wrote:
> On Sun, May 18, 2008 at 12:20:33PM -0300, JoaoBR wrote:
> > On Sunday 18 May 2008 11:11:38 Greg Byshenk wrote:
> > > On Sun, May 18, 2008 at 09:56:17AM -0300, JoaoBR wrote:
> > > > after trying to mount my zfs pools in single user mode I got the
> > > > following message for each:
> > > >
> > > > May 18 09:09:36 gw kernel: ZFS: WARNING: pool 'cache1' could not be
> > > > loaded as it was last accessed by another system (host:
> > > > gw.bb1.matik.com.br hostid: 0xbefb4a0f). See:
> > > > http://www.sun.com/msg/ZFS-8000-EY
> > > >
> > > > any zpool command returned nothing other than that the zfs did not
> > > > exist; it seemed the zfs info on the disks was gone
> > > >
> > > > to double-check I recreated them, rebooted into single user mode and
> > > > repeated the story, same thing: trying /etc/rc.d/zfs start
> > > > returns the above msg and the pools are gone ...
> > > >
> > > > I guess this is kind of wrong
> > >
> > > I think that the problem is related to the absence of a hostid when in
> > > single-user. Try running '/etc/rc.d/hostid start' before mounting.
> >
> > well, obviously that came to my mind after seeing the msg ...
> >
> > anyway, the pools should not vanish, don't you agree?
> >
> > and if necessary /etc/rc.d/zfs should start hostid, or at least set
> > its REQUIRE differently and warn
>
> I've been in the same boat you are, and I was told the same thing. I've
> documented the situation on my Wiki, and the necessary workarounds.
>
> http://wiki.freebsd.org/JeremyChadwick/Commonly_reported_issues

nice work on this page, thanks

> This sort of thing needs to get hammered out before ZFS can be
> considered "usable" from a system administration perspective. Expecting
> people to remember to run an rc.d startup script before they can use any
> of their filesystems borders on unrealistic.

yes, but on the other hand we know it is new stuff, and sometimes the price
is what happened to me this morning; then again, it also helps to make
things better

anyway, a little fix to rc.d/zfs like

  if [ ! "`sysctl -n kern.hostid 2>/dev/null`" ]; then
          echo "zfs needs hostid first"; exit 0
  fi

or something like it as a precmd, or first in zfs_start_main, should fix
this issue (a fuller sketch follows after the signature)

talking about it, there are more things I experienced that are still not
working:

swapon|swapoff from the rc.d/zfs script does not work either; I am not sure
why, because the same part of the script works when run as root by hand,
and adding a dash to #!/bin/sh does not help either: from rc.d/zfs the
state returns a dash

I do not see the sense in rc.d/zfs running `zfs share`, since sharing is
the default when the sharenfs property is enabled

there is a man page typo: it says swap -a ... not swapon -a

the subcommands volinit and volfini are not in the manual at all

the man page says that zfs cannot be a dump device; I am not sure if I
understand that as it is meant, but I can dump to zfs very well, and fast,
as long as recordsize=128

but in the end, for the short time zfs has been around, it gives me
respectable performance results, and it is stable for me as well

--
João

This message was scanned by the e-mail system
and can be considered safe.
Service provided by Datacenter Matik  https://datacenter.matik.com.br
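For completeness, a slightly fuller sketch of that precmd idea. It is
untested and only a sketch: it assumes the usual rc.subr handling in
/etc/rc.d/zfs, the function name zfs_precmd is mine and not an existing
hook, and on 7.x an unset hostid shows up as kern.hostid=0, so the check
treats 0 and an empty string the same. In single user mode the manual
workaround stays the same: run /etc/rc.d/hostid start before
/etc/rc.d/zfs start.

  #!/bin/sh
  # Sketch only: refuse to start zfs before a hostid has been set.
  # zfs_precmd and the start_precmd wiring are illustrative names,
  # not what the stock /etc/rc.d/zfs script contains today.

  zfs_precmd()
  {
          hostid=$(sysctl -n kern.hostid 2>/dev/null)
          case "${hostid}" in
          ""|0)
                  echo "zfs: kern.hostid is not set," \
                      "run /etc/rc.d/hostid start first"
                  return 1        # rc.subr then skips the start method
                  ;;
          esac
          return 0
  }

  start_precmd="zfs_precmd"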
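And since the dump remark may read oddly, a sketch of what I mean by
dumping to zfs. The dataset name tank/dump is only an example, and I am
reading recordsize=128 as the 128K default/maximum record size.

  # Sketch: dump(8) a live UFS filesystem into a file on a ZFS dataset.
  # tank/dump is an example name; 128K is the largest ZFS record size.
  zfs create -o recordsize=128K tank/dump
  dump -0aLf /tank/dump/usr.dump /usr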