Date:      Thu, 11 Apr 2013 16:04:15 -0400
From:      Charles Sprickman <spork@bway.net>
To:        Dmitry Morozovsky <marck@rinet.ru>
Cc:        freebsd-fs@FreeBSD.org
Subject:   Re: ZFS-only server and dedicated ZIL
Message-ID:  <D416AA98-D78A-4743-A1E6-AA2C28B9A602@bway.net>
In-Reply-To: <alpine.BSF.2.00.1304101713530.38433@woozle.rinet.ru>
References:  <alpine.BSF.2.00.1304101713530.38433@woozle.rinet.ru>

On Apr 10, 2013, at 9:23 AM, Dmitry Morozovsky wrote:

> Dear colleagues,
>
> I'm planning to make a new PostgreSQL server using raid10-like ZFS with
> two SSDs split into a mirrored ZIL and a striped L2ARC.

This might seem like an odd suggestion, but if you're putting the pool
on SSDs (is that correct?), I'd totally skip the separate ZIL device and
L2ARC.  I think you'll find the SSDs will need zero help from another log
device, and L2ARC is probably just not that helpful for DB loads.
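
For an all-SSD box I'd just build the raid10-style pool with no log or
cache vdevs at all.  A rough sketch of what I mean, with made-up device
names (adjust disks/partitions/labels for your layout):

  # zpool create pgpool mirror ada0 ada1 mirror ada2 ada3

That gives you two mirrored pairs striped together, and the ZIL simply
lives on the pool's own SSDs.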

> However, it seems the current
> ZFS implementation does not support this:
>
> ./lib/libzfs/common/libzfs_pool.c-		case EDOM:
> ./lib/libzfs/common/libzfs_pool.c-			zfs_error_aux(hdl, dgettext(TEXT_DOMAIN,
> ./lib/libzfs/common/libzfs_pool.c:			    "root pool can not have multiple vdevs"
> ./lib/libzfs/common/libzfs_pool.c-			    " or separate logs"));
> ./lib/libzfs/common/libzfs_pool.c-			(void) zfs_error(hdl, EZFS_POOL_NOTSUP, msg);
>
> Am I right, or did I miss something obvious?

I asked about this on this very list some time ago, but no one really
had any answers:

http://lists.freebsd.org/pipermail/freebsd-fs/2012-September/015142.html

The last post in that thread brings up an interesting point that was
never answered: can our zfs boot loader handle the ZIL playback on
boot?  I would assume so (regardless of where the ZIL device lives), but
who knows?  Yet another ZFS mystery. :)

To summarize, yes, you can work around the root pool restriction, which
is supposedly a Solaris thing that got carried over.  You do this by
unsetting the "bootfs" property on the pool (i.e. "zpool set bootfs=''
poolname"), adding your log devices to the pool, and then setting the
bootfs property again.  Works for me.
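
Roughly, the dance looks like this; the pool and device names below are
made up, and you should check the current bootfs value first so you can
put back exactly what your pool uses:

  # zpool get bootfs tank            (note the value, e.g. tank/ROOT/default)
  # zpool set bootfs='' tank
  # zpool add tank log mirror gpt/slog0 gpt/slog1
  # zpool set bootfs=tank/ROOT/default tank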

Someone noted this only works for mirrors, but I've done it on raidz
pools as well.

It would be great to have someone weigh in on whether or not this is valid.

The blog post most people reference regarding this issue is here:

http://astralblue.livejournal.com/371755.html

Charles

>
> OK, if so, I see two possibilities in this situation:
> - make the system boot from an internal USB stick (just a /bootdisk with
>   /boot and /rescue), with the rest on ZFS-on-root
> - use a dedicated pair of disks for the system's ZFS pool, without a ZIL.
>
> What would you recommend?
>
> Thanks!
>
> --
> Sincerely,
> D.Marck                                     [DM5020, MCK-RIPE, DM3-RIPN]
> [ FreeBSD committer:                                 marck@FreeBSD.org ]
> ------------------------------------------------------------------------
> *** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- marck@rinet.ru ***
> ------------------------------------------------------------------------
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"



