Date:      Thu, 13 Jan 2011 23:31:25 +0100
From:      Pawel Jakub Dawidek <pjd@FreeBSD.org>
To:        Chris Forgeron <cforgeron@acsi.ca>
Cc:        "freebsd-fs@freebsd.org" <freebsd-fs@freebsd.org>, "freebsd-current@freebsd.org" <freebsd-current@freebsd.org>
Subject:   Re: My ZFS v28 Testing Experience
Message-ID:  <20110113223125.GA2330@garage.freebsd.pl>
In-Reply-To: <BEBC15BA440AB24484C067A3A9D38D7E0149F32D3487@server7.acsi.ca>
References:  <20101213214556.GC2038@garage.freebsd.pl> <AANLkTik=HLp09RkDykzjT1NHb3Bu7U1br6R1-jKJxmiy@mail.gmail.com> <BEBC15BA440AB24484C067A3A9D38D7E0149F32D3487@server7.acsi.ca>



On Wed, Jan 12, 2011 at 11:03:19PM -0400, Chris Forgeron wrote:
> I've been testing out the v28 patch code for a month now, and I've yet to=
 report any real issues other than what is mentioned below.=20
>=20
> I'll detail some of the things I've tested, hopefully the stability of v2=
8 in FreeBSD will convince others to give it a try so the final release of =
v28 will be as solid as possible.
>=20
> I've been using FreeBSD 9.0-CURRENT as of Dec 12th, and 8.2PRE as of Dec =
16th
>=20
> What's worked well:
>=20
> - I've made and destroyed small raidz's (3-5 disks), large 26 disk raid-1=
0's, and a large 20 disk raid-50.
> - I've upgraded from v15, zfs 4, no issues on the different arrays noted =
above
> - I've confirmed that a v15 or v28 pool will import into Solaris 11 Expre=
ss, and vice versa, with the exception about dual log or cache devices note=
d below.=20
> - I've run many TB of data through the ZFS storage via benchmarks from my=
 VM's connected via NFS, to simple copies inside the same pool, or copies f=
rom one pool to another.=20
> - I've tested pretty much every compression level, and changing them as I=
 tweak my setup and try to find the best blend.
> - I've added and subtracted many a log and cache device, some in failed s=
tates from hot-removals, and the pools always stayed intact.

Thank you very much for all your testing; that's a really valuable
contribution. I'll be happy to work with you on tracking down the
bottleneck in ZFSv28.

> Issues:
>=20
> - Import of pools with multiple cache or log devices. (May be a very
> minor point.)
>
> A v28 pool created in Solaris 11 Express with 2 or more log devices, or 2
> or more cache devices, won't import in FreeBSD 9. This also applies to a
> pool that is created in FreeBSD, imported in Solaris to have the 2 log
> devices added there, then exported and imported back in FreeBSD. No
> errors; zpool import just hangs forever. If I reboot into Solaris, import
> the pool, remove the dual devices, then reboot into FreeBSD, I can then
> import the pool without issue. A single cache or log device will import
> just fine. Unfortunately I deleted my witness-enabled FreeBSD-9 drive, so
> I can't easily fire it back up to give more debug info. I'm hoping some
> kind soul will attempt this type of transaction and report more detail to
> the list.
>
> Note - I just decided to try adding 2 cache devices to a raidz pool in
> FreeBSD, exporting, and then importing, all without rebooting. That seems
> to work. BUT - as soon as you try to reboot FreeBSD with this pool still
> active, it hangs on boot. Booting into Solaris, removing the 2 cache
> devices, then booting back into FreeBSD works. Something kept in memory
> between exporting and importing allows this to work.

Unfortunately I'm unable to reproduce this. It works for me with 2 cache
and 2 log vdevs. I tried rebooting, etc. My test looks exactly like
this:

	# zpool create tank raidz ada0 ada1
	# zpool add tank cache ada0 ada1
	# zpool export tank
	# kldunload zfs
	# zpool import tank
	<works>
	# reboot
	<works>
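
For anyone who wants to retry the reporter's exact scenario (2 cache
devices added, export, import, then reboot) without dedicating real disks,
here is a sketch using file-backed md(4) devices; the device numbers,
file paths, and sizes are arbitrary choices of mine:

	# truncate -s 128m /tmp/d0 /tmp/d1 /tmp/d2 /tmp/d3
	# mdconfig -a -t vnode -f /tmp/d0
	(repeat for d1-d3, which should yield md0-md3)
	# zpool create tank raidz md0 md1
	# zpool add tank cache md2 md3
	# zpool export tank
	# zpool import tank
	# reboot
	<check whether boot hangs with the pool still active>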

> - Speed. (More of an issue, but what do we do?)
>=20
> Wow, it's much slower than Solaris 11 Express for transactions. I do
> understand that Solaris will have a slight advantage over any port of
> ZFS. All of my speed tests are made with a kernel without debug, and yes,
> these are -CURRENT and -PRE releases, but the speed difference is very
> large.

Before we go any further, could you please confirm that you commented out
this line in sys/modules/zfs/Makefile:

	CFLAGS+=-DDEBUG=1

This turns on all kinds of ZFS debugging and slows it down a lot, but it
is invaluable for correctness testing. It will be turned off once we
import ZFS into FreeBSD-CURRENT.

BTW. In my testing Solaris 11 Express is much, much slower than
FreeBSD/ZFSv28, and by "much" I mean a factor of two or more in some
tests. I was wondering if they have some debug turned on in Express.

> At first, I thought it might be more of an issue with the ix0/Intel
> X520DA2 10GbE drivers that I'm using, since the bulk of my tests are over
> NFS (I'm going to use this as a SAN via NFS, so I test in that
> environment).
>
> But - I did a raw cp command from one pool to another of several TB. I
> executed the same command under FreeBSD as I did under Solaris 11
> Express. When executed in FreeBSD, the copy took 36 hours. With a fresh
> destination pool of the same settings/compression/etc. under Solaris, the
> copy took 7.5 hours.

Compression turns all-zero blocks into holes, so once you turn it off you
can test sequential write speed simply with:

	# dd if=/dev/zero of=/<zfs_fs>/zero bs=1m
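
The same all-zero write can be timed end to end to get a throughput
number. A minimal portable sketch; the 64 MB size, the /tmp path, and the
bs=1048576 spelling (instead of the BSD-style 1m) are my substitutions,
and the output file should point at the ZFS file system under test for a
meaningful measurement:

```shell
# time a 64 MB all-zero write and report throughput; redirect the output
# file to a file system on the pool under test instead of /tmp
out=/tmp/zero.bin
start=$(date +%s)
dd if=/dev/zero of="$out" bs=1048576 count=64 2>/dev/null
end=$(date +%s)
elapsed=$((end - start))
if [ "$elapsed" -lt 1 ]; then elapsed=1; fi   # guard against division by zero
echo "wrote 64 MB in ${elapsed}s: $((64 / elapsed)) MB/s"
rm -f "$out"
```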

-- 
Pawel Jakub Dawidek                       http://www.wheelsystems.com
pjd@FreeBSD.org                           http://www.FreeBSD.org
FreeBSD committer                         Am I Evil? Yes, I Am!



