Date: Tue, 5 Jun 2007 20:35:51 -0400
From: Kris Kennaway <kris@obsecurity.org>
To: Ivan Voras <ivoras@fer.hr>
Cc: freebsd-current@freebsd.org
Subject: Re: ZFS on 32-bit CPUs?
Message-ID: <20070606003551.GA50194@rot13.obsecurity.org>
In-Reply-To: <f44ujf$kpf$1@sea.gmane.org>
References: <78878E5C-A219-42A6-AB9F-D4C4C7FC994E@gmail.com> <f44ujf$kpf$1@sea.gmane.org>
On Wed, Jun 06, 2007 at 02:19:57AM +0200, Ivan Voras wrote:
> Sean Hafeez wrote:
> > Has anyone looked at the ZFS port and how it does on 32-bit CPUs vs
> > 64-bit ones? I know under Solaris they do not recommend using a 32-bit
> > CPU. In my case I was thinking about doing some testing on a Dual P3-850.
>
> It works, and there's never been doubt that it would work. The main
> resource you need is memory. At least 1 GB is recommended, but it should
> work with 512 MB (though people were reporting panics unless they scale
> ZFS and VFS parameters down). If you're thinking of using it in
> production, you should read the threads on this list regarding ZFS,
> especially those mentioning panics.

It "works", but there are serious performance issues with how ZFS on
FreeBSD handles caching of data.  To get reasonable performance you will
want to tune VM_KMEM_SIZE_MAX as high as you can get away with (how high
depends on how much RAM you have).  Roughly half of this will be used by
the ARC (the ZFS buffer cache).  This is typically less memory than the
standard buffer cache would have available, so ZFS still loses out on
caching, particularly on systems with a lot of RAM.

You may also need to hack ZFS a bit.  The following patch improves
performance for me on amd64 (and avoids a deadlock).  I have not tested
whether it is sufficient or reasonable on i386 (only amd64); the KVA
shortage there makes it hard to tune memory availability the way ZFS
wants it.

There is also a panic condition that may be triggered on SMP when you
have INVARIANTS enabled.  pjd and I don't yet understand the cause of
this, but it appears to be spurious ("returning to userspace with 1
locks held" when no locks appear to actually be held, i.e. it seems to
be some kind of leak in the stats).
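For reference, the kmem tuning described above is normally done from
/boot/loader.conf rather than by rebuilding the kernel with the
VM_KMEM_SIZE_MAX option.  A minimal sketch for a machine with 2 GB of
RAM; the values are illustrative only, and you should confirm the
tunable names against your kernel's sysctls before copying:

```
# /boot/loader.conf -- illustrative values, not recommendations
vm.kmem_size="1073741824"      # size of the kernel malloc arena (1 GB)
vm.kmem_size_max="1073741824"  # runtime equivalent of VM_KMEM_SIZE_MAX
vfs.zfs.arc_max="536870912"    # cap the ARC at roughly half of kmem
```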
Index: contrib/opensolaris/uts/common/fs/zfs/arc.c
===================================================================
RCS file: /mnt/xor/ncvs/src/sys/contrib/opensolaris/uts/common/fs/zfs/arc.c,v
retrieving revision 1.9
diff -u -d -u -r1.9 arc.c
--- contrib/opensolaris/uts/common/fs/zfs/arc.c	23 Apr 2007 21:52:14 -0000	1.9
+++ contrib/opensolaris/uts/common/fs/zfs/arc.c	2 Jun 2007 20:22:00 -0000
@@ -1439,8 +1439,10 @@
 		return (1);
 #endif
 #else
-	if (kmem_used() > kmem_size() / 2)
+	if (kmem_used() * 10 > kmem_size() * 9) {
+		printf("kmem_used = %ld, kmem_size = %ld\n", kmem_used(), kmem_size());
 		return (1);
+	}
 #endif

 #else
@@ -2689,13 +2689,19 @@
 static void
 arc_lowmem(void *arg __unused, int howto __unused)
 {
+	int vnodesave, count = 0;

 	/* Serialize access via arc_lowmem_lock. */
 	mutex_enter(&arc_lowmem_lock);
 	zfs_needfree = 1;
 	cv_signal(&arc_reclaim_thr_cv);
-	while (zfs_needfree)
+	vnodesave = desiredvnodes;
+	while (zfs_needfree) {
+		if (count++ % 5 == 0)
+			desiredvnodes /= 2;
 		tsleep(&zfs_needfree, 0, "zfs:lowmem", hz / 5);
+	}
+	desiredvnodes = vnodesave;
 	mutex_exit(&arc_lowmem_lock);
 }
 #endif
Want to link to this message? Use this URL: <https://mail-archive.FreeBSD.org/cgi/mid.cgi?20070606003551.GA50194>