Date:      Thu, 25 Apr 2013 08:51:44 -0700
From:      Freddie Cash <fjwcash@gmail.com>
To:        Andriy Gapon <avg@freebsd.org>
Cc:        FreeBSD Filesystems <freebsd-fs@freebsd.org>
Subject:   Re: Strange slowdown when cache devices enabled in ZFS
Message-ID:  <CAOjFWZ6uL5nJ3X0Bz-oxJf-o21k81HfkR3PgwM022R4W21_5ZQ@mail.gmail.com>
In-Reply-To: <51487CE1.5090703@FreeBSD.org>
References:  <CAOjFWZ6Q=Vs3P-kfGysLzSbw4CnfrJkMEka4AqfSrQJFZDP_qw@mail.gmail.com> <51430744.6020004@FreeBSD.org> <CAOjFWZ5e2t0Y_KOxm+GhX+zXNPfOXb8HKF4uU+Q+N5eWQqLtdg@mail.gmail.com> <51487CE1.5090703@FreeBSD.org>

I haven't had a chance to run any of the DTrace scripts on any of my ZFS
systems, but I have narrowed down the issue a bit.

If I set primarycache=all and secondarycache=all, then adding an L2ARC
device to the pool will lead to zfskern{l2arc_feed_thread} taking up 100%
of one CPU core and stalling I/O to the pool.
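
For anyone who wants to reproduce this, it's roughly the following (pool
name and cache device are placeholders; substitute your own):

  # cache both data and metadata in ARC and L2ARC
  zfs set primarycache=all tank
  zfs set secondarycache=all tank

  # add a cache (L2ARC) device to the pool
  zpool add tank cache /dev/ada4

  # watch kernel threads; l2arc_feed_thread pegs one core
  top -SH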

If I set primarycache=all and secondarycache=metadata, then adding an L2ARC
device to the pool speeds things up (zfs send/recv saturates a 1 Gbps link,
and the nightly rsync backup run finishes 4 hours earlier).
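
So the workaround for now, with the same placeholder pool name, is simply:

  # cache only metadata on the L2ARC device; data blocks stay out of it
  zfs set secondarycache=metadata tank

  # confirm the settings took
  zfs get primarycache,secondarycache tank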

I haven't tested the other two combinations (metadata/metadata;
metadata/all) as yet.

This is consistent across two ZFS systems so far:
  - 8-core Opteron 6100-series CPU with 48 GB of RAM; 44 GB ARC, 40 GB metadata limit; 3x raidz2
  - 2x 8-core Opteron 6100-series CPU with 128 GB of RAM; 64 GB ARC, 60 GB metadata limit; 5x raidz2
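
For reference, the ARC and metadata limits above are set through loader
tunables; on the first box that's something along these lines:

  # /boot/loader.conf
  vfs.zfs.arc_max="44G"
  vfs.zfs.arc_meta_limit="40G"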

Still reading up on dtrace/hwpmc as time permits.  Just wanted to pass
along the above to show I haven't forgotten about this yet.  :)  $JOB/$LIFE
slows things down sometimes.  :)
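
For the archives, the first thing I plan to try once I do get time is a
plain kernel profiling one-liner to see where l2arc_feed_thread spins,
something like:

  # sample on-CPU kernel stacks for 10 seconds, keep the 10 hottest
  dtrace -n 'profile-997 /arg0/ { @[stack()] = count(); }
             tick-10s { trunc(@, 10); exit(0); }'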

-- 
Freddie Cash
fjwcash@gmail.com


