Date:      Wed, 19 Aug 2015 10:58:36 +0200
From:      Johan Hendriks <joh.hendriks@gmail.com>
To:        javocado <javocado@gmail.com>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: Optimizing performance with SLOG/L2ARC
Message-ID:  <55D4453C.7040203@gmail.com>
In-Reply-To: <CAP1HOmTidC3%2BG4XfhvkQxieo%2BSYMq-JWiXF9Cs4FSW2VqkktWA@mail.gmail.com>
References:  <CAP1HOmTidC3%2BG4XfhvkQxieo%2BSYMq-JWiXF9Cs4FSW2VqkktWA@mail.gmail.com>



Op 19/08/15 om 02:28 schreef javocado:
> Hi,
>
> I've been trying to optimize and enhance my ZFS filesystem performance
> (running FreeBSD 8.3 amd64) which has been sluggish at times. Thus far I have
> added RAM (256GB) and I've added an SLOG (SSD mirror). The RAM seems to
> have helped a bit, but not sure if the SLOG was of much help. My vdev is
> decently busy, with writes and reads averaging at 100 per second with
> spikes as high as 500.
>
> Here's what arc_statistics is showing me:
>
> ARC Size:                               70.28%  173.89  GiB
>         Target Size: (Adaptive)         71.84%  177.77  GiB
>         Min Size (Hard Limit):          12.50%  30.93   GiB
>         Max Size (High Water):          8:1     247.44  GiB
>
>  ARC Efficiency:                                 2.25b
>         Cache Hit Ratio:                95.76%  2.16b
>         Cache Miss Ratio:               4.24%   95.55m
>         Actual Hit Ratio:               64.95%  1.46b
>
>         Data Demand Efficiency:         94.83%  330.99m
>         Data Prefetch Efficiency:       26.36%  64.23m
>
>         CACHE HITS BY CACHE LIST:
>           Anonymously Used:             30.87%  665.74m
>           Most Recently Used:           7.54%   162.67m
>           Most Frequently Used:         60.29%  1.30b
>           Most Recently Used Ghost:     0.18%   3.97m
>           Most Frequently Used Ghost:   1.11%   23.89m
>
>         CACHE HITS BY DATA TYPE:
>           Demand Data:                  14.56%  313.89m
>           Prefetch Data:                0.79%   16.93m
>           Demand Metadata:              53.28%  1.15b
>           Prefetch Metadata:            31.38%  676.68m
>
>         CACHE MISSES BY DATA TYPE:
>           Demand Data:                  17.90%  17.10m
>           Prefetch Data:                49.50%  47.30m
>           Demand Metadata:              24.46%  23.37m
>           Prefetch Metadata:            8.14%   7.78m
>
>
> 1. based on the output above, I believe a larger ARC may not necessarily
> benefit me at this point. True?
>
> 2. Is more (L2)ARC always better?

One thing to remember is that L2ARC requires memory: every record cached on
the L2ARC device keeps a header in RAM, so a very large L2ARC eats into the
ARC itself. For your hardware you need to find the sweet spot, i.e. the
L2ARC size that performs best.

http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg34674.html
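As a rough sketch of that memory cost (the ~180 bytes per cached record
below is only an assumed, version-dependent figure, and the 400 GB device
and 8 KiB average record size are made-up examples):

```shell
# Back-of-the-envelope estimate of the RAM consumed by L2ARC headers.
# Assumptions: ~180 bytes of ARC header per cached record (the exact
# figure varies by ZFS version), 8 KiB average record size, 400 GB device.
l2arc_bytes=$((400 * 1024 * 1024 * 1024))
avg_record=$((8 * 1024))
hdr_bytes=180
records=$(( l2arc_bytes / avg_record ))
ram_mib=$(( records * hdr_bytes / 1024 / 1024 ))
echo "~${ram_mib} MiB of RAM just to index the L2ARC"
```

With those assumptions the headers alone cost about 9 GB of RAM, which is
why the sweet spot depends on how much memory you can spare from the ARC.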

>
> 3. I know it's a good idea to mirror the SLOG (and I have). Do I understand
> correctly that I do not need to mirror the L2ARC since it's just a read
> cache, nothing to lose if the SSD goes down?
You could potentially lose data if the ZIL/SLOG is lost, so always use a
mirrored vdev as ZIL/SLOG.
You do not need a large vdev for the ZIL/SLOG; 8 to 10 GB is large enough.

https://pthree.org/2013/04/19/zfs-administration-appendix-a-visualizing-the-zfs-intent-log/
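For illustration, adding a small mirrored SLOG looks roughly like this
(the pool name tank and the device names da1/da2 are placeholders, not
from your system):

```shell
# Attach a mirrored SLOG; an 8-10 GB partition per SSD is plenty.
zpool add tank log mirror da1 da2

# Confirm the new "logs" vdev shows up in the pool layout.
zpool status tank
```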

For the L2ARC you do not need to mirror the disks. It just copies data to
the device as a cache; if the data is not in the cache, ZFS reads it from
the spinning disks.
If for whatever reason the cache vdev dies, ZFS simply falls back to the
spinning disks again.
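So a single, unmirrored cache device is fine, and it can be dropped again
without risk (tank and da3 are example names, not yours):

```shell
# Add a single SSD as L2ARC; no redundancy is needed for a read cache.
zpool add tank cache da3

# If the SSD ever fails, just remove it; reads fall back to the pool disks.
zpool remove tank da3
```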

>
> 4. Is there a better way than looking at zpool iostat -v to determine the
> SLOG utilization and usefulness?
>
> I'd like to test-see if adding L2ARC yields any performance boost. Since
> SLOG isn't doing much for me, I'm thinking I could easily repurpose my SLOG
> into an L2ARC.
>
> Questions:
>
> 5. In testing, it seemed fine to remove the SLOG from a live/running system
> (zpool remove pool mirror-3). Is this in fact a safe thing to do to a
> live/running system? ZFS knows that it should flush the ZIL, then remove
> the device? Is it better or necessary to shut down the system and remove
> the SLOG in "read only" mode?
You can remove the ZIL/SLOG from a running system without problems.
ZFS will fall back to the in-pool ZIL on the spinning disks.
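The removal itself is just the command you already found; a sketch of the
full sequence (tank is an example pool name, and mirror-3 is whatever vdev
name your own zpool status shows for the log):

```shell
# Find the log vdev's name under the "logs" section, e.g. mirror-3.
zpool status tank

# Remove it while the pool is live; ZFS flushes outstanding log
# records before detaching the device.
zpool remove tank mirror-3

# Verify the logs section is gone.
zpool status tank
```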

>
> 6. Am I missing something about how the SLOG and L2ARC play together that I
> would miss by running my proposed test. i.e. if I take down the SLOG and
> repurpose as an L2ARC might I be shooting myself in the foot cause the SLOG
> and L2ARC combo is much more powerful than the L2ARC alone (or SLOG alone)?
> My hope here is to see if the L2ARC improves performance, after which I
> will proceed with buying the SSD(s) for both the SLOG and L2ARC.
>
> Thanks



