Date:      Sun, 26 Jan 2020 14:11:34 -0800 (PST)
From:      "Rodney W. Grimes" <freebsd-rwg@gndrsh.dnsmgr.net>
To:        Jeff Roberson <jroberson@jroberson.net>
Cc:        Mike Karels <mike@karels.net>, freebsd-arch@freebsd.org, Ben Woods <woodsb02@gmail.com>, Ed Maste <emaste@freebsd.org>, Conrad Meyer <cem@freebsd.org>, Philip Paeps <philip@freebsd.org>
Subject:   Re: Minimum memory for ZFS (was Re: svn commit: r356758 - in head/usr.sbin/bsdinstall: . scripts)
Message-ID:  <202001262211.00QMBYPo045307@gndrsh.dnsmgr.net>
In-Reply-To: <alpine.BSF.2.21.9999.2001261050070.1198@desktop>

> On Wed, 22 Jan 2020, Mike Karels wrote:
> 
> > I took the liberty of changing the subject line to make it stand out a
> > bit more.
> >
> > Ben wrote:
> >
> >> On Sat, 18 Jan 2020 at 09:16, Mike Karels <mike@karels.net> wrote:
> >
> >>>> On Fri, 17 Jan 2020 at 08:21, Ben Woods <woodsb02@gmail.com> wrote:
> >>>
> >>>>> Perhaps we could simply include a message on that bsdinstall
> >>>>> partitioning mode selection screen that UFS is recommended on
> >>>>> systems with < 4 GB RAM?
> >>>
> >>>> I have uploaded a diff for this here: https://reviews.freebsd.org/D23224
> >>>
> >>>> Please let me know your thoughts (comments in the phabricator review
> >>>> would be best).
> >>>
> >>> I think this needs more discussion, preferably on this list.  I am not
> >>> convinced that systems with as little as 4 GB should use ZFS.  Conventional
> >>> wisdom on the FreeNAS mailing list says that 8 GB is required for ZFS,
> >>> and FreeNAS no longer includes UFS as an option.  Conrad suggested a
> >>> cutoff of 16 GB; I am happier with 16 GB than 4 GB as a cutoff.  Also,
> >>> there was mention of auto-tuning for smaller systems; I don't think that
> >>> has materialized yet.  I'm not sure how plausible that is without knowing
> >>> the workload.  I use ZFS on a workstation/server with 64 GB that runs 4
> >>> bhyve guests that do things like buildworld.  ZFS wants 63 GB for arc_max;
> >>> needless to say, I have a tunable set to a much lower value.  If tuning
> >>> is required, it is unclear that ZFS is a good default.
> >>>
> >>>                 Mike
> >>>
> >
> >
> >> Hi everyone,
> >
> >> Before I commit phabricator review D23224, are there any final comments?
> >
> >> Particularly on these 2 lines of help-text:
> >> msg_partitioning_zfs_help="ZFS is recommended if you have at least 4GB RAM"
> >> msg_partitioning_ufs_help="UFS is recommended if you have less than 4GB of RAM"
> >
> >> There is some disagreement about what these 2 recommendations should be.
> >
> >> 4GB was recommended by: imp, emaste, philip, eugen, dteske
> >> 8GB was recommended by: mike
> >> 16GB was recommended by: cem
> 
> I have a completely different recommendation that will take someone with 
> some knowledge of arc and a little knowledge of VM.  I am happy to provide 
> the VM side if someone else will provide the arc side and testing.  If you 
> have a reasonable understanding of ARC, I suspect this could be prototyped 
> in a weekend.
> 
> Simply put, there is no reason the arc can't spill into the page cache 
> just as the buf cache does.  If you look at the actual papers on 
> ARC/2Q/etc. you will see that their primary advantage is in limiting the 
> impact of scans, and even in contrived workloads they only provide a few 
> percent up to 10% improvement.  In short, there is no reason to give arc 
> all of your memory.  There is, however, a good reason to make all of your 
> memory available for caching.
> 
> My proposal is this: limit ARC to some reasonable fraction of memory, say 
> 1/8th, and then do the following:
> 
> On expiration from arc, place the pages in a vm object associated with the 
> device.  The VM is now free to keep them or re-use them for user memory.
> 
> On miss from arc, check the page cache and take the pages back if they 
> exist.
> 
> On invalidation, you need to invalidate the page cache as well.
> 
> ARC already allows spilling to SSD (L2ARC).  I don't know the particulars 
> of that interface, but here we would essentially be spilling to memory 
> that can be reclaimed by the page daemon as necessary.
> 
> With this change the ARC would participate reasonably in system-wide page 
> reclamation, and we wouldn't need to talk about a minimum memory 
> requirement for it.  We have already wasted more person-hours working 
> around this architectural misstep than it would take to address it.  If 
> anyone would like to take this up, please contact me directly, as I don't 
> always read arch@.

I am not an expert on either, but I would be more than willing to
throw my workload(s) at any changes you come up with and provide
feedback on how it behaves.
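
To make sure I follow the shape of it, here is a rough sketch of the
two hot-path hooks as I read your proposal.  Every identifier below is
invented for illustration (there is no vdev_cache_obj field today, and
the real arc/VM interfaces will differ); treat it as a sketch of the
idea, not a patch:

/*
 * On eviction: instead of freeing a clean buffer's pages, donate them
 * to a per-vdev VM object so the page daemon may reclaim them under
 * memory pressure like any other cached page.
 */
static void
arc_spill_to_pagecache(vdev_t *vd, off_t off, vm_page_t *pages, int n)
{
	vm_object_t obj = vd->vdev_cache_obj;	/* hypothetical field */

	VM_OBJECT_WLOCK(obj);
	for (int i = 0; i < n; i++)
		(void)vm_page_insert(pages[i], obj, OFF_TO_IDX(off) + i);
	VM_OBJECT_WUNLOCK(obj);
}

/*
 * On an arc miss: look the pages up in that object and take them back
 * before falling through to real I/O.
 */
static bool
arc_pagecache_reclaim(vdev_t *vd, off_t off, vm_page_t *pages, int n)
{
	vm_object_t obj = vd->vdev_cache_obj;
	bool hit = true;

	VM_OBJECT_WLOCK(obj);
	for (int i = 0; i < n; i++) {
		pages[i] = vm_page_lookup(obj, OFF_TO_IDX(off) + i);
		if (pages[i] == NULL) {
			hit = false;
			break;
		}
		(void)vm_page_remove(pages[i]);	/* hand it back to arc */
	}
	VM_OBJECT_WUNLOCK(obj);
	return (hit);
}

Invalidation would then just be a vm_object_page_remove() over the
affected range.  If that matches what you have in mind, the VM side
looks tractable to me.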

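In the meantime, the workaround Mike alludes to is the one most of us
carry in /boot/loader.conf today, something along the lines of:

	vfs.zfs.arc_max="4G"

The right value is entirely workload dependent, which is rather the
point of your proposal.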

> Thanks,
> Jeff
> 
> >
> >> The 4GB limit seems to have the best consensus; however, there was some
> >> debate about whether ZFS is recommended on a system with 4GB, or only on
> >> systems with MORE THAN 4GB.
> >
> > I don't remember what everyone else wrote, but IIRC, Devin said that if
> > you use ZFS with 4 GB, you will soon end up with a dozen tunables set.
> > That doesn't sound like a recommendation for 4 GB.
> >
> >> As for the ZFS auto-tuning, I see that as being a separate discussion
> >> (which could ultimately change this recommendation, but shouldn't prevent
> >> us from committing this help text now).
> >
> > Agreed, but the lack of tuning should factor into the current recommendation.
> >
> > 		Mike
> >
> >> Regards,
> >> Ben

-- 
Rod Grimes                                                 rgrimes@freebsd.org


