From: Freddie Cash <fjwcash@gmail.com>
To: Jason Usher
Cc: freebsd-fs@freebsd.org
Date: Mon, 19 Sep 2011 13:27:59 -0700
Subject: Re: ZFS on FreeBSD hardware model for 48 or 96 sata3 paths...

On Mon, Sep 19, 2011 at 12:07 PM, Jason Usher wrote:
> --- On Sat, 9/17/11, Bob Friesenhahn wrote:
>
> > 150KB is a relatively small file size given that the default zfs
> > blocksize is 128KB. With so many files you should definitely max
> > out RAM first before using SSDs as an L2ARC. It is important to
> > recognize that the ARC cache is not populated until data has been
> > read. The cache does not help unless the data has been accessed
> > several times. You will want to make sure that all metadata and
> > directories are cached in RAM. Depending on how the files are
> > used/accessed, you might even want to intentionally disable
> > caching of file data.
>
> How does one make sure that all metadata and directories are cached
> in RAM? Just run a 'find' on the filesystem, or a 'du', during the
> least busy time of day? Or is there a more elegant, or more direct,
> way to read all of that in?

That should work to "prime" the caches. Or you can just let the system
manage it automatically, adding data to the ARC/L2ARC as it's
read/accessed. The end result of that would be much more in line with
how the data is actually used.

> Further, if this (small files, lots of them) dataset benefits a lot
> from having the metadata and dirs read in, how can I KEEP that data
> in the cache, but not cache the file data (as you suggest, above)?

There are ZFS properties for this (primarycache, aka ARC;
secondarycache, aka L2ARC) which can be set on a per-filesystem basis
(and inherited). These can be set to "all", "none", or "metadata".

> Can I explicitly cache metadata/dirs in RAM, and cache file data in
> L2ARC?

No. Data that does not go into the ARC can never go into the L2ARC.
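For concreteness, here is roughly what the priming trick and the
property settings above look like from the command line. This is just a
sketch; "tank/data" is a hypothetical pool/dataset name, so substitute
your own.

  # Prime the ARC with metadata and directory entries by walking the
  # tree during an off-peak period (any stat-heavy traversal will do):
  find /tank/data -ls > /dev/null

  # Keep only metadata in the ARC for this dataset; valid values are
  # all, none, and metadata:
  zfs set primarycache=metadata tank/data

  # secondarycache controls the L2ARC the same way:
  zfs set secondarycache=all tank/data

  # Verify the settings (and where they are inherited from):
  zfs get primarycache,secondarycache tank/data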
IOW, if you set primarycache=metadata, you will never see file data in
the L2ARC, even with secondarycache=all. At least, that's the
understanding I've come to based on posts on the zfs-discuss mailing
list. And it does jibe with what I was seeing on our storage servers.
It's too bad, because it would be a nice setup, ordered from fastest to
slowest: ARC for metadata, L2ARC for file data, pool for permanent
storage.

--
Freddie Cash
fjwcash@gmail.com