Date:      Wed, 18 Dec 2019 09:31:29 -0700
From:      Alan Somers <asomers@freebsd.org>
To:        Karl Denninger <karl@denninger.net>
Cc:        FreeBSD <freebsd-stable@freebsd.org>
Subject:   Re: ZFS and power management
Message-ID:  <CAOtMX2ih+boayqOTCO95WT0WbYRdoxYgzYt++GxTCoUYMS6ejA@mail.gmail.com>
In-Reply-To: <57da15d4-0944-982b-7d7e-d7b2571e869c@denninger.net>
References:  <57da15d4-0944-982b-7d7e-d7b2571e869c@denninger.net>

On Wed, Dec 18, 2019 at 9:22 AM Karl Denninger <karl@denninger.net> wrote:

> I'm curious if anyone has come up with a way to do this...
>
> I have a system here that has two pools -- one comprised of SSD disks
> that are the "most commonly used" things including user home directories
> and mailboxes, and another that is comprised of very large things that
> are far less-commonly used (e.g. video data files, media, build
> environments for various devices, etc.)
>
> The second pool has perhaps two dozen filesystems that are mounted, but
> again, rarely accessed.  However, despite being rarely accessed, ZFS
> appears to perform various maintenance/checkpoint functions on them on
> a nearly-continuous basis, because there is a low, but non-zero, level
> of I/O traffic to and from them.  Thus if I set power control (e.g.
> spin down after 5 minutes of inactivity) the disks never do.  I could simply
> export the pool but I prefer (greatly) to not do that because some of
> the data on that pool (e.g. backups from PCs) ought to "just work" if
> a user wants to get to it.
>
> Well, one disk is no big deal.  A rack full of them is another matter.
> I could materially cut the power consumption of this box down (likely by
> a third or more) if those disks were spun down during 95% of the time
> the box is up, but with the "standard" way ZFS does things that doesn't
> appear to be possible.
>
> Has anyone taken a crack at changing the paradigm (e.g. using the
> automounter, perhaps?) to get around this?
>
> --
> Karl Denninger
> karl@denninger.net
> /The Market Ticker/
> /[S/MIME encrypted email preferred]/
>

I have, and I found that it wasn't actually ZFS's fault.  By itself ZFS
wasn't initiating any background I/O whatsoever.  I used a combination of
fstat and dtrace to track down the culprit processes.  Once I had shut
down, patched, or reconfigured each of those processes, the disks stayed
idle indefinitely.  You might have success using the same strategy.  I
suspect that the automounter wouldn't help you, because any access that
ought to "just work" for a normal user would likewise "just work" for
whatever background process is hitting your disks right now.
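
If you want to try it, something along these lines should show the
culprits.  (This is just a sketch; it assumes the pool is mounted under
/tank -- substitute your own mountpoints and device names.)  First,
check for processes that still hold files open on the supposedly idle
datasets:

    fstat -f /tank

Then aggregate block I/O by process and device with DTrace; let it run
for a few minutes, hit ^C, and it prints which processes touched which
disks and how often:

    dtrace -n 'io:::start { @[execname, args[1]->dev_name] = count(); }'

Once those are dealt with, a per-drive standby timer (e.g. "camcontrol
standby ada3 -t 300", with the device and timeout adjusted to suit)
should finally get a chance to fire.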
-Alan


