Date:      Mon, 22 Oct 2012 08:21:07 -0500
From:      Mark Felder <feld@feld.me>
To:        Dustin Wenz <dustinwenz@ebureau.com>, Olivier Smedts <olivier@gid0.org>, Steven Hartland <killing@multiplay.co.uk>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: Imposing ZFS latency limits
Message-ID:  <op.wmk0phab34t2sn@tech304>
In-Reply-To: <089898A4493042448C934643FD5C3887@multiplay.co.uk>
References:  <6116A56E-4565-4485-887E-46E3ED231606@ebureau.com> <CABzXLYNaaKtfGf11%2Bm5td0G8kw8KT7TR-7LCHyFdxeKiw5AfxA@mail.gmail.com> <op.wl9vj0os34t2sn@tech304> <089898A4493042448C934643FD5C3887@multiplay.co.uk>

On Tue, 16 Oct 2012 10:46:00 -0500, Steven Hartland  
<killing@multiplay.co.uk> wrote:

>
> Interesting, what metrics were you using which made it easy to detect?
> Would be nice to know your process there, Mark?

One reason is that our virtual machine performance gets awful and the
hypervisor alerts us on higher-than-usual load and/or disk I/O latency.
We've also implemented watching for certain SCSI errors on the server
itself; those seem to warn us before things get really bad.
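The log-watching half of that can be sketched roughly like this. This is an
illustrative example, not our actual tooling: the log path and the CAM/SCSI
message patterns are assumptions and will vary by controller driver (mps,
mpr, ahci, ...), so adjust the grep pattern for your hardware.

```shell
#!/bin/sh
# Sketch: count CAM/SCSI error lines per device in the system log,
# worst offender first. Pattern and log path are illustrative.
LOG="${1:-/var/log/messages}"

grep -E 'CAM status|SCSI sense|Unrecovered read error' "$LOG" \
  | sed -E 's/.*\((da[0-9]+|ada[0-9]+).*/\1/' \
  | sort | uniq -c | sort -rn
```

A device that climbs to the top of that list repeatedly is usually the one
dragging pool latency down, even while ZFS still reports it as ONLINE.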

It's nice knowing that ZFS is doing everything within its power to read
the data off the disk, but when the raidz is otherwise fully intact it
should be smart enough to kick out a disk that's being problematic.
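Until ZFS does that automatically, the workaround is to do it by hand once
monitoring fingers a disk. A minimal sketch, with a hypothetical pool name
and device:

```shell
# Manually drop a misbehaving disk from an intact raidz so reads stop
# waiting on it. "tank" and "da3" are example names.
zpool offline tank da3   # stop issuing I/O to da3; pool goes DEGRADED
zpool status tank        # verify the pool is still serving data

# Later, after replacing the disk or clearing the fault:
zpool online tank da3    # resilver brings it back into the raidz
```

Since the raidz is fully intact, the pool keeps serving reads from the
remaining disks while the slow one is offline.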


