Date:      Fri, 2 Dec 2011 19:36:44 GMT
From:      Martin Simmons <martin@lispworks.com>
To:        freebsd-fs@freebsd.org
Subject:   Re: Monitoring ZFS IO
Message-ID:  <201112021936.pB2Jaiuw012444@higson.cam.lispworks.com>
In-Reply-To: <20111202153624.GA28715@icarus.home.lan> (message from Jeremy Chadwick on Fri, 2 Dec 2011 07:36:24 -0800)
References:  <4ED8D7A5.7090700@icritical.com> <op.v5u91pls8527sy@212-182-167-131.ip.telfort.nl> <4ED8EC9A.2080706@icritical.com> <20111202153624.GA28715@icarus.home.lan>

>>>>> On Fri, 2 Dec 2011 07:36:24 -0800, Jeremy Chadwick said:
> 
> On Fri, Dec 02, 2011 at 03:19:54PM +0000, Matt Burke wrote:
> > On 12/02/11 14:47, Ronald Klop wrote:
> > > while true; do gstat -b -I 1s; done
> > 
> > Looks like I wasn't clear about what I'm after - sorry.
> > 
> > I want to see how many bytes or KB have been read from and written to a
> > given zpool since it came up (i.e. since boot or pool creation, whichever
> > is more recent).
> >
> > For instance I want this data:
> > 
> > # time iostat -Idx
> >                         extended device statistics
> > device     r/i   w/i    kr/i    kw/i wait svc_t  %b
> > mfid0    284807.0 5469251.0 4452202.0 116634996.0    0   0.8   0
> > mfid1    284576.0 5466322.0 4474976.5 116510280.0    0   0.8   0
> > mfid2    278686.0 5450269.0 4418703.0 116511709.0    0   0.8   0
> > mfid3    281673.0 5452757.0 4439770.5 116560910.5    0   0.8   0
> > mfid4    279549.0 5472177.0 4440227.0 116609067.0    0   0.8   0
> > mfid5    282625.0 5464261.0 4503257.5 116608801.5    0   0.8   0
> > mfid6    275635.0 5470654.0 4433529.0 116616131.5    0   0.8   0
> > ...
> > mfid27   302950.0 5464880.0 4434398.0 116542100.0    0   0.7   0
> > mfid28   281464.0 5459410.0 4461678.5 116595780.5    0   0.8   0
> > mfid29   277535.0 5468784.0 4443352.5 116642932.0    0   0.8   0
> > ...
> > real	0m0.003s
> > user	0m0.000s
> > sys	0m0.007s
> > 
> > 
> > For the zpool as a singular entity (or even per ZFS filesystem), but not
> > for the individual disks.
> > 
> > Hope this clarifies my request a bit
> 
> To my knowledge this kind of data is not kept/available in ZFS (FreeBSD
> or Solaris).  What you actually want are counters rather than averages,
> so you can do any averaging yourself if you want it.  "zpool iostat"
> does not provide this.
> 
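The per-disk counters do exist, though, so you can approximate a pool-wide
total by summing iostat's cumulative columns over the pool's member disks.
A rough sketch (assuming a pool called "tank" whose leaf vdevs are whole
mfid* disks, as in the output above; adjust the pattern to taste):

  disks=$(zpool status tank | awk '$1 ~ /^mfid/ { print $1 }')
  iostat -Idx $disks | awk '
      NR > 2 { r += $4; w += $5 }
      END    { printf "read %.1f KB, written %.1f KB\n", r, w }'
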
> With most utilities of this nature (iostat, mpstat, zpool iostat, gstat,
> vmstat, and so on), the established norm is that you always provide an
> interval and ignore the first sample shown.  In iostat's case on
> FreeBSD, that first sample is an average over the entire system uptime;
> other utilities do not work this way.
> 
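For example, to get one honest 5-second sample from a pool, ask for two
reports and throw the first one away ("tank" again being an example pool):

  zpool iostat tank 5 2 | tail -n 1
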
> Even if "zpool iostat" behaved like your iostat example above, you'd
> still run into the problem I described in my other mail: you get
> human-readable output rather than actual integers/floats, so you have to
> do math to turn the values back into integers, which sounds easy but
> isn't, and you lose granularity/accuracy in the process.
> 
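To illustrate the math involved: an awk function along the lines below can
expand the K/M/G/T suffixes back into raw numbers, but a figure like
"1.05M" only carries three significant digits, so the granularity is gone
for good:

  zpool iostat tank | awk '
      function raw(v,  n, s) {
          n = v + 0; s = substr(v, length(v), 1)
          if      (s == "K") n *= 1024
          else if (s == "M") n *= 1048576
          else if (s == "G") n *= 1073741824
          else if (s == "T") n *= 1099511627776
          return n
      }
      $1 == "tank" { printf "%.0f bytes/s read, %.0f bytes/s written\n",
                            raw($6), raw($7) }'
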
> I cannot explain why "zpool iostat" (note no interval argument!) shows
> some reads/writes.  For example, on my systems, the following loop:
> 
>   while true; do zpool iostat; done
> 
> ...literally returns the same data over and over, no matter what is
> going on with the pools (reads or writes).  I'm sure someone can explain
> this behaviour, but it reminds me of systems where running "vmstat 1"
> shows "crazy" values for the first interval, but the 2nd and onward
> are accurate.

It looks like the first set of numbers is the average over the time since
the vdev was loaded into the kernel (see print_vdev_stats).  It should be
easy to write a function that prints the unscaled raw values instead.
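
In the meantime a rough reconstruction is possible from the shell: if the
vdevs were loaded at boot, multiplying those since-load averages by the
uptime gives approximate totals.  Untested sketch, using the same suffix
expansion as above and assuming kern.boottime's usual "{ sec = N, ... }"
format:

  boot=$(sysctl -n kern.boottime | sed 's/{ sec = \([0-9]*\),.*/\1/')
  up=$(($(date +%s) - boot))
  zpool iostat tank | awk -v up="$up" '
      function raw(v,  n, s) {
          n = v + 0; s = substr(v, length(v), 1)
          if      (s == "K") n *= 1024
          else if (s == "M") n *= 1048576
          else if (s == "G") n *= 1073741824
          return n
      }
      $1 == "tank" { printf "~%.0f bytes read, ~%.0f bytes written\n",
                            raw($6) * up, raw($7) * up }'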

__Martin


