Date:      Mon, 28 Jan 2013 16:12:28 +0000
From:      Laurence Gill <laurencesgill@googlemail.com>
To:        freebsd-fs@freebsd.org
Subject:   Re: HAST performance overheads?
Message-ID:  <20130128161228.477ce174@googlemail.com>
In-Reply-To: <20130128120055.6ca7c734@googlemail.com>
References:  <20130125121044.1afac72e@googlemail.com> <20130127134845.GC1346@garage.freebsd.pl> <20130128120055.6ca7c734@googlemail.com>

On Mon, 28 Jan 2013 12:00:55 +0000
Laurence Gill <laurencesgill@googlemail.com> wrote:
> On Sun, 27 Jan 2013 14:48:46 +0100
> Pawel Jakub Dawidek <pjd@FreeBSD.org> wrote:
> > 
> > Let's try to test one step at a time. Can you try to compare
> > sequential performance of regular disk vs. HAST with no secondary
> > configured?
> > 
> > By no secondary configured I mean 'remote' set to 'none'.
> > 
> > Just do:
> > 
> > 	# dd if=/dev/zero of=/dev/da0 bs=1m count=10240
> > 
> > then configure HAST and:
> > 
> > 	# dd if=/dev/zero of=/dev/hast/disk0 bs=1m count=10240
> > 
> > Which FreeBSD version is it?
> > 
> > PS. Your ZFS tests are pretty meaningless, because it is possible
> > that everything will end up in memory. I'm sure this is what
> > happens in the 'bs=16k count=65535' case. Let's try raw providers first.
> > 
> 
> Thanks for the reply.  I'm using FreeBSD 9.1-RELEASE. Here are the
> results:
> 
>  # dd if=/dev/zero of=/dev/da0 bs=1m count=10240
>  10737418240 bytes transferred in 755.144644 secs (14219022 bytes/sec)
> 
>  # dd if=/dev/zero of=/dev/hast/disk0 bs=1m count=10240
>  10737418240 bytes transferred in 844.167602 secs (12719534 bytes/sec)
> 
> 
> Which indicates a relatively small overhead (roughly 14.2 MB/s vs 12.7 MB/s), hmmm...
> 
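
For reference, a minimal single-node setup for this kind of test would look
roughly like the following; the node name "hostA" is just a placeholder for
the machine's actual hostname:

 /etc/hast.conf:
 resource disk0 {
         on hostA {
                 local /dev/da0
                 remote none
         }
 }

 # hastctl create disk0
 # service hastd onestart
 # hastctl role primary disk0

after which the provider shows up as /dev/hast/disk0.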

Further to this, sticking with the single disk for testing, I see the
following:

 - UFS on da0
 # dd if=/dev/zero of=test.dat bs=1m count=10240
 10737418240 bytes transferred in 76.112873 secs (141072302 bytes/sec)

 - UFS on hast/disk0
 # dd if=/dev/zero of=test.dat bs=1m count=10240
 10737418240 bytes transferred in 855.720985 secs (12547803 bytes/sec)

Which is roughly the same as using the raw hast provider.
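
(For completeness: the UFS runs assume a freshly newfs'd filesystem with
default options and an illustrative mountpoint, something like

 # newfs -U /dev/hast/disk0
 # mount /dev/hast/disk0 /mnt
 # cd /mnt

and the same on da0 for the raw-disk case.)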


 - ZFS on da0
 # dd if=/dev/zero of=test.dat bs=1m count=10240
 10737418240 bytes transferred in 114.338900 secs (93908707 bytes/sec)

 - ZFS on hast/disk0
 # dd if=/dev/zero of=test.dat bs=1m count=10240
 10737418240 bytes transferred in 1287.088416 secs (8342409 bytes/sec)

Which is about 4 MB/s slower than the raw hast provider.
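
(Likewise, the ZFS runs assume plain single-disk pools created with default
options, roughly

 # zpool create testpool /dev/hast/disk0
 # cd /testpool

where "testpool" is only an illustrative name.)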

So I'm still trying to work out why there is this extra "drop" when using
ZFS on hast...
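
One thing I might try next is watching the providers while the dd runs, to
see what size and rate of I/O actually reaches the disk, e.g.

 # gstat -f 'da0|hast'

together with "zpool iostat -v 1" during the ZFS-on-hast run.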



-- 
Laurence Gill

f: 08721 157 665
skype: laurencegg
e: laurencesgill@googlemail.com
PGP on Key Servers

