Date:      Sun, 28 Nov 2004 13:14:08 +0100
From:      Tomas Zvala <tomas@zvala.cz>
To:        freebsd-geom@freebsd.org
Subject:   geom_mirror performance issues
Message-ID:  <41A9C110.9050205@zvala.cz>

Hello,
	I've been playing with geom_mirror for a while now, and a few issues have
come to mind.
	a) I have two identical drives (Seagate 120GB SATA, 8MB cache, 7.2krpm)
that can each read sequentially at 58MB/s at the same time (about 115MB/s of
aggregate throughput). But when I put them in a geom_mirror I get 30MB/s per
disk at best, i.e. about 60MB/s for the mirror (roughly half the potential).
The throughput is almost the same for both the 'split' and 'load' balancing
algorithms, although with the 'load' algorithm it seems that all the reading
is done from just one drive.
	b) Pretty often I can see in gstat that both drives are doing the same
thing (the same number of transactions and the same throughput), yet one of
them shows a significantly higher load (e.g. one at 50% and the other at 95%).
How is disk load calculated, and why does this happen?
	c) When I use the 'split' balancing algorithm, 128kB requests are split
into two 64kB requests, doubling the number of transactions on the disks. Is
it possible to coax FreeBSD into issuing 256kB requests that would then be
split into two 128kB requests?
	d) When I use the round-robin algorithm, performance halves (I get about
20MB/s raw throughput). Why is this? I would expect round-robin to be the most
effective algorithm for reading, since every drive gets exactly half the load.
	e) My last question again concerns 'load' balancing. How often does it
switch between drives? When I set the balancing algorithm to 'load' I get 100%
load on one drive and 0%, or at most 5%, on the other. Is this intentional? It
seems like a bug to me. (The commands I use to switch algorithms are listed
below.)
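	For reference, I switch algorithms on the running mirror with gmirror
configure, roughly like this (gm0 is just an example name, mine may differ):

	gmirror configure -b load gm0
	gmirror configure -b round-robin gm0
	gmirror configure -b split -s 65536 gm0

	The -s argument, if I read gmirror(8) correctly, sets the slice size at
which the 'split' algorithm divides requests; 65536 is only an example value.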

	The last thing doesn't relate strictly to geom_mirror. I was wondering
whether it would be possible to implement some kind of read/write buffering at
the GEOM level, working roughly the same way read/write buffering works on HW
RAID cards. Would it have any effect on performance, or is it a step in the
wrong direction?
	Oh, not to forget: I was using a dumb dd if=<device> of=/dev/null
bs=1048576 count=10240 to do the 'benchmarks' and to study the behaviour of
the load balancing. Right now I'm trying to get some results from bonnie++ so
I can compare based on something other than sequential reads.
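	Concretely, the runs looked roughly like this (the device names are only
examples; I watched per-disk activity with gstat in a second terminal):

	# both raw drives at once
	dd if=/dev/ad4 of=/dev/null bs=1048576 count=10240 &
	dd if=/dev/ad6 of=/dev/null bs=1048576 count=10240 &

	# the mirror itself
	dd if=/dev/mirror/gm0 of=/dev/null bs=1048576 count=10240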
	
	Thank you for your time.

	Tomas Zvala


