Date:      Sat, 23 Oct 2010 14:06:23 -0500
From:      Kevin Day <toasty@dragondata.com>
To:        Eugene M. Kim <gene@nttmcl.com>
Cc:        fs@freebsd.org
Subject:   Re: ZFS: Parallel I/O to vdevs that appear to be separate physical disks but really are partitions
Message-ID:  <86C8DC50-9DE0-42B3-8A57-63AB4D095E6D@dragondata.com>
In-Reply-To: <4CC215B4.3050607@nttmcl.com>
References:  <4CC215B4.3050607@nttmcl.com>


On Oct 22, 2010, at 5:52 PM, Eugene M. Kim wrote:

>
> Greetings,
>
> I run a FreeBSD guest in VMware ESXi with a 10GB zpool.  Lately the
> originally provisioned 10GB has proved insufficient, and I would like
> to provision another 10GB virtual disk and add it to the zpool as a
> top-level vdev.
>
> The original and new virtual disks come from the same physical pool
> (a RAID-5 array), but appear as separate physical disks to the
> FreeBSD guest (da0 and da1).  I am afraid that ZFS would schedule
> I/O to the two virtual disks in parallel, expecting the pool
> performance to improve, while performance would actually suffer due
> to seeking back and forth between two regions of a single physical
> pool.


Just to chime in with a bit of practical experience here... it really
won't measurably matter for most workloads. FreeBSD will schedule the
I/O separately, but VMware will recombine the requests in the
hypervisor and reschedule them as best it can, since it knows the real
hardware layout.

It doesn't work well if you're running two mirrored disks, where your
OS might try round-robining requests between what it thinks are two
identical drives that can seek independently. But if there is only one
place the data can possibly be, you have no choice about where to read
it from. So the OS issues a bunch of requests as needed, and VMware
reorders them as best it can.
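
If you do go ahead and add the second virtual disk, it is a one-liner.
(A minimal sketch; the pool name "tank" is a placeholder, and I'm
assuming the new disk shows up as da1; substitute your own names.)

    # add da1 as a new top-level vdev; "tank" and "da1" are assumptions
    zpool add tank da1
    # confirm the new vdev appears and the capacity grew
    zpool status tank
    zpool list tank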

On occasion, where VMware is connected to a very large SAN or to local
storage with many (48+) drives, we've even seen small performance
increases from giving FreeBSD several small disks and using vinum or
ccd to stripe across them. If FreeBSD thinks there are half a dozen
drives, it will sometimes allow more outstanding I/O requests at a
time, and VMware can reschedule them at its whim.
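
If you ever want to try that multi-disk setup, here is a rough sketch
using gstripe(8), the GEOM striping class, in place of vinum or ccd
(same idea, different tool). The device names, stripe size, and mount
point below are all assumptions.

    # load the striping class if it isn't already in the kernel
    kldload geom_stripe
    # build one striped provider out of four small virtual disks
    gstripe label -v -s 131072 st0 /dev/da1 /dev/da2 /dev/da3 /dev/da4
    newfs /dev/stripe/st0
    mkdir -p /data
    mount /dev/stripe/st0 /data
    # (with ZFS you'd skip gstripe and simply run:
    #  zpool create tank da1 da2 da3 da4)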

-- Kevin



