Date:      Sat, 30 Oct 2010 19:56:56 +0200
From:      Peter Ankerstål <peter@pean.org>
To:        Sean <sean@ttys0.net>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: RAID + ZFS performance.
Message-ID:  <86693036-9351-4303-BADA-C99F7A4C375C@pean.org>
In-Reply-To: <AANLkTinQWchAPtcqcO3mDt9gKK5tCsHo8khyiD69M4BV@mail.gmail.com>
References:  <D2954020-C3A0-46EC-8C64-EB57EA1E9B21@pean.org> <AANLkTinQWchAPtcqcO3mDt9gKK5tCsHo8khyiD69M4BV@mail.gmail.com>



On 30 Oct 2010, at 19:39, Sean wrote:

>> I have a question about RAID and ZFS. I have a hardware RAID running:
>> a mirror that's the only storage in my ZFS pool. I'm going to
>> add another mirror to the machine, and my question is, what is the
>> best option performance-wise?
>
> The best performance option is to get rid of the hardware-raid, and
> present each disc to ZFS in a JBOD fashion.

OK. Right now that's not an option. I have a da0 device that's a
hardware RAID mirror, and it is currently
the only device in the only pool on the machine.

>
>> Is it better to add the other mirror to the same pool, or to create a
>> separate pool for that mirror?
>> Btw, today my disks are quite saturated r/w-wise.
>
> RAID functionality only exists within the context of a single pool.
> You don't create a new pool and then try to mirror the two pools. You
> add the storage to an existing pool, unless you have a reason to start
> a new pool. When I already have a mirror, I like to add new mirror
> sets. It's the equivalent of a RAID 10 scenario.
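Sean's suggestion of adding a new mirror set to the existing pool might be sketched like this. The pool name (tank) and device names (da1, da2) are assumptions for illustration:

```shell
# Hypothetical sketch: add a second mirror vdev to an existing pool.
# ZFS then stripes new writes across both mirrors, the RAID 10-style
# layout described above.
zpool add tank mirror da1 da2

# Verify that the pool now shows two mirror vdevs:
zpool status tank
```

Note that existing data is not rebalanced; only new allocations spread across both vdevs, so a saturated old mirror stays busy until writes naturally redistribute.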

Yes, I know.
I thought that, because the existing pool is already kind of r/w
saturated, it might be better
to create a new, independent pool for the new drives. That way the
heavy activity
would not "spread" to the new drives.
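For comparison, the separate-pool option might look something like this. This is only a sketch; the pool and device names (tank2, da1, da2) are assumptions, not anything from the thread:

```shell
# Hypothetical sketch: create a new, independent pool from two single
# drives as a ZFS mirror. I/O to this pool is isolated from the old one.
zpool create tank2 mirror da1 da2

# Confirm the layout and health of the new pool:
zpool status tank2
```

The trade-off is that two pools never share bandwidth or free space, whereas one pool with two mirror vdevs stripes load across all four disks.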

Now you've presented me with a third option. So you think I should skip
creating
a new hardware RAID mirror and instead use two single drives and add
these as
a mirror to the existing pool? How will ZFS handle hotswap of these
drives?
I've seen a few crashes due to ata-detach on other systems.


>
> -Sean
>



