Date:      Tue, 28 Aug 2012 20:50:03 +0200
From:      Niki Hammler <mailinglists@nobaq.net>
To:        Freddie Cash <fjwcash@gmail.com>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: zvol + raidz issue?
Message-ID:  <503D12DB.3040909@nobaq.net>
In-Reply-To: <CAOjFWZ6a6h_o1sq6KGN-N5Jnq9NSj6KvjEyfvEXXYrznw+j8fA@mail.gmail.com>
References:  <503A6F9F.7070801@nobaq.net> <CAOjFWZ4Ep5ZONO+7UNqt36stFN_OXoMhK=83UwPpv51P8OTjfg@mail.gmail.com> <503C8AEE.1090703@nobaq.net> <CAOjFWZ6a6h_o1sq6KGN-N5Jnq9NSj6KvjEyfvEXXYrznw+j8fA@mail.gmail.com>

On 28.08.2012 17:27, Freddie Cash wrote:
> On Tue, Aug 28, 2012 at 2:10 AM, Niki Hammler <mailinglists@nobaq.net> wrote:
>> On 26.08.2012 22:13, Freddie Cash wrote:
>>> (Sorry for top-post, sending from phone.)
>>>
>>> Please show the command-line used to create the zvol. Especially the
>>> recordsize option.  When using zvols, you have to make sure to match the
>>> recordsize of the zvol to that of the filesystem used above it.
>>> Otherwise, performance will be atrocious.
>>
>> Sorry for my third posting on this.
>> Now I strictly followed your suggestion and used
>>
>> zfs create -b 128k -V 500g plvl5i0/zvtest
>>
>> (with 128k being the recordsize of the dataset in the zpool).
>>
>> Suddenly the write performance increased from 2.5 MB/s to 250 MB/s
>> (or 78 MB/s when using bs=4096 with dd).
>>
>> 1.) How can this be explained?
>> 2.) Is there any problem when choosing -b 128k (can I always blindly
>> choose -b 128k)?
>>
>> Remember again that the problem ONLY occurs with raidz1+zvol+force 4096
>> block alignment and in no other case!
> 
> Most likely it has to do with the raidz stripe size and the constant
> block size of the zvol causing alignment or similar issues.
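The alignment effect described above can be illustrated with some rough arithmetic. This is a sketch under assumed geometry (a 3-disk raidz1 with 4 KiB sectors, i.e. ashift=12, ignoring raidz padding sectors), not a measurement of this pool: each block is split across the two data disks, with one parity sector per stripe row, so a small fixed volblocksize pays a much larger parity overhead than a 128 KiB block.

```shell
# Assumed geometry: 3-disk raidz1 = 2 data disks + 1 parity, 4 KiB sectors.
sector=4096
data_disks=2

# Percentage of allocated sectors that are parity, for one block of size $1.
overhead() {
    data=$(( ($1 + sector - 1) / sector ))              # data sectors needed
    rows=$(( (data + data_disks - 1) / data_disks ))    # one parity sector per row
    echo "$(( 100 * rows / (data + rows) ))"
}

echo "4 KiB blocks:   $(overhead 4096)% parity"      # 50% parity
echo "128 KiB blocks: $(overhead 131072)% parity"    # 33% parity
```

Beyond the space overhead, forcing every zvol write into small fixed-size blocks also means far more, smaller I/Os per stripe than a 128 KiB block would generate, which fits the observed throughput difference.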

Yes, but I thought a zvol was essentially just a file in a zpool?
How can it be that writing to a regular file (with the same
parameters!) is fine, while writing to a zvol is drastically slower?

Is there anything I can check/debug?
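A few things that could be checked with the standard ZFS tooling (a sketch; the pool and zvol names are taken from the `zfs create` command earlier in the thread, and your dataset name may differ):

```shell
# Compare the zvol's fixed block size with the recordsize of the
# dataset it is being benchmarked against:
zfs get volblocksize plvl5i0/zvtest
zfs get recordsize plvl5i0

# Confirm the sector-size assumption (ashift) the vdev was created with:
zdb -C plvl5i0 | grep ashift

# Watch per-vdev activity while the slow dd runs, to see whether the
# raidz is issuing many small I/Os:
zpool iostat -v plvl5i0 1
```

If volblocksize is much smaller than the I/O size the upper filesystem issues (or mismatched with the pool's ashift), that mismatch would point at the alignment explanation above.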

I have already filed a bug against FreeNAS, but since this seems to be
FreeBSD-related I will also file a bug report against FreeBSD.

> I've also seen indications in the zfs-discuss mailing list about
> optimal and sub-optimal disk configurations for the various raidz
> types (wrong number of disks in the vdev leads to horrible
> performance).

Yes, I know about this. In my case it's 3x2TB, which should be fine.

Regards,
Niki

Want to link to this message? Use this URL: <https://mail-archive.FreeBSD.org/cgi/mid.cgi?503D12DB.3040909>