Date: Tue, 28 Aug 2012 11:10:06 +0200
From: Niki Hammler <mailinglists@nobaq.net>
To: Freddie Cash <fjwcash@gmail.com>
Cc: freebsd-fs@freebsd.org
Subject: Re: zvol + raidz issue?
Message-ID: <503C8AEE.1090703@nobaq.net>
In-Reply-To: <CAOjFWZ4Ep5ZONO+7UNqt36stFN_OXoMhK=83UwPpv51P8OTjfg@mail.gmail.com>
References: <503A6F9F.7070801@nobaq.net>
 <CAOjFWZ4Ep5ZONO+7UNqt36stFN_OXoMhK=83UwPpv51P8OTjfg@mail.gmail.com>
On 26.08.2012 22:13, Freddie Cash wrote:
> (Sorry for top-post, sending from phone.)
>
> Please show the command-line used to create the zvol. Especially the
> recordsize option. When using zvols, you have to make sure to match the
> recordsize of the zvol to that of the filesystem used above it.
> Otherwise, performance will be atrocious.

Hi,

Sorry for my third posting on this. Now I strictly followed your
suggestion and used

  zfs create -b 128k -V 500g plvl5i0/zvtest

(with 128k being the recordsize of the dataset in the zpool). Suddenly
the write performance increased from 2.5 MB/s to 250 MB/s (or 78 MB/s
when using bs=4096 with dd).

1.) How can this be explained?
2.) Is there any problem with choosing -b 128k (i.e. can I always
blindly choose -b 128k)?

Remember again that the problem ONLY occurs with raidz1 + zvol + forced
4096-byte alignment, and in no other case!

Regards,
Niki

> On Aug 26, 2012 11:50 AM, "Niki Hammler" <mailinglists@nobaq.net> wrote:
>
> Hi,
>
> Given: a new HP ProLiant MicroServer N40L (4 GB RAM) and 3x 2 TB SATA
> drives (SAMSUNG HD204UI, ST32000542AS, WDC WD20EARX-00PASB0).
>
> Goal: a RAIDZ1 containing datasets and zvols to be exported via iSCSI.
>
> Issue: when I create a zvol on a RAIDZ1, I get horrible write
> performance (a few MB/s or less).
>
> First test: 500G zvol on a mirror (freshly created):
>
> # zpool list
> NAME      SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
> plvl1i0  1.81T  1.97G  1.81T   0%  ONLINE  /mnt
> # zfs list
> NAME             USED  AVAIL  REFER  MOUNTPOINT
> plvl1i0          500G  1.30T   112K  /mnt/plvl1i0
> plvl1i0/zvtest   500G  1.78T  1.97G  -
> # dd if=/dev/zero of=/dev/zvol/plvl1i0/zvtest bs=2048k count=1000
> 1000+0 records in
> 1000+0 records out
> 2097152000 bytes transferred in 17.318348 secs (121094230 bytes/sec)
> #
>
> This corresponds to 115.48 MB/s, which is good (similar results for a
> single drive).
>
> Second test: 500G zvol on the 3x 2 TB raidz1 (freshly created):
>
> # dd if=/dev/zero of=/dev/zvol/plvl5i0/zvtest bs=2048k count=1000
> 1000+0 records in
> 1000+0 records out
> 2097152000 bytes transferred in 700.126725 secs (2995389 bytes/sec)
> #
>
> which is only 2.85 MB/s.
>
> Remark: both pools were created with the force-4096-alignment option
> (since I have 512-byte and 4096-byte sector drives mixed).
>
> Now is the point where you might say the problem is related to the
> raidz1. But it is not: I created a 500G dataset in the same RAIDZ pool
> and copied about 100G of data onto it with rsync+ssh. Result: about
> 28 MB/s end-to-end performance, which is reasonable.
>
> Are there any known issues with zvol + raidz1? Googling returned an
> empty result set.
>
> I run a minimal FreeBSD 8.2 (FreeNAS):
>
> # uname -a
> FreeBSD zetta 8.2-RELEASE-p9 FreeBSD 8.2-RELEASE-p9 #0: Thu Jul 19
> 12:39:10 PDT 2012
> root@build.ixsystems.com:/build/home/jpaetzel/8.2.0/os-base/amd64/build/home/jpaetzel/8.2.0/FreeBSD/src/sys/FREENAS.amd64
> amd64
>
> Regards,
> Niki
>
> PS: This is also posted on
> http://forums.freenas.org/showthread.php?p=35590
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
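[Editor's sketch] The fix described in this reply can be generalized: query the
recordsize of the dataset (or of the filesystem that will sit on top of the
zvol) and pass the same value to -b when creating the zvol. The pool and zvol
names below are the ones from this thread; so that the sketch is safe to run
anywhere, it only prints the zfs create command instead of executing it.

```shell
# Names taken from the thread; substitute your own pool and zvol name.
POOL=plvl5i0
ZVOL=zvtest

# On a live system you would query the recordsize of the dataset, e.g.:
#   RECSIZE=$(zfs get -H -o value recordsize "$POOL")
# Here we assume 128k (the ZFS default), as reported in the thread.
RECSIZE=128k

# Create the zvol with a matching volblocksize (-b). Printed, not executed:
echo "zfs create -b $RECSIZE -V 500g $POOL/$ZVOL"
```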
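[Editor's note] The throughput figures quoted in the two dd tests can be
re-derived from dd's own byte counts and timings (MB here meaning MiB,
i.e. 1048576 bytes):

```shell
# Mirror test: 2097152000 bytes transferred in 17.318348 seconds
awk 'BEGIN { printf "%.2f MB/s\n", 2097152000 / 17.318348 / 1048576 }'   # 115.48 MB/s
# raidz1 test: 2097152000 bytes transferred in 700.126725 seconds
awk 'BEGIN { printf "%.2f MB/s\n", 2097152000 / 700.126725 / 1048576 }'  # 2.86 MB/s
```

(The raidz1 figure rounds to 2.86 MB/s; the "2.85" in the message truncates.)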