Date:      Tue, 22 Apr 2014 20:45:59 -0500
From:      Adam Vande More <amvandemore@gmail.com>
To:        Adam Vande More <amvandemore@gmail.com>, David Wolfskill <david@catwhisker.org>,  Louis Kowolowski <louisk@cryptomonkeys.org>, hackers@freebsd.org
Subject:   Re: Pointer to info on migrating from UFS2 -> ZFS?
Message-ID:  <CA+tpaK2TGYsrNjnOaqH-6RQteKKnw2X2ihjfJWWbCoiZVxGZ4w@mail.gmail.com>
In-Reply-To: <20140423010417.GH43976@funkthat.com>
References:  <5355E9F9.5080401@freebsd.org> <63190425-672D-4A05-AAB0-B19A49EDB739@cryptomonkeys.org> <20140422222525.GR1321@albert.catwhisker.org> <CA+tpaK1tYvTOGRtjdsHzr595OSofiuyZgALoXZpoynUzK8zO+w@mail.gmail.com> <20140423010417.GH43976@funkthat.com>

On Tue, Apr 22, 2014 at 8:04 PM, John-Mark Gurney <jmg@funkthat.com> wrote:

> Adam Vande More wrote this message on Tue, Apr 22, 2014 at 19:50 -0500:
> > On Tue, Apr 22, 2014 at 5:25 PM, David Wolfskill <david@catwhisker.org>
> > wrote:
> >
> > > I appreciate the responses, but I seem to have failed to communicate at
> > > least a couple of fairly important aspects of what I'm trying to do.
> > > So....
> > >
> > > On Mon, Apr 21, 2014 at 06:40:05PM -0700, Louis Kowolowski wrote:
> > > > I'd probably suggest a couple things:
> > > > * VirtualBox (or equiv) for setting up test environments that are easy
> > > > to create and destroy. For all the beginning stuff I can think of, you
> > > > should be able to do just fine with a virtual environment. VMs with a half
> > > > dozen virtual disks that are 2G ea come in handy when playing with ZFS.
> > >
> > > I have existing hardware -- several instantiations of it, including a
> > > couple of test machines.  I am trying to find out if the use of ZFS (vs.
> > > UFS2+SU) on the existing hardware will provide a performance advantage
> > > (and if so, how much, as switching from UFS2 to ZFS is going to be
> > > extremely painful).
> >
> > It's very difficult to make any detailed concise comment since we know
> > virtually nothing about your hw or workload.  What do you need?  More iops?
> > Then use a ZIL (maybe even a battery backed DDR drive) to increase writes,
>
> But that is only for sync writes, which are for things like fsync...
> ZFS delays writes for vfs.zfs.txg.timeout seconds and combines
> them into transaction groups, so unless you're running a db that does
> fsync or an NFS server, a ZIL probably won't help you as much as you
> think it will...  Obviously benchmark your use case w/ and w/o ZIL...
>
> > and lots of RAM and a cache device to increase read speed.  When I had this
> > setup, diskinfo run on VMs backed by ZVOLs would reflect SSD, not 7200
> > spinning media speeds.
> >
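If you want to try that on one of the test boxes: a dedicated log device and a
cache device can be added to and removed from a live pool, so the w/ and w/o
ZIL comparison John-Mark suggests is cheap to set up.  Rough sketch only; the
pool name "tank" and the ada* device names are placeholders for whatever you
actually have:

  zpool add tank log ada2      # SSD as dedicated log (SLOG) for sync writes
  zpool add tank cache ada3    # SSD as L2ARC read cache
  sysctl vfs.zfs.txg.timeout   # interval (seconds) async writes are batched over
  # run your workload, then pull the devices and rerun it
  zpool remove tank ada2
  zpool remove tank ada3
  zpool status tank            # confirm the layout before/after
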
> > Also things like transparent compression can help certain workloads
> > tremendously.  If you're dealing with 99% text data, by compressing the data
> > you drastically lower the iops needed to work the data and off-load the
> > work to the CPUs, which are obviously a lot faster than disk.
> >
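On the compression point, it is quick to measure what it buys for a given
data set (lz4 is the usual choice since it backs off on incompressible
blocks; the dataset name below is just an example):

  zfs set compression=lz4 tank/textdata
  # copy the data in -- compression only applies to blocks written after
  # the property is set; existing data stays as-is until rewritten
  zfs get compressratio tank/textdata
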
> > There are also a lot of different RAID(z) qualities, so care should be taken
> > when choosing layouts.
>
> Yes, it should be... remember that raidz is closer to RAID3 than RAID5
> in terms of IOPS, but doesn't suffer the read-modify-write issue that
> RAID5 has...  So you won't necessarily get the same IOPS from a raidz
> config as you would from a hardware raid5 system...
>
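To put rough numbers on the IOPS point: a single raidz vdev delivers roughly
the random-read IOPS of one member disk, while striped mirrors scale with the
number of vdevs.  With the same six disks (device names again placeholders),
the two layouts would be created like:

  # one raidz vdev: ~5 disks of space, random IOPS of roughly 1 disk
  zpool create tank raidz ada1 ada2 ada3 ada4 ada5 ada6

  # three mirrored pairs: ~3 disks of space, random IOPS scaling with 3 vdevs
  zpool create tank mirror ada1 ada2 mirror ada3 ada4 mirror ada5 ada6
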
> --
>   John-Mark Gurney                              Voice: +1 415 225 5579
>
>      "All that I will do, has been done, All that I have, has not."
>


For even more raid5 fun, check out a "punctured stripe".

-- 
Adam


