Date:      Wed, 29 Oct 2008 19:49:00 +0000
From:      Matthew Seaman <m.seaman@infracaninophile.co.uk>
To:        Rich Fairbanks <poppanecktie@gmail.com>
Cc:        questions@freebsd.org
Subject:   Re: Filesystem, RAID Question
Message-ID:  <4908BE2C.7010505@infracaninophile.co.uk>
In-Reply-To: <9f3798c00810291118i1c80cb8cw8d4995eabe6a4f8f@mail.gmail.com>
References:  <9f3798c00810291118i1c80cb8cw8d4995eabe6a4f8f@mail.gmail.com>

Rich Fairbanks wrote:

> Now, this is how I set up the array. I installed the card, popped in the
> drives. The card BIOS found the drives and allowed me to set up in RAID 5.
> Then, FreeBSD booted and found the "disk" as da0. I want the entire array to
> be one big chunk of space. In other words, I don't need a bunch of slices or
> partitions (or DO I? I'm still very new to the whole slice vs. partition
> concept)

The default settings should actually work just about right for a
general purpose file system with reasonably sized files.  A RAID5
across 3x1TB drives will give you about 2TB usable space (one disk's
worth goes to parity: (3 - 1) x 1TB = 2TB) -- that's within the
capabilities of UFS2, so you should be OK there.  However, a 3-disk
RAID5 is the worst-performing RAID5 setup you can create: with only
two data disks per stripe there is hardly any parallelism to offset
the read-modify-write cost of parity updates.  A larger number of
smaller disks would probably have served you better.

> I typed newfs /dev/da0 . A ton of numbers went across the screen, then I
> mounted /dev/da0 at /usr/home/storage. It works, but perhaps I missed a step
> that would have made things easier/perform better, etc.

The sort of changes you can make at newfs time mostly affect how
efficient the storage is -- i.e. tuning the system for particularly
large or small files.  While newfs and tunefs can affect performance,
they aren't the first thing to look at here.
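
If you do want to experiment with those knobs later, it would look
something like this (a sketch only: newfs destroys the existing
filesystem, so back up and unmount first, and the sizes shown are
just the UFS2 defaults, not a recommendation):

    # umount /usr/home/storage
    # tunefs -p /dev/da0                  # print the current tuning parameters
    # newfs -U -b 16384 -f 2048 /dev/da0  # soft updates on, default block/frag sizes
    # mount /dev/da0 /usr/home/storage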

> Besides creating the file system a different way, what would be an optimum
> stripe size for the array? I will be using this for storing, basically, a TON
> of word documents and email messages, and a few large .pst files. So, the
> average file size will be in the 25-100K range, but a few 1-2GB files.

Just take the default stripe size the array controller presents you
with -- it will be appropriate for this sort of mix of file sizes.

The first thing to consider is what sort of IO caching strategy your
hardware is using.  Does your RAID controller have a battery backup
unit?  Probably not, as that tends to add a large whack onto the price.

If not, then your array controller will not report an IO operation as
complete to the OS until the bits have been written to the disk[*].
With a BBU, the controller can report the operation as complete as
soon as the data is stored in (battery-backed) RAM on the controller.
These modes are called 'write through' (wait for the disk) and
'write back' (acknowledge from the controller cache) respectively.

Given that you don't have a BBU, what is the status of write caching
on the individual hard drives?  You'll have to use 3dm2 or the CLI
equivalent to investigate this, as the RAID controller tends to hide
that level of information from the OS.  However, this setting is the
same thing as is controlled by the hw.ata.wc sysctl -- and like that
it has a major effect on disk IO performance.  Turning write caching
off is the safe, conservative thing to do for maximum data security.
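
If yours is a 3ware card (the mention of 3dm2 suggests so), the CLI
check would look something like this -- I'm quoting tw_cli syntax
from memory, so treat the controller and unit numbers as examples
and check the manpage:

    # tw_cli /c0/u0 show           # show unit status, including the cache setting
    # tw_cli /c0/u0 set cache=off  # turn the drive write cache off for unit 0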

Turning write caching on is the only way to get decent performance out
of ordinary hard drives, but it leaves you open to data loss if the
machine should crash or lose power suddenly.  Most systems with ATA
or ordinary SATA drives default to using write caching.  SCSI and fast
SAS drives can be configured either way.
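
For disks FreeBSD can see directly (it can't see through your RAID
card, so the da0 below is purely illustrative) the settings can be
inspected from the OS side:

    # sysctl hw.ata.wc              # 1 = write caching enabled on ATA disks
    # camcontrol modepage da0 -m 8  # SCSI/SAS caching mode page; look at the WCE bit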

You'd always turn disk-level write caching off if you've got a BBU,
because it's made redundant in that case by the controller's memory
cache.

If fiddling with write caching can't make things any better, then I'd
reconsider using RAID5.  Unfortunately, 3 disks doesn't leave you with
many options.  Add another drive of the same size and you can make a
4-disk RAID10 with 2TB usable space.  Or you can configure the RAID
controller to act as a JBOD and try out ZFS -- the RAID-Z mode is
the moral equivalent of RAID5 but quite different in operation.
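
A minimal sketch of that, assuming the three drives show up as da0,
da1 and da2 once the controller is in JBOD mode (the device names and
pool name are just examples):

    # zpool create tank raidz da0 da1 da2                # RAID-Z pool across the three drives
    # zfs create tank/storage                            # a filesystem within the pool
    # zfs set mountpoint=/usr/home/storage tank/storage  # mount it where the UFS volume was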

	Cheers,

	Matthew

[*] Some disks have been known to lie about completing IO transactions
even when set to the most conservative mode.  IMHO they aren't fit for
purpose, and should you be landed with such things you'd be entitled
to a refund from the vendor.

--
Dr Matthew J Seaman MA, D.Phil.                   7 Priory Courtyard
                                                  Flat 3
PGP: http://www.infracaninophile.co.uk/pgpkey     Ramsgate
                                                  Kent, CT11 9PW

