Skip site navigation (1)Skip section navigation (2)
Date:      Mon, 22 Jun 2009 01:57:20 +0300
From:      Dan Naumov <dan.naumov@gmail.com>
To:        =?windows-1252?Q?=8Aimun_Mikecin?= <numisemis@yahoo.com>
Cc:        "freebsd-fs@freebsd.org" <freebsd-fs@freebsd.org>
Subject:   Re: ufs2 / softupdates / ZFS / disk write cache
Message-ID:  <cf9b1ee00906211557l72aec9d9rab7561d12cf11b81@mail.gmail.com>
In-Reply-To: <570433.20373.qm@web37308.mail.mud.yahoo.com>
References:  <570433.20373.qm@web37308.mail.mud.yahoo.com>

next in thread | previous in thread | raw e-mail | index | archive | help
2009/6/21 =8Aimun Mikecin <numisemis@yahoo.com>

>
> 21. lip. 2009., u 13:41, Andrew Snow <andrew@modulus.org> napisao:
> > Folks who need to maximize safety and can't afford the performance
> > hit of no write cache need to do what they always have had to do in
> > the past - buy a controller card with battery-backed cached.
>
> Or:
> B) use SCSI instead of ATA disks
> C) use UFS+gjournal instead of UFS+SU
> D) use ZFS instead of UFS+SU


Actually I think a need a few clarifications regarding ZFS:

1) Does FreeBSD honor the "flush the cache to disk now" commands issued by
ZFS to the harrdive only when ZFS is used directly on top of a disk device
directly or does this also work when ZFS is used on top of a
slice/partition?
2) If we compare ZFS vs UFS+SU while using a regular "lying" SATA disk (wit=
h
write cache enabled) under heavy IO followed by a power loss. Which one is
going to recover better and why?


Sincerely,
- Dan Naumov



Want to link to this message? Use this URL: <https://mail-archive.FreeBSD.org/cgi/mid.cgi?cf9b1ee00906211557l72aec9d9rab7561d12cf11b81>