Date: Wed, 23 Jan 2013 23:52:32 +0100 (CET)
From: Wojciech Puchar <wojtek@wojtek.tensor.gdynia.pl>
To: Steven Chamberlain <steven@pyro.eu.org>
Cc: freebsd-fs <freebsd-fs@freebsd.org>, Mark Felder <feld@feld.me>, Chris Rees <utisoft@gmail.com>
Subject: Re: ZFS regimen: scrub, scrub, scrub and scrub again.
Message-ID: <alpine.BSF.2.00.1301232347110.2474@wojtek.tensor.gdynia.pl>
In-Reply-To: <510067DC.7030707@pyro.eu.org>
References: <CACpH0Mf6sNb8JOsTzC+WSfQRB62+Zn7VtzEnihEKmEV2aO2p+w@mail.gmail.com> <alpine.BSF.2.00.1301211201570.9447@wojtek.tensor.gdynia.pl> <20130122073641.GH30633@server.rulingia.com> <alpine.BSF.2.00.1301232121430.1659@wojtek.tensor.gdynia.pl> <CADLo838Rst7wEtV7DpY23XjpcFCsOkrN=axE1AscyO7vYgSKSg@mail.gmail.com> <op.wrdudwx334t2sn@markf.office.supranet.net> <CAFqOu6gcvTEYCtLEUoyd4tX7acrk=V85u4EuNiDWVj4X+0Dcpg@mail.gmail.com> <alpine.BSF.2.00.1301232240200.2067@wojtek.tensor.gdynia.pl> <510067DC.7030707@pyro.eu.org>
>> unless your work is serving movies it doesn't matter.
>
> That's why I find it really interesting the Netflix Open Connect
> appliance didn't use ZFS - it would have seemed perfect for that
> application.

It "seems perfect" only to ZFS marketers and their victims. At best it
is usable, and it is dangerous.

For that application, doing it with UFS is ACTUALLY perfect. Large
parallel transfers are great with UFS: >95% of platter speed is normal,
with near-zero CPU load, and the amount of metadata is so small that it
doesn't matter for performance or fsck time (and +J would make fsck
time even smaller).

Getting about 90% of platter speed under a multitasking load is
possible with a proper setup.

> http://lists.freebsd.org/pipermail/freebsd-stable/2012-June/068129.html
>
> Instead there are plain UFS+J filesystems on some 36 disks and no RAID -
> it tries to handle almost everything at the application layer instead.

This is exactly the kind of setup I would do in their case.

They can always restore the data, because the master movie storage is
not on these boxes. If 2 drives fail at the same time, they only have
to restore 2 drives - not 36 :)

The "application layer" part is quite trivial - just store where each
movie is (a sketch of that bookkeeping is below).

Such a setup could easily handle two 10Gb/s cards, or more if the load
is spread over the drives.
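To illustrate the "just store where each movie is" point, here is a
minimal sketch of that bookkeeping. It is not from the thread and not
how Netflix actually does it; the mount-point names (/disk0../disk35),
the catalog file location, and the function names are assumptions made
up for the example: each title lives whole on exactly one independent
UFS filesystem, and a small catalog records which one.

```python
#!/usr/bin/env python3
"""Sketch of per-drive movie placement with no RAID and no striping.
Mount points, catalog path and layout are illustrative assumptions."""

import json
import os

N_DISKS = 36                                    # one plain UFS+J filesystem per drive
MOUNTS = [f"/disk{i}" for i in range(N_DISKS)]  # assumed mount-point naming
CATALOG = "/var/db/movie-catalog.json"          # assumed catalog location

def load_catalog() -> dict:
    """The catalog is just {title: disk_index}; any persistent store would do."""
    if os.path.exists(CATALOG):
        with open(CATALOG) as f:
            return json.load(f)
    return {}

def place_movie(catalog: dict, title: str) -> str:
    """Pick the drive with the most free space, remember the choice,
    and return the path the movie should be written to."""
    free = [(os.statvfs(m).f_bavail * os.statvfs(m).f_frsize, i)
            for i, m in enumerate(MOUNTS)]
    _, disk = max(free)
    catalog[title] = disk
    with open(CATALOG, "w") as f:
        json.dump(catalog, f)
    return os.path.join(MOUNTS[disk], title)

def movie_path(catalog: dict, title: str) -> str:
    """Lookup at serve time: just open the file on its own filesystem."""
    return os.path.join(MOUNTS[catalog[title]], title)
```

The arithmetic behind the two-NIC claim is simple: two 10Gb/s cards are
roughly 2.5 GB/s of aggregate reads, which across 36 drives is about
70 MB/s per drive - well within a single disk's sequential rate, as
long as the requests are spread reasonably evenly over the drives.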