Date:      Fri, 26 Aug 2011 22:12:17 +0400
From:      Lev Serebryakov <lev@FreeBSD.org>
To:        fs@freebsd.org
Subject:   Strange behaviour of UFS2+SU FS on FreeBSD 8-Stable: dreadful performance for old data, excellent for new.
Message-ID:  <1164434239.20110826221217@serebryakov.spb.ru>

Hello, Fs.

  It is ``common knowledge'' that UFS doesn't need defragmentation. But
it seems not to be true in my (corner?) case.

  I have a FS which was growfs(8)ed from 2Tb (4x500Gb HDDs; all sizes
are commercial, so really slightly less) to 8Tb (4x2Tb HDDs).

  It was almost full (~15% free space) before growing.

  Now it is in much better shape:

=============
/dev/raid5/storage    7.1T      2T    4.5T    30%    /usr/home
=============

  But ALL old data is read at a speed of about 20-30MiB/s (dd to
/dev/null with bs=128k). I've checked the top-10 files (by size) and
all of them read at such speed, like:

=============
blob# dd if=/usr/home/storage/Video/some-film.mkv of=/dev/null bs=128k
57972+1 records in
57972+1 records out
7598542537 bytes transferred in 305.013037 secs (24912189 bytes/sec)
blob#
=============

 But it is not a software RAID5 or UFS-as-a-whole problem. I can create
a file with random data at about 280MiB/s with a simple program which
writes a 128Kb buffer with write(2) again and again (up to 32Gb, for
example), and after that this new file can be read at 175MiB/s (which
is less than 50% of the theoretical maximum, but not bad, IMHO). Yes,
this box has only 2GiB of RAM, so it IS NOT reading from cache:

=============
blob# ./generate big.file.dat
Size: 34359738368 bytes, Speed: 283964779 bytes/s
blob# dd if=big.file.dat of=/dev/null bs=128k
34359738368 bytes transferred in 196.044398 secs (175265086 bytes/sec)
blob#
=============

   How could I improve the situation with "old" data? "Backup, recreate
the FS and restore" is not an option, as I don't have 2TB+ of redundant
space at hand, and backup to one 2Tb external disk is not what I want
at all.

 Would copying files one by one inside this FS help? Like, create
"/usr/home/copy", copy everything into this directory, remove the
originals, and move all files back out of "/copy"? Or is that a bad
idea too?

-- 
// Black Lion AKA Lev Serebryakov <lev@FreeBSD.org>



