Date: Sat, 5 Jul 2014 00:19:38 +0300
From: Stefan Parvu <sparvu@systemdatarecorder.org>
To: Roger Pau Monné <roger.pau@citrix.com>
Cc: freebsd-fs@freebsd.org, FreeBSD Hackers <freebsd-hackers@freebsd.org>
Subject: Re: Strange IO performance with UFS
Message-ID: <20140705001938.54a3873dd698080d93d840e2@systemdatarecorder.org>
In-Reply-To: <53B69C73.7090806@citrix.com>
References: <53B691EA.3070108@citrix.com> <53B69C73.7090806@citrix.com>
Hi,

> > I'm doing some tests on IO performance using fio, and I've found
> > something weird when using UFS and large files. I have the following
> > very simple sequential fio workload:

System: FreeBSD ox 10.0-RELEASE-p6 FreeBSD 10.0-RELEASE-p6 #0: Tue Jun 24 07:47:37 UTC 2014
root@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC amd64

1. Sequential write to 1 file, 10GB size, single writer, block 4k, UFS2:

I tried a sequential write with a single writer, using an IOSIZE similar to
your example: 10 GB to a 14TB hardware RAID 10 LSI device, using fio 2.1.9
under FreeBSD 10.0.

Result:

Run status group 0 (all jobs):
  WRITE: io=10240MB, aggrb=460993KB/s, minb=460993KB/s, maxb=460993KB/s,
  mint=22746msec, maxt=22746msec

2. Sequential write to 2500 files, each file 5MB size, multiple writers, UFS2:

Result:

Run status group 0 (all jobs):
  WRITE: io=12500MB, aggrb=167429KB/s, minb=334KB/s, maxb=9968KB/s,
  mint=2568msec, maxt=76450msec

Questions:
 - where are you writing, to what storage: hardware or software RAID?
 - are you using time-based fio tests?

For fun, I can share some results we have been collecting on FreeBSD 10 amd64
(f10) and Debian 7 amd64 (d7) using LSI hardware RAID 10. We don't use
time-based fio; instead we write the full IOSIZE once and measure the elapsed
time. This has proved more accurate and returns saner results than keeping fio
running for 15 or 30 minutes.
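For reference, the two workloads above could be expressed as a fio job file
roughly along these lines. This is only a sketch: the directory path, job
names and ioengine are illustrative assumptions, not the exact jobs we ran.

```ini
; Sketch of the two sequential-write tests, not the exact job file used.

[global]
rw=write             ; sequential write
bs=4k                ; 4k block size
ioengine=psync       ; assumption; any synchronous engine behaves similarly
directory=/mnt/ufs   ; hypothetical UFS2 mount point

[single-10g]
size=10g             ; test 1: one 10 GB file, single writer

[many-5m]
stonewall            ; run only after the first job completes
numjobs=2500         ; test 2: one writer per file
size=5m              ; each job writes its own 5 MB file (12500 MB total)
```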
Id   Test_Name             Throughput   Utilization   Idle
1    f10.raid10.4k.2500     23 MB/s      8%           92%
2    f10.raid10.4k.5000     18 MB/s      9%           91%
3    f10.raid10.64k.2500   215 MB/s     22%           78%
4    f10.raid10.64k.5000   162 MB/s     18%           82%

                                                     idle  + iowait
5    d7.raid10.4k.2500      29 MB/s      2%          65.08 + 32.93
6    d7.raid10.4k.5000      29 MB/s      3%          53.68 + 43.79
7    d7.raid10.64k.2500    297 MB/s      3%          56.44 + 41.11
8    d7.raid10.64k.5000    182 MB/s      4%          12.85 + 83.85

-- 
Stefan Parvu <sparvu@systemdatarecorder.org>