Date:      Sat, 5 Jul 2014 00:19:38 +0300
From:      Stefan Parvu <sparvu@systemdatarecorder.org>
To:        Roger Pau Monné <roger.pau@citrix.com>
Cc:        freebsd-fs@freebsd.org, FreeBSD Hackers <freebsd-hackers@freebsd.org>
Subject:   Re: Strange IO performance with UFS
Message-ID:  <20140705001938.54a3873dd698080d93d840e2@systemdatarecorder.org>
In-Reply-To: <53B69C73.7090806@citrix.com>
References:  <53B691EA.3070108@citrix.com> <53B69C73.7090806@citrix.com>


Hi,

> > I'm doing some tests on IO performance using fio, and I've found
> > something weird when using UFS and large files. I have the following
> > very simple sequential fio workload:

System:
FreeBSD ox 10.0-RELEASE-p6 FreeBSD 10.0-RELEASE-p6 #0: Tue Jun 24 07:47:37 UTC 2014     
root@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC  amd64


1. Seq Write to 1 file, 10GB size, single writer, block 4k, UFS2:

I tried a sequential write with a single writer, using an IOSIZE similar to your example: 10 GB
to a 14 TB hardware RAID 10 LSI device, with fio 2.1.9 under FreeBSD 10.0.
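The job was roughly of the following shape (a sketch from memory, not the exact job file;
the ioengine and the target path are placeholders):

  [global]
  rw=write                  ; sequential writes
  bs=4k                     ; 4 KiB blocks
  size=10g                  ; total IOSIZE, written once
  ioengine=psync            ; assumed; any synchronous engine behaves similarly here

  [seq-1file]
  numjobs=1                 ; single writer
  filename=/raid10/fio.dat  ; placeholder path on the RAID 10 volume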

Result:
Run status group 0 (all jobs):
  WRITE: io=10240MB, aggrb=460993KB/s, minb=460993KB/s, maxb=460993KB/s, 
  mint=22746msec, maxt=22746msec


2. Seq Write to 2500 files, each file 5MB size, multiple writers, UFS2:

Result:
Run status group 0 (all jobs):
  WRITE: io=12500MB, aggrb=167429KB/s, minb=334KB/s, maxb=9968KB/s, 
  mint=2568msec, maxt=76450msec
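One plausible way to express that workload in fio, consistent with the per-job numbers
above (the split of files across writers is an assumption, and the directory is a placeholder):

  [global]
  rw=write
  filesize=5m               ; each file is 5 MB
  directory=/raid10/fio     ; placeholder target directory

  [writers]
  numjobs=500               ; "multiple writers" -- assumed split
  nrfiles=5                 ; 500 jobs x 5 files = 2500 files, 12500 MB in total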

Questions:

 - where are you writing to, i.e. what storage: hardware or software RAID?
 - are you using time-based fio tests?

For fun I can share some results we have been collecting between FreeBSD 10 amd64 (f10)
and Debian 7 amd64 (d7) using LSI hardware RAID 10. We don't use time-based fio runs;
instead we measure how fast we can write the IOSIZE once and record the elapsed time.
This has proved to be more accurate and to return saner results than keeping fio
running for 15 or 30 minutes.
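In fio job terms the difference is roughly the following (illustrative snippets, not our
actual harness; the block size and runtime here are just examples):

Size-based (what we do) -- write the IOSIZE once and take the elapsed time:

  [seq]
  rw=write
  bs=64k
  size=10g

Time-based -- keep writing over the same file until the clock expires:

  [seq]
  rw=write
  bs=64k
  size=10g
  time_based
  runtime=1800              ; e.g. 30 minutes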

Id   Test_Name               Throughput   Utilization   Idle
1    f10.raid10.4k.2500       23 MB/s        8%          92%
2    f10.raid10.4k.5000       18 MB/s        9%          91%
3    f10.raid10.64k.2500     215 MB/s       22%          78%
4    f10.raid10.64k.5000     162 MB/s       18%          82%

Id   Test_Name               Throughput   Utilization   Idle + iowait
5    d7.raid10.4k.2500        29 MB/s        2%          65.08 + 32.93
6    d7.raid10.4k.5000        29 MB/s        3%          53.68 + 43.79
7    d7.raid10.64k.2500      297 MB/s        3%          56.44 + 41.11
8    d7.raid10.64k.5000      182 MB/s        4%          12.85 + 83.85



-- 
Stefan Parvu <sparvu@systemdatarecorder.org>


