From owner-freebsd-fs@FreeBSD.ORG Tue Jun 11 21:08:35 2013
From: Attila Nagy <bra@fsn.hu>
Date: Tue, 11 Jun 2013 23:01:23 +0200
To: freebsd-fs@FreeBSD.org
Subject: An order of magnitude higher IOPS needed with ZFS than UFS
Message-ID: <51B79023.5020109@fsn.hu>

Hi,

I have two identical machines. They have 14 disks hooked up to an HP
SmartArray (SA from now on) controller. Both machines have the same SA
configuration and layout: the disks are organized into mirror pairs
(HW RAID1). On the first machine, these mirrors are formatted with
UFS2+SU (default settings); on the second, each mirror is used as a
separate zpool (please don't tell me that ZFS can do the mirroring
itself, I know). Atime is turned off; otherwise nothing has been
modified (no zpool/zfs properties or sysctl parameters).

The file systems are loaded more or less evenly, serving files from a
few kB up to a few MB each. The machines act as NFS servers, and there
is one possibly important difference here: the UFS machine runs
8.3-RELEASE, while the ZFS one runs 9.1-STABLE@r248885.

They get the same type of load, and according to nfsstat and netstat,
that load doesn't explain the big difference visible in disk I/O. In
fact, the UFS host seems to be the more loaded one...
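To make the ZFS side of the setup above concrete, each pool sits on one
mirror device exported by the SA controller, created roughly like this
(pool names and device numbering here are my shorthand for
illustration, not necessarily the exact ones used):

  # one zpool per HW RAID1 mirror presented by the SA controller
  zpool create pool0 da0
  # atime off; everything else left at its default
  zfs set atime=off pool0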
According to gstat on the UFS machine:

dT: 60.001s  w: 60.000s  filter: da
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
    0     42     35    404    6.4      8    150  214.2   21.5| da0
    0     30     21    215    6.1      9    168  225.2   15.9| da1
    0     41     33    474    4.5      8    158  211.3   18.0| da2
    0     39     30    425    4.6      9    163  235.0   17.1| da3
    1     31     24    266    5.1      7     93  174.1   14.9| da4
    0     29     22    273    5.9      7     84  200.7   15.9| da5
    0     37     30    692    7.1      7    115  206.6   19.4| da6

and on the ZFS one:

dT: 60.001s  w: 60.000s  filter: da
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
    0    228    201   1045   23.7     27    344   53.5   88.7| da0
    5    185    167    855   21.1     19    238   44.9   73.8| da1
   10    263    236   1298   34.9     27    454   53.3   99.9| da2
   10    255    235   1341   28.3     20    239   64.8   92.9| da3
   10    219    195    994   22.3     23    257   46.3   81.3| da4
   10    248    221   1213   22.4     27    264   55.8   90.2| da5
    9    231    213   1169   25.1     19    229   54.6   88.6| da6

I've seen a lot of cases where ZFS required more memory and CPU (and
even I/O) to handle the same load, but nothing nearly this bad: here it
is close to a 10x increase in disk IOPS, and the disks are near
saturation. Any ideas?

BTW, the file systems are 77-78% full according to df (so the ZFS pools
actually hold more data, because UFS reserves space with -m 8).

Thanks,
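P.S. The numbers above are 60-second averages. They were collected with
something like the following (flags quoted from memory, so treat them
as approximate):

  # batch mode, 60-second interval, show only the daN devices
  gstat -bI 60s -f da

The fill levels came from plain df; on the ZFS box they can be
cross-checked with something like:

  zpool list -o name,size,alloc,cap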