From owner-freebsd-fs@FreeBSD.ORG Tue Jun 11 21:21:18 2013
Date: Tue, 11 Jun 2013 17:20:09 -0400 (EDT)
From: Rick Macklem <rmacklem@uoguelph.ca>
To: Attila Nagy
Cc: freebsd-fs@FreeBSD.org
In-Reply-To: <51B79023.5020109@fsn.hu>
Message-ID: <253074981.119060.1370985609747.JavaMail.root@erie.cs.uoguelph.ca>
Subject: Re: An order of magnitude higher IOPS needed with ZFS than UFS

Attila Nagy wrote:
> Hi,
>
> I have two identical machines. They have 14 disks hooked up to an HP
> Smart Array (SA from now on) controller.
> Both machines have the same SA configuration and layout: the disks
> are organized into mirror pairs (HW RAID1).
>
> On the first machine, these mirrors are formatted with UFS2+SU
> (default settings); on the second machine they are used as separate
> zpools (please don't tell me that ZFS can do the same, I know).
> Atime is turned off; otherwise there are no other modifications
> (zpool/zfs or sysctl parameters).
> The file systems are loaded more or less evenly, serving files from
> a few kB to a few MB.
>
> The machines act as NFS servers, so there is one, maybe important,
> difference here: the UFS machine runs 8.3-RELEASE, while the ZFS one
> runs 9.1-STABLE@r248885.
> They get the same type of load, and according to nfsstat and
> netstat, the loads don't explain the big difference that can be seen
> in disk I/Os. In fact, the UFS host seems to be more loaded...
>
> According to gstat on the UFS machine:
> dT: 60.001s  w: 60.000s  filter: da
>  L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
>     0     42     35    404    6.4      8    150  214.2    21.5| da0
>     0     30     21    215    6.1      9    168  225.2    15.9| da1
>     0     41     33    474    4.5      8    158  211.3    18.0| da2
>     0     39     30    425    4.6      9    163  235.0    17.1| da3
>     1     31     24    266    5.1      7     93  174.1    14.9| da4
>     0     29     22    273    5.9      7     84  200.7    15.9| da5
>     0     37     30    692    7.1      7    115  206.6    19.4| da6
>
> and on the ZFS one:
> dT: 60.001s  w: 60.000s  filter: da
>  L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
>     0    228    201   1045   23.7     27    344   53.5    88.7| da0
>     5    185    167    855   21.1     19    238   44.9    73.8| da1
>    10    263    236   1298   34.9     27    454   53.3    99.9| da2
>    10    255    235   1341   28.3     20    239   64.8    92.9| da3
>    10    219    195    994   22.3     23    257   46.3    81.3| da4
>    10    248    221   1213   22.4     27    264   55.8    90.2| da5
>     9    231    213   1169   25.1     19    229   54.6    88.6| da6
>
> I've seen a lot of cases where ZFS required more memory and CPU (and
> even I/O) to handle the same load, but they were nowhere near this
> bad (here it is often a 10x increase).
>
> Any ideas?
>
ken@ recently committed a change to the new NFS server to add file
handle affinity support to it. He reported that, without file handle
affinity, ZFS's sequential read heuristic broke badly (or something
like that; you can probably find the email thread, or maybe he will
chime in). Anyhow, you could try switching the FreeBSD 9 system to use
the old NFS server (assuming your clients are doing NFSv3 mounts) and
see if that has a significant effect. (For FreeBSD 9, the old server
has file handle affinity, but the new server does not.) A sketch of
how to switch, and of what file handle affinity does, is appended at
the bottom of this message.

rick

> BTW, the file systems are 77-78% full according to df (so ZFS holds
> more, because UFS is -m 8).
>
> Thanks,
_______________________________________________
freebsd-fs@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-fs
To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
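
A minimal rc.conf sketch for selecting the old NFS server on the 9.x
box. The variable names here are from memory, so treat them as
assumptions and verify against rc.conf(5) and nfsd(8) for your exact
release:

    # /etc/rc.conf -- sketch only; verify these knobs against
    # rc.conf(5) for your FreeBSD 9.x release.
    nfs_server_enable="YES"
    oldnfs_server_enable="YES"    # run the old NFS server (nfsd -o)
    nfs_server_flags="-u -t -n 8" # the usual nfsd flags, unchanged

And a toy illustration of the file handle affinity idea itself (all
names below are made up for the example; this is not the actual kernel
code): requests are steered to an nfsd service thread by hashing the
file handle, so all the reads on one file are handled in order by one
thread, which lets ZFS's sequential-read detection see an in-order
offset stream and prefetch.

    /* Toy model of file handle affinity; not the FreeBSD code. */
    #include <stdint.h>
    #include <stddef.h>

    #define NFSD_THREADS 8

    /* Hypothetical request: just the file handle and a read offset. */
    struct nfs_req {
            uint8_t  fh[32];        /* opaque file handle bytes */
            size_t   fh_len;
            uint64_t offset;
    };

    /* FNV-1a over the file handle; any stable hash would do. */
    static uint32_t
    fh_hash(const struct nfs_req *req)
    {
            uint32_t h = 2166136261u;

            for (size_t i = 0; i < req->fh_len; i++) {
                    h ^= req->fh[i];
                    h *= 16777619u;
            }
            return (h);
    }

    /*
     * With affinity, the service thread is a function of the file
     * handle, so one file's reads reach ZFS in issue order. Without
     * it (e.g. round-robin dispatch), the reads are spread across
     * threads and arrive out of order, which defeats the
     * sequential-read heuristic and its prefetching.
     */
    static int
    pick_thread(const struct nfs_req *req)
    {
            return ((int)(fh_hash(req) % NFSD_THREADS));
    }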