From: Davide D'Amico <davide.damico@contactlab.com>
To: Steven Hartland
Cc: freebsd-fs@freebsd.org
Subject: Re: FreeBSD 9.1 and ZFS v28 performance
Date: Mon, 18 Mar 2013 20:32:32 +0100
Message-ID: <13317bbd289c4c828f134e2c2592a2d7@sys.tomatointeractive.it>
In-Reply-To: <897DB64CEBAF4F04AE9C76B3F686E497@multiplay.co.uk>

On 18.03.2013 20:28, Steven Hartland wrote:
> ----- Original Message ----- From: "Davide D'Amico"
>
>>> How does ZFS compare if you do it on 1 SSD, as per your second
>>> UFS test? I'm wondering whether the mfi cache is kicking in.
>> Well, it was a test :)
>> The MFI cache is enabled because I am using mfid* as JBOD (mfiutil
>> create jbod mfid3 mfid4 mfid5 mfid6):
>
> Don't use mfiutil to do this; it doesn't work - it creates mirrors.
>
> Use MegaCli instead to create real JBODs, e.g.:
> MegaCli -AdpSetProp -EnableJBOD -1 -aALL
>

Ok, I'll give it a try (I've never used it; I thought it had been discontinued), and I'll let you know.
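Before switching over I'll also check how the controller is presenting the disks right now; something along these lines should be enough (just stock mfiutil/MegaCli subcommands, device names are simply what this box uses):

# mfiutil show config      (what mfiutil actually built: volumes vs. JBOD)
# mfiutil show drives      (physical drives attached to the controller)
# MegaCli -PDList -aALL    (per-drive state as MegaCli sees it)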
>> And the result from sysbench:
>> General statistics:
>>     total time:                          82.9567s
>>     total number of events:              1
>>     total time taken by event execution: 82.9545s
>
> That's hardly doing any disk access at all, so it's odd it would be
> doubling your benchmark time.
>
>> Using an SSD:
>> # iostat mfid2 -x 2
>>  tty            mfid2             cpu
>>  tin tout   KB/t  tps  MB/s  us ni sy in  id
>>    0   32 125.21   31  3.84   0  0  0  0  99
>> [...]
>>    0  585   0.00    0  0.00   3  0  1  0  96
>>    0   22   4.00    0  0.00   0  0  0  0 100
>> And the result from sysbench:
>> General statistics:
>>     total time:                          36.1146s
>>     total number of events:              1
>>     total time taken by event execution: 36.1123s
>> Those are the same results as with the SAS disks.
>
> So this is ZFS on the SSD, giving the same benchmark results as
> UFS?

This is UFS on the SSD, which shows the same behaviour as UFS on the SAS
drives behind HW RAID10.

d.
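P.S. On the next run I'll also watch the pool directly, to see whether the ZFS test is really hitting the disks or is mostly being served from the ARC; roughly something like this (the pool name "tank" is just a placeholder):

# zpool iostat -v tank 2                  (per-vdev traffic on the pool)
# gstat -f 'mfid'                         (GEOM-level view of the same disks)
# sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses

If the ARC hit/miss counters barely move while sysbench runs, that would explain why iostat shows almost no disk traffic during the benchmark.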