From owner-freebsd-scsi@FreeBSD.ORG Mon Sep 1 15:31:31 2014
From: Borja Marcos
To: FreeBSD-scsi
Subject: Samsung 840 Pro SSD and quirks
Date: Mon, 1 Sep 2014 17:21:18 +0200

Hi,

I have just noticed that the Samsung 840 SSDs now have the 4 KB block
quirk added.

Is this really the case? I played with them some time ago and I didn't
notice any performance difference between using ZFS on them "directly"
(advertised 512-byte blocks) and forcing 4 KB blocks using gnop.

I was just surprised, since I couldn't find any references to the true
block size.

Borja.
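The 4 KB quirk question above comes down to write alignment. A toy model, not from the thread, of why an ashift=9 pool on 4 KiB-page flash can cost extra read-modify-write cycles (real SSD firmware behaviour is far more complex than this):

```python
# Toy model: count the 4 KiB flash pages a write spans. With ashift=9, ZFS
# may issue writes that are not 4 KiB-aligned, so a write can straddle an
# extra flash page and force the drive into a read-modify-write cycle.
# Illustrative only; real FTL behaviour is far more complex.

PAGE = 4096  # assumed flash page size

def pages_touched(offset: int, length: int, page: int = PAGE) -> int:
    """Number of flash pages a write at byte `offset` of `length` bytes spans."""
    first = offset // page
    last = (offset + length - 1) // page
    return last - first + 1

# A 4 KiB write that is 4 KiB-aligned touches exactly one page...
print(pages_touched(offset=8192, length=4096))        # 1
# ...while the same write misaligned by one 512-byte sector touches two.
print(pages_touched(offset=8192 + 512, length=4096))  # 2
```

This is the effect the gnop trick (or the quirk) avoids: with ashift=12, ZFS never issues a write smaller or less aligned than the assumed 4 KiB page.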
From owner-freebsd-scsi@FreeBSD.ORG Mon Sep 1 15:44:12 2014
From: "Steven Hartland"
To: "Borja Marcos", FreeBSD-scsi
Subject: Re: Samsung 840 Pro SSD and quirks
Date: Mon, 1 Sep 2014 16:44:00 +0100

We saw a noticeable performance increase on 4k on our 8TB 840 array,
but I couldn't find any concrete information either.

If anyone has this info and can confirm either way, that would
be great.

Regards
Steve

----- Original Message -----
From: "Borja Marcos"

> Hi,
>
> I have just noticed that the Samsung 840 SSDs now have the 4 KB block
> quirk added.
>
> Is this really the case? I played with them some time ago and
> I didn't notice performance differences between using ZFS on them
> "directly" (advertised 512 byte blocks) or forcing 4 KB blocks using
> gnop.
>
> Just surprised, I didn't find references to the true block size.

From owner-freebsd-scsi@FreeBSD.ORG Mon Sep 1 16:11:53 2014
From: Borja Marcos
To: "Steven Hartland"
Cc: FreeBSD-scsi
Subject: Re: Samsung 840 Pro SSD and quirks
Date: Mon, 1 Sep 2014 18:11:49 +0200

On Sep 1, 2014, at 5:44 PM, Steven Hartland wrote:

> We saw a noticeable performance increase on 4k on our 8TB
> 840 array but I too couldn't find any concrete information either.
>
> If anyone has this info and can confirm either way that would
> be great.

I don't have actual numbers, just recalling that I tried it and didn't
find significant differences using bonnie++ on a ZFS pool. And I recall
that, according to the kstat sysctl variables, TRIM was indeed working.

Just in case, I am repeating the tests right now. I still have the
pre-quirks kernel around, and I have a pool defined with the default
512-byte blocks.

Version  1.97       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
elibm           96G   123  99 670496  97 310330  63   303  99 818483  56  6281 165
Latency             93190us   20227us     448ms   41198us     454ms   26375us
Version  1.97       ------Sequential Create------ --------Random Create--------
elibm               -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 25723  98 +++++ +++ 24559  98 12694  99 31135 100  4810  99
Latency             15192us      97us     130us   23708us     355us    1199us
1.97,1.97,elibm,1,1409588162,96G,,123,99,670496,97,310330,63,303,99,818483,56,6281,165,16,,,,,25723,98,+++++,+++,24559,98,12694,99,31135,100,4810,99,93190us,20227us,448ms,41198us,454ms,26375us,15192us,97us,130us,23708us,355us,1199us

After a reboot, and destroying and recreating the pool:

Version  1.97       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
elibm           96G   128  99 675094  98 323692  67   303  99 862380  58  9530 189
Latency             64726us   48676us     389ms   36398us     505ms   15594us
Version  1.97       ------Sequential Create------ --------Random Create--------
elibm               -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 24857  97 +++++ +++ 20422  98 21836  98 +++++ +++ 17786  97
Latency             15422us     102us     785us   24590us     125us     170us
1.97,1.97,elibm,1,1409588443,96G,,128,99,675094,98,323692,67,303,99,862380,58,9530,189,16,,,,,24857,97,+++++,+++,20422,98,21836,98,+++++,+++,17786,97,64726us,48676us,389ms,36398us,505ms,15594us,15422us,102us,785us,24590us,125us,170us

The results seem to be more or less similar. I have checked kstat.zfs,
and in both cases TRIM was working: the count of unsupported TRIMs was
0, while the success and byte counters grew as they should.

What am I missing? Note that I am not against preemptive 4 KB quirk
strikes :) I am comparing with multiple concurrent bonnies now, just in
case. Or, what did you use to do the test?

Thanks!

Borja.

From owner-freebsd-scsi@FreeBSD.ORG Tue Sep 2 14:57:00 2014
From: Borja Marcos
To: Steven Hartland
Cc: FreeBSD-scsi
Subject: Re: Samsung 840 Pro SSD and quirks
Date: Tue, 2 Sep 2014 16:48:26 +0200

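An aside on the bonnie++ output quoted above: each run ends with a machine-readable CSV record, which is handy for comparing the two pools programmatically. A minimal sketch; the field positions are assumed from bonnie++ 1.97's CSV layout, not stated in the thread:

```python
# Pull the sequential block write/read rates (K/s) out of the CSV record
# that bonnie++ 1.97 appends to its output. Field positions are assumed
# from the 1.97 CSV layout: index 9 is sequential block output, index 15
# is sequential block input.

def seq_block_rates(csv_line):
    """Return (write_kps, read_kps) from a bonnie++ 1.97 CSV record."""
    fields = csv_line.split(",")
    return int(fields[9]), int(fields[15])

# The two runs quoted in the message above:
run1 = "1.97,1.97,elibm,1,1409588162,96G,,123,99,670496,97,310330,63,303,99,818483,56,6281,165,16,,,,,25723,98,+++++,+++,24559,98,12694,99,31135,100,4810,99,93190us,20227us,448ms,41198us,454ms,26375us,15192us,97us,130us,23708us,355us,1199us"
run2 = "1.97,1.97,elibm,1,1409588443,96G,,128,99,675094,98,323692,67,303,99,862380,58,9530,189,16,,,,,24857,97,+++++,+++,20422,98,21836,98,+++++,+++,17786,97,64726us,48676us,389ms,36398us,505ms,15594us,15422us,102us,785us,24590us,125us,170us"

w1, r1 = seq_block_rates(run1)
w2, r2 = seq_block_rates(run2)
print(w1, r1)  # 670496 818483
print(w2, r2)  # 675094 862380
print("block write delta: %.1f%%" % (100.0 * (w2 - w1) / w1))
```

Comparing these fields side by side shows why the 12-disk runs looked "more or less similar": the sequential block numbers differ by under 1% for writes.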
On Sep 1, 2014, at 5:44 PM, Steven Hartland wrote:

> We saw a noticeable performance increase on 4k on our 8TB 840
> array but I too couldn't find any concrete information either.
>
> If anyone has this info and can confirm either way that would
> be great.

I stand corrected. I have done some benchmarks with just two Samsung
SSDs (a zpool with two disks, no mirroring), and indeed I get better
performance with 4 KB blocks.

I did my original tests with 12 disks, and some other bottleneck was
hiding the performance difference.

In both cases, anyway, TRIM was working, unless the system lies.

Thanks!

From owner-freebsd-scsi@FreeBSD.ORG Tue Sep 2 15:07:48 2014
From: "Steven Hartland"
To: "Borja Marcos"
Cc: FreeBSD-scsi
Subject: Re: Samsung 840 Pro SSD and quirks
Date: Tue, 2 Sep 2014 16:07:45 +0100

Thanks for the confirmation, Borja. I was a little confused why
our two results differed.

For a 12-disk system you'll likely need two SAS2 controllers, or at
least 12 SAS lanes, otherwise you will hit controller throughput
issues, as an 840 can pretty much saturate a single SAS2 lane on its
own.

At that point you'll also start to see other issues.

I'd strongly suggest moving to stable/10, if you haven't already,
particularly if you have a large amount of RAM in the system, otherwise
you will become CPU bound on ARC hash lookups.

Regards
Steve

----- Original Message -----
From: "Borja Marcos"
To: "Steven Hartland"
Cc: "FreeBSD-scsi"
Sent: Tuesday, September 02, 2014 3:48 PM
Subject: Re: Samsung 840 Pro SSD and quirks

On Sep 1, 2014, at 5:44 PM, Steven Hartland wrote:

> We saw a noticeable performance increase on 4k on our 8TB 840
> array but I too couldn't find any concrete information either.
>
> If anyone has this info and can confirm either way that would
> be great.

I stand corrected. I have done some benchmarks with just two Samsung
SSDs (zpool with two disks, no mirroring) and indeed I get better
performance with 4 KB blocks.

I did my original tests with 12 disks and some other bottleneck was
hiding the performance difference.
In both cases, anyway, TRIM was working, unless the system lies.

Thanks!

From owner-freebsd-scsi@FreeBSD.ORG Tue Sep 2 15:17:32 2014
From: Borja Marcos
To: "Steven Hartland"
Cc: FreeBSD-scsi
Subject: Re: Samsung 840 Pro SSD and quirks
Date: Tue, 2 Sep 2014 17:17:28 +0200

On Sep 2, 2014, at 5:07 PM, Steven Hartland wrote:

> Thanks for the confirmation, Borja. I was a little confused why
> our two results differed.

What do you use for your benchmarks? I am still playing with this, so I
can run the same tests just in case.

I have done something pretty straightforward: just creating a pool and
a dataset and running bonnie++ on it. I also have a backplane

> For a 12-disk system you'll likely need two SAS2 controllers,
> or at least 12 SAS lanes, otherwise you will hit controller
> throughput issues, as an 840 can pretty much saturate a single
> SAS2 lane on its own.
>
> At that point you'll also start to see other issues.
>
> I'd strongly suggest moving to stable/10, if you haven't already,
> particularly if you have a large amount of RAM in the system,
> otherwise you will become CPU bound on ARC hash lookups.

Yes, I'm following -STABLE, but this braindead machine has just *one*
PCIe slot, so I am limited to one controller. In my case, a SAS2008
(mps driver) with a SAS expander.

mps0: port 0x3f00-0x3fff mem 0x90ebc000-0x90ebffff,0x912c0000-0x912fffff irq 32 at device 0.0 on pci17
mps0: Firmware: 18.00.00.00, Driver: 19.00.00.00-fbsd
mps0: IOCCapabilities: 1285c

Anyway, my main concern is not the maximum throughput; the system will
be much faster than the same setup using "classic" hard disks :)

Borja.
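A back-of-the-envelope check on the controller-bandwidth point raised in the thread. The figures below are ballpark assumptions, not measurements from the thread: a SAS2 lane signals at 6 Gbit/s with 8b/10b encoding, and an 840-class SSD is taken to sustain roughly 500 MB/s sequentially:

```python
# Rough arithmetic behind "an 840 can pretty much saturate a single SAS2
# lane": 6 Gbit/s line rate with 8b/10b encoding leaves ~600 MB/s of
# payload per lane. Per-drive throughput of ~500 MB/s is an assumption.

LANE_GBITS = 6.0
ENCODING_EFFICIENCY = 0.8   # 8b/10b: 8 payload bits per 10 line bits
SSD_MBS = 500.0             # assumed per-drive sequential throughput

lane_mbs = LANE_GBITS * 1000 / 8 * ENCODING_EFFICIENCY  # MB/s per lane
print("usable per SAS2 lane: ~%.0f MB/s" % lane_mbs)    # ~600 MB/s

drives = 12
aggregate = drives * SSD_MBS          # what 12 SSDs could deliver
hba_8lane = 8 * lane_mbs              # one 8-lane SAS2 HBA behind an expander
print("12 drives want ~%.0f MB/s; one 8-lane SAS2 HBA offers ~%.0f MB/s"
      % (aggregate, hba_8lane))
```

On these assumptions, 12 SSDs behind a single 8-lane HBA plus expander are controller-limited, which is consistent with the observation that the 12-disk benchmarks hid the 512 B vs. 4 KiB block-size difference.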