From owner-freebsd-fs@freebsd.org Wed Jun 21 08:01:04 2017
Subject: Re: FreeBSD 11.1 Beta 2 ZFS performance degradation on SSDs
To: "Caza, Aaron", "freebsd-fs@freebsd.org"
From: Steven Hartland <killing@multiplay.co.uk>
Message-ID: <86bf6fad-977a-b096-46b9-e9099a57a1f4@multiplay.co.uk>
Date: Wed, 21 Jun 2017 09:01:01 +0100

On 20/06/2017 21:26, Caza, Aaron wrote:
>> On 20/06/2017 17:58, Caza, Aaron wrote:
>> dT: 1.001s  w: 1.000s
>>  L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w    d/s   kBps   ms/d   %busy Name
>>     0   4318   4318  34865    0.0      0      0    0.0      0      0    0.0   14.2| ada0
>>     0   4402   4402  35213    0.0      0      0    0.0      0      0    0.0   14.4| ada1
>>
>> dT: 1.002s  w: 1.000s
>>  L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w    d/s   kBps   ms/d   %busy Name
>>     1   4249   4249  34136    0.0      0      0    0.0      0      0    0.0   14.1| ada0
>>     0   4393   4393  35287    0.0      0      0    0.0      0      0    0.0   14.5| ada1
>>
>> Your %busy is very low, so it sounds like the bottleneck isn't raw disk performance but somewhere else.
>>
>> Might be interesting to see if anything stands out in top -Sz, and then press h for threads.
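To make the %busy observation concrete (a sketch, not part of the original exchange; the threshold values are arbitrary), the gstat data lines quoted above can be run through awk to flag providers that are doing plenty of ops/s while sitting almost idle — exactly the pattern that points away from the disks themselves:

```shell
#!/bin/sh
# Sketch: flag gstat providers with high ops/s but low %busy, i.e. cases
# where the disks are clearly not the bottleneck. The %busy field carries
# a trailing '|' in gstat output, which is stripped before comparing.
awk '{ sub(/\|$/, "", $(NF-1)) }
     $2 > 1000 && $(NF-1) + 0 < 20 {
         printf "%s: %s ops/s at only %s%% busy\n", $NF, $2, $(NF-1)
     }' <<'EOF'
    0   4318   4318  34865    0.0      0      0    0.0      0      0    0.0   14.2| ada0
    0   4402   4402  35213    0.0      0      0    0.0      0      0    0.0   14.4| ada1
EOF
```

With the sample above it reports both ada0 and ada1 as under 15% busy despite roughly 4,300 ops/s each.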
>>
> I rebooted the system to disable TRIM, so it's currently not degraded.
>
> dT: 1.001s  w: 1.000s
>  L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w    d/s   kBps   ms/d   %busy Name
>     3   3887   3887 426514    0.7      0      0    0.0      0      0    0.0   90.7| ada0
>     3   3987   3987 434702    0.7      0      0    0.0      0      0    0.0   92.0| ada1
>
> dT: 1.002s  w: 1.000s
>  L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w    d/s   kBps   ms/d   %busy Name
>     3   3958   3958 433563    0.7      0      0    0.0      0      0    0.0   91.6| ada0
>     3   3989   3989 438417    0.7      0      0    0.0      0      0    0.0   93.0| ada1
>
> test@f111beta2:~ # dd if=/testdb/test of=/dev/null bs=1m
> 16000+0 records in
> 16000+0 records out
> 16777216000 bytes transferred in 19.385855 secs (865435959 bytes/sec)

Now that is interesting, as you're getting slightly fewer ops/s but much higher throughput. In the normal case you're seeing ~108kB per read, whereas in the degraded case you're seeing only 8kB per read. Given this, and knowing the application level isn't affecting it, we need to identify where in the I/O stack the reads are getting limited to 8kB.

With your additional information about the ARC, it could be that the limited memory is the cause.

Regards
Steve
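The per-read sizes discussed in this thread fall straight out of the gstat figures (average read size is simply kBps divided by ops/s); a minimal POSIX sh check using the ada0 numbers quoted above:

```shell
#!/bin/sh
# Back-of-envelope: average kB per read = kBps / ops/s, using the
# ada0 samples quoted in the thread (integer arithmetic, so results
# are truncated approximations).
degraded_kbps=34865; degraded_ops=4318   # ada0 while degraded
healthy_kbps=426514; healthy_ops=3887    # ada0 after the TRIM-disabling reboot
echo "degraded: ~$((degraded_kbps / degraded_ops)) kB per read"
echo "healthy:  ~$((healthy_kbps / healthy_ops)) kB per read"
```

This reproduces the contrast described above: roughly 8kB per read when degraded versus well over 100kB per read when healthy.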