From owner-freebsd-fs@freebsd.org Tue Sep 6 07:45:05 2016
From: InterNetX - Juergen Gotteswinter
Reply-To: juergen.gotteswinter@internetx.com
Organization: InterNetX GmbH
To: Ben RUBSON, FreeBSD FS
Subject: Re: [ZFS] refquota is very slow !
Date: Tue, 6 Sep 2016 09:44:53 +0200

I usually use bonnie, fio, or iozone to benchmark pool I/O. I don't see the
part where the 20,000,000 files get generated; what you do is write a single
file in a stream from a slow random device until the quota kicks in. Even dd
isn't a reliable benchmarking tool; to get at the raw hardware throughput,
conv=sync would be a start, so that you don't just measure caching.

If you can install fio, I can run comparable benchmarks on a single local
flash device and on spinning rust:

fio --name=test --rw=write --bs=8192 --size=4g --runtime=300 --iodepth=24 --filename=test.bin --direct=1 --ioengine=sync

Or simply download a tarball from kernel.org and time tar -xvzf kernel.tar.gz.
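
For the refquota case itself, something along these lines should give numbers
comparable to your dd runs, with fio doing the writing. The test/refq dataset,
the 7.5G limit and the 7g file size are only placeholders taken from your
test; adjust them to your layout, and push --size closer to the limit if you
want to see the slow tail:

# throwaway dataset, lz4 on, capped with refquota
zfs create -o compression=lz4 test/refq
zfs set refquota=7.5G test/refq

# sequential 8k sync writes, fsync at the end so we don't only measure the ARC
fio --name=refq-write --directory=/test/refq --rw=write --bs=8k \
    --size=7g --ioengine=sync --end_fsync=1 --runtime=300

# same again with quota instead of refquota for the comparison
zfs set refquota=none quota=7.5G test/refq
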
On 05 Sep 2016, at 20:30, Ben RUBSON wrote:
> Yes, I want to dedicate my ARC & L2ARC to my data pool (which suffers from
> the same bug anyway...).
>
> I followed your advice and created a whole new SSD test pool, left all
> options at their defaults, and added lz4 compression.
> I then created 20,000,000 empty files in this test pool.
>
> Then:
>
> # zfs set quota=7.5G refquota=none test/test
> # dd if=/dev/random of=/test/test/bf
> dd: bf: Disc quota exceeded
> 1047553+0 records in
> 1047552+0 records out
> 536346624 bytes transferred in 16.769513 secs (31983434 bytes/sec)
> # rm /test/test/bf
>
> # zfs set quota=none refquota=7.5G test/test
> # dd if=/dev/random of=/test/test/bf
> dd: bf: Disc quota exceeded
> 1047520+0 records in
> 1047519+0 records out
> 536329728 bytes transferred in 215.986582 secs (2483162 bytes/sec)
> # rm /test/test/bf
>
> Additional tests give the same results.
> In the last 6 MB before the limit, write I/Os become very slow and it takes
> minutes to fill these remaining MB.
>
> How many files were in the pool you performed this test in, Juergen?
>
> Ben
>
>> On 05 Sep 2016, at 10:04, InterNetX - Juergen Gotteswinter wrote:
>>
>> Any special reason for disabling secondarycache and limiting primarycache
>> to metadata? Does it change anything when you revert them to the defaults?
>>
>> compression -> lz4; even if the data is not compressible, it won't hurt.
>>
>> You probably have several smaller performance issues which all add up to
>> this mess.
>>
>> On 04 Sep 2016, at 17:16, Ben RUBSON wrote:
>>> Same kind of results with a single local (SSD) disk based pool: refquota
>>> takes much more time than quota around the limit.
>>>
>>> Here is the output for this single-disk pool:
>>> zfs get all : http://pastebin.com/raw/TScgy0ps
>>> zdb : http://pastebin.com/raw/BxmQ4xNx
>>> zpool get all : http://pastebin.com/raw/XugMbydy
>>>
>>> Thank you!
>>>
>>> Ben
>>>
>>>> On 04 Sep 2016, at 13:42, InterNetX - Juergen Gotteswinter wrote:
>>>>
>>>> Did you try the same in a single local disk based pool? And please post
>>>> the output of zfs get all, zdb & zpool get all.
>>>>
>>>>> Ben RUBSON wrote on 4 September 2016 at 11:28:
>>>>>
>>>>> Juergen & Bram,
>>>>>
>>>>> Thank you for your feedback.
>>>>>
>>>>> I then investigated further and think I found the root cause.
>>>>>
>>>>> There is no issue with refquota in my zroot pool, which has (in this
>>>>> example) 300,000 inodes used.
>>>>>
>>>>> However, refquota is terribly slow in my data pool, which has around
>>>>> 12,000,000 inodes used.
>>>>>
>>>>> I then created 12,000,000 empty files in my zroot pool, in a test dataset.
>>>>> I put a refquota on this dataset and created a dd file to fill the
>>>>> remaining space.
>>>>> And around the limit, it began to stall...
>>>>> I then created an empty dataset in the same pool; refquota is slow even
>>>>> in this dataset, which has no inodes used.
>>>>> The root cause therefore seems to be the total number of inodes used in
>>>>> the pool...
>>>>>
>>>>> Some numbers:
>>>>> Time to fill 512 MB with quota: 17s
>>>>> Time to fill 512 MB with refquota: 3m35s
>>>>>
>>>>> Very strange.
>>>>>
>>>>> Do you experience the same thing?
>>>>>
>>>>> Thank you again,
>>>>>
>>>>> Ben
>>>>>
>>>>>> On 03 Sep 2016, at 16:59, Bram Vandoren wrote:
>>>>>>
>>>>>> I encountered the same problem over NFS. I didn't manage to reproduce
>>>>>> it without NFS. I think the userquota property works without any
>>>>>> problem, though.
>>>>>>
>>>>>> Cheers,
>>>>>> Bram.
>>>>>
>>>>>> On 03 Sep 2016, at 12:26, InterNetX - Juergen Gotteswinter wrote:
>>>>>>
>>>>>> Can't confirm this; works like a charm, with no difference from a
>>>>>> normal quota setting.
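
In case anyone wants to reproduce the many-inodes situation Ben describes,
something like the following should get a pool into a comparable state. The
dataset names, the file count and the 7.5G limit are only placeholders, and
populating the dataset will take a while:

# fill one dataset with roughly 10 million empty files (1000 dirs x 10000 files)
zfs create test/inodes
for d in $(seq 1 1000); do
    mkdir "/test/inodes/$d"
    ( cd "/test/inodes/$d" && touch $(seq 1 10000) )
done

# then run the dd comparison in a fresh, empty dataset in the same pool
zfs create test/empty
zfs set refquota=7.5G test/empty
time dd if=/dev/random of=/test/empty/bf
rm /test/empty/bf
zfs set refquota=none quota=7.5G test/empty
time dd if=/dev/random of=/test/empty/bf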