Date:      Tue, 6 Sep 2016 09:44:53 +0200
From:      InterNetX - Juergen Gotteswinter <juergen.gotteswinter@internetx.com>
To:        Ben RUBSON <ben.rubson@gmail.com>, FreeBSD FS <freebsd-fs@freebsd.org>
Subject:   Re: [ZFS] refquota is very slow !
Message-ID:  <7df8b5ce-d9ae-5b05-0aa5-1de6b06fd29e@internetx.com>
In-Reply-To: <67B3E11E-22B7-4719-A7AF-B8479D35A6D2@gmail.com>
References:  <D559DE69-A535-427C-A401-1458C2AA8C31@gmail.com> <1472914773423.63807@kuleuven.be> <0E828163-AEAB-4C8C-BFCF-93D42B3DB3B6@gmail.com> <1524067530.1937.a66cb17f-9141-4bef-b758-5bb129d16681.open-xchange@ox.internetx.com> <EDDE17FC-1B3A-4912-B93C-08E18433A4C9@gmail.com> <f5969fc9-44e2-a8a0-1a7f-9475e65ab93a@internetx.com> <67B3E11E-22B7-4719-A7AF-B8479D35A6D2@gmail.com>

I usually use bonnie++, fio, or iozone to benchmark pool I/O.
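For example, something along these lines (paths and sizes are only placeholders,
adjust them to the pool under test):

bonnie++ -d /test/test -s 16384 -n 0
iozone -e -s 1g -r 8k -i 0 -i 2 -f /test/test/iozone.tmp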


I don't see the part where the 20 million files get generated; what you are doing
here is streaming a single file from a slow random device until the quota kicks in.

Even dd isn't a reliable benchmarking tool; to get at the raw hardware throughput,
adding conv=fsync would be a start, so that you don't just measure the effects of
caching.
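For instance, something like this (path and size are just examples; note that with
lz4 enabled, an all-zero input would largely be compressed away, so random data is
the safer choice here):

dd if=/dev/random of=/test/test/bf bs=1m count=512 conv=fsync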

If you can install fio, I can run comparable benchmarks on a single local flash
device and on spinning rust.

fio --name=quotatest --filename=test.bin --size=1g --rw=write \
    --bs=8192 --runtime=300 --time_based --iodepth=24 \
    --direct=1 --ioengine=sync
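(fio needs at least --name, and for a fresh file --size, to run from the command
line; --rw=write turns it into a write test, the default being read. The values
above are only placeholders. With --ioengine=sync the iodepth setting is effectively
ignored, it only matters for the asynchronous engines. Run it from inside the
dataset under test, once with quota and once with refquota set, so both runs hit
the same pool.)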

Or simply download a kernel tarball from kernel.org and run:

time tar -xvzf kernel.tar.gz
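(Assuming the tarball still has to be fetched first, e.g. something like:

fetch -o kernel.tar.gz https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.7.tar.gz

Untar it once in a dataset with quota set and once with refquota set; the difference
in wall-clock time should show up immediately.)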


On 05.09.2016 at 20:30, Ben RUBSON wrote:
> Yes, I want to dedicate my ARC & L2ARC to my data pool (which anyway suffers from the same bug...).
> 
> I followed your advice and created a whole new SSD test pool, left all options at their defaults, and enabled lz4 compression.
> I then created 20,000,000 empty files in this test pool.
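> (Something along the lines of "seq 1 20000000 | xargs -n 5000 touch", spread over a
> few subdirectories to keep them manageable, will generate that many empty files.)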
> 
> Then :
> 
> # zfs set quota=7.5G refquota=none test/test
> # dd if=/dev/random of=/test/test/bf
> dd: bf: Disc quota exceeded
> 1047553+0 records in
> 1047552+0 records out
> 536346624 bytes transferred in 16.769513 secs (31983434 bytes/sec)
> # rm /test/test/bf
> 
> # zfs set quota=none refquota=7.5G test/test
> # dd if=/dev/random of=/test/test/bf
> dd: bf: Disc quota exceeded
> 1047520+0 records in
> 1047519+0 records out
> 536329728 bytes transferred in 215.986582 secs (2483162 bytes/sec)
> # rm /test/test/bf
> 
> Additional tests give the same results.
> About 6 MB before the limit, write I/Os become very slow and it takes minutes to fill these remaining megabytes.
> 
> How many files were in the pool in which you performed this test, Juergen?
> 
> Ben
> 
>> On 05 Sep 2016, at 10:04, InterNetX - Juergen Gotteswinter <juergen.gotteswinter@internetx.com> wrote:
>>
>> Any special reason for disabling secondarycache and limiting the
>> primarycache to metadata? Does anything change when you revert them to
>> their defaults?
>>
>> compression -> lz4; even if the data is not compressible, it won't hurt.
>>
>> You probably have several smaller performance issues which all add up to
>> this mess.
>>
>> On 04.09.2016 at 17:16, Ben RUBSON wrote:
>>> Same kind of results with a single local (SSD) disk-based pool: refquota takes much more time than quota around the limit.
>>>
>>> Here is the output for this single disk based pool :
>>> zfs get all   : http://pastebin.com/raw/TScgy0ps
>>> zdb           : http://pastebin.com/raw/BxmQ4xNx
>>> zpool get all : http://pastebin.com/raw/XugMbydy
>>>
>>> Thank you !
>>>
>>> Ben
>>>
>>>
>>>> On 04 Sep 2016, at 13:42, InterNetX - Juergen Gotteswinter <juergen.gotteswinter@internetx.com> wrote:
>>>>
>>>> Did you try the same in a single local disk-based pool? And please post the output of
>>>> zfs get all, zdb and zpool get all.
>>>>
>>>>
>>>>> Ben RUBSON <ben.rubson@gmail.com> wrote on 4 September 2016 at 11:28:
>>>>>
>>>>>
>>>>> Juergen & Bram,
>>>>>
>>>>> Thank you for your feedback.
>>>>>
>>>>> I then investigated further and think I found the root cause.
>>>>>
>>>>> No issue with refquota in my zroot pool containing (in this example) 300,000
>>>>> inodes in use.
>>>>>
>>>>> However, refquota is terribly slow in my data pool, which contains around
>>>>> 12,000,000 inodes in use.
>>>>>
>>>>> I then created 12,000,000 empty files in my zroot pool, in a test dataset.
>>>>> I put a refquota on this dataset and created a dd file to fill the empty space.
>>>>> And around the limit, it began to stall...
>>>>> I then created an empty dataset in the same pool; refquota is slow even in
>>>>> this dataset, which has no inodes in use.
>>>>> The root cause therefore seems to be the total number of inodes in use in the pool...
>>>>>
>>>>> Some numbers:
>>>>> Time to fill 512 MB with quota    : 17s
>>>>> Time to fill 512 MB with refquota : 3m35s
>>>>>
>>>>> Very strange.
>>>>>
>>>>> Do you experience the same thing ?
>>>>>
>>>>> Thank you again,
>>>>>
>>>>> Ben
>>>>>
>>>>>> On 03 Sep 2016, at 16:59, Bram Vandoren <bram.vandoren@kuleuven.be> wrote:
>>>>>>
>>>>>> I encountered the same problem over NFS. I didn't manage to reproduce it when
>>>>>> not using NFS. I think the userquota property works without any problem, though.
>>>>>>
>>>>>> Cheers,
>>>>>> Bram.
>>>>>
>>>>>> On 03 Sep 2016, at 12:26, InterNetX - Juergen Gotteswinter
>>>>>> <juergen.gotteswinter@internetx.com> wrote:
>>>>>>
>>>>>> Can't confirm this; it works like a charm with no difference from the normal
>>>>>> quota setting.
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
> 


