Date:      Fri, 27 Aug 2010 21:24:44 -0400
From:      jhell <jhell@DataIX.net>
To:        Artem Belevich <fbsdlist@src.cx>
Cc:        freebsd-current@freebsd.org, Martin Matuska <mm@freebsd.org>
Subject:   Re: [CFT] Improved ZFS metaslab code (faster write speed)
Message-ID:  <4C78655C.3010200@DataIX.net>
In-Reply-To: <AANLkTi=hbL3wfTvmfBhPkpJ7orh_WuhagGPoXaS_hcTW@mail.gmail.com>
References:  <4C713EF5.8080402@FreeBSD.org> <AANLkTi=8x1EenWyqGz6AQWKDUq5JiMJbX_jbVbX43DKx@mail.gmail.com> <4C714FC0.90005@FreeBSD.org> <AANLkTim_BH4WrQUY-X491c+fLaP2FKMcS1k-DN5tLG-9@mail.gmail.com> <20100828081917.ee931f7f.nork@FreeBSD.org> <AANLkTi=hbL3wfTvmfBhPkpJ7orh_WuhagGPoXaS_hcTW@mail.gmail.com>

On 08/27/2010 19:50, Artem Belevich wrote:
> Another "me too" here.
> 
> 8-stable/amd64 + v15 (zpool still uses v14) + metaslab +
> abe_stat_rrwlock + A.Gapon's vm_paging_needed() + uma defrag patches.
> 
> The box survived a few days of pounding on it without any signs of trouble.
> 

	I must have missed the UMA defrag patches, but according to the
code those patches should have no effect on the ZFS implementation on
your system, because vfs.zfs.zio.use_uma defaults to off unless you
have manually turned it on or the patch reverts that facility back to
its original form.
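
For anyone who wants to experiment with it anyway, the knob is a
loader tunable; something like the following in /boot/loader.conf
should turn it on (a sketch, assuming your tree still exposes the
tunable under this name):

vfs.zfs.zio.use_uma="1"  # defaults to 0 (off); verify the live value
                         # with: sysctl vfs.zfs.zio.use_uma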


	Running on a full ZFSv15 system with the metaslab & rrwlock patches,
plus a slightly modified patch from avg@ for vm_paging_needed(), I was
able to achieve the read and write ops I was looking for.

The modified portion of the patch from avg@ is:

#ifdef _KERNEL
                /*
                 * Only clear needfree and wake anything sleeping on
                 * it when the ARC actually needs reclaiming.
                 */
                if (arc_reclaim_needed()) {
                        needfree = 0;
                        wakeup(&needfree);
                }
#endif

	I still moved that down below the _KERNEL guard for the obvious
reasons.  But when I was using the original patch, with if (needfree),
I noticed performance degradation after ~12 hours of use, both with
and without UMA turned on.  So far, over ~48 hours of testing, the
more recent half of it running the above change, I have not seen any
further performance degradation past that ~12-hour mark.
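
For comparison, avg@'s original check differed only in the condition;
reconstructed from the fragment above, it read:

#ifdef _KERNEL
                if (needfree) {
                        needfree = 0;
                        wakeup(&needfree);
                }
#endif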

After another 12 hours of testing with UMA turned off, I'll be turning
UMA back on and testing for another 24 hours.  Before that third patch
from avg@ came along I had turned UMA on and saw no performance loss
for ~7 hours.  Obviously I had to reboot after applying avg@'s patch,
and I decided to test strictly without UMA at that point.

There seems to be a problem in the logic around the use of needfree
and/or arc_reclaim_needed() that should be worked out, but at least
this i386 8.1-STABLE system, as my code stands right now, "Is STABLE!".


=======================================================================
For reference, I have also adjusted these in arc.c:

- /* Start out with 1/8 of all memory */
- arc_c = kmem_size() / 8;
+ /* Start out with 1/4 of all memory */
+ arc_c = kmem_size() / 4;

And this, also in arc.c:

- arc_c = MIN(arc_c, vmem_size(heap_arena, VMEM_ALLOC | VMEM_FREE) / 8);
+ arc_c = MIN(arc_c, vmem_size(heap_arena, VMEM_ALLOC | VMEM_FREE) / 4);

	There is currently no sane way to have these scale relative to the
amount of memory in the system; the code blindly defaults to 1/8 of
kmem_size.  On a system with 2GB of memory that is ~256MB, but since
arc_c is clamped to kmem_size as shown above, if you also set
KVA_PAGES to 512 as is often suggested, you end up with an arc_c of
only 64MB.  So unless you adjust vm.kmem_size accordingly, on some
systems your ZFS install is going to suffer from the 1/8th problem.
This is mostly a problem for systems below the 2GB memory range.  For
systems with quite a lot of memory, 8GB for example, you are really
only using 1GB, and short of adjusting the source it is fairly hard to
use more RAM without inherently affecting something else in the system
by bumping vm.kmem_size*.  (A small arithmetic sketch follows this
block.)
=======================================================================
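
To make the arithmetic concrete, here is a throwaway userland sketch
(nothing from the tree; the kmem_size values are hypothetical) showing
what the stock 1/8 default and the 1/4 change above would yield:

#include <stdio.h>

/*
 * Illustrative only: arc_c as a fraction of kmem_size, computed the
 * way the stock (1/8) and adjusted (1/4) arc.c defaults would.
 */
int
main(void)
{
	unsigned long long kmem[] = {
		512ULL << 20,	/* small i386 kmem map */
		2048ULL << 20,	/* 2GB */
		8192ULL << 20,	/* 8GB */
	};
	size_t i;

	for (i = 0; i < sizeof(kmem) / sizeof(kmem[0]); i++)
		printf("kmem_size %5lluMB -> arc_c 1/8: %4lluMB, "
		    "1/4: %4lluMB\n", kmem[i] >> 20,
		    (kmem[i] / 8) >> 20, (kmem[i] / 4) >> 20);
	return (0);
}

For a 512MB kmem map that is 64MB vs 128MB, which is exactly the
1/8th problem described above.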

1GB RAM on ZFSv15 with the patches mentioned (loader.conf); adjust
according to your own system's environment:
kern.maxdsiz="640M"
kern.maxusers="512" # Overcome the max calculated 384 for >1G of MEM.
                    # See: /sys/kern/subr_param.c for details. ???
vfs.zfs.arc_min="62M"
vfs.zfs.arc_max="496M"
vfs.zfs.prefetch_disable=0
vm.kmem_size="512M"
vm.kmem_size_max="768M"
vm.kmem_size_min="128M"
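
After rebooting you can sanity-check that the tunables took effect,
e.g. (sysctl names as they appear on 8-STABLE):

sysctl vm.kmem_size vfs.zfs.arc_max kstat.zfs.misc.arcstats.c_max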


Regards,

-- 
 jhell,v


