Date: Wed, 24 Dec 2008 22:35:11 -0800
From: Matt Simerson <matt@corp.spry.com>
To: freebsd-fs@freebsd.org
Subject: Re: ZFS performance gains real or imaginary?
Message-ID: <5C120CEB-6CEB-4722-BB23-7E4B83F779C2@corp.spry.com>
In-Reply-To: <20081225052903.GC87625@egr.msu.edu>
References: <22C8092E-210F-4E91-AA09-CFD38966975C@spry.com> <494AE6F4.30506@modulus.org> <1424BEB3-69FE-4BA2-884F-4862B3D7BCFD@corp.spry.com> <20081224034812.GR87625@egr.msu.edu> <F771A20F-384C-4435-847D-22FEFDCB1CDD@corp.spry.com> <20081225052903.GC87625@egr.msu.edu>
On Dec 24, 2008, at 9:29 PM, Adam McDougall wrote:
>> On Wed, Dec 24, 2008 at 01:00:14PM -0800, Matt Simerson wrote:
>>
>> On Dec 23, 2008, at 7:48 PM, Adam McDougall wrote:
>>
>>>> On Tue, Dec 23, 2008 at 12:43:47PM -0800, Matt Simerson wrote:
>>>>
>>>>> On Dec 18, 2008, at 4:12 PM, Andrew Snow wrote:
>>>>>
>>>>>> If so, then I really should be upgrading my production ZFS servers
>>>>>> to the latest -HEAD.
>>>>>
>>>>> That's correct; that is the only way to get the best working
>>>>> version of ZFS. Of course, then everything is unstable and broken -
>>>>> e.g. SMBFS became unusable for me and would crash the server.
>>>>
>>>> Unfortunately, the newer kernel hangs much more frequently.
>>>>
>>>> I have these settings in /boot/loader.conf
>>>>
>>>> vm.kmem_size="1536M"
>>>> vm.kmem_size_max="1536M"
>>>> vfs.zfs.arc_max="100M"
>>>>
>>>> I have also experimented with vfs.zfs.prefetch_disable and
>>>> vfs.zfs.arc_min in the past, and I'm open to suggestions on what
>>>> might help under this workload (multiple concurrent rsync
>>>> processes from remote systems to this one).
>>>
>>> Can you try:
>>>
>>> vm.kmem_size=2G
>>> vm.kmem_size_max=2G
>>> vfs.zfs.arc_max=512M
>>>
>>> This has been working for me on one amd64 system that only
>>> has 2G of RAM but had a similar problem frequency to yours. I
>>> don't know if it's a coincidence with the data that I am rsyncing
>>> lately, but: 10:47PM up 22 days, 7:12
>>
>> I made it 23 minutes before it hung. I've reduced my rsync
>> concurrency to 1, so I'm not hitting the system nearly as hard, but
>> it seems not to matter.
>>
>> Other workloads, like a 'make buildworld', complete with no
>> problems. For whatever reason, rsync sessions of entire Unix systems
>> to my backup servers are very troublesome.
>>
>> Matt
>
> OK. Since you have 16G of RAM, I suppose you could try setting both
> kmem sizes to something like 8G to see if it makes a difference? I'm
> getting a feeling that even if we don't see an outright failure, it
> might be deadlocking due to a kmem shortage.
back01# w
10:17PM up 40 mins, 2 users, load averages: 4.20, 3.07, 1.74
This is with:
vm.kmem_size="4G"
vm.kmem_size_max="4G"
vfs.zfs.arc_max="512M"
I'll let it trundle along with that setting and see how long it lasts.
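While it trundles, I'll keep an eye on kmem and ARC usage with something
like the line below. I'm assuming the usual sysctl OIDs exported by the
ZFS port (on my boxes the ARC size shows up under kstat), so adjust the
names if your build differs:

back01# sysctl vm.kmem_size vfs.zfs.arc_max kstat.zfs.misc.arcstats.size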
Matt
PS: These settings earlier today resulted in 12+ hours of uptime,
until I rebooted to test raising kmem_size to 4G.
vm.kmem_size="2G"
vm.kmem_size_max="2G"
vfs.zfs.arc_max="512M"
vfs.zfs.zil_disable=1
vfs.zfs.prefetch_disable=1
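
(To confirm the loader tunables actually take effect across these
reboots, I've been reading them back with something along these lines.
I believe all of them are visible as sysctls in this build, but that's
an assumption worth double-checking:)

back01# sysctl vm.kmem_size vm.kmem_size_max vfs.zfs.arc_max \
    vfs.zfs.zil_disable vfs.zfs.prefetch_disable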
PPS: If/when it hangs with 4G, I'll raise it again to 6 or 8 GB and
see how long it lasts. Whatever pattern emerges might be useful for
Pawel.
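
For reference, that next round would probably look something like this
in /boot/loader.conf (the 8G figures are just Adam's suggestion; I'd
keep arc_max pinned at 512M so only one variable changes at a time):

vm.kmem_size="8G"
vm.kmem_size_max="8G"
vfs.zfs.arc_max="512M"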
