Date:      Mon, 5 Oct 2009 08:51:34 -0700
From:      Artem Belevich <fbsdlist@src.cx>
To:        Attila Nagy <bra@fsn.hu>
Cc:        freebsd-fs@freebsd.org, Pawel Jakub Dawidek <pjd@freebsd.org>
Subject:   Re: ARC size constantly shrinks, then ZFS slows down extremely
Message-ID:  <ed91d4a80910050851m3d599f7ai67a57ef17a9a61e7@mail.gmail.com>
In-Reply-To: <4AC99F1D.3040300@fsn.hu>
References:  <4AC1E540.9070001@fsn.hu> <4AC5B2C7.2000200@fsn.hu> <20091002184526.GA1660@garage.freebsd.pl> <4AC99F1D.3040300@fsn.hu>


Your lockup is very similar (processes stuck sleeping on vmwait) to
what I saw when arc_min was set too high: with Pawel's patch, ZFS
would not give up any memory below arc_min.
Try bringing vfs.zfs.arc_min down.
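A minimal sketch of checking and lowering it (the 256M floor is only an
example value; vfs.zfs.arc_min is a boot-time tunable, so it has to go
into /boot/loader.conf rather than being set with sysctl at runtime):

    # current ARC limits and actual ARC size, in bytes
    sysctl vfs.zfs.arc_min vfs.zfs.arc_max
    sysctl kstat.zfs.misc.arcstats.size

    # in /boot/loader.conf, then reboot (256M is just an example):
    vfs.zfs.arc_min="256M"

After the reboot, vfs.zfs.arc_min should report the new floor.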

--Artem



2009/10/5 Attila Nagy <bra@fsn.hu>:
> On 10/02/09 20:45, Pawel Jakub Dawidek wrote:
>>
>> On Fri, Oct 02, 2009 at 09:59:03AM +0200, Attila Nagy wrote:
>>
>>>
>>> Backing out this change from the 8-STABLE kernel:
>>>
>>> http://svn.freebsd.org/viewvc/base/head/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c?r1=191901&r2=191902
>>>
>>> makes it survive about half an hour of IMAP searching. Of course, only
>>> time will tell whether this helps in the long run, but so far 10 out of
>>> 10 tries have succeeded in killing the machine with this method...
>>>
>>
>> Could you try this patch:
>>
>>        http://people.freebsd.org/~pjd/patches/arc.c.4.patch
>>
>
> Sure. But before that, a report with the above modification: the machine
> survived for some days, then started to behave strangely. I could ping it,
> and I could log in to the IMAP service (running from ZFS) and read some
> mails, but not all of them.
> I could not access it via ssh (which runs from UFS), but a top already
> running in a different session was still alive. It showed:
> last pid: 11272;  load averages:  0.00,  0.00,  0.00   up 3+15:21:13  09:11:43
> 149 processes: 1 running, 143 sleeping, 1 zombie, 4 waiting
> CPU:  0.0% user,  0.0% nice,  0.2% system,  0.0% interrupt, 99.8% idle
> Mem: 234M Active, 197M Inact, 559M Wired, 111M Buf, 440K Free
> Swap: 4096M Total, 976K Used, 4095M Free
>
>  PID USERNAME  THR PRI NICE   SIZE    RES STATE   C   TIME   WCPU COMMAND
> 78492 root        1  44    0  4700K  2156K CPU1    1   5:37  0.00% top
> 92343 root        1  44    0  4132K  1576K nanslp  1   4:12  0.00% gstat
> 13401 root        1  44    0  1528K   456K piperd  0   2:19  0.00% readproctitl
> 12679 root        1  44    0  3932K  1236K vmwait  1   2:12  0.00% zpool
> 35988    125      4  45    0 16892K  5968K sigwai  0   1:53  0.00% milter-greyl
> 25656 root        1  45    0  1536K   564K getblk  0   1:45  0.00% supervise
> 25798 root        1  44    0  1536K   564K vmwait  0   1:44  0.00% supervise
> 28406 root        1  44    0  1536K   544K vmwait  0   1:43  0.00% supervise
> 30226 root        1  44    0  1536K   544K vmwait  0   1:43  0.00% supervise
> 35401 root        1  44    0  1536K   544K vmwait  0   1:42  0.00% supervise
> 29203 root        1  44    0  1536K   544K vmwait  0   1:42  0.00% supervise
> 21629    389      6  44    0 91664K 41892K ucond   0   1:02  0.00% slapd
> 72283     60      1  44    0 80972K  1948K select  1   0:34  0.00% idled
> 98960 root        1  44    0  9396K  2544K select  1   0:32  0.00% sshd
> 1550 root        1  44    0  3340K   940K vmwait  1   0:32  0.00% syslogd
> 5463    125      1  44    0  6924K  2036K vmwait  0   0:27  0.00% qmgr
> 54193 root        1  44    0  9396K  2516K select  0   0:22  0.00% sshd
>
> I could not log in on the console; it didn't even give a "user name" field
> after hitting Enter. Strange.
>
> I will try the patch.
>
>
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
>
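For reference, applying pjd's patch to a stable/8 source tree and
rebuilding the kernel would look roughly like this (the -p0 strip level
assumes the diff paths are rooted at the top of the source tree, and
GENERIC stands in for whatever kernel config the machine actually uses):

    cd /usr/src
    fetch http://people.freebsd.org/~pjd/patches/arc.c.4.patch
    # check how the paths in the diff are rooted before choosing -p
    patch -p0 < arc.c.4.patch
    make buildkernel KERNCONF=GENERIC
    make installkernel KERNCONF=GENERIC
    shutdown -r now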


