Date: Mon, 4 Feb 2002 17:28:33 +0500
From: Sergey Gershtein <sg@ur.ru>
To: Peter Jeremy <peter.jeremy@alcatel.com.au>
Cc: freebsd-stable@FreeBSD.ORG
Subject: FS gurus needed! (was: Strange lock-ups during backup over nfs after adding 1024M RAM)
Message-ID: <114283707399.20020204172833@ur.ru>
In-Reply-To: <20020204130730.B72285@gsmx07.alcatel.com.au>
References: <20020126204941.H17540-100000@resnet.uoregon.edu> <1931130530386.20020128130947@ur.ru> <20020130073449.B78919@gsmx07.alcatel.com.au> <791310002584.20020130150111@ur.ru> <20020131111153.Y72285@gsmx07.alcatel.com.au> <1427021336.20020201123650@ur.ru> <20020204130730.B72285@gsmx07.alcatel.com.au>
On Monday, February 04, 2002 Peter Jeremy <peter.jeremy@alcatel.com.au> wrote:
PJ> On 2002-Feb-01 12:36:50 +0500, Sergey Gershtein <sg@ur.ru> wrote:
>>Here's what "vmstat -m" says about "FFS node":
>>
>>Memory statistics by type                          Type  Kern
>>        Type  InUse MemUse HighUse   Limit Requests Limit Limit Size(s)
>> ...
>>    FFS node 152293 76147K  76479K 102400K  3126467     0     0  512
>> ...
PJ> One oddity here is the Size - "FFS node" is used to allocate struct
PJ> inodes, and they should be 256 bytes on i386. Are you using something
PJ> other than an i386 architecture? Unless this is a cut-and-paste
PJ> error, I suspect something is radically wrong with your kernel.
Yes, it's i386 and it's not a cut-and-paste error.
The current output of vmstat -m says:
...
    FFS node 152725 76363K  76479K 102400K  9247602     0     0  512
...
    vfscache 157865 10671K  11539K 102400K  9668497     0     0  64,128,256,512,512K
...
The system uptime is 5 days; backups are temporarily disabled.
I put the complete output of 'vmstat -m', some other commands, and the
kernel config on the web at http://storm.mplik.ru/fbsd-stable/ so you
can have a look.
By the way, on our second server, which runs the same hardware, the
"FFS node" size is also 512. How can that be?
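If it helps, here is how I could double-check the in-kernel struct
size (just a sketch; the /kernel.debug path assumes the kernel was
built with debug symbols, e.g. makeoptions DEBUG=-g):

    # hypothetical check; /kernel.debug and the command file are assumptions
    echo 'print sizeof(struct inode)' > /tmp/gdbcmds
    gdb -batch -x /tmp/gdbcmds /kernel.debug

If that prints something just over 256 bytes, malloc(9) rounding the
allocation up to the next power-of-two bucket would explain the 512 in
the Size(s) column.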
PJ> By default, the memory limit is 1/2 vm_kmem_size, which is 1/3 physical
PJ> memory, capped to 200MB. Which means you've hit the default cap.
PJ> You can increase this limit with the loader environment
PJ> kern.vm.kmem.size (see loader(8) for details). (This is also capped
PJ> at twice the physical memory - which won't affect you). Before you go
PJ> overboard increasing this, note that the kernel virtual address space
PJ> is only 1GB.
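If I follow that math on our box (well over 600MB of physical RAM),
vm_kmem_size hits the 200MB cap, and half of that is exactly the
102400K Limit that vmstat shows:

    # worked example of the defaults (assumes physmem > 600MB):
    # vm_kmem_size   = min(physmem / 3, 200MB) = 200MB
    # per-type limit = vm_kmem_size / 2 = 100MB
    expr 200 \* 1024 / 2    # prints 102400 (K), matching the Limit column
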
Hmm. I am not sure what to do. Shall I try playing with
kern.vm.kmem.size, or is it better not to touch it? I am now thinking
that removing the extra memory we added is the best solution to the
problem, though I don't like that solution.
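If I do decide to try it, I assume the tunable goes into
/boot/loader.conf, something like this (the value here is only an
illustration, not a recommendation):

    # /boot/loader.conf -- value in bytes; 320MB is an arbitrary example
    kern.vm.kmem.size="335544320"
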
PJ> How many open files do you expect on your box?
PJ> Is it reasonable for there to be >>150,000 active inodes?
pstat -T right now says:
666/4096 files
0M/511M swap space
I don't expect the number of open files to go beyond 1,000-1,500. The
only problem is accessing a large number of small files (more than
1,000,000) over NFS. But if I understand correctly, those files should
be opened and closed one by one, not all at once. Is that right?
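My only theory is that even though the files are closed one by one,
the vnode cache keeps their inodes allocated afterwards, which would
explain the >150,000 InUse count. If that is right, the cache cap
should be visible via sysctl; a sketch, assuming kern.maxvnodes is the
relevant knob here:

    # check the vnode cache cap; lowering it is only a guess at a workaround
    sysctl kern.maxvnodes
    sysctl -w kern.maxvnodes=100000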
PJ> Does "vfscache" have around the same number of InUse entries as "FFS node"?
Yes, it seems so (see above). What does that mean?
PJ> What is the output of "sysctl vfs"?
See http://storm.mplik.ru/fbsd-stable/sysctl_vfs.txt
PJ> PS: I'm still hoping that one of the FS gurus will step in and point
PJ> out what's wrong.
I have changed the subject of my message to catch the attention of the
FS gurus on the list.
Thank you,
Sergey
