From: Jesse Guardiani
Reply-To: jesse@wingnet.net
Organization: WingNET
To: freebsd-questions@freebsd.org
Date: Fri, 12 Sep 2003 09:54:50 -0400
Subject: Re: `top` process memory usage: SIZE vs RES
References: <1232387734.20030911183521@mygirlfriday.info>

gv-list-freebsdquestions@mygirlfriday.info wrote:

[...]

> J> 1.) Where is my Free memory going?
>
> given what you say
> custom-python->>>qmail-scanner->clamd->qmail-queue
>
> This whole scenario is very memory intensive. First you have each email
> "pythonized" and then qmail-scanner is *very* memory intensive, as it has
> initially a very heavy duty perl script for each email before being passed
> off to clamd.

Clamd is a separate issue, since the only clamav command actually run from
the pipeline (and thus under the restrictions of softlimit) is the clamdscan
client, which is NOT memory intensive. Yes, clamd contributes to the overall
memory footprint, but I'm only concerned with getting softlimit set properly
at this point. My machine can always fall back on swap, but the second the
softlimit is exceeded the email will be temporarily deferred, which I
consider a Bad Thing.

Having said that, yes, it is still a very memory-intensive pipeline. I took
some time to profile the memory usage a few days ago, and it looks like the
most memory the pipeline should ever use at any given point in time is
~12780K, with the following processes running:

USER    PID   PPID  %CPU %MEM  VSZ  RSS  TT  STAT STARTED    TIME COMMAND
qmaild  24716 24553  0.0  0.2   920  460 ??  I    7:39PM  1:08.07 /var/qmail/bin/qmail-smtpd
qmaild  24718 24716  0.0  0.3   884  488 ??  I    7:39PM  0:08.63 /usr/local/bin/qmail-qfilter /var/qmail/queue-filters/block-forged-sender.py -s
qmailq  24730 24718  9.2  2.1  5052 3988 ??  S    7:41PM  0:55.87 /usr/bin/suidperl -T /dev/fd/4//var/qmail/bin/qmail-scanner-queue.pl (perl)
qmailq  24739 24730 69.7  2.1  5052 3988 ??  R    7:43PM  0:06.55 /usr/bin/suidperl -T /dev/fd/4//var/qmail/bin/qmail-scanner-queue.pl (perl)
qmailq  24740 24739 14.4  0.2   872  400 ??  R    7:43PM  0:01.28 /var/qmail/bin/qmail-queue

(qmail-scanner is silly. For some reason it spawns a copy of itself,
possibly to hand the message off to qmail-queue.)
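If anyone wants to take the same kind of snapshot, something along the lines
of the little Python sketch below will do it. It is purely illustrative: the
process names and ps columns are just the ones from the listing above, so
adjust them to your own pipeline.

#!/usr/bin/env python
# Illustrative only: one snapshot of VSZ/RSS for the pipeline processes,
# taken by parsing `ps` output. Tweak WATCH to match your own setup.
import os

WATCH = ("qmail-smtpd", "qmail-qfilter", "qmail-scanner-queue.pl", "qmail-queue")

total_rss = 0
# FreeBSD ps: -a = all users, -x = include processes without a tty,
# -o = pick just the columns we care about (first output line is the header)
for line in os.popen("ps -axo pid,vsz,rss,command").readlines()[1:]:
    fields = line.split(None, 3)
    if len(fields) < 4:
        continue
    pid, vsz, rss, command = fields
    if any(name in command for name in WATCH):
        total_rss += int(rss)
        print("%6s  VSZ=%sK  RSS=%sK  %s" % (pid, vsz, rss, command.strip()))
print("pipeline total RSS: ~%dK" % total_rss)

Run it repeatedly while a big message passes through to watch the per-process
numbers move.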
But even with the softlimit set to 15M, my huge test message to a server
with only about 80M of free RAM (before sending the message; free memory
dropped to ~500K while handling it) somehow managed to exceed the softlimit.
The exact same message, sent to a machine with ~600M of free RAM and an
identical mail server setup, passed through the pipeline without tripping
the softlimit.

From what I have seen while watching a huge message pass down the pipeline,
none of the processes in the pipeline increase their memory usage in
proportion to email size. They're all relatively static. So I'm a little
confused about why the message would trip the softlimit on a box with less
RAM (128M) but pass through successfully on a box with more RAM (1G).

Would the act of using more swap effectively increase a process's:

  - data segment usage?
  - stack segment usage?
  - locked physical pages per process?
  - total of all segments per process?

These are the things that softlimit limits (according to `man softlimit`),
and I admittedly don't understand how any of the above translates to memory
usage as shown by VSZ and RSS under `ps`, or SIZE and RES under `top`.
Any ideas? (There's a small probe sketch in the P.S. below.)

> Maybe running vmstat -w 1 would give you a different perspective also.

I'll check it out.

-- 
Jesse Guardiani, Systems Administrator
WingNET Internet Services,
P.O. Box 2605 // Cleveland, TN 37320-2605
423-559-LINK (v)  423-559-5145 (f)
http://www.wingnet.net
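P.S. To make those four categories a bit more concrete, here is a rough,
purely illustrative Python probe: it prints the soft/hard values for the
data-segment, stack, locked-memory, and total-of-all-segments limits (the
RLIMIT_* names come from Python's resource module and may not all exist on
every platform), then allocates ~1MB at a time until an allocation is
refused. Which limit actually fires first depends on how malloc obtains its
memory (brk vs. mmap), which is exactly the part I can't map onto VSZ/RSS.

#!/usr/bin/env python
# Illustrative only: show the limit categories softlimit is said to set,
# then grow the process until an allocation is refused (MemoryError).
import resource

for label, name in (("data segment ", "RLIMIT_DATA"),
                    ("stack segment", "RLIMIT_STACK"),
                    ("locked memory", "RLIMIT_MEMLOCK"),
                    ("all segments ", "RLIMIT_AS")):
    if hasattr(resource, name):                  # not every platform defines them all
        soft, hard = resource.getrlimit(getattr(resource, name))
        print("%s  soft=%s  hard=%s" % (label, soft, hard))

chunks = []
try:
    while len(chunks) < 64:                      # give up after ~64MB if nothing trips
        chunks.append("x" * (1024 * 1024))       # grow by roughly 1MB per pass
    print("allocated %dMB without hitting a limit" % len(chunks))
except MemoryError:
    print("MemoryError after ~%dMB allocated" % len(chunks))

Run it under softlimit (e.g. `softlimit -m 15000000 python probe.py`) and the
soft limits printed at the top should all drop to ~15M.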