From owner-freebsd-performance@FreeBSD.ORG Tue Feb 17 17:48:03 2004
Delivered-To: freebsd-performance@freebsd.org
Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125])
	by hub.freebsd.org (Postfix) with ESMTP id 58BDA16A4CE
	for ; Tue, 17 Feb 2004 17:48:03 -0800 (PST)
Received: from smtp3b.sentex.ca (smtp3b.sentex.ca [205.211.164.50])
	by mx1.FreeBSD.org (Postfix) with ESMTP id CE1CE43D1D
	for ; Tue, 17 Feb 2004 17:48:02 -0800 (PST)
	(envelope-from mike@sentex.net)
Received: from lava.sentex.ca (pyroxene.sentex.ca [199.212.134.18])
	by smtp3b.sentex.ca (8.12.10/8.12.10) with ESMTP id i1I1m1eB003058;
	Tue, 17 Feb 2004 20:48:01 -0500 (EST)
	(envelope-from mike@sentex.net)
Received: from simian.sentex.net ([192.168.43.27])
	by lava.sentex.ca (8.12.9p2/8.12.9) with ESMTP id i1I1lrWZ071905;
	Tue, 17 Feb 2004 20:47:54 -0500 (EST)
	(envelope-from mike@sentex.net)
Message-Id: <6.0.3.0.0.20040217204319.10542990@209.112.4.2>
X-Sender: mdtpop@209.112.4.2 (Unverified)
X-Mailer: QUALCOMM Windows Eudora Version 6.0.3.0
Date: Tue, 17 Feb 2004 20:48:27 -0500
To: tec@mega.net.br, freebsd-performance@freebsd.org
From: Mike Tancsa
In-Reply-To: <20040217.eDA.94486400@admin.mega.net.br>
References: <20040217.eDA.94486400@admin.mega.net.br>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"; format=flowed
X-Virus-Scanned: by amavisd-new
Subject: Re: Tuning for large outbound smtp queues
X-BeenThere: freebsd-performance@freebsd.org
X-Mailman-Version: 2.1.1
Precedence: list
List-Id: Performance/tuning
X-List-Received-Date: Wed, 18 Feb 2004 01:48:03 -0000

At 08:35 PM 17/02/2004, TEC Meganet wrote:
>I have a server with a similar load but no problems at all.
>What catches my attention is that you use IDE disks on a server ...
>and your swap is in use, which means your RAM is too low.
>BTW, without your sysctl hw output, kernel compile options, and any
>sysctl settings, it's hard to give you hints.

Sorry, here it is.

hw.machine: i386
hw.model: Intel(R) Pentium(R) 4 CPU 1.80GHz
hw.ncpu: 1
hw.byteorder: 1234
hw.physmem: 1062875136
hw.usermem: 953839616
hw.pagesize: 4096
hw.floatingpoint: 1
hw.machine_arch: i386
hw.ata.ata_dma: 1
hw.ata.wc: 1
hw.ata.tags: 0
hw.fxp_rnr: 0
hw.fxp_noflow: 0
hw.instruction_sse: 0
hw.availpages: 259324
kern.ostype: FreeBSD
kern.osrelease: 4.9-STABLE
kern.osrevision: 199506
kern.version: FreeBSD 4.9-STABLE #0: Wed Jan 21 09:27:16 EST 2004
    mdtancsa@smtp3.sentex.ca:/usr/obj/usr/src/sys/smtp
kern.maxvnodes: 69954
kern.maxproc: 6164
kern.maxfiles: 16384
kern.argmax: 65536
kern.securelevel: -1
kern.hostname: smtp3.sentex.ca
kern.hostid: 0
kern.clockrate: { hz = 100, tick = 10000, tickadj = 5, profhz = 1024, stathz = 128 }
kern.posix1version: 199309
kern.ngroups: 16
kern.job_control: 1
kern.saved_ids: 0
kern.boottime: { sec = 1074697898, usec = 755565 } Wed Jan 21 10:11:38 2004
kern.domainname:
kern.osreldate: 490101
kern.bootfile: /kernel
kern.maxfilesperproc: 14745
kern.maxprocperuid: 5547
kern.dumpdev: { major = 116, minor = 0x20001 }
kern.ipc.maxsockbuf: 262144
kern.ipc.sockbuf_waste_factor: 8
kern.ipc.somaxconn: 128
kern.ipc.max_linkhdr: 16
kern.ipc.max_protohdr: 40
kern.ipc.max_hdr: 56
kern.ipc.max_datalen: 156
kern.ipc.nmbclusters: 65536
kern.ipc.msgmax: 16384
kern.ipc.msgmni: 40
kern.ipc.msgmnb: 2048
kern.ipc.msgtql: 40
kern.ipc.msgssz: 8
kern.ipc.msgseg: 2048
kern.ipc.semmap: 30
kern.ipc.semmni: 10
kern.ipc.semmns: 60
kern.ipc.semmnu: 30
kern.ipc.semmsl: 60
kern.ipc.semopm: 100
kern.ipc.semume: 10
kern.ipc.semusz: 92
kern.ipc.semvmx: 32767
kern.ipc.semaem: 16384
kern.ipc.shmmax: 33554432
kern.ipc.shmmin: 1
kern.ipc.shmmni: 192
kern.ipc.shmseg: 128
kern.ipc.shmall: 8192
kern.ipc.shm_use_phys: 0
kern.ipc.shm_allow_removed: 0
kern.ipc.mbuf_wait: 32
kern.ipc.mbtypes:
541 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0
kern.ipc.nmbufs: 262144
kern.ipc.m_clreflimithits: 0
kern.ipc.mcl_pool_max: 0
kern.ipc.mcl_pool_now: 0
kern.ipc.maxsockets: 65536
kern.dummy: 0
kern.ps_strings: 3217031152
kern.usrstack: 3217031168
kern.logsigexit: 1
kern.fallback_elf_brand: -1
kern.init_path: /sbin/init:/sbin/oinit:/sbin/init.bak:/stand/sysinstall
kern.module_path: /;/boot/;/modules/
kern.acct_suspend: 2
kern.acct_resume: 4
kern.acct_chkfreq: 15
kern.cp_time: 8385505 1722 12804002 9119439 273191504
kern.timecounter.method: 0
kern.timecounter.hardware: TSC
kern.openfiles: 152
kern.kq_calloutmax: 4096
kern.ps_arg_cache_limit: 256
kern.ps_argsopen: 1
kern.randompid: 0
kern.maxusers: 384
kern.ps_showallprocs: 1
kern.shutdown.poweroff_delay: 5000
kern.shutdown.kproc_shutdown_wait: 60
kern.sugid_coredump: 0
kern.coredump: 1
kern.corefile: %N.core
kern.quantum: 100000
kern.ccpu: 1948
kern.fscale: 2048
kern.devstat.numdevs: 1
kern.devstat.generation: 1
kern.devstat.version: 4
kern.disks: ad0
kern.log_wakeups_per_second: 5
kern.log_console_output: 1
kern.msgbuf_clear: 0
kern.nselcoll: 0
kern.consmute: 0
kern.filedelay: 30
kern.dirdelay: 29
kern.metadelay: 28
kern.minvnodes: 17488
kern.chroot_allow_open_directories: 1

>general hint: why notify the virus senders and spammers?

I am not. It is the infected machine sending out viruses to users that
don't exist (e.g. spam addresses in its mailbox). The From: address is
forged, and the bounce comes back to my user, who no longer exists... a
double bounce. That, and the numerous joe jobs that hit customer
domains :(

>you believe they care or even exist? you are creating part of your own
>troubles, since most of these notify msgs come back to you as delivery
>errors :)
>
>JM
>
>Mike Tancsa (mike@sentex.net) writes:
> >
> > We have separate inbound and outbound smtp servers, and I am looking to
> > better tune the boxes (2 of them) that spool my network's outbound
> > mail.
> > As a result of the zillion viruses and n*zillion spams bouncing back
> > to networks that don't accept mail, I am seeing some very large queues
> > for sendmail. Apart from
> >
> > define(`confTO_IDENT', 0s)
> > define(`QUEUE_DIR', `/var/spool/mqueue/q*')dnl
> >
> > where there are 60 q directories, I haven't really tuned sendmail or
> > the OS. However, as the volume grows, the box becomes quite sluggish.
> > Is it just a matter of throwing more hardware at the issue, or can I
> > better tweak RELENG_4 and sendmail to deal with massive (80,000+)
> > queues? Allocating more memory to caching the filesystem, for example?
> >
> > Here is a quick snapshot.
> >
> > smtp3# vmstat -c 100
> >  procs    memory     page                  disk  faults      cpu
> >  r b w   avm    fre  flt re pi po   fr  sr ad0  in   sy  cs us sy id
> >  3 10 0 390168 36660 491  0  0  0  717  98   0 504 1749 430  3  7 90
> >  1 13 0 390964 35604 204  0  0  0  225   0 107 407 1155 166  2 10 87
> >  3  8 0 391672 37112 543  0  0  0 1359   0 110 470 1862 163  1 13 85
> >  1 11 0 461436 37316 149  0  0  0  285   0 105 422 1409 190  0  9 91
> >  2 12 0 459796 37700 247  0  0  0  357   0 104 418 1620 177  2  9 89
> >  3 10 0 460924 36612 249  0  0  0  201   0 105 457 2017 185  1 10 88
> >  2 12 0 486584 36888  39  0  0  0  201   0 106 402 1156 164  1  7 92
> >  3  9 0 484632 37280 195  0  0  0  355   0 110 445 1426 184  1  8 90
> >  2 11 0 503260 37628  23  0  0  0  172   0 105 401  706 127  0  7 93
> >  4  7 0 503260 37372  58  0  0  0   30   0  99 384  931 107  1  7 91
> >  2 10 0 529064 36480 202  0  0  0  176   0 110 429 1400 143  1 10 90
> >  2  8 0 527280 36900 114  0  0  0  306   0 109 382  681 130  1  9 90
> >  3  8 0 533508 36592   5  0  0  0   16   0 107 365  641 111  1  4 95
> >  3  9 0 534364 35840 167  0  0  0  138   0 105 375  919 109  1  9 90
> > ^C
> > smtp3# iostat -c 100
> >  tty          ad0           cpu
> >  tin tout  KB/t tps  MB/s  us ni sy in id
> >    0    2  0.00   0  0.00   3  0  4  3 90
> >    0   43 14.60 119  1.70   0  0  0  0 100
> >    0   43 14.13 155  2.14   0  0  0  1 99
> >    0   43  4.93 107  0.51   3  0  4  0 93
> >    0   43  5.01 106  0.52   2  0  3  2 94
> >    0   42  4.17 102  0.42   2  0  2  2 93
> >    0   43  3.51  92  0.32   0  0  1  1 98
> >    0   43  3.42  99  0.33   0  0  1  1 98
> >    0   43  4.87 105  0.50   1  0  1  0 98
> > ^C
> >
> > Memory statistics by type                          Type  Kern
> >         Type  InUse  MemUse  HighUse    Limit  Requests Limit Limit Size(s)
> >     atkbddev      2      1K       1K  102400K         2     0     0  32
> >   uc_devlist      0      0K       2K  102400K        12     0     0  16,1K
> >     nexusdev      3      1K       1K  102400K         3     0     0  16
> >      memdesc      1      4K       4K  102400K         1     0     0  4K
> >         mbuf      1     96K      96K  102400K         1     0     0  128K
> >       isadev      8      1K       1K  102400K         8     0     0  64
> >         ZONE     14      2K       2K  102400K        14     0     0  128
> >    VM pgdata      1     64K      64K  102400K         1     0     0  64K
> >       devbuf     85    185K     185K  102400K       141     0     0  16,32,64,128,256,512,1K,2K,4K,16K
> >    UFS mount     15     37K      37K  102400K        15     0     0  512,2K,4K,8K
> >    UFS ihash      1    256K     256K  102400K         1     0     0  256K
> >     FFS node  63819  15955K   15955K  102400K  97174709     0     0  256
> >       dirrem     15      1K      18K  102400K  30060178     0     0  32
> >        mkdir      0      0K       8K  102400K       718     0     0  32
> >       diradd      0      0K      41K  102400K  30360613     0     0  32
> >     freefile      0      0K      41K  102400K  19194217     0     0  32
> >     freeblks      2      1K     163K  102400K  19194170     0     0  128
> >     freefrag      0      0K      13K  102400K   4389505     0     0  32
> >   allocindir      0      0K    1051K  102400K   4645678     0     0  64
> >     indirdep      1      1K      81K  102400K    173299     0     0  32,16K
> >  allocdirect      2      1K      70K  102400K  27923527     0     0  64
> >    bmsafemap      2      1K       2K  102400K  20570860     0     0  32
> >       newblk      1      1K       1K  102400K  32569206     0     0  32,256
> >     inodedep     18    259K     480K  102400K  50515208     0     0  128,256K
> >      pagedep     15     33K      46K  102400K  30234990     0     0  64,32K
> >     p1003.1b      1      1K       1K  102400K         1     0     0  16
> >     syncache      1      8K       8K  102400K         1     0     0  8K
> >    tseg_qent      0      0K       1K  102400K    213633     0     0  32
> >  IpFw/IpAcct      5      1K       1K  102400K         5     0     0  64
> >     in_multi      2      1K       1K  102400K         2     0     0  32
> >     routetbl     68     10K     490K  102400K   8649146     0     0  16,32,64,128,256
> >        faith      1      1K       1K  102400K         1     0     0  256
> >  ether_multi      7      1K       1K  102400K         7     0     0  16,32,64
> >       ifaddr     16      5K       5K  102400K        16     0     0  32,64,256,2K
> >          BPF      5      1K      65K  102400K        56     0     0  32,128,32K
> >       vnodes     17      4K       4K  102400K       209     0     0  16,32,64,128,256
> >        mount      6      3K       3K  102400K         8     0     0  16,128,512
> >  cluster_save buffer  0  0K      1K  102400K    788517     0     0  32,64
> >     vfscache  66731   4683K    4990K  102400K 115446494     0     0  64,128,256,512K
> >   BIO buffer      6     12K    1198K  102400K      2565     0     0  512,2K
> >          pcb     25      5K      18K  102400K  47486348     0     0  16,32,64,2K
> >       soname      4      1K      12K  102400K 404821840     0     0  16,128
> >        lockf      2      1K      49K  102400K 759540302     0     0  64
> >         ptys      5      3K       3K  102400K         5     0     0  512
> >         ttys    567     73K      73K  102400K      2439     0     0  128,256
> >       atexit      1      1K       1K  102400K         1     0     0  16
> >       zombie      0      0K       7K  102400K   8677258     0     0  128
> >          shm      1     12K      12K  102400K         1     0     0  16K
> >    proc-args     35      2K      69K  102400K 100222163     0     0  16,32,64,128,256
> >       kqueue     12     12K     786K  102400K  43631105     0     0  256,1K
> >        sigio      1      1K       1K  102400K         1     0     0  32
> >         file     91      6K     257K  102400K 318106792     0     0  64
> >    file desc     41     11K     203K  102400K   8677309     0     0  256
> >        dev_t    715     90K      90K  102400K       715     0     0  128
> >  timecounter     10      2K       2K  102400K        10     0     0  128
> >          kld      4      1K       1K  102400K        36     0     0  16,32,128
> >          sem      3      6K       6K  102400K         3     0     0  1K,4K
> >    AR driver      1      1K       3K  102400K         3     0     0  64,512,2K
> >    AD driver      2      2K       2K  102400K 218055758     0     0  64,1K
> >          msg      4     25K      25K  102400K         4     0     0  512,4K,16K
> >         rman     50      3K       3K  102400K       400     0     0  16,64
> >     ioctlops      0      0K       1K  102400K        12     0     0  512,1K
> >    taskqueue      2      1K       1K  102400K         2     0     0  32
> >         SWAP      2   1097K    1097K  102400K         2     0     0  32,512K
> > eventhandler     11      1K       1K  102400K        11     0     0  32,64
> >          bus    424     39K      40K  102400K       730     0     0  16,32,64,128,256,512,1K,2K,4K
> >       sysctl      0      0K       1K  102400K     10415     0     0  16,32
> >      uidinfo      5      2K       2K  102400K      8114     0     0  32,1K
> >         cred     30      4K     100K  102400K   2963736     0     0  128
> >      subproc    101      9K      79K  102400K  17364833     0     0  32,64,256
> >         proc      2      8K       8K  102400K         2     0     0  4K
> >      session     22      2K      48K  102400K   2872588     0     0  64
> >         pgrp     26      1K      24K  102400K   2873228     0     0  32
> >  ATA generic      2      1K       1K  102400K         2     0     0  16,512
> >         temp    166    117K     161K  102400K    294963     0     0  16,32,64,128,256,512,1K,4K,16K,128K
> >
> > Memory Totals:  In Use    Free    Requests
> >                 23137K   3624K  2427718869
> > --------------------------------------------------------------------
> > Mike Tancsa, tel +1 519 651 3400
> > Sentex Communications, mike@sentex.net
> > Providing Internet since 1994 www.sentex.net
> > Cambridge, Ontario Canada www.sentex.net/mike
> >
> > _______________________________________________
freebsd-performance@freebsd.org mailing list > > http://lists.freebsd.org/mailman/listinfo/freebsd-performance > > To unsubscribe, send any mail to > "freebsd-performance-unsubscribe@freebsd.org" > > > >-- >WIPNET Telecom Ltda. > >GPG Key http://wip.mega.net.br/tec.asc >{ ABCE D455 FC29 818A B6E6 4D4C 59D9 77EE 41B0 EC54 } > >_______________________________________________ >freebsd-performance@freebsd.org mailing list >http://lists.freebsd.org/mailman/listinfo/freebsd-performance >To unsubscribe, send any mail to "freebsd-performance-unsubscribe@freebsd.org"