From owner-freebsd-performance@FreeBSD.ORG Sun Feb 15 12:33:31 2004
From: "Juan Tumani" <jtumani55@hotmail.com>
To: freebsd-performance@freebsd.org
Date: Sun, 15 Feb 2004 15:33:30 -0500
Subject: FW: Re: FreeBSD 5.2 v/s FreeBSD 4.9 MFLOPS performance (gcc3.3.3 v/s gcc2.9.5)

(adding freebsd-performance for any comments someone there may have)

We did some more experiments. When we compiled the object file on 5.2, linked it on the 4.9 machine, and then ran it on the 5.2 machine, there was no cyclical problem seen (the "Frankenstein" run). That kind of points to the 5.2 link stage, i.e., the 5.2 libraries?

Check out the graphical exhibits at http://www.employees.org/~rsargent/flops/
The graph at the very bottom shows the Frankenstein run. The graphs depict the /usr/bin/time results of running 300 iterations of flops 1.2 while incrementing the environment by one byte between iterations.
We use flops 1.2; it runs fewer modules than 2.0, but still exhibits the same cyclical slowness [alignment] problem as module #2 in flops 2.0. Source code for flops 1.2 is at the bottom of the above link.

Juan

>From: Alexandr Kovalenko
>To: Wes Peters
>CC: Juan Tumani, freebsd-hackers@freebsd.org
>Subject: Re: FreeBSD 5.2 v/s FreeBSD 4.9 MFLOPS performance (gcc3.3.3 v/s gcc2.9.5)
>Date: Sat, 14 Feb 2004 10:24:21 +0200
>
>Hello, Wes Peters!
>
>On Tue, Feb 10, 2004 at 11:29:34AM -0800, you wrote:
>
> > On Monday 09 February 2004 13:20, Juan Tumani wrote:
> > > I have an Intel D845GE m/b w/ a P4 1.7 CPU and I have the box set up
> > > to dual boot to either 4.9 or 5.2. Both OSes are right off the latest
> > > posted ISO CD image, i.e., no updates, no kernel tweaks, everything
> > > vanilla right out of the box. I compiled flops.c on both 4.9 and
> > > 5.2, and the 5.2 performance is less than half that of 4.9: 760
> > > MFLOPS on 4.9 v/s 340 MFLOPS on 5.2.
> > >
> > > I tried turning off SMP and other kernel tweaks, with no
> > > improvement in 5.2. I then downloaded and installed gcc295 on the
> > > 5.2 machine and that fixed the problem. So now all I have to do is
> > > figure out the gcc 3.3.3 switches to make it run like gcc 2.9.5, or
> > > figure out how to rebuild 5.2 w/ gcc 2.9.5 :-).
> >
> > I'm not sure that kernel tweaks are going to make much difference on a
> > single-threaded floating point benchmark. Compiler optimizations sure
> > do, though. (Note: I couldn't find version 1.2 of flops.c, so this is
> > based on version 2.0.) On a 2.0GHz P4, I see:
> >
> > wpeters@salty> cc -o flops -O -DUNIX flops.c
>
>Could you please explain this to me? The result is fully reproducible.
>Please note that the only difference is the output file name. Even the
>resulting files match bit-for-bit. If I do
>
>mv very-slow-flops flops2
>
>and then run ./flops2, it runs as flops2 does - fast.
>
>Machine is a dual 2.8 GHz Xeon with HTT disabled (in the BIOS). FreeBSD is
>5.2.1-RC2.
>
>%fetch http://home.iae.nl/users/mhx/flops.c
>Receiving flops.c (34942 bytes): 100%
>34942 bytes transferred in 0.6 seconds (54.72 kBps)
>%cc -o flops2 -O2 -mcpu=pentium4 -DUNIX flops.c
>flops.c: In function `main':
>flops.c:174: warning: return type of `main' is not `int'
>%cc -o flops-sse-4 -O2 -mcpu=pentium4 -DUNIX flops.c
>flops.c: In function `main':
>flops.c:174: warning: return type of `main' is not `int'
>%cc -o very-slow-flops -O2 -mcpu=pentium4 -DUNIX flops.c
>flops.c: In function `main':
>flops.c:174: warning: return type of `main' is not `int'
>%./flops2
>
>   FLOPS C Program (Double Precision), V2.0 18 Dec 1992
>
>   Module     Error        RunTime      MFLOPS
>                            (usec)
>     1      4.0146e-13      0.0130    1074.8815
>     2     -1.4166e-13      0.0128     545.3338
>     3      4.7184e-14      0.0177     960.4579
>     4     -1.2557e-13      0.0166     903.6914
>     5     -1.3800e-13      0.0317     915.0687
>     6      3.2380e-13      0.0310     936.3149
>     7     -8.4583e-11      0.0403     297.7250
>     8      3.4867e-13      0.0310     968.6112
>
>   Iterations      = 512000000
>   NullTime (usec) =    0.0006
>   MFLOPS(1)       =  635.0698
>   MFLOPS(2)       =  560.4516
>   MFLOPS(3)       =  805.4502
>   MFLOPS(4)       =  945.5219
>
>%./flops-sse-4
>
>   FLOPS C Program (Double Precision), V2.0 18 Dec 1992
>
>   Module     Error        RunTime      MFLOPS
>                            (usec)
>     1      4.0146e-13      0.0177     791.6075
>     2     -1.4166e-13      0.0309     226.7944
>     3      4.7184e-14      0.0202     842.7146
>     4     -1.2557e-13      0.0166     902.8921
>     5     -1.3800e-13      0.0317     916.2631
>     6      3.2380e-13      0.0309     937.0923
>     7     -8.4583e-11      0.0403     297.9173
>     8      3.4867e-13      0.0309     969.3446
>
>   Iterations      = 512000000
>   NullTime (usec) =    0.0006
>   MFLOPS(1)       =  297.9983
>   MFLOPS(2)       =  546.3944
>   MFLOPS(3)       =  775.3701
>   MFLOPS(4)       =  922.1566
>
>%./very-slow-flops
>
>   FLOPS C Program (Double Precision), V2.0 18 Dec 1992
>
>   Module     Error        RunTime      MFLOPS
>                            (usec)
>     1      4.0146e-13      0.0317     442.0039
>     2     -1.4166e-13      0.0331     211.3728
>     3      4.7184e-14      0.0350     485.1899
>     4     -1.2557e-13      0.0168     892.8307
>     5     -1.3800e-13      0.0319     909.7385
>     6      3.2380e-13      0.0311     931.1527
>     7     -8.4583e-11      0.0405     296.4570
>     8      3.4867e-13      0.0312     962.3224
>
>   Iterations      = 512000000
>   NullTime (usec) =    0.0004
>   MFLOPS(1)       =  259.1938
>   MFLOPS(2)       =  492.7930
>   MFLOPS(3)       =  669.1527
>   MFLOPS(4)       =  797.1471
>
>--
>NEVE-RIPE, will build world for food
>Ukrainian FreeBSD User Group
>http://uafug.org.ua/

From owner-freebsd-performance@FreeBSD.ORG Mon Feb 16 02:21:57 2004
From: Radim Kolar <hsn@netmag.cz>
To: freebsd-performance@freebsd.org
Date: Sat, 14 Feb 2004 17:26:53 +0100
Message-ID: <20040214162653.GA556@asura.bsd>
Subject: realloc (using forkbomb) benchmark results

I have tested memory allocation speed in Linux and FreeBSD with ports/benchmarks/forkbomb. This test does realloc(1M), realloc(2M), ...
Results are (-M writes into the allocated memory, -m doesn't):

FreeBSD 5.2:
./forkbomb -l 64 -i 256 -M --quit
16.77s user 21.38s system 98% cpu 38.591 total
./forkbomb -l 64 -i 256 -m --quit > /dev/null
16.12s user 31.30s system 98% cpu 48.329 total

Linux/glibc 2.3.2:
(hsn@tty2):~/forkbomb% time ./forkbomb -l 64 -i 256 -m --quit > /dev/null
./forkbomb -l 64 -i 256 -m --quit > /dev/null  0.00s user 0.00s system 0% cpu +0.000 total
(hsn@tty2):~/forkbomb% time ./forkbomb -l 64 -i 256 -M --quit > /dev/null
./forkbomb -l 64 -i 256 -M --quit > /dev/null  0.02s user 0.59s system 96% cpu +0.634 total

The results show:
a) FreeBSD uses too much user time: realloc() can't extend the last block in place, so it copies it to another location. This should be easy to fix.
b) if FreeBSD writes to the allocated memory, the results are better! Comments?
c) if Linux doesn't write into the memory, it doesn't actually do anything.
d) Linux has the mremap() syscall, which is used for memory resizing in realloc().
e) Linux uses mmap() for allocating blocks much larger than a page; FreeBSD does brk() allocation.
f) FreeBSD brk() vs. Linux mremap(): 31 vs. 0.6 seconds.
From owner-freebsd-performance@FreeBSD.ORG Tue Feb 17 17:14:37 2004
From: Mike Tancsa <mike@sentex.net>
To: freebsd-performance@freebsd.org
Date: Tue, 17 Feb 2004 20:15:04 -0500
Subject: Tuning for large outbound smtp queues

We have separate inbound and outbound smtp servers, and I am looking to better tune the boxes (2 of them) that spool my network's outbound mail. As a result of the zillion viruses and n*zillion spams bouncing back to networks that don't accept mail, I am seeing some very large queues for sendmail. Apart from

define(`confTO_IDENT', 0s)
define(`QUEUE_DIR', `/var/spool/mqueue/q*')dnl

where there are 60 q directories, I haven't really tuned sendmail or the OS.
However, as the volume grows, the box becomes quite sluggish. Is it just a matter of throwing more hardware at the issue, or can I better tweak RELENG_4 and sendmail to deal with massive (80,000+) queues? Allocating more memory to caching the filesystem, for example?

Here is a quick snapshot.

smtp3# vmstat -c 100
 procs    memory       page                 disk  faults      cpu
 r b w   avm    fre   flt re pi po  fr  sr ad0  in   sy  cs us sy id
 3 10 0 390168 36660  491  0  0  0  717 98   0 504 1749 430  3  7 90
 1 13 0 390964 35604  204  0  0  0  225  0 107 407 1155 166  2 10 87
 3  8 0 391672 37112  543  0  0  0 1359  0 110 470 1862 163  1 13 85
 1 11 0 461436 37316  149  0  0  0  285  0 105 422 1409 190  0  9 91
 2 12 0 459796 37700  247  0  0  0  357  0 104 418 1620 177  2  9 89
 3 10 0 460924 36612  249  0  0  0  201  0 105 457 2017 185  1 10 88
 2 12 0 486584 36888   39  0  0  0  201  0 106 402 1156 164  1  7 92
 3  9 0 484632 37280  195  0  0  0  355  0 110 445 1426 184  1  8 90
 2 11 0 503260 37628   23  0  0  0  172  0 105 401  706 127  0  7 93
 4  7 0 503260 37372   58  0  0  0   30  0  99 384  931 107  1  7 91
 2 10 0 529064 36480  202  0  0  0  176  0 110 429 1400 143  1 10 90
 2  8 0 527280 36900  114  0  0  0  306  0 109 382  681 130  1  9 90
 3  8 0 533508 36592    5  0  0  0   16  0 107 365  641 111  1  4 95
 3  9 0 534364 35840  167  0  0  0  138  0 105 375  919 109  1  9 90
^C
smtp3# iostat -c 100
 tty          ad0            cpu
 tin tout  KB/t tps  MB/s  us ni sy in id
   0    2  0.00   0  0.00   3  0  4  3 90
   0   43 14.60 119  1.70   0  0  0  0 100
   0   43 14.13 155  2.14   0  0  0  1 99
   0   43  4.93 107  0.51   3  0  4  0 93
   0   43  5.01 106  0.52   2  0  3  2 94
   0   42  4.17 102  0.42   2  0  2  2 93
   0   43  3.51  92  0.32   0  0  1  1 98
   0   43  3.42  99  0.33   0  0  1  1 98
   0   43  4.87 105  0.50   1  0  1  0 98
^C

Memory statistics by type                          Type  Kern
Type        InUse  MemUse HighUse   Limit  Requests Limit Limit Size(s)
atkbddev        2      1K      1K 102400K         2     0     0 32
uc_devlist      0      0K      2K 102400K        12     0     0 16,1K
nexusdev        3      1K      1K 102400K         3     0     0 16
memdesc         1      4K      4K 102400K         1     0     0 4K
mbuf            1     96K     96K 102400K         1     0     0 128K
isadev          8      1K      1K 102400K         8     0     0 64
ZONE           14      2K      2K 102400K        14     0     0 128
VM pgdata       1     64K     64K 102400K         1     0     0 64K
devbuf         85    185K    185K 102400K       141     0     0 16,32,64,128,256,512,1K,2K,4K,16K
UFS mount      15     37K     37K 102400K        15     0     0 512,2K,4K,8K
UFS ihash       1    256K    256K 102400K         1     0     0 256K
FFS node    63819  15955K  15955K 102400K  97174709     0     0 256
dirrem         15      1K     18K 102400K  30060178     0     0 32
mkdir           0      0K      8K 102400K       718     0     0 32
diradd          0      0K     41K 102400K  30360613     0     0 32
freefile        0      0K     41K 102400K  19194217     0     0 32
freeblks        2      1K    163K 102400K  19194170     0     0 128
freefrag        0      0K     13K 102400K   4389505     0     0 32
allocindir      0      0K   1051K 102400K   4645678     0     0 64
indirdep        1      1K     81K 102400K    173299     0     0 32,16K
allocdirect     2      1K     70K 102400K  27923527     0     0 64
bmsafemap       2      1K      2K 102400K  20570860     0     0 32
newblk          1      1K      1K 102400K  32569206     0     0 32,256
inodedep       18    259K    480K 102400K  50515208     0     0 128,256K
pagedep        15     33K     46K 102400K  30234990     0     0 64,32K
p1003.1b        1      1K      1K 102400K         1     0     0 16
syncache        1      8K      8K 102400K         1     0     0 8K
tseg_qent       0      0K      1K 102400K    213633     0     0 32
IpFw/IpAcct     5      1K      1K 102400K         5     0     0 64
in_multi        2      1K      1K 102400K         2     0     0 32
routetbl       68     10K    490K 102400K   8649146     0     0 16,32,64,128,256
faith           1      1K      1K 102400K         1     0     0 256
ether_multi     7      1K      1K 102400K         7     0     0 16,32,64
ifaddr         16      5K      5K 102400K        16     0     0 32,64,256,2K
BPF             5      1K     65K 102400K        56     0     0 32,128,32K
vnodes         17      4K      4K 102400K       209     0     0 16,32,64,128,256
mount           6      3K      3K 102400K         8     0     0 16,128,512
cluster_save buffer 0   0K      1K 102400K    788517     0     0 32,64
vfscache    66731   4683K   4990K 102400K 115446494     0     0 64,128,256,512K
BIO buffer      6     12K   1198K 102400K      2565     0     0 512,2K
pcb            25      5K     18K 102400K  47486348     0     0 16,32,64,2K
soname          4      1K     12K 102400K 404821840     0     0 16,128
lockf           2      1K     49K 102400K 759540302     0     0 64
ptys            5      3K      3K 102400K         5     0     0 512
ttys          567     73K     73K 102400K      2439     0     0 128,256
atexit          1      1K      1K 102400K         1     0     0 16
zombie          0      0K      7K 102400K   8677258     0     0 128
shm             1     12K     12K 102400K         1     0     0 16K
proc-args      35      2K     69K 102400K 100222163     0     0 16,32,64,128,256
kqueue         12     12K    786K 102400K  43631105     0     0 256,1K
sigio           1      1K      1K 102400K         1     0     0 32
file           91      6K    257K 102400K 318106792     0     0 64
file desc      41     11K    203K 102400K   8677309     0     0 256
dev_t         715     90K     90K 102400K       715     0     0 128
timecounter    10      2K      2K 102400K        10     0     0 128
kld             4      1K      1K 102400K        36     0     0 16,32,128
sem             3      6K      6K 102400K         3     0     0 1K,4K
AR driver       1      1K      3K 102400K         3     0     0 64,512,2K
AD driver       2      2K      2K 102400K 218055758     0     0 64,1K
msg             4     25K     25K 102400K         4     0     0 512,4K,16K
rman           50      3K      3K 102400K       400     0     0 16,64
ioctlops        0      0K      1K 102400K        12     0     0 512,1K
taskqueue       2      1K      1K 102400K         2     0     0 32
SWAP            2   1097K   1097K 102400K         2     0     0 32,512K
eventhandler   11      1K      1K 102400K        11     0     0 32,64
bus           424     39K     40K 102400K       730     0     0 16,32,64,128,256,512,1K,2K,4K
sysctl          0      0K      1K 102400K     10415     0     0 16,32
uidinfo         5      2K      2K 102400K      8114     0     0 32,1K
cred           30      4K    100K 102400K   2963736     0     0 128
subproc       101      9K     79K 102400K  17364833     0     0 32,64,256
proc            2      8K      8K 102400K         2     0     0 4K
session        22      2K     48K 102400K   2872588     0     0 64
pgrp           26      1K     24K 102400K   2873228     0     0 32
ATA generic     2      1K      1K 102400K         2     0     0 16,512
temp          166    117K    161K 102400K    294963     0     0 16,32,64,128,256,512,1K,4K,16K,128K

Memory Totals:  In Use    Free    Requests
                23137K   3624K  2427718869
--------------------------------------------------------------------
Mike Tancsa, tel +1 519 651 3400
Sentex Communications, mike@sentex.net
Providing Internet since 1994    www.sentex.net
Cambridge, Ontario Canada        www.sentex.net/mike

From owner-freebsd-performance@FreeBSD.ORG Tue Feb 17 17:36:02 2004
From: "TEC Meganet" <tec@mega.net.br>
To: freebsd-performance@freebsd.org
Date: Wed, 18 Feb 2004 01:35:56 +0000
Message-ID: <20040217.eDA.94486400@admin.mega.net.br>
Subject: Re: Tuning for large outbound smtp queues

I have a server with a similar load but no problems at all. What calls attention is that you use an IDE disk on a server... and your swap is in use, which means you have too little RAM. BTW, without your sysctl hw output, kernel compile options, and eventually other sysctl settings, it's hard to give you hints.

General hint: why notify the virus sender and spammer? Do you believe they care, or even exist? You are creating part of your own trouble, since most of these notification messages come back to you as delivery errors. :)

JM

Mike Tancsa (mike@sentex.net) writes:
>
> We have separate inbound and outbound smtp servers and I am looking to
> better tune the boxes (2 of them) that spool my network's outbound
> mail. As a result of the zillion viruses and n*zillion spams bouncing back
> to networks that dont accept mail, I am seeing some very large queues for
> sendmail. Apart from
>
> define(`confTO_IDENT', 0s)
> define(`QUEUE_DIR', `/var/spool/mqueue/q*')dnl
>
> where there are 60 q directories I havent really tuned sendmail nor the
> OS. However, as the volume grows, the box becomes quite sluggish. Is it
> just a matter of throwing more hardware at the issue, or can I better tweak
> RELENG_4 and sendmail to deal with massive (80,000+) queues ? Allocating
> more memory to caching the filesystem for example ?
> > Here is a quick snapshot. > > smtp3# vmstat -c 100 > procs memory page disk faults cpu > r b w avm fre flt re pi po fr sr ad0 in sy cs us sy id > 3 10 0 390168 36660 491 0 0 0 717 98 0 504 1749 430 3 7 90 > 1 13 0 390964 35604 204 0 0 0 225 0 107 407 1155 166 2 10 87 > 3 8 0 391672 37112 543 0 0 0 1359 0 110 470 1862 163 1 13 85 > 1 11 0 461436 37316 149 0 0 0 285 0 105 422 1409 190 0 9 91 > 2 12 0 459796 37700 247 0 0 0 357 0 104 418 1620 177 2 9 89 > 3 10 0 460924 36612 249 0 0 0 201 0 105 457 2017 185 1 10 88 > 2 12 0 486584 36888 39 0 0 0 201 0 106 402 1156 164 1 7 92 > 3 9 0 484632 37280 195 0 0 0 355 0 110 445 1426 184 1 8 90 > 2 11 0 503260 37628 23 0 0 0 172 0 105 401 706 127 0 7 93 > 4 7 0 503260 37372 58 0 0 0 30 0 99 384 931 107 1 7 91 > 2 10 0 529064 36480 202 0 0 0 176 0 110 429 1400 143 1 10 90 > 2 8 0 527280 36900 114 0 0 0 306 0 109 382 681 130 1 9 90 > 3 8 0 533508 36592 5 0 0 0 16 0 107 365 641 111 1 4 95 > 3 9 0 534364 35840 167 0 0 0 138 0 105 375 919 109 1 9 90 > ^C > smtp3# iostat -c 100 > tty ad0 cpu > tin tout KB/t tps MB/s us ni sy in id > 0 2 0.00 0 0.00 3 0 4 3 90 > 0 43 14.60 119 1.70 0 0 0 0 100 > 0 43 14.13 155 2.14 0 0 0 1 99 > 0 43 4.93 107 0.51 3 0 4 0 93 > 0 43 5.01 106 0.52 2 0 3 2 94 > 0 42 4.17 102 0.42 2 0 2 2 93 > 0 43 3.51 92 0.32 0 0 1 1 98 > 0 43 3.42 99 0.33 0 0 1 1 98 > 0 43 4.87 105 0.50 1 0 1 0 98 > ^C > > > > Memory statistics by type Type Kern > Type InUse MemUse HighUse Limit Requests Limit Limit Size(s) > atkbddev 2 1K 1K102400K 2 0 0 32 > uc_devlist 0 0K 2K102400K 12 0 0 16,1K > nexusdev 3 1K 1K102400K 3 0 0 16 > memdesc 1 4K 4K102400K 1 0 0 4K > mbuf 1 96K 96K102400K 1 0 0 128K > isadev 8 1K 1K102400K 8 0 0 64 > ZONE 14 2K 2K102400K 14 0 0 128 > VM pgdata 1 64K 64K102400K 1 0 0 64K > devbuf 85 185K 185K102400K 141 0 0 > 16,32,64,128,256,512,1K,2K,4K,16K > UFS mount 15 37K 37K102400K 15 0 0 512,2K,4K,8K > UFS ihash 1 256K 256K102400K 1 0 0 256K > FFS node 63819 15955K 15955K102400K 97174709 0 0 256 > dirrem 15 
1K 18K102400K 30060178 0 0 32 > mkdir 0 0K 8K102400K 718 0 0 32 > diradd 0 0K 41K102400K 30360613 0 0 32 > freefile 0 0K 41K102400K 19194217 0 0 32 > freeblks 2 1K 163K102400K 19194170 0 0 128 > freefrag 0 0K 13K102400K 4389505 0 0 32 > allocindir 0 0K 1051K102400K 4645678 0 0 64 > indirdep 1 1K 81K102400K 173299 0 0 32,16K > allocdirect 2 1K 70K102400K 27923527 0 0 64 > bmsafemap 2 1K 2K102400K 20570860 0 0 32 > newblk 1 1K 1K102400K 32569206 0 0 32,256 > inodedep 18 259K 480K102400K 50515208 0 0 128,256K > pagedep 15 33K 46K102400K 30234990 0 0 64,32K > p1003.1b 1 1K 1K102400K 1 0 0 16 > syncache 1 8K 8K102400K 1 0 0 8K > tseg_qent 0 0K 1K102400K 213633 0 0 32 > IpFw/IpAcct 5 1K 1K102400K 5 0 0 64 > in_multi 2 1K 1K102400K 2 0 0 32 > routetbl 68 10K 490K102400K 8649146 0 0 > 16,32,64,128,256 > faith 1 1K 1K102400K 1 0 0 256 > ether_multi 7 1K 1K102400K 7 0 0 16,32,64 > ifaddr 16 5K 5K102400K 16 0 0 32,64,256,2K > BPF 5 1K 65K102400K 56 0 0 32,128,32K > vnodes 17 4K 4K102400K 209 0 0 > 16,32,64,128,256 > mount 6 3K 3K102400K 8 0 0 16,128,512 > cluster_save buffer 0 0K 1K102400K 788517 0 0 32,64 > vfscache 66731 4683K 4990K102400K115446494 0 0 64,128,256,512K > BIO buffer 6 12K 1198K102400K 2565 0 0 512,2K > pcb 25 5K 18K102400K 47486348 0 0 16,32,64,2K > soname 4 1K 12K102400K404821840 0 0 16,128 > lockf 2 1K 49K102400K759540302 0 0 64 > ptys 5 3K 3K102400K 5 0 0 512 > ttys 567 73K 73K102400K 2439 0 0 128,256 > atexit 1 1K 1K102400K 1 0 0 16 > zombie 0 0K 7K102400K 8677258 0 0 128 > shm 1 12K 12K102400K 1 0 0 16K > proc-args 35 2K 69K102400K100222163 0 0 > 16,32,64,128,256 > kqueue 12 12K 786K102400K 43631105 0 0 256,1K > sigio 1 1K 1K102400K 1 0 0 32 > file 91 6K 257K102400K318106792 0 0 64 > file desc 41 11K 203K102400K 8677309 0 0 256 > dev_t 715 90K 90K102400K 715 0 0 128 > timecounter 10 2K 2K102400K 10 0 0 128 > kld 4 1K 1K102400K 36 0 0 16,32,128 > sem 3 6K 6K102400K 3 0 0 1K,4K > AR driver 1 1K 3K102400K 3 0 0 64,512,2K > AD driver 2 2K 2K102400K218055758 
0 0 64,1K > msg 4 25K 25K102400K 4 0 0 512,4K,16K > rman 50 3K 3K102400K 400 0 0 16,64 > ioctlops 0 0K 1K102400K 12 0 0 512,1K > taskqueue 2 1K 1K102400K 2 0 0 32 > SWAP 2 1097K 1097K102400K 2 0 0 32,512K > eventhandler 11 1K 1K102400K 11 0 0 32,64 > bus 424 39K 40K102400K 730 0 0 > 16,32,64,128,256,512,1K,2K,4K > sysctl 0 0K 1K102400K 10415 0 0 16,32 > uidinfo 5 2K 2K102400K 8114 0 0 32,1K > cred 30 4K 100K102400K 2963736 0 0 128 > subproc 101 9K 79K102400K 17364833 0 0 32,64,256 > proc 2 8K 8K102400K 2 0 0 4K > session 22 2K 48K102400K 2872588 0 0 64 > pgrp 26 1K 24K102400K 2873228 0 0 32 > ATA generic 2 1K 1K102400K 2 0 0 16,512 > temp 166 117K 161K102400K 294963 0 0 > 16,32,64,128,256,512,1K,4K,16K,128K > > Memory Totals: In Use Free Requests > 23137K 3624K 2427718869 > -------------------------------------------------------------------- > Mike Tancsa, tel +1 519 651 3400 > Sentex Communications, mike@sentex.net > Providing Internet since 1994 www.sentex.net > Cambridge, Ontario Canada www.sentex.net/mike > > _______________________________________________ > freebsd-performance@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-performance > To unsubscribe, send any mail to "freebsd-performance-unsubscribe@freebsd.org" > -- WIPNET Telecom Ltda. 
GPG Key http://wip.mega.net.br/tec.asc { ABCE D455 FC29 818A B6E6 4D4C 59D9 77EE 41B0 EC54 }

From owner-freebsd-performance@FreeBSD.ORG Tue Feb 17 17:48:03 2004
From: Mike Tancsa <mike@sentex.net>
To: tec@mega.net.br, freebsd-performance@freebsd.org
Date: Tue, 17 Feb 2004 20:48:27 -0500
In-Reply-To: <20040217.eDA.94486400@admin.mega.net.br>
Subject: Re: Tuning for large outbound smtp queues

At 08:35 PM 17/02/2004, TEC Meganet wrote:
>I have a server with similar load but do not have problems at all
>what calls attention is that you use ide disk on a server ...
>and your swap is in use what means your ram memory is too low
>btw without sysctl hw and kernel compile options and eventually sysctl options
>it's hard to give you hints

Sorry, here it is.

hw.machine: i386
hw.model: Intel(R) Pentium(R) 4 CPU 1.80GHz
hw.ncpu: 1
hw.byteorder: 1234
hw.physmem: 1062875136
hw.usermem: 953839616
hw.pagesize: 4096
hw.floatingpoint: 1
hw.machine_arch: i386
hw.ata.ata_dma: 1
hw.ata.wc: 1
hw.ata.tags: 0
hw.fxp_rnr: 0
hw.fxp_noflow: 0
hw.instruction_sse: 0
hw.availpages: 259324
kern.ostype: FreeBSD
kern.osrelease: 4.9-STABLE
kern.osrevision: 199506
kern.version: FreeBSD 4.9-STABLE #0: Wed Jan 21 09:27:16 EST 2004 mdtancsa@smtp3.sentex.ca:/usr/obj/usr/src/sys/smtp
kern.maxvnodes: 69954
kern.maxproc: 6164
kern.maxfiles: 16384
kern.argmax: 65536
kern.securelevel: -1
kern.hostname: smtp3.sentex.ca
kern.hostid: 0
kern.clockrate: { hz = 100, tick = 10000, tickadj = 5, profhz = 1024, stathz = 128 }
kern.posix1version: 199309
kern.ngroups: 16
kern.job_control: 1
kern.saved_ids: 0
kern.boottime: { sec = 1074697898, usec = 755565 } Wed Jan 21 10:11:38 2004
kern.domainname:
kern.osreldate: 490101
kern.bootfile: /kernel
kern.maxfilesperproc: 14745
kern.maxprocperuid: 5547
kern.dumpdev: { major = 116, minor = 0x20001 }
kern.ipc.maxsockbuf: 262144
kern.ipc.sockbuf_waste_factor: 8
kern.ipc.somaxconn: 128
kern.ipc.max_linkhdr: 16
kern.ipc.max_protohdr: 40
kern.ipc.max_hdr: 56
kern.ipc.max_datalen: 156
kern.ipc.nmbclusters: 65536
kern.ipc.msgmax: 16384
kern.ipc.msgmni: 40
kern.ipc.msgmnb: 2048
kern.ipc.msgtql: 40
kern.ipc.msgssz: 8
kern.ipc.msgseg: 2048
kern.ipc.semmap: 30
kern.ipc.semmni: 10
kern.ipc.semmns: 60
kern.ipc.semmnu: 30
kern.ipc.semmsl: 60
kern.ipc.semopm: 100
kern.ipc.semume: 10
kern.ipc.semusz: 92
kern.ipc.semvmx: 32767
kern.ipc.semaem: 16384
kern.ipc.shmmax: 33554432
kern.ipc.shmmin: 1
kern.ipc.shmmni: 192
kern.ipc.shmseg: 128
kern.ipc.shmall: 8192
kern.ipc.shm_use_phys: 0
kern.ipc.shm_allow_removed: 0
kern.ipc.mbuf_wait: 32
kern.ipc.mbtypes:
541 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0
kern.ipc.nmbufs: 262144
kern.ipc.m_clreflimithits: 0
kern.ipc.mcl_pool_max: 0
kern.ipc.mcl_pool_now: 0
kern.ipc.maxsockets: 65536
kern.dummy: 0
kern.ps_strings: 3217031152
kern.usrstack: 3217031168
kern.logsigexit: 1
kern.fallback_elf_brand: -1
kern.init_path: /sbin/init:/sbin/oinit:/sbin/init.bak:/stand/sysinstall
kern.module_path: /;/boot/;/modules/
kern.acct_suspend: 2
kern.acct_resume: 4
kern.acct_chkfreq: 15
kern.cp_time: 8385505 1722 12804002 9119439 273191504
kern.timecounter.method: 0
kern.timecounter.hardware: TSC
kern.openfiles: 152
kern.kq_calloutmax: 4096
kern.ps_arg_cache_limit: 256
kern.ps_argsopen: 1
kern.randompid: 0
kern.maxusers: 384
kern.ps_showallprocs: 1
kern.shutdown.poweroff_delay: 5000
kern.shutdown.kproc_shutdown_wait: 60
kern.sugid_coredump: 0
kern.coredump: 1
kern.corefile: %N.core
kern.quantum: 100000
kern.ccpu: 1948
kern.fscale: 2048
kern.devstat.numdevs: 1
kern.devstat.generation: 1
kern.devstat.version: 4
kern.disks: ad0
kern.log_wakeups_per_second: 5
kern.log_console_output: 1
kern.msgbuf_clear: 0
kern.nselcoll: 0
kern.consmute: 0
kern.filedelay: 30
kern.dirdelay: 29
kern.metadelay: 28
kern.minvnodes: 17488
kern.chroot_allow_open_directories: 1

>general hint: why notifying the virus sender and spammer?

I am not. This is the infected machine sending out viruses to users that don't exist (e.g. spam addresses in their mailbox). The From: address is forged, and the bounce comes back to my user, who no longer exists... double bounce. That, and the numerous joe jobs that happen to customer domains :(

>you believe they
>care or even exist? you are creating part of your own troubles since most of
>this notify msgs are coming back to you as delivery error:)
>
>JM
>
>Mike Tancsa (mike@sentex.net) writes:
> >
> > We have separate inbound and outbound smtp servers and I am looking to
> > better tune the boxes (2 of them) that spool my network's outbound
> > mail.
> > [remainder of the quoted message and statistics omitted]
freebsd-performance@freebsd.org mailing list > > http://lists.freebsd.org/mailman/listinfo/freebsd-performance > > To unsubscribe, send any mail to > "freebsd-performance-unsubscribe@freebsd.org" > > > >-- >WIPNET Telecom Ltda. > >GPG Key http://wip.mega.net.br/tec.asc >{ ABCE D455 FC29 818A B6E6 4D4C 59D9 77EE 41B0 EC54 } > >_______________________________________________ >freebsd-performance@freebsd.org mailing list >http://lists.freebsd.org/mailman/listinfo/freebsd-performance >To unsubscribe, send any mail to "freebsd-performance-unsubscribe@freebsd.org" From owner-freebsd-performance@FreeBSD.ORG Tue Feb 17 22:27:33 2004 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 517A616A4CE for ; Tue, 17 Feb 2004 22:27:33 -0800 (PST) Received: from sakura.ninth-nine.com (sakura.ninth-nine.com [219.127.74.120]) by mx1.FreeBSD.org (Postfix) with ESMTP id DF4D343D1D for ; Tue, 17 Feb 2004 22:27:32 -0800 (PST) (envelope-from nork@ninth-nine.com) Received: from melfina.ninth-nine.com ([IPv6:2002:d312:f91e::1]) (authenticated bits=0) by sakura.ninth-nine.com (8.12.10/8.12.10/NinthNine) with ESMTP id i1I6RUXh040119 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Wed, 18 Feb 2004 15:27:31 +0900 (JST) (envelope-from nork@ninth-nine.com) Date: Wed, 18 Feb 2004 15:27:31 +0900 (JST) Message-Id: <200402180627.i1I6RUXh040119@sakura.ninth-nine.com> From: Norikatsu Shigemura To: Mike Tancsa In-Reply-To: <6.0.3.0.0.20040217200510.1052ca50@209.112.4.2> References: <6.0.3.0.0.20040217200510.1052ca50@209.112.4.2> X-Mailer: Sylpheed version 0.9.9 (GTK+ 1.2.10; i386-portbld-freebsd5.2) Mime-Version: 1.0 Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: 7bit cc: freebsd-performance@freebsd.org Subject: Re: Tuning for large outbound smtp queues X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.1 Precedence: list 
List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 18 Feb 2004 06:27:33 -0000

On Tue, 17 Feb 2004 20:15:04 -0500 Mike Tancsa wrote:
> where there are 60 q directories I haven't really tuned sendmail or the
> OS. However, as the volume grows, the box becomes quite sluggish. Is it
> just a matter of throwing more hardware at the issue, or can I better tweak
> RELENG_4 and sendmail to deal with massive (80,000+) queues? Allocating
> more memory to caching the filesystem, for example?

Hmm... Try the following setting:
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
define(`confMAX_QUEUE_RUN_SIZE', `1')dnl
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
A value of 1 is not reasonable in practice, so please tune this value for your load.

From owner-freebsd-performance@FreeBSD.ORG Wed Feb 18 23:00:19 2004 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id EF91E16A4CE for ; Wed, 18 Feb 2004 23:00:18 -0800 (PST) Received: from relay.pair.com (relay.pair.com [209.68.1.20]) by mx1.FreeBSD.org (Postfix) with SMTP id B79E343D1F for ; Wed, 18 Feb 2004 23:00:18 -0800 (PST) (envelope-from silby@silby.com) Received: (qmail 10371 invoked from network); 19 Feb 2004 07:00:17 -0000 Received: from niwun.pair.com (HELO localhost) (209.68.2.70) by relay.pair.com with SMTP; 19 Feb 2004 07:00:17 -0000 X-pair-Authenticated: 209.68.2.70 Date: Thu, 19 Feb 2004 01:00:16 -0600 (CST) From: Mike Silbersack To: Mike Tancsa Message-ID: <20040219005922.X28073@odysseus.silby.com> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII cc: freebsd-performance@freebsd.org Subject: Re: Tuning for large outbound smtp queues X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.1 Precedence: list List-Id: Performance/tuning List-Unsubscribe: , List-Archive: 
List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 19 Feb 2004 07:00:19 -0000 Mike, are you running UFS_DIRHASH on the machine in question? If not, that should help performance with large directories *greatly*. I know it's on by default in 5.x, but I'm not sure about 4.x. Mike "Silby" Silbersack From owner-freebsd-performance@FreeBSD.ORG Thu Feb 19 05:33:36 2004 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 3FBA016A4CE; Thu, 19 Feb 2004 05:33:36 -0800 (PST) Received: from relay.kiev.sovam.com (relay.kiev.sovam.com [212.109.32.5]) by mx1.FreeBSD.org (Postfix) with ESMTP id C562343D1F; Thu, 19 Feb 2004 05:33:35 -0800 (PST) (envelope-from dimitry@al.org.ua) Received: from [212.109.32.116] (helo=svitonline.com) by relay.kiev.sovam.com with esmtp (Exim 4.30) id 1AtoJ4-000HPn-1E; Thu, 19 Feb 2004 15:33:34 +0200 From: Dmitry Alyabyev To: freebsd-fs@freebsd.org Date: Thu, 19 Feb 2004 15:33:33 +0200 User-Agent: KMail/1.6 References: <200402181729.06202@misha-mx.virtual-estates.net> In-Reply-To: <200402181729.06202@misha-mx.virtual-estates.net> X-NCC-RegID: ua.svitonline MIME-Version: 1.0 Content-Disposition: inline Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit Message-Id: <200402191533.33495.dimitry@al.org.ua> X-Scanner-Signature: e35cef5ed21d82303c208435479e4a86 X-DrWeb-checked: yes cc: performance@FreeBSD.org Subject: Re: strange performance dip shown by iozone X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.1 Precedence: list Reply-To: dimitry@al.org.ua List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 19 Feb 2004 13:33:36 -0000 On Thursday 19 February 2004 00:29, mi+mx@aldan.algebra.com wrote: > Hello! 
> >
> I'm trying to tune the amrd-based RAID5 and have made several iozone
> runs on the array and -- for comparison -- on the single disk connected
> to the Serial ATA controller directly.
>
> The RAID-based FS was newfs-ed with ``-b 65536'', as it is intended to
> store very large files. The single-disk FS was newfs-ed with defaults.
>
> No softupdates were enabled on either, since those seem to degrade iozone
> results slightly (iozone reads/writes a single file anyway).
>
> The filesystems displayed different performance (reads are better with
> RAID, writes -- with the single disk), but both have shown a notable dip
> in writing (and re-writing) speed when iozone used the record lengths
> of 128 and 256. Can someone explain that? Is that a known fact? How can
> that be avoided?
>
> The machine is an amd64 running a fresh -current. The disks are 200 GB
> SATAs. The RAID5 consists of 6 of them.

Which stripe size did you use for the RAID5?

--
Dimitry

From owner-freebsd-performance@FreeBSD.ORG Thu Feb 19 06:09:50 2004 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 6A6A116A4CE for ; Thu, 19 Feb 2004 06:09:50 -0800 (PST) Received: from smtp3b.sentex.ca (smtp3b.sentex.ca [205.211.164.50]) by mx1.FreeBSD.org (Postfix) with ESMTP id 29F9443D1F for ; Thu, 19 Feb 2004 06:09:50 -0800 (PST) (envelope-from mike@sentex.net) Received: from lava.sentex.ca (pyroxene.sentex.ca [199.212.134.18]) by smtp3b.sentex.ca (8.12.10/8.12.10) with ESMTP id i1JE9h3Z074092; Thu, 19 Feb 2004 09:09:43 -0500 (EST) (envelope-from mike@sentex.net) Received: from simian.sentex.net ([192.168.43.27]) by lava.sentex.ca (8.12.9p2/8.12.9) with ESMTP id i1JE9kWZ077700; Thu, 19 Feb 2004 09:09:46 -0500 (EST) (envelope-from mike@sentex.net) Message-Id: <6.0.3.0.0.20040219090819.07d89c60@209.112.4.2> X-Sender: mdtpop@209.112.4.2 (Unverified) X-Mailer: QUALCOMM Windows Eudora Version 6.0.3.0 Date: 
Thu, 19 Feb 2004 09:10:12 -0500 To: Mike Silbersack From: Mike Tancsa In-Reply-To: <20040219005922.X28073@odysseus.silby.com> References: <20040219005922.X28073@odysseus.silby.com> Mime-Version: 1.0 Content-Type: text/plain; charset="us-ascii"; format=flowed X-Virus-Scanned: by amavisd-new cc: freebsd-performance@freebsd.org Subject: Re: Tuning for large outbound smtp queues X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.1 Precedence: list List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 19 Feb 2004 14:09:50 -0000

Thanks! Actually, I thought I was, as it is in GENERIC:

smtp3# grep -i hash /usr/src/sys/i386/conf/GENERIC
options         UFS_DIRHASH             #Improve performance on big directories
smtp3#

But it was not in my kernel. Recompiling now!

        ---Mike

At 02:00 AM 19/02/2004, Mike Silbersack wrote:
>Mike, are you running UFS_DIRHASH on the machine in question? If not,
>that should help performance with large directories *greatly*. I know
>it's on by default in 5.x, but I'm not sure about 4.x.
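[To make the UFS_DIRHASH suggestion above concrete, here is a toy Python model of the lookup cost it removes. This is purely illustrative, not the kernel code; the function names and the queue-file naming pattern are invented for the sketch. Without a directory hash, finding one name in an 80,000-entry queue directory means scanning entries linearly; with a dirhash-style index it is a single hash probe.]

```python
# Toy model of directory lookup cost, not FreeBSD kernel code.
# linear_lookup/hashed_lookup are invented names for this sketch.

def linear_lookup(entries, name):
    """Unhashed UFS-style lookup: scan entries one by one, O(n)."""
    comparisons = 0
    for entry in entries:
        comparisons += 1
        if entry == name:
            break
    return comparisons

def hashed_lookup(index, name):
    """Dirhash-style lookup: one hash probe, O(1) on average."""
    _found = name in index  # constant-time membership test
    return 1

# A queue directory with 80,000 files, like the sendmail case above.
entries = [f"qf{i:08d}" for i in range(80000)]
index = set(entries)  # stands in for the in-kernel hash

# Looking up the last file touches every entry without the hash...
assert linear_lookup(entries, "qf00079999") == 80000
# ...but costs a single probe with it.
assert hashed_lookup(index, "qf00079999") == 1
```

[The real dirhash is built lazily per directory and bounded by a sysctl-tunable memory limit, but the asymptotic difference is the point here.]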
> >Mike "Silby" Silbersack From owner-freebsd-performance@FreeBSD.ORG Thu Feb 19 07:39:29 2004 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id BF6F616A4CE for ; Thu, 19 Feb 2004 07:39:29 -0800 (PST) Received: from smtp3b.sentex.ca (smtp3b.sentex.ca [205.211.164.50]) by mx1.FreeBSD.org (Postfix) with ESMTP id 7DE2D43D2D for ; Thu, 19 Feb 2004 07:39:29 -0800 (PST) (envelope-from mike@sentex.net) Received: from lava.sentex.ca (pyroxene.sentex.ca [199.212.134.18]) by smtp3b.sentex.ca (8.12.10/8.12.10) with ESMTP id i1JFdK3Z003406; Thu, 19 Feb 2004 10:39:24 -0500 (EST) (envelope-from mike@sentex.net) Received: from simian.sentex.net ([192.168.43.27]) by lava.sentex.ca (8.12.9p2/8.12.9) with ESMTP id i1JFdNWZ078201; Thu, 19 Feb 2004 10:39:24 -0500 (EST) (envelope-from mike@sentex.net) Message-Id: <6.0.3.0.0.20040219103403.0919ba80@209.112.4.2> X-Sender: mdtpop@209.112.4.2 (Unverified) X-Mailer: QUALCOMM Windows Eudora Version 6.0.3.0 Date: Thu, 19 Feb 2004 10:39:02 -0500 To: Mike Silbersack From: Mike Tancsa In-Reply-To: <6.0.3.0.0.20040219090819.07d89c60@209.112.4.2> References: <20040219005922.X28073@odysseus.silby.com> <6.0.3.0.0.20040219090819.07d89c60@209.112.4.2> Mime-Version: 1.0 Content-Type: text/plain; charset="us-ascii"; format=flowed X-Virus-Scanned: by amavisd-new cc: freebsd-performance@freebsd.org Subject: Re: Tuning for large outbound smtp queues X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.1 Precedence: list List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 19 Feb 2004 15:39:29 -0000 Actually, do you still need to newfs the partition afterwards to take advantage of UFS_DIRHASH ? I seem to recall a long time ago this was the case. ---Mike At 09:10 AM 19/02/2004, Mike Tancsa wrote: >Thanks! 
Actually I thought I was as that is in GENERIC > > >smtp3# grep -i hash /usr/src/sys/i386/conf/GENERIC >options UFS_DIRHASH #Improve performance on big >directories >smtp3# > >But it was not in my kernel. Recompiling now! > > ---Mike > >At 02:00 AM 19/02/2004, Mike Silbersack wrote: > >>Mike, are you running UFS_DIRHASH on the machine in question? If not, >>that should help performance with large directories *greatly*. I know >>it's on by default in 5.x, but I'm not sure about 4.x. >> >>Mike "Silby" Silbersack From owner-freebsd-performance@FreeBSD.ORG Thu Feb 19 07:44:08 2004 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 2CDFD16A4D2 for ; Thu, 19 Feb 2004 07:44:08 -0800 (PST) Received: from relay.pair.com (relay.pair.com [209.68.1.20]) by mx1.FreeBSD.org (Postfix) with SMTP id C041543D1F for ; Thu, 19 Feb 2004 07:44:07 -0800 (PST) (envelope-from silby@silby.com) Received: (qmail 87090 invoked from network); 19 Feb 2004 15:44:06 -0000 Received: from niwun.pair.com (HELO localhost) (209.68.2.70) by relay.pair.com with SMTP; 19 Feb 2004 15:44:06 -0000 X-pair-Authenticated: 209.68.2.70 Date: Thu, 19 Feb 2004 09:44:05 -0600 (CST) From: Mike Silbersack To: Mike Tancsa In-Reply-To: <6.0.3.0.0.20040219103403.0919ba80@209.112.4.2> Message-ID: <20040219094054.X28073@odysseus.silby.com> References: <20040219005922.X28073@odysseus.silby.com> <6.0.3.0.0.20040219103403.0919ba80@209.112.4.2> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII cc: freebsd-performance@freebsd.org Subject: Re: Tuning for large outbound smtp queues X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.1 Precedence: list List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 19 Feb 2004 15:44:08 -0000 On Thu, 19 Feb 2004, Mike Tancsa wrote: > Actually, do you still need to newfs the 
partition afterwards to take > advantage of UFS_DIRHASH ? I seem to recall a long time ago this was the > case. > > ---Mike Nope, UFS_DIRHASH is just another layer of caching the kernel does, nothing more. You're thinking of when the new dirpref code went into the kernel, which changed where new directories were located on the disk. In order to take full advantage of that change, you would have to start from scratch. (That would not be necessary if the disk was mostly empty, however.) Mike "Silby" Silbersack From owner-freebsd-performance@FreeBSD.ORG Wed Feb 18 14:29:48 2004 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id CAB4F16A4CE; Wed, 18 Feb 2004 14:29:48 -0800 (PST) Received: from corbulon.video-collage.com (corbulon.video-collage.com [64.35.99.179]) by mx1.FreeBSD.org (Postfix) with ESMTP id 88DB343D1F; Wed, 18 Feb 2004 14:29:48 -0800 (PST) (envelope-from mi+mx@aldan.algebra.com) Received: from 250-217.customer.cloud9.net (195-11.customer.cloud9.net [168.100.195.11])i1IMTIlP054236 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK); Wed, 18 Feb 2004 17:29:47 -0500 (EST) (envelope-from mi+mx@aldan.algebra.com) Received: from localhost (mteterin@localhost [127.0.0.1]) i1IMT6w3072596; Wed, 18 Feb 2004 17:29:07 -0500 (EST) (envelope-from mi+mx@aldan.algebra.com) From: mi+mx@aldan.algebra.com Organization: Murex N.A. 
To: fs@FreeBSD.org, performance@FreeBSD.org Date: Wed, 18 Feb 2004 17:29:06 -0500 User-Agent: KMail/1.5.4 MIME-Version: 1.0 Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit Content-Disposition: inline Message-Id: <200402181729.06202@misha-mx.virtual-estates.net> X-Scanned-By: MIMEDefang 2.39 X-Mailman-Approved-At: Thu, 19 Feb 2004 23:21:42 -0800 Subject: strange performance dip shown by iozone X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.1 Precedence: list List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 18 Feb 2004 22:29:48 -0000

Hello!

I'm trying to tune the amrd-based RAID5 and have made several iozone
runs on the array and -- for comparison -- on the single disk connected
to the Serial ATA controller directly.

The RAID-based FS was newfs-ed with ``-b 65536'', as it is intended to
store very large files. The single-disk FS was newfs-ed with defaults.

No softupdates were enabled on either, since those seem to degrade iozone
results slightly (iozone reads/writes a single file anyway).

The filesystems displayed different performance (reads are better with
RAID, writes -- with the single disk), but both showed a notable dip
in writing (and re-writing) speed when iozone used record lengths
of 128 and 256. Can someone explain that? Is that a known fact? How can
that be avoided?

The machine is an amd64 running a fresh -current. The disks are 200 GB
SATAs. The RAID5 consists of 6 of them.

Thanks! 
-mi From owner-freebsd-performance@FreeBSD.ORG Thu Feb 19 23:23:39 2004 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id D4DAE16A4CE; Thu, 19 Feb 2004 23:23:39 -0800 (PST) Received: from VARK.homeunix.com (adsl-68-122-0-124.dsl.pltn13.pacbell.net [68.122.0.124]) by mx1.FreeBSD.org (Postfix) with ESMTP id B245F43D1D; Thu, 19 Feb 2004 23:23:39 -0800 (PST) (envelope-from das@FreeBSD.ORG) Received: from VARK.homeunix.com (localhost [127.0.0.1]) by VARK.homeunix.com (8.12.11/8.12.10) with ESMTP id i1K7Mwkl017628; Thu, 19 Feb 2004 23:22:58 -0800 (PST) (envelope-from das@FreeBSD.ORG) Received: (from das@localhost) by VARK.homeunix.com (8.12.11/8.12.10/Submit) id i1K7MwM4017627; Thu, 19 Feb 2004 23:22:58 -0800 (PST) (envelope-from das@FreeBSD.ORG) Date: Thu, 19 Feb 2004 23:22:58 -0800 From: David Schultz To: mi+mx@aldan.algebra.com Message-ID: <20040220072258.GA17579@VARK.homeunix.com> Mail-Followup-To: mi+mx@aldan.algebra.com, fs@FreeBSD.ORG, performance@FreeBSD.ORG References: <200402181729.06202@misha-mx.virtual-estates.net> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <200402181729.06202@misha-mx.virtual-estates.net> cc: performance@FreeBSD.ORG cc: fs@FreeBSD.ORG Subject: Re: strange performance dip shown by iozone X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.1 Precedence: list List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 20 Feb 2004 07:23:40 -0000 On Wed, Feb 18, 2004, mi+mx@aldan.algebra.com wrote: > I'm trying to tune the amrd-based RAID5 and have made several iozone > runs on the array and -- for comparision -- on the single disk connected > to the Serial ATA controller directly. [...] 
> The filesystems displayed different performance (reads are better with > RAID, writes -- with the single disk), but both have shown a notable dip > in writing (and re-writing) speed when iozone used the record lengthes > of 128 and 256. Can someone explain that? Is that a known fact? How can > that be avoided? This is known as the small write problem for RAID 5. Basically, any write smaller than the RAID 5 stripe size is performed using an expensive read-modify-write operation so that the parity can be recomputed. The solution is to not do that. If you expect lots of small random writes and you can't do anything about it, you need to either use RAID 1 instead of RAID 5, or use a log-structured filesystem, such as NetBSD's LFS. From owner-freebsd-performance@FreeBSD.ORG Fri Feb 20 08:47:58 2004 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 6EAB116A4CE; Fri, 20 Feb 2004 08:47:58 -0800 (PST) Received: from corbulon.video-collage.com (corbulon.video-collage.com [64.35.99.179]) by mx1.FreeBSD.org (Postfix) with ESMTP id 125D543D2F; Fri, 20 Feb 2004 08:47:58 -0800 (PST) (envelope-from mi+mx@aldan.algebra.com) Received: from 250-217.customer.cloud9.net (195-11.customer.cloud9.net [168.100.195.11])i1KGlN3R085326 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK); Fri, 20 Feb 2004 11:47:56 -0500 (EST) (envelope-from mi+mx@aldan.algebra.com) Received: from localhost (mteterin@localhost [127.0.0.1]) i1KGl4w3084778; Fri, 20 Feb 2004 11:47:05 -0500 (EST) (envelope-from mi+mx@aldan.algebra.com) From: mi+mx@aldan.algebra.com Organization: Murex N.A. 
To: David Schultz Date: Fri, 20 Feb 2004 11:47:03 -0500 User-Agent: KMail/1.5.4 References: <200402181729.06202@misha-mx.virtual-estates.net> <20040220072258.GA17579@VARK.homeunix.com> In-Reply-To: <20040220072258.GA17579@VARK.homeunix.com> MIME-Version: 1.0 Content-Type: text/plain; charset="iso-8859-1" Content-Transfer-Encoding: 7bit Content-Disposition: inline Message-Id: <200402201147.03606@misha-mx.virtual-estates.net> X-Scanned-By: MIMEDefang 2.39 cc: performance@FreeBSD.ORG cc: fs@FreeBSD.ORG Subject: Re: strange performance dip shown by iozone X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.1 Precedence: list List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 20 Feb 2004 16:47:58 -0000

On Wed, Feb 18, 2004, mi+mx@aldan.algebra.com wrote:
=> I'm trying to tune the amrd-based RAID5 and have made several iozone
=> runs on the array and -- for comparison -- on the single disk
=> connected to the Serial ATA controller directly.
[...]
=> The filesystems displayed different performance (reads are better
=> with RAID, writes -- with the single disk), but both have shown a
=> notable dip in writing (and re-writing) speed when iozone used the
=> record lengths of 128 and 256. Can someone explain that? Is that a
=> known fact? How can that be avoided?

=This is known as the small write problem for RAID 5. Basically,
=any write smaller than the RAID 5 stripe size is performed using
=an expensive read-modify-write operation so that the parity can be
=recomputed.

I don't think this is a valid explanation. First, there is no
"performance climb" as the record length goes up; there is a "dip". In
the case of RAID5 it starts at a higher level at reclen 4, decreases
slowly to 128, and then drops dramatically at record lengths of 256 and
512, only to climb back up at 1024 and stay up. 
Here is iozone's output to illustrate:

        Size:  reclen:   RAID5:  Single disk:   (Kb/second)
      KB               write    write
   2097152       4     18625    17922
   2097152       8     16794    17004
   2097152      16     15744    23967
   2097152      32     15514    20476
   2097152      64     14693    18245
   2097152     128     12518    17598
   2097152     256      6370    29418
   2097152     512      8596    35997
   2097152    1024     16015    36098
   2097152    2048     15588    35207
   2097152    4096     16016    36832
   2097152    8192     15907    37927
   2097152   16384     15810    32620

I'd dismiss it as an artifact of the controller's heuristics, but the
single-disk results show a similar (if not as pronounced) pattern of
write-performance changes. Could there be something about the FS?

Also, is the RAID5 writing speed supposed to be _so much_ worse than
that of a single disk?

=The solution is to not do that. If you expect lots of small random
=writes and you can't do anything about it, you need to either use
=RAID 1 instead of RAID 5, or use a log-structured filesystem, such as
=NetBSD's LFS.

This partition is intended to store huge backup files (database dumps
mostly). Reading and writing will likely be limited by the
(de)compression speed anyway, so the I/O performance is satisfactory as
it is. I just wanted to have some benchmarks to help us decide what to
get for other uses in the future.

Thanks! 
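[A back-of-the-envelope sketch of the read-modify-write accounting behind the small-write explanation discussed in this thread. This is the textbook RAID-5 model, not what the amr controller actually does (real firmware may pick a cheaper reconstruct-write strategy); the function name is invented for the example. Writing less than a full stripe costs extra reads just to recompute parity.]

```python
# Textbook RAID-5 I/O accounting, simplified to whole stripe units.
# raid5_write_ops is an invented name for this sketch.

def raid5_write_ops(units_written, ndisks):
    """Disk I/Os to service a write covering units_written stripe units
    on an ndisks-wide RAID-5 set (ndisks - 1 data units per stripe)."""
    data_units = ndisks - 1
    if units_written >= data_units:
        # Full-stripe write: parity comes from the new data alone,
        # so every disk is written exactly once (data + parity).
        return ndisks
    # Small write: read old data and old parity, then write new data
    # and new parity -- the read-modify-write penalty.
    return 2 * (units_written + 1)

# A 6-disk RAID5 like the one in this thread (5 data units per stripe).
assert raid5_write_ops(5, 6) == 6   # full-stripe write: 6 writes
assert raid5_write_ops(1, 6) == 4   # the classic 4-I/O small write
```

[So a write that covers the whole stripe costs 6 sequential-friendly writes, while a one-unit write costs 4 I/Os (two of them reads) to move a fifth of the data, which is why partial-stripe record lengths can fall off a cliff.]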
-mi From owner-freebsd-performance@FreeBSD.ORG Fri Feb 20 11:28:01 2004 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id B41C016A4CE for ; Fri, 20 Feb 2004 11:28:01 -0800 (PST) Received: from smtpout.mac.com (A17-250-248-89.apple.com [17.250.248.89]) by mx1.FreeBSD.org (Postfix) with ESMTP id 93D7443D1F for ; Fri, 20 Feb 2004 11:28:01 -0800 (PST) (envelope-from cswiger@mac.com) Received: from mac.com (smtpin07-en2 [10.13.10.152]) by smtpout.mac.com (Xserve/MantshX 2.0) with ESMTP id i1KJRZMq017645; Fri, 20 Feb 2004 11:27:37 -0800 (PST) Received: from [10.1.1.193] ([199.103.21.225]) (authenticated bits=0) by mac.com (Xserve/smtpin07/MantshX 3.0) with ESMTP id i1KJRXVN020336; Fri, 20 Feb 2004 11:27:34 -0800 (PST) In-Reply-To: <200402201147.03606@misha-mx.virtual-estates.net> References: <200402181729.06202@misha-mx.virtual-estates.net> <20040220072258.GA17579@VARK.homeunix.com> <200402201147.03606@misha-mx.virtual-estates.net> Mime-Version: 1.0 (Apple Message framework v612) Content-Type: text/plain; charset=US-ASCII; format=flowed Message-Id: Content-Transfer-Encoding: 7bit From: Charles Swiger Date: Fri, 20 Feb 2004 14:27:33 -0500 To: mi+mx@aldan.algebra.com X-Mailer: Apple Mail (2.612) cc: performance@FreeBSD.ORG Subject: Re: strange performance dip shown by iozone X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.1 Precedence: list List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 20 Feb 2004 19:28:01 -0000

On Feb 20, 2004, at 11:47 AM, mi+mx@aldan.algebra.com wrote:
> Also, is the RAID5 writing speed supposed to be _so much_ worse, than
> that of a single disk?

It's normal for RAID-5 write performance to be slower than that of a
bare drive. RAID setups involve tradeoffs between cost, performance,
and reliability. 
RAID-5 favors low cost and reliability at the expense of performance....

--
-Chuck

From owner-freebsd-performance@FreeBSD.ORG Fri Feb 20 11:35:18 2004 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 7362316A4CE for ; Fri, 20 Feb 2004 11:35:18 -0800 (PST) Received: from mail.webjockey.net (mail.webjockey.net [208.141.46.3]) by mx1.FreeBSD.org (Postfix) with ESMTP id 2B7DF43D1F for ; Fri, 20 Feb 2004 11:35:18 -0800 (PST) (envelope-from gary@outloud.org) Received: from uranium-xkzwf96.outloud.org (69-160-74-173.frdrmd.adelphia.net [69.160.74.173]) by mail.webjockey.net (8.12.10/8.12.8) with ESMTP id i1KJZb8O053762; Fri, 20 Feb 2004 14:35:37 -0500 (EST) (envelope-from gary@outloud.org) Message-Id: <6.0.1.1.2.20040220143227.01ee0ec0@208.141.46.3> X-Sender: ancient@208.141.46.3 X-Mailer: QUALCOMM Windows Eudora Version 6.0.1.1 Date: Fri, 20 Feb 2004 14:35:25 -0500 To: Charles Swiger From: Gary Stanley In-Reply-To: References: <200402181729.06202@misha-mx.virtual-estates.net> <20040220072258.GA17579@VARK.homeunix.com> <200402201147.03606@misha-mx.virtual-estates.net> Mime-Version: 1.0 Content-Type: text/plain; charset="us-ascii"; format=flowed cc: freebsd-performance@freebsd.org Subject: Re: strange performance dip shown by iozone X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.1 Precedence: list List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 20 Feb 2004 19:35:18 -0000

Here are some knobs to try, i.e.:

RAID5: Write Back Cache, Normal Read Ahead, and Direct I/O
RAID1: Write Through Cache, Normal Read Ahead, and Direct I/O

That picked up my write speeds just enough that writing wasn't nearly as
bad as reading. 
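[The parity work that a write-back cache lets the controller defer and coalesce is a plain XOR update. A toy byte-level sketch, assuming nothing about any particular controller (the helper name is invented for the example): when one data block changes, the new parity can be computed from just the old block, the new block, and the old parity, without reading the other disks.]

```python
# Toy model of the RAID-5 parity update behind a small write.
# Not controller firmware; update_parity is an invented name.

def update_parity(old_data: bytes, new_data: bytes, old_parity: bytes) -> bytes:
    """new_parity = old_parity XOR old_data XOR new_data."""
    return bytes(p ^ o ^ n for p, o, n in zip(old_parity, old_data, new_data))

# Three data "disks" and their parity block.
d = [b"\x01\x02", b"\x10\x20", b"\x0a\x0b"]
parity = bytes(a ^ b ^ c for a, b, c in zip(*d))

# Overwrite disk 1's block; recompute parity from old+new block only.
new_block = b"\x55\x66"
parity = update_parity(d[1], new_block, parity)
d[1] = new_block

# The updated parity still equals the XOR of all current data blocks.
assert parity == bytes(a ^ b ^ c for a, b, c in zip(*d))
```

[This is why each sub-stripe write needs the old data and old parity read back first; caching lets several such updates to the same stripe be merged before anything hits the platters.]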
At 02:27 PM 2/20/2004, you wrote: >On Feb 20, 2004, at 11:47 AM, mi+mx@aldan.algebra.com wrote: >>Also, is the RAID5 writing speed supposed to be _so much_ worse, than >>that of a single disk? > >it's normal for RAID-5 write performance to be slower than that of a bare >drive. RAID filesystems involve tradeoffs between cost, performance, and >reliability. RAID-5 maximizes cost and reliability at the expense of >performance.... > >-- >-Chuck > >_______________________________________________ >freebsd-performance@freebsd.org mailing list >http://lists.freebsd.org/mailman/listinfo/freebsd-performance >To unsubscribe, send any mail to "freebsd-performance-unsubscribe@freebsd.org"