Date:      Thu, 8 Feb 2001 04:49:52 -0800 (PST)
From:      Dan Phoenix <dphoenix@bravenet.com>
To:        Geoff Buckingham <geoffb@chuggalug.clues.com>
Cc:        Julian Elischer <julian@elischer.org>, Matt Dillon <dillon@earth.backplane.com>, Andrew Reilly <areilly@bigpond.net.au>, Alfred Perlstein <bright@wintelcom.net>, Andre Oppermann <oppermann@monzoon.net>, Rik van Riel <riel@conectiva.com.br>, Mike Silbersack <silby@silby.com>, Poul-Henning Kamp <phk@critter.freebsd.dk>, Charles Randall <crandall@matchlogic.com>, Jos Backus <josb@cncdsl.com>, freebsd-hackers@FreeBSD.ORG
Subject:   Re: vinum and qmail (RE: qmail IO problems)
Message-ID:  <Pine.BSO.4.21.0102080446120.3495-100000@gandalf.bravenet.com>
In-Reply-To: <20010208120831.C61928@chuggalug.clues.com>


Negative on that, Houston :)
http://www.FreeBSD.org/cgi/getmsg.cgi?fetch=52705+54899+/usr/local/www/db/text/2000/freebsd-scsi/20001008.freebsd-scsi
...I think that may be the thread you are talking about.
Not too much info I could find specifically on what you are talking about
...but again, are you talking about vinum or ccd?
I'll keep the cluster-size striping suggestion in mind for the 2 disks.
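
Roughly what I have in mind for those two disks if I go the vinum route --
untested, and the device names, config file name, volume name and stripe
size are all guesses on my part (I picked a non-power-of-2 stripe size
because of your warning below, though I have no idea yet whether 293k is a
sensible value):

    # contents of a config file, say /etc/vinum.conf (name is arbitrary):
    drive d0 device /dev/da0s1e
    drive d1 device /dev/da1s1e
    volume varvol
      plex org striped 293k
        sd length 0 drive d0
        sd length 0 drive d1

    # then, from the shell:
    vinum create /etc/vinum.conf      # read the config and create the volume
    newfs -v /dev/vinum/varvol        # -v: whole device, no partition table
    mount /dev/vinum/varvol /var

(or the ccd equivalent, something like "ccdconfig ccd0 32 none /dev/da0s1e
/dev/da1s1e" followed by disklabel and newfs on ccd0 -- that interleave is a
guess too). A couple more notes at the bottom, below your mail.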



On Thu, 8 Feb 2001, Geoff Buckingham wrote:

> Date: Thu, 8 Feb 2001 12:08:31 +0000
> From: Geoff Buckingham <geoffb@chuggalug.clues.com>
> To: Dan Phoenix <dphoenix@bravenet.com>
> Cc: Julian Elischer <julian@elischer.org>,
>      Matt Dillon <dillon@earth.backplane.com>,
>      Andrew Reilly <areilly@bigpond.net.au>,
>      Alfred Perlstein <bright@wintelcom.net>,
>      Andre Oppermann <oppermann@monzoon.net>,
>      Rik van Riel <riel@conectiva.com.br>, Mike Silbersack <silby@silby.com>,
>      Poul-Henning Kamp <phk@critter.freebsd.dk>,
>      Charles Randall <crandall@matchlogic.com>, Jos Backus <josb@cncdsl.com>,
>      freebsd-hackers@FreeBSD.ORG
> Subject: Re: vinum and qmail (RE: qmail IO problems)
> 
> On Thu, Feb 08, 2001 at 03:41:59AM -0800, Dan Phoenix wrote:
> > 
> > 
> > 
> > Yes I did, and it made some real differences. I enabled it on /usr as well
> > as /var, and mounted /var with the noatime option. It's doing not badly for
> > the amount of email it is pushing... the thing is, this I/O problem never
> > used to be an issue, but with growth constantly happening it has come down
> > to a hardware-based solution. What I have recommended to the company is a
> > SCSI card in that machine with 2 SCSI drives. I will RAID 0 them together
> > with ccd or vinum and mount it as /var, and turn the existing /var into
> > extra swap space... although I may think of something else, as I don't
> > think it needs a gig of swap... and that should fix the I/O issue nicely.
> > Right now I have also split the load between 2 machines, which has helped
> > incredibly, but systat -vmstat is still always showing 100% disk usage, so
> > I will have to remedy the problem. Then I plan on moving all mail back to
> > that one machine and beating the shit right out of that FreeBSD machine to
> > see what FreeBSD can really handle. If anyone has some nice newbie docs :)
> > on ccd or vinum, they would be greatly appreciated.
> > 
> > 
> A word of warning on this: when striping across an even number of drives
> using a power of 2 as the stripe size, it is very easy to concentrate
> metadata on one drive, thereby doing away with much of your performance gain.
> 
> Workarounds include striping at cluster size (16 or 32 MB usually), using an
> odd number of disks (for non-RAID 3/5), or experimentation.
> 
> There should be a number of mails on the subject in the archives of the
> scsi mailing list; look for myself or Greg Lehey to find the thread.
> 
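
For the archives, what the tuning I mentioned above amounted to was roughly
this (assuming the "it" I enabled was soft updates; the device names here are
just examples, not the real ones):

    # turn on soft updates, with the filesystem unmounted or read-only
    tunefs -n enable /dev/da0s1e      # /var
    tunefs -n enable /dev/da0s1f      # /usr

    # /etc/fstab line so /var skips access-time updates
    /dev/da0s1e   /var   ufs   rw,noatime   2   2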

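And if I read the stripe-size warning right, the arithmetic behind it goes
something like this (the numbers are made up, just to illustrate):

    2 disks, 64k stripes  ->  full stripe width = 2 x 64k = 128k
    metadata (cylinder group headers) every 32MB = 32768k
    32768k is an exact multiple of 128k, so every cg header lands at the
    same offset within the stripe pattern -- i.e. always on the same disk.

    3 disks, 64k stripes  ->  full stripe width = 3 x 64k = 192k
    32768k mod 192k = 128k, so each cg header lands two stripes further
    along than the last one, and the metadata rotates across all 3 disks.

...which I guess is why the odd-number-of-disks trick works.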





