Date: Thu, 20 May 1999 11:30:21 -0500 (CDT)
From: James Wyatt <jwyatt@RWSystems.net>
To: Roger Marquis <marquis@roble.com>
Cc: freebsd-isp@FreeBSD.ORG
Subject: Re: Web Statistics break up program.
Message-ID: <Pine.BSF.4.05.9905201115020.18069-100000@kasie.rwsystems.net>
In-Reply-To: <Pine.GSO.3.96.990518195705.24618B-100000@roble2.roble.com>
On Tue, 18 May 1999, Roger Marquis wrote:
> > > This will create an archived logfile (http.site.May_1999) and erase
> > > the original without needing to kill -1 the httpd.
> > >
> > > #!/bin/sh -
> > > LOGDIR=/var/log
> > > ARCDIR=/var/log/oldlogs
> > > DAY=`date | awk '{ OFS="_" ;print $2,$6}' `
> > > for log in $LOGDIR/http* ; do
> > >     name=`basename $log`
> > >     cp $log $ARCDIR/${name}.${DAY}
> > >     chmod 440 $ARCDIR/${name}.${DAY}
> > >     cp /dev/null $log
> > > done
> >
> > Egads!!
> > That's a pretty vicious race condition there; you'll lose records on
> > busy servers.
>
> In theory, perhaps; in reality it doesn't. I've never seen this algorithm
> fail, even when used on log files that grow by several megabytes per day.
Since you would quietly lose just a few lines once in a while during a
low-traffic period, how would you *know*? The server I'm most concerned
about handles eCommerce for transportation, and its logs run about
40-50MB/day. This looks like a noticeable race condition on most
platforms; if it really doesn't bite, I'd like to know why. Sounds like I
have some playing to do some evening - a first stab is sketched below.

 - Jy@
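
P.S. Here is roughly what I mean to play with: hammer a scratch file from a
background loop (a crude stand-in for httpd), run the same copy-then-truncate
steps, and see whether any lines vanish. Untested sketch; the path, the count,
and the writer loop are just placeholders, not anything from Roger's script:

#!/bin/sh -
LOG=/tmp/http.test          # scratch file, NOT a real httpd log
COUNT=20000

cp /dev/null $LOG           # start with an empty file

# background writer: appends numbered lines as fast as it can
( i=0
  while [ $i -lt $COUNT ]; do
      i=`expr $i + 1`
      echo "request $i" >> $LOG
  done ) &

sleep 1                     # let the writer get going
cp $LOG $LOG.archived       # same copy ...
cp /dev/null $LOG           # ... then truncate, as in the script above
wait                        # let the writer finish

A=`wc -l < $LOG.archived`
B=`wc -l < $LOG`
echo "wrote $COUNT, kept `expr $A + $B`"   # any shortfall is the race

It may take a few runs (or a bigger COUNT) to catch it, but every line written
between cp finishing its read and the truncate should simply disappear.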
Wyatt's law: The difference between theory and practice is larger in
practice than in theory.
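
P.P.S. If practice does turn out to be worse than theory here, the usual fix
is to give up on dodging the restart: rename the logs first (rename is atomic
when old and new are on the same filesystem, so nothing is lost), then kill -1
httpd so it reopens them. Roughly like this - untested, and the pid-file
location is a guess, check your httpd.conf for where yours really lives:

#!/bin/sh -
LOGDIR=/var/log
ARCDIR=/var/log/oldlogs
DAY=`date | awk '{ OFS="_"; print $2,$6 }'`
for log in $LOGDIR/http* ; do
    mv $log $ARCDIR/`basename $log`.${DAY}   # httpd keeps writing to the renamed file
done
kill -1 `cat /var/run/httpd.pid`             # restart httpd so it reopens $LOGDIR/http*
sleep 30                                     # give things a moment to settle
chmod 440 $ARCDIR/*.${DAY}

The price is the restart the original script was trying to avoid; if your
Apache is new enough, kill -USR1 (graceful restart) is gentler about
in-flight requests.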
