Date: Thu, 20 May 1999 13:32:11 -0500 (CDT)
From: James Wyatt <jwyatt@RWSystems.net>
To: Roger Marquis <marquis@roble.com>
Cc: freebsd-isp@FreeBSD.ORG
Subject: Re: Web Statistics break up program.
Message-ID: <Pine.BSF.4.05.9905201302461.18069-100000@kasie.rwsystems.net>
In-Reply-To: <Pine.GSO.3.96.990520090314.20611B-100000@roble2.roble.com>
On Thu, 20 May 1999, Roger Marquis wrote:
> > And on several of our servers will miss some events we need in the log.
> > Any entries that fall between the two 'cp' commands get zapped. (btw: It
> > would be better to move the 'chmod' after the second 'cp' to reduce this)
> > These events will be gone forever and may represent billable or auditable
> > content. beware...
>
> Sure, depending on system load you could lose a log entry between "cp
> logfile logfile.archive" and "cp /dev/null logfile" but you have to
> consider this in context.

I am still missing what was bad about signalling the server and having it
rotate the logfile. I am a bit anal-retentive about logs: I do not want
*any* missing lines. I like being able to tell customers and mgmt that we
can guarantee nothing got missed. Many admins get logfile requests and like
to handle them professionally with "Yes, I am sure all the data is there".
Is there something 'better' about not signalling the server that is worth
trading for a disclaimer?

> As for race conditions this also has to be considered in context. In
> theory a race condition would only occur if httpd was writing data faster
> than cp could copy it. Unless the destination media couldn't accept data
> at the rate httpd was writing this would never happen.

I thought *several* (like 5 to 20) processes could be writing to the access
file at the same time on a typical server. As they get connection
information and log it, they do so in append mode, which is very fast. When
'cp' finishes reading the last bit of data from the access file, the window
for dropped messages opens. cp then has to write that last buffer full,
close the file, execute another uncached process, wait for it to chmod the
file and return, start another executable (likely still in cache), and open
the access file in 'create/trunc' mode. The window only closes when that
open finally truncates the access file. The window is wider than that under
JFS, as directory updates cause odd timings, which suggests this approach
would be even riskier if ported to some other Unices. That can be a lot of
room. The window is there whether the file is 200MB (our case) or 4MB
(another case).

The results of the cp (and chmod) are also unchecked, so if your
destination filespool is full (among other failure modes), this blindly
does a 'cp /dev/null' over valid, unbacked-up data. You might consider
something more like:

    # only truncate the live log if the backup copy succeeded,
    # and page someone if any step failed
    cp -p $SRC $BACKUP && cp /dev/null $SRC && chmod 440 $BACKUP
    if [ $? -ne 0 ]; then
        syslog-page-or-whatever
    fi

Or you can just move them aside and signal (a rough sketch of that follows
below).

 - Jy@
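For the move-and-signal route, a minimal sketch might look like the
following. It assumes an Apache 1.3-style httpd whose parent PID is in
/var/run/httpd.pid and whose access log lives under /var/log/httpd; the
paths, the signal, and the 30-second settle time are all assumptions to
adjust for your own setup, not a drop-in script:

    #!/bin/sh
    # Sketch only: log path, pid file, and signal are assumptions.
    SRC=/var/log/httpd/access_log
    BACKUP=/var/log/httpd/access_log.`date +%Y%m%d`
    PIDFILE=/var/run/httpd.pid

    # A rename on the same filesystem is atomic, so no lines are lost:
    # the httpd children keep appending to the same open file under its
    # new name until they are told to reopen their logs.
    mv $SRC $BACKUP || exit 1

    # SIGUSR1 asks Apache 1.3 for a graceful restart, which reopens the
    # logs after in-flight requests finish.
    kill -USR1 `cat $PIDFILE` || exit 1

    # Let the old children finish writing before touching the file.
    sleep 30
    chmod 440 $BACKUP

The point of the mv is that it is a rename rather than a copy, so there is
no interval in which log lines can land in a file that is about to be
truncated; the sleep is only there because a graceful restart lets the old
children keep writing to the renamed file for a short while.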