Date:      Mon, 22 Jun 1998 23:53:15 +0000 (GMT)
From:      Terry Lambert <tlambert@primenet.com>
To:        michael@memra.com (Michael Dillon)
Cc:        freebsd-hackers@FreeBSD.ORG
Subject:   Re: LSMTP and Unix (fwd)
Message-ID:  <199806222353.QAA00673@usr02.primenet.com>
In-Reply-To: <Pine.BSI.3.93.980621160451.29422B-100000@sidhe.memra.com> from "Michael Dillon" at Jun 21, 98 04:06:47 pm

> Why is this benchmark so slow on UNIX filesystems?

I don't think they are.

> Will they give out the source to the benchmark so that someone can test
> FreeBSD's performance and/or figure out if the benchmark is valid?

You should ask them.


> I don't know why people always assume that LSMTP will automatically run
> faster on unix. Other than pure application code, which is the same on
> all systems,

Pure application code is *not* the same on all systems; there is context
switch overhead, which is not inconsiderable under NT.

> LSMTP depends heavily on file I/O, network I/O and
> scheduling. NT has low overhead for all three.

NT does *not* have low file I/O overhead.  Depending on the amount of
RAM, it is very easy to saturate the log cache for NTFS.  According
to a Microsoft "TechNote", to get better FS performance on NT, one
should install VFAT.  Hardly a ringing endorsement.

The network I/O is rather a poor benchmark.  NT does not conform to
the TCP/IP RFC's, for one thing.  This can easily be seen by attempting
to place an NT server's connection in "FINWAIT2"; NT drops out of this
state illegally early, most likely in response to the lack of resource
tracking in Windows 95.

For the ESMTP "Pipelining" extension, this is, in fact, deadly.
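To see why an early teardown is deadly here: with PIPELINING the client
sends the whole envelope in one write and only then reads the grouped
replies, so a connection dropped early loses every pending reply at once.
A minimal sketch of that batching, with hypothetical helper names (not
from any real mailer); continuation ("250-") lines are skipped for
simplicity:

```python
def build_pipeline(sender, recipients):
    """Return one buffer holding the whole pipelined SMTP envelope."""
    cmds = ["MAIL FROM:<%s>" % sender]
    cmds += ["RCPT TO:<%s>" % r for r in recipients]
    cmds.append("DATA")
    return "".join(c + "\r\n" for c in cmds).encode("ascii")

def parse_replies(buf):
    """Split the server's grouped reply buffer into (code, text) pairs."""
    replies = []
    for line in buf.decode("ascii").splitlines():
        # final reply lines look like "250 text"; "250-text" is a
        # continuation line and is ignored in this sketch
        if line[:3].isdigit() and (len(line) == 3 or line[3] == " "):
            replies.append((int(line[:3]), line[4:]))
    return replies
```

If the server closes the connection after the single write, parse_replies
never sees a reply for any recipient, and the client cannot tell which
RCPTs were accepted.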


> In practice the main
> source of wasted cycles is file I/O. Here are some figures for a
> LISTSERV benchmark that, while not designed specifically for LSMTP,
> measures the same kind of I/O that LSMTP does and has been shown to
> relate to LSMTP performance. Higher figures mean better performance,
> anything above 50 is very good. There are two different benchmarks, I'll
> write them down as xx/yy, it is not a ratio but two independent numbers
> on the same scale, and in practice you want to worry mainly about the
> second one, so I'll arrange the numbers in that order. First NT (all
> systems are 4.0 and with the $#%#%# 8.3 DOS compatibility kludge
> disabled, which is the first thing we do after installing NT on a fresh
> system):

[ ... benchmarks for systems without a unified VM and buffer cache and
      relatively incomparable hardware, as well as no obvious tuning
      for performance.  Note the Linux performance number (64), which
      is most likely relative to async mounts.  Note the secondary
      numbers -- they appear to be tied to network performance ... ]


> Obviously the RAID systems have the best numbers, but it is not as clean
> cut as with NT. Some systems have good numbers even without RAID, some
> have second rate numbers even with RAID (I have a lot more numbers but I
> also have work to do :-) ). Note that Digital unix has a revamped file
> system in 4.0, so you can't compare 4.0 and 3.2 directly (4.0 without
> RAID would be faster than 53/20). The general idea is that you don't
> want to use ufs with LSMTP, but even some of the non-ufs systems are
> slowish.

I think you don't want to pay for the POSIX guarantees on timestamp
updates, specifically "atime".  The slowness also appears to be tied to
synchronous directory operations.  The Digital UNIX 4.x FS is, I believe,
using the USL DOW
(Delayed Ordered Writes) technology.  This technology is significantly
inferior to async mounts (per the Linux numbers), and given the likely
priority banded FIFO usage patterns of a mail server, soft updates
would probably be a significant win on top of this, since it implicitly
does write-gathering over the delta covered by the syncer clock depth.
(ie: many writes would never go to disk because of FreeBSD's aggressive
VM caching policies).
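A toy model of the write-gathering effect: if many logical updates to the
same metadata block land within one syncer interval, a delayed-write
scheme pushes the block to disk only once.  This is purely illustrative;
the function names and the 30-second interval are assumptions, not
FreeBSD internals.

```python
SYNCER_INTERVAL = 30.0  # assumed seconds between flushes

def physical_writes(events, interval=SYNCER_INTERVAL):
    """events: list of (timestamp, block_id) logical writes.
    Returns how many physical writes a delayed-write cache would
    issue: one per dirty block per flush window."""
    flushed = set()
    count = 0
    for t, block in sorted(events):
        window = int(t // interval)
        if (window, block) not in flushed:
            flushed.add((window, block))
            count += 1
    return count
```

With this model, 1000 queue-file creations hitting the same directory
block inside one window cost a single physical write, where a fully
synchronous scheme would pay for all 1000.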


> My experience in correlating these numbers with LSMTP performance is
> that to get good performance without breaking records, you need to score
> around 30 at least on the first test and ideally on both. All NT systems
> do that, even the laptop! To break records, you need 100+ ideally, and
> no less than 50 on the second test. We still manage to break records on
> VMS, but without boring you with RMS details it is a unique file system
> with unique issues and in the end you can make it deliver roughly the
> same performance as a system scoring say 70, plus VMS was engineered for
> lots of asynchronous I/O and is faster in other areas, which compensates
> partly for RMS.  Even so, the file system is by far the biggest
> bottleneck on VMS. VMS clearly outperformed NT with the previous
> generation of processors, now it's about even, and soon NT will take the
> lead (actually, I think Digital unix will take the lead, but it's too
> early to be sure, we only have partial lab results).

Most of the VMS buffer cache work came out of the demands in the FS
placed by the "Pathworks for VMS (NetWare)" product; I was one of the
three Novell engineers who worked on this (Robert Withrow was one of the
DEC engineers).  Much of the FS caching subsystem came out of work by
one of the other Novell engineers, Dan Grice.  Much of the scheduler
work ("hot engine scheduling", LIFO'ing of work-to-do requests by
the processing engines, etc.) owes a lot to my modifications of the
DEC MTS (MultiThreading Services), a call-conversion scheduler for
a user space threads implementation, which I pounded very deep into
the Mentat STREAMS code that I was partly responsible for porting to
VMS in support of that product.

I would seriously suggest that a lot of small-duration calls (as one
would expect with a request/response, client/server protocol, like
SMTP or NetWare's NCP) would fit *very* poorly into a kernel threading
environment that did not support some kind of quantum-affinity.  This
puts a call-conversion scheduler at a
GREAT advantage over kernel implementations.

Combined, these would well account for the VMS vs. DEC UNIX numbers.

You would most likely be able to resolve some of this by using the
async I/O facilities in DEC UNIX (or indeed, Linux, Solaris, and FreeBSD)
to implement I/O interleaving.
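A minimal sketch of that interleaving, using a thread pool as a stand-in
for the platform async I/O facilities; deliver_one is a hypothetical
placeholder for a blocking SMTP delivery, with sleep modeling DNS and
connection latency:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def deliver_one(msg, latency=0.05):
    time.sleep(latency)            # models DNS + connect latency
    return "delivered:%s" % msg

def deliver_interleaved(msgs, pool_size=8):
    # overlap the per-message latency across the pool instead of
    # paying it serially, one message at a time
    with ThreadPoolExecutor(max_workers=pool_size) as pool:
        return list(pool.map(deliver_one, msgs))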


> None of this is going to be an issue if you have a small workload, say
> 100-200k/day. It is only a problem when you decide to buy a really big
> box to deliver a whole bunch of mail. This posting will probably have
> been delivered to 98% of its recipients some 20 sec or so from when I
> hit "Send," and if it were to take 30 sec instead I am sure you would
> survive :-) On the other hand, if your large newsletter had to go out in
> 2h and it took 3h instead, it would be another story.

These numbers are unreasonably small performance expectations, even
assuming most of the time is spent making DNS requests and is therefore
a latency issue rather than a throughput issue.  In the worst case, your
minimal time can be no better than your maximal pool retention time,
which is invariant, and speaks to your connectivity and the size of your
DNS cache more than anything else.
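The back-of-envelope arithmetic behind that: with C concurrent sessions
each paying latency L per message, wall time for N messages is about
N * L / C, a latency bound rather than a throughput bound.  The numbers
below are illustrative assumptions, not measurements:

```python
def delivery_time(n_msgs, latency_s, concurrency):
    # wall time when per-message latency dominates and the pool
    # keeps all sessions busy
    return n_msgs * latency_s / concurrency

# 10,000 recipients, 200 ms of DNS + connect latency each, and 100
# parallel sessions works out to about 20 seconds of wall time --
# the same order as the estimate quoted above.
```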

There *are* mailers that can handle a *much* larger load; for example,
i.Mail from Software.com:

	http://www.software.com/Products/InterMail/Intermail.html

It runs best on UNIX platforms, incidentally, and is the primary
foundation for AT&T GlobalNet services.

It would be better if they were using FreeBSD, but at least they are
using a UNIX family system.


					Terry Lambert
					terry@lambert.org
---
Any opinions in this posting are my own and not those of my present
or previous employers.

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-hackers" in the body of the message


