Date:      Wed, 19 Apr 1995 08:07:00 +0300
From:      Petri Helenius <pete@silver.sms.fi>
To:        Joe Greco <jgreco@brasil.moneng.mei.com>
Cc:        freebsd-current@FreeBSD.org
Subject:   Re: mmap bugs gone yet?
Message-ID:  <199504190507.IAA05591@silver.sms.fi>
In-Reply-To: <9504182143.AA07382@brasil.moneng.mei.com>
References:  <199504181927.WAA00592@silver.sms.fi> <9504182143.AA07382@brasil.moneng.mei.com>

Joe Greco writes:
 > Hi Pete,
 > 
 > I would take this to mean that you're in a very memory-starved environment?
 > Are you running realtime nntplinks, or batchfile nntplinks?  I only deal
 > with realtime links, so the following discussion mostly applies there:
 >
With 50 outgoing feeds you're almost certain to be starved for memory
(especially since you also have readers running around).

 > I do not think that this would be helped by the integration of nntplink into
 > INN, multithreaded or otherwise.  Basically, if a feed is falling behind, it
 > will tend to continue to fall even further behind, to a point where it is no
 > longer reasonable to maintain a cache of articles to be sent in memory.  At
 > this point, INN must log to disk the list of articles to be sent.  So there
 > are a few scenarios that I can envision, depending on the implementation:
 >
Yes, but what I see is that even when most of the feeds are up to date
within 60 or 120 seconds (which amounts to roughly the same number of
articles), the system still tends to read the articles back from disk.

 > INN could remember the entire article in memory, and provide it to the
 > nntplink threads.  I do not think that this would be efficient.  Consider
 > somebody posting several hundred largish (700K) articles in a row.  Consider
 > what the memory requirements would be for any sort of "queued in-RAM"
 > implementation.  Consider the consequences of a feed that was even just a
 > few articles behind.  Now multiply it times ten feeds.  :-(  !!!!
 >
Over half of the articles are under 2k, and most are less than 4k. The
mechanism could read articles larger than, say, 16 or 32k from disk and
send them from there, while keeping anything smaller in memory. I've also
thought that the article buffer could be shared between the feeds rather
than kept separately for each feed (which, as you state, would be a waste
memory-wise). At the rates above, roughly an article a second and mostly
under 4k, a couple of minutes' worth of small articles comes to only a few
hundred kilobytes of shared buffer. If a feed is far enough behind that it
cannot keep up with the "current" state, it should be handled as it is now.
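
To make this concrete, here is a rough C sketch of the kind of thing I have
in mind. None of it is actual INN or nntplink code; the names (cached_art,
art_cache_put, ...) and the 32k threshold are just placeholders. Small
articles get a single reference-counted copy shared by every up-to-date
feed; anything bigger, or any feed that has fallen behind, goes to the
spool on disk just as it does today.

/*
 * Rough sketch only, not INN code: small articles are kept in one shared,
 * reference-counted in-memory copy handed to every up-to-date feed;
 * anything over the threshold is left for the feeds to re-read from the
 * spool on disk, exactly as happens today.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define SMALL_ARTICLE_MAX (32 * 1024)   /* "say, larger than 16 or 32k" */

struct cached_art {
    char   *data;    /* single malloc'd copy of the article text */
    size_t  len;
    int     refcnt;  /* one reference per feed that still has to send it */
};

/*
 * Called once when an article arrives.  Small articles get one shared
 * copy; for large ones we return NULL, meaning "re-read from the spool".
 */
struct cached_art *
art_cache_put(const char *data, size_t len, int nfeeds)
{
    struct cached_art *art;

    if (len > SMALL_ARTICLE_MAX)
        return NULL;                    /* big: feeds fetch it from disk */

    art = malloc(sizeof(*art));
    if (art == NULL)
        return NULL;
    art->data = malloc(len);
    if (art->data == NULL) {
        free(art);
        return NULL;
    }
    memcpy(art->data, data, len);
    art->len = len;
    art->refcnt = nfeeds;               /* shared by all up-to-date feeds */
    return art;
}

/* A feed drops its reference after sending; the last one frees the copy. */
void
art_cache_release(struct cached_art *art)
{
    if (art != NULL && --art->refcnt == 0) {
        free(art->data);
        free(art);
    }
}

int
main(void)
{
    /* Demo: a 2k article (the typical size above) shared by three feeds. */
    char small[2048];
    struct cached_art *art;

    memset(small, 'x', sizeof(small));
    art = art_cache_put(small, sizeof(small), 3);
    printf("2k article cached in memory: %s\n", art ? "yes" : "no");

    art_cache_release(art);
    art_cache_release(art);
    art_cache_release(art);             /* last reference frees the buffer */
    return 0;
}

A feed that falls behind would simply drop its references and fall back to
the spool/batchfile path, so the shared buffer never needs to grow beyond
the "current" window of articles.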

 > In general, if a feed is not keeping up, you are virtually guaranteed to be
 > forced to reread the article from disk at some point.  The only optimization
 > I can think of would be to devise some method that would try to
 > "synchronize" these rereads in some way.  I don't see an easy way to do
 > _that_.  More memory allows you more caching.  Less memory screws you.  The
 > ideal news server would have a gigabyte of RAM and be able to cache about a
 > day's worth of news in RAM.  :-)
 > 
 > I do not see any real way to shift the paradigm to alter this scenario -
 > certainly not if we accept the fact that more than 5 feeds will be severely
 > lagging.
 >
I think getting that number of disk reads down from 20 to 5 would help a
lot in many cases, but disk is getting cheap enough that you can have
multiple fast 2G drives for your news spool. If only more OSes supported
that kind of configuration efficiently (having the disks mounted as a
single partition spanning all or most of the drives).

Pete


