Date:      Thu, 28 Jan 1999 23:20:05 -0000 (GMT)
From:      Duncan Barclay <dmlb@ragnet.demon.co.uk>
To:        Matthew Dillon <dillon@apollo.backplane.com>
Cc:        hackers@FreeBSD.ORG, dyson@iquest.net
Subject:   Re: High Load cron patches - comments?
Message-ID:  <XFMail.990128232005.dmlb@computer.my.domain>
In-Reply-To: <199901282203.OAA11295@apollo.backplane.com>


On 28-Jan-99 Matthew Dillon wrote:
> 
>:>:changing time constants.
>:>:
>:>:Duncan
>:> 
>:>     Think of it as the current-sense (aka limiting) resistor in a switching
>:>     power supply.
>:> 
>:>                                               -Matt
>:
>:One but the resistor is a linear element in the power supply (V=IR), if there
>:is a trip sensing the V, however, things can go loopy. A couple of years
>:back the West coast power grid underwent a chaotic episode which took out
>:most of it.
>:Have a look at Chua's diode, a simple non-linear resistor:
>:  I = Vin / R1 for -x <= V <= x
>:    = Vin / R2 for |V| > x
>:when put in a resonant circuit (i.e. a second order feedback loop with the
>:poles too close together) chaotic oscillations can occur.
> 
>     Now this is getting interesting.  I was thinking of the current limiting
>     resistor going between pins 7 and 8 of the trusty LM3578A ( with pin 7
>     tied to pin 6 ).  Basically, the oscillator is running at an order of
>     magnitude higher frequency than any possible feedback because there is
>     a huge capacitor sitting on the output stage of the regulator.  So the
>     worst you'll see from hitting the current limit is a little jitter
>     ( < 0.1% ) on the output, assuming no further regulation.

The switcher is using the oscillator in the regulator as a control
element; the big capacitor puts a pole (integrator) into the feedback
loop to get some filtering. The regulator modulates the oscillator in
some way (usually pulse width) to vary the amount of charge dumped into
the loop filter. A switching regulator is similar in some ways to a
phase locked loop in this respect. So changes in the oscillator will be
seen by the feedback loop, not necessarily as a frequency, but as a
change in the output voltage.

>     The case that matters here is, of course, the case where one actually
>     runs into the limit.  The power curve basically goes up linearly until
>     it hits the limit, then flattens out --- but doesn't go down much ( P=IV,
>     so when it hits the current limit V will start to go down as I goes up
>     in the power output stage.  The current limit is associated with the
>     power input to the regulator, of course, and since the voltage input is
>     steady the current limit is effectively a power limit ). 
>     The jitter due to the limiting function of actually shutting down the
>     oscillator and bringing it back up is too small to worry about.   

In many voltage regulators the shutdown action is to go into a
"foldback" mode and not just constant current mode (which you
describe).  When this happens the regulator deliberately reduces the
output voltage to limit the output current, a little ascii art:

  volts
  |
  |
  |    b-------c
  |   / |     /
  |  /  |  /
  | /   |/ 
  -a----d------- amps

Segment a to b is just the regulator dropout, b to c is normal
operation up to the current limit.  c to d happens when Iout > Imax
and the regulator "crowbars" the output (usually quickly).  d to b
happens when it tries to recover, usually gently.
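
The curve above can be put into numbers. A rough model (all values
invented) that intersects a resistive load line with the b-c and c-d
segments:

```python
# Rough numeric model of the foldback V-I curve sketched above
# (points a, b, c, d). The component values are invented.

V_NOM = 5.0    # regulated output voltage (flat b-c segment)
I_MAX = 2.0    # current limit, point c
I_FOLD = 0.5   # folded-back short-circuit current, point d

def operating_point(r_load):
    """Intersect the load line V = I*R with the regulator curve.

    Returns the steady-state (volts, amps)."""
    if r_load >= V_NOM / I_MAX:
        # Segment b-c: normal regulation, the load draws under I_MAX.
        return V_NOM, V_NOM / r_load
    # Segment c-d: overload. The curve is the straight line from
    # c = (I_MAX, V_NOM) down to d = (I_FOLD, 0); solving V = I*R
    # against it gives the folded-back operating point. (Real parts
    # do this with transistor ratios, not straight lines.)
    i = V_NOM * I_FOLD / (V_NOM - r_load * (I_MAX - I_FOLD))
    return i * r_load, i

print(operating_point(10.0))  # light load: (5.0, 0.5)
print(operating_point(0.0))   # dead short: current folds back to (0.0, 0.5)
```

The point of the exercise: into a short the regulator only delivers
I_FOLD, not I_MAX, which is why foldback dissipates so much less than
plain constant-current limiting.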

In many other PLL applications (e.g. RF synthesisers) the foldback
case isn't handled well.  Recently we used an IC in a GSM handset
design where, at turn-on, the input frequency to the PLL went above its
limit (point c if we replace amps with frequency).  Unfortunately this
caused the flip-flops in the divider to go meta-stable and eventually
stick, requiring a complete power reset :-(((

>     The rate limiting in this paradigm is dealing with the situation where,
>     say, you have 8 hard drives that eat 1.5A each on spinup and rather than
>     turn them on all at once you stagger-start them.  However, *I* prefer
>     turning them on all at once, which maxes the power supply at its max
>     power output for a short period of time.  In the stagger-start case,
>     the power supply is NOT maxed out.  i.e. you aren't utilizing 100% of
>     your resources.
> 
>     Assuming a direct transfer of power to momentum, my way will get all 
>     the drives spun up more quickly while the staggered start case will
>     get a few drives spun up even MORE quickly, but the rest of the drives
>     quite a bit LESS quickly.

Hmmm, not sure of this given the foldback functionality, but it's true
for current limiting; I'm not very good with motors.  Your way will
stress the output transistor/GTO of the PSU though, and decrease long
term reliability (increased power dissipation -> hotter -> Arrhenius
relationship -> messy solid state physics stuff ;-)).

>     This is why I prefer allowing bursts, like allowing a lot of sendmails
>     to fork at once rather than rate limit them.  I don't mind hitting 
>     the current-limit ( max power output of the power supply ).  My hard
>     limit would be the 'number of drives' in the system.

Going back to launching multiple processes then, do we have to consider
the "second" order effects of the memory hierarchy? I.e. the load on the
CPU and RAM is manageable because essentially there is no difference in
random access time. However, when hitting secondary storage, does
thrashing the disk heads etc. adversely affect your "optimisation"?

>     Now, the more quickly versus less quickly case is classic scheduling
>     theory.  You have N people each with job J(n).  Each job takes T(n)
>     time ( different for each job).  How do you schedule the jobs such 
>     that you get the fewest complaints?  This certainly applies to what
>     John is talking about.

I think I'm saying that T(n) may be non-deterministic or that its
distribution changes when we get into limiting conditions. Continuing my
disk drive thrashing theme, does the response from a heavily loaded disk
sub-system change between normal and high load? I would say it will.
High load could be defined as the point when the absence of the disk
read caches makes no (or little) difference to the data returned from
the disk. In this case the T(n) distribution has changed drastically
from the normal case where the cache is effective.
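
To put toy numbers on that shift (costs invented: one "tick" for a
cache hit, 100 for a head seek):

```python
# Toy illustration of how the mean of T(n) moves between the cached
# and thrashing regimes. The costs are invented, not measured.

HIT_COST = 1     # ticks for a cache hit
MISS_COST = 100  # ticks for a real head seek

def mean_service_time(hit_rate):
    """Expected cost of one request at the given cache hit rate."""
    return hit_rate * HIT_COST + (1 - hit_rate) * MISS_COST

print(mean_service_time(0.95))  # normal load: ~6 ticks
print(mean_service_time(0.05))  # thrashing: ~95 ticks
```

A scheduler tuned on the first number is off by over an order of
magnitude once the cache stops helping.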

A route to managing this sort of change is to not have a central rate
limiting policy but an adaptive one based on information from the
various sub-systems.  For example, if the secondary storage is saying
that throughput is dropping but demand is increasing, and the process
creation unit is saying more more more, a cause and effect relationship
is clear. Thus the system can slow the fork rate a bit. If the cache hit
rate recovers (we've managed to load all of sendmail's stuff into the
caches) the fork rate can be increased again.
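
In pseudo-code, the sort of adaptive limiter I'm imagining (all names
and numbers invented, not any real kernel interface):

```python
# Sketch of adaptive (rather than central) rate limiting: the fork
# limiter adjusts its own rate from feedback the disk sub-system
# reports, instead of using a fixed cap. Thresholds are invented.

class AdaptiveForkLimiter:
    def __init__(self, rate=100, min_rate=1, max_rate=1000):
        self.rate = rate          # forks allowed per tick
        self.min_rate = min_rate
        self.max_rate = max_rate

    def feedback(self, cache_hit_rate, demand_rising):
        """Called each tick with the disk sub-system's report."""
        if cache_hit_rate < 0.5 and demand_rising:
            # throughput dropping while demand rises: back off hard
            self.rate = max(self.min_rate, self.rate // 2)
        elif cache_hit_rate > 0.9:
            # cache effective again (sendmail's stuff loaded): ramp up
            self.rate = min(self.max_rate, self.rate + 10)

lim = AdaptiveForkLimiter()
lim.feedback(cache_hit_rate=0.2, demand_rising=True)   # thrash: halve
print(lim.rate)   # 50
lim.feedback(cache_hit_rate=0.95, demand_rising=False) # recovered: +10
print(lim.rate)   # 60
```

Backing off multiplicatively but recovering additively keeps the loop
stable, much like the gentle d-to-b recovery in the foldback curve.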

I can't decide whether I agree with your policy (load it up for a
short period) or not.  I can see that for some operations (100 sendmails
at once) this will work fine, but I'm not sure it works for, say, 100
different cron jobs every 10 minutes. However, not running a large
network, I don't know which case is more usual/important/interesting.

>                                       -Matt
>                                       Matthew Dillon 
>                                       <dillon@backplane.com>

Duncan

---
________________________________________________________________________
Duncan Barclay          | God smiles upon the little children,
dmlb@ragnet.demon.co.uk | the alcoholics, and the permanently stoned.
________________________________________________________________________

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-hackers" in the body of the message


