Date:      Tue, 18 Aug 2020 15:36:10 -0700 (PDT)
From:      "Rodney W. Grimes" <freebsd-rwg@gndrsh.dnsmgr.net>
To:        Ryan Stone <rysto32@gmail.com>
Cc:        Marko Zec <zec@fer.hr>, freebsd-net <freebsd-net@freebsd.org>
Subject:   Re: Is anybody using ng_pipe?
Message-ID:  <202008182236.07IMaAo5067447@gndrsh.dnsmgr.net>
In-Reply-To: <CAFMmRNyGu-vUgCsDjjDmX9YcEAhCDD-tZHeFJzRtaCOx-bCrgw@mail.gmail.com>

> On Tue, Aug 18, 2020 at 2:43 PM Eugene Grosbein <eugen@grosbein.net> wrote:
> > Sorry, missed that. But why wasn't it possible?
> 
> There's a daemon running on the system that handles most network
> configuration.  It's quite inflexible and will override any manual
> configuration changes.  It manages firewall rules but is ignorant of
> netgraph, so it will remove any dummynet rules but leave netgraph
> configuration alone.  It was significantly easier to just use ng_pipe,
> even after having to fix or work around the bugs, than it was to fight
> the daemon.
> 
> On Tue, Aug 18, 2020 at 2:56 PM Marko Zec <zec@fer.hr> wrote:
> > The probability that a frame is completely unaffected by BER events,
> > and thus shouldn't be dropped, is currently computed as
> >
> > Ppass(BER, plen) = Psingle_bit_unaffected(BER) ^ Nbits(plen)
> 
> The problem is in its calculation of Psingle_bit_unaffected(BER).  The
> BER is the fraction of bits that are affected, therefore it is the
> probability that a bit is affected.  But for some reason,
> Psingle_bit_unaffected(BER) is calculated as 1 - 1/BER rather than 1 -
> BER.  This leads to the probability table being wrong.  For example,
> given a BER of 23500000, the probability that a 1500-byte packet is
> not dropped is:

Is this a confusion over Bit Error Rate vs. Bit Error Ratio?
A BER of 23500000 I must assume means 1 errored bit in every
23500000 bits.

1 - 1/BER looks correct to me for a Bit Error Rate (1 error per BER bits)
1 - BER looks correct to me for a Bit Error Ratio (a fraction, usually quoted as a percentage)

> 
> (1 - 23500000/2**48)**(1500 * 8), which is approximately 99.90%.
> 
> However, ng_pipe calculates a fixed-point probability value of
> 281460603879001.  To calculate whether a frame should be dropped,
> ng_pipe takes this probability value and shifts it right by 17,
> yielding 2147373991.  It then calls rand() to generate a random number
> in the range [0,2**31-1]; if the random number is larger than the
> probability value than it is dropped, otherwise it is kept.  The
> chances that a packet is kept is therefore 2147373991/(2**31 - 1), or
> about 99.99%.
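
The shift-and-compare step described there is easy to reproduce.  A
sketch, with my own names rather than the actual ng_pipe identifiers:

```c
#include <assert.h>
#include <stdint.h>

/* 48-bit fixed-point keep probability, shifted down to 31 bits so it
 * can be compared directly against a random draw in [0, 2^31-1]. */
static int32_t
prob48_to_31(uint64_t p48)
{
	return ((int32_t)(p48 >> 17));
}

/* Keep the frame when the random draw does not exceed the scaled
 * probability; a larger draw means the frame is dropped. */
static int
frame_kept(uint64_t p48, int32_t rnd)
{
	return (rnd <= prob48_to_31(p48));
}
```

With the fixed-point value from the thread, prob48_to_31 yields exactly
2147373991, so nearly every draw in [0, 2^31-1] keeps the frame.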

This looks like an optimization to reduce calculation, as it is
done for every packet.  Why not do the calculation more accurately,
and only once per "error"?  See below.

> It's easy enough to fix this one, but I wasn't sure that it would be
> so easy to fix the TSO/LRO issue without significantly increasing the
> memory usage, so I wanted to gauge whether it was worth pursuing that
> avenue or if a simpler model would be a better use of my time.  The
> feedback is definitely that a simpler model is *not* warranted, so
> let's talk about fixing TSO/LRO.

I am not even sure how to deal with TSO/LRO and BER.  You're not
going to discard the whole segment, are you?  Are you going to
try to packetize it, drop the packet(s) with errors, and reassemble
it?   My method of calculating the future error point would
at least allow you to pass a whole segment through without any
of that hassle, and only do the extra work when an error falls within
some segment.

> 
> On Tue, Aug 18, 2020 at 1:47 PM Rodney W. Grimes
> <freebsd-rwg@gndrsh.dnsmgr.net> wrote:
> > Hum, that sounds like a poor implementation indeed.  It seems
> > like it would be easy to convert a BER into a packet drop
> > probability based on bytes that have passed through the pipe.
> 
> I'm not quite following you; can you elaborate?  Would this solution
> require us to update some shared state between each packet?  One
> advantage of the current approach is that there is no mutable state
> (except, of course, when configuration changes).

You would use the bytes-transferred state that is already stored,
and compute a "next error" point based on the BER plus some randomness,
so that your errors are not clocked at exact BER intervals.

Compare the next error point against bytes transferred + the size of
this packet/segment to decide whether it needs to be dropped; if you
drop, you must then calculate a new "next error" point.

This should considerably reduce the overhead for error rates
that affect fewer than 50% of packets, and would have the
same overhead for BERs that affect every packet.  And it is 100x
more efficient for rates that affect 1% of packets.

-- 
Rod Grimes                                                 rgrimes@freebsd.org


