Date: Mon, 18 Feb 2002 17:24:44 -0800
From: Luigi Rizzo <rizzo@icir.org>
To: Marcel de Vries <mdevries@haveityourway.nl>
Cc: freebsd-net@FreeBSD.ORG
Subject: Re: network buffer problem
Message-ID: <20020218172444.C22456@iguana.icir.org>
In-Reply-To: <5.1.0.14.2.20020219013744.01dab338@outshine>
References: <5.1.0.14.2.20020218224119.01faca88@outshine> <5.1.0.14.2.20020219013744.01dab338@outshine>
Ok, I have refrained from jumping into this thread but
the noise is increasing and I think some clarifications are
really necessary now.
First of all: at various levels in the protocol stack, when
a packet cannot be forwarded to the next layer, more often
than not an ENOBUFS error is returned, which is propagated back
and then printed by perror() as "no buffer space available".
One of these situations is when a dummynet pipe fills up, so the
test you report below with dummynet tells us absolutely nothing,
and it certainly does not show an error or a configuration problem.
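
If you want to see this mechanism for yourself, a rough sketch of the
kind of test described below is (the rule number, bandwidth, queue size
and host are just made-up examples; run as root on a kernel built with
options IPFIREWALL and DUMMYNET):

    # squeeze all outgoing traffic through a small dummynet pipe
    ipfw add 100 pipe 1 ip from any to any out
    ipfw pipe 1 config bw 128Kbit/s queue 5

    # flood faster than the pipe can drain; once the pipe's queue is
    # full the kernel returns ENOBUFS, which ping reports via perror()
    # as "no buffer space available"
    ping -f some.host.example

This is dummynet doing exactly what it was asked to do, not a bug.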
Second: ping makes no use of tcp, so changing the
net.inet.tcp.{sendspace,recvspace} values has no [direct] impact
on the behaviour of your ping.
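
For what it is worth, you can inspect and change those values with
sysctl(8) along these lines (the number in the write below is only an
example, not a recommendation):

    sysctl net.inet.tcp.sendspace net.inet.tcp.recvspace
    sysctl -w net.inet.tcp.recvspace=65536

They only size TCP socket buffers; the ICMP packets ping sends never
go near them.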
What I suspect is happening is that mpd is saturating either
the divert socket queue (if you look in netinet/ip_divert.c,
this size is fixed at 64KB) or the buffer on the "tun" interface.
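
To narrow it down, the usual drop counters are worth a look; roughly
(exact flags and output vary a bit between releases, so check your
netstat(1) man page):

    netstat -m                        # mbuf/cluster usage, which you already checked
    netstat -s -p ip | grep -i drop   # IP-level drop counters
    netstat -idn                      # per-interface statistics, including drops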
My understanding stops here... if you can summarise
the problem you were having initially, maybe I
can give more details.
cheers
luigi
On Tue, Feb 19, 2002 at 01:43:41AM +0100, Marcel de Vries wrote:
> Well, it could be mpd, but my good old friend ;-) tested a few things.
>
> First he used DUMMYNET to simulate his ADSL connection in a LAN environment
> (100baseT)
>
> So he put some packet loss and bandwidth limitations on his LAN and
> started pinging some hosts. He gets the same result of packets being lost
> with the message 'no buffer space available'. So mpd wasn't used at all
> in this case. That makes sense, right? Could it be a configuration problem,
> or a really hard-to-pin-down IP stack problem?
>
> He was very sure that he did not have these problems with versions of
> FreeBSD before release 4.5.
>
> I don't know if that's true.
> And most IP traffic is no problem at all; what really triggers it is
> generating a lot of UDP traffic, like games do. The buffer seems to run
> wild under that type of traffic.
>
> Tonight we lowered the following values:
> net.inet.tcp.sendspace: 16384
> net.inet.tcp.recvspace: 16384
>
> We then gamed for more than an hour with no problems at all,
> but I think it was not long enough to be sure.
>
> But I hope this is useful: to recover from a total loss of the internet
> connection, I need to restart mpd. mpd was patched to run the following
> script when it is executed.
>
> #!/bin/sh
> INTERFACE=$1
> PROTOCOL=$2
> LOCAL=$3
> REMOTE=$4
>
> /sbin/route delete default
> /sbin/ifconfig ng0 mtu 1492
> /sbin/route add default $REMOTE
>
> [ -x /sbin/ipnat ] && /sbin/ipnat -CF -f /etc/ipnat.conf && ipf -y && echo -n 'ipnat'
>
> /usr/bin/killall -HUP inetd
> /usr/bin/killall snort
> /bin/sleep 15 && /usr/local/bin/snort -i ng0 -p -o -D -c /usr/local/etc/snort/snort.conf &
>
> So what kind of buffer would hit its maximum limit and be reset by this
> action?
> Not mbufs (nmb), that's for sure; when I check netstat -m everything seems normal.
>
> You see the MTU option in the script; there were Dutch people having
> trouble with their ADSL mxstream connection when using mpd on a
> FreeBSD box. 'No buffer' messages also appeared for some of those users,
> not all. Altering the MTU helped in some cases.
>
> But I don't think that's the whole story, because when using a lot of
> bandwidth, e.g. listening to streaming media while simultaneously pinging
> a host, you can still generate the message easily.
>
> And the 'no buffer space available' message is not just a warning; it
> comes with heavy packet loss, and that's something nobody wants to deal with.
>
> I hope this clears up a few things, and since I see more FreeBSD folks
> replying on this subject, something must be wrong in the world of BSD IP ;-)
>
> Thanks,
>
> Marcel de Vries
>
> At 17:11 18-02-2002 +0000, Mike Silbersack wrote:
>
>
> >On Mon, 18 Feb 2002, Marcel de Vries wrote:
> >
> >> I really want to make a point: is it the third-party software 'mpd-3.7
> >> Multi-link PPP daemon based on netgraph(4)' that is causing this, or is it
> >> something that changed in the TCP/IP stack of BSD, or in the driver
> >> support?
> >>
> >> We had these problems in 4.3, 4.4 and still in 4.5.
> >> Going from mpd 3.1 to 3.7 changed nothing about our problem.
> >>
> >> I think it's important to note why this is happening, because if this
> >> is a real TCP/IP stack issue, that would be very wrong and bad for
> >> FreeBSD.
> >>
> >> But I'm still no expert, so I have to leave this open for the pro BSD
> >> users/developers.
> >>
> >> Bye,
> >>
> >> Marcel
> >
> >If your friend with a different network card is having similar problems,
> >I'd guess that mpd-netgraph is where you should start investigating.
> >
> >However, as I have never used mpd-netgraph, I have no idea what you should
> >be looking at. If by chance an mpd guru does not wander into this thread,
> >I suggest that you look through the old mailing list archives, see who
> >has had experience with it before, and drop them an e-mail.
> >
> >As far as your other question about natd slowing down... I believe that
> >someone was looking into that. If he manages to find the bottleneck and
> >fix it, I suspect you'll see the announcement here.
> >
> >Mike "Silby" Silbersack
To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-net" in the body of the message
