Date:      Thu, 13 Mar 2008 14:06:40 -0700
From:      Wade Klaver <wadeklaver@itiva.com>
To:        AT Matik <asstec@matik.com.br>
Cc:        freebsd-ipfw@freebsd.org
Subject:   Re: On the trail of a dummynet/bridge/ipfw bug.
Message-ID:  <1205442400.4349.18.camel@wade-linux.itiva.com>
In-Reply-To: <200803131441.36597.asstec@matik.com.br>
References:  <1205343184.4032.44.camel@wade-linux.itiva.com> <200803131323.34208.asstec@matik.com.br> <1205428297.4032.51.camel@wade-linux.itiva.com> <200803131441.36597.asstec@matik.com.br>


OK, here's something weird then.  ipfw pipe show | wc -l has reported
higher numbers:
[root@ibm3550b ~]# ipfw pipe show | wc -l
    3453
This was reported after the bridge "died" attempting 2600 simultaneous
connections... it had been running at 2400 before I added 200 more.
Now, immediately after the above crash, I do a /etc/rc.d/netif restart,
and then:
[root@ibm3550b ~]# ipfw pipe show | wc -l
    3900
Then as long as I add additional connections very slowly, I can manage
to get more established until it dies at 2800 with:
[root@ibm3550b ~]# ipfw pipe show | wc -l
    4160
At this point I am only using these numbers as a general indication of
pipe activity, as the output is not one pipe per line; more often than
not there are two lines per pipe.  However, the end problem remains the
same: after a point, the bridge doesn't merely become saturated, it
crashes and requires that the network be restarted before continuing.
The fact that it is necessary only to restart the network, and not to
flush ipfw's pipes (which has no effect without a network restart),
suggests the problem may lie in a different subsystem.  The Broadcom
driver, perhaps?
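As an aside, counting pipes with "wc -l" over-counts because each pipe's
summary line is typically followed by indented mask/flow lines.  A closer
figure can be had by matching only the summary lines, which in stock
"ipfw pipe show" output begin with the pipe number and a colon.  A minimal
sketch; the sample text below stands in for live output, and the exact
layout can vary between releases, so treat the pattern as an approximation:

```shell
# Count dummynet pipes by matching only the summary lines, which start
# with the pipe number and a colon; the indented per-flow lines that
# inflate "wc -l" are skipped.  The sample stands in for live output.
sample='00001: 192.000 Kbit/s    0 ms   50 sl. 1 queues (1 buckets)
    mask: 0x00 0xffffffff/0x0000 -> 0xffffffff/0x0000
00002: 192.000 Kbit/s    0 ms   50 sl. 1 queues (1 buckets)
    mask: 0x00 0xffffffff/0x0000 -> 0xffffffff/0x0000'
printf '%s\n' "$sample" | grep -Ec '^[0-9]+:'   # prints 2
```

On a live box the equivalent would be "ipfw pipe show | grep -Ec '^[0-9]+:'".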
  This image: http://www.archeron.ca/pics/bridgecrash.jpg (see the
indicated section on the right) shows the crash from the network side as
additional requesters are added across the bridge.
  Any hints on how I can track down this problem, be it configuration,
hardware, OS or otherwise?

Cheers,
 -Wade

On Thu, 2008-03-13 at 14:41 -0300, AT Matik wrote:
> On Thursday 13 March 2008 14:11:37 you wrote:
> > Out of curiosity, what is the largest number of pipes you have had in a
> > system?  I would really like to be approaching the 4500 mark to simulate
> > that number of 192kb connections.
>
> I really don't know; I never had the necessity to check because I never
> had problems.  But the largest subnet that goes through one gw is a /20.
> I guess I have 60-70% of users max online, so there should be roughly
> 2.8-3.0k active pipes.
>
> >  -Wade
> >
> > On Thu, 2008-03-13 at 13:23 -0300, AT Matik wrote:
> > > On Thursday 13 March 2008 13:09:05 you wrote:
> > > > This is not entirely helpful.  Perhaps a suggestion of where to look
> > > > for a misconfiguration?  I have not done anything particularly exotic
> > > > to this system.  I also mentioned that I was able to overcome the 1024
> > > > pipe limit.  What I am more interested in tracking down is why the
> > > > bridging functionality crashes once I exceed around 2300 pipes.
> > >
> > > I agree, but I only wanted to show that this is not the pattern; also,
> > > some of my servers make it up to 2.8-2.9k pipes and I have no crash/hang
> > > problem.
> > >
> > > but I have a different fw configuration, may be you like to try:
> > >
> > >     ${fwpipe} 1 config bw ${bwd_max_lan}${uni}
> > >     ${fwpipe} 2 config bw ${bwu_max_lan}${uni}
> > >     ${fwqueue} 1 config pipe 1 weight ${prior_net_lan}
> > >     ${fwqueue} 2 config pipe 2 weight ${prior_net_lan}
> > >     ${fwadd} queue 1 ip from any to ${net_lan} out xmit ${if_lan}
> > >     ${fwadd} queue 2 ip from ${net_lan} to any in recv ${if_lan}
> > >
> > >
> > > for better understanding
> > >
> > > fwadd="/sbin/ipfw -q add"
> > > fwpipe="/sbin/ipfw pipe"
> > > fwqueue="/sbin/ipfw queue"
> > > uni="Kbit/s"
> > >
> > > >  -Wade
> > > >
> > > > On Wed, 2008-03-12 at 18:05 -0300, AT Matik wrote:
> > > > > On Wednesday 12 March 2008 14:33:04 Wade Klaver wrote:
> > > > > > PROBLEM DESCRIPTION
> > > > > >
> > > > > > I have a bridge set up on a 7.0 box and am attempting to use it to
> > > > > > limit HTTP connections outgoing from a box behind it to 192Kbit/s
> > > > > > for testing. During this testing I ran into some problems.  At
> > > > > > first, I found that the number of simultaneous pipes was limited to
> > > > > > 1024, allowing only 1024 192Kbit/s clients.  Additional clients
> > > > > > were simply blocked.  I am using a very simple firewall config:
> > > > > >
> > > > > >   ipfw pipe 1 config bw 192Kbits/s mask all
> > > > > >   ipfw add 00051 skipto 99 ip from 192.168.0.0/16 to 192.168.0.0/16
> > > > > >   ipfw add 00052 skipto 1000 ip from any to any
> > > > > >   ipfw add 00100 pipe 1 ip from 192.168.10.88 80 to any via bridge0
> > > > > >   ipfw add 00200 pipe 1 ip from any 25111 to any via bridge
> > > > > >
> > > > > > Regardless of how many clients I threw at the box, I had the limit:
> > > > > >
> > > > > > [root@ibm3550b ~]# ipfw pipe show | wc -l
> > > > > >     1028
> > > > >
> > > > > you must have something wrong there, I just checked on one of my
> > > > > boxes:
> > > > >
> > > > > # ipfw pipe show | wc -l
> > > > >     1797
