Date: Sun, 14 May 2006 15:00:44 +0100
From: Bruce M Simpson <bms@spc.org>
To: Stephen Clark <Stephen.Clark@seclark.us>
Cc: freebsd-net@freebsd.org, Robert Watson <rwatson@FreeBSD.org>, pavlin@icir.org, atanu@icir.org
Subject: Re: [PATCH] Re: IP_MAX_MEMBERSHIPS story.
Message-ID: <20060514140044.GF79277@spc.org>
In-Reply-To: <44667C7E.1020401@seclark.us>
References: <20060509122801.GA65297@spc.org> <20060509131517.GB79277@spc.org> <20060512030152.X20138@fledge.watson.org> <4463FD1D.9010600@seclark.us> <20060512131227.GD79277@spc.org> <20060513230315.GE79277@spc.org> <44667C7E.1020401@seclark.us>
Hello, On Sat, May 13, 2006 at 08:40:30PM -0400, Stephen Clark wrote: > Thanks for your effort - I will try it on monday at work in a test > configuration I have setup with > a hundred gre/vpn tunnels and ospf. This configuration needs a > multicast membership group > of 100. Thank you! I have extended Robert's netinet regression test framework to cover IP_ADD_MEMBERSHIP also and will be committing this update along with an update to the manual page. Initial tests with the regression framework suggest that joining more than 4095 groups on the same interface is likely to cause churn with structures further below in the stack, so whilst this is probably a scalable enough solution for yours (and everyone else's) needs, it probably doesn't need to be this scalable 'in real life', without changing many of the structures further down in net and/or netinet. IPv6 is likely to run into similar churn anyway, given that the KAME stack holds per-socket multicast memberships in a doubly linked list. So I will be updating the patch in the next 24 hours. Given that it seems stable for values 2047 <= n <= 4095 with SOCK_DGRAM I am inclined to commit with the maximum raised to 4095 and lazy allocation in place. Regards, BMS