Date: Mon, 6 Feb 2012 22:18:04 +0100
From: Eirik Øverby <ltning@anduin.net>
To: Doug Barton <dougb@FreeBSD.org>
Cc: freebsd-jail@FreeBSD.org
Subject: Re: Practical limit to number of jails on a given host?
Message-ID: <744BFFD8-23A6-4583-A266-B4976F494CC1@anduin.net>
In-Reply-To: <4F30381E.2020100@FreeBSD.org>
References: <4F30381E.2020100@FreeBSD.org>
On Feb 6, 2012, at 21:29, Doug Barton wrote:
> Howdy,
>
> Thinking about implementing a poor-man's virtualization solution with
> lots'o'jails, and wondering what people think about the practical limits
> of such a system. I realize that part of the answer is going to depend
> on CPU and RAM, so let's assume for the sake of argument that the answer
> to that bit is, "Lots of both."

Worry more about disk I/O. ZFS with fast spindles in RAID-Z, combined with
SSD L2ARC and ZIL, got me much, much further than spindles alone, but in
the end I caved and went SSD across the board on the busiest jail hosts
(rough pool sketch below). They have anywhere between 40 and 70 jails
running, many of them very busy, all of them different. The process count
seen from the host is in the low four digits.

> So first question is, is there some sort of hard-coded limit somewhere?
> If not, what is the largest number of jails that you've created
> successfully/reliably on a system, and what are the specs for that system?

I've - for the sake of testing - had about 350 jails on one system, each
with a mysql, a java/tomcat, and an nginx. They all worked and responded
fine to queries. I have no reason to think it would be a problem to add
more. The system in question was a 12-core (2 CPU), 48GB system.

> On a related note, what are the limits in terms of mount points on the
> system and/or jails? I'm thinking of a fairly typical "nullfs mount the
> system, devfs, and 2 or 3 NFS mount points" per jail type of situation.

I have no idea about NFS in such a setting; I use nullfs (ro) for all the
system stuff (6 mounts per jail, iirc), and use zfs datasets for /, /tmp,
/var, /etc and /usr/local inside the jails. Devfs, of course. I implement
filesystem quotas and the like using zfs, along with compression for
datasets that generally benefit from it (dataset sketch below).

Make sure you allow for enough open files. Also make sure any postgres
instances you run are on different UIDs (unless 9.x has a new way of
"fixing" that SysV limitation - more on that below). If you use ZFS, it
might be an idea to limit the ARC size (loader.conf) to avoid ZFS gobbling
up all the free memory after booting, before the processes in the jails
have ballooned (tuning sketch below).

And make sure you have plenty of swap. You don't want to swap, but if
things get hot it's better to have a slowdown from swapping than to have
random processes killed off ;)

> And finally, has anyone run into trouble with a large number of IP
> addresses for the jails? ISTR that way back when, the IP addresses
> associated with a particular interface were stored in a linked list, so
> as you added more you would start seeing O(N) slowdown on a lot of
> network stuff in the kernel.

I remember DES complaining about his 1-something GHz Athlon getting slow
with 1500 jails due to this. That was back around ..5-BETA? I remember
laughing long and hard at the insanity of 1500 jails on one box, and even
more at him being surprised that "something" would barf .. but I am
pretty sure it was fixed soon after.
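To make the storage comment concrete, a minimal sketch of the kind of pool
I mean - the device names and the gpt/slog and gpt/cache labels are made-up
examples for illustration, not my actual layout:

    # RAID-Z across four spindles (device names are examples)
    zpool create tank raidz da0 da1 da2 da3
    # one SSD partition as a dedicated ZIL (log device) ...
    zpool add tank log gpt/slog
    # ... and another SSD partition as L2ARC (cache device)
    zpool add tank cache gpt/cache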
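The per-jail dataset and nullfs layout looks roughly like this - the
tank/jails/web1 naming, the paths and the 10G quota are arbitrary examples:

    # one dataset tree per jail, with quota and compression
    zfs create -o quota=10G tank/jails/web1
    zfs create tank/jails/web1/tmp
    zfs create -o compression=on tank/jails/web1/var
    zfs create tank/jails/web1/usr
    zfs create -o compression=on tank/jails/web1/usr/local

    # read-only nullfs mounts of the shared base system, fstab(5)-style
    # (base path and mountpoints are examples):
    /jails/base/bin  /tank/jails/web1/bin  nullfs  ro  0  0
    /jails/base/lib  /tank/jails/web1/lib  nullfs  ro  0  0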
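The limits I keep bumping into live in loader.conf and sysctl.conf;
something along these lines, where the numbers are placeholders to be
sized to your own box, not recommendations:

    # /boot/loader.conf - cap the ARC so it can't crowd out the jails
    vfs.zfs.arc_max="8G"

    # /etc/sysctl.conf - raise the global open-file limit
    kern.maxfiles=200000
    # global knob to let jails use SysV IPC at all (postgres needs it)
    security.jail.sysvipc_allowed=1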
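And on the postgres point: the port creates its pgsql user with the same
default UID in every jail, and since SysV IPC is shared across jails, two
instances running under the same UID can step on each other. One way around
it is to create the user with a unique UID in each jail before installing
the port - 7001 below is an arbitrary example:

    # inside each jail, before installing the port (UID is an example)
    pw useradd pgsql -u 7001 -d /usr/local/pgsql -s /bin/sh -c "PostgreSQL pseudo-user"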
> Any thoughts or advice along these lines will be greatly appreciated. :)
>
>
> Doug
>
> --
>
> It's always a long day; 86400 doesn't fit into a short.
>
> Breadth of IT experience, and depth of knowledge in the DNS.
> Yours for the right price. :) http://SupersetSolutions.com/