Date: Wed, 9 Dec 2015 18:55:25 +0000 (UTC)
From: Alan Somers <asomers@FreeBSD.org>
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: svn commit: r292020 - head/sbin/devd
Message-ID: <201512091855.tB9ItPfW083858@repo.freebsd.org>
Author: asomers
Date: Wed Dec  9 18:55:25 2015
New Revision: 292020
URL: https://svnweb.freebsd.org/changeset/base/292020

Log:
  Increase devd's client socket buffer size to 256KB.  This is not as large
  as it looks, because we'll hit the sockbuf's mbuf limit long before
  hitting its data limit.  A 256KB data limit allows creating a ZFS pool on
  about 450 drives without overflowing the client socket buffers.

  MFC after:            4 weeks
  Sponsored by:         Spectra Logic Corp
  Differential Revision:        https://reviews.freebsd.org/D4476

Modified:
  head/sbin/devd/devd.cc

Modified: head/sbin/devd/devd.cc
==============================================================================
--- head/sbin/devd/devd.cc      Wed Dec  9 18:07:26 2015        (r292019)
+++ head/sbin/devd/devd.cc      Wed Dec  9 18:55:25 2015        (r292020)
@@ -108,15 +108,26 @@ __FBSDID("$FreeBSD$");
 
 /*
  * Since the client socket is nonblocking, we must increase its send buffer to
  * handle brief event storms.  On FreeBSD, AF_UNIX sockets don't have a receive
- * buffer, so the client can't increate the buffersize by itself.
+ * buffer, so the client can't increase the buffersize by itself.
  *
  * For example, when creating a ZFS pool, devd emits one 165 character
- * resource.fs.zfs.statechange message for each vdev in the pool.  A 64k
- * buffer has enough space for almost 400 drives, which would be very large but
- * not impossibly large pool.  A 128k buffer has enough space for 794 drives,
- * which is more than can fit in a rack with modern technology.
+ * resource.fs.zfs.statechange message for each vdev in the pool.  The kernel
+ * allocates a 4608B mbuf for each message.  Modern technology places a limit of
+ * roughly 450 drives/rack, and it's unlikely that a zpool will ever be larger
+ * than that.
+ *
+ * 450 drives * 165 bytes / drive = 74250B of data in the sockbuf
+ * 450 drives * 4608B / drive = 2073600B of mbufs in the sockbuf
+ *
+ * We can't directly set the sockbuf's mbuf limit, but we can do it indirectly.
+ * The kernel sets it to the minimum of a hard-coded maximum value and sbcc *
+ * kern.ipc.sockbuf_waste_factor, where sbcc is the socket buffer size set by
+ * the user.  The default value of kern.ipc.sockbuf_waste_factor is 8.  If we
+ * set the bufsize to 256k and use the kern.ipc.sockbuf_waste_factor, then the
+ * kernel will set the mbuf limit to 2MB, which is just large enough for 450
+ * drives.  It also happens to be the same as the hardcoded maximum value.
  */
-#define CLIENT_BUFSIZE 131072
+#define CLIENT_BUFSIZE 262144
 
 using namespace std;
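For readers who want to see the mechanism in isolation, below is a minimal
standalone sketch.  It is not code from devd.cc; the setup_client_socket()
helper and its error handling are illustrative assumptions.  It enlarges a
nonblocking AF_UNIX socket's send buffer with setsockopt(SO_SNDBUF) and
estimates the resulting sockbuf mbuf limit by reading
kern.ipc.sockbuf_waste_factor via sysctlbyname(), as the new comment
describes:

/*
 * Standalone sketch, not code from devd.cc: bump a nonblocking AF_UNIX
 * socket's send buffer and estimate the sockbuf mbuf limit that the
 * kernel will derive from it via kern.ipc.sockbuf_waste_factor.
 */
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/sysctl.h>

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define CLIENT_BUFSIZE 262144	/* same value the commit installs */

/* Hypothetical helper; devd's real socket setup is not shown here. */
static int
setup_client_socket(int fd)
{
	int bufsize = CLIENT_BUFSIZE;
	int flags = fcntl(fd, F_GETFL);

	/* Writes must never block the event loop. */
	if (flags == -1 || fcntl(fd, F_SETFL, flags | O_NONBLOCK) == -1)
		return (-1);
	/*
	 * Enlarge the send buffer.  The kernel caps the sockbuf's mbuf
	 * usage at roughly this value times the waste factor (and at a
	 * hard-coded maximum).
	 */
	if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &bufsize,
	    sizeof(bufsize)) == -1)
		return (-1);
	return (0);
}

int
main(void)
{
	int waste = 8;			/* documented default */
	size_t len = sizeof(waste);

	(void)sysctlbyname("kern.ipc.sockbuf_waste_factor", &waste, &len,
	    NULL, 0);
	/* 262144 * 8 = 2097152B >= 450 drives * 4608B = 2073600B */
	printf("estimated mbuf limit: %d bytes\n", CLIENT_BUFSIZE * waste);

	int fd = socket(PF_LOCAL, SOCK_STREAM, 0);
	if (fd == -1 || setup_client_socket(fd) == -1) {
		perror("socket setup");
		return (1);
	}
	close(fd);
	return (0);
}

On a stock system, where the waste factor defaults to 8, this should report
an estimated limit of 2097152 bytes, comfortably above the 2073600 bytes of
mbufs that 450 statechange messages would occupy.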