Date: Thu, 20 Jun 2013 16:29:42 +0200
From: Andre Oppermann <andre@freebsd.org>
To: Eugene Grosbein <eugen@grosbein.net>
Cc: "freebsd-net@freebsd.org" <freebsd-net@freebsd.org>, "Eggert, Lars" <lars@netapp.com>, Jack Vogel <jfvogel@gmail.com>
Subject: Re: hw.igb.num_queues default
Message-ID: <51C311D6.5090801@freebsd.org>
In-Reply-To: <51C305B3.7050703@grosbein.net>
References: <843F7891-FD87-4F16-A279-B45D4A674F4E@netapp.com> <51C305B3.7050703@grosbein.net>
On 20.06.2013 15:37, Eugene Grosbein wrote:
> On 20.06.2013 17:34, Eggert, Lars wrote:
>
>> real memory  = 8589934592 (8192 MB)
>> avail memory = 8239513600 (7857 MB)
>>
>> By default, the igb driver seems to set up one queue per detected CPU. Googling around, people seemed to suggest that limiting the number of queues makes things work better. I can confirm that setting hw.igb.num_queues=2 seems to have fixed the issue. (Two was the first value I tried; other non-zero values might work, too.)
>>
>> In order to uphold POLA, should the igb driver maybe default to a conservative value for hw.igb.num_queues that may not deliver optimal performance, but at least works out of the box?
>
> Or, better, make the nmbclusters auto-tuning smarter, if possible.
> I mean, use more nmbclusters for machines with large amounts of memory.

That has already been done in HEAD. The other problem is the pre-filling of the large rings for all queues, which strands large amounts of mbuf clusters. OpenBSD starts with a small number of filled mbufs in the RX ring and then dynamically adjusts the number upwards if there is enough traffic to justify maintaining deep buffers. I don't know whether it always scales up quickly enough in practice, though.

-- 
Andre
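[Editor's note: both knobs discussed in this thread are FreeBSD loader tunables. A sketch of the corresponding /boot/loader.conf entries follows; the values are purely illustrative, not recommendations from the thread participants.]

```
# /boot/loader.conf -- illustrative values only
hw.igb.num_queues=2          # cap igb at two queue pairs instead of one per CPU
kern.ipc.nmbclusters=262144  # raise the mbuf cluster limit on large-memory machines
```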