From owner-freebsd-net@freebsd.org Thu Oct 1 08:11:25 2020
Subject: Re: mlx5 irq
To: Michal Vančo, freebsd-net@freebsd.org
From: Hans Petter Selasky <hps@selasky.org>
Message-ID: <94978a05-94c6-cc55-229c-5a3c5352b29a@selasky.org>
In-Reply-To: <0aa09fcc-dfcc-005e-8834-2a758ba6a03f@microwave.sk>
Date: Thu, 1 Oct 2020 10:10:42 +0200
List-Id: Networking and TCP/IP with FreeBSD

On 2020-10-01 09:39, Michal Vančo via freebsd-net wrote:
> Hi

Hi Michal,

> I have a server with one Mellanox ConnectX-4 adapter and the following
> CPU configuration (SMT disabled):
>
> # dmesg | grep SMP
> FreeBSD/SMP: Multiprocessor System Detected: 16 CPUs
> FreeBSD/SMP: 2 package(s) x 8 core(s) x 2 hardware threads
> FreeBSD/SMP Online: 2 package(s) x 8 core(s)
>
> What I don't understand is the number of IRQs allocated for each mlx5_core:
>
> # vmstat -i | grep mlx5_core
> irq320: mlx5_core0                     1          0
> irq321: mlx5_core0              18646775         84
> irq322: mlx5_core0                    21          0
> irq323: mlx5_core0                 97793          0
> irq324: mlx5_core0                 84685          0
> irq325: mlx5_core0                 89288          0
> irq326: mlx5_core0                 93564          0
> irq327: mlx5_core0                 86892          0
> irq328: mlx5_core0                 99141          0
> irq329: mlx5_core0                 86695          0
> irq330: mlx5_core0                104023          0
> irq331: mlx5_core0                 85238          0
> irq332: mlx5_core0                 88387          0
> irq333: mlx5_core0              93310221        420

^^^ It appears you have some application which is using a single TCP
connection heavily, so the traffic doesn't get distributed.

> irq334: mlx5_core0               1135906          5
> irq335: mlx5_core0                 85394          0
> irq336: mlx5_core0                 88361          0
> irq337: mlx5_core0                 88826          0
> irq338: mlx5_core0              17909515         81
> irq339: mlx5_core1                     1          0
> irq340: mlx5_core1              18646948         84
> irq341: mlx5_core1                    25          0
> irq342: mlx5_core1                208684          1
> irq343: mlx5_core1                 91567          0
> irq344: mlx5_core1                 88340          0
> irq345: mlx5_core1                 92597          0
> irq346: mlx5_core1                 85108          0
> irq347: mlx5_core1                 98858          0
> irq348: mlx5_core1                 88103          0
> irq349: mlx5_core1                104906          0
> irq350: mlx5_core1                 84947          0
> irq351: mlx5_core1                 99767          0
> irq352: mlx5_core1               9482571         43
> irq353: mlx5_core1               1724267          8
> irq354: mlx5_core1                 96698          0
> irq355: mlx5_core1                473324          2
> irq356: mlx5_core1                 86760          0
> irq357: mlx5_core1              11590861         52
>
> I expected the number of IRQs to equal the number of CPUs. According to
> the Mellanox docs, I should be able to pin each interrupt to a specific
> core to load-balance. How can I do this when the number of IRQs is
> larger than the number of cores? Is there any way to lower the number
> of interrupts?

You can lower the number of interrupts by changing the coalescing
sysctls in the dev.mce.<N>.conf tree:

dev.mce.0.conf.tx_coalesce_pkts: 32
dev.mce.0.conf.tx_coalesce_usecs: 16
dev.mce.0.conf.rx_coalesce_pkts: 32
dev.mce.0.conf.rx_coalesce_usecs: 3

Try, for example, 1024 pkts and 125 us.
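As a rough sketch, the tuning could look like the following from a root
shell. The coalescing values (1024 pkts / 125 us) are the example numbers
above; the cpuset(1) invocation for pinning an interrupt is my suggestion
for the pinning question, where irq 333 is just the busy queue from the
vmstat -i output and CPU 5 an arbitrary core — adjust both to taste:

```shell
# Raise interrupt coalescing on port 0: fire one interrupt per up to
# 1024 packets or 125 microseconds, whichever comes first.
sysctl dev.mce.0.conf.rx_coalesce_pkts=1024
sysctl dev.mce.0.conf.rx_coalesce_usecs=125
sysctl dev.mce.0.conf.tx_coalesce_pkts=1024
sysctl dev.mce.0.conf.tx_coalesce_usecs=125

# Pin a queue interrupt to a specific core; repeat per irq as needed.
# (-x selects an irq as the object, -l gives the CPU list.)
cpuset -l 5 -x 333
```

Note these are runtime settings on FreeBSD-specific hardware; the exact
sysctl names can differ between driver versions, so check
`sysctl dev.mce.0.conf` on your system first.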
And also set the queue sizes bigger than 1024 pkts:

dev.mce.0.conf.rx_queue_size: 1024
dev.mce.0.conf.tx_queue_size: 1024

--HPS