Date:      Thu, 1 Oct 2020 10:24:33 +0200
From:      Michal Vančo <michal@microwave.sk>
To:        Hans Petter Selasky <hps@selasky.org>, freebsd-net@freebsd.org
Subject:   Re: mlx5 irq
Message-ID:  <c9f8bd7f-6d9d-bb6e-307c-a19c9730b564@microwave.sk>
In-Reply-To: <94978a05-94c6-cc55-229c-5a3c5352b29a@selasky.org>
References:  <0aa09fcc-dfcc-005e-8834-2a758ba6a03f@microwave.sk> <94978a05-94c6-cc55-229c-5a3c5352b29a@selasky.org>


On 01/10/2020 10:10, Hans Petter Selasky wrote:

> On 2020-10-01 09:39, Michal Vančo via freebsd-net wrote:
>> Hi
>
> Hi Michal,
>
Thank you for your quick reply.

>> I have a server with one Mellanox ConnectX-4 adapter and the following
>> CPU configuration (SMT disabled):
>>
>> # dmesg | grep SMP
>> FreeBSD/SMP: Multiprocessor System Detected: 16 CPUs
>> FreeBSD/SMP: 2 package(s) x 8 core(s) x 2 hardware threads
>> FreeBSD/SMP Online: 2 package(s) x 8 core(s)
>>
>> What I don't understand is the number of IRQs allocated for each
>> mlx5_core:
>>
>> # vmstat -i | grep mlx5_core
>> irq320: mlx5_core0                     1          0
>> irq321: mlx5_core0              18646775         84
>> irq322: mlx5_core0                    21          0
>> irq323: mlx5_core0                 97793          0
>> irq324: mlx5_core0                 84685          0
>> irq325: mlx5_core0                 89288          0
>> irq326: mlx5_core0                 93564          0
>> irq327: mlx5_core0                 86892          0
>> irq328: mlx5_core0                 99141          0
>> irq329: mlx5_core0                 86695          0
>> irq330: mlx5_core0                104023          0
>> irq331: mlx5_core0                 85238          0
>> irq332: mlx5_core0                 88387          0
>> irq333: mlx5_core0              93310221        420
>
> ^^^ It appears you have some application that is using a single TCP
> connection heavily, so the traffic doesn't get distributed across
> the queues.
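>
> If you want to pin a specific interrupt to a core, cpuset(1) can bind
> an interrupt by its IRQ number. A rough, untested sketch, using the
> hot IRQ from your vmstat output and an arbitrary CPU:
>
> # cpuset -l 4 -x 333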
>
This seems correct. This machine is an NFS server with only two clients.
>> irq334: mlx5_core0               1135906          5
>> irq335: mlx5_core0                 85394          0
>> irq336: mlx5_core0                 88361          0
>> irq337: mlx5_core0                 88826          0
>> irq338: mlx5_core0              17909515         81
>> irq339: mlx5_core1                     1          0
>> irq340: mlx5_core1              18646948         84
>> irq341: mlx5_core1                    25          0
>> irq342: mlx5_core1                208684          1
>> irq343: mlx5_core1                 91567          0
>> irq344: mlx5_core1                 88340          0
>> irq345: mlx5_core1                 92597          0
>> irq346: mlx5_core1                 85108          0
>> irq347: mlx5_core1                 98858          0
>> irq348: mlx5_core1                 88103          0
>> irq349: mlx5_core1                104906          0
>> irq350: mlx5_core1                 84947          0
>> irq351: mlx5_core1                 99767          0
>> irq352: mlx5_core1               9482571         43
>> irq353: mlx5_core1               1724267          8
>> irq354: mlx5_core1                 96698          0
>> irq355: mlx5_core1                473324          2
>> irq356: mlx5_core1                 86760          0
>> irq357: mlx5_core1              11590861         52
>>
>> I expected the number of IRQs to equal the number of CPUs. According
>> to the Mellanox docs, I should be able to pin each interrupt to a
>> specific core to load-balance. How can I do this when the number of
>> IRQs is larger than the number of cores? Is there any way to lower
>> the number of interrupts?
>>
>
> You can lower the number of interrupts by changing the coalescing
> sysctls in the dev.mce.<N>.conf tree.
>
> dev.mce.0.conf.tx_coalesce_pkts: 32
> dev.mce.0.conf.tx_coalesce_usecs: 16
> dev.mce.0.conf.rx_coalesce_pkts: 32
> dev.mce.0.conf.rx_coalesce_usecs: 3
>
> For example 1024 pkts and 125 us.
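>
> An untested sketch of applying those values from the command line:
>
> # sysctl dev.mce.0.conf.rx_coalesce_pkts=1024
> # sysctl dev.mce.0.conf.rx_coalesce_usecs=125
> # sysctl dev.mce.0.conf.tx_coalesce_pkts=1024
> # sysctl dev.mce.0.conf.tx_coalesce_usecs=125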
>
> And also set the queue size bigger than 1024 pkts:
>
> dev.mce.0.conf.rx_queue_size: 1024
> dev.mce.0.conf.tx_queue_size: 1024
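>
> For instance (4096 is just an illustrative value above the 1024
> default):
>
> # sysctl dev.mce.0.conf.rx_queue_size=4096
> # sysctl dev.mce.0.conf.tx_queue_size=4096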
>
But why is the actual number of IRQ lines larger than the number of CPU cores?

