Date:      Fri, 9 Sep 2016 22:51:25 +0200
From:      Christoph Pilka <c.pilka@asconix.com>
To:        freebsd-questions@freebsd.org
Subject:   40 cores, 48 NVMe disks, feel free to take over 
Message-ID:  <E264C60F-7317-4D99-882C-8F76191238BE@asconix.com>

Hi,

we've just been granted a short-term loan of a server from Supermicro with 40 physical cores (plus HTT) and 48 NVMe drives. After a bit of mucking about, we managed to get 11-RC running. A couple of things are preventing the system from being terribly useful:

- We have to use hw.nvme.force_intx=1 for the server to boot.
If we don't, it panics around the 9th NVMe drive with "panic: couldn't find an APIC vector for IRQ...". Increasing hw.nvme.min_cpus_per_ioq brings it further, but it still panics later in the NVMe enumeration/init. hw.nvme.per_cpu_io_queues=0 causes it to panic later (I suspect during ixl init - the box has 4x10gb ethernet ports).
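For anyone picking this up, the tunables mentioned above would go in /boot/loader.conf roughly like this (a sketch of the workarounds, not a recommendation - the min_cpus_per_ioq value shown is only illustrative, we never settled on one that avoided the panic):

```shell
# /boot/loader.conf - NVMe interrupt workarounds discussed above (sketch)

# Force legacy INTx interrupts for NVMe; the only setting that lets the box boot
hw.nvme.force_intx="1"

# Alternatives tried, each of which only moved the panic later in init:
# raise CPUs per I/O queue to reduce the MSI-X vectors NVMe requests
#hw.nvme.min_cpus_per_ioq="4"     # "4" is illustrative only
# or disable per-CPU I/O queues entirely
#hw.nvme.per_cpu_io_queues="0"
```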

- zfskern seems to be the limiting factor when doing ~40 parallel "dd if=/dev/zero of=<file> bs=1m" runs on a zpool stripe of all 48 drives. Each drive shows ~30% utilization (gstat); I can do ~14 GB/sec write and ~16 GB/sec read.
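The parallel write test can be sketched roughly as below; the directory, job count, and sizes are placeholder defaults scaled down for illustration - on the box it was ~40 jobs with bs=1m against the 48-drive stripe:

```shell
# Rough sketch of the parallel dd write test; TESTDIR/NJOBS/BS/COUNT are
# illustrative defaults, not the values used on the server.
TESTDIR=${TESTDIR:-/tmp/ddtest}
NJOBS=${NJOBS:-4}
BS=${BS:-512}
COUNT=${COUNT:-2048}

mkdir -p "$TESTDIR"
i=1
while [ "$i" -le "$NJOBS" ]; do
    # each dd writes zeros to its own file, all running in parallel
    dd if=/dev/zero of="$TESTDIR/file$i" bs="$BS" count="$COUNT" 2>/dev/null &
    i=$((i + 1))
done
wait    # let all writers finish before reading gstat/zpool iostat
```

While that runs, per-drive utilization can be watched with gstat(8) or "zpool iostat 1".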

- Direct writing to the NVMe devices (dd from /dev/zero) gives about 550 MB/sec and ~91% utilization per device.

Obviously, the first item is the most troublesome. The rest is based on entirely synthetic testing and may have little or no actual impact on the server's usability or fitness for our purposes.

There is nothing but sshd running on the server, and if anyone wants to play around you'll have IPMI access (remote kvm, virtual media, power) and root.


Any takers?

Wbr
Christoph Pilka
Modirum MDpay

Sent from my iPhone



