Date:      Thu, 4 Oct 2018 14:43:25 -0400
From:      Garrett Wollman <wollman@csail.mit.edu>
To:        freebsd-fs@freebsd.org
Subject:   ZFS/NVMe layout puzzle
Message-ID:  <23478.24397.495369.226706@khavrinen.csail.mit.edu>

Say you're using an all-NVMe zpool with PCIe switches to multiplex
drives (e.g., 12 4-lane NVMe drives on one side, 1 PCIe x8 slot on the
other).  Does it make more sense to spread each vdev across switches
(and thus CPU sockets) or to have all of the drives in a vdev on the
same switch?  I have no intuition about this at all, and it may not
even matter.  (You can be sure I'll be doing some benchmarking.)
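To make the question concrete, here is a sketch of the two candidate layouts, assuming (hypothetically) twelve drives named nvd0-nvd11, with nvd0-nvd5 behind one switch (socket 0) and nvd6-nvd11 behind the other, arranged as two 6-disk raidz2 vdevs:

```shell
# Hypothetical device names; the switch assignment below is an
# assumption for illustration, not a statement about any real box.

# Layout 1: each vdev confined to a single switch/socket.
zpool create tank \
    raidz2 nvd0 nvd1 nvd2 nvd3  nvd4  nvd5 \
    raidz2 nvd6 nvd7 nvd8 nvd9 nvd10 nvd11

# Layout 2: each vdev spread evenly across both switches.
zpool create tank \
    raidz2 nvd0 nvd1 nvd2 nvd6  nvd7  nvd8 \
    raidz2 nvd3 nvd4 nvd5 nvd9 nvd10 nvd11
```

The trade-off being asked about: layout 1 keeps each vdev's I/O local to one socket's x8 uplink, while layout 2 splits every vdev's bandwidth (and any single-switch failure) across both uplinks.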

I'm assuming the ZFS code doesn't have any sort of CPU affinity logic
that would let it take the PCIe topology into account, even if that
information were made available to it.
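For whoever wants to reproduce the benchmarking, the PCIe topology itself is visible from userland on FreeBSD; a rough sketch of how one might map drives to switches (output is obviously system-specific):

```shell
# Walk the device tree to see which PCIe bridge (pcib) each NVMe
# controller hangs off of.
devinfo -v | grep -E 'pcib|nvme'

# List the NVMe controllers and namespaces.
nvmecontrol devlist

# Show PCI selectors (pciN:bus:slot:func) for the NVMe controllers;
# the bus numbers group drives by switch.
pciconf -lv | grep nvme
```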

-GAWollman
