Date: Sat, 6 Oct 2018 08:42:10 -0700
From: Mel Pilgrim <list_freebsd@bluerosetech.com>
To: Garrett Wollman <wollman@csail.mit.edu>, freebsd-fs@freebsd.org
Subject: Re: ZFS/NVMe layout puzzle
Message-ID: <775752d2-1e66-7db9-5a4f-7cd775e366a6@bluerosetech.com>
In-Reply-To: <23478.24397.495369.226706@khavrinen.csail.mit.edu>
On 2018-10-04 11:43, Garrett Wollman wrote:
> Say you're using an all-NVMe zpool with PCIe switches to multiplex
> drives (e.g., 12 4-lane NVMe drives on one side, 1 PCIe x8 slot on the
> other). Does it make more sense to spread each vdev across switches
> (and thus CPU sockets) or to have all of the drives in a vdev on the
> same switch? I have no intuition about this at all, and it may not
> even matter. (You can be sure I'll be doing some benchmarking.)
>
> I'm assuming the ZFS code doesn't have any sort of CPU affinity that
> would allow it to take account of the PCIe topology even if that
> information were made available to it.

In this scenario, the PCIe switch takes the role of an HBA in terms of fault vulnerability.
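
As a rough sketch of what that implies for layout (device names and the
switch assignment below are hypothetical, just to illustrate the idea):
treat each switch as a fault domain the way you would an HBA, and build
each vdev from drives behind different switches, so losing a switch
degrades every vdev instead of destroying one.

    # Hypothetical: nvd0-nvd5 sit behind switch A, nvd6-nvd11 behind switch B.
    # Each mirror pairs one drive from each switch, so a single switch
    # failure costs at most one member per vdev.
    zpool create tank \
        mirror nvd0 nvd6 \
        mirror nvd1 nvd7 \
        mirror nvd2 nvd8 \
        mirror nvd3 nvd9 \
        mirror nvd4 nvd10 \
        mirror nvd5 nvd11

Which NVMe controller sits behind which switch can be read off the PCI
tree (devinfo, pciconf -l) before deciding the pairing.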
