Date: Tue, 14 May 2019 14:01:25 -0700
From: George Hartzell <hartzell@alerce.com>
To: Matthias Oestreicher <matthias@smormegpa.no>
Cc: hartzell@alerce.com, Polytropon <freebsd@edvax.de>, freebsd-questions@freebsd.org
Subject: Re: Suggestions for working with unstable nvme dev names in AWS
Message-ID: <23771.11429.855191.658934@alice.local>
In-Reply-To: <eb1d290e48b4ba21ab350044b25592525e61457c.camel@smormegpa.no>
References: <23770.10599.687213.86492@alice.local> <08660a2a-489f-8172-22ee-47aeba315986@FreeBSD.org> <23770.58821.826610.399467@alice.local> <20190514210203.3d951fb8.freebsd@edvax.de> <23771.5612.105696.170743@alice.local> <eb1d290e48b4ba21ab350044b25592525e61457c.camel@smormegpa.no>
Matthias Oestreicher writes:
 > On Tuesday, 14 May 2019 at 12:24 -0700, George Hartzell wrote:
 > > Polytropon writes:
 > > > On Tue, 14 May 2019 08:59:01 -0700, George Hartzell wrote:
 > > > > Matthew Seaman writes:
 > > > > > [...] but if you
 > > > > > are using ZFS, then shuffling the disks around should not make any
 > > > > > difference.
 > > > > > [...]
 > > > > Yes, once I have them set up (ZFS or labeled), it doesn't matter what
 > > > > device names they end up having.  For now I just do the setup by hand,
 > > > > poking around a bit.  Same trick in the Linux world, you end up
 > > > > referring to them by their UUID or ....
 > > >
 > > > In addition to what Matthew suggested, you could use UFS-IDs
 > > > in case the disks are initialized with UFS.  You can find more
 > > > information here (at the bottom of the page):
 > > > [...]
 > >
 > > Yes.  As I mentioned in my response to Matthew, once I have some sort
 > > of filesystem/zpool on the device, it's straightforward (TMTOWTDI).
 > >
 > > The problem is being able to provision the system automatically
 > > without user intervention.
 > [...]
 > Hi,
 > I'm not familiar with Amazon's AWS, but if your only problem is shifting
 > device names for UFS filesystems, then on modern systems, GPT labels are
 > the way to go.
 > [...]

Yes, yes, and yes.

I do appreciate all of the answers, but I apparently haven't made the
point of my question clear.  I think that you've all explained ways
that I can log in and set things up manually so that things work as
they should from then on.

You (Matthias) suggested that I could just:

 > ```
 > # gpart modify -l mylabel -i N /dev/nvme1
 > ```

But how do I know which of the devices is the one that I'd like
labeled 'mylabel' and which is the one that I'd like labeled 'blort'?

Another way to explain my situation might be to ask how I can automate
applying the labels.

Imagine that in my automated creation of the instance, I asked for two
additional devices: a big-slow one that I asked to be called
`/dev/sdh` and a small-fast one that I asked to be called `/dev/sdz`.
But when I boot, I find that I have two devices (in addition to the
root device), `/dev/nvme1` and `/dev/nvme2`.  There's no way to know
which is the big-slow one that I wanted to call `/dev/sdh` and which
is the small-fast `/dev/sdz`.  In fact, if I reboot the machine,
sometimes the big-slow one will be `/dev/nvme1` and sometimes it will
be `/dev/nvme2`.

Given that situation, how do you write an automated script that labels
the big-slow one `backups` and the small-fast one `speedy`?

In the Linux world, `ebsnvme-id` and `udev` rules create symlinks at
boot time that link the names I requested to whatever the devices are
currently named.  That makes writing the script easy.  We lack
`ebsnvme-id`, and our nvme driver doesn't seem to have any knowledge
of AWS's tricksy trick.  Or perhaps it does and I've just missed how
we do it.

Thanks (seriously) for all the answers,
g.
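P.S.  For the archives, here's the direction I've been poking at.
This is a minimal sketch, not a tested tool: it assumes (and I haven't
verified this on FreeBSD) that the NVMe serial number on EBS volumes
is the EBS volume ID with its dash dropped, which is what
`ebsnvme-id` keys off on Linux, and that the instance has the AWS CLI
installed with credentials and a region configured so that it can call
`ec2:DescribeVolumes`:

```
#!/bin/sh
# Sketch: map each NVMe controller to the device name that was
# requested at attach time, by looking the volume up through the EC2
# API.  Assumes the serial number is the EBS volume ID without its
# dash, and that the AWS CLI has credentials/region available.
for ctrlr in /dev/nvme[0-9]*; do
    c=${ctrlr##*/}
    case $c in *ns*) continue ;; esac   # skip namespace nodes
    serial=$(nvmecontrol identify "$c" | awk '/Serial Number/ {print $3}')
    case $serial in
    vol*) volid="vol-${serial#vol}" ;;  # re-insert the dash for the API
    *)    continue ;;                   # not an EBS volume, skip it
    esac
    requested=$(aws ec2 describe-volumes --volume-ids "$volid" \
        --query 'Volumes[0].Attachments[0].Device' --output text)
    echo "$c is $volid, requested as $requested"
done
```

With the requested name in hand, the script could then `gpart modify
-l` the matching `nvdX` disk (the `nvmeX` controllers and `nvdX` disks
should pair up), which would give me stable `/dev/gpt/...` names from
then on.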