From: Matthias Oestreicher <matthias@smormegpa.no>
To: hartzell@alerce.com
Cc: freebsd-questions@freebsd.org
Subject: Re: Suggestions for working with unstable nvme dev names in AWS
Date: Wed, 15 May 2019 09:46:03 +0200
Message-ID: <5917c50c94750782cb3a929d44b04bcce142ece2.camel@smormegpa.no>
In-Reply-To: <23771.11429.855191.658934@alice.local>
On Tuesday, 14 May 2019 at 14:01 -0700, George Hartzell wrote:
> Matthias Oestreicher writes:
> > On Tuesday, 14 May 2019 at 12:24 -0700, George Hartzell wrote:
> > > Polytropon writes:
> > > > On Tue, 14 May 2019 08:59:01 -0700, George Hartzell wrote:
> > > > > Matthew Seaman writes:
> > > > > > [...] but if you
> > > > > > are using ZFS, then shuffling the disks around should not make any
> > > > > > difference.
> > > > > > [...]
> > > > > Yes, once I have them set up (ZFS or labeled), it doesn't matter what
> > > > > device names they end up having. For now I just do the setup by hand,
> > > > > poking around a bit. Same trick in the Linux world, you end up
> > > > > referring to them by their UUID or ....
> > > >
> > > > In addition to what Matthew suggested, you could use UFS-IDs
> > > > in case the disks are initialized with UFS. You can find more
> > > > information here (at the bottom of the page):
> > > > [...]
> > >
> > > Yes. As I mentioned in my response to Matthew, once I have some sort
> > > of filesystem/zpool on the device, it's straightforward (TMTOWTDI).
> > >
> > > The problem is being able to provision the system automatically
> > > without user intervention.
> > > [...]
> >
> > Hei,
> > I'm not familiar with Amazon's AWS, but if your only problem is shifting
> > device names for UFS filesystems, then on modern systems, GPT labels are
> > the way to go.
> > [...]
>
> Yes, yes, and yes. I do appreciate all of the answers but I
> apparently haven't made clear the point of my question. I think that
> you've all explained ways that I can log in and set things up manually
> so that things work as they should for the rest of time.
>
> You (Matthias) suggested that I could just:
>
> > ```
> > # gpart modify -l mylabel -i N /dev/nvm1
> > ```
>
> But how do I know which of the devices is the one that I'd like
> labeled 'mylabel' and which is the one that I'd like labeled 'blort'?
>
> Another way to explain my situation might be to ask how I can automate
> applying the labels.
>
> Imagine that in my automated creation of the instance, I asked for two
> additional devices, a big-slow one which I asked to be called
> `/dev/sdh` and a small-fast one that I asked to be called `/dev/sdz`.
>
> But when I boot, I find that I have two devices (in addition to the
> root device), `/dev/nvme1` and `/dev/nvme2`.
> There's no way to know
> which is the big-slow one that I wanted to call `/dev/sdh` and which
> is the small-fast `/dev/sdz`. In fact, if I reboot the machine,
> sometimes the big-slow one will be `/dev/nvme1` and sometimes it will
> be `/dev/nvme2`.
>
> Given that situation, how do you write an automated script that will
> label the big-slow one `backups` and the small-fast one `speedy`?
>
> In the Linux world, `ebsnvme-id` & `udev` rules create symlinks at
> boot time that link the names that I requested to whatever the device
> is currently named. That makes writing the script easy.
>
> We lack `ebsnvme-id` and our nvme driver doesn't seem to have any
> knowledge of AWS' tricksy trick. Or perhaps not and I've just missed
> out how we do it.
>
> Thanks (seriously) for all the answers,
>
> g.

I have to admit that I'm still a bit unsure whether I understand your problem.
You are worried that the big-slow and the small-fast drives change their
device names when the system boots...

The GPT labels I suggested will survive a reboot, so there's no need to run a
script each time the system boots to reapply those labels to the right drive.
The only thing you need to do once is determine which /dev/nvdN is the
big-slow one and which is the small-fast one. Then you apply your labels, e.g.:

# gpart modify -l big-slow -i 2 /dev/nvd1
# gpart modify -l small-fast -i 2 /dev/nvd0

(you could post the output of 'gpart show' so I can give a more precise
example)

Then you need to edit your /etc/fstab accordingly, e.g. (what I assume):

/dev/gpt/small-fast   /       ufs   rw   1   1
/dev/gpt/big-slow     /data   ufs   rw   1   1

Then there's no need to create symlinks to the right device either.
FreeBSD actually has devfs(8) to do that, but you don't need it here.

If you for some reason need to write a script that automates such things,
below are some examples for different use cases.
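For instance, to tell the two disks apart by size without logging in, something
like this could work. This is only a rough sketch: the device names nvd1/nvd2
and the partition index (-i 2) are assumptions, so adjust them to your own
'gpart show' output. diskinfo(8) prints the media size in bytes as its third
field, which is enough to decide which disk is which.

```shell
#!/bin/sh
# Rough sketch: tell two NVMe disks apart by media size and label them
# once. Device names (nvd1/nvd2) and partition index (-i 2) are
# assumptions -- check your own 'gpart show' output first.

# echo the name of whichever disk has the larger byte count
pick_bigger() {
    # $1=name1 $2=bytes1 $3=name2 $4=bytes2
    if [ "$2" -gt "$4" ]; then echo "$1"; else echo "$3"; fi
}

if [ -c /dev/nvd1 ] && [ -c /dev/nvd2 ]; then
    s1=$(diskinfo nvd1 | awk '{print $3}')   # mediasize in bytes
    s2=$(diskinfo nvd2 | awk '{print $3}')
    big=$(pick_bigger nvd1 "$s1" nvd2 "$s2")
    if [ "$big" = "nvd1" ]; then small=nvd2; else small=nvd1; fi
    gpart modify -l big-slow   -i 2 "$big"
    gpart modify -l small-fast -i 2 "$small"
fi
```

Since the labels stick across reboots, /etc/fstab can then refer to
/dev/gpt/big-slow no matter which nvdN the disk comes up as.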
------------------------------------------------------------------------
The best way for me to get the device name of a specific drive is to grep
for its model string in 'camcontrol devlist' (put your drive's model in
place of the placeholder):

# camcontrol devlist | grep "your drive model here" | cut -d "(" -f2 | cut -d "," -f1

------------------------------------------------------------------------
Or to do something with all attached nvdN drives:

drives=$( sysctl -n kern.disks | tr ' ' '\n' | grep -e '^nvd[[:digit:]]' | sort )

...and then process $drives in a loop.

------------------------------------------------------------------------
Last but not least, an easy way to process associations between drive
names and existing labels:

associations=$( geom label status | awk '/ nvd[[:digit:]]/{ print $3, substr($1,5,20) }' | tr ' ' ':' )

This gives you a list of all drive:label pairs.
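And a sketch of what the loop over such a list could look like. The values
in $associations below are made-up stand-ins for real 'geom label status'
output, just to show the drive:label splitting:

```shell
#!/bin/sh
# Rough sketch: walk a drive:label list like the one produced by the
# 'associations' one-liner above. The sample values are made up.
associations="nvd0p2:small-fast nvd1p2:big-slow"

for pair in $associations; do
    drive=${pair%%:*}    # part before the ':'
    label=${pair##*:}    # part after the ':'
    echo "partition $drive carries label $label"
done
```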