Date:      Tue, 12 Apr 2022 17:37:48 +0200
From:      Roger Pau Monné <roger.pau@citrix.com>
To:        Ze Dupsys <zedupsys@gmail.com>
Cc:        <freebsd-xen@freebsd.org>, <buhrow@nfbcal.org>
Subject:   Re: ZFS + FreeBSD XEN dom0 panic
Message-ID:  <YlWczLNsrTpNjk5P@Air-de-Roger>
In-Reply-To: <YlRLN7aAxYzgb7kr@Air-de-Roger>
References:  <4da2302b-0745-ea1d-c868-5a8a5fc66b18@gmail.com> <Yj8lZWqeHbD+kfOQ@Air-de-Roger> <48b74c39-abb3-0a3e-91a8-b5ab1e1223ce@gmail.com> <YkAqxjiMM1M1QdgR@Air-de-Roger> <22643831-70d3-5a3e-f973-fb80957e80dc@gmail.com> <Ykxev3fangqRGQcn@Air-de-Roger> <209c9b7c-4b4b-7fe3-6e73-d2a0dc651c19@gmail.com> <YlBOgFYNokZ0rTgD@Air-de-Roger> <1286cb59-867e-e7d0-2bd3-45c33feae66a@gmail.com> <YlRLN7aAxYzgb7kr@Air-de-Roger>

On Mon, Apr 11, 2022 at 05:37:27PM +0200, Roger Pau Monné wrote:
> On Mon, Apr 11, 2022 at 11:47:50AM +0300, Ze Dupsys wrote:
> > On 2022.04.08. 18:02, Roger Pau Monné wrote:
> > > On Fri, Apr 08, 2022 at 10:45:12AM +0300, Ze Dupsys wrote:
> > > > On 2022.04.05. 18:22, Roger Pau Monné wrote:
> > > > > .. Thanks, sorry for the late reply, somehow the message slipped.
> > > > > 
> > > > > I've been able to get the file:line for those, and the trace is kind
> > > > > of weird, I'm not sure I know what's going on TBH. It seems to me the
> > > > > backend instance got freed while being in the process of connecting.
> > > > > 
> > > > > I've made some changes, that might mitigate this, but having not a
> > > > > clear understanding of what's going on makes this harder.
> > > > > 
> > > > > I've pushed the changes to:
> > > > > 
> > > > > http://xenbits.xen.org/gitweb/?p=people/royger/freebsd.git;a=shortlog;h=refs/heads/for-leak
> > > > > 
> > > > > (This is on top of main branch).
> > > > > 
> > > > > I'm also attaching the two patches on this email.
> > > > > 
> > > > > Let me know if those make a difference to stabilize the system.
> > > > 
> > > > Hi,
> > > > 
> > > > Yes, it stabilizes the system, but I think there is still a memory leak
> > > > somewhere.
> > > > 
> > > > The system ran the tests for approximately 41 hours and did not panic, but then
> > > > started to OOM kill everything.
> > > > 
> > > > I did not know how to git clone the given commit, so I just applied the patches
> > > > to the 13.0-RELEASE sources.
> > > > 
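(For reference, a rough sketch of how the for-leak branch could be fetched
instead of applying the patches by hand; the xenbits clone URL below is only a
guess derived from the gitweb link above and may need adjusting:)

  # Start from a plain FreeBSD src checkout, then pull in the for-leak branch.
  git clone https://git.freebsd.org/src.git freebsd-src
  cd freebsd-src
  # Remote URL guessed from the gitweb path; adjust if it does not resolve.
  git remote add royger git://xenbits.xen.org/people/royger/freebsd.git
  git fetch royger for-leak
  git checkout -b for-leak royger/for-leak
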
> > > > The serial logs show nothing unusual, just that at some point the OOM killing starts.
> > > 
> > > Well, I think that's good^W better than before. Thanks again for all
> > > the testing.
> > > 
> > > It might be helpful now to start dumping `vmstat -m` periodically
> > > while running the stress tests. As there are (hopefully) no more
> > > panics now, vmstat might show us which subsystem is hogging the
> > > memory. It's possible it's blkback (again).
> > > 
> > > Thanks, Roger.
> > > 
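(A minimal sketch of such periodic dumping of `vmstat -m`; the log path and the
one-minute interval are arbitrary choices:)

  # Append a timestamped `vmstat -m` snapshot once a minute so per-malloc-type
  # memory usage can be lined up with the stress-test timeline afterwards.
  while true; do
          date '+=== %Y-%m-%d %H:%M:%S ===' >> /var/log/vmstat-m.log
          vmstat -m >> /var/log/vmstat-m.log
          sleep 60
  done
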
> > 
> > Yes, it certainly is better. I applied the patch on my pre-production server and
> > have not had any panic since then; still testing, though.
> > 
> > On my stressed lab server it's a bit of a different story. On occasion I see a
> > panic with this trace on serial (I cannot reliably reproduce it; it sometimes happens
> > upon starting dom ids 1 and 2, sometimes mid-stress-test with dom id > 95).
> > panic: pmap_growkernel: no memory to grow kernel
> > cpuid = 2
> > time = 1649485133
> > KDB: stack backtrace:
> > #0 0xffffffff80c57385 at kdb_backtrace+0x65
> > #1 0xffffffff80c09d61 at vpanic+0x181
> > #2 0xffffffff80c09bd3 at panic+0x43
> > #3 0xffffffff81073eed at pmap_growkernel+0x27d
> > #4 0xffffffff80f2d918 at vm_map_insert+0x248
> > #5 0xffffffff80f30079 at vm_map_find+0x549
> > #6 0xffffffff80f2bda6 at kmem_init+0x226
> > #7 0xffffffff80c731a1 at vmem_xalloc+0xcb1
> > #8 0xffffffff80c72a9b at vmem_xalloc+0x5ab
> > #9 0xffffffff80c724a6 at vmem_alloc+0x46
> > #10 0xffffffff80f2ac6b at kva_alloc+0x2b
> > #11 0xffffffff8107f0eb at pmap_mapdev_attr+0x27b
> > #12 0xffffffff810588ca at nexus_add_irq+0x65a
> > #13 0xffffffff81058710 at nexus_add_irq+0x4a0
> > #14 0xffffffff810585b9 at nexus_add_irq+0x349
> > #15 0xffffffff80c495c1 at bus_alloc_resource+0xa1
> > #16 0xffffffff8105e940 at xenmem_free+0x1a0
> > #17 0xffffffff80a7e0dd at xbd_instance_create+0x943d
> > 
> > | sed -Ee 's/^#[0-9]* //' -e 's/ .*//' | xargs addr2line -e /usr/lib/debug/boot/kernel/kernel.debug
> > /usr/src/sys/kern/subr_kdb.c:443
> > /usr/src/sys/kern/kern_shutdown.c:0
> > /usr/src/sys/kern/kern_shutdown.c:843
> > /usr/src/sys/amd64/amd64/pmap.c:0
> > /usr/src/sys/vm/vm_map.c:0
> > /usr/src/sys/vm/vm_map.c:0
> > /usr/src/sys/vm/vm_kern.c:712
> > /usr/src/sys/kern/subr_vmem.c:928
> > /usr/src/sys/kern/subr_vmem.c:0
> > /usr/src/sys/kern/subr_vmem.c:1350
> > /usr/src/sys/vm/vm_kern.c:150
> > /usr/src/sys/amd64/amd64/pmap.c:0
> > /usr/src/sys/x86/x86/nexus.c:0
> > /usr/src/sys/x86/x86/nexus.c:449
> > /usr/src/sys/x86/x86/nexus.c:412
> > /usr/src/sys/kern/subr_bus.c:4620
> > /usr/src/sys/x86/xen/xenpv.c:123
> > /usr/src/sys/dev/xen/blkback/blkback.c:3010
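(For reference, the resolution pipeline above written out in full, assuming the
raw serial-console frames were first saved to a file; the name backtrace.txt is
made up:)

  # Keep only the "#N 0x... at symbol+offset" frames, strip the frame index and
  # everything after the address, then resolve each address in the debug kernel.
  grep '^#' backtrace.txt \
      | sed -Ee 's/^#[0-9]* //' -e 's/ .*//' \
      | xargs addr2line -e /usr/lib/debug/boot/kernel/kernel.debug
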
> > 
> > With a gdb backtrace I think I can get a better trace, though:
> > #0  __curthread at /usr/src/sys/amd64/include/pcpu_aux.h:55
> > #1  doadump at /usr/src/sys/kern/kern_shutdown.c:399
> > #2  kern_reboot at /usr/src/sys/kern/kern_shutdown.c:486
> > #3  vpanic at /usr/src/sys/kern/kern_shutdown.c:919
> > #4  panic at /usr/src/sys/kern/kern_shutdown.c:843
> > #5  pmap_growkernel at /usr/src/sys/amd64/amd64/pmap.c:208
> > #6  vm_map_insert at /usr/src/sys/vm/vm_map.c:1752
> > #7  vm_map_find at /usr/src/sys/vm/vm_map.c:2259
> > #8  kva_import at /usr/src/sys/vm/vm_kern.c:712
> > #9  vmem_import at /usr/src/sys/kern/subr_vmem.c:928
> > #10 vmem_try_fetch at /usr/src/sys/kern/subr_vmem.c:1049
> > #11 vmem_xalloc at /usr/src/sys/kern/subr_vmem.c:1449
> > #12 vmem_alloc at /usr/src/sys/kern/subr_vmem.c:1350
> > #13 kva_alloc at /usr/src/sys/vm/vm_kern.c:150
> > #14 pmap_mapdev_internal at /usr/src/sys/amd64/amd64/pmap.c:8974
> > #15 pmap_mapdev_attr at /usr/src/sys/amd64/amd64/pmap.c:8990
> > #16 nexus_map_resource at /usr/src/sys/x86/x86/nexus.c:523
> > #17 nexus_activate_resource at /usr/src/sys/x86/x86/nexus.c:448
> > #18 nexus_alloc_resource at /usr/src/sys/x86/x86/nexus.c:412
> > #19 BUS_ALLOC_RESOURCE at ./bus_if.h:321
> > #20 bus_alloc_resource at /usr/src/sys/kern/subr_bus.c:4617
> > #21 xenpv_alloc_physmem at /usr/src/sys/x86/xen/xenpv.c:121
> > #22 xbb_alloc_communication_mem at /usr/src/sys/dev/xen/blkback/blkback.c:3010
> > #23 xbb_connect at /usr/src/sys/dev/xen/blkback/blkback.c:3336
> > #24 xenbusb_back_otherend_changed at /usr/src/sys/xen/xenbus/xenbusb_back.c:228
> > #25 xenwatch_thread at /usr/src/sys/dev/xen/xenstore/xenstore.c:1003
> > #26 in fork_exit at /usr/src/sys/kern/kern_fork.c:1069
> > #27 <signal handler called>
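(A minimal sketch of obtaining such a backtrace from the saved crash dump with
kgdb; the vmcore path is assumed:)

  # Open the crash dump together with the debug kernel symbols...
  kgdb /usr/lib/debug/boot/kernel/kernel.debug /var/crash/vmcore.last
  # ...then print the stack trace at the (kgdb) prompt:
  #   (kgdb) bt
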
> > 
> > 
> > There is some sort of mismatch in the info, because the panic message printed
> > "panic: pmap_growkernel: no memory to grow kernel", but the gdb backtrace in
> > #5  0xffffffff81073eed in pmap_growkernel at /usr/src/sys/amd64/amd64/pmap.c:208
> > leads to these lines:
> > switch (pmap->pm_type) {
> > ..
> > panic("pmap_valid_bit: invalid pm_type %d", pmap->pm_type)
> > 
> > So either the trace is off the mark or the message in the serial logs is. If this
> > were only memory-leak related, it should not happen when dom id 1 is started, I
> > suppose.
> 
> That's weird; I would rather trust the printed panic message than
> the symbol resolution.  It seems to be a kind of memory exhaustion,
> as the kernel is failing to allocate a page for use in the kernel page
> table.
> 
> I will try to see what can be done here.

I have a patch to disable the bounce buffering done in blkback
(attached).

While I think it's not directly related to the panic you are hitting,
we should have disabled that a long time ago.  It should greatly reduce
the memory consumption of blkback, so it might have the side effect of
helping with your pmap_growkernel issue.

On my test box the memory usage of a single blkback instance dropped from
~100M to ~300K.

It should be applied on top of the other two patches.
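(A rough sketch of the application order against a 13.0-RELEASE source tree;
the file names of the two earlier patches are not shown in this thread, and the
paths below are placeholders:)

  cd /usr/src
  # Apply the two patches from the for-leak branch first, then this one on top.
  patch -p1 < /path/to/first-for-leak-patch
  patch -p1 < /path/to/second-for-leak-patch
  patch -p1 < /path/to/0001-xen-blkback-remove-bounce-buffering-mode.patch
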

Regards, Roger.

[Attachment: 0001-xen-blkback-remove-bounce-buffering-mode.patch]

From 449ef76695cf5ec5cc3514e6bd653d0b1dff3dde Mon Sep 17 00:00:00 2001
From: Roger Pau Monné <royger@FreeBSD.org>
Date: Tue, 12 Apr 2022 16:17:09 +0200
Subject: [PATCH] xen/blkback: remove bounce buffering mode

Remove bounce buffering code for blkback and only attach if Xen
creates IOMMU entries for grant mapped pages.

Such bounce buffering consumed a non-trivial amount of memory and CPU
resources to do the memory copy, even though Xen has been creating
IOMMU entries for grant maps for a long time.

Refuse to attach blkback if Xen doesn't advertise that IOMMU entries
are created for grant maps.

Sponsored by: Citrix Systems R&D
---
 sys/dev/xen/blkback/blkback.c | 181 ++--------------------------------
 1 file changed, 8 insertions(+), 173 deletions(-)

diff --git a/sys/dev/xen/blkback/blkback.c b/sys/dev/xen/blkback/blkback.c
index 15e4bbe78fc0..97939d32ffce 100644
--- a/sys/dev/xen/blkback/blkback.c
+++ b/sys/dev/xen/blkback/blkback.c
@@ -80,6 +80,7 @@ __FBSDID("$FreeBSD$");
 #include <xen/gnttab.h>
 #include <xen/xen_intr.h>
 
+#include <contrib/xen/arch-x86/cpuid.h>
 #include <contrib/xen/event_channel.h>
 #include <contrib/xen/grant_table.h>
 
@@ -101,27 +102,6 @@ __FBSDID("$FreeBSD$");
 #define	XBB_MAX_REQUESTS 					\
 	__CONST_RING_SIZE(blkif, PAGE_SIZE * XBB_MAX_RING_PAGES)
 
-/**
- * \brief Define to force all I/O to be performed on memory owned by the
- *        backend device, with a copy-in/out to the remote domain's memory.
- *
- * \note  This option is currently required when this driver's domain is
- *        operating in HVM mode on a system using an IOMMU.
- *
- * This driver uses Xen's grant table API to gain access to the memory of
- * the remote domains it serves.  When our domain is operating in PV mode,
- * the grant table mechanism directly updates our domain's page table entries
- * to point to the physical pages of the remote domain.  This scheme guarantees
- * that blkback and the backing devices it uses can safely perform DMA
- * operations to satisfy requests.  In HVM mode, Xen may use a HW IOMMU to
- * insure that our domain cannot DMA to pages owned by another domain.  As
- * of Xen 4.0, IOMMU mappings for HVM guests are not updated via the grant
- * table API.  For this reason, in HVM mode, we must bounce all requests into
- * memory that is mapped into our domain at domain startup and thus has
- * valid IOMMU mappings.
- */
-#define XBB_USE_BOUNCE_BUFFERS
-
 /**
  * \brief Define to enable rudimentary request logging to the console.
  */
@@ -257,14 +237,6 @@ struct xbb_xen_reqlist {
 	 */
 	uint64_t	 	 gnt_base;
 
-#ifdef XBB_USE_BOUNCE_BUFFERS
-	/**
-	 * Pre-allocated domain local memory used to proxy remote
-	 * domain memory during I/O operations.
-	 */
-	uint8_t			*bounce;
-#endif
-
 	/**
 	 * Array of grant handles (one per page) used to map this request.
 	 */
@@ -500,30 +472,6 @@ struct xbb_file_data {
 	 * so we only need one of these.
 	 */
 	struct iovec	xiovecs[XBB_MAX_SEGMENTS_PER_REQLIST];
-#ifdef XBB_USE_BOUNCE_BUFFERS
-
-	/**
-	 * \brief Array of io vectors used to handle bouncing of file reads.
-	 *
-	 * Vnode operations are free to modify uio data during their
-	 * exectuion.  In the case of a read with bounce buffering active,
-	 * we need some of the data from the original uio in order to
-	 * bounce-out the read data.  This array serves as the temporary
-	 * storage for this saved data.
-	 */
-	struct iovec	saved_xiovecs[XBB_MAX_SEGMENTS_PER_REQLIST];
-
-	/**
-	 * \brief Array of memoized bounce buffer kva offsets used
-	 *        in the file based backend.
-	 *
-	 * Due to the way that the mapping of the memory backing an
-	 * I/O transaction is handled by Xen, a second pass through
-	 * the request sg elements is unavoidable. We memoize the computed
-	 * bounce address here to reduce the cost of the second walk.
-	 */
-	void		*xiovecs_vaddr[XBB_MAX_SEGMENTS_PER_REQLIST];
-#endif /* XBB_USE_BOUNCE_BUFFERS */
 };
 
 /**
@@ -891,25 +839,6 @@ xbb_reqlist_vaddr(struct xbb_xen_reqlist *reqlist, int pagenr, int sector)
 	return (reqlist->kva + (PAGE_SIZE * pagenr) + (sector << 9));
 }
 
-#ifdef XBB_USE_BOUNCE_BUFFERS
-/**
- * Given a page index and 512b sector offset within that page,
- * calculate an offset into a request's local bounce memory region.
- *
- * \param reqlist The request structure whose bounce region will be accessed.
- * \param pagenr  The page index used to compute the bounce offset.
- * \param sector  The 512b sector index used to compute the page relative
- *                bounce offset.
- *
- * \return  The computed global bounce buffer address.
- */
-static inline uint8_t *
-xbb_reqlist_bounce_addr(struct xbb_xen_reqlist *reqlist, int pagenr, int sector)
-{
-	return (reqlist->bounce + (PAGE_SIZE * pagenr) + (sector << 9));
-}
-#endif
-
 /**
  * Given a page number and 512b sector offset within that page,
  * calculate an offset into the request's memory region that the
@@ -929,11 +858,7 @@ xbb_reqlist_bounce_addr(struct xbb_xen_reqlist *reqlist, int pagenr, int sector)
 static inline uint8_t *
 xbb_reqlist_ioaddr(struct xbb_xen_reqlist *reqlist, int pagenr, int sector)
 {
-#ifdef XBB_USE_BOUNCE_BUFFERS
-	return (xbb_reqlist_bounce_addr(reqlist, pagenr, sector));
-#else
 	return (xbb_reqlist_vaddr(reqlist, pagenr, sector));
-#endif
 }
 
 /**
@@ -1508,17 +1433,6 @@ xbb_bio_done(struct bio *bio)
 		}
 	}
 
-#ifdef XBB_USE_BOUNCE_BUFFERS
-	if (bio->bio_cmd == BIO_READ) {
-		vm_offset_t kva_offset;
-
-		kva_offset = (vm_offset_t)bio->bio_data
-			   - (vm_offset_t)reqlist->bounce;
-		memcpy((uint8_t *)reqlist->kva + kva_offset,
-		       bio->bio_data, bio->bio_bcount);
-	}
-#endif /* XBB_USE_BOUNCE_BUFFERS */
-
 	/*
 	 * Decrement the pending count for the request list.  When we're
 	 * done with the requests, send status back for all of them.
@@ -2180,17 +2094,6 @@ xbb_dispatch_dev(struct xbb_softc *xbb, struct xbb_xen_reqlist *reqlist,
 
 	for (bio_idx = 0; bio_idx < nbio; bio_idx++)
 	{
-#ifdef XBB_USE_BOUNCE_BUFFERS
-		vm_offset_t kva_offset;
-
-		kva_offset = (vm_offset_t)bios[bio_idx]->bio_data
-			   - (vm_offset_t)reqlist->bounce;
-		if (operation == BIO_WRITE) {
-			memcpy(bios[bio_idx]->bio_data,
-			       (uint8_t *)reqlist->kva + kva_offset,
-			       bios[bio_idx]->bio_bcount);
-		}
-#endif
 		if (operation == BIO_READ) {
 			SDT_PROBE3(xbb, kernel, xbb_dispatch_dev, read,
 				   device_get_unit(xbb->dev),
@@ -2241,10 +2144,6 @@ xbb_dispatch_file(struct xbb_softc *xbb, struct xbb_xen_reqlist *reqlist,
 	struct uio            xuio;
 	struct xbb_sg        *xbb_sg;
 	struct iovec         *xiovec;
-#ifdef XBB_USE_BOUNCE_BUFFERS
-	void                **p_vaddr;
-	int                   saved_uio_iovcnt;
-#endif /* XBB_USE_BOUNCE_BUFFERS */
 	int                   error;
 
 	file_data = &xbb->backend.file;
@@ -2300,18 +2199,6 @@ xbb_dispatch_file(struct xbb_softc *xbb, struct xbb_xen_reqlist *reqlist,
 			xiovec = &file_data->xiovecs[xuio.uio_iovcnt];
 			xiovec->iov_base = xbb_reqlist_ioaddr(reqlist,
 			    seg_idx, xbb_sg->first_sect);
-#ifdef XBB_USE_BOUNCE_BUFFERS
-			/*
-			 * Store the address of the incoming
-			 * buffer at this particular offset
-			 * as well, so we can do the copy
-			 * later without having to do more
-			 * work to recalculate this address.
-		 	 */
-			p_vaddr = &file_data->xiovecs_vaddr[xuio.uio_iovcnt];
-			*p_vaddr = xbb_reqlist_vaddr(reqlist, seg_idx,
-			    xbb_sg->first_sect);
-#endif /* XBB_USE_BOUNCE_BUFFERS */
 			xiovec->iov_len = 0;
 			xuio.uio_iovcnt++;
 		}
@@ -2331,28 +2218,6 @@ xbb_dispatch_file(struct xbb_softc *xbb, struct xbb_xen_reqlist *reqlist,
 
 	xuio.uio_td = curthread;
 
-#ifdef XBB_USE_BOUNCE_BUFFERS
-	saved_uio_iovcnt = xuio.uio_iovcnt;
-
-	if (operation == BIO_WRITE) {
-		/* Copy the write data to the local buffer. */
-		for (seg_idx = 0, p_vaddr = file_data->xiovecs_vaddr,
-		     xiovec = xuio.uio_iov; seg_idx < xuio.uio_iovcnt;
-		     seg_idx++, xiovec++, p_vaddr++) {
-			memcpy(xiovec->iov_base, *p_vaddr, xiovec->iov_len);
-		}
-	} else {
-		/*
-		 * We only need to save off the iovecs in the case of a
-		 * read, because the copy for the read happens after the
-		 * VOP_READ().  (The uio will get modified in that call
-		 * sequence.)
-		 */
-		memcpy(file_data->saved_xiovecs, xuio.uio_iov,
-		       xuio.uio_iovcnt * sizeof(xuio.uio_iov[0]));
-	}
-#endif /* XBB_USE_BOUNCE_BUFFERS */
-
 	switch (operation) {
 	case BIO_READ:
 
@@ -2429,25 +2294,6 @@ xbb_dispatch_file(struct xbb_softc *xbb, struct xbb_xen_reqlist *reqlist,
 		/* NOTREACHED */
 	}
 
-#ifdef XBB_USE_BOUNCE_BUFFERS
-	/* We only need to copy here for read operations */
-	if (operation == BIO_READ) {
-		for (seg_idx = 0, p_vaddr = file_data->xiovecs_vaddr,
-		     xiovec = file_data->saved_xiovecs;
-		     seg_idx < saved_uio_iovcnt; seg_idx++,
-		     xiovec++, p_vaddr++) {
-			/*
-			 * Note that we have to use the copy of the 
-			 * io vector we made above.  uiomove() modifies
-			 * the uio and its referenced vector as uiomove
-			 * performs the copy, so we can't rely on any
-			 * state from the original uio.
-			 */
-			memcpy(*p_vaddr, xiovec->iov_base, xiovec->iov_len);
-		}
-	}
-#endif /* XBB_USE_BOUNCE_BUFFERS */
-
 bailout_send_response:
 
 	if (error != 0)
@@ -2826,12 +2672,6 @@ xbb_disconnect(struct xbb_softc *xbb)
 		/* There is one request list for ever allocated request. */
 		for (i = 0, reqlist = xbb->request_lists;
 		     i < xbb->max_requests; i++, reqlist++){
-#ifdef XBB_USE_BOUNCE_BUFFERS
-			if (reqlist->bounce != NULL) {
-				free(reqlist->bounce, M_XENBLOCKBACK);
-				reqlist->bounce = NULL;
-			}
-#endif
 			if (reqlist->gnt_handles != NULL) {
 				free(reqlist->gnt_handles, M_XENBLOCKBACK);
 				reqlist->gnt_handles = NULL;
@@ -3210,17 +3050,6 @@ xbb_alloc_request_lists(struct xbb_softc *xbb)
 
 		reqlist->xbb = xbb;
 
-#ifdef XBB_USE_BOUNCE_BUFFERS
-		reqlist->bounce = malloc(xbb->max_reqlist_size,
-					 M_XENBLOCKBACK, M_NOWAIT);
-		if (reqlist->bounce == NULL) {
-			xenbus_dev_fatal(xbb->dev, ENOMEM, 
-					 "Unable to allocate request "
-					 "bounce buffers");
-			return (ENOMEM);
-		}
-#endif /* XBB_USE_BOUNCE_BUFFERS */
-
 		reqlist->gnt_handles = malloc(xbb->max_reqlist_segments *
 					      sizeof(*reqlist->gnt_handles),
 					      M_XENBLOCKBACK, M_NOWAIT|M_ZERO);
@@ -3489,8 +3318,14 @@ xbb_attach_failed(struct xbb_softc *xbb, int err, const char *fmt, ...)
 static int
 xbb_probe(device_t dev)
 {
+	uint32_t regs[4];
+
+	KASSERT(xen_cpuid_base != 0, ("Invalid base Xen CPUID leaf"));
+	cpuid_count(xen_cpuid_base + 4, 0, regs);
 
-        if (!strcmp(xenbus_get_type(dev), "vbd")) {
+	/* Only attach if Xen creates IOMMU entries for grant mapped pages. */
+	if ((regs[0] & XEN_HVM_CPUID_IOMMU_MAPPINGS) &&
+	    !strcmp(xenbus_get_type(dev), "vbd")) {
                 device_set_desc(dev, "Backend Virtual Block Device");
                 device_quiet(dev);
                 return (0);
-- 
2.35.1

