Date:      Wed, 26 Oct 2022 18:46:59 -0700
From:      David Christensen <dpchrist@holgerdanske.com>
To:        questions@freebsd.org
Subject:   Re: Setting up ZFS L2ARC on a zvol
Message-ID:  <052d0579-7343-4aa7-8e27-fdab0fe6f400@holgerdanske.com>
In-Reply-To: <PH0PR20MB370438AB8180CE27187E3BD2C0309@PH0PR20MB3704.namprd20.prod.outlook.com>
References:  <PH0PR20MB370438AB8180CE27187E3BD2C0309@PH0PR20MB3704.namprd20.prod.outlook.com>

On 10/26/22 06:32, julio@meroh.net wrote:
> Hello,
> 
> I'm setting up a new machine in which I have an NVMe drive and a bunch of hard disks. The hard disks are going to be a ZFS pool and the NVMe drive is going to be the root file system + the L2ARC for the pool.
> 
> Now... I'm considering using ZFS as well for the root file system (as a separate single-drive pool) in order to simplify admin operations: I want to host some VMs on the NVMe for speed, and using zvols will be very helpful as I don't have to come up with the partition sizes upfront.
> 
> And here comes the question: can the L2ARC of the hard disk pool be backed by a zvol on the NVMe pool (again, so that I don't have to use fixed-size partitions)?
> 
> I gave a simple try to this setup and it's not working, so I'm wondering if this is just not a good idea and thus is unsupported, or if there is a bug:
> 
> root@think:~ # zfs create -V 16G -o primarycache=none zroot/l2arg
> root@think:~ # zpool add scratch cache zvol/zroot/l2arc
> cannot add to 'scratch': no such pool or dataset
> root@think:~ # Oct 26 05:45:28 think ZFS[3677]: vdev problem, zpool=scratch path=/dev/zvol/zroot/l2arc type=ereport.fs.zfs.vdev.open_failed
> root@think:~ # ls /dev/zvol/zroot/l2arc
> /dev/zvol/zroot/l2arc
> 
> Thanks!


Testing on a VirtualBox VM, it looks like you cannot use a ZFS volume as 
a cache device for another ZFS pool:

2022-10-26 17:47:11 toor@vf1 ~
# freebsd-version ; uname -a
12.3-RELEASE-p7
FreeBSD vf1.tracy.holgerdanske.com 12.3-RELEASE-p6 FreeBSD 12.3-RELEASE-p6 GENERIC  amd64

2022-10-26 17:47:32 toor@vf1 ~
# camcontrol devlist
<VBOX HARDDISK 1.0>                at scbus0 target 0 lun 0 (pass0,ada0)
<VBOX HARDDISK 1.0>                at scbus0 target 1 lun 0 (pass1,ada1)
<VBOX HARDDISK 1.0>                at scbus1 target 0 lun 0 (pass2,ada2)

2022-10-26 18:02:58 toor@vf1 ~
# zpool list
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
vf1_zroot  3.75G  2.67G  1.08G        -         -    43%    71%  1.00x  ONLINE  -
vf1zpool1   960M  3.15M   957M        -         -     8%     0%  1.00x  ONLINE  -
vf1zpool2   960M  2.31M   958M        -         -    14%     0%  1.00x  ONLINE  -

2022-10-26 17:52:26 toor@vf1 ~
# zfs create -V 128M vf1_zroot/myvol

2022-10-26 17:55:06 toor@vf1 ~
# zfs list -d 1 vf1_zroot
NAME              USED  AVAIL  REFER  MOUNTPOINT
vf1_zroot        2.80G   840M    88K  /vf1_zroot
vf1_zroot/ROOT   2.65G   840M    88K  none
vf1_zroot/myvol   134M   974M    56K  -
vf1_zroot/tmp     644K   840M   644K  /tmp
vf1_zroot/usr    16.3M   840M    88K  /usr
vf1_zroot/var    1.51M   840M    88K  /var

2022-10-26 17:58:04 toor@vf1 ~
# find /dev -name myvol
/dev/zvol/vf1_zroot/myvol

2022-10-26 17:58:14 toor@vf1 ~
# ll /dev/zvol/vf1_zroot/myvol
crw-r-----  1 root  operator  0x67 2022/10/26 17:55:06 /dev/zvol/vf1_zroot/myvol

2022-10-26 17:58:45 toor@vf1 ~
# zpool add vf1zpool1 cache zvol/vf1_zroot/myvol
cannot add to 'vf1zpool1': no such pool or dataset

2022-10-26 17:59:15 toor@vf1 ~
# zpool add vf1zpool1 cache /dev/zvol/vf1_zroot/myvol
cannot add to 'vf1zpool1': no such pool or dataset


If I remember right, this restriction is deliberate rather than a bug: 
FreeBSD's ZFS refuses to open zvols as vdevs for other pools by default 
(there is a vfs.zfs.vol.recursive sysctl gating it), because nesting one 
pool on top of another pool's zvols risks deadlocks.

Assuming you move the root file system to another device and can 
repurpose the NVMe drive, perhaps gvinum(8) would meet your needs.
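
Alternatively, if the NVMe stays as the boot device, you could skip the 
zvol layer and give the pool a fixed GPT partition as cache.  I have not 
tested this on your exact setup, and the device name (nvd0) and size 
(16G) below are placeholders for your hardware; but since L2ARC contents 
are disposable, a cache device can always be removed and re-added at a 
different size later, which softens the pain of picking a size up front:

```shell
# Sketch only -- nvd0 and 16G are placeholders; adjust for your
# hardware.  Assumes free space remains in the NVMe's GPT scheme.

# Carve a labeled partition out of the NVMe drive
# (appears as /dev/gpt/l2arc):
gpart add -t freebsd-zfs -s 16G -l l2arc nvd0

# Attach it to the hard-disk pool as an L2ARC device:
zpool add scratch cache gpt/l2arc

# If the size turns out wrong, cache devices can be removed and
# re-added without risk to pool data:
zpool remove scratch gpt/l2arc
```

Unlike data vdevs, cache devices hold no irreplaceable state, so 
resizing by remove/re-add is safe.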


David


