Date: Mon, 1 Mar 2010 11:57:15 -0800
From: Freddie Cash <fjwcash@gmail.com>
To: fs@freebsd.org
Subject: HAST, ucarp, and ZFS
Message-ID: <b269bc571003011157x3fa89233va8d6c2f15f1e9e8e@mail.gmail.com>
Perhaps it's just a misunderstanding on my part of the layering involved, but I'm having an issue with the sample ucarp_up.sh script on the HAST wiki page. Here's the test setup that I have:

hast1:
  glabel 4x 2 GB virtual disks (label/disk01 --> label/disk04)
  hast.conf creates 4 resources (disk01 --> disk04, using the glabelled disks)
  zpool create hapool raidz1 hast/disk01 .. hast/disk04

hast2:
  glabel 4x 2 GB virtual disks (label/disk01 --> label/disk04)
  hast.conf creates 4 resources (disk01 --> disk04)

So far so good. On hast1, I have a working ZFS pool: I can create data, filesystems, etc., and watch network traffic as it syncs to hast2. I can manually down hast1, switch hast2 to "primary", and import hapool; there I can likewise create data, filesystems, etc. And I can manually bring hast1 back online, set it to secondary, and watch it sync back.

Where I'm stuck is how to modify the ucarp_up.sh script to work with multiple HAST resources. Do I just edit it to handle each of the 4 resources in turn, or am I missing something simple, like there being meant to be only a single HAST resource? I'm guessing it's a simple "edit the script to suit my setup" issue, but wanted to double-check.

The production server I want to use this with has 24 hard drives, configured into multiple raidz2 vdevs as part of a single ZFS pool. That will mean 24 separate HAST resources, if I understand things correctly.

-- 
Freddie Cash
fjwcash@gmail.com
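[For reference, the four resources in the test setup above would look something like this in /etc/hast.conf. This is a sketch following the hast.conf(5) format; the hostnames hast1/hast2 and the label paths come from the setup described above, and using the peer hostname as the "remote" address is an assumption.]

```
# Sketch of /etc/hast.conf for the 4-resource test setup (same file on
# both nodes).  disk02..disk04 follow the same pattern as disk01.
resource disk01 {
	on hast1 {
		local /dev/label/disk01
		remote hast2
	}
	on hast2 {
		local /dev/label/disk01
		remote hast1
	}
}
```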
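[One way to adapt the wiki's single-resource ucarp_up.sh is indeed to loop over the resources: promote each one to primary, then import the pool once. A minimal sh(1) sketch follows; the resource and pool names come from the test setup above, and the master_cmds/slave_cmds helpers are illustrative (not part of the wiki script) and only print the commands so the order can be reviewed before running them.]

```shell
#!/bin/sh
# Sketch of a multi-resource failover for ucarp_up.sh / ucarp_down.sh.
# Names below match the 4-disk test setup; adjust RESOURCES and POOL to suit.

RESOURCES="disk01 disk02 disk03 disk04"
POOL="hapool"

# Print the "become master" commands in order: promote every HAST
# resource to primary first, then import the ZFS pool once.
master_cmds() {
    for res in ${RESOURCES}; do
        printf 'hastctl role primary %s\n' "${res}"
    done
    printf 'zpool import -f %s\n' "${POOL}"
}

# Mirror image for ucarp_down.sh: export the pool before demoting
# the resources, so nothing writes to devices that are going away.
slave_cmds() {
    printf 'zpool export %s\n' "${POOL}"
    for res in ${RESOURCES}; do
        printf 'hastctl role secondary %s\n' "${res}"
    done
}
```

[To actually perform the switchover, the output can be piped through sh, e.g. `master_cmds | sh` inside ucarp_up.sh; a 24-resource production pool only changes the RESOURCES list.]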