From owner-freebsd-stable@freebsd.org Tue May 31 21:49:31 2016
Date: Tue, 31 May 2016 14:49:30 -0700
Subject: Re: HAST, zfs and local mirroring
From: Freddie Cash <fjwcash@gmail.com>
To: "Eugene M. Zheganin"
Cc: stable@freebsd.org
List-Id: Production branch of FreeBSD source code

On Tue, May 31, 2016 at 11:18 AM, Eugene M. Zheganin wrote:

> I want to start using HAST. I have two nodes and a pair of disks on each
> node, so I want to use HAST in an environment where each HAST resource
> would be mirrored. What is the preferred approach if I want to use ZFS on
> the end device to avoid excessive fscking and, at the same time, have
> some redundancy at the block level? I see two possibilities: HAST on a
> zvol of a mirrored pool, or ZFS on a HAST device. But recently I heard
> that nested ZFS (like ZFS on a zvol) is claimed to be unsupported.
> Furthermore, I have ZFS on geli on a zvol, and that solution proved to be
> very prone to livelocking - when disk I/O on such a filesystem goes above
> some threshold, the system locks up, and the only way out is to reset it.
> Should I choose geom_mirror to provide a device for HAST and then build
> ZFS on it?

The generally recommended way to do this is to create a HAST resource out
of one disk from each system (so two HAST resources for your two disk
pairs), and then build the ZFS pool using the HAST resources as the
"disks". That way, your ZFS pool is made up of 2 HAST devices in a mirror
vdev, and each of the two HAST devices uses one disk from each server
(four disks in total).
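A rough, untested sketch of that layout, assuming two nodes named nodeA
and nodeB, data disks ada1 and ada2 on each, and made-up addresses
(adjust everything to your own hardware and network):

    # /etc/hast.conf, identical on both nodes: one resource per disk pair
    resource disk0 {
            on nodeA {
                    local /dev/ada1
                    remote 10.0.0.2
            }
            on nodeB {
                    local /dev/ada1
                    remote 10.0.0.1
            }
    }
    resource disk1 {
            on nodeA {
                    local /dev/ada2
                    remote 10.0.0.2
            }
            on nodeB {
                    local /dev/ada2
                    remote 10.0.0.1
            }
    }

    # on both nodes: initialise the HAST metadata and start hastd
    hastctl create disk0
    hastctl create disk1
    service hastd onestart

    # on whichever node should be active right now
    # (run "hastctl role secondary disk0 disk1" on the other node)
    hastctl role primary disk0
    hastctl role primary disk1
    zpool create tank mirror /dev/hast/disk0 /dev/hast/disk1

The /dev/hast/* devices only exist on the current primary, so the pool is
only ever imported on one node at a time; the secondary just receives the
replicated writes until you fail over.

--
Freddie Cash
fjwcash@gmail.com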