From: Dustin Marquess
Date: Mon, 4 Dec 2017 17:22:09 -0600
Subject: Re: Storage overhead on zvols
To: Dustin Wenz
Cc: FreeBSD virtualization <freebsd-virtualization@freebsd.org>

I doubt it's best practice, and I'm sure I'm just crazy for doing it,
but personally I try to match the ZVOL blocksize to whatever the
underlying filesystem's blocksize is. To me that just makes the most
logical sense.

-Dustin

On Mon, Dec 4, 2017 at 5:19 PM, Dustin Wenz wrote:
> I'm starting a new thread based on the previous discussion in "bhyve
> uses all available memory during IO-intensive operations" relating to
> size inflation of bhyve data stored on zvols.
> I've done some experimenting with this, and I think it will be useful
> for others.
>
> The zvols listed here were created with this command:
>
>     zfs create -o volmode=dev -o volblocksize=Xk -V 30g vm00/chyves/guests/myguest/diskY
>
> The zvols were created on a raidz1 pool of four disks. For each zvol,
> I created a basic ZFS filesystem in the guest using all default tuning
> (128k recordsize, etc.). I then copied the same 8.2GB dataset to each
> filesystem.
>
> volblocksize    size amplification
>
> 512B            11.7x
> 4k              1.45x
> 8k              1.45x
> 16k             1.5x
> 32k             1.65x
> 64k             1x
> 128k            1x
>
> The worst case is with a 512B volblocksize, where the space used is
> more than 11 times the size of the data stored within the guest. The
> size-efficiency gains are non-linear as I continue from 4k and double
> the block size, with 32k blocks being the second-worst. The amount of
> wasted space was minimized by using 64k and 128k blocks.
>
> It would appear that 64k is a good choice for volblocksize if you are
> using a zvol to back your VM, and the VM is using the virtual device
> for a zpool. Incidentally, I believe this is the default when creating
> VMs in FreeNAS.
>
> - .Dustin
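
For anyone who wants to reproduce this sort of comparison, a rough
sketch of the commands involved is below. The pool and dataset names
(tank/vm/guest0-disk0, a guest pool called "data") and the guest disk
device are placeholders, not the ones from the test above, and the
guest device name will depend on how the VM's disk is attached:

    # On the host: create a zvol with a 64k volblocksize to back the
    # guest disk
    zfs create -o volmode=dev -o volblocksize=64k -V 30g tank/vm/guest0-disk0

    # Inside the guest: build a pool on the virtual disk with default
    # tuning (FreeBSD virtio-blk disks usually show up as vtbdN)
    zpool create data vtbd1

    # Back on the host, after copying the test data into the guest:
    # compare the logical size of the data with the space actually
    # allocated on the raidz pool; the gap between logicalused and used
    # is roughly the overhead being discussed above
    zfs get -o property,value volblocksize,volsize,logicalused,used tank/vm/guest0-disk0

Repeating that for each volblocksize is essentially what the table
above summarizes.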