From: Quartz <quartz@sneakertech.com>
Date: Mon, 29 Jun 2015 19:54:21 -0400
To: Paul Kraus
Cc: freebsd-questions
Subject: Re: Corrupt GPT on ZFS full-disks that shouldn't be using GPT

> I do recall a change in ZFS behavior to leave a very small amount of
> space unused at the very end of the drive to account for the
> differences in real sizes between various vendors' drives that were
> nominally the same size. This only applied if you used the entire
> disk and did not use any partitioning. This was in both the Solaris
> and OpenSolaris versions of ZFS, so it predates the fork of the ZFS
> code.
> I have had no issues using disks of different manufacturers and even
> models within manufacturers

That runs counter to everything I've ever heard or read. Many people on all platforms have complained about this issue over the years and tried to come up with workarounds; there's no shortage of hits if you search for it. Here are a few random examples:

https://www.mail-archive.com/zfs-discuss@opensolaris.org/msg23070.html
https://lists.freebsd.org/pipermail/freebsd-stable/2010-July/057880.html
http://blog.dest-unreach.be/2012/06/30/create-future-proof-zfs-pools
http://www.freebsddiary.org/zfs-with-gpart.php

> I will see if I can dig up the documentation on this.

Please do, because if ZFS does have this ability buried somewhere I'd love to see how and when you can activate it.

> Note that it is
> a very small amount as drives of the same nominal capacity vary very
> little in real capacity.

The second of the links I posted above is from a guy with two 1.5TB drives that vary by one MB. I'm not sure what you're considering "nominal capacity" in this context, but any margin smaller than that is probably not useful in practice.
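For what it's worth, the workaround those links describe can be sketched roughly like this on FreeBSD. This is only an illustration: the device name da0, the 100 MB margin, and the label disk0 are my assumptions, not anything from this thread, and the gpart/zpool lines are shown commented out because they are destructive.

```shell
# Workaround sketch: instead of giving ZFS the raw disk, create a GPT
# partition slightly smaller than the nominal capacity, so a future
# replacement drive that comes up a few MB short can still hold it.
# (da0, the label, and the margin are hypothetical; adjust to taste.)

NOMINAL_GB=1500     # drive's advertised size, in decimal GB
MARGIN_MB=100       # safety margin held back at the end of the disk
SECTOR=512          # logical sector size in bytes

# Partition size in sectors: nominal bytes minus the margin,
# divided down into 512-byte sectors (integer arithmetic).
PART_SECTORS=$(( (NOMINAL_GB * 1000000000 - MARGIN_MB * 1000000) / SECTOR ))
echo "partition size: ${PART_SECTORS} sectors"

# The actual (destructive) partitioning would then be something like:
# gpart create -s gpt da0
# gpart add -t freebsd-zfs -l disk0 -s ${PART_SECTORS} da0
# zpool create tank gpt/disk0
```

The point is simply that the pool is built on a GPT label of a fixed, deliberately undersized capacity rather than on the whole device, so the exact vendor-to-vendor size differences stop mattering.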