From owner-freebsd-current@FreeBSD.ORG Thu Oct 16 08:10:22 2014
Date: Thu, 16 Oct 2014 10:10:16 +0200
From: Edward Tomasz Napierała
To: Matthew Grooms
Cc: freebsd-current@freebsd.org
Subject: Re: Resizing a zpool as a VMware ESXi guest ...
Message-ID: <20141016081016.GA4670@brick.home>
In-Reply-To: <543841B8.4070007@shrew.net>
References: <543841B8.4070007@shrew.net>
User-Agent: Mutt/1.5.23 (2014-03-12)
List-Id: Discussions about the use of FreeBSD-current

On 1010T1529, Matthew Grooms wrote:
> All,
>
> I am a long-time user and advocate of FreeBSD and manage several
> deployments of FreeBSD in a few data centers. Now that these
> environments are almost always virtual, it makes sense for FreeBSD
> to support basic features such as dynamic disk resizing. It looks like
> most of the parts are intended to work. Kudos to the FreeBSD Foundation
> for seeing the need and sponsoring dynamic growth of online UFS
> filesystems via growfs. Unfortunately, it would appear that there are
> still problems in this area, such as ...
>
> a) cam/geom recognizing when a drive's size has increased
> b) zpool recognizing when a gpt partition size has increased
>
> For example, if I do an install of FreeBSD 10 on VMware using ZFS, I see
>
> root@zpool-test:~ # gpart show
> =>       34  16777149  da0  GPT  (8.0G)
>          34      1024    1  freebsd-boot  (512K)
>        1058   4194304    2  freebsd-swap  (2.0G)
>     4195362  12581821    3  freebsd-zfs  (6.0G)
>
> If I increase the VM disk size using VMware to 16G and rescan using
> camcontrol, this is what I see ...

"camcontrol rescan" does not force fetching the updated disk size.
AFAIK there is no way to do that. However, this should happen
automatically if the "other side" properly sends a Unit Attention
after resizing. No idea why this doesn't happen with VMware. A reboot
obviously clears things up.

[..]

> Now I want to claim the additional 14 gigs of space for my zpool ...
>
> root@zpool-test:~ # zpool status
>   pool: zroot
>  state: ONLINE
>   scan: none requested
> config:
>
>         NAME                                          STATE  READ WRITE CKSUM
>         zroot                                         ONLINE    0     0     0
>           gptid/352086bd-50b5-11e4-95b8-0050569b2a04  ONLINE    0     0     0
>
> root@zpool-test:~ # zpool set autoexpand=on zroot
> root@zpool-test:~ # zpool online -e zroot gptid/352086bd-50b5-11e4-95b8-0050569b2a04
> root@zpool-test:~ # zpool list
> NAME    SIZE  ALLOC   FREE   CAP  DEDUP  HEALTH  ALTROOT
> zroot  5.97G   876M  5.11G   14%  1.00x  ONLINE  -
>
> The zpool still appears to have only 5.11G free. Let's reboot and try
> again ...

Interesting. This used to work; actually, either of those (autoexpand
or online -e) should do the trick.

> root@zpool-test:~ # zpool set autoexpand=on zroot
> root@zpool-test:~ # zpool online -e zroot gptid/352086bd-50b5-11e4-95b8-0050569b2a04
> root@zpool-test:~ # zpool list
> NAME    SIZE  ALLOC   FREE   CAP  DEDUP  HEALTH  ALTROOT
> zroot  14.0G   876M  13.1G    6%  1.00x  ONLINE  -
>
> Now I have 13.1G free. I can add this space to any of my zfs volumes,
> and it picks the change up immediately. So the question remains: why do
> I need to reboot the OS twice to allocate new disk space to a volume?
> FreeBSD is first and foremost a server operating system. Servers are
> commonly deployed in data centers.
> Virtual environments are now commonplace in data centers, not the
> exception to the rule. VMware still has the vast majority of the
> private virtual environment market. I assume that most would expect
> things like this to work out of the box. Did I miss a required step,
> or is this fixed in CURRENT?

Looks like genuine bugs (or rather, one missing feature and one bug).
Filing PRs for those might be a good idea.
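For anyone else hitting this, the manual procedure (once the kernel has
picked up the new disk size) is sketched below. The device name (da0),
partition index (3), pool name, and gptid are taken from the listings
above; adjust for your system. Note that the "camcontrol reprobe"
subcommand only appeared in later FreeBSD releases, so on 10.x a reboot
may still be the only way to refresh the reported disk size.

```shell
# Ask the kernel to re-read the disk capacity.  On FreeBSD releases
# that have it, "camcontrol reprobe" does this without a reboot;
# otherwise, reboot after the hypervisor has grown the virtual disk.
camcontrol reprobe da0

# The backup GPT header still sits at the old end of the disk; repair
# that first, then grow the freebsd-zfs partition (index 3 in the
# gpart listing above) into the new space.
gpart recover da0
gpart resize -i 3 da0

# Finally, let ZFS expand the vdev into the enlarged partition.
zpool online -e zroot gptid/352086bd-50b5-11e4-95b8-0050569b2a04
zpool list zroot
```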