From owner-svn-src-all@freebsd.org Tue Jan 19 19:13:28 2016
From: Cy Schubert <cschuber@gmail.com>
Subject: RE: svn commit: r294329 - in head/sys/cddl/contrib/opensolaris/uts/common/fs/zfs: . sys
Date: Tue, 19 Jan 2016 11:13:29 -0800
To: Alan Somers
Cc: "src-committers@freebsd.org", "svn-src-all@freebsd.org", "svn-src-head@freebsd.org", Cy Schubert
List-Id: SVN commit messages for the entire src tree (except for "user" and "projects")

Thanks :)

I do a lot of ufs on zvols too, and yes, there are performance impacts due
to double caching -- I use ufs- and zfs-mounted zvols for
installworld/installkernel, which are later unmounted on the host and
booted as VMs for testing.
Sent from my cellphone,
~Cy

-----Original Message-----
From: Alan Somers
Sent: 19/01/2016 10:55
Cc: src-committers@freebsd.org; svn-src-all@freebsd.org; svn-src-head@freebsd.org
Subject: Re: svn commit: r294329 - in head/sys/cddl/contrib/opensolaris/uts/common/fs/zfs: . sys

On Tue, Jan 19, 2016 at 10:00 AM, Alan Somers wrote:
> Author: asomers
> Date: Tue Jan 19 17:00:25 2016
> New Revision: 294329
> URL: https://svnweb.freebsd.org/changeset/base/294329
>
> Log:
>   Disallow zvol-backed ZFS pools
>
>   Using zvols as backing devices for ZFS pools is fraught with panics and
>   deadlocks. For example, attempting to online a missing device in the
>   presence of a zvol can cause a panic when vdev_geom tastes the zvol.
>   Better to completely disable vdev_geom from ever opening a zvol. The
>   solution relies on setting a thread-local variable during
>   vdev_geom_open, and returning EOPNOTSUPP during zvol_open if that
>   thread-local variable is set.
>
>   Remove the check for MUTEX_HELD(&zfsdev_state_lock) in zvol_open. Its
>   intent was to prevent a recursive mutex acquisition panic. However, the
>   new check for the thread-local variable also fixes that problem.
>
>   Also, fix a panic in vdev_geom_taste_orphan. For an unknown reason, this
>   function was set to panic. But it can occur that a device disappears
>   during tasting, and it causes no problems to ignore this departure.
>
>   Reviewed by:    delphij
>   MFC after:      1 week
>   Relnotes:       yes
>   Sponsored by:   Spectra Logic Corp
>   Differential Revision:  https://reviews.freebsd.org/D4986
>
> Modified:
>   head/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/sys/vdev_impl.h
>   head/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_geom.c
>   head/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ioctl.c
>   head/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zvol.c
>
> Modified: head/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/sys/vdev_impl.h

Due to popular demand, I will conditionalize this behavior on a sysctl,
and I won't MFC it.  The sysctl must default to off (ZFS on zvols not
allowed), because having the ability to put pools on zvols can cause
panics even for users who aren't using it.

And let me clear up some confusion:

1) Having the ability to put a zpool on a zvol can cause panics and
   deadlocks, even if that ability is unused.
2) Putting a zpool atop a zvol causes unnecessary performance problems,
   because there are two layers of COW involved, with all their software
   complexities.  This also applies to putting a zpool atop files on a
   ZFS filesystem.
3) A VM guest putting a zpool on its virtual disk, where the VM host
   backs that virtual disk with a zvol, will work fine.  That's the ideal
   use case for zvols.
3b) Using ZFS on both host and guest isn't ideal for performance, as
   described in item 2.  That's why I prefer to use UFS for VM guests.
4) Using UFS on a zvol as Stefan Esser described works fine.  I'm not
   aware of any performance problems associated with mixing UFS and ZFS.
   Perhaps Stefan was referring to duplication between the ARC and UFS's
   vnode cache; the same duplication would be present in a ZFS-on-zvol
   scenario.

-Alan
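
To make the mechanism in the quoted commit log concrete, here is a minimal,
self-contained sketch of the idea.  It is not the committed vdev_geom.c/zvol.c
code: the function names, the flag name vdev_geom_opening, and the use of C11
_Thread_local (rather than whatever per-thread mechanism the kernel change
actually uses) are illustrative assumptions.  The point is only that vdev_geom
marks its own thread while it is opening/tasting devices, and zvol_open refuses
with EOPNOTSUPP when it sees that mark:

/*
 * Sketch only -- not the committed code.  Names and the use of C11
 * _Thread_local are assumptions for illustration.  Build with -std=c11.
 */
#include <errno.h>
#include <stdio.h>

static _Thread_local int vdev_geom_opening;	/* set while vdev_geom tastes */

static int
zvol_open_sketch(void)
{
	/* Refuse to become a backing device for a pool being opened. */
	if (vdev_geom_opening)
		return (EOPNOTSUPP);
	return (0);		/* a real zvol_open() would proceed here */
}

static int
vdev_geom_open_sketch(void)
{
	int error;

	vdev_geom_opening = 1;		/* this thread is now tasting providers */
	error = zvol_open_sketch();	/* what happens if tasting reaches a zvol */
	vdev_geom_opening = 0;
	return (error);
}

int
main(void)
{
	printf("direct zvol open:   %d\n", zvol_open_sketch());	/* prints 0 */
	printf("open via vdev_geom: %d\n", vdev_geom_open_sketch());	/* prints EOPNOTSUPP (45 on FreeBSD) */
	return (0);
}

The attraction of a per-thread flag is that it needs no extra locking and only
affects opens that happen underneath vdev_geom; ordinary zvol opens keep
working.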
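
And for the proposed sysctl, this is roughly what a default-off knob could look
like in FreeBSD kernel code.  The knob name (vfs.zfs.vol.recursive), the
variable name, and the helper function are assumptions for illustration, not
the committed interface:

#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/sysctl.h>
#include <sys/errno.h>

/* Default off: zvol-backed pools stay disallowed unless explicitly enabled. */
static int zvol_recursive = 0;

SYSCTL_DECL(_vfs_zfs_vol);
SYSCTL_INT(_vfs_zfs_vol, OID_AUTO, recursive, CTLFLAG_RWTUN,
    &zvol_recursive, 0, "Allow zpools to be backed by zvols (can panic/deadlock)");

/*
 * Hypothetical helper for zvol_open(): keep rejecting the open when
 * vdev_geom is tasting on this thread and the knob is left at 0.
 */
static int
zvol_check_recursion(int opening_from_vdev_geom)
{
	if (opening_from_vdev_geom && !zvol_recursive)
		return (EOPNOTSUPP);
	return (0);
}

With something along those lines, a user who really wants the old behavior
could flip the knob to 1 (CTLFLAG_RWTUN also makes it settable as a loader
tunable), while everyone else keeps the safe default.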