From: "Tenzin W. Lhakhang" <tenzin.lhakhang@gmail.com>
Subject: Re: ZFS pool with a large number of filesystems
Date: Mon, 4 Apr 2016 21:38:16 -0400
To: Miroslav Lachman <000.fbsd@quip.cz>
Cc: Wim Lewis, freebsd-fs@FreeBSD.org
References: <34DB45E8-7E1F-4D7C-96FF-E0A403EE8000@omnigroup.com> <570311C5.4010702@quip.cz>
List-Id: Filesystems

The most I have seen is approximately 10,000. The performance isn't too bad at that scale. The clone fs metadata becomes a bit costly. I've seen pretty deep nesting in the ZFS dataset chains on certain server types; dataset-clone-snap-clone-snap chains going about 30 levels deep.

Note: we are running ZFS on illumos (SmartOS).

General system specs: E5-26[89]0v2, 256 GB RAM, 1-2 ZIL devices, and SSD or spinning disks with 4-6 SSD cache drives.

Tenzin

Sent from my iPhone

> On Apr 4, 2016, at 9:15 PM, Miroslav Lachman <000.fbsd@quip.cz> wrote:
>
> Wim Lewis wrote on 04/05/2016 02:38:
>> I'm curious how many ZFS filesystems are reasonable to have on a single machine (in a single zpool).
>> We're contemplating a design in which we'd have tens of thousands, perhaps a couple hundred thousand, filesystems mounted out of the same pool. Before we go too far into investigating this idea: Does anyone have real-world experience doing something like that? Is it a situation that ZFS-on-FreeBSD is engineered to handle with good performance? Is there a rough estimate of the resources consumed per additional filesystem (in terms of kernel VM and disk space)?
>>
>> Thanks for any insight or advice (even, or especially, if the answer is "that's crazy, don't do that" :) )
>
> I don't know about how many filesystems, but I know that a few hundred snapshots can make a noticeable slowdown for some ZFS operations.
> I think that basic "zfs list" will be painfully slow with tens of thousands of filesystems.
>
> Miroslav Lachman
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
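[Editor's note appended to the archived thread.] For anyone wanting to reproduce the scale being discussed, here is a minimal sketch that generates (but does not run) the `zfs create` commands for a large per-user dataset tree. The pool name `tank`, the `users/` prefix, and the small COUNT are made-up illustrations, not details from the thread; piping the output through `sh` on a real pool would actually create the datasets.

```shell
#!/bin/sh
# Hypothetical sketch: emit the commands that would create COUNT
# filesystems under one pool. "tank" and "users/" are illustrative
# names; the thread reports ~10,000 datasets working in practice.
POOL=tank
COUNT=5          # raise this (e.g. to 10000) for a real-scale test
CMDS=""
i=1
while [ "$i" -le "$COUNT" ]; do
    CMDS="${CMDS}zfs create ${POOL}/users/user${i}
"
    i=$((i + 1))
done
printf '%s' "$CMDS"
```

On a real system, something like `zfs list -t filesystem -o name | wc -l` would confirm the resulting dataset count; restricting `zfs list` to the properties you need (such as `-o name`) is also a common way to keep listing tolerable at the scales Miroslav warns about.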