From owner-freebsd-fs@FreeBSD.ORG Sun Sep 18 02:31:26 2011 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 98B53106564A; Sun, 18 Sep 2011 02:31:26 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 6FCF58FC13; Sun, 18 Sep 2011 02:31:26 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id p8I2VQ68090689; Sun, 18 Sep 2011 02:31:26 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id p8I2VQow090682; Sun, 18 Sep 2011 02:31:26 GMT (envelope-from linimon) Date: Sun, 18 Sep 2011 02:31:26 GMT Message-Id: <201109180231.p8I2VQow090682@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Cc: Subject: Re: kern/160662: [ufs] [hang] Snapshots cause a lockup on UFS with SU+J enabled X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 18 Sep 2011 02:31:26 -0000 Old Synopsis: Snapshots cause a lockup on UFS with SU+J enabled New Synopsis: [ufs] [hang] Snapshots cause a lockup on UFS with SU+J enabled Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Sun Sep 18 02:31:13 UTC 2011 Responsible-Changed-Why: Over to maintainer(s). 
http://www.freebsd.org/cgi/query-pr.cgi?pr=160662 From owner-freebsd-fs@FreeBSD.ORG Sun Sep 18 02:40:31 2011 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 70445106566B; Sun, 18 Sep 2011 02:40:31 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 474908FC0A; Sun, 18 Sep 2011 02:40:31 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id p8I2eVFi096421; Sun, 18 Sep 2011 02:40:31 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id p8I2eVHR096412; Sun, 18 Sep 2011 02:40:31 GMT (envelope-from linimon) Date: Sun, 18 Sep 2011 02:40:31 GMT Message-Id: <201109180240.p8I2eVHR096412@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Cc: Subject: Re: kern/160777: [zfs] [hang] RAID-Z3 causes fatal hang upon scrub/import on 9.0-BETA2/amd64 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 18 Sep 2011 02:40:31 -0000 Old Synopsis: RAID-Z3 causes fatal hang upon scrub/import on 9.0-BETA2/amd64 New Synopsis: [zfs] [hang] RAID-Z3 causes fatal hang upon scrub/import on 9.0-BETA2/amd64 Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Sun Sep 18 02:40:08 UTC 2011 Responsible-Changed-Why: Over to maintainer(s). 
http://www.freebsd.org/cgi/query-pr.cgi?pr=160777 From owner-freebsd-fs@FreeBSD.ORG Sun Sep 18 02:47:22 2011 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id AB57D1065680; Sun, 18 Sep 2011 02:47:22 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 832478FC1B; Sun, 18 Sep 2011 02:47:22 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id p8I2lMF2002690; Sun, 18 Sep 2011 02:47:22 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id p8I2lMtd002686; Sun, 18 Sep 2011 02:47:22 GMT (envelope-from linimon) Date: Sun, 18 Sep 2011 02:47:22 GMT Message-Id: <201109180247.p8I2lMtd002686@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Cc: Subject: Re: kern/160790: [fusefs] [panic] VPUTX: negative ref count with FUSE X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 18 Sep 2011 02:47:22 -0000 Old Synopsis: panic: VPUTX: negative ref count with FUSE New Synopsis: [fusefs] [panic] VPUTX: negative ref count with FUSE Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Sun Sep 18 02:46:03 UTC 2011 Responsible-Changed-Why: Over to maintainer(s). 
http://www.freebsd.org/cgi/query-pr.cgi?pr=160790 From owner-freebsd-fs@FreeBSD.ORG Sun Sep 18 09:27:33 2011 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 02983106564A for ; Sun, 18 Sep 2011 09:27:33 +0000 (UTC) (envelope-from jeremie@le-hen.org) Received: from smtp5-g21.free.fr (unknown [IPv6:2a01:e0c:1:1599::14]) by mx1.freebsd.org (Postfix) with ESMTP id 4DDEC8FC12 for ; Sun, 18 Sep 2011 09:27:30 +0000 (UTC) Received: from endor.tataz.chchile.org (unknown [82.233.239.98]) by smtp5-g21.free.fr (Postfix) with ESMTP id DF115D48132; Sun, 18 Sep 2011 11:27:24 +0200 (CEST) Received: from felucia.tataz.chchile.org (felucia.tataz.chchile.org [192.168.1.9]) by endor.tataz.chchile.org (Postfix) with ESMTP id 11B0333CED; Sun, 18 Sep 2011 09:27:23 +0000 (UTC) Received: by felucia.tataz.chchile.org (Postfix, from userid 1000) id E9E99A1180; Sun, 18 Sep 2011 09:27:22 +0000 (UTC) Date: Sun, 18 Sep 2011 11:27:22 +0200 From: Jeremie Le Hen To: freebsd-fs@FreeBSD.org Message-ID: <20110918092722.GA7930@felucia.tataz.chchile.org> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.5.21 (2010-09-15) Cc: Jeremie Le Hen Subject: ZFS on root: / is found but child datasets are not mounted X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 18 Sep 2011 09:27:33 -0000 Hi, Please Cc: me when replying. I bought a new HDD in order to mirror the system disk. This is also a great opportunity to migrate the root filesystem to ZFS. I followed mm@'s advice from another thread: "having everything one level deeper". That is zroot/root is "/". 
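[Editor's sketch, not part of the original mail: the "one level deeper" layout described above is typically created roughly as follows. The device label gpt/disk0 is hypothetical; the dataset names match the zroot/root layout shown in this thread.]

```shell
# Sketch only: gpt/disk0 is a hypothetical GPT label.
zpool create -o altroot=/mnt -o cachefile=/tmp/zpool.cache zroot gpt/disk0
zpool set bootfs=zroot/root zroot       # boot from the nested dataset
zfs set mountpoint=none zroot           # the pool root itself is never mounted
zfs create -o mountpoint=/ zroot/root   # this dataset becomes "/"
zfs create zroot/root/tmp
zfs create zroot/root/usr
zfs create zroot/root/var
```

With altroot=/mnt set, zroot/root appears under /mnt during the migration, which matches the mountpoints in the zfs list output below.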
The kernel boots fine, it finds the root filesystem, but fails miserably when running rc.d scripts because the child datasets are not mounted (/var, /usr, ...).

FWIW, I escaped to DDB and typed "show mount". Besides /dev, / was indeed mounted from zroot/root, and /tmp was /dev/md0 for an unknown reason.

I've been fiddling with this for 3 hours yesterday without luck. Does anyone have an idea about this, please?

More information:

obiwan:~# zpool import -o altroot=/mnt -o cachefile=/tmp/zpool.cache zroot
obiwan:~# zpool list zroot
NAME   SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
zroot  147G  2.34G  145G   1%  1.00x  ONLINE  /mnt
obiwan:~# zpool status zroot
  pool: zroot
 state: ONLINE
  scan: none requested
config:
        NAME                                          STATE   READ WRITE CKSUM
        zroot                                         ONLINE     0     0     0
          gptid/080c18f8-c2d2-11e0-baa0-00151724749a  ONLINE     0     0     0
errors: No known data errors
obiwan:~# zpool get all zroot
NAME   PROPERTY       VALUE                 SOURCE
zroot  size           147G                  -
zroot  capacity       1%                    -
zroot  altroot        /mnt                  local
zroot  health         ONLINE                -
zroot  guid           12889954819379028468  default
zroot  version        28                    default
zroot  bootfs         zroot/root            local
zroot  delegation     on                    default
zroot  autoreplace    off                   default
zroot  cachefile      /tmp/zpool.cache      local
zroot  failmode       wait                  default
zroot  listsnapshots  off                   default
zroot  autoexpand     off                   default
zroot  dedupditto     0                     default
zroot  dedupratio     1.00x                 -
zroot  free           145G                  -
zroot  allocated      2.34G                 -
zroot  readonly       off                   -
obiwan:~# cp /tmp/zpool.cache /mnt/boot/zfs/zpool.cache
obiwan:~# grep 'zfs[:_]' /mnt/boot/loader.conf
zfs_load="YES"
vfs.root.mountfrom="zfs:zroot/root"
obiwan:~# zfs list -o name,mounted,canmount,mountpoint -r zroot | grep -v /mnt/jails
NAME                             MOUNTED  CANMOUNT  MOUNTPOINT
zroot                            no       on        none
zroot/root                       yes      on        /mnt
zroot/root/root                  yes      on        /mnt/root
zroot/root/tmp                   yes      on        /mnt/tmp
zroot/root/usr                   yes      on        /mnt/usr
zroot/root/usr/local             yes      on        /mnt/usr/local
zroot/root/usr/obj               yes      on        /mnt/usr/obj
zroot/root/usr/pkgsrc            yes      on        /mnt/usr/pkgsrc
zroot/root/usr/pkgsrc/distfiles  yes      on        /mnt/usr/pkgsrc/distfiles
zroot/root/usr/ports             yes      on        /mnt/usr/ports
zroot/root/usr/ports/distfiles   yes      on        /mnt/usr/ports/distfiles
zroot/root/usr/ports/packages    yes      on        /mnt/usr/ports/packages
zroot/root/usr/src               yes      on        /mnt/usr/src
zroot/root/var                   yes      on        /mnt/var
zroot/root/var/crash             yes      on        /mnt/var/crash
zroot/root/var/db                yes      on        /mnt/var/db
zroot/root/var/db/pkg            yes      on        /mnt/var/db/pkg
zroot/root/var/empty             yes      on        /mnt/var/empty
zroot/root/var/log               yes      on        /mnt/var/log
zroot/root/var/mail              yes      on        /mnt/var/mail
zroot/root/var/run               yes      on        /mnt/var/run
zroot/root/var/tmp               yes      on        /mnt/var/tmp

Thanks. Regards,
-- 
Jeremie Le Hen

Men are born free and equal. Later on, they're on their own. Jean Yanne

From owner-freebsd-fs@FreeBSD.ORG Sun Sep 18 09:40:13 2011 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 4ADDD106564A for ; Sun, 18 Sep 2011 09:40:13 +0000 (UTC) (envelope-from mat@mat.cc) Received: from prod2.absolight.net (mx3.absolight.net [IPv6:2a01:678:2:100::25]) by mx1.freebsd.org (Postfix) with ESMTP id D7A468FC13 for ; Sun, 18 Sep 2011 09:40:11 +0000 (UTC) Received: from prod2.absolight.net (localhost [127.0.0.1]) by prod2.absolight.net (Postfix) with ESMTP id BF0B6BDC24; Sun, 18 Sep 2011 11:40:10 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha1; c=relaxed; d=mat.cc; h=date:from:to :subject:message-id:in-reply-to:references:mime-version :content-type:content-transfer-encoding; s=plouf; bh=1g+QUQDWNhB 2LksqoYpRSrl46IA=; b=ZXA7xoJf9IG5fwluSmgfr//nYtx83g65MxTgo9VbGlw pjQyeyGd+C8M6VDijlDK7UyVnVh2QzP/RHiiKx9WuSpPIpC+kvGJYILEENH7MNGI knl5hY4UsfgIAsYjGBL3aCGwHPBTwVKkBwHGen3sPlrUpCqvRUeFrpfWvxBx7vwM = Received: from atuin.in.mat.cc (atuin.in.mat.cc [79.143.241.205]) by prod2.absolight.net (Postfix) with ESMTPA id 9C3EEBDC1F; Sun, 18 Sep 2011 11:40:10 +0200 (CEST) Received: from localhost (localhost [127.0.0.1]) by atuin.in.mat.cc (Postfix) with ESMTP id 3C2C347FD8A4; Sun, 18 Sep 2011 11:40:10 +0200 (CEST) Date: Sun, 18
Sep 2011 11:40:10 +0200 From: Mathieu Arnold To: Jeremie Le Hen , freebsd-fs@FreeBSD.org Message-ID: In-Reply-To: <20110918092722.GA7930@felucia.tataz.chchile.org> References: <20110918092722.GA7930@felucia.tataz.chchile.org> X-Mailer: Mulberry/4.0.8 (Mac OS X) MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Content-Disposition: inline Cc: Subject: Re: ZFS on root: / is found but child datasets are not mounted X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 18 Sep 2011 09:40:13 -0000 +--On 18 septembre 2011 11:27:22 +0200 Jeremie Le Hen wrote: | The kernel boots fine, it finds the root filesystem, but fails miserably | when running rc.d scripts because child datasets are not mounted (/var, | /usr, ...). | | obiwan:~# cp /tmp/zpool.cache /mnt/boot/zfs/zpool.cache | obiwan:~# grep 'zfs[:_]' /mnt/boot/loader.conf | zfs_load="YES" | vfs.root.mountfrom="zfs:zroot/root" What about zfs_enable="yes" in /etc/rc.conf ? 
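[Editor's sketch, not part of the original mail: the suggestion above boils down to two pieces of configuration — the loader settings already quoted in the grep output, plus the rc.d knob that actually mounts the child datasets.]

```shell
# /boot/loader.conf (already in place, per the quoted grep output):
#   zfs_load="YES"
#   vfs.root.mountfrom="zfs:zroot/root"
#
# /etc/rc.conf must additionally enable the zfs rc.d script; without it,
# "zfs mount -a" never runs at boot, so only "/" gets mounted:
echo 'zfs_enable="YES"' >> /etc/rc.conf
```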
-- Mathieu Arnold From owner-freebsd-fs@FreeBSD.ORG Sun Sep 18 11:09:47 2011 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id D03701065670; Sun, 18 Sep 2011 11:09:47 +0000 (UTC) (envelope-from mckusick@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id A720C8FC0A; Sun, 18 Sep 2011 11:09:47 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id p8IB9lat098473; Sun, 18 Sep 2011 11:09:47 GMT (envelope-from mckusick@freefall.freebsd.org) Received: (from mckusick@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id p8IB9lcT098469; Sun, 18 Sep 2011 11:09:47 GMT (envelope-from mckusick) Date: Sun, 18 Sep 2011 11:09:47 GMT Message-Id: <201109181109.p8IB9lcT098469@freefall.freebsd.org> To: mckusick@FreeBSD.org, freebsd-fs@FreeBSD.org, mckusick@FreeBSD.org From: mckusick@FreeBSD.org Cc: Subject: Re: kern/160662: [ufs] [hang] Snapshots cause a lockup on UFS with SU+J enabled X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 18 Sep 2011 11:09:47 -0000 Synopsis: [ufs] [hang] Snapshots cause a lockup on UFS with SU+J enabled Responsible-Changed-From-To: freebsd-fs->mckusick Responsible-Changed-By: mckusick Responsible-Changed-When: Sun Sep 18 11:08:54 UTC 2011 Responsible-Changed-Why: I will take responsibility for dealing with this bug. 
http://www.freebsd.org/cgi/query-pr.cgi?pr=160662 From owner-freebsd-fs@FreeBSD.ORG Sun Sep 18 11:49:52 2011 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 711691065670 for ; Sun, 18 Sep 2011 11:49:52 +0000 (UTC) (envelope-from jeremie@le-hen.org) Received: from smtp5-g21.free.fr (unknown [IPv6:2a01:e0c:1:1599::14]) by mx1.freebsd.org (Postfix) with ESMTP id E1B788FC0A for ; Sun, 18 Sep 2011 11:49:50 +0000 (UTC) Received: from endor.tataz.chchile.org (unknown [82.233.239.98]) by smtp5-g21.free.fr (Postfix) with ESMTP id B90FCD4812B; Sun, 18 Sep 2011 13:49:44 +0200 (CEST) Received: from felucia.tataz.chchile.org (felucia.tataz.chchile.org [192.168.1.9]) by endor.tataz.chchile.org (Postfix) with ESMTP id DDACC33CED; Sun, 18 Sep 2011 11:49:42 +0000 (UTC) Received: by felucia.tataz.chchile.org (Postfix, from userid 1000) id BCF72A1180; Sun, 18 Sep 2011 11:49:42 +0000 (UTC) Date: Sun, 18 Sep 2011 13:49:42 +0200 From: Jeremie Le Hen To: Mathieu Arnold Message-ID: <20110918114942.GC7930@felucia.tataz.chchile.org> References: <20110918092722.GA7930@felucia.tataz.chchile.org> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.5.21 (2010-09-15) Cc: freebsd-fs@FreeBSD.org, Jeremie Le Hen Subject: Re: ZFS on root: / is found but child datasets are not mounted X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 18 Sep 2011 11:49:52 -0000 On Sun, Sep 18, 2011 at 11:40:10AM +0200, Mathieu Arnold wrote: > +--On 18 septembre 2011 11:27:22 +0200 Jeremie Le Hen > wrote: > | The kernel boots fine, it finds the root filesystem, but fails miserably > | when running rc.d scripts because child datasets are not mounted (/var, > | /usr, ...). 
> | > | obiwan:~# cp /tmp/zpool.cache /mnt/boot/zfs/zpool.cache > | obiwan:~# grep 'zfs[:_]' /mnt/boot/loader.conf > | zfs_load="YES" > | vfs.root.mountfrom="zfs:zroot/root" > > What about zfs_enable="yes" in /etc/rc.conf ? Yeah right, someone already pointed this to me in another thread. For some reason rc.conf(5) on the ZFS disk is not there whereas all other files are there and identical (with the exception of fstab(5) of course). -- Jeremie Le Hen Men are born free and equal. Later on, they're on their own. Jean Yanne From owner-freebsd-fs@FreeBSD.ORG Sun Sep 18 11:50:14 2011 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 1E26B106566B for ; Sun, 18 Sep 2011 11:50:14 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id E74468FC13 for ; Sun, 18 Sep 2011 11:50:13 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id p8IBoDud039237 for ; Sun, 18 Sep 2011 11:50:13 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id p8IBoDWL039236; Sun, 18 Sep 2011 11:50:13 GMT (envelope-from gnats) Date: Sun, 18 Sep 2011 11:50:13 GMT Message-Id: <201109181150.p8IBoDWL039236@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org From: Allen Landsidel Cc: Subject: Re: kern/160790: [fusefs] [panic] VPUTX: negative ref count with FUSE X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: Allen Landsidel List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 18 Sep 2011 11:50:14 -0000 The following reply was made to PR kern/160790; it has been noted by GNATS. 
From: Allen Landsidel To: bug-followup@FreeBSD.org Cc: Subject: Re: kern/160790: [fusefs] [panic] VPUTX: negative ref count with FUSE Date: Sun, 18 Sep 2011 07:40:36 -0400 The crash is repeatable though the exact steps are unknown. Three identical crashes within 8-9 hours, running the same workload. From owner-freebsd-fs@FreeBSD.ORG Sun Sep 18 12:28:06 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 5AF4C106564A for ; Sun, 18 Sep 2011 12:28:06 +0000 (UTC) (envelope-from jdc@koitsu.dyndns.org) Received: from qmta13.westchester.pa.mail.comcast.net (qmta13.westchester.pa.mail.comcast.net [76.96.59.243]) by mx1.freebsd.org (Postfix) with ESMTP id C23E48FC08 for ; Sun, 18 Sep 2011 12:28:05 +0000 (UTC) Received: from omta23.westchester.pa.mail.comcast.net ([76.96.62.74]) by qmta13.westchester.pa.mail.comcast.net with comcast id aBQ31h0021c6gX85DCU6CQ; Sun, 18 Sep 2011 12:28:06 +0000 Received: from koitsu.dyndns.org ([67.180.84.87]) by omta23.westchester.pa.mail.comcast.net with comcast id aCU41h00L1t3BNj3jCU4DL; Sun, 18 Sep 2011 12:28:05 +0000 Received: by icarus.home.lan (Postfix, from userid 1000) id B84CA102C1B; Sun, 18 Sep 2011 05:28:02 -0700 (PDT) Date: Sun, 18 Sep 2011 05:28:02 -0700 From: Jeremy Chadwick To: Jeremie Le Hen Message-ID: <20110918122802.GA38941@icarus.home.lan> References: <20110918092722.GA7930@felucia.tataz.chchile.org> <20110918114942.GC7930@felucia.tataz.chchile.org> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20110918114942.GC7930@felucia.tataz.chchile.org> User-Agent: Mutt/1.5.21 (2010-09-15) Cc: freebsd-fs@FreeBSD.org, Mathieu Arnold Subject: Re: ZFS on root: / is found but child datasets are not mounted X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: 
List-Subscribe: , X-List-Received-Date: Sun, 18 Sep 2011 12:28:06 -0000 On Sun, Sep 18, 2011 at 01:49:42PM +0200, Jeremie Le Hen wrote: > On Sun, Sep 18, 2011 at 11:40:10AM +0200, Mathieu Arnold wrote: > > +--On 18 septembre 2011 11:27:22 +0200 Jeremie Le Hen > > wrote: > > | The kernel boots fine, it finds the root filesystem, but fails miserably > > | when running rc.d scripts because child datasets are not mounted (/var, > > | /usr, ...). > > | > > | obiwan:~# cp /tmp/zpool.cache /mnt/boot/zfs/zpool.cache > > | obiwan:~# grep 'zfs[:_]' /mnt/boot/loader.conf > > | zfs_load="YES" > > | vfs.root.mountfrom="zfs:zroot/root" > > > > What about zfs_enable="yes" in /etc/rc.conf ? > > Yeah right, someone already pointed this to me in another thread. > > For some reason rc.conf(5) on the ZFS disk is not there whereas all > other files are there and identical (with the exception of fstab(5) of > course). If this is a "brand new install" then I imagine it's possible for rc.conf not to exist unless you chose during sysinstall to configure the network (which would include setting hostname="xxx", etc.) or adjust post-installation options (sshd_enable="YES", etc.). -- | Jeremy Chadwick jdc at parodius.com | | Parodius Networking http://www.parodius.com/ | | UNIX Systems Administrator Mountain View, CA, US | | Making life hard for others since 1977. 
PGP 4BD6C0CB | From owner-freebsd-fs@FreeBSD.ORG Sun Sep 18 14:34:19 2011 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id D80C91065670; Sun, 18 Sep 2011 14:34:19 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id AFA218FC12; Sun, 18 Sep 2011 14:34:19 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id p8IEYJ4e096283; Sun, 18 Sep 2011 14:34:19 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id p8IEYJVP096279; Sun, 18 Sep 2011 14:34:19 GMT (envelope-from linimon) Date: Sun, 18 Sep 2011 14:34:19 GMT Message-Id: <201109181434.p8IEYJVP096279@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-amd64@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Cc: Subject: Re: kern/160801: [zfs] zfsboot on 8.2-RELEASE fails to boot from root-on-zfs in MBR slice [regression] X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 18 Sep 2011 14:34:20 -0000 Old Synopsis: zfsboot on 8.2-RELEASE fails to boot from root-on-zfs in MBR slice New Synopsis: [zfs] zfsboot on 8.2-RELEASE fails to boot from root-on-zfs in MBR slice [regression] Responsible-Changed-From-To: freebsd-amd64->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Sun Sep 18 14:33:27 UTC 2011 Responsible-Changed-Why: Over to maintainer(s). 
http://www.freebsd.org/cgi/query-pr.cgi?pr=160801 From owner-freebsd-fs@FreeBSD.ORG Sun Sep 18 19:59:46 2011 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id E13D51065677 for ; Sun, 18 Sep 2011 19:59:44 +0000 (UTC) (envelope-from jeremie@le-hen.org) Received: from smtp5-g21.free.fr (unknown [IPv6:2a01:e0c:1:1599::14]) by mx1.freebsd.org (Postfix) with ESMTP id 6460C8FC0A for ; Sun, 18 Sep 2011 19:59:41 +0000 (UTC) Received: from endor.tataz.chchile.org (unknown [82.233.239.98]) by smtp5-g21.free.fr (Postfix) with ESMTP id 7FADCD48015; Sun, 18 Sep 2011 21:59:34 +0200 (CEST) Received: from felucia.tataz.chchile.org (felucia.tataz.chchile.org [192.168.1.9]) by endor.tataz.chchile.org (Postfix) with ESMTP id 631821212; Sun, 18 Sep 2011 19:59:33 +0000 (UTC) Received: by felucia.tataz.chchile.org (Postfix, from userid 1000) id 47B8280CD; Sun, 18 Sep 2011 19:59:33 +0000 (UTC) Date: Sun, 18 Sep 2011 21:59:33 +0200 From: Jeremie Le Hen To: Jeremy Chadwick Message-ID: <20110918195933.GB88617@felucia.tataz.chchile.org> References: <20110918092722.GA7930@felucia.tataz.chchile.org> <20110918114942.GC7930@felucia.tataz.chchile.org> <20110918122802.GA38941@icarus.home.lan> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20110918122802.GA38941@icarus.home.lan> User-Agent: Mutt/1.5.21 (2010-09-15) Cc: freebsd-fs@FreeBSD.org, Jeremie Le Hen , Mathieu Arnold Subject: Re: ZFS on root: / is found but child datasets are not mounted X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 18 Sep 2011 19:59:46 -0000 Hi Jeremy, On Sun, Sep 18, 2011 at 05:28:02AM -0700, Jeremy Chadwick wrote: > On Sun, Sep 18, 2011 at 01:49:42PM +0200, Jeremie Le Hen wrote: > > On Sun, Sep 18, 
2011 at 11:40:10AM +0200, Mathieu Arnold wrote:
> > > +--On 18 septembre 2011 11:27:22 +0200 Jeremie Le Hen wrote:
> > > | The kernel boots fine, it finds the root filesystem, but fails miserably
> > > | when running rc.d scripts because child datasets are not mounted (/var,
> > > | /usr, ...).
> > > |
> > > | obiwan:~# cp /tmp/zpool.cache /mnt/boot/zfs/zpool.cache
> > > | obiwan:~# grep 'zfs[:_]' /mnt/boot/loader.conf
> > > | zfs_load="YES"
> > > | vfs.root.mountfrom="zfs:zroot/root"
> > >
> > > What about zfs_enable="yes" in /etc/rc.conf ?
> >
> > Yeah right, someone already pointed this out to me in another thread.
> >
> > For some reason rc.conf(5) on the ZFS disk is not there whereas all
> > other files are there and identical (with the exception of fstab(5) of
> > course).
>
> If this is a "brand new install" then I imagine it's possible for
> rc.conf not to exist unless you chose during sysinstall to configure the
> network (which would include setting hostname="xxx", etc.) or adjust
> post-installation options (sshd_enable="YES", etc.).

This was a migration from UFS to ZFS. I've basically copied /etc, which I verified by comparing every file in both directories. Only rc.conf was missing; the other files were there and identical.

Anyway, this was indeed the lack of zfs_enable="YES" in rc.conf! Thanks all.

Regards,
-- 
Jeremie Le Hen

Men are born free and equal. Later on, they're on their own.
Jean Yanne From owner-freebsd-fs@FreeBSD.ORG Mon Sep 19 02:23:59 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 5AC191065673 for ; Mon, 19 Sep 2011 02:23:59 +0000 (UTC) (envelope-from lisen1001@gmail.com) Received: from mail-ey0-f182.google.com (mail-ey0-f182.google.com [209.85.215.182]) by mx1.freebsd.org (Postfix) with ESMTP id E345A8FC13 for ; Mon, 19 Sep 2011 02:23:58 +0000 (UTC) Received: by eyg7 with SMTP id 7so2960572eyg.13 for ; Sun, 18 Sep 2011 19:23:57 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:date:message-id:subject:from:to:content-type; bh=IA9AHgqFJ9U+TSKteBLscDR8T3ZnbQhK42QjlCiin8M=; b=HnMOxabtG07FRmKqDy1EMkpPxTf2WiMX4UPxbTokPph81WKWND6ju+bh8uGRV9B0XW LB9gdxxuHKuR+MN40nG1WeJaWpxDlPKvJxE5Q0OkZtlcVJq1GvVpakxXwmJpUc0pCizh eerO1RFUpUw7HuCegWThRn9G7mwUKkJAjPSUc= MIME-Version: 1.0 Received: by 10.14.11.31 with SMTP id 31mr572118eew.77.1316399037775; Sun, 18 Sep 2011 19:23:57 -0700 (PDT) Received: by 10.14.189.6 with HTTP; Sun, 18 Sep 2011 19:23:57 -0700 (PDT) Date: Mon, 19 Sep 2011 10:23:57 +0800 Message-ID: From: =?GB2312?B?wO7JrQ==?= To: freebsd-fs@freebsd.org X-Mailman-Approved-At: Mon, 19 Sep 2011 02:44:46 +0000 Content-Type: text/plain; charset=GB2312 Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Subject: file lose inode in Memory-Based file system. X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 19 Sep 2011 02:23:59 -0000 

Hi all,

My system is FreeBSD 8.2.
I built a memory disk:

mdmfs -s 10G -i 512 -o rw md1 /home/test1

After a period of time, some files in the memory disk lose their inodes:

#ls
90020595.o
#ls -l 90020595.o
ls: 90020595.o: No such file or directory

It seems the inode of this file was lost. How can I solve this problem?

From owner-freebsd-fs@FreeBSD.ORG Mon Sep 19 05:18:04 2011 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 65EF2106564A; Mon, 19 Sep 2011 05:18:04 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 3DB708FC0A; Mon, 19 Sep 2011 05:18:04 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id p8J5I4NL019764; Mon, 19 Sep 2011 05:18:04 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id p8J5I465019760; Mon, 19 Sep 2011 05:18:04 GMT (envelope-from linimon) Date: Mon, 19 Sep 2011 05:18:04 GMT Message-Id: <201109190518.p8J5I465019760@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Cc: Subject: Re: kern/160706: [zfs] zfs bootloader fails when a non-root vdev exists on a slice before the root slice X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 19 Sep 2011 05:18:04 -0000 Old Synopsis: zfs bootloader fails when a non-root vdev exists on a slice before the root slice New Synopsis: [zfs] zfs bootloader fails when a non-root vdev exists on a slice before the root slice Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Mon
Sep 19 05:17:48 UTC 2011 Responsible-Changed-Why: reclassify. http://www.freebsd.org/cgi/query-pr.cgi?pr=160706 From owner-freebsd-fs@FreeBSD.ORG Mon Sep 19 08:10:11 2011 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 979CE1065674 for ; Mon, 19 Sep 2011 08:10:11 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 868FB8FC14 for ; Mon, 19 Sep 2011 08:10:11 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id p8J8ABxx003814 for ; Mon, 19 Sep 2011 08:10:11 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id p8J8AB30003813; Mon, 19 Sep 2011 08:10:11 GMT (envelope-from gnats) Date: Mon, 19 Sep 2011 08:10:11 GMT Message-Id: <201109190810.p8J8AB30003813@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org From: Andriy Gapon Cc: Subject: Re: kern/160706: [zfs] zfs bootloader fails when a non-root vdev exists on a slice before the root slice X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: Andriy Gapon List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 19 Sep 2011 08:10:11 -0000 The following reply was made to PR kern/160706; it has been noted by GNATS. From: Andriy Gapon To: bug-followup@FreeBSD.org, peter.maloney@brockmann-consult.de Cc: Subject: Re: kern/160706: [zfs] zfs bootloader fails when a non-root vdev exists on a slice before the root slice Date: Mon, 19 Sep 2011 11:05:53 +0300 I think that this is a WONTFIX bug. 
It is well known, if under-documented, that the FreeBSD (gpt)zfsboot boot program uses the pool that contains the very first vdev seen by (gpt)zfsboot to load zfsloader or the kernel. You just have to take this into account.

This behavior makes sense. Trying to boot all pools may create more problems than it solves, e.g. if you have more than one pool that can be booted.

You may try to work around your problem using a boot.config file. Place it in the root dataset of the pool that gets used by (gpt)zfsboot (tank, I presume) with the following contents:

zroot:/boot/zfsloader

P.S. Your example is incomplete, it doesn't show where root1, cache1, log1 come from :-)

-- 
Andriy Gapon

From owner-freebsd-fs@FreeBSD.ORG Mon Sep 19 09:53:29 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id A4E28106564A for ; Mon, 19 Sep 2011 09:53:29 +0000 (UTC) (envelope-from peterjeremy@acm.org) Received: from mail26.syd.optusnet.com.au (mail26.syd.optusnet.com.au [211.29.133.167]) by mx1.freebsd.org (Postfix) with ESMTP id 359228FC0A for ; Mon, 19 Sep 2011 09:53:28 +0000 (UTC) Received: from server.vk2pj.dyndns.org (c220-239-116-103.belrs4.nsw.optusnet.com.au [220.239.116.103]) by mail26.syd.optusnet.com.au (8.13.1/8.13.1) with ESMTP id p8J9rNmF015004 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Mon, 19 Sep 2011 19:53:25 +1000 X-Bogosity: Ham, spamicity=0.000000 Received: from server.vk2pj.dyndns.org (localhost.vk2pj.dyndns.org [127.0.0.1]) by server.vk2pj.dyndns.org (8.14.5/8.14.4) with ESMTP id p8J9eEhM007807; Mon, 19 Sep 2011 19:40:14 +1000 (EST) (envelope-from peter@server.vk2pj.dyndns.org) Received: (from peter@localhost) by server.vk2pj.dyndns.org (8.14.5/8.14.4/Submit) id p8J9eEX7007806; Mon, 19 Sep 2011 19:40:14 +1000 (EST) (envelope-from peter) Date: Mon, 19 Sep 2011 19:40:13 +1000 From: Peter Jeremy To: Jason Usher Message-ID:
<20110919094013.GA7771@server.vk2pj.dyndns.org> References: <1316222526.31565.YahooMailNeo@web121205.mail.ne1.yahoo.com> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="5vNYLRcllDrimb99" Content-Disposition: inline In-Reply-To: <1316222526.31565.YahooMailNeo@web121205.mail.ne1.yahoo.com> X-PGP-Key: http://members.optusnet.com.au/peterjeremy/pubkey.asc User-Agent: Mutt/1.5.21 (2010-09-15) Cc: "freebsd-fs@freebsd.org" Subject: Re: ZFS obn FreeBSD hardware model for 48 or 96 sata3 paths... X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 19 Sep 2011 09:53:29 -0000 --5vNYLRcllDrimb99 Content-Type: text/plain; charset=iso-8859-1 Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On 2011-Sep-16 18:22:06 -0700, Jason Usher wrote: >1) immediately support 48 internal sata3 drives at full bandwidth - >every drive has independent path to CPU This would seem to be overkill - no current HDD can saturate a SATA3 channel. And I suspect you will run into DRAM bandwidth issues well before you saturate 48 SATA3 channels. >Next, I see a lot of implementations done with LSI adaptors - is this >as simple as choosing (3) LSI SAS 9201-16i for the 48 internal drives >and (3) LSI SAS 9201-16e for the external drives ? I can't comment on driver support but I'd start by checking for motherboards that have 6 PCIe x16 slots with all lanes available. >I would also like to spec and use a ZIL+L2ARC and am not sure where >to go ... the system will be VERY write-biased and use a LOT of >inodes - so lots of scanning of large dirs with lots of inodes and >writing data.=A0 Something like 400 million inodes on a filesystem with >an average file size of 150 KB. ZIL will only be useful if you do lots of sync writes. L2ARC won't help write performance. 
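As a rough sanity check on the scale being described (the per-file metadata figure below is an assumption for illustration, not a number from this thread):

```sh
# 400 million files at ~150 KB each, per the post above.
files=400000000
avg_kb=150
meta_b=512                 # assumed metadata (dnode + indirect) bytes per file
echo "data:     $((files * avg_kb / 1000000000)) TB"      # ~60 TB
echo "metadata: $((files * meta_b / 1000000000)) GB"      # ~200 GB
```

At roughly half a kilobyte of metadata per file, that is on the order of 200 GB of metadata alone, which is one way to see why a large ARC helps metadata-heavy directory scans on a filesystem of this shape.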
Heavy write load implies you want mirroring rather than RAIDZ and mirroring 60TB with 48 spindles means 3TB disks. >- can I just skip the l2arc and just add more RAM ? Definitely - I'd be looking at around 200GB RAM - and you might need to tweak the ZFS (particularly) ARC parameters to suit the workload. >- provided I maintain the free pcie slot(s) and/or free 2.5" drive >slots, can I always just add a ZIL after the fact ? Yes. --=20 Peter Jeremy --5vNYLRcllDrimb99 Content-Type: application/pgp-signature -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.18 (FreeBSD) iEYEARECAAYFAk53Df0ACgkQ/opHv/APuIfD9ACfaW5bN9fubqYkKq2l7RsOyZNj nIAAn2DtKwqYkMczTrZ+m7zS8m59amzU =jJBv -----END PGP SIGNATURE----- --5vNYLRcllDrimb99-- From owner-freebsd-fs@FreeBSD.ORG Mon Sep 19 10:09:45 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id DF5D9106564A for ; Mon, 19 Sep 2011 10:09:45 +0000 (UTC) (envelope-from peterjeremy@acm.org) Received: from mail28.syd.optusnet.com.au (mail28.syd.optusnet.com.au [211.29.133.169]) by mx1.freebsd.org (Postfix) with ESMTP id 6C21F8FC0A for ; Mon, 19 Sep 2011 10:09:45 +0000 (UTC) Received: from server.vk2pj.dyndns.org (c220-239-116-103.belrs4.nsw.optusnet.com.au [220.239.116.103]) by mail28.syd.optusnet.com.au (8.13.1/8.13.1) with ESMTP id p8JA9fxt014098 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Mon, 19 Sep 2011 20:09:42 +1000 X-Bogosity: Ham, spamicity=0.000000 Received: from server.vk2pj.dyndns.org (localhost.vk2pj.dyndns.org [127.0.0.1]) by server.vk2pj.dyndns.org (8.14.5/8.14.4) with ESMTP id p8JA9en0007985; Mon, 19 Sep 2011 20:09:40 +1000 (EST) (envelope-from peter@server.vk2pj.dyndns.org) Received: (from peter@localhost) by server.vk2pj.dyndns.org (8.14.5/8.14.4/Submit) id p8JA9dNq007984; Mon, 19 Sep 2011 20:09:39 +1000 (EST) (envelope-from peter) Date: Mon, 19 Sep 2011 20:09:39 +1000 From: Peter 
Jeremy To: Jason Usher Message-ID: <20110919100939.GB7816@server.vk2pj.dyndns.org> References: <1316222526.31565.YahooMailNeo@web121205.mail.ne1.yahoo.com> <20110919094013.GA7771@server.vk2pj.dyndns.org> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="T4sUOijqQbZv57TR" Content-Disposition: inline In-Reply-To: <20110919094013.GA7771@server.vk2pj.dyndns.org> X-PGP-Key: http://members.optusnet.com.au/peterjeremy/pubkey.asc User-Agent: Mutt/1.5.21 (2010-09-15) Cc: "freebsd-fs@freebsd.org" Subject: Re: ZFS obn FreeBSD hardware model for 48 or 96 sata3 paths... X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 19 Sep 2011 10:09:46 -0000 --T4sUOijqQbZv57TR Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On 2011-Sep-19 19:40:13 +1000, Peter Jeremy wrote: >>Next, I see a lot of implementations done with LSI adaptors - is this >>as simple as choosing (3) LSI SAS 9201-16i for the 48 internal drives >>and (3) LSI SAS 9201-16e for the external drives ? > >I can't comment on driver support but I'd start by checking for >motherboards that have 6 PCIe x16 slots with all lanes available. Someone has pointed out that these are x8, not x16 cards - which should make finding a motherboard slightly easier. 
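The slot-bandwidth reasoning can be sketched with round numbers (the PCIe 2.0 per-lane rate and the per-device rates below are illustrative assumptions, not figures from this thread):

```sh
lane=500                   # MB/s usable per PCIe 2.0 lane (after 8b/10b encoding)
slot=$((8 * lane))         # an x8 HBA slot: 4000 MB/s
ports=$((16 * 600))        # 16 SATA3 ports at 600 MB/s each: 9600 MB/s theoretical
disks=$((16 * 150))        # 16 HDDs at ~150 MB/s sustained: 2400 MB/s realistic
echo "x8 slot ${slot} MB/s vs ${ports} MB/s port limit vs ${disks} MB/s of disk"
```

So an x8 slot cannot feed 16 ports at the full SATA3 rate, but it comfortably exceeds what 16 spinning disks actually deliver, which is consistent with the earlier point that no current HDD saturates a SATA3 channel.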
--=20 Peter Jeremy --T4sUOijqQbZv57TR Content-Type: application/pgp-signature -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.18 (FreeBSD) iEYEARECAAYFAk53FOMACgkQ/opHv/APuIfjMQCfSIZweCuE9KW0C6wL42ALq2Hm 7XgAnA0lUxUp0kiAlGBaX2gOGN0o+4Sz =Hlb1 -----END PGP SIGNATURE----- --T4sUOijqQbZv57TR-- From owner-freebsd-fs@FreeBSD.ORG Mon Sep 19 11:07:04 2011 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 7C38A1065701 for ; Mon, 19 Sep 2011 11:07:04 +0000 (UTC) (envelope-from owner-bugmaster@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 6993E8FC2C for ; Mon, 19 Sep 2011 11:07:04 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id p8JB74Ww073491 for ; Mon, 19 Sep 2011 11:07:04 GMT (envelope-from owner-bugmaster@FreeBSD.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id p8JB730L073489 for freebsd-fs@FreeBSD.org; Mon, 19 Sep 2011 11:07:03 GMT (envelope-from owner-bugmaster@FreeBSD.org) Date: Mon, 19 Sep 2011 11:07:03 GMT Message-Id: <201109191107.p8JB730L073489@freefall.freebsd.org> X-Authentication-Warning: freefall.freebsd.org: gnats set sender to owner-bugmaster@FreeBSD.org using -f From: FreeBSD bugmaster To: freebsd-fs@FreeBSD.org Cc: Subject: Current problem reports assigned to freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 19 Sep 2011 11:07:04 -0000 Note: to view an individual PR, use: http://www.freebsd.org/cgi/query-pr.cgi?pr=(number). The following is a listing of current problems submitted by FreeBSD users. 
These represent problem reports covering all versions including experimental development code and obsolete releases. S Tracker Resp. Description -------------------------------------------------------------------------------- o kern/160801 fs [zfs] zfsboot on 8.2-RELEASE fails to boot from root-o o kern/160790 fs [fusefs] [panic] VPUTX: negative ref count with FUSE o kern/160777 fs [zfs] [hang] RAID-Z3 causes fatal hang upon scrub/impo o kern/160706 fs [zfs] zfs bootloader fails when a non-root vdev exists o kern/160591 fs [zfs] Fail to boot on zfs root with degraded raidz2 [r o kern/160410 fs [smbfs] [hang] smbfs hangs when transferring large fil o kern/160283 fs [zfs] [patch] 'zfs list' does abort in make_dataset_ha o kern/159971 fs [ffs] [panic] panic with soft updates journaling durin o kern/159930 fs [ufs] [panic] kernel core o kern/159418 fs [tmpfs] [panic] tmpfs kernel panic: recursing on non r o kern/159402 fs [zfs][loader] symlinks cause I/O errors o kern/159357 fs [zfs] ZFS MAXNAMELEN macro has confusing name (off-by- o kern/159356 fs [zfs] [patch] ZFS NAME_ERR_DISKLIKE check is Solaris-s o kern/159351 fs [nfs] [patch] - divide by zero in mountnfs() o kern/159251 fs [zfs] [request]: add FLETCHER4 as DEDUP hash option o kern/159233 fs [ext2fs] [patch] fs/ext2fs: finish reallocblk implemen o kern/159232 fs [ext2fs] [patch] fs/ext2fs: merge ext2_readwrite into o kern/159077 fs [zfs] Can't cd .. with latest zfs version o kern/159048 fs [smbfs] smb mount corrupts large files o kern/159045 fs [zfs] [hang] ZFS scrub freezes system o kern/158839 fs [zfs] ZFS Bootloader Fails if there is a Dead Disk o kern/158802 fs [amd] amd(8) ICMP storm and unkillable process. 
o kern/158711 fs [ffs] [panic] panic in ffs_blkfree and ffs_valloc o kern/158231 fs [nullfs] panic on unmounting nullfs mounted over ufs o f kern/157929 fs [nfs] NFS slow read o kern/157722 fs [geli] unable to newfs a geli encrypted partition o kern/157399 fs [zfs] trouble with: mdconfig force delete && zfs strip o kern/157179 fs [zfs] zfs/dbuf.c: panic: solaris assert: arc_buf_remov o kern/156797 fs [zfs] [panic] Double panic with FreeBSD 9-CURRENT and o kern/156781 fs [zfs] zfs is losing the snapshot directory, p kern/156545 fs [ufs] mv could break UFS on SMP systems o kern/156193 fs [ufs] [hang] UFS snapshot hangs && deadlocks processes o kern/156168 fs [nfs] [panic] Kernel panic under concurrent access ove o kern/156039 fs [nullfs] [unionfs] nullfs + unionfs do not compose, re o kern/155615 fs [zfs] zfs v28 broken on sparc64 -current o kern/155587 fs [zfs] [panic] kernel panic with zfs o kern/155411 fs [regression] [8.2-release] [tmpfs]: mount: tmpfs : No o kern/155199 fs [ext2fs] ext3fs mounted as ext2fs gives I/O errors o bin/155104 fs [zfs][patch] use /dev prefix by default when importing o kern/154930 fs [zfs] cannot delete/unlink file from full volume -> EN o kern/154828 fs [msdosfs] Unable to create directories on external USB o kern/154491 fs [smbfs] smb_co_lock: recursive lock for object 1 o kern/154447 fs [zfs] [panic] Occasional panics - solaris assert somew p kern/154228 fs [md] md getting stuck in wdrain state o kern/153996 fs [zfs] zfs root mount error while kernel is not located o kern/153847 fs [nfs] [panic] Kernel panic from incorrect m_free in nf o kern/153753 fs [zfs] ZFS v15 - grammatical error when attempting to u o kern/153716 fs [zfs] zpool scrub time remaining is incorrect o kern/153695 fs [patch] [zfs] Booting from zpool created on 4k-sector o kern/153680 fs [xfs] 8.1 failing to mount XFS partitions o kern/153520 fs [zfs] Boot from GPT ZFS root on HP BL460c G1 unstable o kern/153418 fs [zfs] [panic] Kernel Panic occurred writing to zfs 
vol o kern/153351 fs [zfs] locking directories/files in ZFS o bin/153258 fs [patch][zfs] creating ZVOLs requires `refreservation' s kern/153173 fs [zfs] booting from a gzip-compressed dataset doesn't w o kern/153126 fs [zfs] vdev failure, zpool=peegel type=vdev.too_small p kern/152488 fs [tmpfs] [patch] mtime of file updated when only inode o kern/152022 fs [nfs] nfs service hangs with linux client [regression] o kern/151942 fs [zfs] panic during ls(1) zfs snapshot directory o kern/151905 fs [zfs] page fault under load in /sbin/zfs o kern/151845 fs [smbfs] [patch] smbfs should be upgraded to support Un o bin/151713 fs [patch] Bug in growfs(8) with respect to 32-bit overfl o kern/151648 fs [zfs] disk wait bug o kern/151629 fs [fs] [patch] Skip empty directory entries during name o kern/151330 fs [zfs] will unshare all zfs filesystem after execute a o kern/151326 fs [nfs] nfs exports fail if netgroups contain duplicate o kern/151251 fs [ufs] Can not create files on filesystem with heavy us o kern/151226 fs [zfs] can't delete zfs snapshot o kern/151111 fs [zfs] vnodes leakage during zfs unmount o kern/150503 fs [zfs] ZFS disks are UNAVAIL and corrupted after reboot o kern/150501 fs [zfs] ZFS vdev failure vdev.bad_label on amd64 o kern/150390 fs [zfs] zfs deadlock when arcmsr reports drive faulted o kern/150336 fs [nfs] mountd/nfsd became confused; refused to reload n o kern/150207 fs zpool(1): zpool import -d /dev tries to open weird dev o kern/149208 fs mksnap_ffs(8) hang/deadlock o kern/149173 fs [patch] [zfs] make OpenSolaris installa o kern/149015 fs [zfs] [patch] misc fixes for ZFS code to build on Glib o kern/149014 fs [zfs] [patch] declarations in ZFS libraries/utilities o kern/149013 fs [zfs] [patch] make ZFS makefiles use the libraries fro o kern/148504 fs [zfs] ZFS' zpool does not allow replacing drives to be o kern/148490 fs [zfs]: zpool attach - resilver bidirectionally, and re o kern/148368 fs [zfs] ZFS hanging forever on 8.1-PRERELEASE o kern/148204 fs 
[nfs] UDP NFS causes overload o kern/148138 fs [zfs] zfs raidz pool commands freeze o kern/147903 fs [zfs] [panic] Kernel panics on faulty zfs device o kern/147881 fs [zfs] [patch] ZFS "sharenfs" doesn't allow different " o kern/147790 fs [zfs] zfs set acl(mode|inherit) fails on existing zfs o kern/147560 fs [zfs] [boot] Booting 8.1-PRERELEASE raidz system take o kern/147420 fs [ufs] [panic] ufs_dirbad, nullfs, jail panic (corrupt o kern/146941 fs [zfs] [panic] Kernel Double Fault - Happens constantly o kern/146786 fs [zfs] zpool import hangs with checksum errors o kern/146708 fs [ufs] [panic] Kernel panic in softdep_disk_write_compl o kern/146528 fs [zfs] Severe memory leak in ZFS on i386 o kern/146502 fs [nfs] FreeBSD 8 NFS Client Connection to Server s kern/145712 fs [zfs] cannot offline two drives in a raidz2 configurat o kern/145411 fs [xfs] [panic] Kernel panics shortly after mounting an o bin/145309 fs bsdlabel: Editing disk label invalidates the whole dev o kern/145272 fs [zfs] [panic] Panic during boot when accessing zfs on o kern/145246 fs [ufs] dirhash in 7.3 gratuitously frees hashes when it o kern/145238 fs [zfs] [panic] kernel panic on zpool clear tank o kern/145229 fs [zfs] Vast differences in ZFS ARC behavior between 8.0 o kern/145189 fs [nfs] nfsd performs abysmally under load o kern/144929 fs [ufs] [lor] vfs_bio.c + ufs_dirhash.c p kern/144447 fs [zfs] sharenfs fsunshare() & fsshare_main() non functi o kern/144416 fs [panic] Kernel panic on online filesystem optimization s kern/144415 fs [zfs] [panic] kernel panics on boot after zfs crash o kern/144234 fs [zfs] Cannot boot machine with recent gptzfsboot code o kern/143825 fs [nfs] [panic] Kernel panic on NFS client o bin/143572 fs [zfs] zpool(1): [patch] The verbose output from iostat o kern/143212 fs [nfs] NFSv4 client strange work ... 
o kern/143184 fs [zfs] [lor] zfs/bufwait LOR o kern/142878 fs [zfs] [vfs] lock order reversal o kern/142597 fs [ext2fs] ext2fs does not work on filesystems with real o kern/142489 fs [zfs] [lor] allproc/zfs LOR o kern/142466 fs Update 7.2 -> 8.0 on Raid 1 ends with screwed raid [re o kern/142306 fs [zfs] [panic] ZFS drive (from OSX Leopard) causes two o kern/142068 fs [ufs] BSD labels are got deleted spontaneously o kern/141897 fs [msdosfs] [panic] Kernel panic. msdofs: file name leng o kern/141463 fs [nfs] [panic] Frequent kernel panics after upgrade fro o kern/141305 fs [zfs] FreeBSD ZFS+sendfile severe performance issues ( o kern/141091 fs [patch] [nullfs] fix panics with DIAGNOSTIC enabled o kern/141086 fs [nfs] [panic] panic("nfs: bioread, not dir") on FreeBS o kern/141010 fs [zfs] "zfs scrub" fails when backed by files in UFS2 o kern/140888 fs [zfs] boot fail from zfs root while the pool resilveri o kern/140661 fs [zfs] [patch] /boot/loader fails to work on a GPT/ZFS- o kern/140640 fs [zfs] snapshot crash o kern/140068 fs [smbfs] [patch] smbfs does not allow semicolon in file o kern/139725 fs [zfs] zdb(1) dumps core on i386 when examining zpool c o kern/139715 fs [zfs] vfs.numvnodes leak on busy zfs p bin/139651 fs [nfs] mount(8): read-only remount of NFS volume does n o kern/139597 fs [patch] [tmpfs] tmpfs initializes va_gen but doesn't u o kern/139564 fs [zfs] [panic] 8.0-RC1 - Fatal trap 12 at end of shutdo o kern/139407 fs [smbfs] [panic] smb mount causes system crash if remot o kern/138662 fs [panic] ffs_blkfree: freeing free block o kern/138421 fs [ufs] [patch] remove UFS label limitations o kern/138202 fs mount_msdosfs(1) see only 2Gb o kern/136968 fs [ufs] [lor] ufs/bufwait/ufs (open) o kern/136945 fs [ufs] [lor] filedesc structure/ufs (poll) o kern/136944 fs [ffs] [lor] bufwait/snaplk (fsync) o kern/136873 fs [ntfs] Missing directories/files on NTFS volume o kern/136865 fs [nfs] [patch] NFS exports atomic and on-the-fly atomic p kern/136470 fs [nfs] 
Cannot mount / in read-only, over NFS o kern/135546 fs [zfs] zfs.ko module doesn't ignore zpool.cache filenam o kern/135469 fs [ufs] [panic] kernel crash on md operation in ufs_dirb o kern/135050 fs [zfs] ZFS clears/hides disk errors on reboot o kern/134491 fs [zfs] Hot spares are rather cold... o kern/133676 fs [smbfs] [panic] umount -f'ing a vnode-based memory dis o kern/133174 fs [msdosfs] [patch] msdosfs must support multibyte inter o kern/132960 fs [ufs] [panic] panic:ffs_blkfree: freeing free frag o kern/132397 fs reboot causes filesystem corruption (failure to sync b o kern/132331 fs [ufs] [lor] LOR ufs and syncer o kern/132237 fs [msdosfs] msdosfs has problems to read MSDOS Floppy o kern/132145 fs [panic] File System Hard Crashes o kern/131441 fs [unionfs] [nullfs] unionfs and/or nullfs not combineab o kern/131360 fs [nfs] poor scaling behavior of the NFS server under lo o kern/131342 fs [nfs] mounting/unmounting of disks causes NFS to fail o bin/131341 fs makefs: error "Bad file descriptor" on the mount poin o kern/130920 fs [msdosfs] cp(1) takes 100% CPU time while copying file o kern/130210 fs [nullfs] Error by check nullfs f kern/130133 fs [panic] [zfs] 'kmem_map too small' caused by make clea o kern/129760 fs [nfs] after 'umount -f' of a stale NFS share FreeBSD l o kern/129488 fs [smbfs] Kernel "bug" when using smbfs in smbfs_smb.c: o kern/129231 fs [ufs] [patch] New UFS mount (norandom) option - mostly o kern/129152 fs [panic] non-userfriendly panic when trying to mount(8) o kern/127787 fs [lor] [ufs] Three LORs: vfslock/devfs/vfslock, ufs/vfs f kern/127375 fs [zfs] If vm.kmem_size_max>"1073741823" then write spee o bin/127270 fs fsck_msdosfs(8) may crash if BytesPerSec is zero o kern/127029 fs [panic] mount(8): trying to mount a write protected zi f kern/126703 fs [panic] [zfs] _mtx_lock_sleep: recursed on non-recursi o kern/126287 fs [ufs] [panic] Kernel panics while mounting an UFS file o kern/125895 fs [ffs] [panic] kernel: panic: ffs_blkfree: 
freeing free s kern/125738 fs [zfs] [request] SHA256 acceleration in ZFS o kern/123939 fs [msdosfs] corrupts new files f sparc/123566 fs [zfs] zpool import issue: EOVERFLOW o kern/122380 fs [ffs] ffs_valloc:dup alloc (Soekris 4801/7.0/USB Flash o bin/122172 fs [fs]: amd(8) automount daemon dies on 6.3-STABLE i386, o bin/121898 fs [nullfs] pwd(1)/getcwd(2) fails with Permission denied o bin/121366 fs [zfs] [patch] Automatic disk scrubbing from periodic(8 o bin/121072 fs [smbfs] mount_smbfs(8) cannot normally convert the cha o kern/120483 fs [ntfs] [patch] NTFS filesystem locking changes o kern/120482 fs [ntfs] [patch] Sync style changes between NetBSD and F f kern/120210 fs [zfs] [panic] reboot after panic: solaris assert: arc_ o kern/118912 fs [2tb] disk sizing/geometry problem with large array o kern/118713 fs [minidump] [patch] Display media size required for a k o bin/118249 fs [ufs] mv(1): moving a directory changes its mtime o kern/118126 fs [nfs] [patch] Poor NFS server write performance o kern/118107 fs [ntfs] [panic] Kernel panic when accessing a file at N o kern/117954 fs [ufs] dirhash on very large directories blocks the mac o bin/117315 fs [smbfs] mount_smbfs(8) and related options can't mount o kern/117314 fs [ntfs] Long-filename only NTFS fs'es cause kernel pani o kern/117158 fs [zfs] zpool scrub causes panic if geli vdevs detach on o bin/116980 fs [msdosfs] [patch] mount_msdosfs(8) resets some flags f o conf/116931 fs lack of fsck_cd9660 prevents mounting iso images with o kern/116583 fs [ffs] [hang] System freezes for short time when using o bin/115361 fs [zfs] mount(8) gets into a state where it won't set/un o kern/114955 fs [cd9660] [patch] [request] support for mask,dirmask,ui o kern/114847 fs [ntfs] [patch] [request] dirmask support for NTFS ala o kern/114676 fs [ufs] snapshot creation panics: snapacct_ufs2: bad blo o bin/114468 fs [patch] [request] add -d option to umount(8) to detach o kern/113852 fs [smbfs] smbfs does not properly implement 
DFS referral o bin/113838 fs [patch] [request] mount(8): add support for relative p o bin/113049 fs [patch] [request] make quot(8) use getopt(3) and show o kern/112658 fs [smbfs] [patch] smbfs and caching problems (resolves b o kern/111843 fs [msdosfs] Long Names of files are incorrectly created o kern/111782 fs [ufs] dump(8) fails horribly for large filesystems s bin/111146 fs [2tb] fsck(8) fails on 6T filesystem o kern/109024 fs [msdosfs] [iconv] mount_msdosfs: msdosfs_iconv: Operat o kern/109010 fs [msdosfs] can't mv directory within fat32 file system o bin/107829 fs [2TB] fdisk(8): invalid boundary checking in fdisk / w o kern/106107 fs [ufs] left-over fsck_snapshot after unfinished backgro o kern/104406 fs [ufs] Processes get stuck in "ufs" state under persist o kern/104133 fs [ext2fs] EXT2FS module corrupts EXT2/3 filesystems o kern/103035 fs [ntfs] Directories in NTFS mounted disc images appear o kern/101324 fs [smbfs] smbfs sometimes not case sensitive when it's s o kern/99290 fs [ntfs] mount_ntfs ignorant of cluster sizes s bin/97498 fs [request] newfs(8) has no option to clear the first 12 o kern/97377 fs [ntfs] [patch] syntax cleanup for ntfs_ihash.c o kern/95222 fs [cd9660] File sections on ISO9660 level 3 CDs ignored o kern/94849 fs [ufs] rename on UFS filesystem is not atomic o bin/94810 fs fsck(8) incorrectly reports 'file system marked clean' o kern/94769 fs [ufs] Multiple file deletions on multi-snapshotted fil o kern/94733 fs [smbfs] smbfs may cause double unlock o kern/93942 fs [vfs] [patch] panic: ufs_dirbad: bad dir (patch from D o kern/92272 fs [ffs] [hang] Filling a filesystem while creating a sna o kern/91134 fs [smbfs] [patch] Preserve access and modification time a kern/90815 fs [smbfs] [patch] SMBFS with character conversions somet o kern/88657 fs [smbfs] windows client hang when browsing a samba shar o kern/88555 fs [panic] ffs_blkfree: freeing free frag on AMD 64 o kern/88266 fs [smbfs] smbfs does not implement UIO_NOCOPY and sendfi o 
bin/87966 fs [patch] newfs(8): introduce -A flag for newfs to enabl o kern/87859 fs [smbfs] System reboot while umount smbfs. o kern/86587 fs [msdosfs] rm -r /PATH fails with lots of small files o bin/85494 fs fsck_ffs: unchecked use of cg_inosused macro etc. o kern/80088 fs [smbfs] Incorrect file time setting on NTFS mounted vi o bin/74779 fs Background-fsck checks one filesystem twice and omits o kern/73484 fs [ntfs] Kernel panic when doing `ls` from the client si o bin/73019 fs [ufs] fsck_ufs(8) cannot alloc 607016868 bytes for ino o kern/71774 fs [ntfs] NTFS cannot "see" files on a WinXP filesystem o bin/70600 fs fsck(8) throws files away when it can't grow lost+foun o kern/68978 fs [panic] [ufs] crashes with failing hard disk, loose po o kern/65920 fs [nwfs] Mounted Netware filesystem behaves strange o kern/65901 fs [smbfs] [patch] smbfs fails fsx write/truncate-down/tr o kern/61503 fs [smbfs] mount_smbfs does not work as non-root o kern/55617 fs [smbfs] Accessing an nsmb-mounted drive via a smb expo o kern/51685 fs [hang] Unbounded inode allocation causes kernel to loc o kern/51583 fs [nullfs] [patch] allow to work with devices and socket o kern/36566 fs [smbfs] System reboot with dead smb mount and umount o kern/33464 fs [ufs] soft update inconsistencies after system crash o bin/27687 fs fsck(8) wrapper is not properly passing options to fsc o kern/18874 fs [2TB] 32bit NFS servers export wrong negative values t 250 problems total. 
From owner-freebsd-fs@FreeBSD.ORG Mon Sep 19 12:18:04 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 212EF1065675 for ; Mon, 19 Sep 2011 12:18:04 +0000 (UTC) (envelope-from jhb@freebsd.org) Received: from cyrus.watson.org (cyrus.watson.org [65.122.17.42]) by mx1.freebsd.org (Postfix) with ESMTP id ECA6A8FC18 for ; Mon, 19 Sep 2011 12:18:03 +0000 (UTC) Received: from bigwig.baldwin.cx (66.111.2.69.static.nyinternet.net [66.111.2.69]) by cyrus.watson.org (Postfix) with ESMTPSA id 9D7BA46B45; Mon, 19 Sep 2011 08:18:03 -0400 (EDT) Received: from jhbbsd.localnet (unknown [209.249.190.124]) by bigwig.baldwin.cx (Postfix) with ESMTPSA id 2BB018A037; Mon, 19 Sep 2011 08:18:03 -0400 (EDT) From: John Baldwin To: freebsd-fs@freebsd.org, Allen Landsidel Date: Mon, 19 Sep 2011 08:18:01 -0400 User-Agent: KMail/1.13.5 (FreeBSD/8.2-CBSD-20110617; KDE/4.5.5; amd64; ; ) References: <201109181150.p8IBoDWL039236@freefall.freebsd.org> In-Reply-To: <201109181150.p8IBoDWL039236@freefall.freebsd.org> MIME-Version: 1.0 Content-Type: Text/Plain; charset="iso-8859-15" Content-Transfer-Encoding: 7bit Message-Id: <201109190818.01699.jhb@freebsd.org> X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.2.6 (bigwig.baldwin.cx); Mon, 19 Sep 2011 08:18:03 -0400 (EDT) Cc: amistry@am-productions.biz Subject: Re: kern/160790: [fusefs] [panic] VPUTX: negative ref count with FUSE X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 19 Sep 2011 12:18:04 -0000 On Sunday, September 18, 2011 7:50:13 am Allen Landsidel wrote: > The following reply was made to PR kern/160790; it has been noted by GNATS. 
> > From: Allen Landsidel > To: bug-followup@FreeBSD.org > Cc: > Subject: Re: kern/160790: [fusefs] [panic] VPUTX: negative ref count with > FUSE > Date: Sun, 18 Sep 2011 07:40:36 -0400 > > The crash is repeatable though the exact steps are unknown. Three > identical crashes within 8-9 hours, running the same workload. My first guess would be that VOP_LOOKUP() for fusefs is returning a vnode with an insufficient number of references. -- John Baldwin From owner-freebsd-fs@FreeBSD.ORG Mon Sep 19 12:20:05 2011 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 0669C106564A for ; Mon, 19 Sep 2011 12:20:05 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id D02738FC12 for ; Mon, 19 Sep 2011 12:20:04 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id p8JCK4he044114 for ; Mon, 19 Sep 2011 12:20:04 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id p8JCK4QA044113; Mon, 19 Sep 2011 12:20:04 GMT (envelope-from gnats) Date: Mon, 19 Sep 2011 12:20:04 GMT Message-Id: <201109191220.p8JCK4QA044113@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org From: John Baldwin Cc: Subject: Re: amd64/160801: zfsboot on 8.2-RELEASE fails to boot from root-on-zfs in MBR slice X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: John Baldwin List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 19 Sep 2011 12:20:05 -0000 The following reply was made to PR kern/160801; it has been noted by GNATS. 
From: John Baldwin To: freebsd-amd64@freebsd.org Cc: Camillo Särs , freebsd-gnats-submit@freebsd.org Subject: Re: amd64/160801: zfsboot on 8.2-RELEASE fails to boot from root-on-zfs in MBR slice Date: Mon, 19 Sep 2011 08:02:59 -0400 On Sunday, September 18, 2011 9:01:11 am Camillo Särs wrote: > > >Number: 160801 > >Category: amd64 > >Synopsis: zfsboot on 8.2-RELEASE fails to boot from root-on-zfs in MBR slice > >Confidential: no > >Severity: serious > >Priority: low > >Responsible: freebsd-amd64 > >State: open > >Quarter: > >Keywords: > >Date-Required: > >Class: sw-bug > >Submitter-Id: current-users > >Arrival-Date: Sun Sep 18 13:10:07 UTC 2011 > >Closed-Date: > >Last-Modified: > >Originator: Camillo Särs > >Release: 8.2-RELEASE > >Organization: > >Environment: > FreeBSD free 8.2-RELEASE FreeBSD 8.2-RELEASE #0: Thu Feb 17 02:41:51 UTC 2011 root@mason.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC amd64 > > >Description: > /boot/zfsboot when installed fails to boot from root-on-zfs in MBR slice, set up according to this: > > > I upgraded from 8.1 to 8.2-RELEASE, and consequently upgraded my zfs root pool to version 15. Upgraded the bootloader in Fixit prompt to allow booting from v15 pool. After this, the system fails to boot, and freezes after the "Boot: F1" prompt with "-" on the screen. See this thread for example screenshot: > > > MBR is used because of BIOS incompatibility with GPT as installed by FreeBSD. > >How-To-Repeat: > Set up root-on-ZFS in MBR slice on 8.2-RELEASE according to: > > > Reboot - system halts on "-". > >Fix: > Install zfsboot from 9.0-BETA2, where the problem is fixed. Can you test 8.2-stable? The various fixes made to zfsboot in 9 were merged to 8 after 8.2-release.
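For reference, the commonly documented way to (re)install zfsboot into an MBR slice is a two-step dd: the first sector of /boot/zfsboot goes at the start of the slice, and the remainder starts at sector 1024. The sketch below runs against scratch files standing in for /boot/zfsboot and the slice device (the ada0s1 name is an assumption, not from the PR), so it is runnable without touching a real disk:

```sh
# Stand-ins so the sketch is safe to run anywhere.
zfsboot=$(mktemp)   # stands in for /boot/zfsboot
slice=$(mktemp)     # stands in for the MBR slice, e.g. /dev/ada0s1
dd if=/dev/zero of="$zfsboot" bs=512 count=128  2>/dev/null
dd if=/dev/zero of="$slice"   bs=512 count=2048 2>/dev/null
# Step 1: the boot block into sector 0 of the slice.
dd if="$zfsboot" of="$slice" count=1 conv=notrunc 2>/dev/null
# Step 2: the rest of zfsboot starting at sector 1024.
dd if="$zfsboot" of="$slice" skip=1 seek=1024 conv=notrunc 2>/dev/null
```

On a real system the same two dd invocations are run with of= pointing at the slice device; the updated boot blocks take effect on the next boot.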
-- John Baldwin From owner-freebsd-fs@FreeBSD.ORG Mon Sep 19 13:48:11 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id A2C351065670 for ; Mon, 19 Sep 2011 13:48:11 +0000 (UTC) (envelope-from bfriesen@simple.dallas.tx.us) Received: from blade.simplesystems.org (blade.simplesystems.org [65.66.246.74]) by mx1.freebsd.org (Postfix) with ESMTP id 6BBD78FC20 for ; Mon, 19 Sep 2011 13:48:10 +0000 (UTC) Received: from freddy.simplesystems.org (freddy.simplesystems.org [65.66.246.65]) by blade.simplesystems.org (8.14.4+Sun/8.14.4) with ESMTP id p8JDm8Vq014837; Mon, 19 Sep 2011 08:48:08 -0500 (CDT) Date: Mon, 19 Sep 2011 08:48:08 -0500 (CDT) From: Bob Friesenhahn X-X-Sender: bfriesen@freddy.simplesystems.org To: Peter Jeremy In-Reply-To: <20110919094013.GA7771@server.vk2pj.dyndns.org> Message-ID: References: <1316222526.31565.YahooMailNeo@web121205.mail.ne1.yahoo.com> <20110919094013.GA7771@server.vk2pj.dyndns.org> User-Agent: Alpine 2.01 (GSO 1266 2009-07-14) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.2.2 (blade.simplesystems.org [65.66.246.90]); Mon, 19 Sep 2011 08:48:08 -0500 (CDT) Cc: "freebsd-fs@freebsd.org" Subject: Re: ZFS obn FreeBSD hardware model for 48 or 96 sata3 paths... X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 19 Sep 2011 13:48:11 -0000 On Mon, 19 Sep 2011, Peter Jeremy wrote: > > ZIL will only be useful if you do lots of sync writes. L2ARC won't > help write performance. Heavy write load implies you want mirroring L2ARC can substantially help write performance in the case where a partial block is updated. It avoids the 'read' part of the read/modify/write cycle.
Of course it only helps if the L2ARC is much more responsive than the main store. Bob -- Bob Friesenhahn bfriesen@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer, http://www.GraphicsMagick.org/ From owner-freebsd-fs@FreeBSD.ORG Mon Sep 19 14:30:11 2011 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 508F71065670 for ; Mon, 19 Sep 2011 14:30:11 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 4024C8FC0A for ; Mon, 19 Sep 2011 14:30:11 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id p8JEUAaV063028 for ; Mon, 19 Sep 2011 14:30:10 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id p8JEUA03063023; Mon, 19 Sep 2011 14:30:10 GMT (envelope-from gnats) Date: Mon, 19 Sep 2011 14:30:10 GMT Message-Id: <201109191430.p8JEUA03063023@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org From: =?ISO-8859-15?Q?Camillo_S=E4rs?= Cc: Subject: Re: amd64/160801: zfsboot on 8.2-RELEASE fails to boot from root-on-zfs in MBR slice X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: =?ISO-8859-15?Q?Camillo_S=E4rs?= List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 19 Sep 2011 14:30:11 -0000 The following reply was made to PR kern/160801; it has been noted by GNATS. 
From: =?ISO-8859-15?Q?Camillo_S=E4rs?= To: John Baldwin Cc: freebsd-amd64@freebsd.org, freebsd-gnats-submit@freebsd.org Subject: Re: amd64/160801: zfsboot on 8.2-RELEASE fails to boot from root-on-zfs in MBR slice Date: Mon, 19 Sep 2011 17:07:26 +0300 Hi, On 2011-09-19 15:02, John Baldwin wrote: >> Install zfsboot from 9.0-BETA2, where the problem is fixed. > > Can you test 8.2-stable? The various fixes made to zfsboot in 9 were merged > to 8 after 8.2-release. Unfortunately fixing this issue by installing zfsboot from 9.0-BETA2 was a surprising amount of work, because of an incompatibility between the 9.0 USB installer GPT and the BIOS on this system. It took quite a while to recognize the root cause for that one. I simply cannot boot the system in question with the GPT pmbr used on the memstick of 9.0. The BIOS locks completely. I am very reluctant to risk breaking my currently running system, the previous boot failure caused almost two weeks of downtime. Does the 8.2-stable memstick image still use MBR? If so, I could conceivably try to copy the 9.0 zfsboot version to the 8.2-stable memstick and test both. 
Regards, Camillo From owner-freebsd-fs@FreeBSD.ORG Mon Sep 19 15:29:42 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 487BC106566B for ; Mon, 19 Sep 2011 15:29:42 +0000 (UTC) (envelope-from rabgvzr@gmail.com) Received: from mail-yw0-f54.google.com (mail-yw0-f54.google.com [209.85.213.54]) by mx1.freebsd.org (Postfix) with ESMTP id 0EC198FC13 for ; Mon, 19 Sep 2011 15:29:41 +0000 (UTC) Received: by ywp17 with SMTP id 17so5196586ywp.13 for ; Mon, 19 Sep 2011 08:29:41 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:date:message-id:subject:from:to:content-type; bh=H4YGPFchLF+rysCWV4c1AX6C4i/HVslR+ONykAsKulQ=; b=ihz7GoALqdoe+l7PHog69VNmHryXLHBrXSuYqVBiRNnglPW3tvayeLwt5g4daNbUfF ziubWltz9vjc4QKd9VkzETw9ZCWyrX2DUjMpdN9t/rT/Uj4L7VnbC4LYMFEYGKY6KTV/ F1tXBfipWmFy4jvq9WA2/1DEA1KzeWqnFMztg= MIME-Version: 1.0 Received: by 10.236.80.74 with SMTP id j50mr14842979yhe.131.1316446181164; Mon, 19 Sep 2011 08:29:41 -0700 (PDT) Received: by 10.236.43.167 with HTTP; Mon, 19 Sep 2011 08:29:41 -0700 (PDT) Date: Mon, 19 Sep 2011 11:29:41 -0400 Message-ID: From: Rotate 13 To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=UTF-8 Subject: ZFS: deferring automounts/mounting root without bootfs [9.0-BETA2] X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 19 Sep 2011 15:29:42 -0000 9.0-BETA2 system is booted off removable UFS volume, but root is mounted from ZFS. I try to meet the following two goals: 1. Not use bootfs property (too many limitations mentioned in docs) 2. Use ZFS inheritable mountpoints and management (not clutter up /etc/fstab... and not set mountpoint= on each child dataset!) Config info is below. 
Result: System boots, but hangs with init: can't exec getty '/usr/libexec/getty' for port /dev/ttyv0: No such file or directory (and many similar messages) I think I need some way other than bootfs to defer ZFS automatic mounting until after / is mounted. Apparently mount is not functioning right. I am guessing that ZFS tries to mount tank/usr, tank/var and so forth before root is mounted (thus the mounts fail). However this is just a guess - based on what happens when I zpool import -f -o altroot. For obvious reasons, I don't have logs of the above problem; and I cannot review the messages that scrolled by, as this keyboard completely lacks a scroll lock key (guilty: Dell). Simplified config (have tried a number of subtle variations): zpool create -O canmount=off -O mountpoint=/ -O setuid=off tank /path/to/disk zfs create -o mountpoint=legacy -o setuid=on tank/root [...create datasets for /usr, /var, and so forth, inheriting root mountpoint...] = On UFS volume: = /boot/etc/fstab: tank/root / zfs rw,noatime 0 0 /path/to/ufsboot /boot ufs rw,noatime 0 0 /boot/loader.conf zfs_load="YES" vfs.root.mountfrom="zfs:tank/root" = On ZFS volume: = /etc/fstab: tank/root / zfs rw,noatime 0 0 /etc/rc.conf: zfs_enable="YES" (also tried placing this on UFS volume in /boot/etc/rc.conf) /boot on ZFS is kept in sync with /boot on UFS volume. Note, zpool is exported/imported and zpool.cache properly placed in /boot/zfs; before I did that, I got mountroot followed by panic. Off hand note: I get lots of lock order reversals mounting filesystems on 9.0-BETA2, but they are not specific to ZFS. Thanks for any advice on making this unusual, but very useful, configuration work. 
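A quick way to sanity-check a layout like the one above is to list which datasets ZFS will try to auto-mount and where; any second dataset claiming '/' is a red flag. This is only a diagnostic sketch against the pool name used in the post (tank), and it assumes a live, imported pool:

```shell
# List every dataset with its canmount and mountpoint settings.
# With the layout above, only tank/root should map to / (as "legacy"),
# and tank itself should show canmount=off.
zfs list -r -o name,canmount,mountpoint tank

# Flag datasets that would auto-mount directly onto / -- there
# should be none besides the legacy-mounted root dataset.
zfs list -r -H -o name,canmount,mountpoint tank | \
    awk '$2 == "on" && $3 == "/" { print "conflict:", $1 }'
```

If the awk filter prints anything, that dataset will shadow the real root when the rc scripts run zfs mount -a.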
From owner-freebsd-fs@FreeBSD.ORG Mon Sep 19 15:44:22 2011 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 413C6106566C for ; Mon, 19 Sep 2011 15:44:22 +0000 (UTC) (envelope-from avg@FreeBSD.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id 919E18FC1D for ; Mon, 19 Sep 2011 15:44:21 +0000 (UTC) Received: from odyssey.starpoint.kiev.ua (alpha-e.starpoint.kiev.ua [212.40.38.101]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id SAA13583; Mon, 19 Sep 2011 18:44:18 +0300 (EEST) (envelope-from avg@FreeBSD.org) Message-ID: <4E776352.30702@FreeBSD.org> Date: Mon, 19 Sep 2011 18:44:18 +0300 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:6.0.2) Gecko/20110907 Thunderbird/6.0.2 MIME-Version: 1.0 To: Rotate 13 References: In-Reply-To: X-Enigmail-Version: undefined Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Cc: freebsd-fs@FreeBSD.org Subject: Re: ZFS: deferring automounts/mounting root without bootfs [9.0-BETA2] X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 19 Sep 2011 15:44:22 -0000 on 19/09/2011 18:29 Rotate 13 said the following: > 9.0-BETA2 system is booted off removable UFS volume, but root is > mounted from ZFS. I try to meet the following two goals: > > 1. Not use bootfs property (too many limitations mentioned in docs) > 2. Use ZFS inheritable mountpoints and management (not clutter up > /etc/fstab... and not set mountpoint= on each child dataset!) > > Config info is below. 
Result: System boots, but hangs with > > init: can't exec getty '/usr/libexec/getty' for port /dev/ttyv0: No > such file or directory This looks like devfs (/dev) is either not mounted or something is mounted over it. I think that you should check if any other auto-mountable dataset in your pool has a mountpoint of '/'. Or the root dataset of tank is still mounted for some reason or something like that. -- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Mon Sep 19 16:03:46 2011 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id E58B3106566B for ; Mon, 19 Sep 2011 16:03:46 +0000 (UTC) (envelope-from avg@FreeBSD.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id 298A48FC0A for ; Mon, 19 Sep 2011 16:03:45 +0000 (UTC) Received: from odyssey.starpoint.kiev.ua (alpha-e.starpoint.kiev.ua [212.40.38.101]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id TAA13885; Mon, 19 Sep 2011 19:03:41 +0300 (EEST) (envelope-from avg@FreeBSD.org) Message-ID: <4E7767DD.5030208@FreeBSD.org> Date: Mon, 19 Sep 2011 19:03:41 +0300 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:6.0.2) Gecko/20110907 Thunderbird/6.0.2 MIME-Version: 1.0 To: =?UTF-8?B?Q2FtaWxsbyBTw6Rycw==?= References: <201109191430.p8JEUA03063023@freefall.freebsd.org> In-Reply-To: <201109191430.p8JEUA03063023@freefall.freebsd.org> X-Enigmail-Version: undefined Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Cc: freebsd-fs@FreeBSD.org Subject: Re: amd64/160801: zfsboot on 8.2-RELEASE fails to boot from root-on-zfs in MBR slice X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 19 Sep 2011 16:03:47 -0000 on 19/09/2011 17:30 Camillo Särs said the 
following: > Does the 8.2-stable memstick image still use MBR? If so, I could > conceivably try to copy the 9.0 zfsboot version to the 8.2-stable > memstick and test both. Perhaps create your own? It should be relatively easy to do with the mfsBSD tools: http://mfsbsd.vx.sk/ As far as I can tell, the tools create an image in a "dangerously dedicated" mode. -- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Mon Sep 19 16:15:11 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id ADB071065672 for ; Mon, 19 Sep 2011 16:15:11 +0000 (UTC) (envelope-from rabgvzr@gmail.com) Received: from mail-gw0-f50.google.com (mail-gw0-f50.google.com [74.125.83.50]) by mx1.freebsd.org (Postfix) with ESMTP id 5D0C68FC14 for ; Mon, 19 Sep 2011 16:15:11 +0000 (UTC) Received: by gwj16 with SMTP id 16so6033851gwj.37 for ; Mon, 19 Sep 2011 09:15:10 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=PwNGmEkdtYS/TYaDKdTh1sNEbFeMSe8BqoZLAzzU9Fo=; b=i5QoY7vxEO+B9FqTUZUhOP5W6iw1GsG40qJ/GwIA+H0AZnW+/mOuMR04i0Dp1OOcXF g+pjIymWkSMDlznt5Z5SXM0ir8EMV9yild7rOUT8A7uwdWzoxzMuNsenNIPFmjJl2zNM AdouWT8ZT8VOEPc+el6l1v4F+6vSzAuOowurU= MIME-Version: 1.0 Received: by 10.236.155.4 with SMTP id i4mr15114918yhk.34.1316448910610; Mon, 19 Sep 2011 09:15:10 -0700 (PDT) Received: by 10.236.43.167 with HTTP; Mon, 19 Sep 2011 09:15:10 -0700 (PDT) In-Reply-To: <4E776352.30702@FreeBSD.org> References: <4E776352.30702@FreeBSD.org> Date: Mon, 19 Sep 2011 12:15:10 -0400 Message-ID: From: Rotate 13 To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=UTF-8 Cc: Andriy Gapon Subject: Re: ZFS: deferring automounts/mounting root without bootfs [9.0-BETA2] X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: 
List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 19 Sep 2011 16:15:11 -0000 On Mon, 19 Sep 2011 11:44:18 -0400, Andriy Gapon wrote: > on 19/09/2011 18:29 Rotate 13 said the following: >> 9.0-BETA2 system is booted off removable UFS volume, but root is >> mounted from ZFS. I try to meet the following two goals: >> >> 1. Not use bootfs property (too many limitations mentioned in docs) > >> 2. Use ZFS inheritable mountpoints and management (not clutter up >> /etc/fstab... and not set mountpoint= on each child dataset!) >> >> Config info is below. Result: System boots, but hangs with >> >> init: can't exec getty '/usr/libexec/getty' for port /dev/ttyv0: No >> such file or directory > > This looks like devfs (/dev) is either not mounted or something is > mounted over > it. I think that you should check if any other auto-mountable dataset > in your > pool has a mountpoint of '/'. Or the root dataset of tank is still > mounted for > some reason or something like that. Thanks for the quick reply. No /dev was my first thought too. But I also saw other messages scroll by about being unable to write in /var, which is on ZFS itself. So I think the "No such file or directory" is probably for /usr/libexec/getty (cannot read /usr). Note also, the root dataset is canmount=off - it should never be mounted to begin with - and nothing except the root dataset and tank/root has a / mountpoint. I will see what I can do to verify devfs is being mounted, but definitely at least some ZFS dataset(s) are the problem. Which brings me back to my original question. It is difficult to diagnose when the system won't write logs to /var - it could be a very simple misconfiguration, or it could be a bug. Manuals don't say a lot about mount order on boot, and that remains my suspicion, based on the behavior when I zpool import -f from a rescue shell: I can't mount /usr, /var, etc. until after tank/root is manually mounted, but after that, zfs mount -a is magic. 
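The manual recovery sequence described in this thread (import under an altroot, mount the legacy root dataset by hand, then let ZFS mount the rest) can be sketched as follows. This is a hedged outline from a rescue shell, assuming the names used in the thread (pool tank, dataset tank/root) and /mnt as a scratch altroot:

```shell
# Import the pool without touching its real paths; altroot
# prefixes every dataset mountpoint with /mnt.
zpool import -f -o altroot=/mnt tank

# tank/root uses mountpoint=legacy, so ZFS will not auto-mount it;
# it must be mounted by hand first.
mount -t zfs tank/root /mnt

# With the root dataset in place, the children inheriting the /
# mountpoint (/usr, /var, ...) can all be mounted in one step.
zfs mount -a
```

This matches the observed behavior: the children cannot mount until their parent's mountpoint actually exists in the tree.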
From owner-freebsd-fs@FreeBSD.ORG Mon Sep 19 17:46:01 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id C5F92106564A for ; Mon, 19 Sep 2011 17:46:01 +0000 (UTC) (envelope-from ben@altesco.nl) Received: from altus-escon.com (altesco.xs4all.nl [82.95.106.39]) by mx1.freebsd.org (Postfix) with ESMTP id 49F1D8FC12 for ; Mon, 19 Sep 2011 17:46:00 +0000 (UTC) Received: from giskard.altus-escon.com (giskard.altus-escon.com [193.78.231.1]) by altus-escon.com (8.14.4/8.14.4) with ESMTP id p8JH8eQc086651 for ; Mon, 19 Sep 2011 19:08:45 +0200 (CEST) (envelope-from ben@altesco.nl) From: Ben Stuyts Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: quoted-printable Date: Mon, 19 Sep 2011 19:08:40 +0200 Message-Id: <9774D03B-A8C7-48DE-9BC4-528DD4134787@altesco.nl> To: freebsd-fs@freebsd.org Mime-Version: 1.0 (Apple Message framework v1250.3) X-Mailer: Apple Mail (2.1250.3) X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.2.6 (altus-escon.com [193.78.231.142]); Mon, 19 Sep 2011 19:08:45 +0200 (CEST) X-Virus-Scanned: clamav-milter 0.97 at mars.altus-escon.com X-Virus-Status: Clean X-Spam-Status: No, score=-3.4 required=3.5 tests=AWL,BAYES_00 autolearn=ham version=3.3.1 X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on mars.altus-escon.com Subject: ZFS auto expand mirror X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 19 Sep 2011 17:46:01 -0000 Hi, I want to expand an existing mirror by replacing the existing drives with bigger ones. 
This is on: FreeBSD xxx 7.3-STABLE FreeBSD 7.3-STABLE #2: Mon Sep 20 18:36:08 CEST 2010 root@xxx:/usr/obj/usr/src/sys/xxx amd64 # zpool status home pool: home state: ONLINE scrub: scrub completed after 2h0m with 0 errors on Mon Sep 19 18:25:45 2011 config: NAME STATE READ WRITE CKSUM home ONLINE 0 0 0 mirror ONLINE 0 0 0 ad5s1a ONLINE 0 0 0 ad7s1a ONLINE 0 0 0 Will this version of FreeBSD auto-expand to the new, bigger drive size once they are both replaced? I did not see the autoexpand property in this pool. zpool is v13, zfs is v3. Kind regards, Ben From owner-freebsd-fs@FreeBSD.ORG Mon Sep 19 18:18:22 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 3E821106566B for ; Mon, 19 Sep 2011 18:18:22 +0000 (UTC) (envelope-from fjwcash@gmail.com) Received: from mail-vw0-f45.google.com (mail-vw0-f45.google.com [209.85.212.45]) by mx1.freebsd.org (Postfix) with ESMTP id ED5F78FC17 for ; Mon, 19 Sep 2011 18:18:20 +0000 (UTC) Received: by vws17 with SMTP id 17so30979881vws.18 for ; Mon, 19 Sep 2011 11:18:18 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=YLZl4GeVLx5oQb2nwkZfLv18PJHtsAixgDYoSJkGkYA=; b=OPme5ioSIMKuv6I9Uy/lrZHdt6Ji4M57KLTig7ElnyLpglfvLG4qSKEuw7n+5MIGIA eE/VVm/Ve0HbHOHLshRgIiIaa0S2x3/FFUaAQSenwOFCuxJbJCLpsIuJ421bjVgWDxmJ qYD+mA6j4rg0mPfbAiYPLdqBjSOroFRG0R4G0= MIME-Version: 1.0 Received: by 10.52.176.196 with SMTP id ck4mr2260496vdc.168.1316454596707; Mon, 19 Sep 2011 10:49:56 -0700 (PDT) Received: by 10.220.198.130 with HTTP; Mon, 19 Sep 2011 10:49:56 -0700 (PDT) In-Reply-To: <9774D03B-A8C7-48DE-9BC4-528DD4134787@altesco.nl> References: <9774D03B-A8C7-48DE-9BC4-528DD4134787@altesco.nl> Date: Mon, 19 Sep 2011 10:49:56 -0700 Message-ID: From: Freddie Cash To: Ben Stuyts Content-Type: text/plain; 
charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Cc: freebsd-fs@freebsd.org Subject: Re: ZFS auto expand mirror X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 19 Sep 2011 18:18:22 -0000 On Mon, Sep 19, 2011 at 10:08 AM, Ben Stuyts wrote: > I want to expand an existing mirror by replacing the existing drives with > bigger ones. This is on: > FreeBSD xxx 7.3-STABLE FreeBSD 7.3-STABLE #2: Mon Sep 20 18:36:08 CEST 2010 > root@xxx:/usr/obj/usr/src/sys/xxx amd64 > > # zpool status home > pool: home > state: ONLINE > scrub: scrub completed after 2h0m with 0 errors on Mon Sep 19 18:25:45 > 2011 > config: > > NAME STATE READ WRITE CKSUM > home ONLINE 0 0 0 > mirror ONLINE 0 0 0 > ad5s1a ONLINE 0 0 0 > ad7s1a ONLINE 0 0 0 > > Will this version of FreeBSD auto-expand to the new, bigger drive size once > they are both replaced? I did not see the autoexpand property in this pool. > zpool is v13, zfs is v3. > No. You will need to reboot the system in order for the extra space to become usable in the pool. Or, if none of the OS is installed on the pool, you can export/import the pool to make the new space available. 
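The replace-one-disk-at-a-time procedure being discussed might look like the following sketch. Device names ad10s1a and ad12s1a are placeholders for the new disks; on a pool this old (zpool v13) there is no autoexpand property, so the export/import (or reboot) at the end is what exposes the new space:

```shell
# Replace the first side of the mirror and wait for resilver to finish.
zpool replace home ad5s1a ad10s1a   # ad10s1a: hypothetical new disk
zpool status home                   # repeat until resilver completes

# Then replace the second side the same way.
zpool replace home ad7s1a ad12s1a   # ad12s1a: hypothetical new disk

# zpool v13 has no autoexpand; re-open the devices so the larger
# size is noticed (only safe if the OS does not live on this pool).
zpool export home
zpool import home
```

Resilvering both sides before touching the pool size is essential: until the second replace completes, the mirror is still limited by the smaller disk.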
-- Freddie Cash fjwcash@gmail.com From owner-freebsd-fs@FreeBSD.ORG Mon Sep 19 18:49:46 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 7C0141065670 for ; Mon, 19 Sep 2011 18:49:46 +0000 (UTC) (envelope-from jusher71@yahoo.com) Received: from nm23-vm2.bullet.mail.ne1.yahoo.com (nm23-vm2.bullet.mail.ne1.yahoo.com [98.138.91.211]) by mx1.freebsd.org (Postfix) with SMTP id 39F118FC0C for ; Mon, 19 Sep 2011 18:49:46 +0000 (UTC) Received: from [98.138.90.56] by nm23.bullet.mail.ne1.yahoo.com with NNFMP; 19 Sep 2011 18:49:45 -0000 Received: from [98.138.88.238] by tm9.bullet.mail.ne1.yahoo.com with NNFMP; 19 Sep 2011 18:49:45 -0000 Received: from [127.0.0.1] by omp1038.mail.ne1.yahoo.com with NNFMP; 19 Sep 2011 18:49:45 -0000 X-Yahoo-Newman-Property: ymail-3 X-Yahoo-Newman-Id: 862152.24422.bm@omp1038.mail.ne1.yahoo.com Received: (qmail 42274 invoked by uid 60001); 19 Sep 2011 18:49:45 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024; t=1316458185; bh=owDhR3NtxYfeEEzUiUQgSHbIt5FQa3DNmq1y15Tuz+Q=; h=X-YMail-OSG:Received:X-Mailer:Message-ID:Date:From:Subject:To:Cc:MIME-Version:Content-Type:Content-Transfer-Encoding; b=1Vw1NOm1Yi/jw9LUZCDt00faJKwpSUG2wMemrSJmH4YgaDZfU6RQuWbX8BshXNkiCUhJ5TT/7S2qkiI52nBNfuMnhLBw+OV5Fc2XMz+aElo07exuChD0LcUsJ+1jKYRJEB3a2IcEPJIY+spMz2cSvye7lMj9VJ2dEvhjdBYALqE= DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com; h=X-YMail-OSG:Received:X-Mailer:Message-ID:Date:From:Subject:To:Cc:MIME-Version:Content-Type:Content-Transfer-Encoding; b=cH6RyNR+1Gk/g4qfyNy8vUEjEtdIt4EyOGoLEX7IJ3BrJVL64T/d0pAnuMQBbs5l2LDRHdFem8N6kZOJjUVjmC3C/zUYdoSv5m7HDRHf5uPEDUV8OoamFxL3MIlQaYxE4LqEq1cgigRGxYJF4KF3nWCNl66ZC8b4ypWHTavINeE=; X-YMail-OSG: YbTCLaoVM1mxpzpzUtlkjJR9Gv7DjsxbeIedioMIclvWBNq jB0n_eyFYM6jJ0k4pcLOFsIHNExZZ8y9JE0Ag5xq9xHN7zAUXdtQtAhx_25R 
O1xnskg8eXw9Kq04woqdA1evBj_tYY4R7.qaZZWFZ5gBjcIj5XMnUk3XDgdw 6.XI3SNqSwv6Qy2dLAiDl1zphg3AvC4E46ASgLOA1GjWCxlnOkhqIVJYHEec iMsDcoNNRDmdeIDbPnpmfu.rzKKM.oXpaiWxwVidkleDIWyiVYRkFLEW3te2 do9i7m.2nxguo9F8xG3kLhDJet2zyj5dh..jYnLI4IKTntp3buBOfw_j4vW5 P21mo2QbuUMYvDZK86JQ5wZa.db55x2S91nbsBlRNpX20exJRJwb7kvlJngG Dwot6GQ-- Received: from [192.251.226.205] by web121207.mail.ne1.yahoo.com via HTTP; Mon, 19 Sep 2011 11:49:45 PDT X-Mailer: YahooMailClassic/14.0.5 YahooMailWebService/0.8.114.317681 Message-ID: <1316458185.42258.YahooMailClassic@web121207.mail.ne1.yahoo.com> Date: Mon, 19 Sep 2011 11:49:45 -0700 (PDT) From: Jason Usher To: Joshua Boyd , Julian Elischer MIME-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 Content-Transfer-Encoding: quoted-printable Cc: "freebsd-fs@freebsd.org" Subject: Re: ZFS obn FreeBSD hardware model for 48 or 96 sata3 paths... X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 19 Sep 2011 18:49:46 -0000 --- On Fri, 9/16/11, Julian Elischer wrote: what is it you are trying to achieve? large storage, or high transaction rates? (or both?) I'm biased but I'd put a 160GB zil on a fusion-io card and dedicate 8G of the ram to its usage. it's remarkable what a 20uSec turnaround time on your metadata can do.. --- I am optimizing for storage space - as much as possible for the budget. So we are going with SATA3 drives (the 3TB ones) and raidz3 (even though it is a write-intensive application). BUT, in the context of those miserly constraints, I'm trying to figure out what mistakes to avoid and what optimizations are worth making... The fusion-io would be great, but another $5k (or more) is not in the budget. I'm glad we can always add a ZIL after the fact, and maybe we will do so with a cheaper (OCZ ?) 
pcie based SSD... From owner-freebsd-fs@FreeBSD.ORG Mon Sep 19 18:52:21 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 013011065672 for ; Mon, 19 Sep 2011 18:52:21 +0000 (UTC) (envelope-from jusher71@yahoo.com) Received: from nm34-vm2.bullet.mail.ne1.yahoo.com (nm34-vm2.bullet.mail.ne1.yahoo.com [98.138.229.82]) by mx1.freebsd.org (Postfix) with SMTP id A204B8FC16 for ; Mon, 19 Sep 2011 18:52:20 +0000 (UTC) Received: from [98.138.90.51] by nm34.bullet.mail.ne1.yahoo.com with NNFMP; 19 Sep 2011 18:52:20 -0000 Received: from [98.138.89.233] by tm4.bullet.mail.ne1.yahoo.com with NNFMP; 19 Sep 2011 18:52:20 -0000 Received: from [127.0.0.1] by omp1048.mail.ne1.yahoo.com with NNFMP; 19 Sep 2011 18:52:20 -0000 X-Yahoo-Newman-Property: ymail-3 X-Yahoo-Newman-Id: 44129.21879.bm@omp1048.mail.ne1.yahoo.com Received: (qmail 94627 invoked by uid 60001); 19 Sep 2011 18:52:20 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024; t=1316458339; bh=CXIRTo1wBLXJpnMu23lUQQPu+VS9vR2ou63Q2rID/N8=; h=X-YMail-OSG:Received:X-Mailer:Message-ID:Date:From:Subject:To:In-Reply-To:MIME-Version:Content-Type; b=yyI9YFYVknCQ6ZOV8B6wqD1/Hah4xh7Kpx6bkl/y1NlO3u27O/nhxIigus0Hjd1T7LKw6k8Z4WRJYfIXT2BA5btTdcYflUgv1SQa4mqkCaJjfT92tDYA5ZPKR2DEZswyHxlWzqllIOWZDT3/7+haYTAx99xL4TYW1DaTMNjFzzI= DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com; h=X-YMail-OSG:Received:X-Mailer:Message-ID:Date:From:Subject:To:In-Reply-To:MIME-Version:Content-Type; b=LnQN25y9T4OulwlvmykkiUNs0Fu+hsVzUHnuJQwWrcW6BBJnP0zfKzZ46uxsefV22TSVdmHdHBB4FKePpVrGXfMSDuK5ZywFlZqVzYrvNa6KyO8NwXN+nb1etFkFVcCHcg3MzveidiwoOXTMGiuZZRszuHE6penyhdhBRmcdNDM=; X-YMail-OSG: OTLAMU4VM1npm7FBweMWiD_p1d71jmriRYeoXE2Op.mxIU0 em1Um8s8RhW7s4BY3mrzYa0Wcjn67hXmhBEbb7bWGEB4LlrwY31qRYp8h4QJ _O8eZi.OoLgahveR1JUbmkE9S8DqE7pyQ1d0uopdeZyk2WiPlQBeGYEDHNRt 
Ob0VPTx81t_yEPsKtDtup9a1rnQFeMony.MA8rtceUs.bsAvGZ2Z.SS.MnAZ kUOxsYObnIBv4Oa3DawTpzXQxCD.rp0LT1HnKHM7ag0aFowZvl3wJwHmd.9h F80GnUei0OZG.W4MHhLG5A4OU4BSLn89XsXaZMsTthzDX7SRZMBJ3MzbIsmt YDVh3Y5r_NHqUx8HFok73Jj10vvkl8qydBCWBMoPyBQiVXdfMwuFfs8KZMMi 0KWk- Received: from [80.237.226.76] by web121216.mail.ne1.yahoo.com via HTTP; Mon, 19 Sep 2011 11:52:19 PDT X-Mailer: YahooMailClassic/14.0.5 YahooMailWebService/0.8.114.317681 Message-ID: <1316458339.73301.YahooMailClassic@web121216.mail.ne1.yahoo.com> Date: Mon, 19 Sep 2011 11:52:19 -0700 (PDT) From: Jason Usher To: freebsd-fs@freebsd.org In-Reply-To: MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Subject: Re: ZFS obn FreeBSD hardware model for 48 or 96 sata3 paths... X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 19 Sep 2011 18:52:21 -0000 --- On Fri, 9/16/11, Rich wrote: > To get full bandwidth SATA 3 from 48/96 drives, that's 750 > MB/s * 8/10 > (8 data bytes per 10 bytes transmitted raw - SATA 3 does an > 8b10b > encoding) ~ 600 MB/s * 48/96 = 28800/57600 MB/s > > PCIe 2.x is 500 MB/s per lane, so that's 57/114 lanes of > PCIe 2.x to > do full bandwidth. > > And that all assumes you have sufficient memory bandwidth > anyway. But does that exist ? I do not see any motherboards with even 64 lanes of pcie 2.0, much less 128 of them ... I'm seeing 32 @ pcie2.0 ... I guess 7 slots @ 16x would be 112 lanes, yes ? I just don't see a board like that in existence... 
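Rich's arithmetic above can be checked with a few lines of shell; nothing here is FreeBSD-specific, it is just the 8b/10b payload math (SATA 3 signals at 6 Gbit/s = 750 MB/s raw, of which 8/10 is payload):

```shell
#!/bin/sh
# Per-drive payload bandwidth: 750 MB/s raw * 8/10 (8b10b coding).
per_drive=$((750 * 8 / 10))            # 600 MB/s

# Aggregate for 48 and 96 drives.
total48=$((per_drive * 48))            # 28800 MB/s
total96=$((per_drive * 96))            # 57600 MB/s

# PCIe 2.x lanes at 500 MB/s each, rounded up to whole lanes.
lanes48=$(((total48 + 499) / 500))
lanes96=$(((total96 + 499) / 500))

echo "$per_drive $total48 $total96 $lanes48 $lanes96"
```

Rounding a fractional lane up gives 58 and 116 lanes, a shade above the 57/114 quoted in the thread, and either way far beyond what any single commodity board provides.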
From owner-freebsd-fs@FreeBSD.ORG Mon Sep 19 18:54:58 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id DDF341065672 for ; Mon, 19 Sep 2011 18:54:58 +0000 (UTC) (envelope-from rabgvzr@gmail.com) Received: from mail-yi0-f54.google.com (mail-yi0-f54.google.com [209.85.218.54]) by mx1.freebsd.org (Postfix) with ESMTP id 9E2228FC0A for ; Mon, 19 Sep 2011 18:54:58 +0000 (UTC) Received: by yia13 with SMTP id 13so3388105yia.13 for ; Mon, 19 Sep 2011 11:54:58 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :content-type; bh=okNBpAcd3xsw22cfDCIqm9+rf5xZHGZC2n+YYl3k6N8=; b=G0fjJsuzf4zt9pFMIffybpPVOdxry5LqHn2SZNqR3MURSSWK+wKCwiScqo/qy4LMFc D7VeGUZfV6ijoTvYak3x3qVDgyhhY11lWZ46/ORIKn3aIiD0HxtwRXq2VzVPGp0NO3d0 oWDDp1uBdN9W/pPMd6BgFNmG5IPrxkGXCnfYw= MIME-Version: 1.0 Received: by 10.236.201.233 with SMTP id b69mr16672178yho.51.1316458497967; Mon, 19 Sep 2011 11:54:57 -0700 (PDT) Received: by 10.236.43.167 with HTTP; Mon, 19 Sep 2011 11:54:57 -0700 (PDT) In-Reply-To: References: <4E776352.30702@FreeBSD.org> Date: Mon, 19 Sep 2011 14:54:57 -0400 Message-ID: From: Rotate 13 To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=UTF-8 Subject: [solved/not problem] Re: ZFS: deferring automounts/mounting root without bootfs [9.0-BETA2] X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 19 Sep 2011 18:54:58 -0000 On Mon, 19 Sep 2011 12:15:10 -0400, Rotate 13 wrote: > back my original question. Difficult to diagnose when system won't > write logs to /var - could be very simple misconfiguration, could be > bug. Manuals don't say a lot about mount order on boot, and that Solved. 
Was minor error in many lines of configuration typed by hand into box with no other access. For archives and searchers, formula outlined in my original post *does* work on 9.0-BETA2 for root ZFS without option bootfs (boot from external UFS). Apology for wasted time, thanks to Andriy Gapon for trying to help... And thanks to ZFS porters/coders for file system! From owner-freebsd-fs@FreeBSD.ORG Mon Sep 19 19:00:16 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id A44CD1065673 for ; Mon, 19 Sep 2011 19:00:16 +0000 (UTC) (envelope-from jusher71@yahoo.com) Received: from nm26-vm0.bullet.mail.ne1.yahoo.com (nm26-vm0.bullet.mail.ne1.yahoo.com [98.138.91.68]) by mx1.freebsd.org (Postfix) with SMTP id 8557B8FC15 for ; Mon, 19 Sep 2011 19:00:12 +0000 (UTC) Received: from [98.138.90.56] by nm26.bullet.mail.ne1.yahoo.com with NNFMP; 19 Sep 2011 19:00:12 -0000 Received: from [98.138.87.11] by tm9.bullet.mail.ne1.yahoo.com with NNFMP; 19 Sep 2011 19:00:11 -0000 Received: from [127.0.0.1] by omp1011.mail.ne1.yahoo.com with NNFMP; 19 Sep 2011 19:00:11 -0000 X-Yahoo-Newman-Property: ymail-3 X-Yahoo-Newman-Id: 920117.34743.bm@omp1011.mail.ne1.yahoo.com Received: (qmail 89302 invoked by uid 60001); 19 Sep 2011 19:00:11 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024; t=1316458811; bh=DNnpNlXlNZFTB11wTOCDmdzpTf15WRadTDxoZuwcTyU=; h=X-YMail-OSG:Received:X-Mailer:Message-ID:Date:From:Subject:To:In-Reply-To:MIME-Version:Content-Type; b=fG3TkJ0abYGA7KGR7TuaOZzGinnd5R7QQZegEncW3WlWAgy2/reNRZAAFTPMTM+5V+3Q6vERxRa1/y0/OXRFuURtBf5ZYjSsVx62NlNP7aXbsOWk5Jc+ulqeyzu//psYPmq4Mh9ypgQ9csI/UpGV4VqENWipEwKcGyys8JUrLHo= DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com; h=X-YMail-OSG:Received:X-Mailer:Message-ID:Date:From:Subject:To:In-Reply-To:MIME-Version:Content-Type; 
b=byx3RDbWUw1TAcsxwxSAXzdeuYWzP4ieAE8fnQBIloS4uqgl4fUhbISWGcpZLI88CoVZBrGjbvVERgjx1IN0Ip54h1zV6YXsooJF27zB6q8gucMjmDnA2LEc4Z5yVjNxIidmcgNs3QbxFjlAyvX7Ca855E6eZlVDcp9YMcBQt5s=; X-YMail-OSG: SP7OcbgVM1n4UVf0R5o910Lu1L6sNh6VXkiLv7nlDLYQ_XB QWMSUbKRblj4VzySb3sjMitBrOWd1wqiSaCNfVSKIUFSByNPfNjdKT4N..Tt x_ULoE3EFO4FvWefSxdgPOyrRyuy.hGJRXX_8QZImYZYjTz4xULfmxf0BPto 1VV9aCpTPhI0ENBPLhz9ub1K7jYrnKQgB5M2E2ecsjnrreGEsInaaSlxBgbO SfMUrTzHZDDUKLiIbLObZozDfHtBH9ltvzgcePEi3jKPxk2MmQePlU1fLV8z ktwp43xkcHjiEhQdyFStsD.JmwiXeb4e5FD02VN_zBOHzdrJeDcqQgFkRT8i 0upvYsVcnOJGq8mTCND9.AkV5.qIUv7wyBG3sAfP5uYgoTXAXfM2GAqOOhAQ FOp4- Received: from [80.237.226.76] by web121208.mail.ne1.yahoo.com via HTTP; Mon, 19 Sep 2011 12:00:11 PDT X-Mailer: YahooMailClassic/14.0.5 YahooMailWebService/0.8.114.317681 Message-ID: <1316458811.88701.YahooMailClassic@web121208.mail.ne1.yahoo.com> Date: Mon, 19 Sep 2011 12:00:11 -0700 (PDT) From: Jason Usher To: freebsd-fs@freebsd.org In-Reply-To: <72A6ABD6-F6FD-4563-AB3F-6061E3DD9FBF@digsys.bg> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Subject: Re: ZFS obn FreeBSD hardware model for 48 or 96 sata3 paths... X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 19 Sep 2011 19:00:16 -0000 --- On Sat, 9/17/11, Daniel Kalchev wrote: > There is not single magnetic drive on the market that can > saturate SATA2 (300 Mbps), yet. Most can't match even SATA1 > (150 MBps). You don't need that much dedicated bandwidth for > drives. > If you intend to have 48/96 SSDs, then that is another > story, but then I am doubtful a "PC" architecture can handle > that much data either. Hmmm... I understand this, but is there not any data that might transfer from multiple magnetic disks, simultaneously, at 6GB, that could periodically max out the card bandwidth ? 
As in, all drives in a 12 drive array perform an operation on their built-in cache simultaneously ? I know the spinning disks themselves can't do it, but there is 64 MB of cache on each drive, and that can run at 6 Gb/s ... this doesn't ever happen ? Further, the cards I use will be the same regardless - the number of PCIe lanes is just a different motherboard choice at the front end, and only adds a marginal extra cost (assuming there _IS_ a 112+ lane mobo around) ... so why not ? > Memory is much more expensive than SSDs for L2ARC and if > your workload permits it (lots of repeated small reads), > larger L2ARC will help a lot. It will also help if you have > a huge zpool or if you enable dedup etc. Just populate as much > RAM as the server can handle and then add L2ARC > (read-optimized). That's interesting (the part about dedup being assisted by L2ARC) ... what about snapshots ? If we run 14 or 21 snapshots, what component is that stressing, and what structures would speed that up ? Thanks a lot. 
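The aggregate numbers behind this question can be sketched with nominal interface specs. This is a hedged back-of-the-envelope calculation, not a measurement: the 96-drive count, the ~500 MB/s per PCIe 2.0 lane figure, and the triple-channel DDR3-1866 memory configuration are all illustrative assumptions.

```shell
# What would it take for every drive to burst from its on-board
# cache at full SATA3 speed at once?
drives=96
sata3_mbps=600                      # ~600 MB/s usable per 6 Gb/s link (8b/10b coding)
agg_mbps=$((drives * sata3_mbps))   # aggregate burst demand

pcie2_lane_mbps=500                 # ~500 MB/s per PCIe 2.0 lane, per direction
lanes=$((agg_mbps / pcie2_lane_mbps))

mem_mbps=$((3 * 8 * 1866))          # triple-channel DDR3-1866: 3 ch * 8 bytes * 1866 MT/s

echo "aggregate burst: ${agg_mbps} MB/s; PCIe 2.0 lanes to match: ${lanes}"
echo "memory ceiling:  ${mem_mbps} MB/s"
```

That works out to ~57.6 GB/s of burst demand - over 115 PCIe 2.0 lanes' worth, and more than the ~44.8 GB/s a triple-channel DDR3-1866 memory system could move even in theory, which is why the all-drives-bursting-at-once case cannot be served end to end regardless of lane count.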
From owner-freebsd-fs@FreeBSD.ORG Mon Sep 19 19:06:03 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 4B7F91065670 for ; Mon, 19 Sep 2011 19:06:03 +0000 (UTC) (envelope-from bfriesen@simple.dallas.tx.us) Received: from blade.simplesystems.org (blade.simplesystems.org [65.66.246.74]) by mx1.freebsd.org (Postfix) with ESMTP id 146758FC13 for ; Mon, 19 Sep 2011 19:06:02 +0000 (UTC) Received: from freddy.simplesystems.org (freddy.simplesystems.org [65.66.246.65]) by blade.simplesystems.org (8.14.4+Sun/8.14.4) with ESMTP id p8JJ6292016716; Mon, 19 Sep 2011 14:06:02 -0500 (CDT) Date: Mon, 19 Sep 2011 14:06:02 -0500 (CDT) From: Bob Friesenhahn X-X-Sender: bfriesen@freddy.simplesystems.org To: Jason Usher In-Reply-To: <1316458811.88701.YahooMailClassic@web121208.mail.ne1.yahoo.com> Message-ID: References: <1316458811.88701.YahooMailClassic@web121208.mail.ne1.yahoo.com> User-Agent: Alpine 2.01 (GSO 1266 2009-07-14) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.2.2 (blade.simplesystems.org [65.66.246.90]); Mon, 19 Sep 2011 14:06:02 -0500 (CDT) Cc: freebsd-fs@freebsd.org Subject: Re: ZFS obn FreeBSD hardware model for 48 or 96 sata3 paths... X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 19 Sep 2011 19:06:03 -0000 On Mon, 19 Sep 2011, Jason Usher wrote: > > Hmmm... I understand this, but is there not any data that might > transfer from multiple magnetic disks, simultaneously, at 6GB, that > could periodically max out the card bandwidth ? As in, all drives > in a 12 drive array perform an operation on their built-in cache > simultaneously ? 
The best way to deal with this is by careful zfs pool design so that disks that can be expected to perform related operations (e.g. in same vdev) are carefully split across interface cards and I/O channels. This also helps with reliability. Bob -- Bob Friesenhahn bfriesen@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer, http://www.GraphicsMagick.org/ From owner-freebsd-fs@FreeBSD.ORG Mon Sep 19 19:07:02 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id EB6531065676 for ; Mon, 19 Sep 2011 19:07:01 +0000 (UTC) (envelope-from jusher71@yahoo.com) Received: from nm28.bullet.mail.ne1.yahoo.com (nm28.bullet.mail.ne1.yahoo.com [98.138.90.91]) by mx1.freebsd.org (Postfix) with SMTP id 9A71D8FC15 for ; Mon, 19 Sep 2011 19:07:01 +0000 (UTC) Received: from [98.138.90.52] by nm28.bullet.mail.ne1.yahoo.com with NNFMP; 19 Sep 2011 19:07:00 -0000 Received: from [98.138.89.232] by tm5.bullet.mail.ne1.yahoo.com with NNFMP; 19 Sep 2011 19:07:00 -0000 Received: from [127.0.0.1] by omp1047.mail.ne1.yahoo.com with NNFMP; 19 Sep 2011 19:07:00 -0000 X-Yahoo-Newman-Property: ymail-3 X-Yahoo-Newman-Id: 828430.98107.bm@omp1047.mail.ne1.yahoo.com Received: (qmail 54433 invoked by uid 60001); 19 Sep 2011 19:07:00 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024; t=1316459220; bh=6ndsGbrugWP+sTuTFBLU3EtcVIbxPxxgKxXEleZfL8M=; h=X-YMail-OSG:Received:X-Mailer:Message-ID:Date:From:Subject:To:In-Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding; b=bgXwQWInMsATdcjrf6AyZ7sIuthkS9XZmEiN2zZwBJr3PQjFsSVdpAu4ObLmB/VzyafX4C6C3YYFwJDeKG7QDvGN93lYA4gMJdY+fU/8yfIyyoksXfqxBFxPH0jjMwEAOFIgWxIfN1l6nJO2G9ayWuICMekNYHyPt0JyWJQvEIk= DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com; 
h=X-YMail-OSG:Received:X-Mailer:Message-ID:Date:From:Subject:To:In-Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding; b=P1vveZOopkLOjWoG6KENpLv3p8mgkY2OH8NyYQ01gQuAUcDMm7VvsIjtQHhp3mCdfn5rE31vO087yWwK7L9hk4gFw97gtpV8Mt9VGxBqpz8drINNnNBKEpwMwLyZ0OoOBJIJqnkXuRh0e6IUAGH16BhcTkLBCSfKVqJSgldVDNs=; X-YMail-OSG: P5ZO3aEVM1nxu5KQ2iXyW8AvL_b5ch.7DJ7CnRgw6_K07O_ S5Fnz8fP4.UvkJJskPgOWYetyc2ZT3rOIZiETtgjo0dlzCMxAyny._tvYoHp hQ1sF0yp2CcwdLtWMme.qvUA196vx8b7_LmK6LJG0HXXu57uxryIMEXPGuvf jqJZqnLF7VKoiOio5ETuMnzkW961m3N8Yetq.gXN3xICzEZQmunMklqdwPGu EICgf3FGIMrUiQ5kESK23TNcBzTQHTJBh2JXiTaBFbV0Necnr0s6aFgxjG95 LFV1z0fO1SYtzmXYw0EUDXlnar_BpYT91CPaM5Zf7CRkLQdKJJRilK3PsXAT Z1si2gvTr_tz3PzGRBHg90AORPEc9mL6e_o9ou_.XESkoCfJKOQ4pplpDWNT Ce_lh Received: from [77.247.181.164] by web121209.mail.ne1.yahoo.com via HTTP; Mon, 19 Sep 2011 12:07:00 PDT X-Mailer: YahooMailClassic/14.0.5 YahooMailWebService/0.8.114.317681 Message-ID: <1316459220.35419.YahooMailClassic@web121209.mail.ne1.yahoo.com> Date: Mon, 19 Sep 2011 12:07:00 -0700 (PDT) From: Jason Usher To: freebsd-fs@freebsd.org In-Reply-To: MIME-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 Content-Transfer-Encoding: quoted-printable Subject: Re: ZFS obn FreeBSD hardware model for 48 or 96 sata3 paths... X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 19 Sep 2011 19:07:02 -0000 --- On Sat, 9/17/11, Bob Friesenhahn wrote: > 150KB is a relatively small file size given that the > default zfs blocksize is 128KB. With so many files you > should definitely max out RAM first before using SSDs as a > l2arc. It is important to recognize that the ARC cache > is not populated until data has been read. The cache > does not help unless the data has been accessed several > times. You will want to make sure that all metadata and > directories are cached in RAM. Depending on how the > files are used/accessed you might even want to intentionally > disable caching of file data. How does one make sure that all metadata and directories are cached in RAM? Just run a 'find' on the filesystem, or a 'du' during the least busy time of day ? Or is there a more elegant, or more direct way to read all of that in ? Further, if this (small files, lots of them) dataset benefits a lot from having the metadata and dirs read in, how can I KEEP that data in the cache, but not cache the file data (as you suggest, above) ? Can I explicitly cache metadata/dirs in RAM, and cache file data in L2ARC ? > Are the writes expected to be synchronous writes, or are > they asynchronous? Are the writes expected to be > primarily sequential (e.g. whole file), or is data > accessed/updated in place? It's a mix, I'm afraid. 
From owner-freebsd-fs@FreeBSD.ORG Mon Sep 19 19:11:42 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id D2682106564A for ; Mon, 19 Sep 2011 19:11:42 +0000 (UTC) (envelope-from jusher71@yahoo.com) Received: from nm30-vm4.bullet.mail.ne1.yahoo.com (nm30-vm4.bullet.mail.ne1.yahoo.com [98.138.91.190]) by mx1.freebsd.org (Postfix) with SMTP id 912108FC1A for ; Mon, 19 Sep 2011 19:11:42 +0000 (UTC) Received: from [98.138.90.50] by nm30.bullet.mail.ne1.yahoo.com with NNFMP; 19 Sep 2011 19:11:42 -0000 Received: from [98.138.86.156] by tm3.bullet.mail.ne1.yahoo.com with NNFMP; 19 Sep 2011 19:11:42 -0000 Received: from [127.0.0.1] by omp1014.mail.ne1.yahoo.com with NNFMP; 19 Sep 2011 19:11:42 -0000 X-Yahoo-Newman-Property: ymail-3 X-Yahoo-Newman-Id: 181598.53278.bm@omp1014.mail.ne1.yahoo.com Received: (qmail 28096 invoked by uid 60001); 19 Sep 2011 19:11:42 -0000 
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024; t=1316459502; bh=fy5bAV3TSNPwP773KqlcWXxF1qe4cfZc28sYUs1N9dQ=; h=X-YMail-OSG:Received:X-Mailer:Message-ID:Date:From:Subject:To:In-Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding; b=cWeTWMcp2AKnp58R6k8+84crLhTFBOWjlb8tvLvWk+JfsyLJlT5HsNH55zjIr02U5hIni9QGjPLLWIkcu7TPJXhAGbs5BFaya1Z8HyjHPOX6aV9VMewXG1MY8A7vE7W6Hp8WQGGCyNPqWbRQuCAchjxBccfAt6RAh1hohk+HpZM= DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com; h=X-YMail-OSG:Received:X-Mailer:Message-ID:Date:From:Subject:To:In-Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding; b=Lyan3dAdaUciB2VgGE8ezrw3wXj1sNRBqx8if1dla4EOWyPhArjUSZF7JwMuh0EkffPerZhdXJkHeTappTrPjtg5KYijp9oqKWCWuaNX/HSiAE4F5XtwT8OsMAWfnV/C/1ItsH/wU/67+dxp/awTyTLSO4OV/mECab+/YpSIU3c=; X-YMail-OSG: ostRMcMVM1ksu6q2usRjpLaTDHbsroUzMsD1_k2gaFjuJyl N6iygmj6CsEt.YbF9G3DXSsVBhVIC1oV.eeCUK.HuqZHetLHEMvmEoM4cVT5 ffWebec51QCSkERk_pN3x74oUxcJCJz0bMPWYEWbq8YudPrLMlvV6ueq4INO 9jY7g04vp2Dckwp6QW32YQ0fv1XqffGJhYzy2Sn_WcvHVkhHJb337jFfIpF3 b3rFxTC9EaRy2j_d1TZODCPyyWC1Ou7kwpj4fTGpCe2_dJZncows5DSuuyJJ V7p6ZcKBKR3o6Lf.u68qj71Ona9gurVccsdjd7qyl8OAczE1oCZ3BF1b4gzc D7Jud4ZQS7lqxlljkyM05D5UWxjfkE3fX7IfVHKaWwejQLlZREqYsRLufYWs X4DIT Received: from [77.247.181.164] by web121212.mail.ne1.yahoo.com via HTTP; Mon, 19 Sep 2011 12:11:42 PDT X-Mailer: YahooMailClassic/14.0.5 YahooMailWebService/0.8.114.317681 Message-ID: <1316459502.23423.YahooMailClassic@web121212.mail.ne1.yahoo.com> Date: Mon, 19 Sep 2011 12:11:42 -0700 (PDT) From: Jason Usher To: freebsd-fs@freebsd.org In-Reply-To: MIME-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 Content-Transfer-Encoding: quoted-printable Subject: Re: ZFS obn FreeBSD hardware model for 48 or 96 sata3 paths... 
X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 19 Sep 2011 19:11:42 -0000 --- On Mon, 9/19/11, Bob Friesenhahn wrote: > > Hmmm... I understand this, but is there not any data > that might transfer from multiple magnetic disks, > simultaneously, at 6GB, that could periodically max out the > card bandwidth ? As in, all drives in a 12 drive array > perform an operation on their built-in cache simultaneously > ? > > The best way to deal with this is by careful zfs pool > design so that disks that can be expected to perform related > operations (e.g. in same vdev) are carefully split across > interface cards and I/O channels. This also helps with > reliability. Understood. But again, can't that all be dismissed completely by having a one drive / one path build ? And since that does not add extra cost per drive, or per card ... only per motherboard ... it seems an easy cost to swallow - even if it's a very edge case that it might ever be useful. Presuming I can *find* a 112+ lane mobo, I assume the cost would be at worst double ($800ish instead of $400ish) a mobo with fewer pcie lanes... 
From owner-freebsd-fs@FreeBSD.ORG Mon Sep 19 19:15:02 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id CD0E7106566C for ; Mon, 19 Sep 2011 19:15:02 +0000 (UTC) (envelope-from rincebrain@gmail.com) Received: from mail-qy0-f182.google.com (mail-qy0-f182.google.com [209.85.216.182]) by mx1.freebsd.org (Postfix) with ESMTP id 86FB68FC1C for ; Mon, 19 Sep 2011 19:15:02 +0000 (UTC) Received: by qyk4 with SMTP id 4so6808516qyk.13 for ; Mon, 19 Sep 2011 12:15:01 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=BeiyZHdWr9R8sra1tifsF4eVOmynNLS1TwC+4UMUjjU=; b=xGu0xuALWWdBWHSS6jmrbXFzW2PUzT2pplEss3YTnFRGlU//59oDTtgVoylcarEdIN MJ5N1rlHV9kjmWOqKY3Bn5inFqcVJQtET3YCU8WhWlK87fP6m47uOrzwLeFzpT69kZk6 enj0+gK2ldptFWQmnCxFkBA9fztnWhQM3BKOo= MIME-Version: 1.0 Received: by 10.229.224.149 with SMTP id io21mr2419310qcb.81.1316459701751; Mon, 19 Sep 2011 12:15:01 -0700 (PDT) Received: by 10.229.168.132 with HTTP; Mon, 19 Sep 2011 12:15:01 -0700 (PDT) In-Reply-To: <1316459502.23423.YahooMailClassic@web121212.mail.ne1.yahoo.com> References: <1316459502.23423.YahooMailClassic@web121212.mail.ne1.yahoo.com> Date: Mon, 19 Sep 2011 15:15:01 -0400 Message-ID: From: Rich To: Jason Usher Content-Type: text/plain; charset=ISO-8859-1 Cc: freebsd-fs@freebsd.org Subject: Re: ZFS obn FreeBSD hardware model for 48 or 96 sata3 paths... X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 19 Sep 2011 19:15:02 -0000 You are assuming you can find such a motherboard. I would not make that assumption at this time. 
If you can, it's quite likely there will be PCIe multipliers/switches/expanders/call them whatever you wish on the board. I have yet to see any motherboards that achieve this feat without such tricks. - Rich From owner-freebsd-fs@FreeBSD.ORG Mon Sep 19 19:17:45 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 5E1BB106566B for ; Mon, 19 Sep 2011 19:17:45 +0000 (UTC) (envelope-from bfriesen@simple.dallas.tx.us) Received: from blade.simplesystems.org (blade.simplesystems.org [65.66.246.74]) by mx1.freebsd.org (Postfix) with ESMTP id 23FD38FC12 for ; Mon, 19 Sep 2011 19:17:44 +0000 (UTC) Received: from freddy.simplesystems.org (freddy.simplesystems.org [65.66.246.65]) by blade.simplesystems.org (8.14.4+Sun/8.14.4) with ESMTP id p8JJHis1016787; Mon, 19 Sep 2011 14:17:44 -0500 (CDT) Date: Mon, 19 Sep 2011 14:17:44 -0500 (CDT) From: Bob Friesenhahn X-X-Sender: bfriesen@freddy.simplesystems.org To: Jason Usher In-Reply-To: <1316459220.35419.YahooMailClassic@web121209.mail.ne1.yahoo.com> Message-ID: References: <1316459220.35419.YahooMailClassic@web121209.mail.ne1.yahoo.com> User-Agent: Alpine 2.01 (GSO 1266 2009-07-14) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.2.2 (blade.simplesystems.org [65.66.246.90]); Mon, 19 Sep 2011 14:17:44 -0500 (CDT) Cc: freebsd-fs@freebsd.org Subject: Re: ZFS obn FreeBSD hardware model for 48 or 96 sata3 paths... X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 19 Sep 2011 19:17:45 -0000 On Mon, 19 Sep 2011, Jason Usher wrote: > > How does one make sure that all metadata and directories are cached > in RAM? 
Just run a 'find' on the filesystem, or a 'du' during the > least busy time of day ? Or is there a more elegant, or more direct > way to read all of that in ? Caching occurs due to normal use and it is best to rely on that until proven otherwise. > Further, if this (small files, lots of them) dataset benefits a lot > from having the metadata and dirs read in, how can I KEEP that data > in the cache, but not cache the file data (as you suggest, above) ? Modern zfs includes tunables to decide how metadata and file data caching should be handled. The main reason to disable file data caching would be for cases where the data is only accessed once such as when data is normally written out once to whole files or read just once with a well-behaved algorithm. Video streaming servers may disable file caching if the number of streams served would cause the cache size to grow to huge (yet insufficient) proportions. > Can I explicitly cache metadata/dirs in RAM, and cache file data in L2ARC ? Again, it is best to rely on the caching algorithm until an actual problem has been found. The ZFS ARC will optimize its caching based on use. Less often used data will end up being migrated from the RAM-based ARC to L2ARC. 
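For reference, the "prime it and watch" approach mentioned above can be sketched as follows. This is a hypothetical example: the /tank/data path is made up, while the sysctl names are FreeBSD's ZFS ARC statistics counters.

```shell
# Walking the tree stats every file, which pulls directory blocks and
# dnodes (metadata) into the ARC without reading file contents:
find /tank/data -ls > /dev/null

# Then inspect how much of the ARC is in use, and how much of it is metadata:
sysctl kstat.zfs.misc.arcstats.size
sysctl kstat.zfs.misc.arcstats.arc_meta_used
```

Comparing the two counters before and after the walk shows whether the metadata actually stayed resident, which is a more direct check than guessing.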
Bob -- Bob Friesenhahn bfriesen@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer, http://www.GraphicsMagick.org/ From owner-freebsd-fs@FreeBSD.ORG Mon Sep 19 19:27:21 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 89EFE106564A for ; Mon, 19 Sep 2011 19:27:21 +0000 (UTC) (envelope-from gpalmer@freebsd.org) Received: from noop.in-addr.com (mail.in-addr.com [IPv6:2001:470:8:162::1]) by mx1.freebsd.org (Postfix) with ESMTP id 5AD828FC0A for ; Mon, 19 Sep 2011 19:27:21 +0000 (UTC) Received: from gjp by noop.in-addr.com with local (Exim 4.76 (FreeBSD)) (envelope-from ) id 1R5jUu-0002uA-3x; Mon, 19 Sep 2011 15:27:20 -0400 Date: Mon, 19 Sep 2011 15:27:20 -0400 From: Gary Palmer To: Jason Usher Message-ID: <20110919192720.GD10165@in-addr.com> References: <1316459502.23423.YahooMailClassic@web121212.mail.ne1.yahoo.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <1316459502.23423.YahooMailClassic@web121212.mail.ne1.yahoo.com> X-SA-Exim-Connect-IP: X-SA-Exim-Mail-From: gpalmer@freebsd.org X-SA-Exim-Scanned: No (on noop.in-addr.com); SAEximRunCond expanded to false Cc: freebsd-fs@freebsd.org Subject: Re: ZFS obn FreeBSD hardware model for 48 or 96 sata3 paths... X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 19 Sep 2011 19:27:21 -0000 On Mon, Sep 19, 2011 at 12:11:42PM -0700, Jason Usher wrote: > > > --- On Mon, 9/19/11, Bob Friesenhahn wrote: > > > > > Hmmm... I understand this, but is there not any data > > that might transfer from multiple magnetic disks, > > simultaneously, at 6GB, that could periodically max out the > > card bandwidth ?? 
As in, all drives in a 12 drive array > > perform an operation on their built-in cache simultaneously > > ? > > > > The best way to deal with this is by careful zfs pool > > design so that disks that can be expected to perform related > > operations (e.g. in same vdev) are carefully split across > > interface cards and I/O channels. This also helps with > > reliability. > > > Understood. > > But again, can't that all be dismissed completely by having a one drive / one path build ? And since that does not add extra cost per drive, or per card ... only per motherboard ... it seems an easy cost to swallow - even if it's a very edge case that it might ever be useful. The message you quoted said to split the load across interface cards and I/O channels (PCIE lanes I presume). Unless you are going to somehow cram 30+ interface cards into a motherboard and chassis, I cannot see how your query can relate back to the statement unless you are talking about configurations with SAS/SATA port multipliers, which you are determined to avoid. You *cannot* avoid having multiple disks on a single controller card and it is definitely Best Practice to split drives in any array across controllers so that any controller failure at most knocks a single component out of a redundant RAID configuration. Losing multiple disks in a single RAID group (or whatever the ZFS name is) normally results in data loss unless you are extremely lucky. I also think you are going to be pushed to find a motherboard with your requirements and will have to use port multipliers, and somewhat suspect that with the right architecture that the performance hit is not nearly as bad as you expect. 
Gary From owner-freebsd-fs@FreeBSD.ORG Mon Sep 19 19:49:01 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id DCCE2106564A for ; Mon, 19 Sep 2011 19:49:01 +0000 (UTC) (envelope-from fullermd@over-yonder.net) Received: from thyme.infocus-llc.com (server.infocus-llc.com [206.156.254.44]) by mx1.freebsd.org (Postfix) with ESMTP id 851A48FC14 for ; Mon, 19 Sep 2011 19:49:01 +0000 (UTC) Received: from draco.over-yonder.net (c-174-50-4-38.hsd1.ms.comcast.net [174.50.4.38]) (using TLSv1 with cipher ADH-CAMELLIA256-SHA (256/256 bits)) (No client certificate requested) by thyme.infocus-llc.com (Postfix) with ESMTPSA id EC5C637B495; Mon, 19 Sep 2011 14:32:34 -0500 (CDT) Received: by draco.over-yonder.net (Postfix, from userid 100) id 3233E1787D; Mon, 19 Sep 2011 14:32:34 -0500 (CDT) Date: Mon, 19 Sep 2011 14:32:34 -0500 From: "Matthew D. Fuller" To: Rich Message-ID: <20110919193234.GY14862@over-yonder.net> References: <1316459502.23423.YahooMailClassic@web121212.mail.ne1.yahoo.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: X-Editor: vi X-OS: FreeBSD User-Agent: Mutt/1.5.21-fullermd.4 (2010-09-15) X-Virus-Scanned: clamav-milter 0.97.2 at thyme.infocus-llc.com X-Virus-Status: Clean Cc: Jason Usher , freebsd-fs@freebsd.org Subject: Re: ZFS obn FreeBSD hardware model for 48 or 96 sata3 paths... X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 19 Sep 2011 19:49:01 -0000 On Mon, Sep 19, 2011 at 03:15:01PM -0400 I heard the voice of Rich, and lo! it spake thus: > > You are assuming you can find such a motherboard. > > I would not make that assumption at this time. 
And of course it's not enough to get that many lanes to the northbridge (which you aren't gonna get anyway). You have to have that much bandwidth to the CPU package (which you aren't gonna get even if you get that far; the 57GB/s calculated earlier is well over what the highest speed HyperTransport or QPI can pass, much less the links you'll find in real boards). And then you need to have that much bandwidth out to memory (which you also won't get even if you get all the former; triple channel DDR3-1866 falls short). There's something to be said for absurd overspec'ing to avoid having to think too carefully about each piece. But it relies on the absurd not being absurdly absurd (which this is) as well as being realizable (which this isn't). -- Matthew Fuller (MF4839) | fullermd@over-yonder.net Systems/Network Administrator | http://www.over-yonder.net/~fullermd/ On the Internet, nobody can hear you scream. From owner-freebsd-fs@FreeBSD.ORG Mon Sep 19 20:04:42 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id A931D1065672 for ; Mon, 19 Sep 2011 20:04:42 +0000 (UTC) (envelope-from artemb@gmail.com) Received: from mail-yi0-f54.google.com (mail-yi0-f54.google.com [209.85.218.54]) by mx1.freebsd.org (Postfix) with ESMTP id 67CFF8FC18 for ; Mon, 19 Sep 2011 20:04:42 +0000 (UTC) Received: by yia13 with SMTP id 13so3451049yia.13 for ; Mon, 19 Sep 2011 13:04:41 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:sender:in-reply-to:references:date :x-google-sender-auth:message-id:subject:from:to:cc:content-type; bh=Xy4QE/d1C5iffV3mQXrOYnozfYKRduhdncxlh98owNk=; b=mLvkHv08TEv8JV5RNqBrnpsTYbbMfKY7ZAKQvLI3kaYT5dt4F1lXLrkSkDLYlognCj RKIX9vJsuNGhLvx8Tc4xmkFoWFvPjLxdwBfIhKVlsFnrYJMUt1qldXb6MRQycQTmZlXF aIfW60R/D3L0E87HXi+YGcft3oX1p1uhByMT8= MIME-Version: 1.0 Received: by 10.236.184.134 with SMTP id 
s6mr17213978yhm.6.1316462681130; Mon, 19 Sep 2011 13:04:41 -0700 (PDT) Sender: artemb@gmail.com Received: by 10.236.102.147 with HTTP; Mon, 19 Sep 2011 13:04:41 -0700 (PDT) In-Reply-To: <1316459220.35419.YahooMailClassic@web121209.mail.ne1.yahoo.com> References: <1316459220.35419.YahooMailClassic@web121209.mail.ne1.yahoo.com> Date: Mon, 19 Sep 2011 13:04:41 -0700 X-Google-Sender-Auth: Z_BHYqGQEcActlI9ckDV0P7lJbM Message-ID: From: Artem Belevich To: Jason Usher Content-Type: text/plain; charset=ISO-8859-1 Cc: freebsd-fs@freebsd.org Subject: Re: ZFS obn FreeBSD hardware model for 48 or 96 sata3 paths... X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 19 Sep 2011 20:04:42 -0000 On Mon, Sep 19, 2011 at 12:07 PM, Jason Usher wrote: > Can I explicitly cache metadata/dirs in RAM, and cache file data in L2ARC ? See primarycache and secondarycache properties. They determine caching policies for ARC and L2ARC respectively. Valid values are none, metadata and all. So, the answer to your question above is yes for metadata in RAM and no for data-only L2ARC, as there is no way to enable data caching without metadata. 
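In command form, the properties Artem describes look like the sketch below. The dataset name tank/smallfiles is made up for illustration; the property names and the none | metadata | all values are the ones he gives.

```shell
# Keep only metadata in the RAM ARC for this dataset:
zfs set primarycache=metadata tank/smallfiles

# The L2ARC is fed from blocks passing out of the ARC, so with
# primarycache=metadata no file data can ever reach the L2ARC,
# whatever secondarycache is set to:
zfs set secondarycache=all tank/smallfiles

# Verify the effective (possibly inherited) settings:
zfs get primarycache,secondarycache tank/smallfiles
```

Because the properties are inheritable, setting them on a parent dataset applies the policy to every child filesystem unless overridden.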
--Artem From owner-freebsd-fs@FreeBSD.ORG Mon Sep 19 20:28:01 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 1FC811065676 for ; Mon, 19 Sep 2011 20:28:01 +0000 (UTC) (envelope-from fjwcash@gmail.com) Received: from mail-yx0-f182.google.com (mail-yx0-f182.google.com [209.85.213.182]) by mx1.freebsd.org (Postfix) with ESMTP id D18DD8FC13 for ; Mon, 19 Sep 2011 20:28:00 +0000 (UTC) Received: by yxk36 with SMTP id 36so5475940yxk.13 for ; Mon, 19 Sep 2011 13:28:00 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=L+s05AGSmE8n9tbxtaTj+0fTl/zveCmS1JyyhK1r9K0=; b=UfSKCWiN8/ogzueWUA27yHRPo/vqRUxpVtfP57xyWTN4PRGSLPcSYUQH23wFa1D8QP /KBQ/6BFi8xU4xYruaTSM+pciM5gR2fgAPWd63srhYGzUlbiE7vXvg9ZLc5PtNMXLueK 2Aaq+FyJRw53HxnQqve/lQkxt4SiDONFLTQoU= MIME-Version: 1.0 Received: by 10.220.154.201 with SMTP id p9mr750366vcw.2.1316464079990; Mon, 19 Sep 2011 13:27:59 -0700 (PDT) Received: by 10.220.198.130 with HTTP; Mon, 19 Sep 2011 13:27:59 -0700 (PDT) In-Reply-To: <1316459220.35419.YahooMailClassic@web121209.mail.ne1.yahoo.com> References: <1316459220.35419.YahooMailClassic@web121209.mail.ne1.yahoo.com> Date: Mon, 19 Sep 2011 13:27:59 -0700 Message-ID: From: Freddie Cash To: Jason Usher Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Cc: freebsd-fs@freebsd.org Subject: Re: ZFS obn FreeBSD hardware model for 48 or 96 sata3 paths... 
X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 19 Sep 2011 20:28:01 -0000 On Mon, Sep 19, 2011 at 12:07 PM, Jason Usher wrote: > --- On Sat, 9/17/11, Bob Friesenhahn wrote: > > > 150KB is a relatively small file size given that the > > default zfs blocksize is 128KB. With so many files you > > should definitely max out RAM first before using SSDs as a > > l2arc. It is important to recognize that the ARC cache > > is not populated until data has been read. The cache > > does not help unless the data has been accessed several > > times. You will want to make sure that all metadata and > > directories are cached in RAM. Depending on how the > > files are used/accessed you might even want to intentionally > > disable caching of file data. > > How does one make sure that all metadata and directories are cached in RAM? > Just run a 'find' on the filesystem, or a 'du' during the least busy time > of day ? Or is there a more elegant, or more direct way to read all of that > in ? > That should work to "prime" the caches. Or you can just let the system manage it automatically, adding data to the ARC/L2ARC as it's read/accessed. The end result of that would be much more in line with how the data is actually used. > Further, if this (small files, lots of them) dataset benefits a lot from > having the metadata and dirs read in, how can I KEEP that data in the cache, > but not cache the file data (as you suggest, above) ? There are ZFS properties for this (primarycache aka ARC; secondarycache aka L2ARC) which can be set on a per-filesystem basis (and inherited). These can be set to "all", "metadata", or "none". > Can I explicitly cache metadata/dirs in RAM, and cache file data in L2ARC ? No. Data that does not go into the ARC can never go into the L2ARC. 
IOW, if you set primarycache=metadata and secondarycache=data, you will never see anything in L2ARC. At least, that's the understanding I've come to based on posts on the zfs-discuss mailing list. And it does jive with what I was seeing on our storage servers. It's too bad, because it would be a nice setup, ordered from fastest to slowest: ARC for metadata, L2ARC for file data, pool for permanent storage. -- Freddie Cash fjwcash@gmail.com From owner-freebsd-fs@FreeBSD.ORG Mon Sep 19 20:37:17 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 7C8F81065673 for ; Mon, 19 Sep 2011 20:37:17 +0000 (UTC) (envelope-from alexander@leidinger.net) Received: from mail.ebusiness-leidinger.de (mail.ebusiness-leidinger.de [217.11.53.44]) by mx1.freebsd.org (Postfix) with ESMTP id 0808C8FC13 for ; Mon, 19 Sep 2011 20:37:16 +0000 (UTC) Received: from outgoing.leidinger.net (p4FC43190.dip.t-dialin.net [79.196.49.144]) by mail.ebusiness-leidinger.de (Postfix) with ESMTPSA id 6CDB9844017; Mon, 19 Sep 2011 22:36:56 +0200 (CEST) Received: from unknown (IO.Leidinger.net [192.168.1.12]) by outgoing.leidinger.net (Postfix) with ESMTP id AD9C556C7; Mon, 19 Sep 2011 22:36:53 +0200 (CEST) Date: Mon, 19 Sep 2011 22:36:53 +0200 From: Alexander Leidinger To: Jason Usher Message-ID: <20110919223653.0000702b@unknown> In-Reply-To: <1316458811.88701.YahooMailClassic@web121208.mail.ne1.yahoo.com> References: <72A6ABD6-F6FD-4563-AB3F-6061E3DD9FBF@digsys.bg> <1316458811.88701.YahooMailClassic@web121208.mail.ne1.yahoo.com> X-Mailer: Claws Mail 3.7.8cvs47 (GTK+ 2.16.6; i586-pc-mingw32msvc) Mime-Version: 1.0 Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: 7bit X-EBL-MailScanner-Information: Please contact the ISP for more information X-EBL-MailScanner-ID: 6CDB9844017.A0406 X-EBL-MailScanner: Found to be clean X-EBL-MailScanner-SpamCheck: not spam, spamhaus-ZEN, 
SpamAssassin (not cached, score=-1, required 6, autolearn=disabled, ALL_TRUSTED -1.00) X-EBL-MailScanner-From: alexander@leidinger.net X-EBL-MailScanner-Watermark: 1317069420.68406@49RmZMn9Z0YXgwBrqlE1Vg X-EBL-Spam-Status: No Cc: freebsd-fs@freebsd.org Subject: Re: ZFS obn FreeBSD hardware model for 48 or 96 sata3 paths... X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 19 Sep 2011 20:37:17 -0000 On Mon, 19 Sep 2011 12:00:11 -0700 (PDT) Jason Usher wrote: > --- On Sat, 9/17/11, Daniel Kalchev wrote: > > > There is not a single magnetic drive on the market that can > > saturate SATA2 (300 MB/s), yet. Most can't match even SATA1 > > (150 MB/s). You don't need that much dedicated bandwidth for > > drives. > > If you intend to have 48/96 SSDs, then that is another > > story, but then I am doubtful a "PC" architecture can handle > > that much data either. > > > Hmmm... I understand this, but is there not any data that might > transfer from multiple magnetic disks, simultaneously, at 6 Gbps, that > could periodically max out the card bandwidth ? As in, all drives in > a 12 drive array perform an operation on their built-in cache > simultaneously ? Some pragmatic advice: Do not put all drives into the same vdev. Have a look at the ZFS best practices guide for some words about how many drives should be in the same vdev. Concatenate several RAIDZx vdevs instead. Play around a bit on paper to see what works best for you. An example: With 8 controllers (assuming 6 ports each) you could do 6 raidz1 vdevs (one drive from each controller in the same raidz1) which are concatenated to give you a pool of ports * (num_controllers - 1) * drivesize amount of storage. Each of those 6 vdevs can lose one drive (6 failed drives in total, one per raidz1 vdev), or one entire controller can fail (the number of controllers that can fail equals the X in raidzX). 
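The capacity arithmetic in the 8-controller, 6-ports-each example above can be sketched in shell; the 2 TB drive size is an assumed value, not from the thread:

```shell
# Layout from the example: 8 controllers, 6 ports each, one drive per
# controller in each raidz1 vdev -> 6 vdevs of 8 drives apiece.
controllers=8
ports=6
drive_tb=2                         # assumed drive size in TB

# Each raidz1 vdev loses one drive's worth of space to parity.
usable_per_vdev=$((controllers - 1))

# Usable capacity = ports * (num_controllers - 1) * drivesize.
capacity_tb=$((ports * usable_per_vdev * drive_tb))

echo "vdevs=${ports} drives_per_vdev=${controllers} usable=${capacity_tb}TB"
```

With these numbers the pool provides 6 * 7 * 2 TB = 84 TB of usable space and can survive one failed drive per vdev, or one whole controller.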
If you need speed, rely on RAM or L2ARC (assuming the data is read often enough to be cached). If you need more speed, go with SSDs instead of hard disks for the pool drives (an L2ARC does not make much sense then, unless you invest in something significantly faster, like a Fusion board, as already mentioned in the thread). Optimizing for the theoretical case that all drives deliver everything from their built-in cache is a waste of money: either you are in the unlikely case that this really happens (go play the lottery instead; you may have more luck), or, if the access pattern really is so strange that this happens often enough that such an optimization would give a nice speed increase, invest the money in more RAM to have the data in the ARC instead. > > Memory is much more expensive than SSDs for L2ARC and if > > your workload permits it (lots of repeated small reads), > > larger L2ARC will help a lot. It will also help if you have > > a huge pool or if you enable dedup etc. Just populate as much > > RAM as the server can handle and then add L2ARC > > (read-optimized). > > > That's interesting (the part about dedup being assisted by L2ARC) ... > what about snapshots ? If we run 14 or 21 snapshots, what component > is that stressing, and what structures would speed that up ? A snapshot is a short write to the disks. I do not know if it is a sync or async write. If you do not take a lot of snapshots per second or minute (I hope the 14/21 values mean one snapshot per (working) hour), I would not worry about this. Bye, Alexander. 
-- http://www.Leidinger.net Alexander @ Leidinger.net: PGP ID = B0063FE7 http://www.FreeBSD.org netchild @ FreeBSD.org : PGP ID = 72077137 From owner-freebsd-fs@FreeBSD.ORG Mon Sep 19 20:43:57 2011 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 4DC51106566B for ; Mon, 19 Sep 2011 20:43:57 +0000 (UTC) (envelope-from avg@FreeBSD.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id 99C158FC17 for ; Mon, 19 Sep 2011 20:43:56 +0000 (UTC) Received: from porto.starpoint.kiev.ua (porto-e.starpoint.kiev.ua [212.40.38.100]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id XAA17466; Mon, 19 Sep 2011 23:43:53 +0300 (EEST) (envelope-from avg@FreeBSD.org) Received: from localhost ([127.0.0.1]) by porto.starpoint.kiev.ua with esmtp (Exim 4.34 (FreeBSD)) id 1R5kgz-000I2S-Gn; Mon, 19 Sep 2011 23:43:53 +0300 Message-ID: <4E77A988.30905@FreeBSD.org> Date: Mon, 19 Sep 2011 23:43:52 +0300 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:6.0.2) Gecko/20110907 Thunderbird/6.0.2 MIME-Version: 1.0 To: Rotate 13 References: <4E776352.30702@FreeBSD.org> In-Reply-To: X-Enigmail-Version: undefined Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@FreeBSD.org Subject: Re: ZFS: deferring automounts/mounting root without bootfs [9.0-BETA2] X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 19 Sep 2011 20:43:57 -0000 on 19/09/2011 19:15 Rotate 13 said the following: > On Mon, 19 Sep 2011 11:44:18 -0400, Andriy Gapon wrote: > >> on 19/09/2011 18:29 Rotate 13 said the following: >>> 9.0-BETA2 system is booted off removable UFS volume, but root is >>> mounted from ZFS. 
I am trying to meet the following two goals: >>> >>> 1. Not use bootfs property (too many limitations mentioned in docs) >> >>> 2. Use ZFS inheritable mountpoints and management (not clutter up >>> /etc/fstab... and not set mountpoint= on each child dataset!) >>> >>> Config info is below. Result: System boots, but hangs with >>> >>> init: can't exec getty '/usr/libexec/getty' for port /dev/ttyv0: No >>> such file or directory >> >> This looks like devfs (/dev) is either not mounted or something is >> mounted over >> it. I think that you should check if any other auto-mountable dataset >> in your >> pool has a mountpoint of '/'. Or the root dataset of tank is still >> mounted for >> some reason or something like that. > > Thanks for the quick reply. No /dev was my first thought too. But I also > saw other messages scroll by about being unable to write in /var, which is on > ZFS itself. So I think the "No such file or directory" is probably for > /usr/libexec/getty (cannot read /usr). Note also, the root dataset is > canmount=off - it should never be mounted to begin with - and nothing > except the root dataset and tank/root have the / mountpoint. > > I will see what I can do to verify devfs is being mounted, but > definitely at least some ZFS dataset(s) are the problem. Which brings me > back to my original question. It is difficult to diagnose when the system won't > write logs to /var - it could be a very simple misconfiguration, or it could be a > bug. The manuals don't say a lot about mount order on boot, and that > remains my suspicion due to the behavior after zpool import -f from the rescue > shell: Can't mount /usr, /var, etc. until after tank/root is manually > mounted, but after that, zfs mount -a is magic. You can try to enter ddb (if you have that in your kernel and also have the magic ddb key enabled) and issue the 'show mount' command to see some details. 
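Alongside the ddb route, the dataset-side checks suggested above (a second auto-mountable dataset claiming '/', canmount settings, whether devfs is mounted) can be run from the rescue shell after importing the pool; a sketch, with 'tank' being the pool name from this thread:

```shell
# List every dataset with its canmount flag and mountpoint; look for
# more than one auto-mountable dataset whose mountpoint is "/".
zfs list -r -o name,canmount,mountpoint tank

# Show the datasets ZFS currently has mounted...
zfs mount

# ...and compare with the kernel's view, which includes devfs on /dev.
mount -v
```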
-- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Mon Sep 19 21:15:28 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id EF5011065670 for ; Mon, 19 Sep 2011 21:15:28 +0000 (UTC) (envelope-from bfriesen@simple.dallas.tx.us) Received: from blade.simplesystems.org (blade.simplesystems.org [65.66.246.74]) by mx1.freebsd.org (Postfix) with ESMTP id B639A8FC13 for ; Mon, 19 Sep 2011 21:15:28 +0000 (UTC) Received: from freddy.simplesystems.org (freddy.simplesystems.org [65.66.246.65]) by blade.simplesystems.org (8.14.4+Sun/8.14.4) with ESMTP id p8JLFRuM017422; Mon, 19 Sep 2011 16:15:27 -0500 (CDT) Date: Mon, 19 Sep 2011 16:15:27 -0500 (CDT) From: Bob Friesenhahn X-X-Sender: bfriesen@freddy.simplesystems.org To: Freddie Cash In-Reply-To: Message-ID: References: <1316459220.35419.YahooMailClassic@web121209.mail.ne1.yahoo.com> User-Agent: Alpine 2.01 (GSO 1266 2009-07-14) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.2.2 (blade.simplesystems.org [65.66.246.90]); Mon, 19 Sep 2011 16:15:28 -0500 (CDT) Cc: freebsd-fs@freebsd.org Subject: Re: ZFS obn FreeBSD hardware model for 48 or 96 sata3 paths... X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 19 Sep 2011 21:15:29 -0000 > > It's too bad, because it would be a nice setup, ordered from fastert to > slowest: ARC for metadata, L2ARC for file data, pool for permanent storage. L2ARC has extreme bandwidth limitations as compared with RAM. Be careful what you wish for. 
Bob -- Bob Friesenhahn bfriesen@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer, http://www.GraphicsMagick.org/ From owner-freebsd-fs@FreeBSD.ORG Mon Sep 19 21:19:09 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 0666E1065674 for ; Mon, 19 Sep 2011 21:19:09 +0000 (UTC) (envelope-from ben@altesco.nl) Received: from altus-escon.com (altesco.xs4all.nl [82.95.106.39]) by mx1.freebsd.org (Postfix) with ESMTP id 7782F8FC0A for ; Mon, 19 Sep 2011 21:19:08 +0000 (UTC) Received: from giskard.stuyts.nl (stuyts.xs4all.nl [83.163.168.175]) by altus-escon.com (8.14.4/8.14.4) with ESMTP id p8JLIxSC088682; Mon, 19 Sep 2011 23:19:04 +0200 (CEST) (envelope-from ben@altesco.nl) Mime-Version: 1.0 (Apple Message framework v1250.3) From: Ben Stuyts In-Reply-To: Date: Mon, 19 Sep 2011 23:18:58 +0200 Message-Id: <85A88FCD-4ECE-46BC-85B7-7828F1A30F57@altesco.nl> References: <9774D03B-A8C7-48DE-9BC4-528DD4134787@altesco.nl> To: Freddie Cash X-Mailer: Apple Mail (2.1250.3) X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.2.6 (altus-escon.com [10.0.0.150]); Mon, 19 Sep 2011 23:19:05 +0200 (CEST) X-Virus-Scanned: clamav-milter 0.97 at mars.altus-escon.com X-Virus-Status: Clean X-Spam-Status: No, score=-1.9 required=3.5 tests=BAYES_00,HTML_MESSAGE autolearn=ham version=3.3.1 X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on mars.altus-escon.com Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Cc: freebsd-fs@freebsd.org Subject: Re: ZFS auto expand mirror X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 19 Sep 2011 21:19:09 -0000 On 19 sep. 
2011, at 19:49, Freddie Cash wrote: > On Mon, Sep 19, 2011 at 10:08 AM, Ben Stuyts wrote: > I want to expand an existing mirror by replacing the existing drives with bigger ones. This is on: > FreeBSD xxx 7.3-STABLE FreeBSD 7.3-STABLE #2: Mon Sep 20 18:36:08 CEST 2010 root@xxx:/usr/obj/usr/src/sys/xxx amd64 > > # zpool status home > pool: home > state: ONLINE > scrub: scrub completed after 2h0m with 0 errors on Mon Sep 19 18:25:45 2011 > config: > > NAME STATE READ WRITE CKSUM > home ONLINE 0 0 0 > mirror ONLINE 0 0 0 > ad5s1a ONLINE 0 0 0 > ad7s1a ONLINE 0 0 0 > > Will this version of FreeBSD auto-expand to the new, bigger drive size once they are both replaced? I did not see the autoexpand property in this pool. zpool is v13, zfs is v3. > > No. You will need to reboot the system in order for the extra space to become usable in the pool. Or, if none of the OS is installed on the pool, you can export/import the pool to make the new space available. Ok, at least it grows on export/import, so I don't need to create a new pool and copy everything over. Thanks for the info! 
Ben From owner-freebsd-fs@FreeBSD.ORG Mon Sep 19 21:37:48 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 14126106566B for ; Mon, 19 Sep 2011 21:37:48 +0000 (UTC) (envelope-from artemb@gmail.com) Received: from mail-yw0-f54.google.com (mail-yw0-f54.google.com [209.85.213.54]) by mx1.freebsd.org (Postfix) with ESMTP id C4D1F8FC12 for ; Mon, 19 Sep 2011 21:37:47 +0000 (UTC) Received: by ywp17 with SMTP id 17so5535363ywp.13 for ; Mon, 19 Sep 2011 14:37:47 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:sender:in-reply-to:references:date :x-google-sender-auth:message-id:subject:from:to:cc:content-type :content-transfer-encoding; bh=uiKAo0uahMiO51/yoeDCmK5lvPmtUX+XkW94AqbTHZg=; b=qWZpTpeH7UgA6dteP1rLKLmAAKPtzMTpRltwUszdqAM60F5GI+Lq4h1LlLS5P9KJtI dg0qWHUHYH1tpvFFv7UYjTQ2iXaIF3lTWdy0nKh7zLwHyQC3uh/x4zmggcCsvdDEF9Mu lpogQD9kYBHQJahxru38ot9Kib6vbzImSLs4M= MIME-Version: 1.0 Received: by 10.236.191.71 with SMTP id f47mr17151872yhn.125.1316468267019; Mon, 19 Sep 2011 14:37:47 -0700 (PDT) Sender: artemb@gmail.com Received: by 10.236.102.147 with HTTP; Mon, 19 Sep 2011 14:37:46 -0700 (PDT) In-Reply-To: References: <1316459220.35419.YahooMailClassic@web121209.mail.ne1.yahoo.com> Date: Mon, 19 Sep 2011 14:37:46 -0700 X-Google-Sender-Auth: jYfNjYhXadTc0Q1YCbTiXOFfyBs Message-ID: From: Artem Belevich To: Freddie Cash Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable Cc: Jason Usher , freebsd-fs@freebsd.org Subject: Re: ZFS obn FreeBSD hardware model for 48 or 96 sata3 paths... 
X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 19 Sep 2011 21:37:48 -0000 On Mon, Sep 19, 2011 at 1:27 PM, Freddie Cash wrote: >> Can I explicitly cache metadata/dirs in RAM, and cache file data in L2ARC ? > > > No. Data that does not go into the ARC can never go into the L2ARC. IOW, > if you set primarycache=metadata and secondarycache=data, you will never see > anything in L2ARC. > > At least, that's the understanding I've come to based on posts on the > zfs-discuss mailing list. And it does jibe with what I was seeing on our > storage servers. Indeed. I didn't think of that. L2ARC is populated by the data that gets evicted from ARC. So, if ARC is metadata only, whatever spills into L2ARC would be metadata only, too... > It's too bad, because it would be a nice setup, ordered from fastest to > slowest: ARC for metadata, L2ARC for file data, pool for permanent storage. That would indeed be nice. 
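For reference, the primarycache/secondarycache properties discussed in this sub-thread are ordinary per-dataset properties; a minimal sketch (tank/smallfiles is a hypothetical dataset name), including a 'find' pass to prime the metadata cache as suggested earlier in the thread:

```shell
# Cache only metadata in the ARC for this dataset (children inherit it).
zfs set primarycache=metadata tank/smallfiles

# Spill into the L2ARC is controlled the same way:
zfs set secondarycache=metadata tank/smallfiles

# Verify the effective (possibly inherited) values:
zfs get primarycache,secondarycache tank/smallfiles

# Walk the tree once to pull directories and metadata into the ARC:
find /tank/smallfiles -ls > /dev/null
```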
--Artem From owner-freebsd-fs@FreeBSD.ORG Mon Sep 19 21:38:14 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id A82AB1065673 for ; Mon, 19 Sep 2011 21:38:14 +0000 (UTC) (envelope-from jdc@koitsu.dyndns.org) Received: from qmta02.emeryville.ca.mail.comcast.net (qmta02.emeryville.ca.mail.comcast.net [76.96.30.24]) by mx1.freebsd.org (Postfix) with ESMTP id 8E9A98FC12 for ; Mon, 19 Sep 2011 21:38:14 +0000 (UTC) Received: from omta23.emeryville.ca.mail.comcast.net ([76.96.30.90]) by qmta02.emeryville.ca.mail.comcast.net with comcast id adbm1h0031wfjNsA2le8Jc; Mon, 19 Sep 2011 21:38:08 +0000 Received: from koitsu.dyndns.org ([67.180.84.87]) by omta23.emeryville.ca.mail.comcast.net with comcast id alZu1h00G1t3BNj8jlZu0T; Mon, 19 Sep 2011 21:33:54 +0000 Received: by icarus.home.lan (Postfix, from userid 1000) id 80C5A102C1B; Mon, 19 Sep 2011 14:38:13 -0700 (PDT) Date: Mon, 19 Sep 2011 14:38:13 -0700 From: Jeremy Chadwick To: Freddie Cash Message-ID: <20110919213813.GA70527@icarus.home.lan> References: <9774D03B-A8C7-48DE-9BC4-528DD4134787@altesco.nl> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.5.21 (2010-09-15) Cc: freebsd-fs@freebsd.org Subject: Re: ZFS auto expand mirror X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 19 Sep 2011 21:38:14 -0000 On Mon, Sep 19, 2011 at 10:49:56AM -0700, Freddie Cash wrote: > On Mon, Sep 19, 2011 at 10:08 AM, Ben Stuyts wrote: > > > I want to expand an existing mirror by replacing the existing drives with > > bigger ones. 
This is on: > > FreeBSD xxx 7.3-STABLE FreeBSD 7.3-STABLE #2: Mon Sep 20 18:36:08 CEST 2010 > > root@xxx:/usr/obj/usr/src/sys/xxx amd64 > > > > # zpool status home > > pool: home > > state: ONLINE > > scrub: scrub completed after 2h0m with 0 errors on Mon Sep 19 18:25:45 > > 2011 > > config: > > > > NAME STATE READ WRITE CKSUM > > home ONLINE 0 0 0 > > mirror ONLINE 0 0 0 > > ad5s1a ONLINE 0 0 0 > > ad7s1a ONLINE 0 0 0 > > > > Will this version of FreeBSD auto-expand to the new, bigger drive size once > > they are both replaced? I did not see the autoexpand property in this pool. > > zpool is v13, zfs is v3. > > > > No. You will need to reboot the system in order for the extra space to > become usable in the pool. Or, if none of the OS is installed on the pool, > you can export/import the pool to make the new space available. Does this advice/fact apply to FreeBSD 7.3? To my knowledge it does not. The ZFS version is too old. -- | Jeremy Chadwick jdc at parodius.com | | Parodius Networking http://www.parodius.com/ | | UNIX Systems Administrator Mountain View, CA, US | | Making life hard for others since 1977. 
PGP 4BD6C0CB | From owner-freebsd-fs@FreeBSD.ORG Mon Sep 19 21:54:08 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 3592F106564A for ; Mon, 19 Sep 2011 21:54:08 +0000 (UTC) (envelope-from fjwcash@gmail.com) Received: from mail-qw0-f45.google.com (mail-qw0-f45.google.com [209.85.216.45]) by mx1.freebsd.org (Postfix) with ESMTP id E4C808FC13 for ; Mon, 19 Sep 2011 21:54:07 +0000 (UTC) Received: by qwg2 with SMTP id 2so7071699qwg.4 for ; Mon, 19 Sep 2011 14:54:07 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=T3EXsgIKWF1xMvlNJgflDLnDp+AlHIyxcQdlAfS1Mog=; b=HzGAQ7nz+C7iRJsVfsBw+uP6LzxA7B5/zyC27IHJ+ec4AgeXYnTGJJVnyWql9yCV7u +YQJT6j8skipz0ST81fK5ve7OAenJVPRYXlnx4ynGJekXDalRd+c5f3U+WqPY24Wnrh1 7kUVNw/eW3Zurw9HxggH51YTZpNKm2h1dy64Y= MIME-Version: 1.0 Received: by 10.52.176.196 with SMTP id ck4mr25275vdc.168.1316469246900; Mon, 19 Sep 2011 14:54:06 -0700 (PDT) Received: by 10.220.198.130 with HTTP; Mon, 19 Sep 2011 14:54:06 -0700 (PDT) In-Reply-To: <20110919213813.GA70527@icarus.home.lan> References: <9774D03B-A8C7-48DE-9BC4-528DD4134787@altesco.nl> <20110919213813.GA70527@icarus.home.lan> Date: Mon, 19 Sep 2011 14:54:06 -0700 Message-ID: From: Freddie Cash To: Jeremy Chadwick Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Cc: freebsd-fs@freebsd.org Subject: Re: ZFS auto expand mirror X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 19 Sep 2011 21:54:08 -0000 On Mon, Sep 19, 2011 at 2:38 PM, Jeremy Chadwick wrote: > On Mon, Sep 19, 2011 at 10:49:56AM -0700, Freddie Cash wrote: > > On Mon, Sep 19, 2011 at 10:08 AM, Ben Stuyts wrote: > > 
> > > I want to expand an existing mirror by replacing the existing drives > with > > > bigger ones. This is on: > > > FreeBSD xxx 7.3-STABLE FreeBSD 7.3-STABLE #2: Mon Sep 20 18:36:08 CEST > 2010 > > > root@xxx:/usr/obj/usr/src/sys/xxx amd64 > > > > > > # zpool status home > > > pool: home > > > state: ONLINE > > > scrub: scrub completed after 2h0m with 0 errors on Mon Sep 19 18:25:45 > > > 2011 > > > config: > > > > > > NAME STATE READ WRITE CKSUM > > > home ONLINE 0 0 0 > > > mirror ONLINE 0 0 0 > > > ad5s1a ONLINE 0 0 0 > > > ad7s1a ONLINE 0 0 0 > > > > > > Will this version of FreeBSD auto-expand to the new, bigger drive size > once > > > they are both replaced? I did not see the autoexpand property in this > pool. > > > zpool is v13, zfs is v3. > > > > > > > No. You will need to reboot the system in order for the extra space to > > become usable in the pool. Or, if none of the OS is installed on the > pool, > > you can export/import the pool to make the new space available. > > Does this advice/fact apply to FreeBSD 7.3? To my knowledge it does > not. The ZFS version is too old. > > It's worked for me on our storage servers. These started with ZFSv6 and have been upgraded through each version, currently running ZFSv28 on 8-STABLE. Early versions of ZFS need the reboot or export/import cycle. Newer versions pick up the new space as soon as the resilver of the last drive in the vdev occurs (if the autoexpand property is enabled on the pool). 
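On pool versions that have the autoexpand property, the replace-and-grow cycle Freddie describes looks roughly like this (the pool name matches the thread; the replacement device names are hypothetical):

```shell
# Let the pool grow automatically once every member of a vdev is larger:
zpool set autoexpand=on home

# Replace each mirror member in turn; wait for the resilver to finish
# before starting the next one (watch "zpool status home"):
zpool replace home ad5s1a ada4
zpool replace home ad7s1a ada5

# On older pools without autoexpand, an export/import (or a reboot)
# after the last resilver makes the new space visible instead:
# zpool export home && zpool import home
```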
-- Freddie Cash fjwcash@gmail.com From owner-freebsd-fs@FreeBSD.ORG Mon Sep 19 21:56:13 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 8964F106564A for ; Mon, 19 Sep 2011 21:56:13 +0000 (UTC) (envelope-from fjwcash@gmail.com) Received: from mail-gy0-f182.google.com (mail-gy0-f182.google.com [209.85.160.182]) by mx1.freebsd.org (Postfix) with ESMTP id 456AD8FC08 for ; Mon, 19 Sep 2011 21:56:12 +0000 (UTC) Received: by gyf2 with SMTP id 2so5553643gyf.13 for ; Mon, 19 Sep 2011 14:56:12 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=N57mpa1+oR7xG9QH2vdE3v8OP6S67IHlA0rf8fTMPNc=; b=hvvlCRzIwekUjpb7qptyzfagKCEW5VeOb0U6yyyFmGKnl8r+dtSkViMwGEz32N9uTM 2CzBS8mAMHl0GdheWRF4gJ0E1svUe18RlhbPQqjsBlOZRui5SaS4El3hjgNWBOEWBnuc ujhY4w5f9xZgrETVxqKebNSFXk1Y3Ok116yF8= MIME-Version: 1.0 Received: by 10.52.94.18 with SMTP id cy18mr32812vdb.101.1316469372522; Mon, 19 Sep 2011 14:56:12 -0700 (PDT) Received: by 10.220.198.130 with HTTP; Mon, 19 Sep 2011 14:56:12 -0700 (PDT) In-Reply-To: References: <1316459220.35419.YahooMailClassic@web121209.mail.ne1.yahoo.com> Date: Mon, 19 Sep 2011 14:56:12 -0700 Message-ID: From: Freddie Cash To: Bob Friesenhahn Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Cc: freebsd-fs@freebsd.org Subject: Re: ZFS obn FreeBSD hardware model for 48 or 96 sata3 paths... 
X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 19 Sep 2011 21:56:13 -0000 On Mon, Sep 19, 2011 at 2:15 PM, Bob Friesenhahn < bfriesen@simple.dallas.tx.us> wrote: > >> It's too bad, because it would be a nice setup, ordered from fastert to >> slowest: ARC for metadata, L2ARC for file data, pool for permanent >> storage. >> > > L2ARC has extreme bandwidth limitations as compared with RAM. Be careful > what you wish for. > > For writes (7 MBps, I believe); there shouldn't be any limits on the reads. -- Freddie Cash fjwcash@gmail.com From owner-freebsd-fs@FreeBSD.ORG Mon Sep 19 23:30:48 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 7560D106564A for ; Mon, 19 Sep 2011 23:30:48 +0000 (UTC) (envelope-from jdc@koitsu.dyndns.org) Received: from qmta07.westchester.pa.mail.comcast.net (qmta07.westchester.pa.mail.comcast.net [76.96.62.64]) by mx1.freebsd.org (Postfix) with ESMTP id 1F2498FC0A for ; Mon, 19 Sep 2011 23:30:47 +0000 (UTC) Received: from omta21.westchester.pa.mail.comcast.net ([76.96.62.72]) by qmta07.westchester.pa.mail.comcast.net with comcast id aZgW1h0021ZXKqc57nWoXu; Mon, 19 Sep 2011 23:30:48 +0000 Received: from koitsu.dyndns.org ([67.180.84.87]) by omta21.westchester.pa.mail.comcast.net with comcast id anWm1h01T1t3BNj3hnWnxq; Mon, 19 Sep 2011 23:30:48 +0000 Received: by icarus.home.lan (Postfix, from userid 1000) id 56FAA102C1B; Mon, 19 Sep 2011 16:30:45 -0700 (PDT) Date: Mon, 19 Sep 2011 16:30:45 -0700 From: Jeremy Chadwick To: Freddie Cash Message-ID: <20110919233045.GA71606@icarus.home.lan> References: <9774D03B-A8C7-48DE-9BC4-528DD4134787@altesco.nl> <20110919213813.GA70527@icarus.home.lan> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline 
In-Reply-To: User-Agent: Mutt/1.5.21 (2010-09-15) Cc: freebsd-fs@freebsd.org Subject: Re: ZFS auto expand mirror X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 19 Sep 2011 23:30:48 -0000 On Mon, Sep 19, 2011 at 02:54:06PM -0700, Freddie Cash wrote: > On Mon, Sep 19, 2011 at 2:38 PM, Jeremy Chadwick > wrote: > > > On Mon, Sep 19, 2011 at 10:49:56AM -0700, Freddie Cash wrote: > > > On Mon, Sep 19, 2011 at 10:08 AM, Ben Stuyts wrote: > > > > > > > I want to expand an existing mirror by replacing the existing drives > > with > > > > bigger ones. This is on: > > > > FreeBSD xxx 7.3-STABLE FreeBSD 7.3-STABLE #2: Mon Sep 20 18:36:08 CEST > > 2010 > > > > root@xxx:/usr/obj/usr/src/sys/xxx amd64 > > > > > > > > # zpool status home > > > > pool: home > > > > state: ONLINE > > > > scrub: scrub completed after 2h0m with 0 errors on Mon Sep 19 18:25:45 > > > > 2011 > > > > config: > > > > > > > > NAME STATE READ WRITE CKSUM > > > > home ONLINE 0 0 0 > > > > mirror ONLINE 0 0 0 > > > > ad5s1a ONLINE 0 0 0 > > > > ad7s1a ONLINE 0 0 0 > > > > > > > > Will this version of FreeBSD auto-expand to the new, bigger drive size > > once > > > > they are both replaced? I did not see the autoexpand property in this > > pool. > > > > zpool is v13, zfs is v3. > > > > > > > > > > No. You will need to reboot the system in order for the extra space to > > > become usable in the pool. Or, if none of the OS is installed on the > > pool, > > > you can export/import the pool to make the new space available. > > > > Does this advice/fact apply to FreeBSD 7.3? To my knowledge it does > > not. The ZFS version is too old. > > It's worked for me on our storage servers. These started with ZFSv6 and > have been upgraded through each version, currently running ZFSv28 on > 8-STABLE. > > Early versions of ZFS need the reboot or export/import cycle. 
Newer > versions pick up the new space as soon as the resilver of the last drive in > the vdev occurs (if the autoexpand property is enabled on the pool). I was about to ask what the autoexpand property was for then, but you've answered it in your 2nd paragraph here. Also, need some clarification here: when you say "the last drive in the vdev" do you effectively mean "once all the drives in the vdev are of the same size", or do you quite literally mean "the last device/disk shown in the vdev"? I can't imagine the latter being correct but I want clarification for myself as well as others who read this. Thanks! -- | Jeremy Chadwick jdc at parodius.com | | Parodius Networking http://www.parodius.com/ | | UNIX Systems Administrator Mountain View, CA, US | | Making life hard for others since 1977. PGP 4BD6C0CB | From owner-freebsd-fs@FreeBSD.ORG Tue Sep 20 00:50:51 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 8C4DE106564A for ; Tue, 20 Sep 2011 00:50:51 +0000 (UTC) (envelope-from fjwcash@gmail.com) Received: from mail-vw0-f44.google.com (mail-vw0-f44.google.com [209.85.212.44]) by mx1.freebsd.org (Postfix) with ESMTP id 3C47A8FC12 for ; Tue, 20 Sep 2011 00:50:50 +0000 (UTC) Received: by vws5 with SMTP id 5so14728vws.17 for ; Mon, 19 Sep 2011 17:50:50 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=XZMTXRJgfT+4/x0VAVKMfeOTXW1wYDnL6DHGt2bg+vQ=; b=c/ZHAr7zOcAsT1//2pVtlscgBSet3ITV2abmMGiwC76ALEb3Tg4Bv5xTbGs//Rig4E yO7oeqzrlEalpPkW8ddBT3Kfz0d+EbuBmw0ABE+2Nb/yLQiFNNXdLueUXX5tDApDfWBY s0S5kQ4d046OSoxug/TJ1PxvQermeFvoXVZd4= MIME-Version: 1.0 Received: by 10.220.154.201 with SMTP id p9mr38459vcw.2.1316479850183; Mon, 19 Sep 2011 17:50:50 -0700 (PDT) Received: by 10.220.198.130 with HTTP; Mon, 19 Sep 2011 17:50:50 -0700 (PDT) 
Received: by 10.220.198.130 with HTTP; Mon, 19 Sep 2011 17:50:50 -0700 (PDT) In-Reply-To: <20110919233045.GA71606@icarus.home.lan> References: <9774D03B-A8C7-48DE-9BC4-528DD4134787@altesco.nl> <20110919213813.GA70527@icarus.home.lan> <20110919233045.GA71606@icarus.home.lan> Date: Mon, 19 Sep 2011 17:50:50 -0700 Message-ID: From: Freddie Cash To: Jeremy Chadwick Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Cc: freebsd-fs@freebsd.org Subject: Re: ZFS auto expand mirror X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 20 Sep 2011 00:50:51 -0000 Once all of the disks in the vdev have been replaced, making them all the same size. Doesn't matter in what order they are replaced. Freddie fjwcash@gmail.com On Sep 19, 2011 4:30 PM, "Jeremy Chadwick" wrote: > On Mon, Sep 19, 2011 at 02:54:06PM -0700, Freddie Cash wrote: >> On Mon, Sep 19, 2011 at 2:38 PM, Jeremy Chadwick >> wrote: >> >> > On Mon, Sep 19, 2011 at 10:49:56AM -0700, Freddie Cash wrote: >> > > On Mon, Sep 19, 2011 at 10:08 AM, Ben Stuyts wrote: >> > > >> > > > I want to expand an existing mirror by replacing the existing drives >> > with >> > > > bigger ones. This is on: >> > > > FreeBSD xxx 7.3-STABLE FreeBSD 7.3-STABLE #2: Mon Sep 20 18:36:08 CEST >> > 2010 >> > > > root@xxx:/usr/obj/usr/src/sys/xxx amd64 >> > > > >> > > > # zpool status home >> > > > pool: home >> > > > state: ONLINE >> > > > scrub: scrub completed after 2h0m with 0 errors on Mon Sep 19 18:25:45 >> > > > 2011 >> > > > config: >> > > > >> > > > NAME STATE READ WRITE CKSUM >> > > > home ONLINE 0 0 0 >> > > > mirror ONLINE 0 0 0 >> > > > ad5s1a ONLINE 0 0 0 >> > > > ad7s1a ONLINE 0 0 0 >> > > > >> > > > Will this version of FreeBSD auto-expand to the new, bigger drive size >> > once >> > > > they are both replaced? 
I did not see the autoexpand property in this >> > pool. >> > > > zpool is v13, zfs is v3. >> > > > >> > > >> > > No. You will need to reboot the system in order for the extra space to >> > > become usable in the pool. Or, if none of the OS is installed on the >> > pool, >> > > you can export/import the pool to make the new space available. >> > >> > Does this advice/fact apply to FreeBSD 7.3? To my knowledge it does >> > not. The ZFS version is too old. >> >> It's worked for me on our storage servers. These started with ZFSv6 and >> have been upgraded through each version, currently running ZFSv28 on >> 8-STABLE. >> >> Early versions of ZFS need the reboot or export/import cycle. Newer >> versions pick up the new space as soon as the resilver of the last drive in >> the vdev occurs (if the autoexpand property is enabled on the pool). > > I was about to ask what the autoexpand property was for then, but you've > answered it in your 2nd paragraph here. > > Also, need some clarification here: when you say "the last drive in the > vdev" do you effectively mean "once all the drives in the vdev are of > the same size", or do you quite literally mean "the last device/disk > shown in the vdev"? > > I can't imagine the latter being correct but I want clarification for > myself as well as others who read this. Thanks! > > -- > | Jeremy Chadwick jdc at parodius.com | > | Parodius Networking http://www.parodius.com/ | > | UNIX Systems Administrator Mountain View, CA, US | > | Making life hard for others since 1977. 
PGP 4BD6C0CB | > From owner-freebsd-fs@FreeBSD.ORG Tue Sep 20 01:13:22 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 90F3F1065670 for ; Tue, 20 Sep 2011 01:13:21 +0000 (UTC) (envelope-from julian@freebsd.org) Received: from vps1.elischer.org (vps1.elischer.org [204.109.63.16]) by mx1.freebsd.org (Postfix) with ESMTP id 497318FC15 for ; Tue, 20 Sep 2011 01:13:21 +0000 (UTC) Received: from julian-mac.elischer.org (home-nat.elischer.org [67.100.89.137]) (authenticated bits=0) by vps1.elischer.org (8.14.4/8.14.4) with ESMTP id p8K1DJmZ069775 (version=TLSv1/SSLv3 cipher=DHE-RSA-CAMELLIA256-SHA bits=256 verify=NO); Mon, 19 Sep 2011 18:13:20 -0700 (PDT) (envelope-from julian@freebsd.org) Message-ID: <4E77E8D6.1050108@freebsd.org> Date: Mon, 19 Sep 2011 18:13:58 -0700 From: Julian Elischer User-Agent: Mozilla/5.0 (Macintosh; U; PPC Mac OS X 10.4; en-US; rv:1.9.2.22) Gecko/20110902 Thunderbird/3.1.14 MIME-Version: 1.0 To: Alexander Leidinger References: <72A6ABD6-F6FD-4563-AB3F-6061E3DD9FBF@digsys.bg> <1316458811.88701.YahooMailClassic@web121208.mail.ne1.yahoo.com> <20110919223653.0000702b@unknown> In-Reply-To: <20110919223653.0000702b@unknown> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Cc: Jason Usher , freebsd-fs@freebsd.org Subject: Re: ZFS obn FreeBSD hardware model for 48 or 96 sata3 paths... X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 20 Sep 2011 01:13:22 -0000 jason, you still haven't said what the reason for all this is.. speed, capacity, both or some other reason.. (or if you did, I missed it). it also makes a difference in how much ZIL or L2ARC you will need and how you will lay that out.. 
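The replacement procedure Freddie describes in the autoexpand thread above can be sketched as a sequence of zpool commands. The pool and device names below are hypothetical, and the autoexpand property only exists on newer pool versions (so not on the OP's v13 pool); treat this as a sketch of the idea, not a recipe for 7.3:

```sh
# Newer ZFS: let the pool grow automatically once every disk in the
# vdev has been replaced with a larger one (order does not matter).
zpool set autoexpand=on home

# Replace each mirror member in turn, waiting for the resilver to
# finish before starting the next one.
zpool replace home ad5s1a ad10s1a   # hypothetical new, larger disk
zpool status home                   # wait until resilver completes
zpool replace home ad7s1a ad12s1a
zpool status home

# On older ZFS (such as v13 on 7.3), the extra space only shows up
# after a reboot or, if the OS is not on the pool, an export/import:
zpool export home && zpool import home
```

The space becomes available only once the smallest device in the vdev has grown, which matches Peter Jeremy's summary later in the thread.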
From owner-freebsd-fs@FreeBSD.ORG Tue Sep 20 07:13:18 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 552421065672 for ; Tue, 20 Sep 2011 07:13:18 +0000 (UTC) (envelope-from radiomlodychbandytow@o2.pl) Received: from moh2-ve1.go2.pl (moh2-ve1.go2.pl [193.17.41.186]) by mx1.freebsd.org (Postfix) with ESMTP id DAA278FC18 for ; Tue, 20 Sep 2011 07:13:17 +0000 (UTC) Received: from moh2-ve1.go2.pl (unknown [10.0.0.186]) by moh2-ve1.go2.pl (Postfix) with ESMTP id 4F6F244CD41 for ; Tue, 20 Sep 2011 09:13:11 +0200 (CEST) Received: from unknown (unknown [10.0.0.108]) by moh2-ve1.go2.pl (Postfix) with SMTP for ; Tue, 20 Sep 2011 09:13:11 +0200 (CEST) Received: from host892524678.com-promis.3s.pl [89.25.246.78] by poczta.o2.pl with ESMTP id SWpvtn; Tue, 20 Sep 2011 09:14:11 +0200 Message-ID: <4E783D04.7000006@o2.pl> Date: Tue, 20 Sep 2011 09:13:08 +0200 From: =?UTF-8?B?UmFkaW8gbcWCb2R5Y2ggYmFuZHl0w7N3?= User-Agent: Mozilla/5.0 (Windows NT 5.2; WOW64; rv:6.0.2) Gecko/20110902 Thunderbird/6.0.2 MIME-Version: 1.0 To: freebsd-fs@freebsd.org References: <20110919233058.463F01065711@hub.freebsd.org> In-Reply-To: <20110919233058.463F01065711@hub.freebsd.org> X-O2-Trust: 2, 65 X-O2-SPF: neutral Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 7bit X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Subject: Re: freebsd-fs Digest, Vol 431, Issue 2 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 20 Sep 2011 07:13:18 -0000 On 2011-09-20 01:30, freebsd-fs-request@freebsd.org wrote: > Presuming I can*find* a 112+ lane mobo, I assume the cost would be at worst double ($800ish instead of $400ish) a mobo with fewer pcie lanes... Good luck...I would be extremely surprised if you found it. 
And then even more surprised if it cost much below $8000. TYAN S8232, S7025 have 72 lanes, but unevenly distributed. Supermicro X8OBN-F has 80, but that's for Xeon 7xxx. Tyan FT72B7015 has 8 x16 slots and 2 x4, but the x16 ones are built with PCIe switches, which halves available bandwidth. I think having 8 SATA cards in them would be your best option. The only chance to have more would be something with 4 Opterons, but I haven't seen anything like that and I don't know if it's actually possible. As pointed out, you'll also find scalability issues along the way in controllers, memory and likely in the OS too. Scalable Informatics built a system like this recently and after lots of tweaking they got 4.1 GB/s writes. http://scalability.org/?p=3355 However, that was with Linux, XFS and a ton of tweaks made by people who do it for a living. Overall, it reminds me of a song: https://www.youtube.com/watch?v=twICykaRRvY -- Twoje radio From owner-freebsd-fs@FreeBSD.ORG Tue Sep 20 07:35:07 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 9CA3B106566C for ; Tue, 20 Sep 2011 07:35:07 +0000 (UTC) (envelope-from lev@FreeBSD.org) Received: from onlyone.friendlyhosting.spb.ru (onlyone.friendlyhosting.spb.ru [IPv6:2a01:4f8:131:60a2::2]) by mx1.freebsd.org (Postfix) with ESMTP id 3ACC88FC15 for ; Tue, 20 Sep 2011 07:35:07 +0000 (UTC) Received: from lion.home.serebryakov.spb.ru (unknown [IPv6:2001:470:923f:1:f803:edca:622b:8392]) (Authenticated sender: lev@serebryakov.spb.ru) by onlyone.friendlyhosting.spb.ru (Postfix) with ESMTPA id 6625D4AC1C; Tue, 20 Sep 2011 11:35:05 +0400 (MSD) Date: Tue, 20 Sep 2011 11:35:03 +0400 From: Lev Serebryakov Organization: FreeBSD X-Priority: 3 (Normal) Message-ID: <14810728095.20110920113503@serebryakov.spb.ru> To: =?utf-8?Q?Radio_m=C5=82odych_bandyt=C3=B3w?= In-Reply-To: <4E783D04.7000006@o2.pl> References:
<20110919233058.463F01065711@hub.freebsd.org> <4E783D04.7000006@o2.pl> MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: quoted-printable Cc: freebsd-fs@freebsd.org, Jason Usher Subject: Re: freebsd-fs Digest, Vol 431, Issue 2 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: lev@FreeBSD.org List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 20 Sep 2011 07:35:07 -0000 Hello, Radio. You wrote on 20 September 2011 at 11:13:08: >> Presuming I can *find* a 112+ lane mobo, I assume the cost would be at worst double ($800ish instead of $400ish) a mobo with fewer pcie lanes... > Good luck...I would be extremely surprised if you found it. And then > even more surprised if it cost much below $8000. > TYAN S8232, S7025 have 72 lanes, but unevenly distributed > Supermicro X8OBN-F has 80, but that's for Xeon 7xxx. > Tyan FT72B7015 has 8 x16 slots and 2 x4, but x16 ones are built with > PCIe switches, which halves available bandwidth. I think having 8 SATA > cards in them would be your best option. IMHO, the best option for the topic starter is to buy a Sun, grrr, sorry, Oracle Thumper. It has a custom-built mobo with a proper configuration of PCIe lanes, a proper case for 48 3.5" SATA drives, already configured and tuned ZFS (Oracle 10, of course), etc. And as it is an Opteron-based server, FreeBSD could be installed too :) Its official name is SunFire X4540. It is EOL now, but, I think, it is possible to find one.
Here is one at eBay for $19000 with 48x500Gb discs right now :) -- // Black Lion AKA Lev Serebryakov From owner-freebsd-fs@FreeBSD.ORG Tue Sep 20 07:52:13 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id AB5BD106564A for ; Tue, 20 Sep 2011 07:52:13 +0000 (UTC) (envelope-from jdc@koitsu.dyndns.org) Received: from qmta06.westchester.pa.mail.comcast.net (qmta06.westchester.pa.mail.comcast.net [76.96.62.56]) by mx1.freebsd.org (Postfix) with ESMTP id 67D0B8FC08 for ; Tue, 20 Sep 2011 07:52:13 +0000 (UTC) Received: from omta21.westchester.pa.mail.comcast.net ([76.96.62.72]) by qmta06.westchester.pa.mail.comcast.net with comcast id avne1h0041ZXKqc56vsD02; Tue, 20 Sep 2011 07:52:13 +0000 Received: from koitsu.dyndns.org ([67.180.84.87]) by omta21.westchester.pa.mail.comcast.net with comcast id avsB1h00X1t3BNj3hvsCgh; Tue, 20 Sep 2011 07:52:13 +0000 Received: by icarus.home.lan (Postfix, from userid 1000) id 6AD0F102C31; Tue, 20 Sep 2011 00:52:10 -0700 (PDT) Date: Tue, 20 Sep 2011 00:52:10 -0700 From: Jeremy Chadwick To: Julian Elischer Message-ID: <20110920075210.GA8194@icarus.home.lan> References: <72A6ABD6-F6FD-4563-AB3F-6061E3DD9FBF@digsys.bg> <1316458811.88701.YahooMailClassic@web121208.mail.ne1.yahoo.com> <20110919223653.0000702b@unknown> <4E77E8D6.1050108@freebsd.org> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <4E77E8D6.1050108@freebsd.org> User-Agent: Mutt/1.5.21 (2010-09-15) Cc: Jason Usher , freebsd-fs@freebsd.org Subject: Re: ZFS obn FreeBSD hardware model for 48 or 96 sata3 paths...
X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 20 Sep 2011 07:52:13 -0000 On Mon, Sep 19, 2011 at 06:13:58PM -0700, Julian Elischer wrote: > jason, you still haven't said what the reason for all this is.. > speed, capacity, both or some other reason.. > (or if you did, I missed it). Meanwhile, I'm still trying to figure out Jason's actual goal. I get the feeling it's an attempt at avoiding use of, say, a Netapp filer (read: dedicated hardware and software on a device built to do and scale to the degree the OP wants) and the reasons are unknown. I'm also dreading all of the support mails on the list we'd see 2-3 months after such a beast was built. "It works!!!" followed by 2-3 months of silence, then "Hi, I have a problem ". The closest pre-built thing I can find to a white-box system would be iXSystems' Titan 445J, but the *actual hardware* used in the storage subsystem is unknown, ditto with tons of specification details: http://www.ixsystems.com/ix/storage/titan-jbod/titan-445j This comes nowhere near what the OP stated he wants, though, in numerous regards. -- | Jeremy Chadwick jdc at parodius.com | | Parodius Networking http://www.parodius.com/ | | UNIX Systems Administrator Mountain View, CA, US | | Making life hard for others since 1977.
PGP 4BD6C0CB | From owner-freebsd-fs@FreeBSD.ORG Tue Sep 20 11:39:39 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id A2DC2106564A for ; Tue, 20 Sep 2011 11:39:39 +0000 (UTC) (envelope-from peterjeremy@acm.org) Received: from mail12.syd.optusnet.com.au (mail12.syd.optusnet.com.au [211.29.132.193]) by mx1.freebsd.org (Postfix) with ESMTP id 2EB7D8FC14 for ; Tue, 20 Sep 2011 11:39:38 +0000 (UTC) Received: from server.vk2pj.dyndns.org (c220-239-116-103.belrs4.nsw.optusnet.com.au [220.239.116.103]) by mail12.syd.optusnet.com.au (8.13.1/8.13.1) with ESMTP id p8KBdZJp019627 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Tue, 20 Sep 2011 21:39:37 +1000 X-Bogosity: Ham, spamicity=0.000000 Received: from server.vk2pj.dyndns.org (localhost.vk2pj.dyndns.org [127.0.0.1]) by server.vk2pj.dyndns.org (8.14.5/8.14.4) with ESMTP id p8KBdY48086442; Tue, 20 Sep 2011 21:39:34 +1000 (EST) (envelope-from peter@server.vk2pj.dyndns.org) Received: (from peter@localhost) by server.vk2pj.dyndns.org (8.14.5/8.14.4/Submit) id p8KBdY5c086441; Tue, 20 Sep 2011 21:39:34 +1000 (EST) (envelope-from peter) Date: Tue, 20 Sep 2011 21:39:33 +1000 From: Peter Jeremy To: Jeremy Chadwick Message-ID: <20110920113933.GA84566@server.vk2pj.dyndns.org> References: <9774D03B-A8C7-48DE-9BC4-528DD4134787@altesco.nl> <20110919213813.GA70527@icarus.home.lan> <20110919233045.GA71606@icarus.home.lan> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="0F1p//8PRICkK4MW" Content-Disposition: inline In-Reply-To: <20110919233045.GA71606@icarus.home.lan> X-PGP-Key: http://members.optusnet.com.au/peterjeremy/pubkey.asc User-Agent: Mutt/1.5.21 (2010-09-15) Cc: freebsd-fs@freebsd.org Subject: Re: ZFS auto expand mirror X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems 
List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 20 Sep 2011 11:39:39 -0000 --0F1p//8PRICkK4MW Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On 2011-Sep-19 16:30:45 -0700, Jeremy Chadwick wrote: >Also, need some clarification here: when you say "the last drive in the >vdev" do you effectively mean "once all the drives in the vdev are of >the same size", or do you quite literally mean "the last device/disk >shown in the vdev"? An easy way of looking at it is that the vdev (and hence pool) will autoexpand to match the smallest device in the vdev when any device is replaced. -- Peter Jeremy --0F1p//8PRICkK4MW Content-Type: application/pgp-signature -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.18 (FreeBSD) iEYEARECAAYFAk54e3UACgkQ/opHv/APuId/hACePgiuajZT3pOSTwqw6HSoU5H1 uXkAniy7cCun7IwQm5N/R9gDTXmplVTh =Qzrh -----END PGP SIGNATURE----- --0F1p//8PRICkK4MW-- From owner-freebsd-fs@FreeBSD.ORG Tue Sep 20 11:58:55 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 7B8141065670 for ; Tue, 20 Sep 2011 11:58:55 +0000 (UTC) (envelope-from roberto@keltia.freenix.fr) Received: from keltia.net (centre.keltia.net [IPv6:2a01:240:fe5c::41]) by mx1.freebsd.org (Postfix) with ESMTP id 31BCA8FC1A for ; Tue, 20 Sep 2011 11:58:55 +0000 (UTC) Received: from roberto-al.eurocontrol.fr (aran.keltia.net [88.191.250.24]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) (Authenticated sender: roberto) by keltia.net (Postfix/TLS) with ESMTPSA id 1BA3D10155 for ; Tue, 20 Sep 2011 13:58:52 +0200 (CEST) Date: Tue, 20 Sep 2011 13:58:46 +0200 From: Ollivier Robert To: freebsd-fs@freebsd.org Message-ID: <20110920115845.GA11481@roberto-al.eurocontrol.fr> References:
<20110905195458.GA7863@felucia.tataz.chchile.org> <4E65393F.9070401@FreeBSD.org> <20110917155859.GA8243@felucia.tataz.chchile.org> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20110917155859.GA8243@felucia.tataz.chchile.org> X-Operating-System: MacOS X / Macbook Pro - FreeBSD 7.2 / Dell D820 SMP User-Agent: Mutt/1.5.21 (2010-09-15) Subject: Re: Difficulties to use ZFS root: ROOT MOUNT ERROR X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 20 Sep 2011 11:58:55 -0000 According to Jeremie Le Hen: > The kernel boots fine, it finds the root filesystem, but fails miserably > when running rc.d scripts because deeper datasets are not mounted (/var, > /usr, ...). > > I've been fiddling with this for 3 hours this afternoon without luck. > Does anyone have an idea on this please? Have you changed the mountpoint property for all FS below root? zfs set mountpoint=/usr tank/root/usr ... -- Ollivier ROBERT -=- FreeBSD: The Power to Serve! -=- roberto@keltia.net In memoriam to Ondine, our 2nd child: http://ondine.keltia.net/
(rs%bytecamp.net@212.204.60.37) by mail.bytecamp.net with CAMELLIA256-SHA encrypted SMTP; 20 Sep 2011 15:57:27 +0200 Message-ID: <4E789BC7.3090702@bytecamp.net> Date: Tue, 20 Sep 2011 15:57:27 +0200 From: Robert Schulze Organization: bytecamp GmbH User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.21) Gecko/20110831 Thunderbird/3.1.13 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=ISO-8859-15; format=flowed Content-Transfer-Encoding: 7bit Subject: NFS umount takes ages when no DNS available X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 20 Sep 2011 14:24:08 -0000 Hi, during maintenance work, I realized that an umount -h ip.ad.re.ss takes a very long time when there is no nameserver reachable by the client (and server). All nfs configuration is done without hostnames, so I wonder why there is this delay (about 1 minute per mountpoint). client: 8.1-RELEASE-p1/amd64 server: 8.2-STABLE/amd64 The hostname seems to be exchanged by client and server; this can be noticed by warnings like the following in /var/log/messages on the server: rpc.statd: Failed to contact host client.foobar.net: RPC: Port mapper failure - RPC: Timed out statd is running on the client, but bound to an interface with an ip address not matching the logged hostname. Could NFS be tweaked so that it does not use hostnames at all, or at least will not provide hostnames on interfaces which don't carry the ip address the hostname matches to?
with kind regards, Robert Schulze From owner-freebsd-fs@FreeBSD.ORG Tue Sep 20 16:01:06 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 737EC106566B for ; Tue, 20 Sep 2011 16:01:06 +0000 (UTC) (envelope-from bfriesen@simple.dallas.tx.us) Received: from blade.simplesystems.org (blade.simplesystems.org [65.66.246.74]) by mx1.freebsd.org (Postfix) with ESMTP id 3504A8FC17 for ; Tue, 20 Sep 2011 16:01:05 +0000 (UTC) Received: from freddy.simplesystems.org (freddy.simplesystems.org [65.66.246.65]) by blade.simplesystems.org (8.14.4+Sun/8.14.4) with ESMTP id p8KG14RQ022973; Tue, 20 Sep 2011 11:01:04 -0500 (CDT) Date: Tue, 20 Sep 2011 11:01:05 -0500 (CDT) From: Bob Friesenhahn X-X-Sender: bfriesen@freddy.simplesystems.org To: Freddie Cash In-Reply-To: Message-ID: References: <1316459220.35419.YahooMailClassic@web121209.mail.ne1.yahoo.com> User-Agent: Alpine 2.01 (GSO 1266 2009-07-14) MIME-Version: 1.0 Content-Type: MULTIPART/MIXED; BOUNDARY="-559023410-351212254-1316534465=:26410" X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.2.2 (blade.simplesystems.org [65.66.246.90]); Tue, 20 Sep 2011 11:01:05 -0500 (CDT) Cc: freebsd-fs@freebsd.org Subject: Re: ZFS obn FreeBSD hardware model for 48 or 96 sata3 paths... X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 20 Sep 2011 16:01:06 -0000 This message is in MIME format. The first part should be readable text, while the remaining parts are likely unreadable without MIME-aware tools. ---559023410-351212254-1316534465=:26410 Content-Type: TEXT/PLAIN; charset=UTF-8; format=flowed Content-Transfer-Encoding: 8BIT On Mon, 19 Sep 2011, Freddie Cash wrote: > > L2ARC has extreme bandwidth limitations as compared with RAM.
Be careful what you wish for. > > For writes (7 MBps, I believe); there shouldn't be any limits on the reads. If (for example) an SSD is used with a 200MB/s read rate for the L2ARC, then the L2ARC is limited to 200MB/s (as compared with perhaps 10GB/s or 20GB/s for RAM). The L2ARC is really all about eliminating the access latency of rotating rust, but any device will provide far less bandwidth than system RAM. Bob -- Bob Friesenhahn bfriesen@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer, http://www.GraphicsMagick.org/ ---559023410-351212254-1316534465=:26410-- From owner-freebsd-fs@FreeBSD.ORG Tue Sep 20 18:45:13 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 351AB1065677 for ; Tue, 20 Sep 2011 18:45:13 +0000 (UTC) (envelope-from fjwcash@gmail.com) Received: from mail-vw0-f44.google.com (mail-vw0-f44.google.com [209.85.212.44]) by mx1.freebsd.org (Postfix) with ESMTP id DA3978FC1C for ; Tue, 20 Sep 2011 18:45:12 +0000 (UTC) Received: by vws5 with SMTP id 5so1176852vws.17 for ; Tue, 20 Sep 2011 11:45:11 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=2yh1zGEglDQRK/+Q9i+GwUumWW9qwKPtvxFCKRTsIDM=; b=pCB1tKTSopVzOqgdnlIpWFvv4Obj3lbny17hHOn6ZlccFN0KlpNG+6mW1dbRGJntN6 8F02AEXZ4o0fHqdwnsLfP7R9b1UugGjgat3OINHoFjMUg5agl7BQ9UFk5EpmSBi0uOKZ 6WUZq8O9yNJRdx/ClExQnMsxxvZpgrBLrgSBg= MIME-Version: 1.0 Received: by 10.52.176.196 with SMTP id ck4mr1054023vdc.168.1316544311726; Tue, 20 Sep 2011 11:45:11 -0700 (PDT) Received: by 10.220.198.130 with HTTP; Tue, 20 Sep 2011 11:45:11 -0700 (PDT) In-Reply-To: References: <1316459220.35419.YahooMailClassic@web121209.mail.ne1.yahoo.com> Date: Tue, 20 Sep 2011 11:45:11 -0700 Message-ID: From: Freddie Cash To: Bob Friesenhahn Content-Type:
text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Cc: freebsd-fs@freebsd.org Subject: Re: ZFS obn FreeBSD hardware model for 48 or 96 sata3 paths... X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 20 Sep 2011 18:45:13 -0000 On Tue, Sep 20, 2011 at 9:01 AM, Bob Friesenhahn < bfriesen@simple.dallas.tx.us> wrote: > On Mon, 19 Sep 2011, Freddie Cash wrote: > >> >> L2ARC has extreme bandwidth limitations as compared with RAM. Be careful >> what you wish for. >> >> For writes (7 MBps, I believe); there shouldn't be any limits on the >> reads. >> > > If (for example) an SSD is used with a 200MB/s read rate for the L2ARC, > then the L2ARC is limited to 200MB/s (as compared with perhaps 10GB/s or > 20GB/s for RAM). > Ah, yes, obviously it's limited by the hardware, but so is the pool. :) I meant there's no artificial limits on reads from an L2ARC device, or writes to a ZIL device. In contrast to the write throttling for the L2ARC device. > The L2ARC is really all about eliminating the access latency of > rotating-rust but any device will provide far less bandwidth than system > RAM. > Yes, L2ARC is definitely slower than RAM. But properly selected/configured L2ARC will be a heck of a lot faster/lower latency than the pool. 
Hence the ordering I gave originally: ARC -> L2ARC -> pool -- Freddie Cash fjwcash@gmail.com From owner-freebsd-fs@FreeBSD.ORG Tue Sep 20 19:25:46 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 85DF2106566B for ; Tue, 20 Sep 2011 19:25:46 +0000 (UTC) (envelope-from jusher71@yahoo.com) Received: from nm23.bullet.mail.ne1.yahoo.com (nm23.bullet.mail.ne1.yahoo.com [98.138.90.86]) by mx1.freebsd.org (Postfix) with SMTP id 504438FC1D for ; Tue, 20 Sep 2011 19:25:46 +0000 (UTC) Received: from [98.138.90.55] by nm23.bullet.mail.ne1.yahoo.com with NNFMP; 20 Sep 2011 19:25:45 -0000 Received: from [98.138.89.164] by tm8.bullet.mail.ne1.yahoo.com with NNFMP; 20 Sep 2011 19:25:45 -0000 Received: from [127.0.0.1] by omp1020.mail.ne1.yahoo.com with NNFMP; 20 Sep 2011 19:25:45 -0000 X-Yahoo-Newman-Property: ymail-3 X-Yahoo-Newman-Id: 845336.8903.bm@omp1020.mail.ne1.yahoo.com Received: (qmail 7554 invoked by uid 60001); 20 Sep 2011 19:25:45 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024; t=1316546745; bh=nXWEW4O+hjHeIa68zo2/nKjkvSUpNdeGTdRGlbRh71U=; h=X-YMail-OSG:Received:X-Mailer:Message-ID:Date:From:Subject:To:MIME-Version:Content-Type; b=5dE7vuj1KVZo8kRELywrnDoBWV4F5MvpY7Oow1MN8bJG9tK8eCiK551MQqPCU4S7B2kXPrA32Y8i3ZlwtvFsrpkIvdX0X841KfELa2vgXQ8u2MN1ktDSVIH8DvllDX/QVGF0m+sMsWBhCyDqBvFECnL4P5puCsYVqkzmyYmR8+4= DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com; h=X-YMail-OSG:Received:X-Mailer:Message-ID:Date:From:Subject:To:MIME-Version:Content-Type; b=T7jfVqbF7o4NHQ4JyiYALmMedNUlvLdK13gxFP18n8/A1pHjRAX+8MLtFhfPB0YNlA3RXQJis+exzr2zwsWqPJZmrYQDyHdhE4qPeDf8AUKm8Iaw5o1URh6y+UNTk84eZp2hKiYc29wZaFs4f/ww777YRfqmyJV4OLi3Lm+43XQ=; X-YMail-OSG: AiF9w.IVM1neVL33uwCXCtlxmD1XPU6G6F0uMZfWYcMvVzV qz0D5drKn6LZJFdDj2DH4g3nRILxcmBRoX.tDktnHcQizKzKXN1GbZZXAWjL qbSwt.7vwxzcJ6JiJ.UVDJLYClH1uolEFJwB0Tv1o7hJjTw2JGOXPcdoj_aH 
_Pmp56gCLk2ljITGcEsWU9yvGk98kSotvcxWUjc_OL2tSRzFU7QKNGb2mL3s 9ef7K8PRLtNGxAVZdWQljomIMsPGdGjy95OiFlVPYdXcaMSRCzAp3g.l5hDy NIXk_jjfFTKGHQ3Iu5rDp61K3bYDjEosCabD5naBWNe15LXdW2BNGG8NtQdq Lo8PyX0LKGCS8PwfYbJNi3.xzKzR3E2Y80nlMli_frZ7XCZaDwImMiEFv3_b a5VnzgDocWXl9ZhthnWA4I1f2g0nfdoH.ZXjfyHwN2bt8gi0- Received: from [46.105.26.30] by web121208.mail.ne1.yahoo.com via HTTP; Tue, 20 Sep 2011 12:25:45 PDT X-Mailer: YahooMailClassic/14.0.5 YahooMailWebService/0.8.114.317681 Message-ID: <1316546745.96947.YahooMailClassic@web121208.mail.ne1.yahoo.com> Date: Tue, 20 Sep 2011 12:25:45 -0700 (PDT) From: Jason Usher To: freebsd-fs@freebsd.org MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii X-Mailman-Approved-At: Tue, 20 Sep 2011 21:03:13 +0000 Subject: Re: ZFS obn FreeBSD hardware model for 48 or 96 sata3 paths... X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 20 Sep 2011 19:25:46 -0000 Hi Julian, --- On Mon, 9/19/11, Julian Elischer wrote: > jason, you still haven't said what > the reason for all this is.. speed, capacity, both or > some other reason.. > (or if you did, I missed it). I did, but I replied badly and it didn't thread right - sorry. The use case is very simple - nothing interesting at all - just a big giant local fileserver that will get hit by a lot of big, long rsync and sftp jobs, as well as some simple, but intensive, local housekeeping jobs (file culling with 'find', legacy hardlink "snapshots" and other things that could be done in better ways, but won't be). In fact, the only interesting aspect of the whole operation is that there are a few hundred million inodes in use and the average file size is between 150 and 200 KB. So why am I going on about pcie paths and dedicated drive paths, etc. ? 
No reason - I just thought it was a simple and cheap optimization that would allow me to never worry about a certain class of problems - admittedly, problems I might not ever run into. I'm not going to double the cost of 48 drives to get this, nor am I going to double the cost of 6 adaptor cards to do this, but I *would* be willing to double the cost of a single motherboard to do this. But now I see it's not that practical, and probably doesn't exist. The latest, greatest 32 lane pcie 2.0 motherboards tend to have just four ports, or other such complications. So, if there isn't a better suggestion, I think I will economize a bit and get the Supermicro X8DTH-6F ... 8 core / 192 GB / 7 8x slots ... or the X8DAH+-F ... 8 core / 288 GB / 2 x16, 4 x8, 1 x4 slots. The other questions regarding the ZIL/L2arc and so on have, I think, been answered - many thanks for all of the good suggestions and warnings. From owner-freebsd-fs@FreeBSD.ORG Tue Sep 20 21:16:16 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 411E1106566B for ; Tue, 20 Sep 2011 21:16:16 +0000 (UTC) (envelope-from freebsd-fs@m.gmane.org) Received: from lo.gmane.org (lo.gmane.org [80.91.229.12]) by mx1.freebsd.org (Postfix) with ESMTP id C38C78FC16 for ; Tue, 20 Sep 2011 21:16:15 +0000 (UTC) Received: from list by lo.gmane.org with local (Exim 4.69) (envelope-from ) id 1R67fo-0001th-EM for freebsd-fs@freebsd.org; Tue, 20 Sep 2011 23:16:12 +0200 Received: from dyn1242-88.vpn.ic.ac.uk ([129.31.242.88]) by main.gmane.org with esmtp (Gmexim 0.1 (Debian)) id 1AlnuQ-0007hv-00 for ; Tue, 20 Sep 2011 23:16:12 +0200 Received: from jtotz by dyn1242-88.vpn.ic.ac.uk with local (Gmexim 0.1 (Debian)) id 1AlnuQ-0007hv-00 for ; Tue, 20 Sep 2011 23:16:12 +0200 X-Injected-Via-Gmane: http://gmane.org/ To: freebsd-fs@freebsd.org From: Johannes Totz Date: Tue, 20 Sep 2011 22:15:58 +0100 Lines: 48 Message-ID: 
<4E79028E.3090102@imperial.ac.uk> References: <201109191430.p8JEUA03063023@freefall.freebsd.org> Mime-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 8bit X-Complaints-To: usenet@dough.gmane.org X-Gmane-NNTP-Posting-Host: dyn1242-88.vpn.ic.ac.uk User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:6.0.2) Gecko/20110902 Thunderbird/6.0.2 In-Reply-To: <201109191430.p8JEUA03063023@freefall.freebsd.org> Cc: freebsd-fs@FreeBSD.org Subject: Re: amd64/160801: zfsboot on 8.2-RELEASE fails to boot from root-on-zfs in MBR slice X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 20 Sep 2011 21:16:16 -0000 On 19/09/2011 15:30, Camillo Särs wrote: > The following reply was made to PR kern/160801; it has been noted by GNATS. > > From: =?ISO-8859-15?Q?Camillo_S=E4rs?= > To: John Baldwin > Cc: freebsd-amd64@freebsd.org, freebsd-gnats-submit@freebsd.org > Subject: Re: amd64/160801: zfsboot on 8.2-RELEASE fails to boot from root-on-zfs > in MBR slice > Date: Mon, 19 Sep 2011 17:07:26 +0300 > > Hi, > > On 2011-09-19 15:02, John Baldwin wrote: > >> Install zfsboot from 9.0-BETA2, where the problem is fixed. > > > > Can you test 8.2-stable? The various fixes made to zfsboot in 9 were merged > > to 8 after 8.2-release. > > Unfortunately fixing this issue by installing zfsboot from 9.0-BETA2 was > a surprising amount of work, because of an incompatibility between the > 9.0 USB installer GPT and the BIOS on this system. It took quite a > while to recognize the root cause for that one. I simply cannot boot > the system in question with the GPT pmbr used on the memstick of 9.0. > The BIOS locks completely. I have a similar issue (with an HP Proliant microserver). GPT on USB simply won't boot, but GPT on HDD is fine.
However, I followed http://wiki.freebsd.org/RootOnZFS/ZFSBootPartition to set up an MBR on my boot-usb-stick and it worked fine. This was using a version of 8-stable from around 5th Sept 2011 (don't have the svn rev at hand). > I am very reluctant to risk breaking my currently running system, the > previous boot failure caused almost two weeks of downtime. > > Does the 8.2-stable memstick image still use MBR? If so, I could > conceivably try to copy the 9.0 zfsboot version to the 8.2-stable > memstick and test both. > > Regards, > Camillo > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@FreeBSD.ORG Wed Sep 21 23:50:56 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 155E4106564A for ; Wed, 21 Sep 2011 23:50:56 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) Received: from esa-annu.mail.uoguelph.ca (esa-annu.mail.uoguelph.ca [131.104.91.36]) by mx1.freebsd.org (Postfix) with ESMTP id C5AF88FC16 for ; Wed, 21 Sep 2011 23:50:55 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: AqsEAGd3ek6DaFvO/2dsb2JhbABChFyhJoJggVMBAQQBIwRSBRYOCgICDRkCWQaICqJ4kXCBLIRAgREEk02RSw X-IronPort-AV: E=Sophos;i="4.68,420,1312171200"; d="scan'208";a="135387600" Received: from erie.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.206]) by esa-annu-pri.mail.uoguelph.ca with ESMTP; 21 Sep 2011 19:50:54 -0400 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id D41C7B3F1F; Wed, 21 Sep 2011 19:50:54 -0400 (EDT) Date: Wed, 21 Sep 2011 19:50:54 -0400 (EDT) From: Rick Macklem To: Robert Schulze Message-ID: <373396436.1795807.1316649054817.JavaMail.root@erie.cs.uoguelph.ca> In-Reply-To:
<4E789BC7.3090702@bytecamp.net> MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [172.17.91.202] X-Mailer: Zimbra 6.0.10_GA_2692 (ZimbraWebClient - FF3.0 (Win)/6.0.10_GA_2692) Cc: freebsd-fs@freebsd.org Subject: Re: NFS umount takes ages when no DNS available X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 21 Sep 2011 23:50:56 -0000 Robert Schulze wrote:
> Hi,
>
> during maintenance work, I realized that an umount -h ip.ad.re.ss takes
> a very long time when there is no nameserver reachable by the client
> (and server). All NFS configuration is done without hostnames, so I
> wonder why there is this delay (about 1 minute per mountpoint).
>
Well, here is the code snippet. (I'm not sure why the author felt that
the getaddrinfo() needed to be done before the check for the need to do
an RPC?):

    if (hostp != NULL) {
        *delimp = '\0';
        getaddrinfo(hostp, NULL, &hints, &ai);
        if (ai == NULL) {
            warnx("can't get net id for host");
        }
    }

    /*
     * Check if we have to start the rpc-call later.
     * If there are still identical nfs-names mounted,
     * we skip the rpc-call. Obviously this has to
     * happen before unmount(2), but it should happen
     * after the previous namecheck.
     * A non-NULL return means that this is the last
     * mount from mntfromname that is still mounted.
     */
    if (getmntentry(sfs->f_mntfromname, NULL, NULL,
        CHECKUNIQUE) != NULL)
        do_rpc = 1;

Just to clarify, umount(8) does do an RPC against the server, but it
isn't very important. All it does is tell the server to remove an entry
from the table it uses to generate replies to showmount(8). NFS itself
doesn't care about this table.
If you do a

    # umount /mnt

and the server name can't be resolved via DNS, you can ^C out and the
mount point will be gone on the client. (As above, I'm not sure why the
author felt that the getaddrinfo() should be done before the umount(2)
for the "-h" case. As such, I would be hesitant to change it.) Even if
you change it, it would need to be done later, so the RPC to the server
can be performed. The ^C trick would then work, but only if there was
only one mount point on the server.

> client: 8.1-RELEASE-p1/amd64
> server: 8.2-STABLE/amd64
>
> The hostname seems to be exchanged by client and server; this can be
> noticed by warnings like the following in /var/log/messages on the
> server:
>
> rpc.statd: Failed to contact host client.foobar.net: RPC: Port mapper
> failure - RPC: Timed out
>
> statd is running on the client, but bound to an interface with an IP
> address not matching the logged hostname.
>
> Could NFS be tweaked in such a way that it does not use hostnames at
> all, or at least will not provide hostnames on interfaces which don't
> carry the IP address the hostname matches to?
>
I don't know if the hostnames are in arguments on the wire for the
Network Status Monitor (NSM) and Network Lock Manager (NLM) protocols.
To find out, you'll probably need to read the sources, since there
aren't any RFCs for these. (They were eventually published in an X/Open
manual, but in the old days you had to buy it in hardcopy, since that
was a funding source for X/Open.) These protocols were done in the days
when servers would have been listed in /etc/hosts files, so the names
always resolved.
(mid to late 1980s) rick From owner-freebsd-fs@FreeBSD.ORG Thu Sep 22 03:58:16 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx2.freebsd.org (mx2.freebsd.org [IPv6:2001:4f8:fff6::35]) by hub.freebsd.org (Postfix) with ESMTP id EB006106567B for ; Thu, 22 Sep 2011 03:58:16 +0000 (UTC) (envelope-from dougb@FreeBSD.org) Received: from 172-17-198-245.globalsuite.net (hub.freebsd.org [IPv6:2001:4f8:fff6::36]) by mx2.freebsd.org (Postfix) with ESMTP id 1F85114DA22; Thu, 22 Sep 2011 03:58:13 +0000 (UTC) Message-ID: <4E7AB254.4080908@FreeBSD.org> Date: Wed, 21 Sep 2011 20:58:12 -0700 From: Doug Barton Organization: http://SupersetSolutions.com/ User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:6.0.2) Gecko/20110912 Thunderbird/6.0.2 MIME-Version: 1.0 To: Rick Macklem References: <373396436.1795807.1316649054817.JavaMail.root@erie.cs.uoguelph.ca> In-Reply-To: <373396436.1795807.1316649054817.JavaMail.root@erie.cs.uoguelph.ca> X-Enigmail-Version: undefined OpenPGP: id=1A1ABC84 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org Subject: Re: NFS umount takes ages when no DNS available X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 22 Sep 2011 03:58:17 -0000 On 09/21/2011 16:50, Rick Macklem wrote: > I'm not sure why the author felt that the getaddinfo() needed to be > done before the check for the need to do an rpc? I can't speak to the original author's intent, but I do know that going all the way back to 1994 the advice I always received was to put critical NFS server hosts in /etc/hosts. Perhaps given that fundamental assumption this seemed reasonable. 
The code goes all the way back to: r74462 | alfred | 2001-03-19 04:50:13 -0800 (Mon, 19 Mar 2001) I'll leave the commit log itself as an exercise for the reader, since it's lengthy and informative, but not necessarily directly relevant to this problem. hth, Doug -- Nothin' ever doesn't change, but nothin' changes much. -- OK Go Breadth of IT experience, and depth of knowledge in the DNS. Yours for the right price. :) http://SupersetSolutions.com/ From owner-freebsd-fs@FreeBSD.ORG Thu Sep 22 09:49:49 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id C3329106564A for ; Thu, 22 Sep 2011 09:49:49 +0000 (UTC) (envelope-from rs@bytecamp.net) Received: from mail.bytecamp.net (mail.bytecamp.net [212.204.60.9]) by mx1.freebsd.org (Postfix) with ESMTP id 193DE8FC08 for ; Thu, 22 Sep 2011 09:49:48 +0000 (UTC) Received: (qmail 51783 invoked by uid 89); 22 Sep 2011 11:49:47 +0200 Received: from stella.bytecamp.net (HELO ?212.204.60.37?) 
(rs%bytecamp.net@212.204.60.37) by mail.bytecamp.net with CAMELLIA256-SHA encrypted SMTP; 22 Sep 2011 11:49:47 +0200 Message-ID: <4E7B04BB.2010808@bytecamp.net> Date: Thu, 22 Sep 2011 11:49:47 +0200 From: Robert Schulze Organization: bytecamp GmbH User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.21) Gecko/20110831 Thunderbird/3.1.13 MIME-Version: 1.0 To: freebsd-fs@freebsd.org References: <373396436.1795807.1316649054817.JavaMail.root@erie.cs.uoguelph.ca> In-Reply-To: <373396436.1795807.1316649054817.JavaMail.root@erie.cs.uoguelph.ca> Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 7bit Subject: Re: NFS umount takes ages when no DNS available X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 22 Sep 2011 09:49:49 -0000 Hi,

first of all: thanks for your answers.

On 22.09.2011 01:50, Rick Macklem wrote:
> Well, here is the code snippet. (I'm not sure why the author felt that
> the getaddrinfo() needed to be done before the check for the need to do
> an RPC?):
> [...]
>
> These protocols were done in the days when servers would have been in
> /etc/hosts files, so the names always resolved. (mid to late 1980s)

This all would make sense (at least to me) when using hostnames. But I
don't use them anywhere on the server or client regarding NFS.
Furthermore, a getaddrinfo() with a numeric IP address should return
instantly, or is it supposed to do a reverse lookup? The addresses we
use for NFS are in the 10.x.y.z/24 range and are not declared in our
nameservers, so the client will get NXDOMAIN for the address.

Regarding the hostname which statd complains about: I've looked into the
sources of /usr/sbin/rpc.statd and found one place where gethostname()
is called, but I can't figure out whether this value is also handed out
over the wire or is just used for logging purposes.
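Robert's question (does getaddrinfo() with a numeric address do a reverse lookup?) can be probed with a small standalone program. This is an illustration of the libc resolver flags only, not code taken from umount(8) or rpc.statd: getaddrinfo(3) does forward lookups, and with AI_NUMERICHOST it is forbidden from consulting DNS at all, while reverse (PTR) lookups come from getnameinfo(3), which likewise skips DNS when NI_NUMERICHOST is given.

```c
#include <arpa/inet.h>
#include <netdb.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

/*
 * Forward direction: with AI_NUMERICHOST, getaddrinfo(3) must fail
 * immediately for anything that is not a numeric address, so no DNS
 * query (and no resolver timeout) can occur.
 */
static int
lookup_numeric_only(const char *host, struct addrinfo **res)
{
	struct addrinfo hints;

	memset(&hints, 0, sizeof(hints));
	hints.ai_family = AF_UNSPEC;
	hints.ai_socktype = SOCK_DGRAM;
	hints.ai_flags = AI_NUMERICHOST;	/* never consult DNS */
	return (getaddrinfo(host, NULL, &hints, res));
}

/*
 * Reverse direction: NI_NUMERICHOST makes getnameinfo(3) return the
 * numeric string instead of issuing a PTR query; dropping the flag is
 * the kind of call that can stall when 10.x.y.z has no reverse records.
 */
static int
addr_to_string(const struct sockaddr_in *sin, char *buf, size_t buflen)
{
	return (getnameinfo((const struct sockaddr *)sin, sizeof(*sin),
	    buf, buflen, NULL, 0, NI_NUMERICHOST));
}

int
main(void)
{
	struct addrinfo *res = NULL;
	struct sockaddr_in sin;
	char host[NI_MAXHOST];

	/* A numeric address succeeds instantly, a name fails instantly. */
	printf("numeric ok: %d\n", lookup_numeric_only("10.1.2.3", &res));
	if (res != NULL)
		freeaddrinfo(res);
	res = NULL;
	printf("name rejected: %d\n",
	    lookup_numeric_only("server.example", &res) != 0);

	memset(&sin, 0, sizeof(sin));
	sin.sin_family = AF_INET;
	inet_pton(AF_INET, "10.9.8.7", &sin.sin_addr);
	if (addr_to_string(&sin, host, sizeof(host)) == 0)
		printf("no reverse lookup needed: %s\n", host);
	return (0);
}
```

If a per-mountpoint delay is observed with purely numeric configuration, it is worth checking which of these calls is being made without the numeric flags.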
When looking at a multihomed setup like we use it (external public ip address for non-nfs and internal local address for nfs-only) the hostname of the machine is true for the external interface only. with kind regards, Robert Schulze From owner-freebsd-fs@FreeBSD.ORG Thu Sep 22 10:01:17 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 72AF8106564A for ; Thu, 22 Sep 2011 10:01:17 +0000 (UTC) (envelope-from mickael.maillot@gmail.com) Received: from mail-qw0-f44.google.com (mail-qw0-f44.google.com [209.85.216.44]) by mx1.freebsd.org (Postfix) with ESMTP id 2E7128FC14 for ; Thu, 22 Sep 2011 10:01:16 +0000 (UTC) Received: by qwb8 with SMTP id 8so5914972qwb.3 for ; Thu, 22 Sep 2011 03:01:16 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=3cWggj0U8Odcx833T6bLxLz/b2rZqaM0JuDzFAr9Og4=; b=hvYyhW1zShdWZkxvg/or99HL326b+dpbacpM5/2f7zsql3u6BMfAr9XpVf5FMId5tX 96cWF9IBCle0K30aXTzGlnTgQU3vsShGEnjH3i2AmFGcjM2Objo82Z/cFUHvAM83sDqU dhN6lJBOTx4kI+lfs7DYcAMzIe1GO3RVrf+l4= MIME-Version: 1.0 Received: by 10.224.205.194 with SMTP id fr2mr1554562qab.320.1316684094411; Thu, 22 Sep 2011 02:34:54 -0700 (PDT) Received: by 10.229.89.145 with HTTP; Thu, 22 Sep 2011 02:34:54 -0700 (PDT) In-Reply-To: <1316546745.96947.YahooMailClassic@web121208.mail.ne1.yahoo.com> References: <1316546745.96947.YahooMailClassic@web121208.mail.ne1.yahoo.com> Date: Thu, 22 Sep 2011 11:34:54 +0200 Message-ID: From: =?ISO-8859-1?Q?Micka=EBl_Maillot?= To: Jason Usher Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Cc: freebsd-fs@freebsd.org Subject: Re: ZFS obn FreeBSD hardware model for 48 or 96 sata3 paths... 
X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 22 Sep 2011 10:01:17 -0000 Warning: don't forget to update all the firmware: SSDs, motherboard BIOS,
and RAID cards. Recent bad experiences:

- one OCZ Vertex 3 stuck the fileserver at reboot; the only option was
  to unplug the faulted SSD
- one disk failed behind a Supermicro AOC-USAS-L8i; the fileserver got
  stuck and needed a hard reboot

No problem after the USAS-L8i firmware upgrade.

From owner-freebsd-fs@FreeBSD.ORG Thu Sep 22 17:45:30 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id A40F9106566C for ; Thu, 22 Sep 2011 17:45:30 +0000 (UTC) (envelope-from jusher71@yahoo.com) Received: from nm35-vm5.bullet.mail.ne1.yahoo.com (nm35-vm5.bullet.mail.ne1.yahoo.com [98.138.229.101]) by mx1.freebsd.org (Postfix) with SMTP id 417328FC08 for ; Thu, 22 Sep 2011 17:45:29 +0000 (UTC) Received: from [98.138.90.52] by nm35.bullet.mail.ne1.yahoo.com with NNFMP; 22 Sep 2011 17:45:29 -0000 Received: from [98.138.89.175] by tm5.bullet.mail.ne1.yahoo.com with NNFMP; 22 Sep 2011 17:45:29 -0000 Received: from [127.0.0.1] by omp1031.mail.ne1.yahoo.com with NNFMP; 22 Sep 2011 17:45:29 -0000 X-Yahoo-Newman-Property: ymail-3 X-Yahoo-Newman-Id: 467129.24006.bm@omp1031.mail.ne1.yahoo.com Received: (qmail 92607 invoked by uid 60001); 22 Sep 2011 17:45:29 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024; t=1316713529; bh=oY9hEhkkepN/4fx8juGq2ueuMToy0m2AjjDJrpoaN10=; h=X-YMail-OSG:Received:X-Mailer:Message-ID:Date:From:Subject:To:MIME-Version:Content-Type; b=xf4fPZFEQrajBy0qPujDYFhN1wUgfl+NdvCXSxsXY1++7lBXlHSJ9F6a1lLS2PrdgnbmGkRFcWvXKj+47e3GmmeqSmKifAN+ie5WssCNwTS8EPndxa0XAwzEMNqPk8nYWmfJobm9k1/DkAfKNVYev+pF5Gmat2xsC3ZJ8RdJR48= DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws;
s=s1024; d=yahoo.com; h=X-YMail-OSG:Received:X-Mailer:Message-ID:Date:From:Subject:To:MIME-Version:Content-Type; b=a1szF/YafqnYr3B7IV5t+x57ifXtdjigVmhKkTAwU3zYP1fYX0RM3aZxGnF0c+VgYWg5v+6c6eWxrVq7jVLadN+KlQz64Oh+QcLTq7A9hQshx/2PQG/fk1/eVU5ceyvL7BUpFZr1Ug2aiLLU6j8njpy8zsmacEP9jIVN+l51RKA=; X-YMail-OSG: nEhUEa4VM1ntO6Riz55iSjHhFTlztrJf_jPNTVyEhxFfI1A zaARE2OmsHfEORo3nun27g.8v3D_zUYgv2JS13MX.OoOs4awcPASIT0mhjn9 mqT5ShE1sm1gyTw1HK3HLEmyA3D_xmGkfuy5c8wl4yajKcPAoRUCg4QUOz2h 4BG4QWG0R5coCsFUX1ZW5cLGL9tZshy.UXe3lT4hDZSWMljaxkLrNt20TCUi 1_.RVrpgMPIKN.oURuZeJ.CRaH20X4ZI6duMLnH7ABbJ58m5Ha7M9XlgO7o. v9wujDitw64WAwW8oCapkqf.3DopxOi3pLufiNhPRWWfT4zBgbNMHuawkSTe 8osCyYXBNj_Ek1nM1fml2Cu2vOwcnO2KY1lFplwc- Received: from [88.203.185.2] by web121201.mail.ne1.yahoo.com via HTTP; Thu, 22 Sep 2011 10:45:29 PDT X-Mailer: YahooMailClassic/14.0.5 YahooMailWebService/0.8.114.317681 Message-ID: <1316713529.81545.YahooMailClassic@web121201.mail.ne1.yahoo.com> Date: Thu, 22 Sep 2011 10:45:29 -0700 (PDT) From: Jason Usher To: freebsd-fs@freebsd.org MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Subject: redux: 48 or 96 sata3 paths ... specific ZFS hardware proposal X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 22 Sep 2011 17:45:30 -0000 Thanks to all for the suggestions RE: my ill informed attempt to create a 1:1 drive/path for 48 or 96 sata3 drives. Not only is it extremely unlikely that it would ever be useful, it appears not to exist as a hardware option anyway. So with that in mind, I am falling back on what I see over and over in my searches of this list and the OpenSolaris lists: Supermicro X8DTH-6F + LSI 9211-8i The LSI 9211-8i seems a no-brainer - completely non-raid, and fulfills the ZFS requirement of giving the OS complete, total control over the disk, and it is 6Gb to match the speed of SATA3. 
But the X8DTH-6F motherboard seems too good to be true ... it has 7 8x pcie slots, and further, has two sff8087 connectors onboard with the EXACT SAME LSI chipset as the 9211-8i cards above - so you can get an extra 8 drives on this motherboard using the same driver. Takes xeon5500/5600 (6core, potentially) at 6.4 QPI ... so, very modern there. About the only possible downside I can see to this board is that it only takes 192GB of ram, but that is a LOT of ram ... Does anyone have any comments ? Is everyone using this motherboard (as list postings seem to indicate) ? I am perfectly willing to pay more for something newer/faster/more expandable, but I just don't see a 48 drive capable motherboard for ZFS that outclasses this one ... Thanks. From owner-freebsd-fs@FreeBSD.ORG Thu Sep 22 18:02:23 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 0E5E6106566C; Thu, 22 Sep 2011 18:02:23 +0000 (UTC) (envelope-from artemb@gmail.com) Received: from mail-yx0-f182.google.com (mail-yx0-f182.google.com [209.85.213.182]) by mx1.freebsd.org (Postfix) with ESMTP id B1A888FC0A; Thu, 22 Sep 2011 18:02:22 +0000 (UTC) Received: by mail-yx0-f182.google.com with SMTP id 36so2729692yxk.13 for ; Thu, 22 Sep 2011 11:02:22 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:sender:date:x-google-sender-auth:message-id:subject :from:to:cc:content-type; bh=jyqo6Nv6pas5N3AwsRTepjO1FpnET96gf7Il/y2hzP8=; b=jTLYCDEUv/ZJGT1DjqdIxelVyUo86o3utMy2okYYvxcGtz3aXwbIdgj60Zu8YfOUxe beQin8kdMyq70DQHBkJWIY8VYz1WBrNltCYl7aS6e/5BY9wFe++fzBUko6liXW8XhxCx OIocoeXYiIaALYEiJN+IBqvHEl3S3wgl82nFo= MIME-Version: 1.0 Received: by 10.236.191.71 with SMTP id f47mr15212581yhn.125.1316714542515; Thu, 22 Sep 2011 11:02:22 -0700 (PDT) Sender: artemb@gmail.com Received: by 10.236.102.147 with HTTP; Thu, 22 Sep 2011 11:02:22 -0700 (PDT) Date: Thu, 22 Sep 
2011 11:02:22 -0700 X-Google-Sender-Auth: JKLFQQevA9ZeIqEthRaE3dpyeY8 Message-ID: From: Artem Belevich To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=ISO-8859-1 Cc: Andriy Gapon Subject: bootloader block cache improvement X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 22 Sep 2011 18:02:23 -0000 Hi,

I have a ZFS-only box that boots off an 8-drive raidz2 array. I've
noticed that on this machine it takes noticeably longer to load the
kernel and modules than on a similar box that boots off a 1-drive ZFS
filesystem.

It turns out that the block cache in the loader caches data from only
one disk and invalidates the cache as soon as we read from another
drive. Since ZFS reads from multiple drives when the filesystem is on a
raidz pool, the cache was effectively useless in that scenario. I got
literally 0 hits reported by the bcachestat command.

The patch below modifies the bootloader block cache so that it can cache
data from multiple drives simultaneously. That helped a bit.

Another issue is the size of the cache. Currently it's set at 32
512-byte blocks. While that may be enough to cache frequently used data
on UFS, it seems to be way too small for ZFS. On my machine the sweet
spot seems to be just below 256. Setting it to 256 resulted in a sharp
increase of cache misses, probably because a single 128K record (the
default on ZFS) would evict all other data from the cache. In the end I
settled on 192 blocks, which resulted in about a 90% hit rate.

Here's the data from the 'bcachestat' command in the loader on a 3-disk
raidz:

| cache size  | ops  | bypass | hits  | misses |
|-------------+------+--------+-------+--------|
| 32(w/patch) | 6541 | 287    | 6988  | ~15K   |
| 128         | 6541 | 281    | ~15K  | 7560   |
| 192         | 6541 | 281    | 12883 | 1394   |
| 256         | 6541 | 15     | ~19K  | ~37K   |

Attached is the patch with the changes.
If the changes are acceptable, I'd like to eventually MFC them to 8-stable, too. --Artem ================================================================ --- sys/boot/common/bcache.c | 50 ++++++++++++++++++++++--------------------- sys/boot/i386/loader/main.c | 5 +++- 2 files changed, 29 insertions(+), 26 deletions(-) diff --git a/sys/boot/common/bcache.c b/sys/boot/common/bcache.c index c88adca..85e47a6 100644 --- a/sys/boot/common/bcache.c +++ b/sys/boot/common/bcache.c @@ -55,6 +55,7 @@ struct bcachectl daddr_t bc_blkno; time_t bc_stamp; int bc_count; + int bc_unit; }; static struct bcachectl *bcache_ctl; @@ -66,9 +67,9 @@ static u_int bcache_hits, bcache_misses, bcache_ops, bcache_bypasses; static u_int bcache_flushes; static u_int bcache_bcount; -static void bcache_invalidate(daddr_t blkno); -static void bcache_insert(caddr_t buf, daddr_t blkno); -static int bcache_lookup(caddr_t buf, daddr_t blkno); +static void bcache_invalidate(int unit, daddr_t blkno); +static void bcache_insert(caddr_t buf, int unit, daddr_t blkno); +static int bcache_lookup(caddr_t buf, int unit, daddr_t blkno); /* * Initialise the cache for (nblks) of (bsize). 
@@ -117,6 +118,7 @@ bcache_flush(void) for (i = 0; i < bcache_nblks; i++) { bcache_ctl[i].bc_count = -1; bcache_ctl[i].bc_blkno = -1; + bcache_ctl[i].bc_unit = -1; } } @@ -136,7 +138,7 @@ write_strategy(void *devdata, int unit, int rw, daddr_t blk, size_t size, /* Invalidate the blocks being written */ for (i = 0; i < nblk; i++) { - bcache_invalidate(blk + i); + bcache_invalidate(unit, blk + i); } /* Write the blocks */ @@ -145,7 +147,7 @@ write_strategy(void *devdata, int unit, int rw, daddr_t blk, size_t size, /* Populate the block cache with the new data */ if (err == 0) { for (i = 0; i < nblk; i++) { - bcache_insert(buf + (i * bcache_blksize),blk + i); + bcache_insert(buf + (i * bcache_blksize), unit, blk + i); } } @@ -171,7 +173,7 @@ read_strategy(void *devdata, int unit, int rw, daddr_t blk, size_t size, /* Satisfy any cache hits up front */ for (i = 0; i < nblk; i++) { - if (bcache_lookup(buf + (bcache_blksize * i), blk + i)) { + if (bcache_lookup(buf + (bcache_blksize * i), unit, blk + i)) { bit_set(bcache_miss, i); /* cache miss */ bcache_misses++; } else { @@ -200,7 +202,7 @@ read_strategy(void *devdata, int unit, int rw, daddr_t blk, size_t size, if (result != 0) goto done; for (j = 0; j < p_size; j++) - bcache_insert(p_buf + (j * bcache_blksize), p_blk + j); + bcache_insert(p_buf + (j * bcache_blksize), unit, p_blk + j); p_blk = -1; } } @@ -210,7 +212,7 @@ read_strategy(void *devdata, int unit, int rw, daddr_t blk, size_t size, if (result != 0) goto done; for (j = 0; j < p_size; j++) - bcache_insert(p_buf + (j * bcache_blksize), p_blk + j); + bcache_insert(p_buf + (j * bcache_blksize), unit, p_blk + j); } done: @@ -227,19 +229,13 @@ int bcache_strategy(void *devdata, int unit, int rw, daddr_t blk, size_t size, char *buf, size_t *rsize) { - static int bcache_unit = -1; struct bcache_devdata *dd = (struct bcache_devdata *)devdata; bcache_ops++; - if(bcache_unit != unit) { - bcache_flush(); - bcache_unit = unit; - } - /* bypass large requests, or when the 
cache is inactive */ if ((bcache_data == NULL) || ((size * 2 / bcache_blksize) > bcache_nblks)) { - DEBUG("bypass %d from %d", size / bcache_blksize, blk); + DEBUG("bypass %d from %d", size / bcache_blksize, (int)blk); bcache_bypasses++; return(dd->dv_strategy(dd->dv_devdata, rw, blk, size, buf, rsize)); } @@ -260,7 +256,7 @@ bcache_strategy(void *devdata, int unit, int rw, daddr_t blk, size_t size, * XXX the LRU algorithm will fail after 2^31 blocks have been transferred. */ static void -bcache_insert(caddr_t buf, daddr_t blkno) +bcache_insert(caddr_t buf, int unit, daddr_t blkno) { time_t now; int cand, ocount; @@ -272,7 +268,7 @@ bcache_insert(caddr_t buf, daddr_t blkno) /* find the oldest block */ for (i = 1; i < bcache_nblks; i++) { - if (bcache_ctl[i].bc_blkno == blkno) { + if (bcache_ctl[i].bc_unit == unit && bcache_ctl[i].bc_blkno == blkno) { /* reuse old entry */ cand = i; break; @@ -283,8 +279,9 @@ bcache_insert(caddr_t buf, daddr_t blkno) } } - DEBUG("insert blk %d -> %d @ %d # %d", blkno, cand, now, bcache_bcount); + DEBUG("insert blk %d:%d -> %d @ %d # %d", unit, blkno, cand, now, bcache_bcount); bcopy(buf, bcache_data + (bcache_blksize * cand), bcache_blksize); + bcache_ctl[cand].bc_unit = unit; bcache_ctl[cand].bc_blkno = blkno; bcache_ctl[cand].bc_stamp = now; bcache_ctl[cand].bc_count = bcache_bcount++; @@ -296,7 +293,7 @@ bcache_insert(caddr_t buf, daddr_t blkno) * if successful and return zero, or return nonzero on failure. */ static int -bcache_lookup(caddr_t buf, daddr_t blkno) +bcache_lookup(caddr_t buf, int unit, daddr_t blkno) { time_t now; u_int i; @@ -305,9 +302,10 @@ bcache_lookup(caddr_t buf, daddr_t blkno) for (i = 0; i < bcache_nblks; i++) /* cache hit? 
*/ - if ((bcache_ctl[i].bc_blkno == blkno) && ((bcache_ctl[i].bc_stamp + BCACHE_TIMEOUT) >= now)) { + if ((bcache_ctl[i].bc_unit == unit) && (bcache_ctl[i].bc_blkno == blkno) + && ((bcache_ctl[i].bc_stamp + BCACHE_TIMEOUT) >= now)) { bcopy(bcache_data + (bcache_blksize * i), buf, bcache_blksize); - DEBUG("hit blk %d <- %d (now %d then %d)", blkno, i, now, bcache_ctl[i].bc_stamp); + DEBUG("hit blk %d:%d <- %d (now %d then %d)", unit, blkno, i, now, bcache_ctl[i].bc_stamp); return(0); } return(ENOENT); @@ -317,15 +315,16 @@ bcache_lookup(caddr_t buf, daddr_t blkno) * Invalidate a block from the cache. */ static void -bcache_invalidate(daddr_t blkno) +bcache_invalidate(int unit, daddr_t blkno) { u_int i; for (i = 0; i < bcache_nblks; i++) { - if (bcache_ctl[i].bc_blkno == blkno) { + if ((bcache_ctl[i].bc_unit == unit) && (bcache_ctl[i].bc_blkno == blkno)) { bcache_ctl[i].bc_count = -1; bcache_ctl[i].bc_blkno = -1; - DEBUG("invalidate blk %d", blkno); + bcache_ctl[i].bc_unit = -1; + DEBUG("invalidate blk %d:%d", unit, blkno); break; } } @@ -339,7 +338,8 @@ command_bcache(int argc, char *argv[]) u_int i; for (i = 0; i < bcache_nblks; i++) { - printf("%08jx %04x %04x|", (uintmax_t)bcache_ctl[i].bc_blkno, (unsigned int)bcache_ctl[i].bc_stamp & 0xffff, bcache_ctl[i].bc_count & 0xffff); + printf("%02x:%08jx %04x %04x|", bcache_ctl[i].bc_unit, (uintmax_t)bcache_ctl[i].bc_blkno, + (unsigned int)bcache_ctl[i].bc_stamp & 0xffff, bcache_ctl[i].bc_count & 0xffff); if (((i + 1) % 4) == 0) printf("\n"); } diff --git a/sys/boot/i386/loader/main.c b/sys/boot/i386/loader/main.c index 75d5dbc..e1a01fa 100644 --- a/sys/boot/i386/loader/main.c +++ b/sys/boot/i386/loader/main.c @@ -86,6 +86,7 @@ int main(void) { int i; + int bcache_size; /* Pick up arguments */ kargs = (void *)__args; @@ -109,11 +110,13 @@ main(void) heap_bottom = PTOV(high_heap_base); if (high_heap_base < memtop_copyin) memtop_copyin = high_heap_base; + bcache_size = 192; } else #endif { heap_top = (void 
*)PTOV(bios_basemem); heap_bottom = (void *)end; + bcache_size = 32; } setheap(heap_bottom, heap_top); @@ -140,7 +143,7 @@ main(void) /* * Initialise the block cache */ - bcache_init(32, 512); /* 16k cache XXX tune this */ + bcache_init(bcache_size, 512); /* * Special handling for PXE and CD booting. From owner-freebsd-fs@FreeBSD.ORG Thu Sep 22 18:55:08 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 4D20B106566B for ; Thu, 22 Sep 2011 18:55:08 +0000 (UTC) (envelope-from cracauer@koef.zs64.net) Received: from koef.zs64.net (koef.zs64.net [IPv6:2001:470:1f0b:105e::1e6]) by mx1.freebsd.org (Postfix) with ESMTP id 0AE818FC17 for ; Thu, 22 Sep 2011 18:55:07 +0000 (UTC) Received: from koef.zs64.net (koef.zs64.net [IPv6:2001:470:1f0b:105e::1e6]) by koef.zs64.net (8.14.5/8.14.4) with ESMTP id p8MIt6wZ009045 for ; Thu, 22 Sep 2011 20:55:06 +0200 (CEST) (envelope-from cracauer@koef.zs64.net) Received: (from cracauer@localhost) by koef.zs64.net (8.14.5/8.14.4/Submit) id p8MIt6lG009044 for freebsd-fs@freebsd.org; Thu, 22 Sep 2011 14:55:06 -0400 (EDT) (envelope-from cracauer) Date: Thu, 22 Sep 2011 14:55:06 -0400 From: Martin Cracauer To: freebsd-fs@freebsd.org Message-ID: <20110922185506.GA5281@cons.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.4.2.3i Subject: Another Zpool management thing X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 22 Sep 2011 18:55:08 -0000 I dd-copied a whole drive that had several slices, one of which holds a single-disk ZFS in it (nothing mounted at the time of course). At this time both disks are in the machine. Naturally, `zfs list` shows the name of the pool once. `zfs import` reports nothing. 
`zpool status ` shows that the currently recognized physical location of this pool is the new disk. This isn't what I want; I would like to mount the original from the old location. Is there any way to reference a pool by its /dev/ entry so that I can rename the new copy? Destroying it would be fine, too, as long as it doesn't affect the original one. The zfs mailing list (Sun's) mentioned PR 6280547 for a `zpool rename`, but this doesn't seem to have been implemented. This is FreeBSD-9 code as of yesterday. I know I can easily work around this by using the drive on a different machine while this drive is alone, but out of curiosity I'd like to know. Martin -- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Martin Cracauer http://www.cons.org/cracauer/ From owner-freebsd-fs@FreeBSD.ORG Thu Sep 22 19:33:20 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 272E3106566B for ; Thu, 22 Sep 2011 19:33:20 +0000 (UTC) (envelope-from daniel@digsys.bg) Received: from smtp-sofia.digsys.bg (smtp-sofia.digsys.bg [193.68.3.230]) by mx1.freebsd.org (Postfix) with ESMTP id A545F8FC13 for ; Thu, 22 Sep 2011 19:33:19 +0000 (UTC) Received: from digsys236-136.pip.digsys.bg (digsys236-136.pip.digsys.bg [193.68.136.236]) (authenticated bits=0) by smtp-sofia.digsys.bg (8.14.4/8.14.4) with ESMTP id p8MJX73P052671 (version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=NO); Thu, 22 Sep 2011 22:33:13 +0300 (EEST) (envelope-from daniel@digsys.bg) Mime-Version: 1.0 (Apple Message framework v1244.3) Content-Type: text/plain; charset=us-ascii From: Daniel Kalchev In-Reply-To: <1316713529.81545.YahooMailClassic@web121201.mail.ne1.yahoo.com> Date: Thu, 22 Sep 2011 22:33:07 +0300 Content-Transfer-Encoding: quoted-printable Message-Id: References: <1316713529.81545.YahooMailClassic@web121201.mail.ne1.yahoo.com> To: Jason Usher X-Mailer: Apple Mail (2.1244.3) Cc:
freebsd-fs@freebsd.org Subject: Re: redux: 48 or 96 sata3 paths ... specific ZFS hardware proposal X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 22 Sep 2011 19:33:20 -0000 On Sep 22, 2011, at 20:45, Jason Usher wrote:

> But the X8DTH-6F motherboard seems too good to be true ... it has 7 8x
> pcie slots, and further, has two sff8087 connectors onboard with the
> EXACT SAME LSI chipset as the 9211-8i cards above - so you can get an
> extra 8 drives on this motherboard using the same driver. Takes
> xeon5500/5600 (6core, potentially) at 6.4 QPI ... so, very modern there.

I have a system working with this motherboard and a Supermicro E16
expander. There was some strange behavior with the expander when using
the version 9 firmware, but that was just as it was published and no
Supermicro version existed; it might be OK now: the ses device showed up
4 times and sometimes drives were replicated as well. I guess that might
be related to the mps/expander combination (at that time). It works fine
with the version 7 firmware.

How many LSI cards do you intend to use?
You may wish to look at the architecture (CPU/chipset/PCIe interconnects) in order to make a more informed decision.

Daniel

From owner-freebsd-fs@FreeBSD.ORG Thu Sep 22 20:27:28 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 54A0B106564A for ; Thu, 22 Sep 2011 20:27:28 +0000 (UTC) (envelope-from c.kworr@gmail.com) Received: from mail-fx0-f54.google.com (mail-fx0-f54.google.com [209.85.161.54]) by mx1.freebsd.org (Postfix) with ESMTP id DC79C8FC0A for ; Thu, 22 Sep 2011 20:27:27 +0000 (UTC) Received: by fxg9 with SMTP id 9so4038534fxg.13 for ; Thu, 22 Sep 2011 13:27:27 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=message-id:date:from:user-agent:mime-version:to:subject :content-type:content-transfer-encoding; bh=1h8RXd+93X0vGHl2oNvFxhyCZH5ltSLQOOLscFk1ax0=; b=nNL9Y+G0blFGxBAOU26o6qDuJmxY0ssGzO2Oan/+IYhFTB5oWajCwI9XizUbR40WU8 /yuxU3Ffo5qvMzzwMUZ+53n5cBTLxu+XyVSVNkLuU9wukOVybL/6WXtWXveURukZjcAO s0qFOshKFLsKXhaF4L4nW5m7C5/58MEZJDZUg= Received: by 10.223.45.209 with SMTP id g17mr298626faf.96.1316721748955; Thu, 22 Sep 2011 13:02:28 -0700 (PDT) Received: from limbo.lan ([195.225.157.86]) by mx.google.com with ESMTPS id t13sm19346318fae.0.2011.09.22.13.02.27 (version=SSLv3 cipher=OTHER); Thu, 22 Sep 2011 13:02:28 -0700 (PDT) Message-ID: <4E7B9452.2020506@gmail.com> Date: Thu, 22 Sep 2011 23:02:26 +0300 From: Volodymyr Kostyrko User-Agent: Mozilla/5.0 (X11; FreeBSD i386; rv:6.0.2) Gecko/20110907 Thunderbird/6.0.2 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 7bit Subject: zfs crash on high load X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 22 Sep 2011 20:27:28 -0000 Hi
all. Recently I switched to 9.0 and then moved to clang. When I moved to 9.0, btpd failed to start, hanging in some state without printing the name of the state. When I switched to clang the machine drops to the debugger; however, when I try to dump, it drops core again a second time. I've photographed the screen with all the debugger output and put it at http://limbo.xim.bz/trap/. It looks like something bad happens in zap_leaf_lookup. uname -a: FreeBSD limbo.lan 9.0-BETA2 FreeBSD 9.0-BETA2 #1: Tue Sep 20 23:00:24 EEST 2011 arcade@limbo.lan:/usr/obj/usr/src/sys/MINIMAL i386 -- Sphinx of black quartz judge my vow. From owner-freebsd-fs@FreeBSD.ORG Thu Sep 22 21:13:45 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id CB58C106566C for ; Thu, 22 Sep 2011 21:13:45 +0000 (UTC) (envelope-from jusher71@yahoo.com) Received: from nm22.bullet.mail.ne1.yahoo.com (nm22.bullet.mail.ne1.yahoo.com [98.138.90.85]) by mx1.freebsd.org (Postfix) with SMTP id 68A3F8FC0A for ; Thu, 22 Sep 2011 21:13:45 +0000 (UTC) Received: from [98.138.90.56] by nm22.bullet.mail.ne1.yahoo.com with NNFMP; 22 Sep 2011 21:13:44 -0000 Received: from [98.138.89.240] by tm9.bullet.mail.ne1.yahoo.com with NNFMP; 22 Sep 2011 21:13:44 -0000 Received: from [127.0.0.1] by omp1013.mail.ne1.yahoo.com with NNFMP; 22 Sep 2011 21:13:44 -0000 X-Yahoo-Newman-Property: ymail-3 X-Yahoo-Newman-Id: 910626.88174.bm@omp1013.mail.ne1.yahoo.com Received: (qmail 5596 invoked by uid 60001); 22 Sep 2011 21:13:44 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024; t=1316726024; bh=oSoxDnxwfRd4ScqK2+2Vx3IqgtKJF5qE7E5I365ZEqA=; h=X-YMail-OSG:Received:X-Mailer:Message-ID:Date:From:Subject:To:Cc:In-Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding;
b=zPi97DjKqGQXBcZuOkutu2HNZ/W8Vcnu9HPi0qjrz+22+sKE1WtcE3ilckQ161oKefTQ4dfLf6dSKYbM8eeeiUroD4jZckAlj/3PYP2ctTc98OgAEpQhSbflMvVnmT6UURCEnZGHbbBNPVg2Ahi5eNVZF7Ky61+fH+w/HKrE9Bo= DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com; h=X-YMail-OSG:Received:X-Mailer:Message-ID:Date:From:Subject:To:Cc:In-Reply-To:MIME-Version:Content-Type; b=N3OhrP3rugYAz/v8yHl2wF9akRQuF/ZQctUK5UXKUGxwEJH0XEmGjCITa7KMef3MfqHUk5UXGtR+gz/xJaftcjCvk6roi4iuLCn7gHAvXq5PFwJ4tkFPlmIZN0FbSH+8dWjte1EOiaxINM4LzIY5UKgrJ9XIkUbJSfXR5DRzE/E=; X-YMail-OSG: tY5tCt8VM1mWHMnp6ZGRahE8WFj2.P41GWmfzjpQHZokS4N .RdL0lKiggeL2icdqt.A0e.5fh9Aka8_n4KiDZsfb2xsTsQNeM2j3l.WdzGy FJ0q8nINdqob0kjHQoqs4f_fEtmYE._kRhTn0EnioymVHWmjGXzPrrA4W_D8 SKN7.f5RfwVD2wVnSf2Vw7tlnE3TRs.Xi_IlSV3NgmvIVjTiz_SYplrDUlrQ Nc8mxOGBFK9qgKVbR0.qTJcCASMpzEgK4MyQOE.hm0Vu.sEh8QqMw3Toqx8x E.ZznUY0hmrjxWYaL4v3WAZvu2TbLirxfPgyvoBwnG.SBc2ndYXRGj6wyNkW NnazTf68chXFmhYmLOZANeODNUSjX0sfWjK8Jl9rTSc5jbTC.MguirAO3bzD 0WQ6h Received: from [173.254.192.38] by web121210.mail.ne1.yahoo.com via HTTP; Thu, 22 Sep 2011 14:13:44 PDT X-Mailer: YahooMailClassic/14.0.5 YahooMailWebService/0.8.114.317681 Message-ID: <1316726024.96139.YahooMailClassic@web121210.mail.ne1.yahoo.com> Date: Thu, 22 Sep 2011 14:13:44 -0700 (PDT) From: Jason Usher To: Daniel Kalchev In-Reply-To: MIME-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 Content-Transfer-Encoding: quoted-printable Cc: freebsd-fs@freebsd.org Subject: Re: redux: 48 or 96 sata3 paths ... specific ZFS hardware proposal X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 22 Sep 2011 21:13:45 -0000 Daniel, --- On Thu, 9/22/11, Daniel Kalchev wrote: > > But the X8DTH-6F motherboard seems too good to be true > ... it has 7 8x pcie slots, and further, has two sff8087 > connectors onboard with the EXACT SAME LSI chipset as the > 9211-8i cards above - so you can get an extra 8 drives on > this motherboard using the same driver. Takes > xeon5500/5600 (6core, potentially) at 6.4 QPI ... so, very > modern there. (snip) > How many LSI cards do you intend to use? You may wish to > look at the architecture (CPU/chipset/PCIe interconnects) in > order to make more informed decision. I was thinking of using 4 of the cards - that gets me 40 total SATA3 ports (32 on the 4 cards, plus 8 more on the board). (I am assuming that any 6Gb SAS port is also a perfectly functioning SATA3 port, also at 6Gb) The chipset is "LSI 2008" and is identical on the cards and on this motherboard, which appeals to me greatly. PCIe interconnect is "X8 lane, PCI Express 2.0 compliant" and "X8 PCIe 4000 MB/s". I see no mention of its cpu. What are you suggesting here with that information ? What am I looking for, or trying to avoid ?
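For what it's worth, the bandwidth question behind those two spec strings can be sanity-checked with back-of-the-envelope arithmetic. The per-lane and per-port figures below are the usual rough numbers for PCIe 2.0 and SATA 6Gb/s, not values taken from this thread:

```python
# Back-of-the-envelope check: can one x8 PCIe 2.0 slot feed eight SATA3 ports?
PCIE2_MB_PER_LANE = 500   # ~500 MB/s usable per PCIe 2.0 lane, per direction
SATA3_MB_PER_PORT = 600   # ~600 MB/s per SATA 6Gb/s port

slot_bw = 8 * PCIE2_MB_PER_LANE    # matches the quoted "X8 PCIe 4000 MB/s"
hba_peak = 8 * SATA3_MB_PER_PORT   # if all eight drives burst simultaneously

print(slot_bw, hba_peak)           # the slot is ~1.2x oversubscribed at peak
```

In practice spinning drives come nowhere near 600 MB/s sustained, so the x8 link only matters as a ceiling for SSD-heavy or burst workloads.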
From owner-freebsd-fs@FreeBSD.ORG Thu Sep 22 22:03:34 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 4B05E106564A for ; Thu, 22 Sep 2011 22:03:34 +0000 (UTC) (envelope-from rincebrain@gmail.com) Received: from mail-qy0-f182.google.com (mail-qy0-f182.google.com [209.85.216.182]) by mx1.freebsd.org (Postfix) with ESMTP id 08FD48FC14 for ; Thu, 22 Sep 2011 22:03:33 +0000 (UTC) Received: by qyk4 with SMTP id 4so3470276qyk.13 for ; Thu, 22 Sep 2011 15:03:33 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type:content-transfer-encoding; bh=P/QmEwVgQb2VFBNeI0p+wqMxVPGfJ4ItyjGNK62ygGg=; b=V0NVqbqtIbSaZ5XAHWfrEy/8FEkMI/tPKKvibz7uLZy8w/F20rb0pzhoOCIIWEx0eJ Ftl0fOtxkKUbgzHZ0dQqtC5MqPmquLSON6BPbKsUOMencUf7rtVtmPTJV4F40VI9bGS3 lHaVzCKn37xCpUcCl+RQZ7d6zUgj0/R1xZ+3Q= MIME-Version: 1.0 Received: by 10.229.223.130 with SMTP id ik2mr1712144qcb.5.1316729013363; Thu, 22 Sep 2011 15:03:33 -0700 (PDT) Received: by 10.229.168.132 with HTTP; Thu, 22 Sep 2011 15:03:33 -0700 (PDT) In-Reply-To: <1316726024.96139.YahooMailClassic@web121210.mail.ne1.yahoo.com> References: <1316726024.96139.YahooMailClassic@web121210.mail.ne1.yahoo.com> Date: Thu, 22 Sep 2011 18:03:33 -0400 Message-ID: From: Rich To: Jason Usher Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable Cc: freebsd-fs@freebsd.org Subject: Re: redux: 48 or 96 sata3 paths ... specific ZFS hardware proposal X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 22 Sep 2011 22:03:34 -0000 A couple of notes. - The 9211-8i has IR and IT firmware revisions; you probably want the latter. 
Flashing from one to the other, last I saw, requires booting a DOS environment and destructively reflashing. - I am reasonably certain, but cannot swear to it (I don't have the rev of the X8DTH motherboard with the LSI SAS2008 chip onboard) that the motherboard has the IR firmware out of the box. - If memory serves, the way the PCIe bandwidth works on that board is that slots 1-3 go to CPU1's interface and slots 4-7 go to CPU2. - Rich On Thu, Sep 22, 2011 at 5:13 PM, Jason Usher wrote: > > Daniel, > > --- On Thu, 9/22/11, Daniel Kalchev wrote: > >> > But the X8DTH-6F motherboard seems too good to be true >> ... it has 7 8x pcie slots, and further, has two sff8087 >> connectors onboard with the EXACT SAME LSI chipset as the >> 9211-8i cards above - so you can get an extra 8 drives on >> this motherboard using the same driver. Takes >> xeon5500/5600 (6core, potentially) at 6.4 QPI ... so, very >> modern there. > > > (snip) > > >> How many LSI cards do you intend to use? You may wish to >> look at the architecture (CPU/chipset/PCIe interconnects) in >> order to make more informed decision. > > > I was thinking of using 4 of the cards - that gets me 40 total SATA3 ports (32 on the 4 cards, plus 8 more on the board). > > (I am assuming that any 6Gb SAS port is also a perfectly functioning SATA3 port, also at 6Gb) > > The chipset is "LSI 2008" and is identical on the cards and on this motherboard, which appeals to me greatly. PCIe interconnect is "X8 lane, PCI Express 2.0 compliant" and "X8 PCIe 4000 MB/s". I see no mention of its cpu. > > What are you suggesting here with that information ? What am I looking for, or trying to avoid ?
> _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@FreeBSD.ORG Thu Sep 22 22:11:16 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 8D975106564A for ; Thu, 22 Sep 2011 22:11:16 +0000 (UTC) (envelope-from jusher71@yahoo.com) Received: from nm30-vm3.bullet.mail.ne1.yahoo.com (nm30-vm3.bullet.mail.ne1.yahoo.com [98.138.91.160]) by mx1.freebsd.org (Postfix) with SMTP id 3BA8E8FC0C for ; Thu, 22 Sep 2011 22:11:15 +0000 (UTC) Received: from [98.138.90.50] by nm30.bullet.mail.ne1.yahoo.com with NNFMP; 22 Sep 2011 22:11:15 -0000 Received: from [98.138.88.237] by tm3.bullet.mail.ne1.yahoo.com with NNFMP; 22 Sep 2011 22:11:15 -0000 Received: from [127.0.0.1] by omp1037.mail.ne1.yahoo.com with NNFMP; 22 Sep 2011 22:11:15 -0000 X-Yahoo-Newman-Property: ymail-3 X-Yahoo-Newman-Id: 647470.68545.bm@omp1037.mail.ne1.yahoo.com Received: (qmail 57206 invoked by uid 60001); 22 Sep 2011 22:11:15 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024; t=1316729475; bh=Gpz9YxA45u6sHHJxVlWaK0GLzBuZZLQADMa+NTFFpjs=; h=X-YMail-OSG:Received:X-Mailer:Message-ID:Date:From:Subject:To:Cc:In-Reply-To:MIME-Version:Content-Type; b=6rOMRO2JbvKmdE2ibMvPn6M44Nm/zrj5zkfSGjmV6OvWgvMIP4p8exOnDMuquupd2BlCLbHxBUM5mvdNJ1uzXUQ4rjgXgXEII+RYVFm5IViEFEZDPBPQnmr+90tfT1KYmHEuxOwUp+67UyO46vh3JemciewgmwIiYMuf2nJQ420= DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com; h=X-YMail-OSG:Received:X-Mailer:Message-ID:Date:From:Subject:To:Cc:In-Reply-To:MIME-Version:Content-Type; b=ytitZ6KiD5yMFbe0mdJzet39e4mpexLsQ5AGV2hbIb1mc2sqmygnHdKxICQvelUHjARLcQ0XZizb3veF+8ZTusV1nCdXaXbxnl10xNSZCLPwfFGdJODfmPOVRfFf76h3opafXSQfkVEVSVpDsqcRcLsphNnE+L02tg04hfz+SDg=; X-YMail-OSG: 
O0xqPc4VM1l3ADz6uNlAXQblJHXYV1dhBIUOsUYpjVGNvmF epWz1hrjwZMmwRleTyif.hjcupMurp93IA1RtJS9..LRnTaK9f_tcqIWFbue YWthZ2gLn2SmSGzeah.oE9bhgRMtDKlMlsNL4b1JKwX5RwGRjy3A7xwPeQ9U uFRL1szt.ZC1DorypB91m.Uor_luVb9TZ50a5aoRK5Mse5S7Ca4L.3UknARG gkw7LJBtfAIBbfPjDZ9aRd0hr0w5OQpYrj_O7yrntLhOW0PwFlaGqlwO30Nk o.1L7rDHVKnnWXk.rCg.3n4B06DhsmDK4596bm3M_Fjpa7pNO4xJVVjhkD1P wmlIGe1TMZqeNWSJhOcDAZe_tHjUFELjcTxp5HgyXozehchrWn81P9Ycjceu Jpw-- Received: from [89.45.202.93] by web121211.mail.ne1.yahoo.com via HTTP; Thu, 22 Sep 2011 15:11:15 PDT X-Mailer: YahooMailClassic/14.0.5 YahooMailWebService/0.8.114.317681 Message-ID: <1316729475.54812.YahooMailClassic@web121211.mail.ne1.yahoo.com> Date: Thu, 22 Sep 2011 15:11:15 -0700 (PDT) From: Jason Usher To: Rich In-Reply-To: MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Cc: freebsd-fs@freebsd.org Subject: Re: redux: 48 or 96 sata3 paths ... specific ZFS hardware proposal X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 22 Sep 2011 22:11:16 -0000 Hello Rich, --- On Thu, 9/22/11, Rich wrote: > - The 9211-8i has IR and IT firmware revisions; you > probably want the > latter. Flashing from one to the other, last I saw, You are saying I want the IR, yes ? > - If memory serves, the way the PCIe bandwidth works on > that board is > that slots 1-3 go to CPU1's interface and slots 4-7 go to > CPU2. So, if I add four LSI cards, I should probably put two of them on slots 1,2 and the other two on 4,5 ... yes ? Thanks very much for this help. 
From owner-freebsd-fs@FreeBSD.ORG Thu Sep 22 22:15:40 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id CAD64106566C for ; Thu, 22 Sep 2011 22:15:40 +0000 (UTC) (envelope-from jdc@koitsu.dyndns.org) Received: from qmta03.westchester.pa.mail.comcast.net (qmta03.westchester.pa.mail.comcast.net [76.96.62.32]) by mx1.freebsd.org (Postfix) with ESMTP id 8C4AE8FC14 for ; Thu, 22 Sep 2011 22:15:40 +0000 (UTC) Received: from omta18.westchester.pa.mail.comcast.net ([76.96.62.90]) by qmta03.westchester.pa.mail.comcast.net with comcast id br171h0041wpRvQ53yFgJN; Thu, 22 Sep 2011 22:15:40 +0000 Received: from koitsu.dyndns.org ([67.180.84.87]) by omta18.westchester.pa.mail.comcast.net with comcast id byFf1h00N1t3BNj3eyFfZB; Thu, 22 Sep 2011 22:15:40 +0000 Received: by icarus.home.lan (Postfix, from userid 1000) id AC016102C31; Thu, 22 Sep 2011 15:15:37 -0700 (PDT) Date: Thu, 22 Sep 2011 15:15:37 -0700 From: Jeremy Chadwick To: Jason Usher Message-ID: <20110922221537.GA71339@icarus.home.lan> References: <1316729475.54812.YahooMailClassic@web121211.mail.ne1.yahoo.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <1316729475.54812.YahooMailClassic@web121211.mail.ne1.yahoo.com> User-Agent: Mutt/1.5.21 (2010-09-15) Cc: freebsd-fs@freebsd.org, Rich Subject: Re: redux: 48 or 96 sata3 paths ... specific ZFS hardware proposal X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 22 Sep 2011 22:15:40 -0000 On Thu, Sep 22, 2011 at 03:11:15PM -0700, Jason Usher wrote: > > Hello Rich, > > --- On Thu, 9/22/11, Rich wrote: > > > - The 9211-8i has IR and IT firmware revisions; you > > probably want the > > latter. 
Flashing from one to the other, last I saw, > > > You are saying I want the IR, yes ? Former means "the first shown", latter means "the last shown". So no, that is not what he's saying. -- | Jeremy Chadwick jdc at parodius.com | | Parodius Networking http://www.parodius.com/ | | UNIX Systems Administrator Mountain View, CA, US | | Making life hard for others since 1977. PGP 4BD6C0CB | From owner-freebsd-fs@FreeBSD.ORG Fri Sep 23 08:54:13 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 7ACBF106564A for ; Fri, 23 Sep 2011 08:54:13 +0000 (UTC) (envelope-from peterjeremy@acm.org) Received: from mail13.syd.optusnet.com.au (mail13.syd.optusnet.com.au [211.29.132.194]) by mx1.freebsd.org (Postfix) with ESMTP id E49F18FC15 for ; Fri, 23 Sep 2011 08:54:12 +0000 (UTC) Received: from server.vk2pj.dyndns.org (c220-239-116-103.belrs4.nsw.optusnet.com.au [220.239.116.103]) by mail13.syd.optusnet.com.au (8.13.1/8.13.1) with ESMTP id p8N8sAD4014846 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Fri, 23 Sep 2011 18:54:11 +1000 X-Bogosity: Ham, spamicity=0.000000 Received: from server.vk2pj.dyndns.org (localhost.vk2pj.dyndns.org [127.0.0.1]) by server.vk2pj.dyndns.org (8.14.5/8.14.4) with ESMTP id p8N8s9jj016792; Fri, 23 Sep 2011 18:54:09 +1000 (EST) (envelope-from peter@server.vk2pj.dyndns.org) Received: (from peter@localhost) by server.vk2pj.dyndns.org (8.14.5/8.14.4/Submit) id p8N8s9Yj016791; Fri, 23 Sep 2011 18:54:09 +1000 (EST) (envelope-from peter) Date: Fri, 23 Sep 2011 18:54:08 +1000 From: Peter Jeremy To: Jason Usher Message-ID: <20110923085408.GA16726@server.vk2pj.dyndns.org> References: <1316713529.81545.YahooMailClassic@web121201.mail.ne1.yahoo.com> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="tKW2IUtsqtDRztdT" Content-Disposition: inline In-Reply-To: 
<1316713529.81545.YahooMailClassic@web121201.mail.ne1.yahoo.com> X-PGP-Key: http://members.optusnet.com.au/peterjeremy/pubkey.asc User-Agent: Mutt/1.5.21 (2010-09-15) Cc: freebsd-fs@freebsd.org Subject: Re: redux: 48 or 96 sata3 paths ... specific ZFS hardware proposal X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 23 Sep 2011 08:54:13 -0000 --tKW2IUtsqtDRztdT Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On 2011-Sep-22 10:45:29 -0700, Jason Usher wrote: >About the only possible downside I can see to this board is that it >only takes 192GB of ram, but that is a LOT of ram ... Not for a 60TB ZFS system with lots of activity (metadata in particular). Even if you don't think you need 192GB, I suggest you populate the board with the largest DIMMs you can afford and leave a bank of slots free so you can fit more RAM in the future.
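To put rough numbers on that point: a common community rule of thumb (an assumption here, not something stated in this thread) is on the order of 1 GB of RAM per TB of pool for metadata-heavy ZFS workloads, before counting ARC headroom for data caching:

```python
# Hypothetical sizing sketch against the board's 192 GB ceiling; the
# 1 GB-per-TB baseline is a folk rule of thumb, not a hard requirement.
pool_tb = 60
gb_per_tb = 1                       # assumed metadata-heavy baseline
board_max_gb = 192

baseline_gb = pool_tb * gb_per_tb   # RAM consumed before any data caching
arc_headroom_gb = board_max_gb - baseline_gb

print(baseline_gb, arc_headroom_gb)   # prints: 60 132
```

On these assumptions a fully populated board still leaves over 130 GB for ARC, which is why Peter's advice is to leave a bank of slots free rather than assume 192 GB is overkill.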
--=20 Peter Jeremy --tKW2IUtsqtDRztdT Content-Type: application/pgp-signature -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.18 (FreeBSD) iEYEARECAAYFAk58STAACgkQ/opHv/APuIdrPQCeKopy9zuBrWSYKoE/dA/+m6dE jREAnjh/ALK6uhD4WvVle6+Xx0lIfWH3 =1tXa -----END PGP SIGNATURE----- --tKW2IUtsqtDRztdT-- From owner-freebsd-fs@FreeBSD.ORG Fri Sep 23 09:38:58 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id E03D11065675 for ; Fri, 23 Sep 2011 09:38:58 +0000 (UTC) (envelope-from peterjeremy@acm.org) Received: from mail13.syd.optusnet.com.au (mail13.syd.optusnet.com.au [211.29.132.194]) by mx1.freebsd.org (Postfix) with ESMTP id 579078FC13 for ; Fri, 23 Sep 2011 09:38:57 +0000 (UTC) Received: from server.vk2pj.dyndns.org (c220-239-116-103.belrs4.nsw.optusnet.com.au [220.239.116.103]) by mail13.syd.optusnet.com.au (8.13.1/8.13.1) with ESMTP id p8N9cfwL020756 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Fri, 23 Sep 2011 19:38:43 +1000 X-Bogosity: Ham, spamicity=0.000000 Received: from server.vk2pj.dyndns.org (localhost.vk2pj.dyndns.org [127.0.0.1]) by server.vk2pj.dyndns.org (8.14.5/8.14.4) with ESMTP id p8N9cfxF017033; Fri, 23 Sep 2011 19:38:41 +1000 (EST) (envelope-from peter@server.vk2pj.dyndns.org) Received: (from peter@localhost) by server.vk2pj.dyndns.org (8.14.5/8.14.4/Submit) id p8N9cesg017032; Fri, 23 Sep 2011 19:38:40 +1000 (EST) (envelope-from peter) Date: Fri, 23 Sep 2011 19:38:40 +1000 From: Peter Jeremy To: Martin Cracauer Message-ID: <20110923093840.GB16726@server.vk2pj.dyndns.org> References: <20110922185506.GA5281@cons.org> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="jq0ap7NbKX2Kqbes" Content-Disposition: inline In-Reply-To: <20110922185506.GA5281@cons.org> X-PGP-Key: http://members.optusnet.com.au/peterjeremy/pubkey.asc User-Agent: Mutt/1.5.21 
(2010-09-15) Cc: freebsd-fs@freebsd.org Subject: Re: Another Zpool management thing X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 23 Sep 2011 09:38:59 -0000 --jq0ap7NbKX2Kqbes Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On 2011-Sep-22 14:55:06 -0400, Martin Cracauer wrote: >I dd-copied a whole drive that had several slices, one of which holds >a single-disk ZFS in it (nothing mounted at the time of course). At >this time both disks are in the machine. ... >Is there any way to reference a pool by its /dev/ entry so that I can >rename the new copy? Not AFAIK. > Destroying it would be fine, too, as long as it >doesn't affect the original one. You can do this by zeroing the first and last 1MB of the unwanted ZFS partition - that's where ZFS stores its labels. -- Peter Jeremy --jq0ap7NbKX2Kqbes Content-Type: application/pgp-signature -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.18 (FreeBSD) iEYEARECAAYFAk58U6AACgkQ/opHv/APuIci/gCfS3DUaREJXL6CxCPhfBZ7joIi WYsAoI/gYSqa/WGaEIxiQ5ep2e+HKFUd =G9MU -----END PGP SIGNATURE----- --jq0ap7NbKX2Kqbes-- From owner-freebsd-fs@FreeBSD.ORG Fri Sep 23 11:06:46 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id A1AB61065672 for ; Fri, 23 Sep 2011 11:06:46 +0000 (UTC) (envelope-from daniel@digsys.bg) Received: from smtp-sofia.digsys.bg (smtp-sofia.digsys.bg [193.68.3.230]) by mx1.freebsd.org (Postfix) with ESMTP id 2AEA18FC19 for ; Fri, 23 Sep 2011 11:06:45 +0000 (UTC) Received: from digsys236-136.pip.digsys.bg (digsys236-136.pip.digsys.bg [193.68.136.236]) (authenticated bits=0) by smtp-sofia.digsys.bg (8.14.4/8.14.4) with ESMTP id p8NB6Zrd055136 (version=TLSv1/SSLv3 cipher=AES128-SHA
bits=128 verify=NO); Fri, 23 Sep 2011 14:06:41 +0300 (EEST) (envelope-from daniel@digsys.bg) Mime-Version: 1.0 (Apple Message framework v1244.3) Content-Type: text/plain; charset=windows-1252 From: Daniel Kalchev In-Reply-To: <1316729475.54812.YahooMailClassic@web121211.mail.ne1.yahoo.com> Date: Fri, 23 Sep 2011 14:06:35 +0300 Content-Transfer-Encoding: quoted-printable Message-Id: <81C28D6A-47E6-4680-BFB2-BE36CAE4DD8C@digsys.bg> References: <1316729475.54812.YahooMailClassic@web121211.mail.ne1.yahoo.com> To: Jason Usher X-Mailer: Apple Mail (2.1244.3) Cc: freebsd-fs@freebsd.org, Rich Subject: Re: redux: 48 or 96 sata3 paths ... specific ZFS hardware proposal X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 23 Sep 2011 11:06:46 -0000 On Sep 23, 2011, at 01:11, Jason Usher wrote: > You are saying I want the IR, yes ? > You want IT firmware. You also need to follow the firmware update procedure exactly, and you should use the Supermicro-provided software. In particular, take care to write down the original SAS addresses. > >> - If memory serves, the way the PCIe bandwidth works on >> that board is >> that slots 1-3 go to CPU1's interface and slots 4-7 go to >> CPU2. > > So, if I add four LSI cards, I should probably put two of them on slots 1,2 and the other two on 4,5 ... yes ? You also need to make sure you have two processors. If you only have one, then slots 4-7 will get routed to the first CPU, via the same QPI link to the hub chipset. You also need to make sure RAM is evenly populated for both CPUs, but this should already be in the motherboard's documentation. It is somewhat unclear how well current FreeBSD handles NUMA and similar architectures... At worst, you will end up with just the capacity of one QPI link to the peripherals.
Daniel From owner-freebsd-fs@FreeBSD.ORG Fri Sep 23 20:25:52 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id D3A751065679 for ; Fri, 23 Sep 2011 20:25:52 +0000 (UTC) (envelope-from clinton.adams@gmail.com) Received: from mail-ey0-f182.google.com (mail-ey0-f182.google.com [209.85.215.182]) by mx1.freebsd.org (Postfix) with ESMTP id 6F0E58FC14 for ; Fri, 23 Sep 2011 20:25:52 +0000 (UTC) Received: by eyg7 with SMTP id 7so3280556eyg.13 for ; Fri, 23 Sep 2011 13:25:51 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:date:message-id:subject:from:to:content-type; bh=s/WnN+ICRVOVadfNuO/JFN0clWBnMzMVuCeIuOGCvmc=; b=baUd0rxq94lr8A7RuGjYeHnAzu6CIMkXYPoNrvQ3esitCK+klD62CJ7npcJSFAX7Jt GoKu4SOnP9y90G0d8yWitrq2Qn9EaOUjhAstASpznM7/j+OesRpc5V+DRpn+jDzFMu1G sLHyXNsP+tcwTvYxL2vNFDD9dVLIvqZAaNRGU= MIME-Version: 1.0 Received: by 10.14.10.100 with SMTP id 76mr1252386eeu.165.1316807918949; Fri, 23 Sep 2011 12:58:38 -0700 (PDT) Received: by 10.14.53.67 with HTTP; Fri, 23 Sep 2011 12:58:38 -0700 (PDT) Date: Fri, 23 Sep 2011 12:58:38 -0700 Message-ID: From: Clinton Adams To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=ISO-8859-1 Subject: kernel panics with RPCSEC_GSS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 23 Sep 2011 20:25:53 -0000 Hi, On 8.2-RELEASE-p2, the kernel occasionally panics during relatively high nfs usage (usually morning logins). The frequency of crashes has decreased as we have reduced the number of clients, about twice a week with 10 clients versus daily with 15. Server is running nfsv4 with mit kerberos, clients are linux (ubuntu 10.04).
Backtraces from last 2 cores - #1 0xffffffff805cbb5e in boot (howto=260) at /usr/src/sys/kern/kern_shutdown.c:419 #2 0xffffffff805cbf91 in panic (fmt=Variable "fmt" is not available. ) at /usr/src/sys/kern/kern_shutdown.c:592 #3 0xffffffff808d25c0 in trap_fatal (frame=0xc, eva=Variable "eva" is not available. ) at /usr/src/sys/amd64/amd64/trap.c:783 #4 0xffffffff808d299f in trap_pfault (frame=0xffffff8096bb7790, usermode=0) at /usr/src/sys/amd64/amd64/trap.c:699 #5 0xffffffff808d2e7f in trap (frame=0xffffff8096bb7790) at /usr/src/sys/amd64/amd64/trap.c:449 #6 0xffffffff808baf74 in calltrap () at /usr/src/sys/amd64/amd64/exception.S:224 #7 0xffffffff807db8d8 in svc_rpc_gss_forget_client (client=0x0) at /usr/src/sys/rpc/rpcsec_gss/svc_rpcsec_gss.c:616 #8 0xffffffff807dc1c3 in svc_rpc_gss (rqst=0xffffff005708c000, msg=0xffffff8096bb7b20) at /usr/src/sys/rpc/rpcsec_gss/svc_rpcsec_gss.c:642 #9 0xffffffff807d49d3 in svc_run_internal (pool=0xffffff003d03d600, ismaster=0) at /usr/src/sys/rpc/svc.c:837 #10 0xffffffff807d518b in svc_thread_start (arg=Variable "arg" is not available. ) at /usr/src/sys/rpc/svc.c:1200 #11 0xffffffff805a2798 in fork_exit ( callout=0xffffffff807d5180 , arg=0xffffff003d03d600, frame=0xffffff8096bb7c40) at /usr/src/sys/kern/kern_fork.c:845 #12 0xffffffff808bb43e in fork_trampoline () at /usr/src/sys/amd64/amd64/exception.S:565 #1 0xffffffff805cbabe in boot (howto=260) at /usr/src/sys/kern/kern_shutdown.c:419 #2 0xffffffff805cbed3 in panic (fmt=0x0) at /usr/src/sys/kern/kern_shutdown.c:592 #3 0xffffffff808d239d in trap_fatal (frame=0xffffff0004c89460, eva=Variable "eva" is not available. 
) at /usr/src/sys/amd64/amd64/trap.c:783 #4 0xffffffff808d275f in trap_pfault (frame=0xffffff8096c0d790, usermode=0) at /usr/src/sys/amd64/amd64/trap.c:699 #5 0xffffffff808d2b5f in trap (frame=0xffffff8096c0d790) at /usr/src/sys/amd64/amd64/trap.c:449 #6 0xffffffff808bada4 in calltrap () at /usr/src/sys/amd64/amd64/exception.S:224 #7 0xffffffff807db856 in svc_rpc_gss_forget_client (client=0xffffff001c015200) at atomic.h:158 #8 0xffffffff807dc0e3 in svc_rpc_gss (rqst=0xffffff0004a24000, msg=0xffffff8096c0db20) at /usr/src/sys/rpc/rpcsec_gss/svc_rpcsec_gss.c:642 #9 0xffffffff807d48f3 in svc_run_internal (pool=0xffffff0004ca6200, ismaster=0) at /usr/src/sys/rpc/svc.c:837 #10 0xffffffff807d50ab in svc_thread_start (arg=Variable "arg" is not available. ) at /usr/src/sys/rpc/svc.c:1200 #11 0xffffffff805a26f8 in fork_exit ( callout=0xffffffff807d50a0 , arg=0xffffff0004ca6200, frame=0xffffff8096c0dc40) at /usr/src/sys/kern/kern_fork.c:845 #12 0xffffffff808bb26e in fork_trampoline () at /usr/src/sys/amd64/amd64/exception.S:565 Kernel is generic except for device crypto options KGSSAPI. Ash /etc/make.conf WITHOUT_X11=yes KRB5_HOME=/usr/local KRB5_IMPL=mit # added by use.perl 2011-09-02 11:38:57 PERL_VERSION=5.10.1 I'm happy to provide any additional info. 
Thanks for any help, Clinton From owner-freebsd-fs@FreeBSD.ORG Sat Sep 24 00:54:27 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 593F11065777 for ; Sat, 24 Sep 2011 00:54:27 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) Received: from esa-annu.mail.uoguelph.ca (esa-annu.mail.uoguelph.ca [131.104.91.36]) by mx1.freebsd.org (Postfix) with ESMTP id EE28A8FC0A for ; Sat, 24 Sep 2011 00:54:26 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: Ap8EAHgpfU6DaFvO/2dsb2JhbABChGKkMoFTAQEBBAEBASAEJyALGxgRGQIEJQEJJgYIBwQBHASHXaR/kVGDFYJbgREEkTqCGIhUiHk X-IronPort-AV: E=Sophos;i="4.68,433,1312171200"; d="scan'208";a="135692855" Received: from erie.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.206]) by esa-annu-pri.mail.uoguelph.ca with ESMTP; 23 Sep 2011 20:54:24 -0400 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id C3A77B3F0F; Fri, 23 Sep 2011 20:54:24 -0400 (EDT) Date: Fri, 23 Sep 2011 20:54:24 -0400 (EDT) From: Rick Macklem To: Clinton Adams Message-ID: <1498466253.1940252.1316825664747.JavaMail.root@erie.cs.uoguelph.ca> In-Reply-To: MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="----=_Part_1940251_322460862.1316825664744" X-Originating-IP: [172.17.91.202] X-Mailer: Zimbra 6.0.10_GA_2692 (ZimbraWebClient - FF3.0 (Win)/6.0.10_GA_2692) Cc: freebsd-fs@freebsd.org Subject: Re: kernel panics with RPCSEC_GSS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 24 Sep 2011 00:54:27 -0000 ------=_Part_1940251_322460862.1316825664744 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit Clinton Adams wrote: > Hi, > > On 8.2-RELEASE-p2, kernel occassionaly panics during 
relatively high > nfs usage (usually morning logins). Frequency of crashes have > decreased as we have reduced the number of clients, about twice a week > with 10 clients versus daily with 15. > > Server is running nfsv4 with mit kerberos, clients are linux (ubuntu > 10.04). > > Backtraces from last 2 cores - > > #1 0xffffffff805cbb5e in boot (howto=260) > at /usr/src/sys/kern/kern_shutdown.c:419 > #2 0xffffffff805cbf91 in panic (fmt=Variable "fmt" is not available. > ) at /usr/src/sys/kern/kern_shutdown.c:592 > #3 0xffffffff808d25c0 in trap_fatal (frame=0xc, eva=Variable "eva" is > not available. > ) > at /usr/src/sys/amd64/amd64/trap.c:783 > #4 0xffffffff808d299f in trap_pfault (frame=0xffffff8096bb7790, > usermode=0) > at /usr/src/sys/amd64/amd64/trap.c:699 > #5 0xffffffff808d2e7f in trap (frame=0xffffff8096bb7790) > at /usr/src/sys/amd64/amd64/trap.c:449 > #6 0xffffffff808baf74 in calltrap () > at /usr/src/sys/amd64/amd64/exception.S:224 > #7 0xffffffff807db8d8 in svc_rpc_gss_forget_client (client=0x0) > at /usr/src/sys/rpc/rpcsec_gss/svc_rpcsec_gss.c:616 > #8 0xffffffff807dc1c3 in svc_rpc_gss (rqst=0xffffff005708c000, > msg=0xffffff8096bb7b20) at > /usr/src/sys/rpc/rpcsec_gss/svc_rpcsec_gss.c:642 Well, here's the code snippet...

	while (svc_rpc_gss_client_count > CLIENT_MAX)
642		svc_rpc_gss_forget_client(TAILQ_LAST(&svc_rpc_gss_clients,
643		    svc_rpc_gss_client_list));

From the above, it looks like the "client" returned by TAILQ_LAST() is bogus. A quick look at the code shows that all changes to that tailq and the value of svc_rpc_gss_client_count are protected by an sx lock; however, this lock isn't held here. (svc_rpc_gss_client_count is decremented in svc_rpc_gss_forget_client().) svc_rpc_gss_client_count only seems to be incremented when an entry is added to the tailq and decremented in svc_rpc_gss_forget_client() when an entry is removed from the tailq, so I can't see how its value would get messed up?
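Rick's observation - the LRU tail is read without the sx lock that guards the client list everywhere else - and the locked loop his attached patch introduces can be modeled in user space. The following is a Python stand-in for illustration only (the real code is kernel C using TAILQ_LAST() and sx_xlock()); forget_client() takes the lock itself, mirroring svc_rpc_gss_forget_client(), so the caller must drop the lock around the call:

```python
import threading

CLIENT_MAX = 2
lock = threading.Lock()   # stands in for the kernel's sx lock
clients = []              # index 0 = most recently used, index -1 = LRU tail

def forget_client(cl):
    # Mirrors svc_rpc_gss_forget_client(): acquires the list lock itself,
    # so callers must NOT hold it across this call.
    with lock:
        clients.remove(cl)

def timeout_clients():
    # Locked version of the loop at svc_rpcsec_gss.c:642: re-read the tail
    # under the lock each pass instead of trusting an unlocked TAILQ_LAST().
    lock.acquire()
    while len(clients) > CLIENT_MAX and clients:
        cl = clients[-1]
        lock.release()          # drop before forget_client() re-takes it
        forget_client(cl)
        lock.acquire()
    lock.release()

clients.extend("client%d" % i for i in range(5))
timeout_clients()
print(clients)   # prints: ['client0', 'client1']
```

Re-reading the tail after every unlock is the key point: once the lock is dropped, another thread may have already removed that client, which is how the unlocked original could hand a stale or NULL pointer (client=0x0 in the first backtrace) to svc_rpc_gss_forget_client().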
All I can think of is trying to add locking to the above. Could you please try the attached patch. rick > #9 0xffffffff807d49d3 in svc_run_internal (pool=0xffffff003d03d600, > ismaster=0) at /usr/src/sys/rpc/svc.c:837 > #10 0xffffffff807d518b in svc_thread_start (arg=Variable "arg" is not > available. > ) > at /usr/src/sys/rpc/svc.c:1200 > #11 0xffffffff805a2798 in fork_exit ( > callout=0xffffffff807d5180 , arg=0xffffff003d03d600, > frame=0xffffff8096bb7c40) at /usr/src/sys/kern/kern_fork.c:845 > #12 0xffffffff808bb43e in fork_trampoline () > at /usr/src/sys/amd64/amd64/exception.S:565 > > > #1 0xffffffff805cbabe in boot (howto=260) > at /usr/src/sys/kern/kern_shutdown.c:419 > #2 0xffffffff805cbed3 in panic (fmt=0x0) > at /usr/src/sys/kern/kern_shutdown.c:592 > #3 0xffffffff808d239d in trap_fatal (frame=0xffffff0004c89460, > eva=Variable "eva" is not available. > ) > at /usr/src/sys/amd64/amd64/trap.c:783 > #4 0xffffffff808d275f in trap_pfault (frame=0xffffff8096c0d790, > usermode=0) > at /usr/src/sys/amd64/amd64/trap.c:699 > #5 0xffffffff808d2b5f in trap (frame=0xffffff8096c0d790) > at /usr/src/sys/amd64/amd64/trap.c:449 > #6 0xffffffff808bada4 in calltrap () > at /usr/src/sys/amd64/amd64/exception.S:224 > #7 0xffffffff807db856 in svc_rpc_gss_forget_client > (client=0xffffff001c015200) > at atomic.h:158 > #8 0xffffffff807dc0e3 in svc_rpc_gss (rqst=0xffffff0004a24000, > msg=0xffffff8096c0db20) at > /usr/src/sys/rpc/rpcsec_gss/svc_rpcsec_gss.c:642 > #9 0xffffffff807d48f3 in svc_run_internal (pool=0xffffff0004ca6200, > ismaster=0) at /usr/src/sys/rpc/svc.c:837 > #10 0xffffffff807d50ab in svc_thread_start (arg=Variable "arg" is not > available.
> ) > at /usr/src/sys/rpc/svc.c:1200 > #11 0xffffffff805a26f8 in fork_exit ( > callout=0xffffffff807d50a0 , arg=0xffffff0004ca6200, > frame=0xffffff8096c0dc40) at /usr/src/sys/kern/kern_fork.c:845 > #12 0xffffffff808bb26e in fork_trampoline () > at /usr/src/sys/amd64/amd64/exception.S:565 > > Kernel is generic except for > device crypto > options KGSSAPI. > > Ash /etc/make.conf > WITHOUT_X11=yes > KRB5_HOME=/usr/local > KRB5_IMPL=mit > # added by use.perl 2011-09-02 11:38:57 > PERL_VERSION=5.10.1 > > I'm happy to provide any additional info. > > Thanks for any help, > Clinton > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" ------=_Part_1940251_322460862.1316825664744 Content-Type: text/x-patch; name=svcrpcsec.patch Content-Transfer-Encoding: base64 Content-Disposition: attachment; filename=svcrpcsec.patch LS0tIHJwYy9ycGNzZWNfZ3NzL3N2Y19ycGNzZWNfZ3NzLmMuc2F2CTIwMTEtMDktMjMgMjA6MTQ6 MDcuMDAwMDAwMDAwIC0wNDAwCisrKyBycGMvcnBjc2VjX2dzcy9zdmNfcnBjc2VjX2dzcy5jCTIw MTEtMDktMjMgMjA6MjE6MjguMDAwMDAwMDAwIC0wNDAwCkBAIC02MzgsMTYgKzYzOCwyNSBAQCBz dmNfcnBjX2dzc190aW1lb3V0X2NsaWVudHModm9pZCkKIAkgKiBGaXJzdCBlbmZvcmNlIHRoZSBt YXggY2xpZW50IGxpbWl0LiBXZSBrZWVwCiAJICogc3ZjX3JwY19nc3NfY2xpZW50cyBpbiBMUlUg b3JkZXIuCiAJICovCi0Jd2hpbGUgKHN2Y19ycGNfZ3NzX2NsaWVudF9jb3VudCA+IENMSUVOVF9N QVgpCi0JCXN2Y19ycGNfZ3NzX2ZvcmdldF9jbGllbnQoVEFJTFFfTEFTVCgmc3ZjX3JwY19nc3Nf Y2xpZW50cywKLQkJCSAgICBzdmNfcnBjX2dzc19jbGllbnRfbGlzdCkpOworCXN4X3hsb2NrKCZz dmNfcnBjX2dzc19sb2NrKTsKKwljbGllbnQgPSBUQUlMUV9MQVNUKCZzdmNfcnBjX2dzc19jbGll bnRzLCBzdmNfcnBjX2dzc19jbGllbnRfbGlzdCk7CisJd2hpbGUgKHN2Y19ycGNfZ3NzX2NsaWVu dF9jb3VudCA+IENMSUVOVF9NQVggJiYgY2xpZW50ICE9IE5VTEwpIHsKKwkJc3hfeHVubG9jaygm c3ZjX3JwY19nc3NfbG9jayk7CisJCXN2Y19ycGNfZ3NzX2ZvcmdldF9jbGllbnQoY2xpZW50KTsK KwkJc3hfeGxvY2soJnN2Y19ycGNfZ3NzX2xvY2spOworCQljbGllbnQgPSBUQUlMUV9MQVNUKCZz 
dmNfcnBjX2dzc19jbGllbnRzLAorCQkgICAgc3ZjX3JwY19nc3NfY2xpZW50X2xpc3QpOworCX0K IAlUQUlMUV9GT1JFQUNIX1NBRkUoY2xpZW50LCAmc3ZjX3JwY19nc3NfY2xpZW50cywgY2xfYWxs bGluaywgbmNsaWVudCkgewogCQlpZiAoY2xpZW50LT5jbF9zdGF0ZSA9PSBDTElFTlRfU1RBTEUK IAkJICAgIHx8IG5vdyA+IGNsaWVudC0+Y2xfZXhwaXJhdGlvbikgeworCQkJc3hfeHVubG9jaygm c3ZjX3JwY19nc3NfbG9jayk7CiAJCQlycGNfZ3NzX2xvZ19kZWJ1ZygiZXhwaXJpbmcgY2xpZW50 ICVwIiwgY2xpZW50KTsKIAkJCXN2Y19ycGNfZ3NzX2ZvcmdldF9jbGllbnQoY2xpZW50KTsKKwkJ CXN4X3hsb2NrKCZzdmNfcnBjX2dzc19sb2NrKTsKIAkJfQogCX0KKwlzeF94dW5sb2NrKCZzdmNf cnBjX2dzc19sb2NrKTsKIH0KIAogI2lmZGVmIERFQlVHCg== ------=_Part_1940251_322460862.1316825664744-- From owner-freebsd-fs@FreeBSD.ORG Sat Sep 24 01:27:03 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 4A2C41065670 for ; Sat, 24 Sep 2011 01:27:03 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) Received: from esa-jnhn.mail.uoguelph.ca (esa-jnhn.mail.uoguelph.ca [131.104.91.44]) by mx1.freebsd.org (Postfix) with ESMTP id E066B8FC13 for ; Sat, 24 Sep 2011 01:27:02 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: Ap8EAMswfU6DaFvO/2dsb2JhbAA2DIRipDKBUwEBAQQBAQEaBgQnIAsbGBEZAgQlAQkmBggHBAEcBIddpQWRT4MVOwGCH4ERBJE6ghiIVIh5 X-IronPort-AV: E=Sophos;i="4.68,433,1312171200"; d="scan'208";a="138896314" Received: from erie.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.206]) by esa-jnhn-pri.mail.uoguelph.ca with ESMTP; 23 Sep 2011 21:27:01 -0400 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id DC7E9B3F27; Fri, 23 Sep 2011 21:27:01 -0400 (EDT) Date: Fri, 23 Sep 2011 21:27:01 -0400 (EDT) From: Rick Macklem To: Clinton Adams Message-ID: <1461855405.1940757.1316827621857.JavaMail.root@erie.cs.uoguelph.ca> In-Reply-To: MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="----=_Part_1940756_505940954.1316827621854" 
X-Originating-IP: [172.17.91.202] X-Mailer: Zimbra 6.0.10_GA_2692 (ZimbraWebClient - FF3.0 (Win)/6.0.10_GA_2692) Cc: freebsd-fs@freebsd.org Subject: Re: kernel panics with RPCSEC_GSS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 24 Sep 2011 01:27:03 -0000 ------=_Part_1940756_505940954.1316827621854 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit Clinton Adams wrote: > Hi, > > On 8.2-RELEASE-p2, kernel occasionally panics during relatively high > nfs usage (usually morning logins). Frequency of crashes has > decreased as we have reduced the number of clients, about twice a week > with 10 clients versus daily with 15. > > Server is running nfsv4 with mit kerberos, clients are linux (ubuntu > 10.04). > > Backtraces from last 2 cores - > > #1 0xffffffff805cbb5e in boot (howto=260) > at /usr/src/sys/kern/kern_shutdown.c:419 > #2 0xffffffff805cbf91 in panic (fmt=Variable "fmt" is not available. > ) at /usr/src/sys/kern/kern_shutdown.c:592 > #3 0xffffffff808d25c0 in trap_fatal (frame=0xc, eva=Variable "eva" is > not available. > ) > at /usr/src/sys/amd64/amd64/trap.c:783 > #4 0xffffffff808d299f in trap_pfault (frame=0xffffff8096bb7790, > usermode=0) > at /usr/src/sys/amd64/amd64/trap.c:699 > #5 0xffffffff808d2e7f in trap (frame=0xffffff8096bb7790) > at /usr/src/sys/amd64/amd64/trap.c:449 > #6 0xffffffff808baf74 in calltrap () > at /usr/src/sys/amd64/amd64/exception.S:224 > #7 0xffffffff807db8d8 in svc_rpc_gss_forget_client (client=0x0) > at /usr/src/sys/rpc/rpcsec_gss/svc_rpcsec_gss.c:616 Oops, I realized that if multiple threads did the call at line#642 concurrently, they could try to remove the same client from the tailq twice. Please try this attached patch instead of the one I posted a few minutes ago (I think it avoids this race).
Thanks for reporting this and please let us know if this patch helps, rick > #8 0xffffffff807dc1c3 in svc_rpc_gss (rqst=0xffffff005708c000, > msg=0xffffff8096bb7b20) at > /usr/src/sys/rpc/rpcsec_gss/svc_rpcsec_gss.c:642 > #9 0xffffffff807d49d3 in svc_run_internal (pool=0xffffff003d03d600, > ismaster=0) at /usr/src/sys/rpc/svc.c:837 > #10 0xffffffff807d518b in svc_thread_start (arg=Variable "arg" is not > available. > ) > at /usr/src/sys/rpc/svc.c:1200 > #11 0xffffffff805a2798 in fork_exit ( > callout=0xffffffff807d5180 , arg=0xffffff003d03d600, > frame=0xffffff8096bb7c40) at /usr/src/sys/kern/kern_fork.c:845 > #12 0xffffffff808bb43e in fork_trampoline () > at /usr/src/sys/amd64/amd64/exception.S:565 > > > #1 0xffffffff805cbabe in boot (howto=260) > at /usr/src/sys/kern/kern_shutdown.c:419 > #2 0xffffffff805cbed3 in panic (fmt=0x0) > at /usr/src/sys/kern/kern_shutdown.c:592 > #3 0xffffffff808d239d in trap_fatal (frame=0xffffff0004c89460, > eva=Variable "eva" is not available. > ) > at /usr/src/sys/amd64/amd64/trap.c:783 > #4 0xffffffff808d275f in trap_pfault (frame=0xffffff8096c0d790, > usermode=0) > at /usr/src/sys/amd64/amd64/trap.c:699 > #5 0xffffffff808d2b5f in trap (frame=0xffffff8096c0d790) > at /usr/src/sys/amd64/amd64/trap.c:449 > #6 0xffffffff808bada4 in calltrap () > at /usr/src/sys/amd64/amd64/exception.S:224 > #7 0xffffffff807db856 in svc_rpc_gss_forget_client > (client=0xffffff001c015200) > at atomic.h:158 > #8 0xffffffff807dc0e3 in svc_rpc_gss (rqst=0xffffff0004a24000, > msg=0xffffff8096c0db20) at > /usr/src/sys/rpc/rpcsec_gss/svc_rpcsec_gss.c:642 > #9 0xffffffff807d48f3 in svc_run_internal (pool=0xffffff0004ca6200, > ismaster=0) at /usr/src/sys/rpc/svc.c:837 > #10 0xffffffff807d50ab in svc_thread_start (arg=Variable "arg" is not > available. 
> ) > at /usr/src/sys/rpc/svc.c:1200 > #11 0xffffffff805a26f8 in fork_exit ( > callout=0xffffffff807d50a0 , arg=0xffffff0004ca6200, > frame=0xffffff8096c0dc40) at /usr/src/sys/kern/kern_fork.c:845 > #12 0xffffffff808bb26e in fork_trampoline () > at /usr/src/sys/amd64/amd64/exception.S:565 > > Kernel is generic except for > device crypto > options KGSSAPI. > > Ash /etc/make.conf > WITHOUT_X11=yes > KRB5_HOME=/usr/local > KRB5_IMPL=mit > # added by use.perl 2011-09-02 11:38:57 > PERL_VERSION=5.10.1 > > I'm happy to provide any additional info. > > Thanks for any help, > Clinton > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" ------=_Part_1940756_505940954.1316827621854 Content-Type: text/x-patch; name=svcrpcsec.patch Content-Transfer-Encoding: base64 Content-Disposition: attachment; filename=svcrpcsec.patch LS0tIHJwYy9ycGNzZWNfZ3NzL3N2Y19ycGNzZWNfZ3NzLmMuc2F2CTIwMTEtMDktMjMgMjA6MTQ6 MDcuMDAwMDAwMDAwIC0wNDAwCisrKyBycGMvcnBjc2VjX2dzcy9zdmNfcnBjc2VjX2dzcy5jCTIw MTEtMDktMjMgMjE6MTY6MzQuMDAwMDAwMDAwIC0wNDAwCkBAIC02MjUsNiArNjI1LDIzIEBAIHN2 Y19ycGNfZ3NzX2ZvcmdldF9jbGllbnQoc3RydWN0IHN2Y19ycGMKIAlzdmNfcnBjX2dzc19yZWxl YXNlX2NsaWVudChjbGllbnQpOwogfQogCisvKgorICogU2FtZSBhcyBhYm92ZSwgZXhjZXB0IHRo YXQgdGhpcyBvbmUgZXhwZWN0cyBzdmNfcnBjX2dzc19sb2NrIHRvCisgKiBiZSBoZWxkIHdoZW4g aXQgaXMgY2FsbGVkLiBJdCByZWxlYXNlcyB0aGlzIGxvY2suCisgKi8KK3N0YXRpYyB2b2lkCitz dmNfcnBjX2dzc19mb3JnZXRfY2xpZW50X2xvY2tlZChzdHJ1Y3Qgc3ZjX3JwY19nc3NfY2xpZW50 ICpjbGllbnQpCit7CisJc3RydWN0IHN2Y19ycGNfZ3NzX2NsaWVudF9saXN0ICpsaXN0OworCisJ bGlzdCA9ICZzdmNfcnBjX2dzc19jbGllbnRfaGFzaFtjbGllbnQtPmNsX2lkLmNpX2lkICUgQ0xJ RU5UX0hBU0hfU0laRV07CisJVEFJTFFfUkVNT1ZFKGxpc3QsIGNsaWVudCwgY2xfbGluayk7CisJ VEFJTFFfUkVNT1ZFKCZzdmNfcnBjX2dzc19jbGllbnRzLCBjbGllbnQsIGNsX2FsbGxpbmspOwor CXN2Y19ycGNfZ3NzX2NsaWVudF9jb3VudC0tOworCXN4X3h1bmxvY2soJnN2Y19ycGNfZ3NzX2xv 
Y2spOworCXN2Y19ycGNfZ3NzX3JlbGVhc2VfY2xpZW50KGNsaWVudCk7Cit9CisKIHN0YXRpYyB2 b2lkCiBzdmNfcnBjX2dzc190aW1lb3V0X2NsaWVudHModm9pZCkKIHsKQEAgLTYzOCwxNiArNjU1 LDIzIEBAIHN2Y19ycGNfZ3NzX3RpbWVvdXRfY2xpZW50cyh2b2lkKQogCSAqIEZpcnN0IGVuZm9y Y2UgdGhlIG1heCBjbGllbnQgbGltaXQuIFdlIGtlZXAKIAkgKiBzdmNfcnBjX2dzc19jbGllbnRz IGluIExSVSBvcmRlci4KIAkgKi8KLQl3aGlsZSAoc3ZjX3JwY19nc3NfY2xpZW50X2NvdW50ID4g Q0xJRU5UX01BWCkKLQkJc3ZjX3JwY19nc3NfZm9yZ2V0X2NsaWVudChUQUlMUV9MQVNUKCZzdmNf cnBjX2dzc19jbGllbnRzLAotCQkJICAgIHN2Y19ycGNfZ3NzX2NsaWVudF9saXN0KSk7CisJc3hf eGxvY2soJnN2Y19ycGNfZ3NzX2xvY2spOworCWNsaWVudCA9IFRBSUxRX0xBU1QoJnN2Y19ycGNf Z3NzX2NsaWVudHMsIHN2Y19ycGNfZ3NzX2NsaWVudF9saXN0KTsKKwl3aGlsZSAoc3ZjX3JwY19n c3NfY2xpZW50X2NvdW50ID4gQ0xJRU5UX01BWCAmJiBjbGllbnQgIT0gTlVMTCkgeworCQlzdmNf cnBjX2dzc19mb3JnZXRfY2xpZW50X2xvY2tlZChjbGllbnQpOyAvKiByZWxlYXNlcyBsb2NrICov CisJCXN4X3hsb2NrKCZzdmNfcnBjX2dzc19sb2NrKTsKKwkJY2xpZW50ID0gVEFJTFFfTEFTVCgm c3ZjX3JwY19nc3NfY2xpZW50cywKKwkJICAgIHN2Y19ycGNfZ3NzX2NsaWVudF9saXN0KTsKKwl9 CiAJVEFJTFFfRk9SRUFDSF9TQUZFKGNsaWVudCwgJnN2Y19ycGNfZ3NzX2NsaWVudHMsIGNsX2Fs bGxpbmssIG5jbGllbnQpIHsKIAkJaWYgKGNsaWVudC0+Y2xfc3RhdGUgPT0gQ0xJRU5UX1NUQUxF CiAJCSAgICB8fCBub3cgPiBjbGllbnQtPmNsX2V4cGlyYXRpb24pIHsKIAkJCXJwY19nc3NfbG9n X2RlYnVnKCJleHBpcmluZyBjbGllbnQgJXAiLCBjbGllbnQpOwotCQkJc3ZjX3JwY19nc3NfZm9y Z2V0X2NsaWVudChjbGllbnQpOworCQkJc3ZjX3JwY19nc3NfZm9yZ2V0X2NsaWVudF9sb2NrZWQo Y2xpZW50KTsKKwkJCXN4X3hsb2NrKCZzdmNfcnBjX2dzc19sb2NrKTsKIAkJfQogCX0KKwlzeF94 dW5sb2NrKCZzdmNfcnBjX2dzc19sb2NrKTsKIH0KIAogI2lmZGVmIERFQlVHCg== ------=_Part_1940756_505940954.1316827621854-- From owner-freebsd-fs@FreeBSD.ORG Sat Sep 24 04:09:27 2011 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 4D5F5106566B; Sat, 24 Sep 2011 04:09:27 +0000 (UTC) (envelope-from eadler@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org 
(Postfix) with ESMTP id 259B88FC12; Sat, 24 Sep 2011 04:09:27 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id p8O49RCf067942; Sat, 24 Sep 2011 04:09:27 GMT (envelope-from eadler@freefall.freebsd.org) Received: (from eadler@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id p8O49Q6F067938; Sat, 24 Sep 2011 04:09:26 GMT (envelope-from eadler) Date: Sat, 24 Sep 2011 04:09:26 GMT Message-Id: <201109240409.p8O49Q6F067938@freefall.freebsd.org> To: vpaepcke@incore.de, eadler@FreeBSD.org, freebsd-fs@FreeBSD.org From: eadler@FreeBSD.org Cc: Subject: Re: kern/33464: [ufs] soft update inconsistencies after system crash X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 24 Sep 2011 04:09:27 -0000 Synopsis: [ufs] soft update inconsistencies after system crash State-Changed-From-To: open->feedback State-Changed-By: eadler State-Changed-When: Sat Sep 24 04:09:26 UTC 2011 State-Changed-Why: Is this still an issue on recent versions of FreeBSD? 
http://www.freebsd.org/cgi/query-pr.cgi?pr=33464 From owner-freebsd-fs@FreeBSD.ORG Sat Sep 24 05:20:40 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id C2657106564A; Sat, 24 Sep 2011 05:20:40 +0000 (UTC) (envelope-from mckusick@mckusick.com) Received: from chez.mckusick.com (chez.mckusick.com [70.36.157.235]) by mx1.freebsd.org (Postfix) with ESMTP id 8B8E78FC16; Sat, 24 Sep 2011 05:20:40 +0000 (UTC) Received: from chez.mckusick.com (localhost [127.0.0.1]) by chez.mckusick.com (8.14.3/8.14.3) with ESMTP id p8O5KgpG063882; Fri, 23 Sep 2011 22:20:42 -0700 (PDT) (envelope-from mckusick@chez.mckusick.com) Message-Id: <201109240520.p8O5KgpG063882@chez.mckusick.com> To: eadler@freebsd.org In-reply-to: <201109240409.p8O49Q6F067938@freefall.freebsd.org> Date: Fri, 23 Sep 2011 22:20:42 -0700 From: Kirk McKusick X-Spam-Status: No, score=0.0 required=5.0 tests=MISSING_MID, UNPARSEABLE_RELAY autolearn=failed version=3.2.5 X-Spam-Checker-Version: SpamAssassin 3.2.5 (2008-06-10) on chez.mckusick.com Cc: freebsd-fs@freebsd.org, vpaepcke@incore.de Subject: Re: kern/33464: [ufs] soft update inconsistencies after system crash X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 24 Sep 2011 05:20:40 -0000 Two fixes were added to soft updates that probably corrected this problem. Unless someone can reproduce the problem on a recent system, I believe this PR should be closed. 
Kirk McKusick From owner-freebsd-fs@FreeBSD.ORG Sat Sep 24 15:16:51 2011 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 4C3571065675; Sat, 24 Sep 2011 15:16:51 +0000 (UTC) (envelope-from eadler@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 235C58FC0C; Sat, 24 Sep 2011 15:16:51 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id p8OFGpeK085094; Sat, 24 Sep 2011 15:16:51 GMT (envelope-from eadler@freefall.freebsd.org) Received: (from eadler@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id p8OFGo1k085089; Sat, 24 Sep 2011 15:16:50 GMT (envelope-from eadler) Date: Sat, 24 Sep 2011 15:16:50 GMT Message-Id: <201109241516.p8OFGo1k085089@freefall.freebsd.org> To: vpaepcke@incore.de, eadler@FreeBSD.org, freebsd-fs@FreeBSD.org From: eadler@FreeBSD.org Cc: Subject: Re: kern/33464: [ufs] soft update inconsistencies after system crash X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 24 Sep 2011 15:16:51 -0000 Synopsis: [ufs] soft update inconsistencies after system crash State-Changed-From-To: feedback->closed State-Changed-By: eadler State-Changed-When: Sat Sep 24 15:16:50 UTC 2011 State-Changed-Why: as per previous comment http://www.freebsd.org/cgi/query-pr.cgi?pr=33464 From owner-freebsd-fs@FreeBSD.ORG Sat Sep 24 19:22:11 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 39FA71065673 for ; Sat, 24 Sep 2011 19:22:11 +0000 (UTC) (envelope-from rabgvzr@gmail.com) Received: from mail-yx0-f182.google.com 
(mail-yx0-f182.google.com [209.85.213.182]) by mx1.freebsd.org (Postfix) with ESMTP id 000368FC14 for ; Sat, 24 Sep 2011 19:22:10 +0000 (UTC) Received: by yxk36 with SMTP id 36so4364840yxk.13 for ; Sat, 24 Sep 2011 12:22:10 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:date:message-id:subject:from:to:content-type; bh=lVh9a4ONyP+0u/Y/JBezlcc0LdI3zjNsANb3wjGrUsU=; b=ERzfxy7XQSw7jJW1JLmai0bruVtkGjDqCUc79qWiLoEjXht9LhnSxYviyiyAsALY+f WxMbt4os9I/4fc3HkU7Dp4z/BA+cOQmQvFpku8CdV26dcmCjqYAMpQu7BqjgeuSkB5rj s9NU8yZlJVj7yDR6tvmDgMrC3geAW02e1JKQ0= MIME-Version: 1.0 Received: by 10.236.201.234 with SMTP id b70mr13345425yho.122.1316892130223; Sat, 24 Sep 2011 12:22:10 -0700 (PDT) Received: by 10.236.41.10 with HTTP; Sat, 24 Sep 2011 12:22:10 -0700 (PDT) Date: Sat, 24 Sep 2011 15:22:10 -0400 Message-ID: From: Rotate 13 To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=UTF-8 Subject: [ZFS] Mixed 512 and 4096 byte physical sector size X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 24 Sep 2011 19:22:11 -0000 Has anyone had decent performance running a mix of 512 and 4096 ("advanced format") physical sector size drives on the same vdev, with ashift=12 and correct alignment? Looking at zmirror, maybe zraid. From what I can tell, the worst that can happen is I/O amplification and cache pressure against the drive with the smaller sector size. Searching shows people have had problems with regular RAID in such a configuration; I think ZFS is probably smart enough that the only problem is more I/O. But I am not a ZFS expert, so I ask. I know it's not ideal, but sometimes I must work with what I've got. And it's better to set ashift=12 from the start on a live zpool, as added/replacement drives in future will probably have 4096-byte sectors.
From owner-freebsd-fs@FreeBSD.ORG Sat Sep 24 23:13:25 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id BB4531065670 for ; Sat, 24 Sep 2011 23:13:25 +0000 (UTC) (envelope-from joe@tao.org.uk) Received: from alpha.tao.org.uk (alpha.tao.org.uk [95.154.203.106]) by mx1.freebsd.org (Postfix) with ESMTP id 7A6038FC0A for ; Sat, 24 Sep 2011 23:13:25 +0000 (UTC) Received: from localhost (alpha.tao.org.uk [95.154.203.106]) by alpha.tao.org.uk (Postfix) with ESMTP id 44C9F11E6C3; Sun, 25 Sep 2011 00:03:41 +0100 (BST) Received: from alpha.tao.org.uk ([95.154.203.106]) by localhost (mail.tao.org.uk [95.154.203.106]) (amavisd-maia, port 10024) with LMTP id 06605-01-2; Sun, 25 Sep 2011 00:03:40 +0100 (BST) Received: from [10.0.2.3] (p2.dhcp.tao.org.uk [90.155.77.81]) (Authenticated sender: joemail@alpha.tao.org.uk) by alpha.tao.org.uk (Postfix) with ESMTPA id A205611E126; Sat, 24 Sep 2011 23:17:21 +0100 (BST) Mime-Version: 1.0 (Apple Message framework v1244.3) Content-Type: text/plain; charset=us-ascii From: Dr Josef Karthauser In-Reply-To: Date: Sat, 24 Sep 2011 23:17:27 +0100 Content-Transfer-Encoding: quoted-printable Message-Id: <1F0AC978-F158-41BB-B1AE-38D4FFDC9D33@tao.org.uk> References: <2EF5C613-ACFF-449A-9388-664E0179F450@tao.org.uk> <5E2A5A2A-6AE9-48FB-99E0-6C52DAB372E6@tao.org.uk> To: Dr Josef Karthauser X-Mailer: Apple Mail (2.1244.3) X-Virus-Scanned: Maia Mailguard 1.0.2a Cc: freebsd-fs@freebsd.org Subject: Re: Expanding a spool on a system with a single zfs root disk? 
X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 24 Sep 2011 23:13:25 -0000

On 24 Sep 2011, at 23:05, Dr Josef Karthauser wrote:
> On 24 Sep 2011, at 23:00, Dr Josef Karthauser wrote:
>
>> On 24 Sep 2011, at 22:44, Dr Josef Karthauser wrote:
>>
>>> I'm scratching my head working out how to expand a zpool on a remote server. It's got a larger gpart partition, and I want to grow the zpool into it. I've got remote console access, but the system has the root disk on the same zfs pool, so I can't simply export and reimport the pool from single user mode. :/ Any ideas on how to achieve this then?
>>
>> Ok, so it looks like zpool has an autoexpand setting... I've switched it on, but it hasn't expanded. Perhaps it only expands when it's loaded? Another reboot in order then.....
>
> Ah, no, that didn't work. It's still the same size:
>
> # zpool get all void
> NAME PROPERTY VALUE SOURCE
> void size 126G -
>
> So, yes please, I could do with some more suggestions.

I've sussed it:

# zpool online -e void gpt/disk0
# zpool list
NAME   SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
void   226G   112G  114G  49%  1.00x  ONLINE  -

So, is it a bug that autoexpand doesn't work on zpools? Glad to have got it working, I was down to my last 10G!
:) Joe From owner-freebsd-fs@FreeBSD.ORG Sat Sep 24 23:18:25 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id EDFA7106564A for ; Sat, 24 Sep 2011 23:18:25 +0000 (UTC) (envelope-from joe@tao.org.uk) Received: from alpha.tao.org.uk (alpha.tao.org.uk [95.154.203.106]) by mx1.freebsd.org (Postfix) with ESMTP id B59038FC0C for ; Sat, 24 Sep 2011 23:18:25 +0000 (UTC) Received: from localhost (alpha.tao.org.uk [95.154.203.106]) by alpha.tao.org.uk (Postfix) with ESMTP id 3977011E3C9; Sun, 25 Sep 2011 00:02:30 +0100 (BST) Received: from alpha.tao.org.uk ([95.154.203.106]) by localhost (mail.tao.org.uk [95.154.203.106]) (amavisd-maia, port 10024) with LMTP id 05992-02-6; Sun, 25 Sep 2011 00:02:29 +0100 (BST) Received: from [10.0.2.3] (p2.dhcp.tao.org.uk [90.155.77.81]) (Authenticated sender: joemail@alpha.tao.org.uk) by alpha.tao.org.uk (Postfix) with ESMTPA id 10B4911D1AF; Sat, 24 Sep 2011 22:44:42 +0100 (BST) From: Dr Josef Karthauser Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: quoted-printable Date: Sat, 24 Sep 2011 22:44:48 +0100 To: freebsd-fs@freebsd.org Message-Id: <2EF5C613-ACFF-449A-9388-664E0179F450@tao.org.uk> Mime-Version: 1.0 (Apple Message framework v1244.3) X-Mailer: Apple Mail (2.1244.3) X-Virus-Scanned: Maia Mailguard 1.0.2a Subject: Expanding a spool on a system with a single zfs root disk? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 24 Sep 2011 23:18:26 -0000 Hey guys, I'm scratching my head working out how to expand a zpool on a remote server. It's got a larger gpart partition, and I want to grow the zpool into it.
I've got remote console access, but the system has the root disk on the same zfs pool, so I can't simply export and reimport the pool from single user mode. :/ Any ideas on how to achieve this then?

Joe

# gpart show
=>       34  482344893  ad0  GPT  (230G)
         34        128    1  freebsd-boot  (64k)
        162    8388608    2  freebsd-swap  (4.0G)
    8388770  473956157    3  freebsd-zfs  (226G)

# glabel list ad0p3
Geom name: ad0p3
Providers:
1. Name: gpt/disk0
   Mediasize: 242665552384 (226G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 82944
   Mode: r1w1e1
   secoffset: 0
   offset: 0
   seclength: 473956157
   length: 242665552384
   index: 0
Consumers:
1. Name: ad0p3
   Mediasize: 242665552384 (226G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 82944
   Mode: r1w1e2

# zpool status void
  pool: void
 state: ONLINE
 scan: none requested
config:

	NAME         STATE     READ WRITE CKSUM
	void         ONLINE       0     0     0
	  gpt/disk0  ONLINE       0     0     0

errors: No known data errors

# zpool list void
NAME   SIZE  ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
void   126G   112G  13.7G  89%  1.00x  ONLINE  -

From owner-freebsd-fs@FreeBSD.ORG Sat Sep 24 23:18:26 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id F3164106566B for ; Sat, 24 Sep 2011 23:18:25 +0000 (UTC) (envelope-from joe@tao.org.uk) Received: from alpha.tao.org.uk (alpha.tao.org.uk [95.154.203.106]) by mx1.freebsd.org (Postfix) with ESMTP id B5AC98FC13 for ; Sat, 24 Sep 2011 23:18:25 +0000 (UTC) Received: from localhost (alpha.tao.org.uk [95.154.203.106]) by alpha.tao.org.uk (Postfix) with ESMTP id 21BFB11E497; Sun, 25 Sep 2011 00:02:49 +0100 (BST) Received: from alpha.tao.org.uk ([95.154.203.106]) by localhost (mail.tao.org.uk [95.154.203.106]) (amavisd-maia, port 10024) with LMTP id 06234-01; Sun, 25 Sep 2011 00:02:48 +0100 (BST) Received: from [10.0.2.3] (p2.dhcp.tao.org.uk [90.155.77.81]) (Authenticated sender: joemail@alpha.tao.org.uk) by alpha.tao.org.uk (Postfix) with ESMTPA id 24D0C11D26F; Sat, 24 Sep
2011 23:00:01 +0100 (BST) Mime-Version: 1.0 (Apple Message framework v1244.3) Content-Type: text/plain; charset=us-ascii From: Dr Josef Karthauser In-Reply-To: <2EF5C613-ACFF-449A-9388-664E0179F450@tao.org.uk> Date: Sat, 24 Sep 2011 23:00:06 +0100 Content-Transfer-Encoding: quoted-printable Message-Id: <5E2A5A2A-6AE9-48FB-99E0-6C52DAB372E6@tao.org.uk> References: <2EF5C613-ACFF-449A-9388-664E0179F450@tao.org.uk> To: Dr Josef Karthauser X-Mailer: Apple Mail (2.1244.3) X-Virus-Scanned: Maia Mailguard 1.0.2a Cc: freebsd-fs@freebsd.org Subject: Re: Expanding a spool on a system with a single zfs root disk? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 24 Sep 2011 23:18:26 -0000

On 24 Sep 2011, at 22:44, Dr Josef Karthauser wrote:
> I'm scratching my head working out how to expand a zpool on a remote server. It's got a larger gpart partition, and I want to grow the zpool into it. I've got remote console access, but the system has the root disk on the same zfs pool, so I can't simply export and reimport the pool from single user mode. :/ Any ideas on how to achieve this then?

Ok, so it looks like zpool has an autoexpand setting... I've switched it on, but it hasn't expanded. Perhaps it only expands when it's loaded? Another reboot in order then.....
Joe From owner-freebsd-fs@FreeBSD.ORG Sat Sep 24 23:18:26 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id F38BE106566C for ; Sat, 24 Sep 2011 23:18:25 +0000 (UTC) (envelope-from joe@tao.org.uk) Received: from alpha.tao.org.uk (alpha.tao.org.uk [95.154.203.106]) by mx1.freebsd.org (Postfix) with ESMTP id B59708FC12 for ; Sat, 24 Sep 2011 23:18:25 +0000 (UTC) Received: from localhost (alpha.tao.org.uk [95.154.203.106]) by alpha.tao.org.uk (Postfix) with ESMTP id 499EA11E452; Sun, 25 Sep 2011 00:02:44 +0100 (BST) Received: from alpha.tao.org.uk ([95.154.203.106]) by localhost (mail.tao.org.uk [95.154.203.106]) (amavisd-maia, port 10024) with LMTP id 06139-01-8; Sun, 25 Sep 2011 00:02:44 +0100 (BST) Received: from [10.0.2.3] (p2.dhcp.tao.org.uk [90.155.77.81]) (Authenticated sender: joemail@alpha.tao.org.uk) by alpha.tao.org.uk (Postfix) with ESMTPA id 87E2D11E087; Sat, 24 Sep 2011 23:05:34 +0100 (BST) Mime-Version: 1.0 (Apple Message framework v1244.3) Content-Type: text/plain; charset=us-ascii From: Dr Josef Karthauser In-Reply-To: <5E2A5A2A-6AE9-48FB-99E0-6C52DAB372E6@tao.org.uk> Date: Sat, 24 Sep 2011 23:05:39 +0100 Content-Transfer-Encoding: quoted-printable Message-Id: References: <2EF5C613-ACFF-449A-9388-664E0179F450@tao.org.uk> <5E2A5A2A-6AE9-48FB-99E0-6C52DAB372E6@tao.org.uk> To: Dr Josef Karthauser X-Mailer: Apple Mail (2.1244.3) X-Virus-Scanned: Maia Mailguard 1.0.2a Cc: freebsd-fs@freebsd.org Subject: Re: Expanding a spool on a system with a single zfs root disk?
X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 24 Sep 2011 23:18:26 -0000

On 24 Sep 2011, at 23:00, Dr Josef Karthauser wrote:
> On 24 Sep 2011, at 22:44, Dr Josef Karthauser wrote:
>
>> I'm scratching my head working out how to expand a zpool on a remote server. It's got a larger gpart partition, and I want to grow the zpool into it. I've got remote console access, but the system has the root disk on the same zfs pool, so I can't simply export and reimport the pool from single user mode. :/ Any ideas on how to achieve this then?
>
> Ok, so it looks like zpool has an autoexpand setting... I've switched it on, but it hasn't expanded. Perhaps it only expands when it's loaded? Another reboot in order then.....

Ah, no, that didn't work. It's still the same size:

# zpool get all void
NAME  PROPERTY       VALUE                 SOURCE
void  size           126G                  -
void  capacity       89%                   -
void  altroot        -                     default
void  health         ONLINE                -
void  guid           10894823139123390159  default
void  version        28                    default
void  bootfs         void                  local
void  delegation     on                    default
void  autoreplace    off                   default
void  cachefile      -                     default
void  failmode       wait                  default
void  listsnapshots  off                   default
void  autoexpand     on                    local
void  dedupditto     0                     default
void  dedupratio     1.00x                 -
void  free           13.6G                 -
void  allocated      112G                  -
void  readonly       off                   -

So, yes please, I could do with some more suggestions. Thanks :).

Joe