Date:      Wed, 3 Oct 2018 10:49:44 +0200 (CEST)
From:      Ronald Klop <ronald-lists@klop.ws>
To:        freebsd-fs@freebsd.org
Subject:   Re: ZFS mix disks different size and speed in a pool
Message-ID:  <492380418.1.1538556584486@localhost>
In-Reply-To: <72e4915f-3d2f-3a3c-ada1-44f797b7244f@multiplay.co.uk>
References:  <029ac041-39a9-3a42-4dda-7ce94188d83c@unice.fr> <72e4915f-3d2f-3a3c-ada1-44f797b7244f@multiplay.co.uk>

After all the disks are replaced the extra space can be used.

See man zpool:

     autoexpand=on | off
         Controls automatic pool expansion when the underlying LUN is grown.
         If set to "on", the pool will be resized according to the size of the
         expanded device. If the device is part of a mirror or raidz then all
         devices within that mirror/raidz group must be expanded before the
         new space is made available to the pool. The default behavior is
         "off".  This property can also be referred to by its shortened column
         name, expand.

So it can be nice to start using bigger disks (2+ TB) if you know you will need more space in the future and the price is OK.
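
For example, replacing a disk and letting the pool grow could look roughly
like this (a sketch only; the pool name "tank" and the device names da3/da9
are placeholders for your own setup):

     # let the pool pick up the extra space once a whole vdev has bigger disks
     zpool set autoexpand=on tank
     # swap the old disk for the new, larger one and wait for the resilver
     zpool replace tank da3 da9
     # if autoexpand was off during the replace, expand the device by hand
     zpool online -e tank da9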

Regards,
Ronald.

 
From: Steven Hartland <killing@multiplay.co.uk>
Date: Wednesday, 3 October 2018 10:12
To: freebsd-fs@freebsd.org
Subject: Re: ZFS mix disks different size and speed in a pool
> 
> That will be fine; you won't benefit from the extra space or speed, but it will just work.
> 
> On 03/10/2018 09:07, Jean-Marc LACROIX wrote:
> > Hello,
> >
> >     we currently have a storage solution based on a DELL 630 + 3 DELL MD 1420 JBODs (HBA mode),
> >
> > The ZFS pool is made of 9 raidz2 vdevs + 1 mirror log + 2 spares
> >
> >     All disks are of the same type in capacity and speed.
> >
> > The problem is that DELL is unable to provide the SAME kind of disk (capacity AND speed)
> > when a disk has to be replaced. So the question is:
> >
> > Is it possible to replace a 1TB 7200rpm disk with a 1.2TB 10000rpm disk without any consequences for the existing pool?
> >
> > Thanks in advance
> >
> > Best Regards
> >
> > JM
> >
> 
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
> 
> 
> 
From owner-freebsd-fs@freebsd.org  Wed Oct  3 14:32:29 2018
From: Josh Gitlin <jgitlin@goboomtown.com>
Message-Id: <D652B4AC-617D-4B03-BB1B-DF4C47C6B657@goboomtown.com>
Subject: Re: Troubleshooting kernel panic with zfs
Date: Wed, 3 Oct 2018 10:32:25 -0400
In-Reply-To: <D54225AD-CC96-45E7-A203-D2C52E984963@goboomtown.com>
To: freebsd-fs@freebsd.org
References: <D54225AD-CC96-45E7-A203-D2C52E984963@goboomtown.com>

Following up on this, a bug was just posted to the stable@freebsd.org list
where the stack trace exactly matches what I was seeing. See:
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=231296

On our end we reduced ARC and other memory tunables and have not seen a
panic *yet*, but they were unpredictable before, so I am not 100% sure
that we've resolved the issue.
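
(For reference, a minimal sketch of this kind of ARC cap in /boot/loader.conf;
the value below is illustrative only, not the actual setting used:)

     # cap the ARC so more kernel memory stays free for other allocations
     # 8 GiB shown here for a ~16 GiB box; pick a value for your workload
     vfs.zfs.arc_max="8589934592"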

CC'ing rainer@ultra-secure.de who posted the similar bug to stable@

--
<http://www.goboomtown.com/>
Josh Gitlin
Senior Full Stack Developer
(415) 690-1610 x155

Stay up to date and join the conversation in Relay <http://relay.goboomtown.com/>.

> On Sep 20, 2018, at 8:27 PM, Josh Gitlin <jgitlin@goboomtown.com> wrote:
> 
> I am working to debug/troubleshoot a kernel panic with a FreeBSD ZFS iSCSI
> server, specifically trying to determine if it's a bug or (more likely) a
> misconfiguration in our settings. The server is running 11.2-RELEASE-p2
> with 15.6 GiB of RAM and has a single zpool with 4x2 mirrored vdevs,
> 2x mirrored ZIL and 2x L2ARC. The server runs pretty much nothing other
> than SSH and iSCSI (via ctld) and serves VM virtual disks to hypervisor
> servers over a 10 GbE LAN.
> 
> The server experienced a kernel panic and we unfortunately did not have
> dumpdev set in /etc/rc.conf (we have since corrected this), so the only
> info I have is what was on the screen before I rebooted it. (Because it's
> a production system I couldn't mess around and had to reboot ASAP.)
> 
> trap number = 12
> panic: page fault
> cpuid = 6
> KDB: stack backtrace:
> #0 0xffffffff80b3d567 at kdb_backtrace+0x67
> #1 0xffffffff80af6b07 at vpanic+0x177
> #2 0xffffffff80af6983 at panic+0x43
> #3 0xffffffff80f77fcf at trap_fatal+0x35f
> #4 0xffffffff80f78029 at trap_pfault+0x49
> #5 0xffffffff80f777f7 at trap+0x2c7
> #6 0xffffffff80f57dac at calltrap+0x8
> #7 0xffffffff80dee7e2 at kmem_back+0xf2
> #8 0xffffffff80dee6c0 at kmem_malloc+0x60
> #9 0xffffffff80de6172 at keg_alloc_slab+0xe2
> #10 0xffffffff80de8b7e at keg_fetch_slab+0x14e
> #11 0xffffffff80de8364 at zone_fetch_slab+0x64
> #12 0xffffffff80de848f at zone_import+0x3f
> #13 0xffffffff80de4b99 at uma_zalloc_arg+0x3d9
> #14 0xffffffff826e6ab2 at zio_write_compress+0x1e2
> #15 0xffffffff826e574c at zio_execute+0xac
> #16 0xffffffff80b1ed74 at taskqueue_run_locked+0x154
> #17 0xffffffff80b4fed8 at taskqueue_thread_loop+0x98
> Uptime: 18d18h31m6s
> mpr0: Sending StopUnit: path (xpt0:mpr0:0:10:ffffffff): handle 10
> mpr0: Incrementing SSU count
> mpr0: Sending StopUnit: path (xpt0:mpr0:0:13:ffffffff): handle 13
> mpr0: Incrementing SSU count
> mpr0: Sending StopUnit: path (xpt0:mpr0:0:16:ffffffff): handle 16
> mpr0: Incrementing SSU count
> 
> My hunch is that, given this was inside kmem_malloc, we were unable to
> allocate memory for a zio_write_compress call (the pool does have ZFS
> compression on) and hence this is a tuning issue and not a bug... but I
> am looking for confirmation and/or suggested changes/troubleshooting
> steps. The ZFS tuning configuration has been stable for years, so it may
> be a change in behavior or traffic... If this looks like it might be a
> bug, I will be able to get more information from a minidump if it
> reoccurs and can follow up on this thread.
> 
> Any advice or suggestions are welcome!
> 
> [jgitlin@zfs3 ~]$ zpool status
>   pool: srv
>  state: ONLINE
>   scan: scrub repaired 0 in 2h32m with 0 errors on Tue Sep 11 20:32:18 2018
> config:
> 
> 	NAME            STATE     READ WRITE CKSUM
> 	srv             ONLINE       0     0     0
> 	  mirror-0      ONLINE       0     0     0
> 	    gpt/s5      ONLINE       0     0     0
> 	    gpt/s9      ONLINE       0     0     0
> 	  mirror-1      ONLINE       0     0     0
> 	    gpt/s6      ONLINE       0     0     0
> 	    gpt/s10     ONLINE       0     0     0
> 	  mirror-2      ONLINE       0     0     0
> 	    gpt/s7      ONLINE       0     0     0
> 	    gpt/s11     ONLINE       0     0     0
> 	  mirror-3      ONLINE       0     0     0
> 	    gpt/s8      ONLINE       0     0     0
> 	    gpt/s12     ONLINE       0     0     0
> 	logs
> 	  mirror-4      ONLINE       0     0     0
> 	    gpt/s2-zil  ONLINE       0     0     0
> 	    gpt/s3-zil  ONLINE       0     0     0
> 	cache
> 	  gpt/s2-cache  ONLINE       0     0     0
> 	  gpt/s3-cache  ONLINE       0     0     0
> 
> errors: No known data errors
> 
> ZFS tuning:
> 
> vfs.zfs.delay_min_dirty_percent=90
> vfs.zfs.dirty_data_max=4294967296
> vfs.zfs.dirty_data_sync=3221225472
> vfs.zfs.prefetch_disable=1
> vfs.zfs.top_maxinflight=128
> vfs.zfs.trim.txg_delay=8
> vfs.zfs.txg.timeout=20
> vfs.zfs.vdev.aggregation_limit=524288
> vfs.zfs.vdev.scrub_max_active=3
> vfs.zfs.l2arc_write_boost=134217728
> vfs.zfs.l2arc_write_max=134217728
> vfs.zfs.l2arc_feed_min_ms=200
> vfs.zfs.min_auto_ashift=12
> 
> 
> --
> <http://www.goboomtown.com/>
> Josh Gitlin
> Senior DevOps Engineer
> (415) 690-1610 x155
> 
> Stay up to date and join the conversation in Relay <http://relay.goboomtown.com/>.
> 



