Date: Wed, 4 Oct 2017 09:27:40 -0700
From: Freddie Cash <fjwcash@gmail.com>
To: javocado <javocado@gmail.com>
Cc: FreeBSD Filesystems <freebsd-fs@freebsd.org>
Subject: Re: lockup during zfs destroy
Message-ID: <CAOjFWZ54hB_jRaSQ8NX=s214Km9o+N=qvnQehJykZbY_QJGESA@mail.gmail.com>
In-Reply-To: <CAP1HOmQtU14X1EvwYMHQmOru9S4uyXep=n0pU4PL5z-+QnX02A@mail.gmail.com>
References: <CAP1HOmQtU14X1EvwYMHQmOru9S4uyXep=n0pU4PL5z-+QnX02A@mail.gmail.com>
On Wed, Oct 4, 2017 at 9:15 AM, javocado <javocado@gmail.com> wrote:
> I am trying to destroy a dense, large filesystem and it's not going well.
>
> Details:
> - zpool is a raidz3 with 3 x 12 drive vdevs.
> - target filesystem to be destroyed is ~2T with ~63M inodes.
> - OS: FreeBSD 10.3 amd64 with 192 GB of RAM.
> - 120 GB of swap (90 GB recently added as swap-on-disk)

Do you have dedupe enabled on any filesystems in the pool? Or was it enabled at any point in the past?

This is a common occurrence when destroying large filesystems, or lots of filesystems/snapshots, on pools that have (or had) dedupe enabled and there's not enough RAM/L2ARC to hold the DDT. The system runs out of usable wired memory and locks up. Adding more RAM, and/or being patient with the boot-wait-lockup-repeat cycle, will (usually) eventually allow the destroy to finish.

There was a loader.conf tunable (or sysctl) added in the 10.x series that mitigates this by limiting the number of delete operations that occur in a single transaction group, but I forget the details on it. I'm also not sure whether this affects pools that never had dedupe enabled.

(We used to suffer through this at least once a year until we added a delete-oldest-snapshot-before-running-backups step to limit the number of snapshots.)

-- 
Freddie Cash
fjwcash@gmail.com
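P.S. For reference, roughly the commands I'd use to check the dedupe angle. The pool name "tank" is a placeholder for your pool, and the tunable name below is from memory (I believe it is vfs.zfs.free_max_blocks, but treat that as a best guess and verify on your system first):

```shell
# Was dedupe ever enabled? A dedupratio above 1.00x, or any dedup
# property changes in the pool history, means a DDT exists or existed.
zpool get dedupratio tank
zpool history tank | grep -i dedup
zfs get -r dedup tank | grep -v '\boff\b'

# Rough DDT footprint: zdb prints the dedup table histogram; each
# DDT entry costs on the order of ~320 bytes of RAM when loaded.
zdb -DD tank

# The 10.x throttle mentioned above (name is my best guess) limits
# how many blocks get freed per txg; -d prints its description so
# you can confirm it's the right knob before lowering it.
sysctl -d vfs.zfs.free_max_blocks
sysctl vfs.zfs.free_max_blocks=100000
```

Lowering the per-txg limit makes the destroy take longer overall, but keeps each txg small enough that the machine stays responsive instead of wiring up all of RAM in one go.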