From owner-freebsd-fs@freebsd.org Wed Oct 4 17:58:03 2017
From: Freddie Cash <fjwcash@gmail.com>
Date: Wed, 4 Oct 2017 10:58:01 -0700
Subject: Re: lockup during zfs destroy
To: javocado
Cc: FreeBSD Filesystems <freebsd-fs@freebsd.org>

On Wed, Oct 4, 2017 at 9:27 AM, Freddie Cash <fjwcash@gmail.com> wrote:

> On Wed, Oct 4, 2017 at 9:15 AM, javocado wrote:
>
>> I am trying to destroy a dense, large filesystem and it's not going well.
>>
>> Details:
>> - zpool is a raidz3 with 3 x 12-drive vdevs.
>> - target filesystem to be destroyed is ~2T with ~63M inodes.
>> - OS: FreeBSD 10.3 amd64 with 192 GB of RAM.
>> - 120 GB of swap (90 GB recently added as swap-on-disk)
>
> Do you have dedupe enabled on any filesystems in the pool? Or was it
> enabled at any point in the past?
>
> This is a common occurrence when destroying large filesystems, or lots of
> filesystems/snapshots, on pools that have (or had) dedupe enabled and
> there's not enough RAM/L2ARC to contain the DDT. The system runs out of
> usable wired memory and locks up. Adding more RAM and/or being patient
> with the boot-wait-lockup-repeat cycle will (usually) eventually allow the
> destroy to finish.
> There was a loader.conf tunable (or sysctl) added in the 10.x series that
> mitigates this by limiting the number of delete operations that occur in
> a transaction group, but I forget the details on it.
>
> Not sure whether this affects pools that never had dedupe enabled.
>
> (We used to suffer through this at least once a year until we enabled a
> delete-oldest-snapshot-before-running-backups process to limit the number
> of snapshots.)

Found it. You can set vfs.zfs.free_max_blocks in /etc/sysctl.conf. It
limits the number of to-be-freed blocks in a single transaction group. You
can experiment with that number until you find a value that won't run the
system out of kernel memory while freeing all those blocks in a single
transaction.

On our problem server, running dedupe with only 64 GB of RAM for a 53 TB
pool, we set it to 200,000 blocks:

vfs.zfs.free_max_blocks=200000

-- 
Freddie Cash
fjwcash@gmail.com
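[Archive note: for readers wanting to try this, a minimal sketch of both the
runtime and persistent forms of that tunable, assuming a FreeBSD 10.x-era
system where vfs.zfs.free_max_blocks exists; the 200000 value is the poster's
starting point for a 64 GB / 53 TB deduped pool, not a general recommendation.]

```shell
# Apply at runtime, without a reboot (takes effect for subsequent txgs):
sysctl vfs.zfs.free_max_blocks=200000

# Persist across reboots -- add this line to /etc/sysctl.conf:
# vfs.zfs.free_max_blocks=200000
```

Lowering the value makes each transaction group free fewer blocks, so a large
destroy takes longer overall but touches less of the DDT per txg, which is
what keeps wired memory from being exhausted.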