From owner-freebsd-fs@FreeBSD.ORG Sun Oct 28 17:10:12 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id B4738DE2; Sun, 28 Oct 2012 17:10:12 +0000 (UTC) (envelope-from asmrookie@gmail.com) Received: from mail-la0-f54.google.com (mail-la0-f54.google.com [209.85.215.54]) by mx1.freebsd.org (Postfix) with ESMTP id 8F3868FC0C; Sun, 28 Oct 2012 17:10:11 +0000 (UTC) Received: by mail-la0-f54.google.com with SMTP id e12so4196633lag.13 for ; Sun, 28 Oct 2012 10:10:10 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:reply-to:sender:in-reply-to:references:date :x-google-sender-auth:message-id:subject:from:to:cc:content-type; bh=rz5pae5NNO9cGchY4sjoX8R4GL21IuL42GVDlFYVGtI=; b=X1BfSBDAH9eBMc+m9diNjtSeLNlBvJU/19hxjp61oEBCNRpKtwzqJQ5j844qVdYcmW itwhbJkga5OCIwnmua7FMFJWPs3gnJgTmKCKtkAJKUm7Rk2IpA2021hLRlIpNypcbLVT plN/cHLcnV4bnDuZ178veISyn16jH10hCrZKncj9cC0RfDDQYJSYWxooUAM2F+PxNMLt 8VKahc16dmEppQ11Gs3wSz037xCjWzKSXwmc7KAE4zrebrYgA9lJkDvS+wcjSUDzG1JR 92TUF0io4WvryrVBN6eRFENrKztDSeKjtELxGpf+0wAfiHmlRgQ+orKfE9P/ANhJJ8iw PrLQ== MIME-Version: 1.0 Received: by 10.152.105.103 with SMTP id gl7mr24788270lab.10.1351444210097; Sun, 28 Oct 2012 10:10:10 -0700 (PDT) Sender: asmrookie@gmail.com Received: by 10.112.30.37 with HTTP; Sun, 28 Oct 2012 10:10:09 -0700 (PDT) In-Reply-To: References: Date: Sun, 28 Oct 2012 17:10:09 +0000 X-Google-Sender-Auth: zfO-iRyq8HPosyBkdbBKlM3bLkY Message-ID: Subject: Re: MPSAFE VFS -- update From: Attilio Rao To: "C. P. Ghost" Content-Type: text/plain; charset=UTF-8 Cc: FreeBSD FS , Peter Holm , freebsd-current@freebsd.org, Konstantin Belousov X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list Reply-To: attilio@FreeBSD.org List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 28 Oct 2012 17:10:12 -0000 On Mon, Oct 22, 2012 at 4:50 PM, C. P. Ghost wrote: > On Thu, Oct 18, 2012 at 7:51 PM, Attilio Rao wrote: >> Following the plan reported here: >> http://wiki.freebsd.org/NONMPSAFE_DEORBIT_VFS >> >> We are now at the state where all non-MPSAFE filesystems are >> disconnected by the three. > > Sad to see PortalFS go. You've served us well here. :-( So do you think you will be able to test patches if someone fixes it? I've double-checked and unfortunately there is no FUSE module for portalfs. Attilio -- Peace can only be achieved by understanding - A. 
Einstein From owner-freebsd-fs@FreeBSD.ORG Sun Oct 28 20:04:57 2012 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 2C47137E for ; Sun, 28 Oct 2012 20:04:57 +0000 (UTC) (envelope-from avg@FreeBSD.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id 5E1618FC0C for ; Sun, 28 Oct 2012 20:04:56 +0000 (UTC) Received: from porto.starpoint.kiev.ua (porto-e.starpoint.kiev.ua [212.40.38.100]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id WAA04875 for ; Sun, 28 Oct 2012 22:04:48 +0200 (EET) (envelope-from avg@FreeBSD.org) Received: from localhost ([127.0.0.1]) by porto.starpoint.kiev.ua with esmtp (Exim 4.34 (FreeBSD)) id 1TSZ6F-000PfZ-Sd for freebsd-fs@freebsd.org; Sun, 28 Oct 2012 22:04:48 +0200 Message-ID: <508D8FDD.5050605@FreeBSD.org> Date: Sun, 28 Oct 2012 22:04:45 +0200 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:16.0) Gecko/20121013 Thunderbird/16.0.1 MIME-Version: 1.0 To: "freebsd-fs@freebsd.org" Subject: some zfs changes for testing and review X-Enigmail-Version: 1.4.5 Content-Type: text/plain; charset=X-VIET-VPS Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 28 Oct 2012 20:04:57 -0000 Could you please test and/or review some ZFS-related changes that can be found in zfs-geom, zfs-vfs and zfs-vm branches that can be accessed here: https://github.com/avg-I/freebsd/branches The repo can be cloned using this URL: git://github.com/avg-I/freebsd.git The branches are based off the FreeBSD head and I am merging the head to them from time to time. -- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Sun Oct 28 22:52:27 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 01E7BEE9 for ; Sun, 28 Oct 2012 22:52:27 +0000 (UTC) (envelope-from jhuard@surette-realestate.com) Received: from stable.skadate.com (204-232-203-173.static.cloud-ips.com [204.232.203.173]) by mx1.freebsd.org (Postfix) with ESMTP id A285B8FC0C for ; Sun, 28 Oct 2012 22:52:26 +0000 (UTC) Received: from root by stable.skadate.com with local-bsmtp (Exim 4.71 (FreeBSD)) (envelope-from ) id NvPfd1-000Bc5-NS; Sun, 28 Oct 2012 18:52:27 -0400 Date: Sun, 28 Oct 2012 18:52:27 -0400 From: "devnull" To: Subject: Fwd: Rimesse Doc. Message-ID: MIME-Version: 1.0 Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 8bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 28 Oct 2012 22:52:27 -0000 Gentile utente, Si prega di confermare il 20% tt pagamento anticipato. 
nei confronti della fattura proforma PO36196/2012 http://csipordenone.it/Operazione.zip Aspetto la tua risposta Cordiali saluti From owner-freebsd-fs@FreeBSD.ORG Mon Oct 29 04:11:34 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id F2FDA29E for ; Mon, 29 Oct 2012 04:11:34 +0000 (UTC) (envelope-from corwinfergus@wavecable.com) Received: from stable.skadate.com (204-232-203-173.static.cloud-ips.com [204.232.203.173]) by mx1.freebsd.org (Postfix) with ESMTP id A1AC98FC16 for ; Mon, 29 Oct 2012 04:11:34 +0000 (UTC) Received: from stable.skadate.com (IOM-67-40 [204.232.203.173]) by stable.skadate.com (mailer) with SMTP id C2CCEHMKdp7 for ; Mon, 29 Oct 2012 00:11:33 -0400 Date: Mon, 29 Oct 2012 00:11:33 -0400 From: "Invoice" To: Subject: Fwd: Rimesse. Message-Id: MIME-Version: 1.0 Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 8bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 29 Oct 2012 04:11:35 -0000 Si prega di confermare il 20% tt pagamento anticipato. 
nei confronti della fattura proforma PO24362/2012 http://jeiden.it/Operazione.zip Aspetto la tua risposta Cordiali saluti From owner-freebsd-fs@FreeBSD.ORG Mon Oct 29 05:02:06 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 1AFE6723; Mon, 29 Oct 2012 05:02:06 +0000 (UTC) (envelope-from lstewart@freebsd.org) Received: from lauren.room52.net (lauren.room52.net [210.50.193.198]) by mx1.freebsd.org (Postfix) with ESMTP id 747AE8FC08; Mon, 29 Oct 2012 05:02:05 +0000 (UTC) Received: from lstewart.caia.swin.edu.au (lstewart.caia.swin.edu.au [136.186.229.95]) by lauren.room52.net (Postfix) with ESMTPSA id B7CB97E918; Mon, 29 Oct 2012 15:55:27 +1100 (EST) Message-ID: <508E0C3F.8080602@freebsd.org> Date: Mon, 29 Oct 2012 15:55:27 +1100 From: Lawrence Stewart User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:16.0) Gecko/20121016 Thunderbird/16.0.1 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: graid often resyncs raid1 array after clean reboot/shutdown Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Spam-Status: No, score=0.0 required=5.0 tests=UNPARSEABLE_RELAY autolearn=unavailable version=3.3.2 X-Spam-Checker-Version: SpamAssassin 3.3.2 (2011-06-06) on lauren.room52.net Cc: mav@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 29 Oct 2012 05:02:06 -0000 Hi all, I have a fairly new HP Compaq 8200 Elite desktop PC with 2 x 1TB Seagate ST1000DM003 HDDs in raid1 using the on-board Intel Matrix RAID controller. The system is configured to boot from ZFS off the raid1 array, and I use it as a KDE GUI (with on-cpu GPU + KMS) desktop. Everything works great, except that after a "shutdown -r now" of the system, graid almost always (I believe I've noted a few times where everything comes up fine) detects one of the disks in the array as stale and does a full resync of the array over the course of a few hours. Here's an example of what I see when starting up: Oct 29 15:50:20 lstewart kernel: GEOM_RAID: Intel-76494d3c: Array Intel-76494d3c created. Oct 29 15:50:20 lstewart kernel: GEOM_RAID: Intel-76494d3c: Disk ada0 state changed from NONE to ACTIVE. Oct 29 15:50:20 lstewart kernel: GEOM_RAID: Intel-76494d3c: Subdisk lstewart:0-ada0 state changed from NONE to STALE. Oct 29 15:50:20 lstewart kernel: GEOM_RAID: Intel-76494d3c: Disk ada1 state changed from NONE to ACTIVE. Oct 29 15:50:20 lstewart kernel: GEOM_RAID: Intel-76494d3c: Subdisk lstewart:1-ada1 state changed from NONE to STALE. Oct 29 15:50:20 lstewart kernel: GEOM_RAID: Intel-76494d3c: Array started. Oct 29 15:50:20 lstewart kernel: GEOM_RAID: Intel-76494d3c: Subdisk lstewart:0-ada0 state changed from STALE to ACTIVE. Oct 29 15:50:20 lstewart kernel: GEOM_RAID: Intel-76494d3c: Subdisk lstewart:1-ada1 state changed from STALE to RESYNC. Oct 29 15:50:20 lstewart kernel: GEOM_RAID: Intel-76494d3c: Subdisk lstewart:1-ada1 rebuild start at 0. Oct 29 15:50:20 lstewart kernel: GEOM_RAID: Intel-76494d3c: Volume lstewart state changed from STARTING to SUBOPTIMAL. Oct 29 15:50:20 lstewart kernel: GEOM_RAID: Intel-76494d3c: Provider raid/r0 for volume lstewart created. 
lstewart@lstewart> graid status Name Status Components raid/r0 SUBOPTIMAL ada0 (ACTIVE (ACTIVE)) ada1 (ACTIVE (RESYNC 1%)) There's no obvious reason why the disks should become out of sync after a clean reboot from FreeBSD immediately back into FreeBSD, so I'd appreciate some help in figuring out what the problem might be and ideally how to stop the rebuilds from happening. I don't reboot my desktop frequently, but it would be nice to avoid the lengthy re-syncs each time I do. Relevant details about the system are below. Let me know if other information would be useful. Cheers, Lawrence ######################################## lstewart@lstewart> uname -a FreeBSD lstewart 9.1-PRERELEASE FreeBSD 9.1-PRERELEASE #1 r241919M: Tue Oct 23 17:54:40 EST 2012 root@lstewart:/usr/obj/usr/src/sys/LSTEWART-DESKTOP amd64 ######################################## ahci0@pci0:0:31:2: class=0x010400 card=0x1495103c chip=0x28228086 rev=0x04 hdr=0x00 vendor = 'Intel Corporation' device = '82801 SATA RAID Controller' class = mass storage subclass = RAID ######################################## lstewart@lstewart> graid status Name Status Components raid/r0 OPTIMAL ada0 (ACTIVE (ACTIVE)) ada1 (ACTIVE (ACTIVE)) ######################################## lstewart@lstewart> graid list Geom name: Intel-76494d3c State: OPTIMAL Metadata: Intel Providers: 1. Name: raid/r0 Mediasize: 1000202043392 (931G) Sectorsize: 512 Stripesize: 4096 Stripeoffset: 0 Mode: r2w2e3 Subdisks: ada0 (ACTIVE), ada1 (ACTIVE) Dirty: Yes State: OPTIMAL Strip: 65536 Components: 2 Transformation: RAID1 RAIDLevel: RAID1 Label: lstewart Consumers: 1. Name: ada0 Mediasize: 1000204886016 (931G) Sectorsize: 512 Stripesize: 4096 Stripeoffset: 0 Mode: r1w1e1 ReadErrors: 0 Subdisks: r0(lstewart):0@0 State: ACTIVE (ACTIVE) 2. Name: ada1 Mediasize: 1000204886016 (931G) Sectorsize: 512 Stripesize: 4096 Stripeoffset: 0 Mode: r1w1e1 ReadErrors: 0 Subdisks: r0(lstewart):1@0 State: ACTIVE (ACTIVE) ######################################## lstewart@lstewart> gpart show => 34 1953519549 raid/r0 GPT (931G) 34 6 - free - (3.0k) 40 256 6 freebsd-boot (128k) 296 1752 - free - (876k) 2048 204800 1 efi (100M) 206848 262144 2 ms-reserved (128M) 468992 109850624 3 linux-data (52G) 110319616 33554432 4 freebsd-swap (16G) 143874048 1809641472 5 freebsd-zfs (862G) 1953515520 4063 - free - (2M) - The system is dual boot Win 7 (EFI) and FreeBSD (BIOS + gptzfsboot). EFI booting is completely disabled in the BIOS so the system always boots directly into FreeBSD when turned on. 
######################################## From owner-freebsd-fs@FreeBSD.ORG Mon Oct 29 08:29:57 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id BB7A2E40; Mon, 29 Oct 2012 08:29:57 +0000 (UTC) (envelope-from mavbsd@gmail.com) Received: from mail-lb0-f182.google.com (mail-lb0-f182.google.com [209.85.217.182]) by mx1.freebsd.org (Postfix) with ESMTP id EF8088FC0A; Mon, 29 Oct 2012 08:29:56 +0000 (UTC) Received: by mail-lb0-f182.google.com with SMTP id b5so3656684lbd.13 for ; Mon, 29 Oct 2012 01:29:55 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=sender:message-id:date:from:user-agent:mime-version:to:cc:subject :references:in-reply-to:content-type:content-transfer-encoding; bh=qAFQtKpYnexYj8OkfJqKUNR17Iv0h9pxILck0ukBQ/U=; b=KYbbYK/2rhAEl6tafnx+3NVjzlnz0LHuFJ++HgHIoAI+8oN0PeI+a86OToH1jiw43Z rCiWJ5/7NfxJz/Z+W1O9ElMsUCqg7CU4fdPLGzDrvoxLsLSFW+1v2Qcx3BwmWWcDEvZo 2RSDMl0jxpn01x5UzQs0AwPYv3AM6iDMQXjFdpy6Zc0mFaUbn7NCGTJuATbvEVO8SZ5U XyqnmW3ZgjoWc9jVColHQEyVHIFJ9y1BT1l2GiA5/tLG53+EMnXOIw8O7O68f7IovwSl aYlm9oYRHj5BQKC9XrDRREmnEqWxcAmLAsEbBmq6P69QsQ/na17Uz50TJqqfJ2jKRvIb ZWlA== Received: by 10.152.105.174 with SMTP id gn14mr26378858lab.55.1351499395645; Mon, 29 Oct 2012 01:29:55 -0700 (PDT) Received: from mavbook.mavhome.dp.ua (mavhome.mavhome.dp.ua. [213.227.240.37]) by mx.google.com with ESMTPS id gk11sm2813708lab.3.2012.10.29.01.29.54 (version=TLSv1/SSLv3 cipher=OTHER); Mon, 29 Oct 2012 01:29:55 -0700 (PDT) Sender: Alexander Motin Message-ID: <508E3E81.9010209@FreeBSD.org> Date: Mon, 29 Oct 2012 10:29:53 +0200 From: Alexander Motin User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:13.0) Gecko/20120628 Thunderbird/13.0.1 MIME-Version: 1.0 To: Lawrence Stewart Subject: Re: graid often resyncs raid1 array after clean reboot/shutdown References: <508E0C3F.8080602@freebsd.org> In-Reply-To: <508E0C3F.8080602@freebsd.org> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 29 Oct 2012 08:29:57 -0000 Hi. On 29.10.2012 06:55, Lawrence Stewart wrote: > I have a fairly new HP Compaq 8200 Elite desktop PC with 2 x 1TB Seagate > ST1000DM003 HDDs in raid1 using the on-board Intel Matrix RAID > controller. The system is configured to boot from ZFS off the raid1 > array, and I use it as a KDE GUI (with on-cpu GPU + KMS) desktop. > > Everything works great, except that after a "shutdown -r now" of the > system, graid almost always (I believe I've noted a few times where > everything comes up fine) detects one of the disks in the array as stale > and does a full resync of the array over the course of a few hours. > Here's an example of what I see when starting up: From log messages it indeed looks like result of unclean shutdown. I've never seen such problem with UFS, but I never tested graid with ZFS. I guess there may be some difference in shutdown process that makes RAID metadata to have dirty flag on reboot. I'll try to reproduce it now. 
-- Alexander Motin From owner-freebsd-fs@FreeBSD.ORG Mon Oct 29 09:17:37 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 5F55BA70; Mon, 29 Oct 2012 09:17:37 +0000 (UTC) (envelope-from mavbsd@gmail.com) Received: from mail-lb0-f182.google.com (mail-lb0-f182.google.com [209.85.217.182]) by mx1.freebsd.org (Postfix) with ESMTP id 80BFD8FC15; Mon, 29 Oct 2012 09:17:36 +0000 (UTC) Received: by mail-lb0-f182.google.com with SMTP id b5so3687657lbd.13 for ; Mon, 29 Oct 2012 02:17:35 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=sender:message-id:date:from:user-agent:mime-version:to:cc:subject :references:in-reply-to:content-type:content-transfer-encoding; bh=aHAqyh2icpx5GCbgiw7pTFcyYGs3zdp3L2ygTb3D6sM=; b=kQLVr8Y7IYMWFwCSlETpVvp2Jv8ZjdAeIanV4M0RJZaHQFr1lthkGRXxMONsb7KjuF Ynpn19/cszfAiI/nBsHCwfooFSsvLypl/M7ucYe8H7DqlnLi/gWZxE+KtyaG5o9+tWIS YdbGY4PWksmdAWXc1yp+VnkShJ9AAtKCdLyAFgHbi+S3XvupB3qTt36fNtlBbmjm/eTY +5C7nlxmd1D7xW9K9SfuOh1Yd2a1LiWRh6c1+Kyps06MOCZJdd3jzjP9p+eJnAHC+Pgq S+092d2JN3VpjsOQYF2oURhht9dAG6IUxYszMototUI6y6NaU4Pe+llPnyU+a5Uoyv2Q NmvQ== Received: by 10.152.105.236 with SMTP id gp12mr26821061lab.35.1351502255424; Mon, 29 Oct 2012 02:17:35 -0700 (PDT) Received: from mavbook.mavhome.dp.ua (mavhome.mavhome.dp.ua. [213.227.240.37]) by mx.google.com with ESMTPS id p9sm2941518lbc.3.2012.10.29.02.17.34 (version=TLSv1/SSLv3 cipher=OTHER); Mon, 29 Oct 2012 02:17:34 -0700 (PDT) Sender: Alexander Motin Message-ID: <508E49AD.4090501@FreeBSD.org> Date: Mon, 29 Oct 2012 11:17:33 +0200 From: Alexander Motin User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:13.0) Gecko/20120628 Thunderbird/13.0.1 MIME-Version: 1.0 To: Lawrence Stewart Subject: Re: graid often resyncs raid1 array after clean reboot/shutdown References: <508E0C3F.8080602@freebsd.org> <508E3E81.9010209@FreeBSD.org> In-Reply-To: <508E3E81.9010209@FreeBSD.org> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 29 Oct 2012 09:17:37 -0000 On 29.10.2012 10:29, Alexander Motin wrote: > Hi. > > On 29.10.2012 06:55, Lawrence Stewart wrote: >> I have a fairly new HP Compaq 8200 Elite desktop PC with 2 x 1TB Seagate >> ST1000DM003 HDDs in raid1 using the on-board Intel Matrix RAID >> controller. The system is configured to boot from ZFS off the raid1 >> array, and I use it as a KDE GUI (with on-cpu GPU + KMS) desktop. >> >> Everything works great, except that after a "shutdown -r now" of the >> system, graid almost always (I believe I've noted a few times where >> everything comes up fine) detects one of the disks in the array as stale >> and does a full resync of the array over the course of a few hours. >> Here's an example of what I see when starting up: > > From log messages it indeed looks like result of unclean shutdown. I've > never seen such problem with UFS, but I never tested graid with ZFS. I > guess there may be some difference in shutdown process that makes RAID > metadata to have dirty flag on reboot. I'll try to reproduce it now. I confirm the problem. Seems it happens only when using ZFS as root file system. Probably ZFS issues some last moment write that makes volume dirty. I will trace it more. 
-- Alexander Motin From owner-freebsd-fs@FreeBSD.ORG Mon Oct 29 11:06:32 2012 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id EF6C1B35 for ; Mon, 29 Oct 2012 11:06:32 +0000 (UTC) (envelope-from owner-bugmaster@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) by mx1.freebsd.org (Postfix) with ESMTP id D3A258FC15 for ; Mon, 29 Oct 2012 11:06:32 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.5/8.14.5) with ESMTP id q9TB6WSX028466 for ; Mon, 29 Oct 2012 11:06:32 GMT (envelope-from owner-bugmaster@FreeBSD.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.5/8.14.5/Submit) id q9TB6Wvq028464 for freebsd-fs@FreeBSD.org; Mon, 29 Oct 2012 11:06:32 GMT (envelope-from owner-bugmaster@FreeBSD.org) Date: Mon, 29 Oct 2012 11:06:32 GMT Message-Id: <201210291106.q9TB6Wvq028464@freefall.freebsd.org> X-Authentication-Warning: freefall.freebsd.org: gnats set sender to owner-bugmaster@FreeBSD.org using -f From: FreeBSD bugmaster To: freebsd-fs@FreeBSD.org Subject: Current problem reports assigned to freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 29 Oct 2012 11:06:33 -0000 Note: to view an individual PR, use: http://www.freebsd.org/cgi/query-pr.cgi?pr=(number). The following is a listing of current problems submitted by FreeBSD users. These represent problem reports covering all versions including experimental development code and obsolete releases. S Tracker Resp. 
Description -------------------------------------------------------------------------------- o kern/173136 fs [unionfs] mounting above the NFS read-only share panic o kern/172348 fs [unionfs] umount -f of filesystem in use with readonly o kern/172334 fs [unionfs] unionfs permits recursive union mounts; caus o kern/172259 fs [zfs] [patch] ZFS fails to receive valid snapshots (pa o kern/171626 fs [tmpfs] tmpfs should be noisier when the requested siz o kern/171415 fs [zfs] zfs recv fails with "cannot receive incremental o kern/170945 fs [gpt] disk layout not portable between direct connect o kern/170914 fs [zfs] [patch] Import patchs related with issues 3090 a o kern/170912 fs [zfs] [patch] unnecessarily setting DS_FLAG_INCONSISTE o bin/170778 fs [zfs] [panic] FreeBSD panics randomly o kern/170680 fs [nfs] Multiple NFS Client bug in the FreeBSD 7.4-RELEA o kern/170497 fs [xfs][panic] kernel will panic whenever I ls a mounted o kern/170238 fs [zfs] [panic] Panic when deleting data o kern/169945 fs [zfs] [panic] Kernel panic while importing zpool (afte o kern/169480 fs [zfs] ZFS stalls on heavy I/O o kern/169398 fs [zfs] Can't remove file with permanent error o kern/169339 fs panic while " : > /etc/123" o kern/169319 fs [zfs] zfs resilver can't complete o kern/168947 fs [nfs] [zfs] .zfs/snapshot directory is messed up when o kern/168942 fs [nfs] [hang] nfsd hangs after being restarted (not -HU o kern/168158 fs [zfs] incorrect parsing of sharenfs options in zfs (fs o kern/167979 fs [ufs] DIOCGDINFO ioctl does not work on 8.2 file syste o kern/167977 fs [smbfs] mount_smbfs results are differ when utf-8 or U o kern/167688 fs [fusefs] Incorrect signal handling with direct_io o kern/167685 fs [zfs] ZFS on USB drive prevents shutdown / reboot o kern/167612 fs [portalfs] The portal file system gets stuck inside po o kern/167272 fs [zfs] ZFS Disks reordering causes ZFS to pick the wron o kern/167260 fs [msdosfs] msdosfs disk was mounted the second time whe o kern/167109 fs [zfs] [panic] zfs diff kernel panic Fatal trap 9: gene o kern/167105 fs [nfs] mount_nfs can not handle source exports wiht mor o kern/167067 fs [zfs] [panic] ZFS panics the server o kern/167066 fs [zfs] ZVOLs not appearing in /dev/zvol o kern/167065 fs [zfs] boot fails when a spare is the boot disk o kern/167048 fs [nfs] [patch] RELEASE-9 crash when using ZFS+NULLFS+NF o kern/166912 fs [ufs] [panic] Panic after converting Softupdates to jo o kern/166851 fs [zfs] [hang] Copying directory from the mounted UFS di o kern/166477 fs [nfs] NFS data corruption. 
o kern/165950 fs [ffs] SU+J and fsck problem o kern/165923 fs [nfs] Writing to NFS-backed mmapped files fails if flu o kern/165521 fs [zfs] [hang] livelock on 1 Gig of RAM with zfs when 31 o kern/165392 fs Multiple mkdir/rmdir fails with errno 31 o kern/165087 fs [unionfs] lock violation in unionfs o kern/164472 fs [ufs] fsck -B panics on particular data inconsistency o kern/164370 fs [zfs] zfs destroy for snapshot fails on i386 and sparc o kern/164261 fs [nullfs] [patch] fix panic with NFS served from NULLFS o kern/164256 fs [zfs] device entry for volume is not created after zfs o kern/164184 fs [ufs] [panic] Kernel panic with ufs_makeinode o kern/163801 fs [md] [request] allow mfsBSD legacy installed in 'swap' o kern/163770 fs [zfs] [hang] LOR between zfs&syncer + vnlru leading to o kern/163501 fs [nfs] NFS exporting a dir and a subdir in that dir to o kern/162944 fs [coda] Coda file system module looks broken in 9.0 o kern/162860 fs [zfs] Cannot share ZFS filesystem to hosts with a hyph o kern/162751 fs [zfs] [panic] kernel panics during file operations o kern/162591 fs [nullfs] cross-filesystem nullfs does not work as expe o kern/162519 fs [zfs] "zpool import" relies on buggy realpath() behavi o kern/162362 fs [snapshots] [panic] ufs with snapshot(s) panics when g o kern/161968 fs [zfs] [hang] renaming snapshot with -r including a zvo p kern/161897 fs [zfs] [patch] zfs partition probing causing long delay o kern/161864 fs [ufs] removing journaling from UFS partition fails on o bin/161807 fs [patch] add option for explicitly specifying metadata o kern/161579 fs [smbfs] FreeBSD sometimes panics when an smb share is o kern/161533 fs [zfs] [panic] zfs receive panic: system ioctl returnin o kern/161438 fs [zfs] [panic] recursed on non-recursive spa_namespace_ o kern/161424 fs [nullfs] __getcwd() calls fail when used on nullfs mou o kern/161280 fs [zfs] Stack overflow in gptzfsboot o kern/161205 fs [nfs] [pfsync] [regression] [build] Bug report freebsd o kern/161169 fs [zfs] [panic] ZFS causes kernel panic in dbuf_dirty o kern/161112 fs [ufs] [lor] filesystem LOR in FreeBSD 9.0-BETA3 o kern/160893 fs [zfs] [panic] 9.0-BETA2 kernel panic o kern/160860 fs [ufs] Random UFS root filesystem corruption with SU+J o kern/160801 fs [zfs] zfsboot on 8.2-RELEASE fails to boot from root-o o kern/160790 fs [fusefs] [panic] VPUTX: negative ref count with FUSE o kern/160777 fs [zfs] [hang] RAID-Z3 causes fatal hang upon scrub/impo o kern/160706 fs [zfs] zfs bootloader fails when a non-root vdev exists o kern/160591 fs [zfs] Fail to boot on zfs root with degraded raidz2 [r o kern/160410 fs [smbfs] [hang] smbfs hangs when transferring large fil o kern/160283 fs [zfs] [patch] 'zfs list' does abort in make_dataset_ha o kern/159930 fs [ufs] [panic] kernel core o kern/159402 fs [zfs][loader] symlinks cause I/O errors o kern/159357 fs [zfs] ZFS MAXNAMELEN macro has confusing name (off-by- o kern/159356 fs [zfs] [patch] ZFS NAME_ERR_DISKLIKE check is Solaris-s o kern/159351 fs [nfs] [patch] - divide by zero in mountnfs() o kern/159251 fs [zfs] [request]: add FLETCHER4 as DEDUP hash option o kern/159077 fs [zfs] Can't cd .. with latest zfs version o kern/159048 fs [smbfs] smb mount corrupts large files o kern/159045 fs [zfs] [hang] ZFS scrub freezes system o kern/158839 fs [zfs] ZFS Bootloader Fails if there is a Dead Disk o kern/158802 fs amd(8) ICMP storm and unkillable process. 
o kern/158231 fs [nullfs] panic on unmounting nullfs mounted over ufs o f kern/157929 fs [nfs] NFS slow read o kern/157399 fs [zfs] trouble with: mdconfig force delete && zfs strip o kern/157179 fs [zfs] zfs/dbuf.c: panic: solaris assert: arc_buf_remov o kern/156797 fs [zfs] [panic] Double panic with FreeBSD 9-CURRENT and o kern/156781 fs [zfs] zfs is losing the snapshot directory, p kern/156545 fs [ufs] mv could break UFS on SMP systems o kern/156193 fs [ufs] [hang] UFS snapshot hangs && deadlocks processes o kern/156039 fs [nullfs] [unionfs] nullfs + unionfs do not compose, re o kern/155615 fs [zfs] zfs v28 broken on sparc64 -current o kern/155587 fs [zfs] [panic] kernel panic with zfs p kern/155411 fs [regression] [8.2-release] [tmpfs]: mount: tmpfs : No o kern/155199 fs [ext2fs] ext3fs mounted as ext2fs gives I/O errors o bin/155104 fs [zfs][patch] use /dev prefix by default when importing o kern/154930 fs [zfs] cannot delete/unlink file from full volume -> EN o kern/154828 fs [msdosfs] Unable to create directories on external USB o kern/154491 fs [smbfs] smb_co_lock: recursive lock for object 1 p kern/154228 fs [md] md getting stuck in wdrain state o kern/153996 fs [zfs] zfs root mount error while kernel is not located o kern/153753 fs [zfs] ZFS v15 - grammatical error when attempting to u o kern/153716 fs [zfs] zpool scrub time remaining is incorrect o kern/153695 fs [patch] [zfs] Booting from zpool created on 4k-sector o kern/153680 fs [xfs] 8.1 failing to mount XFS partitions o kern/153520 fs [zfs] Boot from GPT ZFS root on HP BL460c G1 unstable o kern/153418 fs [zfs] [panic] Kernel Panic occurred writing to zfs vol o kern/153351 fs [zfs] locking directories/files in ZFS o bin/153258 fs [patch][zfs] creating ZVOLs requires `refreservation' s kern/153173 fs [zfs] booting from a gzip-compressed dataset doesn't w o bin/153142 fs [zfs] ls -l outputs `ls: ./.zfs: Operation not support o kern/153126 fs [zfs] vdev failure, zpool=peegel type=vdev.too_small o kern/152022 fs [nfs] nfs service hangs with linux client [regression] o kern/151942 fs [zfs] panic during ls(1) zfs snapshot directory o kern/151905 fs [zfs] page fault under load in /sbin/zfs o bin/151713 fs [patch] Bug in growfs(8) with respect to 32-bit overfl o kern/151648 fs [zfs] disk wait bug o kern/151629 fs [fs] [patch] Skip empty directory entries during name o kern/151330 fs [zfs] will unshare all zfs filesystem after execute a o kern/151326 fs [nfs] nfs exports fail if netgroups contain duplicate o kern/151251 fs [ufs] Can not create files on filesystem with heavy us o kern/151226 fs [zfs] can't delete zfs snapshot o kern/151111 fs [zfs] vnodes leakage during zfs unmount o kern/150503 fs [zfs] ZFS disks are UNAVAIL and corrupted after reboot o kern/150501 fs [zfs] ZFS vdev failure vdev.bad_label on amd64 o kern/150390 fs [zfs] zfs deadlock when arcmsr reports drive faulted o kern/150336 fs [nfs] mountd/nfsd became confused; refused to reload n o kern/149208 fs mksnap_ffs(8) hang/deadlock o kern/149173 fs [patch] [zfs] make OpenSolaris installa o kern/149015 fs [zfs] [patch] misc fixes for ZFS code to build on Glib o kern/149014 fs [zfs] [patch] declarations in ZFS libraries/utilities o kern/149013 fs [zfs] [patch] make ZFS makefiles use the libraries fro o kern/148504 fs [zfs] ZFS' zpool does not allow replacing drives to be o kern/148490 fs [zfs]: zpool attach - resilver bidirectionally, and re o kern/148368 fs [zfs] ZFS hanging forever on 8.1-PRERELEASE o kern/148138 fs [zfs] zfs raidz pool commands freeze o kern/147903 
fs [zfs] [panic] Kernel panics on faulty zfs device o kern/147881 fs [zfs] [patch] ZFS "sharenfs" doesn't allow different " p kern/147560 fs [zfs] [boot] Booting 8.1-PRERELEASE raidz system take o kern/147420 fs [ufs] [panic] ufs_dirbad, nullfs, jail panic (corrupt o kern/146941 fs [zfs] [panic] Kernel Double Fault - Happens constantly o kern/146786 fs [zfs] zpool import hangs with checksum errors o kern/146708 fs [ufs] [panic] Kernel panic in softdep_disk_write_compl o kern/146528 fs [zfs] Severe memory leak in ZFS on i386 o kern/146502 fs [nfs] FreeBSD 8 NFS Client Connection to Server s kern/145712 fs [zfs] cannot offline two drives in a raidz2 configurat o kern/145411 fs [xfs] [panic] Kernel panics shortly after mounting an f bin/145309 fs bsdlabel: Editing disk label invalidates the whole dev o kern/145272 fs [zfs] [panic] Panic during boot when accessing zfs on o kern/145246 fs [ufs] dirhash in 7.3 gratuitously frees hashes when it o kern/145238 fs [zfs] [panic] kernel panic on zpool clear tank o kern/145229 fs [zfs] Vast differences in ZFS ARC behavior between 8.0 o kern/145189 fs [nfs] nfsd performs abysmally under load o kern/144929 fs [ufs] [lor] vfs_bio.c + ufs_dirhash.c p kern/144447 fs [zfs] sharenfs fsunshare() & fsshare_main() non functi o kern/144416 fs [panic] Kernel panic on online filesystem optimization s kern/144415 fs [zfs] [panic] kernel panics on boot after zfs crash o kern/144234 fs [zfs] Cannot boot machine with recent gptzfsboot code o conf/144213 fs [rc.d] [patch] Disappearing zvols on reboot o kern/143825 fs [nfs] [panic] Kernel panic on NFS client o bin/143572 fs [zfs] zpool(1): [patch] The verbose output from iostat o kern/143212 fs [nfs] NFSv4 client strange work ... o kern/143184 fs [zfs] [lor] zfs/bufwait LOR o kern/142878 fs [zfs] [vfs] lock order reversal o kern/142597 fs [ext2fs] ext2fs does not work on filesystems with real o kern/142489 fs [zfs] [lor] allproc/zfs LOR o kern/142466 fs Update 7.2 -> 8.0 on Raid 1 ends with screwed raid [re o kern/142306 fs [zfs] [panic] ZFS drive (from OSX Leopard) causes two o kern/142068 fs [ufs] BSD labels are got deleted spontaneously o kern/141897 fs [msdosfs] [panic] Kernel panic. 
msdofs: file name leng o kern/141463 fs [nfs] [panic] Frequent kernel panics after upgrade fro o kern/141305 fs [zfs] FreeBSD ZFS+sendfile severe performance issues ( o kern/141091 fs [patch] [nullfs] fix panics with DIAGNOSTIC enabled o kern/141086 fs [nfs] [panic] panic("nfs: bioread, not dir") on FreeBS o kern/141010 fs [zfs] "zfs scrub" fails when backed by files in UFS2 o kern/140888 fs [zfs] boot fail from zfs root while the pool resilveri o kern/140661 fs [zfs] [patch] /boot/loader fails to work on a GPT/ZFS- o kern/140640 fs [zfs] snapshot crash o kern/140068 fs [smbfs] [patch] smbfs does not allow semicolon in file o kern/139725 fs [zfs] zdb(1) dumps core on i386 when examining zpool c o kern/139715 fs [zfs] vfs.numvnodes leak on busy zfs p bin/139651 fs [nfs] mount(8): read-only remount of NFS volume does n o kern/139407 fs [smbfs] [panic] smb mount causes system crash if remot o kern/138662 fs [panic] ffs_blkfree: freeing free block o kern/138421 fs [ufs] [patch] remove UFS label limitations o kern/138202 fs mount_msdosfs(1) see only 2Gb o kern/136968 fs [ufs] [lor] ufs/bufwait/ufs (open) o kern/136945 fs [ufs] [lor] filedesc structure/ufs (poll) o kern/136944 fs [ffs] [lor] bufwait/snaplk (fsync) o kern/136873 fs [ntfs] Missing directories/files on NTFS volume o kern/136865 fs [nfs] [patch] NFS exports atomic and on-the-fly atomic p kern/136470 fs [nfs] Cannot mount / in read-only, over NFS o kern/135546 fs [zfs] zfs.ko module doesn't ignore zpool.cache filenam o kern/135469 fs [ufs] [panic] kernel crash on md operation in ufs_dirb o kern/135050 fs [zfs] ZFS clears/hides disk errors on reboot o kern/134491 fs [zfs] Hot spares are rather cold... o kern/133676 fs [smbfs] [panic] umount -f'ing a vnode-based memory dis o kern/132960 fs [ufs] [panic] panic:ffs_blkfree: freeing free frag o kern/132397 fs reboot causes filesystem corruption (failure to sync b o kern/132331 fs [ufs] [lor] LOR ufs and syncer o kern/132237 fs [msdosfs] msdosfs has problems to read MSDOS Floppy o kern/132145 fs [panic] File System Hard Crashes o kern/131441 fs [unionfs] [nullfs] unionfs and/or nullfs not combineab o kern/131360 fs [nfs] poor scaling behavior of the NFS server under lo o kern/131342 fs [nfs] mounting/unmounting of disks causes NFS to fail o bin/131341 fs makefs: error "Bad file descriptor" on the mount poin o kern/130920 fs [msdosfs] cp(1) takes 100% CPU time while copying file o kern/130210 fs [nullfs] Error by check nullfs o kern/129760 fs [nfs] after 'umount -f' of a stale NFS share FreeBSD l o kern/129488 fs [smbfs] Kernel "bug" when using smbfs in smbfs_smb.c: o kern/129231 fs [ufs] [patch] New UFS mount (norandom) option - mostly o kern/129152 fs [panic] non-userfriendly panic when trying to mount(8) o kern/127787 fs [lor] [ufs] Three LORs: vfslock/devfs/vfslock, ufs/vfs o bin/127270 fs fsck_msdosfs(8) may crash if BytesPerSec is zero o kern/127029 fs [panic] mount(8): trying to mount a write protected zi o kern/126287 fs [ufs] [panic] Kernel panics while mounting an UFS file o kern/125895 fs [ffs] [panic] kernel: panic: ffs_blkfree: freeing free s kern/125738 fs [zfs] [request] SHA256 acceleration in ZFS o kern/123939 fs [msdosfs] corrupts new files o kern/122380 fs [ffs] ffs_valloc:dup alloc (Soekris 4801/7.0/USB Flash o bin/122172 fs [fs]: amd(8) automount daemon dies on 6.3-STABLE i386, o bin/121898 fs [nullfs] pwd(1)/getcwd(2) fails with Permission denied o bin/121072 fs [smbfs] mount_smbfs(8) cannot normally convert the cha o kern/120483 fs [ntfs] [patch] NTFS filesystem 
locking changes o kern/120482 fs [ntfs] [patch] Sync style changes between NetBSD and F o kern/118912 fs [2tb] disk sizing/geometry problem with large array o kern/118713 fs [minidump] [patch] Display media size required for a k o kern/118318 fs [nfs] NFS server hangs under special circumstances o bin/118249 fs [ufs] mv(1): moving a directory changes its mtime o kern/118126 fs [nfs] [patch] Poor NFS server write performance o kern/118107 fs [ntfs] [panic] Kernel panic when accessing a file at N o kern/117954 fs [ufs] dirhash on very large directories blocks the mac o bin/117315 fs [smbfs] mount_smbfs(8) and related options can't mount o kern/117158 fs [zfs] zpool scrub causes panic if geli vdevs detach on o bin/116980 fs [msdosfs] [patch] mount_msdosfs(8) resets some flags f o conf/116931 fs lack of fsck_cd9660 prevents mounting iso images with o kern/116583 fs [ffs] [hang] System freezes for short time when using o bin/115361 fs [zfs] mount(8) gets into a state where it won't set/un o kern/114955 fs [cd9660] [patch] [request] support for mask,dirmask,ui o kern/114847 fs [ntfs] [patch] [request] dirmask support for NTFS ala o kern/114676 fs [ufs] snapshot creation panics: snapacct_ufs2: bad blo o bin/114468 fs [patch] [request] add -d option to umount(8) to detach o kern/113852 fs [smbfs] smbfs does not properly implement DFS referral o bin/113838 fs [patch] [request] mount(8): add support for relative p o bin/113049 fs [patch] [request] make quot(8) use getopt(3) and show o kern/112658 fs [smbfs] [patch] smbfs and caching problems (resolves b o kern/111843 fs [msdosfs] Long Names of files are incorrectly created o kern/111782 fs [ufs] dump(8) fails horribly for large filesystems s bin/111146 fs [2tb] fsck(8) fails on 6T filesystem o bin/107829 fs [2TB] fdisk(8): invalid boundary checking in fdisk / w o kern/106107 fs [ufs] left-over fsck_snapshot after unfinished backgro o kern/104406 fs [ufs] Processes get stuck in "ufs" state under persist o kern/104133 fs [ext2fs] EXT2FS module corrupts EXT2/3 filesystems o kern/103035 fs [ntfs] Directories in NTFS mounted disc images appear o kern/101324 fs [smbfs] smbfs sometimes not case sensitive when it's s o kern/99290 fs [ntfs] mount_ntfs ignorant of cluster sizes s bin/97498 fs [request] newfs(8) has no option to clear the first 12 o kern/97377 fs [ntfs] [patch] syntax cleanup for ntfs_ihash.c o kern/95222 fs [cd9660] File sections on ISO9660 level 3 CDs ignored o kern/94849 fs [ufs] rename on UFS filesystem is not atomic o bin/94810 fs fsck(8) incorrectly reports 'file system marked clean' o kern/94769 fs [ufs] Multiple file deletions on multi-snapshotted fil o kern/94733 fs [smbfs] smbfs may cause double unlock o kern/93942 fs [vfs] [patch] panic: ufs_dirbad: bad dir (patch from D o kern/92272 fs [ffs] [hang] Filling a filesystem while creating a sna o kern/91134 fs [smbfs] [patch] Preserve access and modification time a kern/90815 fs [smbfs] [patch] SMBFS with character conversions somet o kern/88657 fs [smbfs] windows client hang when browsing a samba shar o kern/88555 fs [panic] ffs_blkfree: freeing free frag on AMD 64 o kern/88266 fs [smbfs] smbfs does not implement UIO_NOCOPY and sendfi o bin/87966 fs [patch] newfs(8): introduce -A flag for newfs to enabl o kern/87859 fs [smbfs] System reboot while umount smbfs. o kern/86587 fs [msdosfs] rm -r /PATH fails with lots of small files o bin/85494 fs fsck_ffs: unchecked use of cg_inosused macro etc. 
o kern/80088 fs [smbfs] Incorrect file time setting on NTFS mounted vi o bin/74779 fs Background-fsck checks one filesystem twice and omits o kern/73484 fs [ntfs] Kernel panic when doing `ls` from the client si o bin/73019 fs [ufs] fsck_ufs(8) cannot alloc 607016868 bytes for ino o kern/71774 fs [ntfs] NTFS cannot "see" files on a WinXP filesystem o bin/70600 fs fsck(8) throws files away when it can't grow lost+foun o kern/68978 fs [panic] [ufs] crashes with failing hard disk, loose po o kern/65920 fs [nwfs] Mounted Netware filesystem behaves strange o kern/65901 fs [smbfs] [patch] smbfs fails fsx write/truncate-down/tr o kern/61503 fs [smbfs] mount_smbfs does not work as non-root o kern/55617 fs [smbfs] Accessing an nsmb-mounted drive via a smb expo o kern/51685 fs [hang] Unbounded inode allocation causes kernel to loc o kern/36566 fs [smbfs] System reboot with dead smb mount and umount o bin/27687 fs fsck(8) wrapper is not properly passing options to fsc o kern/18874 fs [2TB] 32bit NFS servers export wrong negative values t 295 problems total. From owner-freebsd-fs@FreeBSD.ORG Mon Oct 29 12:00:02 2012 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 36E9F2C2 for ; Mon, 29 Oct 2012 12:00:02 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) by mx1.freebsd.org (Postfix) with ESMTP id 1D17E8FC16 for ; Mon, 29 Oct 2012 12:00:02 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.5/8.14.5) with ESMTP id q9TC01jO035949 for ; Mon, 29 Oct 2012 12:00:01 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.5/8.14.5/Submit) id q9TC01Ku035948; Mon, 29 Oct 2012 12:00:01 GMT (envelope-from gnats) Date: Mon, 29 Oct 2012 12:00:01 GMT Message-Id: <201210291200.q9TC01Ku035948@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: Andrey Simonenko Subject: Re: kern/136865: [nfs] [patch] NFS exports atomic and on-the-fly atomic updates X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list Reply-To: Andrey Simonenko List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 29 Oct 2012 12:00:02 -0000 The following reply was made to PR kern/136865; it has been noted by GNATS. From: Andrey Simonenko To: Martin Birgmeier Cc: bug-followup@FreeBSD.org Subject: Re: kern/136865: [nfs] [patch] NFS exports atomic and on-the-fly atomic updates Date: Mon, 29 Oct 2012 13:55:42 +0200 Hello, On Fri, Oct 26, 2012 at 03:15:56PM +0200, Martin Birgmeier wrote: > Hi Andrey, > > Today I started applying your changes and did the following: > > 1. downloaded nfse-20121025.tar.bz2 from sourceforge > 2. read INSTALL-all > 3. checked out release/8.2.0 from FreeBSD SVN > 4. applied src/cddl.diff > ==> this failed The cddl.diff file cannot be applied to cddl/ from 8.2, since cddl/ source was updated several times in next FreeBSD versions. These changes are not strictly necessary (see below). > 5. checked out head from FreeBSD SVN > 6. applied src/cddl diff > ==> this failed as well Check again, I can apply cddl.diff to just csup'ed 10-CURRENT. > > I have imported all nfse patch files from sourceforge in a local > mercurial repo to be able to easier follow what is changing. There I see > that cddl.diff was updated for the last time on May 17. 
> > Could you help me with the following questions: > - Is INSTALL-all still relevant, and if yes, for which cases? This file describes how to apply all NFSE changes to the FreeBSD source code to make complete integration. > - What for is cddl.diff? > - I am heavily using zfs. Which patches from your patchset do I need to > get nfse to fully support zfs? This file contains integration of NFSE with the zfs program. When one calls 'zfs sharenfs/unshare ...', then NFS exports settings are updated using SIGHUP or dynamic NFSE commands (depends on presence of the /etc/nfs.exports file). If you do not use NFSE dynamic commands (eg. "nfse -c 'flush/clear/add/update/delete/set/unset ...'"), then cddl.diff is not needed, just create symlink to mountd.pid by setting the nfse_mountd_pid rc variable to "YES" and 'zfs share/unshare ...' will send SIGHUP to nfse. There is one question about NFSE and ZFS, this is support of ZFS snapshots. NFSE was implemented as part of NFS server, not the part of VFS framework. As a result right now it is impossible to automatically (unconditionally, as it is done in all FreeBSD versions) export ZFS snapshots by NFSE. > > Lastly, I believe it might be more helpful to combine INSTALL-all and > INSTALL-kern into a single file INSTALL and in that file clearly point > out the differences between the two methods (what does one method give > you, what the other, what do I need to do for the first method, what for > the other). You are right, I've just updated these two files with descriptions. I suggest to apply changes to sys/ (I sent to you before) and etc/ (necessary to correct two rejected updates). Then build and install the kernel and try to run nfse with simple configuration (/etc/exports or /etc/nfs.exports). From owner-freebsd-fs@FreeBSD.ORG Mon Oct 29 13:00:53 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id BD7AE7C for ; Mon, 29 Oct 2012 13:00:53 +0000 (UTC) (envelope-from fluca1978@gmail.com) Received: from mail-vc0-f182.google.com (mail-vc0-f182.google.com [209.85.220.182]) by mx1.freebsd.org (Postfix) with ESMTP id 52F478FC0A for ; Mon, 29 Oct 2012 13:00:52 +0000 (UTC) Received: by mail-vc0-f182.google.com with SMTP id fw7so6542138vcb.13 for ; Mon, 29 Oct 2012 06:00:52 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:sender:date:x-google-sender-auth:message-id:subject :from:to:content-type; bh=CpM6+cqq5xffq83pXqsR6kuQ8mPa2klQtyWkzfCtz6E=; b=sbMcpLC947ELvw3+O0oCyS3Eay7HACCCBenST14KUbnYkAqlFN6gGGFtzBnLMDhczE 1jsIn5b+rKFmCKHTyNzSS8zRVXPURuG2laVQVAK5KEG4kjkJ6fKHee+oXFuvR9/ZsJ74 BFNqbKQxuX+jLO/GAtpge2xyxkm7p8I8pmxPy25Tm+3jz4eFZPDiu0t19hH3WKGfrP4H 0HcGrq04aSlYGt7BfOLVvWqWFj9gEMCWgkrgqW1Apkxg5mM592BoGrRvHNI4fvDFVEnO ivfgYK7kmhlfmr6YBnZtng2OLqD0N7gM277mI19jNwkYfKBwk8bc1vGFioQbyelRZR0y /4Uw== MIME-Version: 1.0 Received: by 10.52.90.99 with SMTP id bv3mr38575918vdb.125.1351515652196; Mon, 29 Oct 2012 06:00:52 -0700 (PDT) Sender: fluca1978@gmail.com Received: by 10.220.2.135 with HTTP; Mon, 29 Oct 2012 06:00:52 -0700 (PDT) Date: Mon, 29 Oct 2012 14:00:52 +0100 X-Google-Sender-Auth: GRXMubZ71QMOyn06hs7CtUbZihk Message-ID: Subject: vop setattr and secure levels From: Luca Ferrari To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=ISO-8859-1 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: 
List-Subscribe: , X-List-Received-Date: Mon, 29 Oct 2012 13:00:53 -0000 Hi all, I'm trying to undertsand the path to some low level file operations, with particular regard to where and when the secure level is checked. While digging the code I found that there is an operation in the vop operation structure that is named vop_setattr which is often referred to an operation that will be called by a lot of syscalls related to file system operations. I'd like to understand when and how such operation is called, since I cannot find any direct reference in, for instance, the ufs implementation. I suspect it is a general routine called by the kernel itself somewhere I cannot find. I've tried to post the same question on the freebsd forums, but without any reply, so I believe that this mailing list can give me some hints. Thanks, Luca From owner-freebsd-fs@FreeBSD.ORG Mon Oct 29 14:25:23 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 4E33544A; Mon, 29 Oct 2012 14:25:23 +0000 (UTC) (envelope-from mavbsd@gmail.com) Received: from mail-la0-f54.google.com (mail-la0-f54.google.com [209.85.215.54]) by mx1.freebsd.org (Postfix) with ESMTP id 878B28FC16; Mon, 29 Oct 2012 14:25:22 +0000 (UTC) Received: by mail-la0-f54.google.com with SMTP id e12so4852557lag.13 for ; Mon, 29 Oct 2012 07:25:21 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=sender:message-id:date:from:user-agent:mime-version:to:cc:subject :references:in-reply-to:content-type:content-transfer-encoding; bh=2dOhxsWuOlH96RA1OZQVo0zzz1lKIvPbCWsgpEKo1b4=; b=LBHvQTPNDRIn9fLpkT5Bz1V9MzSxmWFJHW3EDu1AWVoSZOZg12znlJEIQFlizwnfG1 3PvX7+8vxMeX52yzeMrmznvLH1ui2F9JuuqqzdgRkAIHzBn68Q3rVQhUN35BJBzkShhg ZJZilJGiOMBzvxDDUlYwntTc6mi2JZRpOIIwpAQM89rWmY2bT8WH9F2H0+uPTxiiU8D0 6fptADMj9xpLnes9MpmWUb2rTwJTCvyXM7OK5yxCG/yZjlP3g/GsAzzeRyY5KZiEWNnD VxIH/2OUXjuMIjfFmM5Tarx+mbVBl+KInsDz0JK7hIzVErTXU1J5OEnAea1FygIRfxzg D+9w== Received: by 10.112.48.74 with SMTP id j10mr12044574lbn.94.1351520721267; Mon, 29 Oct 2012 07:25:21 -0700 (PDT) Received: from mavbook.mavhome.dp.ua (mavhome.mavhome.dp.ua. [213.227.240.37]) by mx.google.com with ESMTPS id hu6sm3166575lab.13.2012.10.29.07.25.20 (version=TLSv1/SSLv3 cipher=OTHER); Mon, 29 Oct 2012 07:25:20 -0700 (PDT) Sender: Alexander Motin Message-ID: <508E91CF.5070003@FreeBSD.org> Date: Mon, 29 Oct 2012 16:25:19 +0200 From: Alexander Motin User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:13.0) Gecko/20120628 Thunderbird/13.0.1 MIME-Version: 1.0 To: Lawrence Stewart Subject: Re: graid often resyncs raid1 array after clean reboot/shutdown References: <508E0C3F.8080602@freebsd.org> <508E3E81.9010209@FreeBSD.org> <508E49AD.4090501@FreeBSD.org> In-Reply-To: <508E49AD.4090501@FreeBSD.org> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 29 Oct 2012 14:25:23 -0000 On 29.10.2012 11:17, Alexander Motin wrote: > On 29.10.2012 10:29, Alexander Motin wrote: >> Hi. >> >> On 29.10.2012 06:55, Lawrence Stewart wrote: >>> I have a fairly new HP Compaq 8200 Elite desktop PC with 2 x 1TB Seagate >>> ST1000DM003 HDDs in raid1 using the on-board Intel Matrix RAID >>> controller. 
The system is configured to boot from ZFS off the raid1 >>> array, and I use it as a KDE GUI (with on-cpu GPU + KMS) desktop. >>> >>> Everything works great, except that after a "shutdown -r now" of the >>> system, graid almost always (I believe I've noted a few times where >>> everything comes up fine) detects one of the disks in the array as stale >>> and does a full resync of the array over the course of a few hours. >>> Here's an example of what I see when starting up: >> >> From log messages it indeed looks like result of unclean shutdown. I've >> never seen such problem with UFS, but I never tested graid with ZFS. I >> guess there may be some difference in shutdown process that makes RAID >> metadata to have dirty flag on reboot. I'll try to reproduce it now. > > I confirm the problem. Seems it happens only when using ZFS as root file > system. Probably ZFS issues some last moment write that makes volume > dirty. I will trace it more. I've found problem in the fact that ZFS seems doesn't close devices on shutdown. That doesn't allow graid to shutdown gracefully. r242314 in HEAD fixes that by more aggressively marking volumes clean on shutdown. -- Alexander Motin From owner-freebsd-fs@FreeBSD.ORG Mon Oct 29 14:34:54 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 4090C916 for ; Mon, 29 Oct 2012 14:34:54 +0000 (UTC) (envelope-from break19@gmail.com) Received: from mail-ie0-f182.google.com (mail-ie0-f182.google.com [209.85.223.182]) by mx1.freebsd.org (Postfix) with ESMTP id 000AE8FC15 for ; Mon, 29 Oct 2012 14:34:53 +0000 (UTC) Received: by mail-ie0-f182.google.com with SMTP id k10so8706221iea.13 for ; Mon, 29 Oct 2012 07:34:53 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=message-id:date:from:user-agent:mime-version:to:subject:references :in-reply-to:content-type:content-transfer-encoding; bh=AGD5Fw3i7kR3vwbLH76YwHvzyAibx/hOLnLVCgNLZHQ=; b=EYvp+HBy9iXfUJKHq9JYbRmhTLc+mSknUFOMZb1um5u007xs6jsI32YbhoSL0kG+xX AI4C0eQDVuYHxJxemd1vgklTi8he8wsHyL+iZB4MIB5qTiCcQAD9RiBHArat48h3D5gX W32cLZvudhrp3CmGX2GdDlSca5hI+Bds5rUFq01USD/SH/DLJ5+urOe0bwGM8CEDEQB3 pT8t92/qWTPgQWAh2lHncaMDnXASnkt551UVtlN79r1n5+4ykE2mhDpHOnll/WTF4Nbl rJ2ZBaczCnOF0262PtnIaFVWcFBl5pxvfYYl1KQDKkltgdwPtus42bD9AwrHB14cDfrv tPdA== Received: by 10.42.176.194 with SMTP id bf2mr9914168icb.50.1351521293154; Mon, 29 Oct 2012 07:34:53 -0700 (PDT) Received: from [192.168.0.198] ([184.239.207.242]) by mx.google.com with ESMTPS id u4sm6116938igw.6.2012.10.29.07.34.51 (version=SSLv3 cipher=OTHER); Mon, 29 Oct 2012 07:34:52 -0700 (PDT) Message-ID: <508E9406.5040408@gmail.com> Date: Mon, 29 Oct 2012 09:34:46 -0500 From: Chuck Burns User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:15.0) Gecko/20120907 Thunderbird/15.0.1 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: Re: graid often resyncs raid1 array after clean reboot/shutdown References: <508E0C3F.8080602@freebsd.org> <508E3E81.9010209@FreeBSD.org> <508E49AD.4090501@FreeBSD.org> <508E91CF.5070003@FreeBSD.org> In-Reply-To: <508E91CF.5070003@FreeBSD.org> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 29 Oct 2012 14:34:54 -0000 On 10/29/2012 9:25 AM, Alexander Motin wrote: > On 29.10.2012 
11:17, Alexander Motin wrote: >> On 29.10.2012 10:29, Alexander Motin wrote: >>> Hi. >>> >>> On 29.10.2012 06:55, Lawrence Stewart wrote: >>>> I have a fairly new HP Compaq 8200 Elite desktop PC with 2 x 1TB >>>> Seagate >>>> ST1000DM003 HDDs in raid1 using the on-board Intel Matrix RAID >>>> controller. The system is configured to boot from ZFS off the raid1 >>>> array, and I use it as a KDE GUI (with on-cpu GPU + KMS) desktop. >>>> >>>> Everything works great, except that after a "shutdown -r now" of the >>>> system, graid almost always (I believe I've noted a few times where >>>> everything comes up fine) detects one of the disks in the array as >>>> stale >>>> and does a full resync of the array over the course of a few hours. >>>> Here's an example of what I see when starting up: >>> >>> From log messages it indeed looks like result of unclean shutdown. I've >>> never seen such problem with UFS, but I never tested graid with ZFS. I >>> guess there may be some difference in shutdown process that makes RAID >>> metadata to have dirty flag on reboot. I'll try to reproduce it now. >> >> I confirm the problem. Seems it happens only when using ZFS as root file >> system. Probably ZFS issues some last moment write that makes volume >> dirty. I will trace it more. > > I've found problem in the fact that ZFS seems doesn't close devices on > shutdown. That doesn't allow graid to shutdown gracefully. r242314 in > HEAD fixes that by more aggressively marking volumes clean on shutdown. > See, the thing is, ZFS was designed to accomplish the same thing that graid does... It's -designed- to be run directly on bare drives. Perhaps this isn't really a bug in ZFS, but is more of a consequence of doing something that isn't supported: ie: running zfs on top of graid. 
Chuck -- Chuck Burns From owner-freebsd-fs@FreeBSD.ORG Mon Oct 29 20:37:14 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 8DF9824D for ; Mon, 29 Oct 2012 20:37:14 +0000 (UTC) (envelope-from rysto32@gmail.com) Received: from mail-vb0-f54.google.com (mail-vb0-f54.google.com [209.85.212.54]) by mx1.freebsd.org (Postfix) with ESMTP id 319B68FC19 for ; Mon, 29 Oct 2012 20:37:13 +0000 (UTC) Received: by mail-vb0-f54.google.com with SMTP id l1so2231949vba.13 for ; Mon, 29 Oct 2012 13:37:13 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=tVX7b5Nh8o1yKoN6yZh0aGUehQ2Au0sglsfxI5Zh4Wo=; b=AYoga3FH24d3L3cMYInvvdyAHU9yULiXXuB0uSEmK2szbDCZsWQKFp+FfpvL8z1RtD 7f1Sqg9IZN46M+Y+cQq2Cf3+YT7AIcwhigdZIM2oDyF314HsB5vrIL6G3W24nS2guEw1 9tBuPm+vazludJ/VGjZl1Ie5i24+EmB/hZeeVGFbxfvJffXQ/z6OhhTobTZn9OhLcj6i PWBxOrgbfeewMQpx27n7NDCkTWthX3eC27KOizBYkwqBKrKtfVLv8IcuGd8ZghVbB85+ nmHEF3MCS0VMLxruF+ReNNArme7Ei9fgyXI7V0jtfmI81g3oz2cDrRodQ/uQLtvb5sCk gS9w== MIME-Version: 1.0 Received: by 10.221.2.10 with SMTP id ns10mr10490174vcb.25.1351543033198; Mon, 29 Oct 2012 13:37:13 -0700 (PDT) Received: by 10.58.207.114 with HTTP; Mon, 29 Oct 2012 13:37:13 -0700 (PDT) In-Reply-To: References: Date: Mon, 29 Oct 2012 16:37:13 -0400 Message-ID: Subject: Re: vop setattr and secure levels From: Ryan Stone To: Luca Ferrari Content-Type: text/plain; charset=ISO-8859-1 Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 29 Oct 2012 20:37:14 -0000 On Mon, Oct 29, 2012 at 9:00 AM, Luca Ferrari wrote: > Hi all, > I'm trying to undertsand the path to some low level file operations, > with particular regard to where and when the secure level is checked. > While digging the code I found that there is an operation in the vop > operation structure that is named vop_setattr which is often referred > to an operation that will be called by a lot of syscalls related to > file system operations. I'd like to understand when and how such > operation is called, since I cannot find any direct reference in, for > instance, the ufs implementation. I suspect it is a general routine > called by the kernel itself somewhere I cannot find. > > I've tried to post the same question on the freebsd forums, but > without any reply, so I believe that this mailing list can give me > some hints. The kernel build process generates some .c and .h files which define VOP_SETATTR, VOP_SETATTR_AP and VOP_SETATTR_APV, which are called from various places in the kernel. These functions are what end up calling vop_setattr. 
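If you want to trace the plumbing yourself, a few greps are enough (paths assume a stock source tree in /usr/src and a GENERIC kernel build; adjust as needed):

# the interface description the build generates dispatch code from
grep -n -A 6 'vop_setattr' /usr/src/sys/kern/vnode_if.src

# the generated VOP_SETATTR/VOP_SETATTR_AP/VOP_SETATTR_APV glue lands here
ls /usr/obj/usr/src/sys/GENERIC/vnode_if*.c /usr/obj/usr/src/sys/GENERIC/vnode_if*.h

# and UFS wires its implementation into its vop_vector here
grep -n 'vop_setattr' /usr/src/sys/ufs/ufs/ufs_vnops.c

The last grep points at ufs_setattr(), which is what actually runs when a syscall like chmod(2) or utimes(2) reaches a UFS vnode.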
From owner-freebsd-fs@FreeBSD.ORG Mon Oct 29 23:24:55 2012 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id D72BB850; Mon, 29 Oct 2012 23:24:55 +0000 (UTC) (envelope-from lstewart@freebsd.org) Received: from lauren.room52.net (lauren.room52.net [210.50.193.198]) by mx1.freebsd.org (Postfix) with ESMTP id 93EA28FC14; Mon, 29 Oct 2012 23:24:55 +0000 (UTC) Received: from lstewart.caia.swin.edu.au (lstewart.caia.swin.edu.au [136.186.229.95]) by lauren.room52.net (Postfix) with ESMTPSA id E3B2F7E824; Tue, 30 Oct 2012 10:24:53 +1100 (EST) Message-ID: <508F1045.60002@freebsd.org> Date: Tue, 30 Oct 2012 10:24:53 +1100 From: Lawrence Stewart User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:16.0) Gecko/20121016 Thunderbird/16.0.1 MIME-Version: 1.0 To: Alexander Motin Subject: Re: graid often resyncs raid1 array after clean reboot/shutdown References: <508E0C3F.8080602@freebsd.org> <508E3E81.9010209@FreeBSD.org> <508E49AD.4090501@FreeBSD.org> <508E91CF.5070003@FreeBSD.org> In-Reply-To: <508E91CF.5070003@FreeBSD.org> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Spam-Status: No, score=0.0 required=5.0 tests=UNPARSEABLE_RELAY autolearn=unavailable version=3.3.2 X-Spam-Checker-Version: SpamAssassin 3.3.2 (2011-06-06) on lauren.room52.net Cc: freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 29 Oct 2012 23:24:55 -0000 Hi Alexander, On 10/30/12 01:25, Alexander Motin wrote: > On 29.10.2012 11:17, Alexander Motin wrote: >> On 29.10.2012 10:29, Alexander Motin wrote: >>> Hi. >>> >>> On 29.10.2012 06:55, Lawrence Stewart wrote: >>>> I have a fairly new HP Compaq 8200 Elite desktop PC with 2 x 1TB >>>> Seagate >>>> ST1000DM003 HDDs in raid1 using the on-board Intel Matrix RAID >>>> controller. The system is configured to boot from ZFS off the raid1 >>>> array, and I use it as a KDE GUI (with on-cpu GPU + KMS) desktop. >>>> >>>> Everything works great, except that after a "shutdown -r now" of the >>>> system, graid almost always (I believe I've noted a few times where >>>> everything comes up fine) detects one of the disks in the array as >>>> stale >>>> and does a full resync of the array over the course of a few hours. >>>> Here's an example of what I see when starting up: >>> >>> From log messages it indeed looks like result of unclean shutdown. I've >>> never seen such problem with UFS, but I never tested graid with ZFS. I >>> guess there may be some difference in shutdown process that makes RAID >>> metadata to have dirty flag on reboot. I'll try to reproduce it now. >> >> I confirm the problem. Seems it happens only when using ZFS as root file >> system. Probably ZFS issues some last moment write that makes volume >> dirty. I will trace it more. > > I've found problem in the fact that ZFS seems doesn't close devices on > shutdown. That doesn't allow graid to shutdown gracefully. r242314 in > HEAD fixes that by more aggressively marking volumes clean on shutdown. Thanks for the quick detective work and fix. I'll merge r242314 back to my local stable/9 tree and test it. 
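For anyone wanting to do the same, the cherry-pick is roughly this (assuming a Subversion checkout of stable/9 in /usr/src and that the change is confined to the graid code under sys/geom/raid -- check the commit first):

cd /usr/src
svn merge -c 242314 ^/head .
svn diff sys/geom/raid        # sanity-check what came across
make kernel KERNCONF=GENERIC  # buildkernel + installkernel, then reboot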
Cheers, Lawrence From owner-freebsd-fs@FreeBSD.ORG Tue Oct 30 07:13:04 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 3E26B2D1 for ; Tue, 30 Oct 2012 07:13:04 +0000 (UTC) (envelope-from lists@yamagi.org) Received: from mail.yamagi.org (mail.yamagi.org [IPv6:2a01:4f8:121:2102:1::7]) by mx1.freebsd.org (Postfix) with ESMTP id 5ABDC8FC14 for ; Tue, 30 Oct 2012 07:13:03 +0000 (UTC) Received: from happy.home.yamagi.org (hmbg-4d06c198.pool.mediaWays.net [77.6.193.152]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mail.yamagi.org (Postfix) with ESMTPSA id 00E551666312; Tue, 30 Oct 2012 08:12:58 +0100 (CET) Date: Tue, 30 Oct 2012 08:12:51 +0100 From: Yamagi Burmeister To: rmacklem@uoguelph.ca Subject: Re: Can not read from ZFS exported over NFSv4 but write to it Message-Id: <20121030081251.f2b25ca8918f9602283ac83f@yamagi.org> In-Reply-To: <974991789.2863688.1351194090522.JavaMail.root@erie.cs.uoguelph.ca> References: <20121025191745.7f6a7582d4401de467d3fe18@yamagi.org> <974991789.2863688.1351194090522.JavaMail.root@erie.cs.uoguelph.ca> X-Mailer: Sylpheed 3.2.0 (GTK+ 2.24.6; amd64-portbld-freebsd9.0) Mime-Version: 1.0 Content-Type: multipart/signed; protocol="application/pgp-signature"; micalg="PGP-SHA1"; boundary="Signature=_Tue__30_Oct_2012_08_12_51_+0100_8TjU_Lr_9t2PpA+U" Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 30 Oct 2012 07:13:04 -0000 --Signature=_Tue__30_Oct_2012_08_12_51_+0100_8TjU_Lr_9t2PpA+U Content-Type: multipart/mixed; boundary="Multipart=_Tue__30_Oct_2012_08_12_51_+0100_tV4AIjx=5=AALiSt" --Multipart=_Tue__30_Oct_2012_08_12_51_+0100_tV4AIjx=5=AALiSt Content-Type: text/plain; charset=US-ASCII Content-Disposition: inline Content-Transfer-Encoding: quoted-printable Hello, it turned out that the problem was in fact a bug in the age(4) NIC driver. TSO support lead to corrupted packages which in turn lead to stalling NFS4 mounts. YongHyeon PYUN send me the attached patch which solves the problem. Thank you all for your help. 
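For anyone else seeing the same symptoms before they can patch and rebuild, disabling TSO on the interface should avoid the bad path in the meantime (age0 and the DHCP bit below are only examples -- adjust to your own configuration):

# one-off, until the next reboot
ifconfig age0 -tso

# persistent across reboots, in /etc/rc.conf
ifconfig_age0="DHCP -tso"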
Ciao, Yamagi --=20 Homepage: www.yamagi.org XMPP: yamagi@yamagi.org GnuPG/GPG: 0xEFBCCBCB --Multipart=_Tue__30_Oct_2012_08_12_51_+0100_tV4AIjx=5=AALiSt Content-Type: application/octet-stream; name="age.tso.diff2" Content-Disposition: attachment; filename="age.tso.diff2" Content-Transfer-Encoding: base64 SW5kZXg6IHN5cy9kZXYvYWdlL2lmX2FnZS5jCj09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT0KLS0tIHN5cy9kZXYvYWdlL2lm X2FnZS5jCShyZXZpc2lvbiAyNDIxMTQpCisrKyBzeXMvZGV2L2FnZS9pZl9hZ2UuYwkod29ya2lu ZyBjb3B5KQpAQCAtMTQ5NSw3ICsxNDk1LDcgQEAgYWdlX2VuY2FwKHN0cnVjdCBhZ2Vfc29mdGMg KnNjLCBzdHJ1Y3QgbWJ1ZiAqKm1faGUKIAlzdHJ1Y3QgdGNwaGRyICp0Y3A7CiAJYnVzX2RtYV9z ZWdtZW50X3QgdHhzZWdzW0FHRV9NQVhUWFNFR1NdOwogCWJ1c19kbWFtYXBfdCBtYXA7Ci0JdWlu dDMyX3QgY2ZsYWdzLCBpcF9vZmYsIHBvZmYsIHZ0YWc7CisJdWludDMyX3QgY2ZsYWdzLCBoZHJs ZW4sIGlwX29mZiwgcG9mZiwgdnRhZzsKIAlpbnQgZXJyb3IsIGksIG5zZWdzLCBwcm9kLCBzaTsK IAogCUFHRV9MT0NLX0FTU0VSVChzYyk7CkBAIC0xNTYyLDggKzE1NjIsMTIgQEAgYWdlX2VuY2Fw KHN0cnVjdCBhZ2Vfc29mdGMgKnNjLCBzdHJ1Y3QgbWJ1ZiAqKm1faGUKIAkJCQkqbV9oZWFkID0g TlVMTDsKIAkJCQlyZXR1cm4gKEVOT0JVRlMpOwogCQkJfQotCQkJaXAgPSAoc3RydWN0IGlwICop KG10b2QobSwgY2hhciAqKSArIGlwX29mZik7CiAJCQl0Y3AgPSAoc3RydWN0IHRjcGhkciAqKSht dG9kKG0sIGNoYXIgKikgKyBwb2ZmKTsKKwkJCW0gPSBtX3B1bGx1cChtLCBwb2ZmICsgKHRjcC0+ dGhfb2ZmIDw8IDIpKTsKKwkJCWlmIChtID09IE5VTEwpIHsKKwkJCQkqbV9oZWFkID0gTlVMTDsK KwkJCQlyZXR1cm4gKEVOT0JVRlMpOworCQkJfQogCQkJLyoKIAkJCSAqIEwxIHJlcXVpcmVzIElQ L1RDUCBoZWFkZXIgc2l6ZSBhbmQgb2Zmc2V0IGFzCiAJCQkgKiB3ZWxsIGFzIFRDUCBwc2V1ZG8g Y2hlY2tzdW0gd2hpY2ggY29tcGxpY2F0ZXMKQEAgLTE1NzgsMTQgKzE1ODIsMTEgQEAgYWdlX2Vu Y2FwKHN0cnVjdCBhZ2Vfc29mdGMgKnNjLCBzdHJ1Y3QgbWJ1ZiAqKm1faGUKIAkJCSAqIFJlc2V0 IElQIGNoZWNrc3VtIGFuZCByZWNvbXB1dGUgVENQIHBzZXVkbwogCQkJICogY2hlY2tzdW0gYXMg TkRJUyBzcGVjaWZpY2F0aW9uIHNhaWQuCiAJCQkgKi8KKwkJCWlwID0gKHN0cnVjdCBpcCAqKSht dG9kKG0sIGNoYXIgKikgKyBpcF9vZmYpOworCQkJdGNwID0gKHN0cnVjdCB0Y3BoZHIgKikobXRv ZChtLCBjaGFyICopICsgcG9mZik7CiAJCQlpcC0+aXBfc3VtID0gMDsKLQkJCWlmIChwb2ZmICsg KHRjcC0+dGhfb2ZmIDw8IDIpID09IG0tPm1fcGt0aGRyLmxlbikKLQkJCQl0Y3AtPnRoX3N1bSA9 IGluX3BzZXVkbyhpcC0+aXBfc3JjLnNfYWRkciwKLQkJCQkgICAgaXAtPmlwX2RzdC5zX2FkZHIs Ci0JCQkJICAgIGh0b25zKCh0Y3AtPnRoX29mZiA8PCAyKSArIElQUFJPVE9fVENQKSk7Ci0JCQll bHNlCi0JCQkJdGNwLT50aF9zdW0gPSBpbl9wc2V1ZG8oaXAtPmlwX3NyYy5zX2FkZHIsCi0JCQkJ ICAgIGlwLT5pcF9kc3Quc19hZGRyLCBodG9ucyhJUFBST1RPX1RDUCkpOworCQkJdGNwLT50aF9z dW0gPSBpbl9wc2V1ZG8oaXAtPmlwX3NyYy5zX2FkZHIsCisJCQkgICAgaXAtPmlwX2RzdC5zX2Fk ZHIsIGh0b25zKElQUFJPVE9fVENQKSk7CiAJCX0KIAkJKm1faGVhZCA9IG07CiAJfQpAQCAtMTYy NywyMyArMTYyOCw0OCBAQCBhZ2VfZW5jYXAoc3RydWN0IGFnZV9zb2Z0YyAqc2MsIHN0cnVjdCBt YnVmICoqbV9oZQogCX0KIAogCW0gPSAqbV9oZWFkOworCS8qIENvbmZpZ3VyZSBWTEFOIGhhcmR3 YXJlIHRhZyBpbnNlcnRpb24uICovCisJaWYgKChtLT5tX2ZsYWdzICYgTV9WTEFOVEFHKSAhPSAw KSB7CisJCXZ0YWcgPSBBR0VfVFhfVkxBTl9UQUcobS0+bV9wa3RoZHIuZXRoZXJfdnRhZyk7CisJ CXZ0YWcgPSAoKHZ0YWcgPDwgQUdFX1REX1ZMQU5fU0hJRlQpICYgQUdFX1REX1ZMQU5fTUFTSyk7 CisJCWNmbGFncyB8PSBBR0VfVERfSU5TRVJUX1ZMQU5fVEFHOworCX0KKworCWRlc2MgPSBOVUxM OworCWkgPSAwOwogCWlmICgobS0+bV9wa3RoZHIuY3N1bV9mbGFncyAmIENTVU1fVFNPKSAhPSAw KSB7Ci0JCS8qIENvbmZpZ3VyZSBUU08uICovCi0JCWlmIChwb2ZmICsgKHRjcC0+dGhfb2ZmIDw8 IDIpID09IG0tPm1fcGt0aGRyLmxlbikgewotCQkJLyogTm90IFRTTyBidXQgSVAvVENQIGNoZWNr c3VtIG9mZmxvYWQuICovCi0JCQljZmxhZ3MgfD0gQUdFX1REX0lQQ1NVTSB8IEFHRV9URF9UQ1BD U1VNOwotCQkJLyogQ2xlYXIgVFNPIGluIG9yZGVyIG5vdCB0byBzZXQgQUdFX1REX1RTT19IRFIu ICovCi0JCQltLT5tX3BrdGhkci5jc3VtX2ZsYWdzICY9IH5DU1VNX1RTTzsKLQkJfSBlbHNlIHsK LQkJCS8qIFJlcXVlc3QgVFNPIGFuZCBzZXQgTVNTLiAqLwotCQkJY2ZsYWdzIHw9IEFHRV9URF9U 
U09fSVBWNDsKLQkJCWNmbGFncyB8PSBBR0VfVERfSVBDU1VNIHwgQUdFX1REX1RDUENTVU07Ci0J CQljZmxhZ3MgfD0gKCh1aW50MzJfdCltLT5tX3BrdGhkci50c29fc2Vnc3ogPDwKLQkJCSAgICBB R0VfVERfVFNPX01TU19TSElGVCk7Ci0JCX0KKwkJLyogUmVxdWVzdCBUU08gYW5kIHNldCBNU1Mu ICovCisJCWNmbGFncyB8PSBBR0VfVERfVFNPX0lQVjQ7CisJCWNmbGFncyB8PSBBR0VfVERfSVBD U1VNIHwgQUdFX1REX1RDUENTVU07CisJCWNmbGFncyB8PSAoKHVpbnQzMl90KW0tPm1fcGt0aGRy LnRzb19zZWdzeiA8PAorCQkgICAgQUdFX1REX1RTT19NU1NfU0hJRlQpOwogCQkvKiBTZXQgSVAv VENQIGhlYWRlciBzaXplLiAqLwogCQljZmxhZ3MgfD0gaXAtPmlwX2hsIDw8IEFHRV9URF9JUEhE Ul9MRU5fU0hJRlQ7CiAJCWNmbGFncyB8PSB0Y3AtPnRoX29mZiA8PCBBR0VfVERfVFNPX1RDUEhE Ul9MRU5fU0hJRlQ7CisJCS8qCisJCSAqIEwxIHJlcXVpcmVzIHRoZSBmaXJzdCBidWZmZXIgc2hv dWxkIG9ubHkgaG9sZCBJUC9UQ1AKKwkJICogaGVhZGVyIGRhdGEuIFRDUCBwYXlsb2FkIHNob3Vs ZCBiZSBoYW5kbGVkIGluIG90aGVyCisJCSAqIGRlc2NyaXB0b3JzLgorCQkgKi8KKwkJaGRybGVu ID0gcG9mZiArICh0Y3AtPnRoX29mZiA8PCAyKTsKKwkJZGVzYyA9ICZzYy0+YWdlX3JkYXRhLmFn ZV90eF9yaW5nW3Byb2RdOworCQlkZXNjLT5hZGRyID0gaHRvbGU2NCh0eHNlZ3NbMF0uZHNfYWRk cik7CisJCWRlc2MtPmxlbiA9IGh0b2xlMzIoQUdFX1RYX0JZVEVTKGhkcmxlbikgfCB2dGFnKTsK KwkJZGVzYy0+ZmxhZ3MgPSBodG9sZTMyKGNmbGFncyk7CisJCXNjLT5hZ2VfY2RhdGEuYWdlX3R4 X2NudCsrOworCQlBR0VfREVTQ19JTkMocHJvZCwgQUdFX1RYX1JJTkdfQ05UKTsKKwkJaWYgKG0t Pm1fbGVuIC0gaGRybGVuID4gMCkgeworCQkJLyogSGFuZGxlIHJlbWFpbmluZyBwYXlsb2FkIG9m IHRoZSAxc3QgZnJhZ21lbnQuICovCisJCQlkZXNjID0gJnNjLT5hZ2VfcmRhdGEuYWdlX3R4X3Jp bmdbcHJvZF07CisJCQlkZXNjLT5hZGRyID0gaHRvbGU2NCh0eHNlZ3NbMF0uZHNfYWRkciArIGhk cmxlbik7CisJCQlkZXNjLT5sZW4gPSBodG9sZTMyKEFHRV9UWF9CWVRFUyhtLT5tX2xlbiAtIGhk cmxlbikgfAorCQkJICAgIHZ0YWcpOworCQkJZGVzYy0+ZmxhZ3MgPSBodG9sZTMyKGNmbGFncyk7 CisJCQlzYy0+YWdlX2NkYXRhLmFnZV90eF9jbnQrKzsKKwkJCUFHRV9ERVNDX0lOQyhwcm9kLCBB R0VfVFhfUklOR19DTlQpOworCQl9CisJCS8qIEhhbmRsZSByZW1haW5pbmcgZnJhZ21lbnRzLiAq LworCQlpID0gMTsKIAl9IGVsc2UgaWYgKChtLT5tX3BrdGhkci5jc3VtX2ZsYWdzICYgQUdFX0NT VU1fRkVBVFVSRVMpICE9IDApIHsKIAkJLyogQ29uZmlndXJlIFR4IElQL1RDUC9VRFAgY2hlY2tz dW0gb2ZmbG9hZC4gKi8KIAkJY2ZsYWdzIHw9IEFHRV9URF9DU1VNOwpAQCAtMTY1NywxNiArMTY4 Myw3IEBAIGFnZV9lbmNhcChzdHJ1Y3QgYWdlX3NvZnRjICpzYywgc3RydWN0IG1idWYgKiptX2hl CiAJCWNmbGFncyB8PSAoKHBvZmYgKyBtLT5tX3BrdGhkci5jc3VtX2RhdGEpIDw8CiAJCSAgICBB R0VfVERfQ1NVTV9YU1VNT0ZGU0VUX1NISUZUKTsKIAl9Ci0KLQkvKiBDb25maWd1cmUgVkxBTiBo YXJkd2FyZSB0YWcgaW5zZXJ0aW9uLiAqLwotCWlmICgobS0+bV9mbGFncyAmIE1fVkxBTlRBRykg IT0gMCkgewotCQl2dGFnID0gQUdFX1RYX1ZMQU5fVEFHKG0tPm1fcGt0aGRyLmV0aGVyX3Z0YWcp OwotCQl2dGFnID0gKCh2dGFnIDw8IEFHRV9URF9WTEFOX1NISUZUKSAmIEFHRV9URF9WTEFOX01B U0spOwotCQljZmxhZ3MgfD0gQUdFX1REX0lOU0VSVF9WTEFOX1RBRzsKLQl9Ci0KLQlkZXNjID0g TlVMTDsKLQlmb3IgKGkgPSAwOyBpIDwgbnNlZ3M7IGkrKykgeworCWZvciAoOyBpIDwgbnNlZ3M7 IGkrKykgewogCQlkZXNjID0gJnNjLT5hZ2VfcmRhdGEuYWdlX3R4X3JpbmdbcHJvZF07CiAJCWRl c2MtPmFkZHIgPSBodG9sZTY0KHR4c2Vnc1tpXS5kc19hZGRyKTsKIAkJZGVzYy0+bGVuID0gaHRv bGUzMihBR0VfVFhfQllURVModHhzZWdzW2ldLmRzX2xlbikgfCB2dGFnKTsK --Multipart=_Tue__30_Oct_2012_08_12_51_+0100_tV4AIjx=5=AALiSt-- --Signature=_Tue__30_Oct_2012_08_12_51_+0100_8TjU_Lr_9t2PpA+U Content-Type: application/pgp-signature -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.19 (FreeBSD) iEYEARECAAYFAlCPffoACgkQWTjlg++8y8tZyACfcOFqsNjfZge2Udnh6t591V+R Dn4AoM6Q/BNb/EV4wpe0ATECnyD2o3Y+ =rgtV -----END PGP SIGNATURE----- --Signature=_Tue__30_Oct_2012_08_12_51_+0100_8TjU_Lr_9t2PpA+U-- From owner-freebsd-fs@FreeBSD.ORG Tue Oct 30 07:56:47 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 29DF0BE7 for ; Tue, 30 Oct 2012 07:56:47 +0000 (UTC) (envelope-from 
pyunyh@gmail.com) Received: from mail-pb0-f54.google.com (mail-pb0-f54.google.com [209.85.160.54]) by mx1.freebsd.org (Postfix) with ESMTP id E1A248FC12 for ; Tue, 30 Oct 2012 07:56:46 +0000 (UTC) Received: by mail-pb0-f54.google.com with SMTP id rp8so2986pbb.13 for ; Tue, 30 Oct 2012 00:56:46 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=from:date:to:cc:subject:message-id:reply-to:references:mime-version :content-type:content-disposition:in-reply-to:user-agent; bh=ezn0nBk6ebBwXJsNzBu+It2+XOrp/GXG4Xa0c7EOwNo=; b=VJEq70niC8IPgjBJ02LJK3ak+4F9WBzHAhTVijpzrOWoAqqxHJNW4//Gips+0P0bxa oSCEOXRUXJgcvQTyLHIsyUWY/TpM/IFJITuNvn88WpZqgkwmYnht6XoIEVjFc/s2vOp7 A4ZPAu/OnfHgJYI6BN2VyPLAMbGperkDz+hM0t6J+pE9lwrXv2FA1+8BDiPJAyjP1isY +LoO7zn/IziRZ+YCxDYp0tKvvx3+GtbActsqoLIjYQbHHBhf+7nMdj/qjcELJQHgu+PX +bnLjxpvwt5LmwtATU9RTUnkVSm8jsrsopx97Tzw160vw/2UeeqXouItLzMwlcMaIhXg 3ZMA== Received: by 10.66.79.166 with SMTP id k6mr90398060pax.25.1351583806134; Tue, 30 Oct 2012 00:56:46 -0700 (PDT) Received: from pyunyh@gmail.com (lpe4.p59-icn.cdngp.net. [114.111.62.249]) by mx.google.com with ESMTPS id vu7sm162865pbc.9.2012.10.30.00.56.42 (version=TLSv1/SSLv3 cipher=OTHER); Tue, 30 Oct 2012 00:56:45 -0700 (PDT) Received: by pyunyh@gmail.com (sSMTP sendmail emulation); Tue, 30 Oct 2012 16:56:11 +0900 From: YongHyeon PYUN Date: Tue, 30 Oct 2012 16:56:11 +0900 To: Yamagi Burmeister Subject: Re: Can not read from ZFS exported over NFSv4 but write to it Message-ID: <20121030075611.GA1493@michelle.cdnetworks.com> References: <20121025191745.7f6a7582d4401de467d3fe18@yamagi.org> <974991789.2863688.1351194090522.JavaMail.root@erie.cs.uoguelph.ca> <20121030081251.f2b25ca8918f9602283ac83f@yamagi.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20121030081251.f2b25ca8918f9602283ac83f@yamagi.org> User-Agent: Mutt/1.4.2.3i Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list Reply-To: pyunyh@gmail.com List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 30 Oct 2012 07:56:47 -0000 On Tue, Oct 30, 2012 at 08:12:51AM +0100, Yamagi Burmeister wrote: > Hello, > it turned out that the problem was in fact a bug in the age(4) NIC > driver. TSO support lead to corrupted packages which in turn lead to > stalling NFS4 mounts. YongHyeon PYUN send me the attached patch which > solves the problem. Thank you all for your help. Committed to HEAD(r242348). 
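If you are not tracking head, the attached diff applies from the top of a source tree and a kernel rebuild picks it up, e.g. (the path to the saved attachment is just an example):

cd /usr/src
patch -p0 < /tmp/age.tso.diff2
make kernel KERNCONF=GENERIC
shutdown -r now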
From owner-freebsd-fs@FreeBSD.ORG Tue Oct 30 09:20:32 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 3902DFC1 for ; Tue, 30 Oct 2012 09:20:32 +0000 (UTC) (envelope-from paul-freebsd@fletchermoorland.co.uk) Received: from hercules.mthelicon.com (hercules.mthelicon.com [66.90.118.40]) by mx1.freebsd.org (Postfix) with ESMTP id E41848FC12 for ; Tue, 30 Oct 2012 09:20:31 +0000 (UTC) Received: from demophon.fletchermoorland.co.uk (hydra.fletchermoorland.co.uk [78.33.209.59] (may be forged)) (authenticated bits=0) by hercules.mthelicon.com (8.14.5/8.14.5) with ESMTP id q9U9897A091886 for ; Tue, 30 Oct 2012 09:08:10 GMT (envelope-from paul-freebsd@fletchermoorland.co.uk) Message-ID: <508F98F9.3040604@fletchermoorland.co.uk> Date: Tue, 30 Oct 2012 09:08:09 +0000 From: Paul Wootton User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:12.0) Gecko/20120530 Thunderbird/12.0.1 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: ZFS RaidZ-2 problems Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Content-Filtered-By: Mailman/MimeDel 2.1.14 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 30 Oct 2012 09:20:32 -0000 Hi, I have had lots of bad luck with SATA drives and have had them fail on me far too often. Started with a 3 drive RAIDZ and lost 2 drives at the same time. Upgraded to a 6 drive RAIDZ and lost 2 drives with in hours of each other and finally had a 9 drive RAIDZ (1 parity) and lost another 2 drives (as luck would happen, this time I had a 90% backup on another machine so did not loose everything). I finally decided that I should switch to a RAIDZ2 (my current setup). Now I have lost 1 drive and the pack is showing as faulted. I have tried exporting and reimporting, but that did not help either. Is this normal? Has any one got any ideas as to what has happened and why? The fault this time might be cabling so I might not have lost the data, but my understanding was that with RAIDZ-2, you could loose 2 drives and still have a working pack. I do still have the 90% backup of the pool and nothing has really changed since that backup, so if someone wants me to try something and it blows the pack away, it's not the end of the world. Cheers Paul pool: storage state: FAULTED status: One or more devices could not be opened. There are insufficient replicas for the pool to continue functioning. action: Attach the missing device and online it using 'zpool online'. see: http://illumos.org/msg/ZFS-8000-3C scan: resilvered 30K in 0h0m with 0 errors on Sun Oct 14 12:52:45 2012 config: NAME STATE READ WRITE CKSUM storage FAULTED 0 0 1 raidz2-0 FAULTED 0 0 6 ada0 ONLINE 0 0 0 ada1 ONLINE 0 0 0 ada2 ONLINE 0 0 0 17777811927559723424 UNAVAIL 0 0 0 was /dev/ada3 ada4 ONLINE 0 0 0 ada5 ONLINE 0 0 0 ada6 ONLINE 0 0 0 ada7 ONLINE 0 0 0 ada8 ONLINE 0 0 0 ada10p4 ONLINE 0 0 0 root@filekeeper:/storage # zpool export storage root@filekeeper:/storage # zpool import storage cannot import 'storage': I/O error Destroy and re-create the pool from a backup source. 
root@filekeeper:/usr/home/paul # uname -a FreeBSD filekeeper.caspersworld.co.uk 10.0-CURRENT FreeBSD 10.0-CURRENT #0 r240967: Thu Sep 27 08:01:24 UTC 2012 root@filekeeper.caspersworld.co.uk:/usr/obj/usr/src/sys/GENERIC amd64 From owner-freebsd-fs@FreeBSD.ORG Tue Oct 30 10:10:01 2012 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id C3EDDA0B for ; Tue, 30 Oct 2012 10:10:01 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) by mx1.freebsd.org (Postfix) with ESMTP id 91E9F8FC14 for ; Tue, 30 Oct 2012 10:10:01 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.5/8.14.5) with ESMTP id q9UAA1xh065856 for ; Tue, 30 Oct 2012 10:10:01 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.5/8.14.5/Submit) id q9UAA1lF065855; Tue, 30 Oct 2012 10:10:01 GMT (envelope-from gnats) Date: Tue, 30 Oct 2012 10:10:01 GMT Message-Id: <201210301010.q9UAA1lF065855@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: J B Subject: Re: kern/165950: [ffs] SU J and fsck problem X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list Reply-To: J B List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 30 Oct 2012 10:10:01 -0000 The following reply was made to PR kern/165950; it has been noted by GNATS. From: J B To: bug-followup@FreeBSD.org, jb.1234abcd@gmail.com Cc: Subject: Re: kern/165950: [ffs] SU J and fsck problem Date: Tue, 30 Oct 2012 11:01:43 +0100 Request to close by submitter. I have not seen more of this behavior since reporting it. jb From owner-freebsd-fs@FreeBSD.ORG Tue Oct 30 10:10:02 2012 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id A9C38A0D for ; Tue, 30 Oct 2012 10:10:02 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) by mx1.freebsd.org (Postfix) with ESMTP id 8FBE88FC16 for ; Tue, 30 Oct 2012 10:10:02 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.5/8.14.5) with ESMTP id q9UAA25Q065864 for ; Tue, 30 Oct 2012 10:10:02 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.5/8.14.5/Submit) id q9UAA2ta065863; Tue, 30 Oct 2012 10:10:02 GMT (envelope-from gnats) Date: Tue, 30 Oct 2012 10:10:02 GMT Message-Id: <201210301010.q9UAA2ta065863@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: Andrey Simonenko Subject: Re: kern/136865: [nfs] [patch] NFS exports atomic and on-the-fly atomic updates X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list Reply-To: Andrey Simonenko List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 30 Oct 2012 10:10:02 -0000 The following reply was made to PR kern/136865; it has been noted by GNATS. 
From: Andrey Simonenko To: Martin Birgmeier Cc: bug-followup@FreeBSD.org Subject: Re: kern/136865: [nfs] [patch] NFS exports atomic and on-the-fly atomic updates Date: Tue, 30 Oct 2012 12:07:57 +0200 On Fri, Oct 26, 2012 at 03:15:56PM +0200, Martin Birgmeier wrote: > - What for is cddl.diff? > - I am heavily using zfs. Which patches from your patchset do I need to > get nfse to fully support zfs? I forgot to say, that cddl.diff also updates 'zfs sharenfs/unshare ...' and zfs does not verifies correctness of exports(5) or nfs.exports(5) settings. There are several PRs related to the current code in zfs that verifies exports(5) settings. If it is not necessary to use 'zfs sharenfs/unshare ...', and it is not necessary to use NFSE dynamic commands (that are flushed after reloading of export settings) and it is not necessary to use nfs.exports(5) file format, then cddl.diff can be ignored. The cddl.diff file has only changes to the zfs program and does not have any changes related to ZFS in the kernel. From owner-freebsd-fs@FreeBSD.ORG Tue Oct 30 11:40:11 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 125F8AE6 for ; Tue, 30 Oct 2012 11:40:11 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) Received: from esa-annu.mail.uoguelph.ca (esa-annu.mail.uoguelph.ca [131.104.91.36]) by mx1.freebsd.org (Postfix) with ESMTP id BD7838FC14 for ; Tue, 30 Oct 2012 11:40:10 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: AhEKAJG7j1CDaFvO/2dsb2JhbABEhhi8FQEDAQOCCIIeAQEFI1YbGAICDRkCWYgfqhOCO5A3gSCKVYM5ghGBEwOVdJBCgwuBfQ X-IronPort-AV: E=Sophos;i="4.80,679,1344225600"; d="scan'208";a="188668346" Received: from erie.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.206]) by esa-annu-pri.mail.uoguelph.ca with ESMTP; 30 Oct 2012 07:40:09 -0400 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id 91D5C79462; Tue, 30 Oct 2012 07:40:09 -0400 (EDT) Date: Tue, 30 Oct 2012 07:40:09 -0400 (EDT) From: Rick Macklem To: pyunyh@gmail.com Message-ID: <23521459.3033845.1351597209483.JavaMail.root@erie.cs.uoguelph.ca> In-Reply-To: <20121030075611.GA1493@michelle.cdnetworks.com> Subject: Re: Can not read from ZFS exported over NFSv4 but write to it MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [172.17.91.202] X-Mailer: Zimbra 6.0.10_GA_2692 (ZimbraWebClient - FF3.0 (Win)/6.0.10_GA_2692) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 30 Oct 2012 11:40:11 -0000 Pyon YongHyeon wrote: > On Tue, Oct 30, 2012 at 08:12:51AM +0100, Yamagi Burmeister wrote: > > Hello, > > it turned out that the problem was in fact a bug in the age(4) NIC > > driver. TSO support lead to corrupted packages which in turn lead to > > stalling NFS4 mounts. YongHyeon PYUN send me the attached patch > > which > > solves the problem. Thank you all for your help. > > Committed to HEAD(r242348). Good work. 
Thanks for resolving this, rick From owner-freebsd-fs@FreeBSD.ORG Tue Oct 30 12:05:03 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id BF84FB10 for ; Tue, 30 Oct 2012 12:05:03 +0000 (UTC) (envelope-from freebsd@penx.com) Received: from btw.pki2.com (btw.pki2.com [IPv6:2001:470:a:6fd::2]) by mx1.freebsd.org (Postfix) with ESMTP id 6DCEB8FC0A for ; Tue, 30 Oct 2012 12:05:03 +0000 (UTC) Received: from [127.0.0.1] (localhost [127.0.0.1]) by btw.pki2.com (8.14.5/8.14.5) with ESMTP id q9UC4i41033402; Tue, 30 Oct 2012 05:04:44 -0700 (PDT) (envelope-from freebsd@penx.com) Subject: Re: ZFS RaidZ-2 problems From: Dennis Glatting To: Paul Wootton In-Reply-To: <508F98F9.3040604@fletchermoorland.co.uk> References: <508F98F9.3040604@fletchermoorland.co.uk> Content-Type: text/plain; charset="us-ascii" Date: Tue, 30 Oct 2012 05:04:44 -0700 Message-ID: <1351598684.88435.19.camel@btw.pki2.com> Mime-Version: 1.0 X-Mailer: Evolution 2.32.1 FreeBSD GNOME Team Port Content-Transfer-Encoding: 7bit X-yoursite-MailScanner-Information: Dennis Glatting X-yoursite-MailScanner-ID: q9UC4i41033402 X-yoursite-MailScanner: Found to be clean X-MailScanner-From: freebsd@penx.com Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 30 Oct 2012 12:05:03 -0000 On Tue, 2012-10-30 at 09:08 +0000, Paul Wootton wrote: > Hi, > > I have had lots of bad luck with SATA drives and have had them fail on > me far too often. Started with a 3 drive RAIDZ and lost 2 drives at the > same time. Upgraded to a 6 drive RAIDZ and lost 2 drives with in hours > of each other and finally had a 9 drive RAIDZ (1 parity) and lost > another 2 drives (as luck would happen, this time I had a 90% backup on > another machine so did not loose everything). I finally decided that I > should switch to a RAIDZ2 (my current setup). > Now I have lost 1 drive and the pack is showing as faulted. I have tried > exporting and reimporting, but that did not help either. > Is this normal? Has any one got any ideas as to what has happened and why? > > The fault this time might be cabling so I might not have lost the data, > but my understanding was that with RAIDZ-2, you could loose 2 drives and > still have a working pack. > > I do still have the 90% backup of the pool and nothing has really > changed since that backup, so if someone wants me to try something and > it blows the pack away, it's not the end of the world. > I've had this problem too. Here is what I can tell you for my case. In the first system I have four arrays: two RAID1 by an Areca 1880i card and two RAIDz2 through a LSI 9211-8i (IT) card and the MB (Gigabyte X58A-UD7). One of the RAIDz2 arrays notoriously faulted and I lost the array several times. I replaced the card, the cable, and the disks themselves leaving only one other possibility -- the power supply. The faulting array was on a separate cable from the power supply. I replaced the power supply, going from a 1,000W to 1,300W, and the power cables to the disks. Not a problem since. In four other systems, including one where I've lost 30% of the disks in less than a year, I have downgraded the operating system from stable/9 to stable/8 on two and installed CentOS 6.3 ZFS-on-Linux on another (the last system is still running stable/9, for now). 
These systems experience heavy load (compute and disk) and so far (less than two weeks) all of my problems have gone away. On two of those systems, which ran for over four days before a power event, each generated 10TB of data and successfully scrubbed after the power event. That simply wasn't possible previously for approximately five months. What is interesting is three smaller systems running stable/9 with four disk RAIDz arrays have not had the same problems but all of their disks are through their MBs and they do not experience the same loading as the others. YMMV > > Cheers > Paul > > > pool: storage > state: FAULTED > status: One or more devices could not be opened. There are insufficient > replicas for the pool to continue functioning. > action: Attach the missing device and online it using 'zpool online'. > see: http://illumos.org/msg/ZFS-8000-3C > scan: resilvered 30K in 0h0m with 0 errors on Sun Oct 14 12:52:45 2012 > config: > > NAME STATE READ WRITE CKSUM > storage FAULTED 0 0 1 > raidz2-0 FAULTED 0 0 6 > ada0 ONLINE 0 0 0 > ada1 ONLINE 0 0 0 > ada2 ONLINE 0 0 0 > 17777811927559723424 UNAVAIL 0 0 0 was /dev/ada3 > ada4 ONLINE 0 0 0 > ada5 ONLINE 0 0 0 > ada6 ONLINE 0 0 0 > ada7 ONLINE 0 0 0 > ada8 ONLINE 0 0 0 > ada10p4 ONLINE 0 0 0 > > root@filekeeper:/storage # zpool export storage > root@filekeeper:/storage # zpool import storage > cannot import 'storage': I/O error > Destroy and re-create the pool from > a backup source. > > root@filekeeper:/usr/home/paul # uname -a > FreeBSD filekeeper.caspersworld.co.uk 10.0-CURRENT FreeBSD 10.0-CURRENT > #0 r240967: Thu Sep 27 08:01:24 UTC 2012 > root@filekeeper.caspersworld.co.uk:/usr/obj/usr/src/sys/GENERIC amd64 > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Tue Oct 30 14:37:59 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id F08935DA for ; Tue, 30 Oct 2012 14:37:59 +0000 (UTC) (envelope-from paul-freebsd@fletchermoorland.co.uk) Received: from hercules.mthelicon.com (hercules.mthelicon.com [66.90.118.40]) by mx1.freebsd.org (Postfix) with ESMTP id B81378FC08 for ; Tue, 30 Oct 2012 14:37:58 +0000 (UTC) Received: from demophon.fletchermoorland.co.uk (hydra.fletchermoorland.co.uk [78.33.209.59] (may be forged)) (authenticated bits=0) by hercules.mthelicon.com (8.14.5/8.14.5) with ESMTP id q9UEbuJK092812; Tue, 30 Oct 2012 14:37:57 GMT (envelope-from paul-freebsd@fletchermoorland.co.uk) Message-ID: <508FE643.4090107@fletchermoorland.co.uk> Date: Tue, 30 Oct 2012 14:37:55 +0000 From: Paul Wootton User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:12.0) Gecko/20120530 Thunderbird/12.0.1 MIME-Version: 1.0 To: Dennis Glatting Subject: Re: ZFS RaidZ-2 problems References: <508F98F9.3040604@fletchermoorland.co.uk> <1351598684.88435.19.camel@btw.pki2.com> In-Reply-To: <1351598684.88435.19.camel@btw.pki2.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 30 Oct 2012 14:38:00 -0000 On 10/30/12 12:04, Dennis Glatting wrote: > I've had this problem too. 
Here is what I can tell you for my case. > ... I replaced the card, the cable, and the disks themselves leaving > only one other possibility -- the power supply. The faulting array was > on a separate cable from the power supply. I replaced the power > supply, going from a 1,000W to 1,300W, and the power cables to the > disks. Not a problem since. While I can accept that I might have a bad power supply. or cables, my main concern is that I have only 1 drive showing as "Unavail" on a RAIDZ-2 and the pack is showing "Faulted". I would have expected that pack to continue working with 2 bad drives, and would have failed if I had 3rd one fail Paul From owner-freebsd-fs@FreeBSD.ORG Tue Oct 30 15:50:17 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 1DE87386 for ; Tue, 30 Oct 2012 15:50:17 +0000 (UTC) (envelope-from freebsd@penx.com) Received: from btw.pki2.com (btw.pki2.com [IPv6:2001:470:a:6fd::2]) by mx1.freebsd.org (Postfix) with ESMTP id D8A3A8FC0C for ; Tue, 30 Oct 2012 15:50:16 +0000 (UTC) Received: from [127.0.0.1] (localhost [127.0.0.1]) by btw.pki2.com (8.14.5/8.14.5) with ESMTP id q9UFo1CI022253; Tue, 30 Oct 2012 08:50:01 -0700 (PDT) (envelope-from freebsd@penx.com) Subject: Re: ZFS RaidZ-2 problems From: Dennis Glatting To: Paul Wootton In-Reply-To: <508FE643.4090107@fletchermoorland.co.uk> References: <508F98F9.3040604@fletchermoorland.co.uk> <1351598684.88435.19.camel@btw.pki2.com> <508FE643.4090107@fletchermoorland.co.uk> Content-Type: text/plain; charset="us-ascii" Date: Tue, 30 Oct 2012 08:50:01 -0700 Message-ID: <1351612201.88435.23.camel@btw.pki2.com> Mime-Version: 1.0 X-Mailer: Evolution 2.32.1 FreeBSD GNOME Team Port Content-Transfer-Encoding: 7bit X-yoursite-MailScanner-Information: Dennis Glatting X-yoursite-MailScanner-ID: q9UFo1CI022253 X-yoursite-MailScanner: Found to be clean X-MailScanner-From: freebsd@penx.com Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 30 Oct 2012 15:50:17 -0000 On Tue, 2012-10-30 at 14:37 +0000, Paul Wootton wrote: > On 10/30/12 12:04, Dennis Glatting wrote: > > I've had this problem too. Here is what I can tell you for my case. > > ... I replaced the card, the cable, and the disks themselves leaving > > only one other possibility -- the power supply. The faulting array was > > on a separate cable from the power supply. I replaced the power > > supply, going from a 1,000W to 1,300W, and the power cables to the > > disks. Not a problem since. > > While I can accept that I might have a bad power supply. or cables, my > main concern is that I have only 1 drive showing as "Unavail" on a > RAIDZ-2 and the pack is showing "Faulted". > I would have expected that pack to continue working with 2 bad drives, > and would have failed if I had 3rd one fail I have successfully upgraded arrays by replacing disks in a RAIDz and RAIDz2 array one at a time, so I know for certain one disk failures work as expected. I tried replacing two at once in a RAIDz2 array but it did not succeed; however it was also the array with the bad power supply. That said, it certainly tried to rebuild the array. 
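Applied to Paul's pool, the usual single-disk swap would look something like the following, using the guid zpool printed for the missing member (this assumes the pool can actually be kept imported, which is the part that is failing here):

# after cabling in a known-good drive at the old ada3 location:
zpool replace storage 17777811927559723424 ada3
zpool status storage    # watch the resilver run to completion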
From owner-freebsd-fs@FreeBSD.ORG Tue Oct 30 16:10:38 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id CA251CF0 for ; Tue, 30 Oct 2012 16:10:38 +0000 (UTC) (envelope-from ronald-freebsd8@klop.yi.org) Received: from smarthost1.greenhost.nl (smarthost1.greenhost.nl [195.190.28.78]) by mx1.freebsd.org (Postfix) with ESMTP id 77BD88FC08 for ; Tue, 30 Oct 2012 16:10:38 +0000 (UTC) Received: from smtp.greenhost.nl ([213.108.104.138]) by smarthost1.greenhost.nl with esmtps (TLS1.0:RSA_AES_256_CBC_SHA1:32) (Exim 4.69) (envelope-from ) id 1TTEOg-0003ES-4h; Tue, 30 Oct 2012 17:10:34 +0100 Received: from [81.21.138.17] (helo=ronaldradial.versatec.local) by smtp.greenhost.nl with esmtpsa (TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.72) (envelope-from ) id 1TTEOf-0000P8-F3; Tue, 30 Oct 2012 17:10:33 +0100 Content-Type: text/plain; charset=us-ascii; format=flowed; delsp=yes To: "Dennis Glatting" , "Paul Wootton" Subject: Re: ZFS RaidZ-2 problems References: <508F98F9.3040604@fletchermoorland.co.uk> <1351598684.88435.19.camel@btw.pki2.com> <508FE643.4090107@fletchermoorland.co.uk> Date: Tue, 30 Oct 2012 17:10:31 +0100 MIME-Version: 1.0 Content-Transfer-Encoding: 8bit From: "Ronald Klop" Message-ID: In-Reply-To: <508FE643.4090107@fletchermoorland.co.uk> User-Agent: Opera Mail/12.02 (Win32) X-Virus-Scanned: by clamav at smarthost1.samage.net X-Spam-Level: / X-Spam-Score: -0.2 X-Spam-Status: No, score=-0.2 required=5.0 tests=BAYES_40 autolearn=disabled version=3.2.5 X-Scan-Signature: a350ae07b5350cdad28cef237d2e7179 Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 30 Oct 2012 16:10:38 -0000 On Tue, 30 Oct 2012 15:37:55 +0100, Paul Wootton wrote: > On 10/30/12 12:04, Dennis Glatting wrote: >> I've had this problem too. Here is what I can tell you for my case. ... >> I replaced the card, the cable, and the disks themselves leaving only >> one other possibility -- the power supply. The faulting array was on a >> separate cable from the power supply. I replaced the power supply, >> going from a 1,000W to 1,300W, and the power cables to the disks. Not a >> problem since. > > While I can accept that I might have a bad power supply. or cables, my > main concern is that I have only 1 drive showing as "Unavail" on a > RAIDZ-2 and the pack is showing "Faulted". > I would have expected that pack to continue working with 2 bad drives, > and would have failed if I had 3rd one fail > > Paul Isn't your problem something else than a non-working pool with one broken disk. I guess it still worked before you exported it. Your actual problem is 'zpool import' does not work for your pool. Maybe there is more broken than one disk. Why did you export/import to fix anything in stead of replacing the faulted disk? (I'm not into the code details of ZFS, so can't help you with everything.) Ronald. 
From owner-freebsd-fs@FreeBSD.ORG Tue Oct 30 16:32:14 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 23075425 for ; Tue, 30 Oct 2012 16:32:14 +0000 (UTC) (envelope-from paul-freebsd@fletchermoorland.co.uk) Received: from hercules.mthelicon.com (hercules.mthelicon.com [66.90.118.40]) by mx1.freebsd.org (Postfix) with ESMTP id DAA038FC14 for ; Tue, 30 Oct 2012 16:32:12 +0000 (UTC) Received: from demophon.fletchermoorland.co.uk (hydra.fletchermoorland.co.uk [78.33.209.59] (may be forged)) (authenticated bits=0) by hercules.mthelicon.com (8.14.5/8.14.5) with ESMTP id q9UGWAYj093196; Tue, 30 Oct 2012 16:32:11 GMT (envelope-from paul-freebsd@fletchermoorland.co.uk) Message-ID: <5090010A.4050109@fletchermoorland.co.uk> Date: Tue, 30 Oct 2012 16:32:10 +0000 From: Paul Wootton User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:12.0) Gecko/20120530 Thunderbird/12.0.1 MIME-Version: 1.0 To: Ronald Klop Subject: Re: ZFS RaidZ-2 problems References: <508F98F9.3040604@fletchermoorland.co.uk> <1351598684.88435.19.camel@btw.pki2.com> <508FE643.4090107@fletchermoorland.co.uk> In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 30 Oct 2012 16:32:14 -0000 On 10/30/12 16:10, Ronald Klop wrote: > On Tue, 30 Oct 2012 15:37:55 +0100, Paul Wootton > wrote: > >> On 10/30/12 12:04, Dennis Glatting wrote: >>> I've had this problem too. Here is what I can tell you for my case. >>> ... I replaced the card, the cable, and the disks themselves leaving >>> only one other possibility -- the power supply. The faulting array >>> was on a separate cable from the power supply. I replaced the power >>> supply, going from a 1,000W to 1,300W, and the power cables to the >>> disks. Not a problem since. >> >> While I can accept that I might have a bad power supply. or cables, >> my main concern is that I have only 1 drive showing as "Unavail" on a >> RAIDZ-2 and the pack is showing "Faulted". >> I would have expected that pack to continue working with 2 bad >> drives, and would have failed if I had 3rd one fail >> >> Paul > > Isn't your problem something else than a non-working pool with one > broken disk. I guess it still worked before you exported it. Your > actual problem is 'zpool import' does not work for your pool. Maybe > there is more broken than one disk. > > Why did you export/import to fix anything in stead of replacing the > faulted disk? > > (I'm not into the code details of ZFS, so can't help you with > everything.) > > Ronald. > > The pool was marked as faulted before I tried exporting/importing.ZFS should have marked the pool as degraded, so I wondered if I exported and then reimported the pool, ZFS would taste each of the disks and imported the pool in a degraded mode. 
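For completeness, the generic things still left for me to try, as far as I can tell, are the recovery-style imports and then the 'zpool online' the status text asks for once the pool is back (whether any of this helps obviously depends on what is really wrong with the other members):

zpool import -o readonly=on storage           # read-only import, avoids any further writes
zpool import -F storage                       # rewind/recovery import, may discard the last few transactions
zpool online storage 17777811927559723424     # then re-attach the missing member, per the 'action:' text
zpool status -v storage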
From owner-freebsd-fs@FreeBSD.ORG Wed Oct 31 08:23:43 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id D9EE0CAC for ; Wed, 31 Oct 2012 08:23:43 +0000 (UTC) (envelope-from ronald-freebsd8@klop.yi.org) Received: from smarthost1.greenhost.nl (smarthost1.greenhost.nl [195.190.28.78]) by mx1.freebsd.org (Postfix) with ESMTP id 581468FC0C for ; Wed, 31 Oct 2012 08:23:42 +0000 (UTC) Received: from smtp.greenhost.nl ([213.108.104.138]) by smarthost1.greenhost.nl with esmtps (TLS1.0:RSA_AES_256_CBC_SHA1:32) (Exim 4.69) (envelope-from ) id 1TTTaO-0007lG-41; Wed, 31 Oct 2012 09:23:40 +0100 Received: from [81.21.138.17] (helo=ronaldradial.versatec.local) by smtp.greenhost.nl with esmtpsa (TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.72) (envelope-from ) id 1TTTaN-0003V0-F5; Wed, 31 Oct 2012 09:23:39 +0100 Content-Type: text/plain; charset=us-ascii; format=flowed; delsp=yes To: "Paul Wootton" Subject: Re: ZFS RaidZ-2 problems References: <508F98F9.3040604@fletchermoorland.co.uk> <1351598684.88435.19.camel@btw.pki2.com> <508FE643.4090107@fletchermoorland.co.uk> <5090010A.4050109@fletchermoorland.co.uk> Date: Wed, 31 Oct 2012 09:23:38 +0100 MIME-Version: 1.0 Content-Transfer-Encoding: 8bit From: "Ronald Klop" Message-ID: In-Reply-To: <5090010A.4050109@fletchermoorland.co.uk> User-Agent: Opera Mail/12.02 (Win32) X-Virus-Scanned: by clamav at smarthost1.samage.net X-Spam-Level: / X-Spam-Score: 0.0 X-Spam-Status: No, score=0.0 required=5.0 tests=BAYES_50 autolearn=disabled version=3.2.5 X-Scan-Signature: ba572e8a3bde05b4b19613c12a9e49fc Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 31 Oct 2012 08:23:43 -0000 On Tue, 30 Oct 2012 17:32:10 +0100, Paul Wootton wrote: > On 10/30/12 16:10, Ronald Klop wrote: >> On Tue, 30 Oct 2012 15:37:55 +0100, Paul Wootton >> wrote: >> >>> On 10/30/12 12:04, Dennis Glatting wrote: >>>> I've had this problem too. Here is what I can tell you for my case. >>>> ... I replaced the card, the cable, and the disks themselves leaving >>>> only one other possibility -- the power supply. The faulting array >>>> was on a separate cable from the power supply. I replaced the power >>>> supply, going from a 1,000W to 1,300W, and the power cables to the >>>> disks. Not a problem since. >>> >>> While I can accept that I might have a bad power supply. or cables, my >>> main concern is that I have only 1 drive showing as "Unavail" on a >>> RAIDZ-2 and the pack is showing "Faulted". >>> I would have expected that pack to continue working with 2 bad drives, >>> and would have failed if I had 3rd one fail >>> >>> Paul >> >> Isn't your problem something else than a non-working pool with one >> broken disk. I guess it still worked before you exported it. Your >> actual problem is 'zpool import' does not work for your pool. Maybe >> there is more broken than one disk. >> >> Why did you export/import to fix anything in stead of replacing the >> faulted disk? >> >> (I'm not into the code details of ZFS, so can't help you with >> everything.) >> >> Ronald. >> >> > The pool was marked as faulted before I tried exporting/importing.ZFS > should have marked the pool as degraded, so I wondered if I exported and > then reimported the pool, ZFS would taste each of the disks and imported > the pool in a degraded mode. 
Weird. I don't know what is happening. I hope somebody else knows it. Ronald. From owner-freebsd-fs@FreeBSD.ORG Wed Oct 31 17:25:17 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id EDBBC805 for ; Wed, 31 Oct 2012 17:25:17 +0000 (UTC) (envelope-from steven@multiplay.co.uk) Received: from mail-we0-f182.google.com (mail-we0-f182.google.com [74.125.82.182]) by mx1.freebsd.org (Postfix) with ESMTP id 503C08FC1A for ; Wed, 31 Oct 2012 17:25:16 +0000 (UTC) Received: by mail-we0-f182.google.com with SMTP id x43so936011wey.13 for ; Wed, 31 Oct 2012 10:25:10 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20120113; h=message-id:from:to:subject:date:mime-version:content-type :content-transfer-encoding:x-priority:x-msmail-priority:x-mailer :x-mimeole:x-gm-message-state; bh=/UdVNw6/TtBky4k6o1nrrKoz8JR6OpY1IC9KmusreE0=; b=NfEd3zz/4UImDK47Ceai5+LcB4jybDJHOwsQ8pgXlWJJTrSXYO1JlMnyEYL5H4FON7 A0CVKMWUMqiZiMxEWTrEdKCFg7EOlXZ30pfLQVfHg3l4rSqCl8JI/T0+/A/t3jLMNl8U i5oEe8cN2AZL+DqhxBNYeC5Q2dS7Gc2D/Uf9DoJ6DAI7YraAsFzfhjfu0OSULVNQvRZn /c9ELHZAtfpCJbQeFVwXg3glBWIrGcJXSh2c4x/wUVMz7XYPZeIf3fc7FPmqkvLNIB0R gyDsgwSdR1EtK6JXKG3MxO+sfLzoVtGsBhVHuQ/W97sbTMUYFFwIDXXnndQ/pGAqnuzw /YVQ== Received: by 10.216.200.150 with SMTP id z22mr18350489wen.97.1351704309968; Wed, 31 Oct 2012 10:25:09 -0700 (PDT) Received: from r2d2 (188-220-16-49.zone11.bethere.co.uk. [188.220.16.49]) by mx.google.com with ESMTPS id eq2sm6687299wib.1.2012.10.31.10.25.08 (version=SSLv3 cipher=OTHER); Wed, 31 Oct 2012 10:25:08 -0700 (PDT) Message-ID: <27087376D1C14132A3CC1B4016912F6D@multiplay.co.uk> From: "Steven Hartland" To: , Subject: ZFS corruption due to lack of space? Date: Wed, 31 Oct 2012 17:25:09 -0000 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="iso-8859-1"; reply-type=original Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157 X-Gm-Message-State: ALoCoQkEI8wXqpj6AnyQhgucAaDUsafj5x2dlzlb4LntFGSDRm7T4rpXK2J/mvJYa2HRJdf+CSDO X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 31 Oct 2012 17:25:18 -0000 Been running some tests on new hardware here to verify all is good. One of the tests was to fill the zfs array which seems like its totally corrupted the tank. The HW is 7 x 3TB disks in RAIDZ2 with dual 13GB ZIL partitions and dual 100GB L2ARC on Enterprise SSD's. All disks are connected to an LSI 2208 RAID controller run by mfi driver. HD's via a SAS2X28 backplane and SSD's via a passive blackplane backplane. The file system has 31 test files most random data from /dev/random and one blank from /dev/zero. The test running was multiple ~20 dd's under screen with all but one from /dev/random and to final one from /dev/zero e.g. dd if=/dev/random bs=1m of=/tank2/random10 No hardware errors have raised, so no disk timeouts etc. On completion each dd reported no space as you would expect e.g. dd if=/dev/random bs=1m of=/tank2/random13 dd: /tank2/random13: No space left on device 503478+0 records in 503477+0 records out 527933898752 bytes transferred in 126718.731762 secs (4166187 bytes/sec) You have new mail. 
At that point with the test seemingly successful I went to delete test files which resulted in:- rm random* rm: random1: Unknown error: 122 rm: random10: Unknown error: 122 rm: random11: Unknown error: 122 rm: random12: Unknown error: 122 rm: random13: Unknown error: 122 rm: random14: Unknown error: 122 rm: random18: Unknown error: 122 rm: random2: Unknown error: 122 rm: random3: Unknown error: 122 rm: random4: Unknown error: 122 rm: random5: Unknown error: 122 rm: random6: Unknown error: 122 rm: random7: Unknown error: 122 rm: random9: Unknown error: 122 Error 122 I assume is ECKSUM At this point the pool was showing checksum errors zpool status pool: tank state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM tank ONLINE 0 0 0 mirror-0 ONLINE 0 0 0 gptid/41fb7e5c-21cf-11e2-92a3-002590881138 ONLINE 0 0 0 gptid/42a1b53c-21cf-11e2-92a3-002590881138 ONLINE 0 0 0 errors: No known data errors pool: tank2 state: ONLINE status: One or more devices has experienced an error resulting in data corruption. Applications may be affected. action: Restore the file in question if possible. Otherwise restore the entire pool from backup. see: http://www.sun.com/msg/ZFS-8000-8A scan: none requested config: NAME STATE READ WRITE CKSUM tank2 ONLINE 0 0 4.22K raidz2-0 ONLINE 0 0 16.9K mfisyspd0 ONLINE 0 0 0 mfisyspd1 ONLINE 0 0 0 mfisyspd2 ONLINE 0 0 0 mfisyspd3 ONLINE 0 0 0 mfisyspd4 ONLINE 0 0 0 mfisyspd5 ONLINE 0 0 0 mfisyspd6 ONLINE 0 0 0 logs mfisyspd7p3 ONLINE 0 0 0 mfisyspd8p3 ONLINE 0 0 0 cache mfisyspd9 ONLINE 0 0 0 mfisyspd10 ONLINE 0 0 0 errors: Permanent errors have been detected in the following files: tank2:<0x3> tank2:<0x8> tank2:<0x9> tank2:<0xa> tank2:<0xb> tank2:<0xf> tank2:<0x10> tank2:<0x11> tank2:<0x12> tank2:<0x13> tank2:<0x14> tank2:<0x15> So I tried a scrub, which looks like its going to take 5 days to complete and is reporting many many more errors:- pool: tank2 state: ONLINE status: One or more devices has experienced an error resulting in data corruption. Applications may be affected. action: Restore the file in question if possible. Otherwise restore the entire pool from backup. see: http://www.sun.com/msg/ZFS-8000-8A scan: scrub in progress since Wed Oct 31 16:13:53 2012 118G scanned out of 18.7T at 42.2M/s, 128h19m to go 49.0M repaired, 0.62% done config: NAME STATE READ WRITE CKSUM tank2 ONLINE 0 0 596K raidz2-0 ONLINE 0 0 1.20M mfisyspd0 ONLINE 0 0 0 (repairing) mfisyspd1 ONLINE 0 0 0 (repairing) mfisyspd2 ONLINE 0 0 0 (repairing) mfisyspd3 ONLINE 0 0 2 (repairing) mfisyspd4 ONLINE 0 0 1 (repairing) mfisyspd5 ONLINE 0 0 0 (repairing) mfisyspd6 ONLINE 0 0 1 (repairing) logs mfisyspd7p3 ONLINE 0 0 0 mfisyspd8p3 ONLINE 0 0 0 cache mfisyspd9 ONLINE 0 0 0 mfisyspd10 ONLINE 0 0 0 errors: 596965 data errors, use '-v' for a list At this point I decided to cancel the scrub but no joy on that zpool scrub -s tank2 cannot cancel scrubbing tank2: out of space So questions:- 1. Given the information it seems like the multiple writes filling the disk may have caused metadata corruption? 2. Is there anyway to stop the scrub? 3. Surely low space should never prevent stopping a scrub? 
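One more data point worth collecting: since the disks sit behind the mfi(4) controller as pass-through devices, the controller's own event log may have recorded something even though nothing was raised to the driver. Roughly (adapter unit 0 assumed):

mfiutil -u 0 show adapter
mfiutil -u 0 show drives
mfiutil -u 0 show events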
Regards Steve From owner-freebsd-fs@FreeBSD.ORG Wed Oct 31 17:55:46 2012 Return-Path: Delivered-To: freebsd-fs@FreeBSD.ORG Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id B590AFAD; Wed, 31 Oct 2012 17:55:46 +0000 (UTC) (envelope-from prvs=1651f70f45=killing@multiplay.co.uk) Received: from mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) by mx1.freebsd.org (Postfix) with ESMTP id 180338FC14; Wed, 31 Oct 2012 17:55:45 +0000 (UTC) Received: from r2d2 ([188.220.16.49]) by mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) (MDaemon PRO v10.0.4) with ESMTP id md50000898021.msg; Wed, 31 Oct 2012 17:55:43 +0000 X-Spam-Processed: mail1.multiplay.co.uk, Wed, 31 Oct 2012 17:55:43 +0000 (not processed: message from valid local sender) X-MDRemoteIP: 188.220.16.49 X-Return-Path: prvs=1651f70f45=killing@multiplay.co.uk X-Envelope-From: killing@multiplay.co.uk Message-ID: From: "Steven Hartland" To: , References: <27087376D1C14132A3CC1B4016912F6D@multiplay.co.uk> Subject: Re: ZFS corruption due to lack of space? Date: Wed, 31 Oct 2012 17:55:43 -0000 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="iso-8859-1"; reply-type=response Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 31 Oct 2012 17:55:46 -0000 Other info: zpool list tank2 NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT tank2 19T 18.7T 304G 98% 1.00x ONLINE - zfs list tank2 NAME USED AVAIL REFER MOUNTPOINT tank2 13.3T 0 13.3T /tank2 Running: 8.3-RELEASE-p4, zpool: v28, zfs: v5 ----- Original Message ----- From: "Steven Hartland" To: ; Sent: Wednesday, October 31, 2012 5:25 PM Subject: ZFS corruption due to lack of space? > Been running some tests on new hardware here to verify all > is good. One of the tests was to fill the zfs array which > seems like its totally corrupted the tank. > > The HW is 7 x 3TB disks in RAIDZ2 with dual 13GB ZIL > partitions and dual 100GB L2ARC on Enterprise SSD's. > > All disks are connected to an LSI 2208 RAID controller > run by mfi driver. HD's via a SAS2X28 backplane and > SSD's via a passive blackplane backplane. > > The file system has 31 test files most random data from > /dev/random and one blank from /dev/zero. > > The test running was multiple ~20 dd's under screen with > all but one from /dev/random and to final one from /dev/zero > > e.g. dd if=/dev/random bs=1m of=/tank2/random10 > > No hardware errors have raised, so no disk timeouts etc. > > On completion each dd reported no space as you would expect > e.g. dd if=/dev/random bs=1m of=/tank2/random13 > dd: /tank2/random13: No space left on device > 503478+0 records in > 503477+0 records out > 527933898752 bytes transferred in 126718.731762 secs (4166187 bytes/sec) > You have new mail. 
> > At that point with the test seemingly successful I went > to delete test files which resulted in:- > rm random* > rm: random1: Unknown error: 122 > rm: random10: Unknown error: 122 > rm: random11: Unknown error: 122 > rm: random12: Unknown error: 122 > rm: random13: Unknown error: 122 > rm: random14: Unknown error: 122 > rm: random18: Unknown error: 122 > rm: random2: Unknown error: 122 > rm: random3: Unknown error: 122 > rm: random4: Unknown error: 122 > rm: random5: Unknown error: 122 > rm: random6: Unknown error: 122 > rm: random7: Unknown error: 122 > rm: random9: Unknown error: 122 > > Error 122 I assume is ECKSUM > > At this point the pool was showing checksum errors > zpool status > pool: tank > state: ONLINE > scan: none requested > config: > > NAME STATE READ WRITE CKSUM > tank ONLINE 0 0 0 > mirror-0 ONLINE 0 0 0 > gptid/41fb7e5c-21cf-11e2-92a3-002590881138 ONLINE 0 0 0 > gptid/42a1b53c-21cf-11e2-92a3-002590881138 ONLINE 0 0 0 > > errors: No known data errors > > pool: tank2 > state: ONLINE > status: One or more devices has experienced an error resulting in data > corruption. Applications may be affected. > action: Restore the file in question if possible. Otherwise restore the > entire pool from backup. > see: http://www.sun.com/msg/ZFS-8000-8A > scan: none requested > config: > > NAME STATE READ WRITE CKSUM > tank2 ONLINE 0 0 4.22K > raidz2-0 ONLINE 0 0 16.9K > mfisyspd0 ONLINE 0 0 0 > mfisyspd1 ONLINE 0 0 0 > mfisyspd2 ONLINE 0 0 0 > mfisyspd3 ONLINE 0 0 0 > mfisyspd4 ONLINE 0 0 0 > mfisyspd5 ONLINE 0 0 0 > mfisyspd6 ONLINE 0 0 0 > logs > mfisyspd7p3 ONLINE 0 0 0 > mfisyspd8p3 ONLINE 0 0 0 > cache > mfisyspd9 ONLINE 0 0 0 > mfisyspd10 ONLINE 0 0 0 > > errors: Permanent errors have been detected in the following files: > > tank2:<0x3> > tank2:<0x8> > tank2:<0x9> > tank2:<0xa> > tank2:<0xb> > tank2:<0xf> > tank2:<0x10> > tank2:<0x11> > tank2:<0x12> > tank2:<0x13> > tank2:<0x14> > tank2:<0x15> > > So I tried a scrub, which looks like its going to > take 5 days to complete and is reporting many many more > errors:- > > pool: tank2 > state: ONLINE > status: One or more devices has experienced an error resulting in data > corruption. Applications may be affected. > action: Restore the file in question if possible. Otherwise restore the > entire pool from backup. > see: http://www.sun.com/msg/ZFS-8000-8A > scan: scrub in progress since Wed Oct 31 16:13:53 2012 > 118G scanned out of 18.7T at 42.2M/s, 128h19m to go > 49.0M repaired, 0.62% done > config: > > NAME STATE READ WRITE CKSUM > tank2 ONLINE 0 0 596K > raidz2-0 ONLINE 0 0 1.20M > mfisyspd0 ONLINE 0 0 0 (repairing) > mfisyspd1 ONLINE 0 0 0 (repairing) > mfisyspd2 ONLINE 0 0 0 (repairing) > mfisyspd3 ONLINE 0 0 2 (repairing) > mfisyspd4 ONLINE 0 0 1 (repairing) > mfisyspd5 ONLINE 0 0 0 (repairing) > mfisyspd6 ONLINE 0 0 1 (repairing) > logs > mfisyspd7p3 ONLINE 0 0 0 > mfisyspd8p3 ONLINE 0 0 0 > cache > mfisyspd9 ONLINE 0 0 0 > mfisyspd10 ONLINE 0 0 0 > > errors: 596965 data errors, use '-v' for a list > > > At this point I decided to cancel the scrub but no joy on that > > zpool scrub -s tank2 > cannot cancel scrubbing tank2: out of space > > So questions:- > > 1. Given the information it seems like the multiple writes filling > the disk may have caused metadata corruption? > 2. Is there anyway to stop the scrub? > 3. Surely low space should never prevent stopping a scrub? ================================================ This e.mail is private and confidential between Multiplay (UK) Ltd. 
and the person or entity to whom it is addressed. In the event of misdirection, the recipient is prohibited from using, copying, printing or otherwise disseminating it or any information contained in it. In the event of misdirection, illegible or incomplete transmission please telephone +44 845 868 1337 or return the E.mail to postmaster@multiplay.co.uk. From owner-freebsd-fs@FreeBSD.ORG Wed Oct 31 17:58:31 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 1EAED2A4 for ; Wed, 31 Oct 2012 17:58:31 +0000 (UTC) (envelope-from zbeeble@gmail.com) Received: from mail-la0-f54.google.com (mail-la0-f54.google.com [209.85.215.54]) by mx1.freebsd.org (Postfix) with ESMTP id 8CB2F8FC0A for ; Wed, 31 Oct 2012 17:58:30 +0000 (UTC) Received: by mail-la0-f54.google.com with SMTP id e12so1594503lag.13 for ; Wed, 31 Oct 2012 10:58:29 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=fYKTxSmc88mxLX0lAEfop1praagarLhkcJWY5EjSkUA=; b=jBj8qe0tNAGiHt41TofY1r8ZU/0qISQnZByN25yZ7XTopAbHzf2ITwH/GsM6jgzGSv 8Q6LZJ7V2XctzJ6SH2J0KNu1TwmY1/Pl21szWIe+QAonlYUuAjwEebOtjQPP1WdWmKHY YnBW/gOWRMPuY/cMcwpD1+Hkhu9O4Fg+eYK5jvYrrpuYRr4FLzPtRmJS4Jhk1HhBO2DB l18n41U1j2A+WGRptChWRA5pJYfd+JHyJKzOuUvoL1M3LNocvyeEgRZJ8DyMzL0fFZKw ixQBY4Z1FDsK45xxZyxKZpc7Apjf7cSNHIxfPcPQoW0Asph/P56dtQoTo7oUZIHajooD BdIQ== MIME-Version: 1.0 Received: by 10.112.54.99 with SMTP id i3mr14322012lbp.37.1351706309341; Wed, 31 Oct 2012 10:58:29 -0700 (PDT) Received: by 10.112.49.138 with HTTP; Wed, 31 Oct 2012 10:58:29 -0700 (PDT) In-Reply-To: References: <508F98F9.3040604@fletchermoorland.co.uk> <1351598684.88435.19.camel@btw.pki2.com> <508FE643.4090107@fletchermoorland.co.uk> <5090010A.4050109@fletchermoorland.co.uk> Date: Wed, 31 Oct 2012 13:58:29 -0400 Message-ID: Subject: Re: ZFS RaidZ-2 problems From: Zaphod Beeblebrox To: Ronald Klop Content-Type: text/plain; charset=ISO-8859-1 Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 31 Oct 2012 17:58:31 -0000 I'd start off by saying "smart is your friend." Install smartmontools and study the somewhat opaque "smartctl -a /dev/mydisk" output carefully. Try running a short and/or long test, too. Many times the disk can tell you what the problem is. If too many blocks are being replaced, your drive is dying. If the drive sees errors in commands it receives, the cable or the controller are at fault. ZFS itself does _exceptionally_ well at trying to use what it has. I'll also say that bad power supplies make for bad disks. Replacing a power supply has often been the solution to bad disk problems I've had. Disks are sensitive to under voltage problems. Brown-outs can exacerbate this problem. My parents live out where power is very flaky. Cheap UPSs didn't help much ... but a good power supply can make all the difference. But I've also had bad controllers of late, too. My most recent problem had my 9-disk raidZ1 array loose a disk. Smartctl said that it was loosing blocks fast, so I RMA'd the disk. When the new disk came, the array just wouldn't heal... it kept loosing the disks attached to a certain controller. Now it's possible the controller was bad before the disk had died ... 
or that it died during the first attempt at resilver ... or that FreeBSD drivers don't like it anymore ... I don't know. My solution was to get two more 4 drive "pro box" SATA enclosures. They use a 1-to-4 SATA breakout and the 6 motherboard ports I have are a revision of the ICH11 intel chipset that supports SATA port replication (I already had two of these boxes). In this manner I could remove the defective controller and put all disks onto the motherboard ICH11 (it actually also allowed me to later expand the array... but that's not part of this story). The upshot was that I now had all the disks present for a raidZ array, but tonnes of the errors had occured when there were not enough disks. zpool status -v listed hundresds thousands of files and directories that were "bad" or lost. But I'd seen this before and started a scrub. The result of the scrub was: perfect recovery. Actually... it took a 2nd scrub --- I don't know why. It was happy after the 1st scrub, but then some checksum errors were found --- and then fixed, so I scrubbed again ... and that fixed it. How does it do it? Unlike other RAID systems, ZFS can tell a bad block from a good one. When it is asked to re-recover after really bad multiple failures, it can tell if a block is good or not. This means that it can choose among alternate or partially recovered versions and get the right one. Certainly, my above experience would have been a dead array ... or an array with much loss if I had used any other RAID technology. What does this mean? Well... one thing it means is that for non-essential systems (say my home media array), using cheap technology is less risky. None of these is enterprise level technology, but none of it costs anywhere near what enterprise level, either. From owner-freebsd-fs@FreeBSD.ORG Wed Oct 31 20:21:13 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 67255903 for ; Wed, 31 Oct 2012 20:21:13 +0000 (UTC) (envelope-from lists@jnielsen.net) Received: from ns1.jnielsen.net (secure.freebsdsolutions.net [69.55.234.48]) by mx1.freebsd.org (Postfix) with ESMTP id 2D7408FC14 for ; Wed, 31 Oct 2012 20:21:12 +0000 (UTC) Received: from [10.10.1.32] (office.betterlinux.com [199.58.199.60]) (authenticated bits=0) by ns1.jnielsen.net (8.14.4/8.14.4) with ESMTP id q9VK1W6O056558 (version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=NOT) for ; Wed, 31 Oct 2012 16:01:32 -0400 (EDT) (envelope-from lists@jnielsen.net) From: John Nielsen Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: quoted-printable Subject: creating a bootable ZFS image Message-Id: Date: Wed, 31 Oct 2012 14:01:42 -0600 To: freebsd-fs@freebsd.org Mime-Version: 1.0 (Mac OS X Mail 6.2 \(1499\)) X-Mailer: Apple Mail (2.1499) X-DCC-x.dcc-servers-Metrics: ns1.jnielsen.net 104; Body=1 Fuz1=1 Fuz2=1 X-Virus-Scanned: clamav-milter 0.97.5 at ns1.jnielsen.net X-Virus-Status: Clean X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 31 Oct 2012 20:21:13 -0000 I am working on a script to create a ZFS-only disk image and install = FreeBSD 9.1-RC2/RELEASE to it for use as a virtual machine template. = Everything works fine up to the point where the image needs to be = detached from the build host. The cleanest (and most logical to me) would be to export the pool on the = build host. 
Doing so frees the md device and removes the pool from the = build host, which is what I want. Unfortunately the image will not boot, = since the pool is marked inactive. I found a similar thread in 2011 = (subject "Booting from a ZFS pool exported on another system") with a = patch by PJD, but I don't know if that has ever been tested or = committed. (AndI share Kenneth Vestergaard's concern that something else = might need to happen to import the pool once the system boots.) What I am doing instead is creating the pool with -o failmode=3Dcontinue, = installing, unmounting everything, then forcibly detaching the md = device. This gives me an image I can use, and it boots and runs fine. = Unfortunately, that leaves me with a defunct pool on the build host = until I reboot it. Anything I try to do to the pool (destroy, offline, = export, etc) returns "cannot open 'zfsroot': pool I/O is currently = suspended." (With the default failmode=3Dwait, it's even worse since any = command that tries to touch the pool never returns.) The pool state is = "UNAVAIL" and the device state is "REMOVED". Once the build host is = rebooted the device state changes to UNAVAIL and zpool destroy works as = expected. Obviously I need the VM image to be bootable, and ideally I'd like to be = able to run the script multiple times on the build host without changing = the pool name or rebooting every time. Is there a way to make that = happen? Specifically: Is it possible to cleanly offline a zpool without exporting it? If I yank the md device, is there a way to tell zpool to give up = on it without rebooting? Is it possible to boot from an exported filesystem? Thank you, John Nielsen From owner-freebsd-fs@FreeBSD.ORG Wed Oct 31 20:24:11 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 4368BA8B for ; Wed, 31 Oct 2012 20:24:11 +0000 (UTC) (envelope-from maeulen@awp-shop.de) Received: from h1432027.stratoserver.net (h1432027.stratoserver.net [85.214.136.58]) by mx1.freebsd.org (Postfix) with ESMTP id 81E8F8FC08 for ; Wed, 31 Oct 2012 20:24:09 +0000 (UTC) Received: (qmail 17949 invoked from network); 31 Oct 2012 21:17:26 +0100 Received: from hsi-kbw-078-042-101-221.hsi3.kabel-badenwuerttemberg.de (HELO maulwurf.homelinux.org) (78.42.101.221) by h1432027.stratoserver.net with (AES128-SHA encrypted) SMTP; 31 Oct 2012 21:17:26 +0100 Received: from EXCHANGE2010.Skynet.local ([fe80::34e1:cd61:5835:d791]) by CAS.Skynet.local ([fe80::fd53:3300:b8c6:f916%14]) with mapi id 14.01.0421.002; Wed, 31 Oct 2012 21:17:26 +0100 From: =?iso-8859-1?Q?Johannes_M=E4ulen?= To: "freebsd-fs@freebsd.org" Subject: geli device istgt Thread-Topic: geli device istgt Thread-Index: Ac23pKFyM7Xv0bwEQpm/TuNB8aJVgg== Date: Wed, 31 Oct 2012 20:17:25 +0000 Message-ID: <9A757AF2CA7F204A8F2444FFC5C27C301CADCD4C@Exchange2010.Skynet.local> Accept-Language: de-DE, en-US Content-Language: de-DE X-MS-Has-Attach: X-MS-TNEF-Correlator: x-originating-ip: [192.168.178.20] MIME-Version: 1.0 Content-Type: text/plain; charset="iso-8859-1" Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.14 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 31 Oct 2012 20:24:11 -0000 Hi there, I'm new here, "Hello" :). Hopefully it's the right mailing list... 
I've setup a Freebsd 9.0-RELEASE machine to handle my storage devices. I'd like to "share" encrypted partitions via iscsi. But, I'd like to take t= he encryption take place on the iscsi-target(-machine). The machine is equi= pped with a aes-ni capable cpu, which I'd like to use. I set up a geli devi= ce, but whenever I try to use it as target I get errors like: /usr/local/etc/rc.d/istgt start Starting istgt. istgt version 0.5 (20121028) normal mode using kqueue using host atomic LU1 HDD UNIT LU1: LUN0 file=3D/dev/da0p1.eli, size=3D1499976953856 LU1: LUN0 2929642488 blocks, 512 bytes/block istgt_lu_disk.c: 330:istgt_lu_disk_allocate_raw: ***ERROR*** lu_disk_read()= failed istgt_lu_disk.c: 650:istgt_lu_disk_init: ***ERROR*** LU1: LUN0: allocate er= ror istgt_lu.c:2091:istgt_lu_init_unit: ***ERROR*** LU1: lu_disk_init() failed istgt_lu.c:2166:istgt_lu_init: ***ERROR*** LU1: lu_init_unit() failed istgt.c:2799:main: ***ERROR*** istgt_lu_init() failed /usr/local/etc/rc.d/istgt: WARNING: failed to start istgt Could somebody help me with that? If I try to start istgt with an unencrypted partition everything works as e= xpected. Kind regards Johannes From owner-freebsd-fs@FreeBSD.ORG Wed Oct 31 20:48:21 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 89CF57FD; Wed, 31 Oct 2012 20:48:21 +0000 (UTC) (envelope-from artemb@gmail.com) Received: from mail-la0-f54.google.com (mail-la0-f54.google.com [209.85.215.54]) by mx1.freebsd.org (Postfix) with ESMTP id D23388FC14; Wed, 31 Oct 2012 20:48:20 +0000 (UTC) Received: by mail-la0-f54.google.com with SMTP id e12so1737766lag.13 for ; Wed, 31 Oct 2012 13:48:19 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:sender:in-reply-to:references:date :x-google-sender-auth:message-id:subject:from:to:cc:content-type; bh=Ktu6RtoKL5Q+g7H5K7CQOJ9ljDL+ED865hmblXtv+6o=; b=qb02R/da5H6TDOhjYsgPDRfobKg5Nvu5hhg6XZaN6K99vOjZxdIqJaBPa68ZjnwQZk ZXbWWQ1NOwXWC9jxvlp0TkHLFykbCQmx5BJOBos8URbopnliU/Pnd2S8MmpsDagKk+Y5 hNkG/X1/uh4qTxFo2iUoExw4QSDg4Y9iBNWYnW0pt3MNAxuEvj+/5zVfgaOWr9+s/RpB 5T/RCc9Ux+WL1Bk4Ix2CQ8Ytc44NWxEcthehBp9706ipBHhywtSY5GOyaWuhc2MB4JO7 JmGCzYojvIC22E3xy26mzeixP7jBnnvP54Si6Co0YlCSX7ksuQ8RGkSuWIUXwuyndiFj ajOQ== MIME-Version: 1.0 Received: by 10.112.37.138 with SMTP id y10mr3753593lbj.121.1351716499843; Wed, 31 Oct 2012 13:48:19 -0700 (PDT) Sender: artemb@gmail.com Received: by 10.112.80.103 with HTTP; Wed, 31 Oct 2012 13:48:19 -0700 (PDT) In-Reply-To: References: <27087376D1C14132A3CC1B4016912F6D@multiplay.co.uk> Date: Wed, 31 Oct 2012 13:48:19 -0700 X-Google-Sender-Auth: MhwlRgIxjzGvohr-bO9q9AaqRbk Message-ID: Subject: Re: ZFS corruption due to lack of space? From: Artem Belevich To: Steven Hartland Content-Type: text/plain; charset=ISO-8859-1 Cc: freebsd-fs@freebsd.org, freebsd-stable@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 31 Oct 2012 20:48:21 -0000 On Wed, Oct 31, 2012 at 10:55 AM, Steven Hartland wrote: > At that point with the test seemingly successful I went > to delete test files which resulted in:- > rm random* > rm: random1: Unknown error: 122 ZFS is a logging filesystem. Even removing a file apparently requires some space to write a new record saying that the file is not referenced any more. 
One way out of this jam is to try truncating some large file in place. Make sure that file is not part of any snapshot. Something like this may do the trick: #dd if=/dev/null of=existing_large_file Or, perhaps even something as simple as 'echo -n > large_file' may work. Good luck, --Artem From owner-freebsd-fs@FreeBSD.ORG Wed Oct 31 21:23:55 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 9660FE12; Wed, 31 Oct 2012 21:23:55 +0000 (UTC) (envelope-from peter@rulingia.com) Received: from vps.rulingia.com (host-122-100-2-194.octopus.com.au [122.100.2.194]) by mx1.freebsd.org (Postfix) with ESMTP id 092A28FC16; Wed, 31 Oct 2012 21:23:53 +0000 (UTC) Received: from server.rulingia.com (c220-239-241-202.belrs5.nsw.optusnet.com.au [220.239.241.202]) by vps.rulingia.com (8.14.5/8.14.5) with ESMTP id q9VLNqO2037647 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK); Thu, 1 Nov 2012 08:23:52 +1100 (EST) (envelope-from peter@rulingia.com) X-Bogosity: Ham, spamicity=0.000000 Received: from server.rulingia.com (localhost.rulingia.com [127.0.0.1]) by server.rulingia.com (8.14.5/8.14.5) with ESMTP id q9VLNkcA040212 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Thu, 1 Nov 2012 08:23:46 +1100 (EST) (envelope-from peter@server.rulingia.com) Received: (from peter@localhost) by server.rulingia.com (8.14.5/8.14.5/Submit) id q9VLNk2i040211; Thu, 1 Nov 2012 08:23:46 +1100 (EST) (envelope-from peter) Date: Thu, 1 Nov 2012 08:23:46 +1100 From: Peter Jeremy To: Steven Hartland Subject: Re: ZFS corruption due to lack of space? Message-ID: <20121031212346.GL3309@server.rulingia.com> References: <27087376D1C14132A3CC1B4016912F6D@multiplay.co.uk> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="IMjqdzrDRly81ofr" Content-Disposition: inline In-Reply-To: <27087376D1C14132A3CC1B4016912F6D@multiplay.co.uk> X-PGP-Key: http://www.rulingia.com/keys/peter.pgp User-Agent: Mutt/1.5.21 (2010-09-15) Cc: freebsd-fs@freebsd.org, freebsd-stable@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 31 Oct 2012 21:23:55 -0000 --IMjqdzrDRly81ofr Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On 2012-Oct-31 17:25:09 -0000, Steven Hartland wro= te: >Been running some tests on new hardware here to verify all >is good. One of the tests was to fill the zfs array which >seems like its totally corrupted the tank. I've accidently "filled" a pool, and had multiple processes try to write to the full pool, without either emptying the free space reserve (so I could still delete the offending files) or corrupting the pool. Had you tried to read/write the raw disks before you tried the ZFS testing? Do you have compression and/or dedupe enabled on the pool? >1. Given the information it seems like the multiple writes filling >the disk may have caused metadata corruption? I don't recall seeing this reported before. >2. Is there anyway to stop the scrub? Other than freeing up some space, I don't think so. If this is a test pool that you don't need, you could try destroying it and re-creating it - that may be quicker and easier than recovering the existing pool. >3. Surely low space should never prevent stopping a scrub? 
As Artem noted, ZFS is a copy-on-write filesystem. It is supposed to reserve some free space to allow metadata updates (stop scrubs, delete files, etc) even when it is "full" but I have seen reports of this not working correctly in the past. A truncate-in-place may work. You could also try asking on zfs-discuss@opensolaris.org=20 --=20 Peter Jeremy --IMjqdzrDRly81ofr Content-Type: application/pgp-signature -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.19 (FreeBSD) iEYEARECAAYFAlCRluIACgkQ/opHv/APuIf5XwCePbniJH+FqKmFdUYvRlHobjbE U74AoIBMqgc6dVkhg9Znx5K9IVh4Spa2 =1SD/ -----END PGP SIGNATURE----- --IMjqdzrDRly81ofr-- From owner-freebsd-fs@FreeBSD.ORG Wed Oct 31 21:26:38 2012 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 2A1B03FA for ; Wed, 31 Oct 2012 21:26:38 +0000 (UTC) (envelope-from avg@FreeBSD.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id 5ACFF8FC12 for ; Wed, 31 Oct 2012 21:26:37 +0000 (UTC) Received: from porto.starpoint.kiev.ua (porto-e.starpoint.kiev.ua [212.40.38.100]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id XAA13077; Wed, 31 Oct 2012 23:26:30 +0200 (EET) (envelope-from avg@FreeBSD.org) Received: from localhost ([127.0.0.1]) by porto.starpoint.kiev.ua with esmtp (Exim 4.34 (FreeBSD)) id 1TTfny-0009Ww-AE; Wed, 31 Oct 2012 23:26:30 +0200 Message-ID: <50919783.5060807@FreeBSD.org> Date: Wed, 31 Oct 2012 23:26:27 +0200 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:16.0) Gecko/20121030 Thunderbird/16.0.2 MIME-Version: 1.0 To: John Nielsen Subject: Re: creating a bootable ZFS image References: In-Reply-To: X-Enigmail-Version: 1.4.5 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 31 Oct 2012 21:26:38 -0000 on 31/10/2012 22:01 John Nielsen said the following: > Is it possible to boot from an exported filesystem? It is possible in head since recently. And soon it will be possible in stable/[89]. 
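For completeness, a minimal sketch of the export-based build flow that support would allow, assuming an image-backed md device; the image path and size are illustrative, "zfsroot" is the pool name from the report above, and boot blocks / bootfs setup are left out:

# build host: create an image-backed pool, install into it, then cleanly export
truncate -s 4g /tmp/zfsroot.img
md=$(mdconfig -a -t vnode -f /tmp/zfsroot.img)
zpool create -o altroot=/mnt -o cachefile=none zfsroot /dev/$md
# ... install the system into /mnt ...
zpool export zfsroot    # clean export instead of forcibly detaching
mdconfig -d -u $md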
-- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Wed Oct 31 21:31:55 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 3CB246B3; Wed, 31 Oct 2012 21:31:55 +0000 (UTC) (envelope-from rysto32@gmail.com) Received: from mail-vc0-f182.google.com (mail-vc0-f182.google.com [209.85.220.182]) by mx1.freebsd.org (Postfix) with ESMTP id BC51A8FC08; Wed, 31 Oct 2012 21:31:54 +0000 (UTC) Received: by mail-vc0-f182.google.com with SMTP id fw7so2677243vcb.13 for ; Wed, 31 Oct 2012 14:31:53 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=pqae7FdZL9WSNEyXswPexnAL8jbV6Tbp5y/vgv0k7p4=; b=uw5ULVVR846/dl4pIqKwM5SPxaIIxz3vsVK/OVrn2vI67Lt84m5cuIlZaFZNQcHgOV TpXY/d5rDZily61UxCFtIgTb5XKaddPQqOEHItmFGFfIv1lBPnUBxLNSFgMH26zDNFSv O6TOiol/Fh6wysY6ggzKhS1kaTmb9wlVg/Yp4K9WgPI22Hh/hd0qzpWjwuEz8Eg0MLnd S/n6geoiP/EQdytOMjhW5Q14O6WlTgOOEnQNzv5hPXKPg9PlFOfysh0GUTNJVJR1uetB Ac1GOW4x9Us0K2Yu0o2J90GIh8j6oM6klRatvlHrIyhC8kum3uVLmU/+pVEx530Bm5Fc +2dA== MIME-Version: 1.0 Received: by 10.52.155.199 with SMTP id vy7mr50194247vdb.54.1351719113870; Wed, 31 Oct 2012 14:31:53 -0700 (PDT) Received: by 10.58.207.114 with HTTP; Wed, 31 Oct 2012 14:31:53 -0700 (PDT) In-Reply-To: References: <27087376D1C14132A3CC1B4016912F6D@multiplay.co.uk> Date: Wed, 31 Oct 2012 17:31:53 -0400 Message-ID: Subject: Re: ZFS corruption due to lack of space? From: Ryan Stone To: Artem Belevich Content-Type: text/plain; charset=ISO-8859-1 Cc: freebsd-fs@freebsd.org, freebsd-stable@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 31 Oct 2012 21:31:55 -0000 On Wed, Oct 31, 2012 at 4:48 PM, Artem Belevich wrote: > One way out of this jam is to try truncating some large file in place. > Make sure that file is not part of any snapshot. > Something like this may do the trick: > #dd if=/dev/null of=existing_large_file > > Or, perhaps even something as simple as 'echo -n > large_file' may work. truncate -s 0? 
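Putting the suggestions in this thread together, a rough sketch of the recovery sequence on the full pool (the file name is illustrative, and the file must not be referenced by any snapshot):

# truncate an existing large file in place to get a little space back
dd if=/dev/null of=/tank2/random1
# (or: echo -n > /tank2/random1, or: truncate -s 0 /tank2/random1)

# once some space is free, cancelling the scrub should work again
zpool scrub -s tank2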
From owner-freebsd-fs@FreeBSD.ORG Wed Oct 31 22:42:59 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 0CBFDC0E for ; Wed, 31 Oct 2012 22:42:59 +0000 (UTC) (envelope-from freebsd-listen@fabiankeil.de) Received: from smtprelay04.ispgateway.de (smtprelay04.ispgateway.de [80.67.31.42]) by mx1.freebsd.org (Postfix) with ESMTP id B37A58FC0C for ; Wed, 31 Oct 2012 22:42:58 +0000 (UTC) Received: from [84.44.210.71] (helo=fabiankeil.de) by smtprelay04.ispgateway.de with esmtpsa (SSLv3:AES128-SHA:128) (Exim 4.68) (envelope-from ) id 1TTgnp-0004r8-Rk; Wed, 31 Oct 2012 23:30:25 +0100 Date: Wed, 31 Oct 2012 23:30:07 +0100 From: Fabian Keil To: John Nielsen Subject: Re: creating a bootable ZFS image Message-ID: <20121031233007.57aea90b@fabiankeil.de> In-Reply-To: References: Mime-Version: 1.0 Content-Type: multipart/signed; micalg=PGP-SHA1; boundary="Sig_/sAwor/XY6FvDk/uMqtg8U7/"; protocol="application/pgp-signature" X-Df-Sender: Nzc1MDY3 Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list Reply-To: freebsd-fs@freebsd.org List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 31 Oct 2012 22:42:59 -0000 --Sig_/sAwor/XY6FvDk/uMqtg8U7/ Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: quoted-printable John Nielsen wrote: > What I am doing instead is creating the pool with -o failmode=3Dcontinue, > installing, unmounting everything, then forcibly detaching the md > device. This gives me an image I can use, and it boots and runs fine. > Unfortunately, that leaves me with a defunct pool on the build host > until I reboot it. Anything I try to do to the pool (destroy, offline, > export, etc) returns "cannot open 'zfsroot': pool I/O is currently > suspended." (With the default failmode=3Dwait, it's even worse since any > command that tries to touch the pool never returns.) The pool state is > "UNAVAIL" and the device state is "REMOVED". Once the build host is > rebooted the device state changes to UNAVAIL and zpool destroy works as > expected. Did you try "zpool clear [-F] $pool" after reattaching the md? It often works for me in situations where other zpool subcommands just hang like you described above. 
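For example, something along these lines, where the image path is illustrative and "zfsroot" is the pool name from the report above:

# reattach the backing file and ask ZFS to re-open / recover the pool
mdconfig -a -t vnode -f /path/to/zfsroot.img
zpool clear -F zfsroot
# it should then be possible to export or destroy it without a reboot
zpool export zfsroot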
Fabian --Sig_/sAwor/XY6FvDk/uMqtg8U7/ Content-Type: application/pgp-signature; name=signature.asc Content-Disposition: attachment; filename=signature.asc -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.19 (FreeBSD) iEYEARECAAYFAlCRpnIACgkQBYqIVf93VJ0Z8ACeNRrRwKrUV1906XQkstMdBg+K eGAAoISYGfWBqiRiswUrCYqWm0ak7V4b =bBpP -----END PGP SIGNATURE----- --Sig_/sAwor/XY6FvDk/uMqtg8U7/-- From owner-freebsd-fs@FreeBSD.ORG Thu Nov 1 00:09:36 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id BC21CDF4; Thu, 1 Nov 2012 00:09:36 +0000 (UTC) (envelope-from prvs=1652892d21=killing@multiplay.co.uk) Received: from mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) by mx1.freebsd.org (Postfix) with ESMTP id 14E948FC0C; Thu, 1 Nov 2012 00:09:35 +0000 (UTC) Received: from r2d2 ([188.220.16.49]) by mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) (MDaemon PRO v10.0.4) with ESMTP id md50000903327.msg; Thu, 01 Nov 2012 00:09:33 +0000 X-Spam-Processed: mail1.multiplay.co.uk, Thu, 01 Nov 2012 00:09:33 +0000 (not processed: message from valid local sender) X-MDRemoteIP: 188.220.16.49 X-Return-Path: prvs=1652892d21=killing@multiplay.co.uk X-Envelope-From: killing@multiplay.co.uk Message-ID: From: "Steven Hartland" To: "Peter Jeremy" References: <27087376D1C14132A3CC1B4016912F6D@multiplay.co.uk> <20121031212346.GL3309@server.rulingia.com> Subject: Re: ZFS corruption due to lack of space? Date: Thu, 1 Nov 2012 00:09:33 -0000 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="iso-8859-1"; reply-type=original Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157 Cc: freebsd-fs@freebsd.org, freebsd-stable@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 01 Nov 2012 00:09:36 -0000 On 2012-Oct-31 17:25:09 -0000, Steven Hartland wrote: >>Been running some tests on new hardware here to verify all >>is good. One of the tests was to fill the zfs array which >>seems like its totally corrupted the tank. > >I've accidently "filled" a pool, and had multiple processes try to >write to the full pool, without either emptying the free space reserve >(so I could still delete the offending files) or corrupting the pool. Same here but its the first time I've had ZIL in place at the time so wondering if that may be playing a factor. > Had you tried to read/write the raw disks before you tried the > ZFS testing? Yes, didn't see any issues but then it wasn't checksuming so tbh I wouldn't have noticed if it was silently corrupting data. >Do you have compression and/or dedupe enabled on the pool? Nope bog standard raidz2 no additional settings >>1. Given the information it seems like the multiple writes filling >>the disk may have caused metadata corruption? > > I don't recall seeing this reported before. Nore me and we've been using ZFS for years, but never filled a pool with such known simultanious access + ZIL before >>2. Is there anyway to stop the scrub? > >Other than freeing up some space, I don't think so. If this is a test >pool that you don't need, you could try destroying it and re-creating >it - that may be quicker and easier than recovering the existing pool. 
Artems trick of cat /dev/null > /tank2/ worked and I've now managed to stop the scrub :) >>3. Surely low space should never prevent stopping a scrub? > > As Artem noted, ZFS is a copy-on-write filesystem. It is supposed to > reserve some free space to allow metadata updates (stop scrubs, delete > files, etc) even when it is "full" but I have seen reports of this not > working correctly in the past. A truncate-in-place may work. Yes it did thanks, but as you said if this metadata update was failing due to out of space lends credability to the fact that the same lack of space and hence failure to update metadata could have also caused the corruption in the first place. Its interesting to note that the zpool is reporting pleanty of free space even when the root zfs volume was showing 0, so you would expect there to be pleanty of space for it be able to stop the scrub but it appears not which is definitely interesting and could point to the underlying cause? zpool list tank2 NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT tank2 19T 18.7T 304G 98% 1.00x ONLINE - zfs list tank2 NAME USED AVAIL REFER MOUNTPOINT tank2 13.3T 0 13.3T /tank2 Current state is:- scan: scrub in progress since Wed Oct 31 16:13:53 2012 1.64T scanned out of 18.7T at 62.8M/s, 79h12m to go 280M repaired, 8.76% done Something else that was interesting is while the scrub was running devd was using a good amount of CPU 40% of a 3.3Ghz core, which I've never seen before. Any ideas why its usage would be so high? Regards Steve ================================================ This e.mail is private and confidential between Multiplay (UK) Ltd. and the person or entity to whom it is addressed. In the event of misdirection, the recipient is prohibited from using, copying, printing or otherwise disseminating it or any information contained in it. In the event of misdirection, illegible or incomplete transmission please telephone +44 845 868 1337 or return the E.mail to postmaster@multiplay.co.uk. From owner-freebsd-fs@FreeBSD.ORG Thu Nov 1 00:19:45 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id E6C67FEA; Thu, 1 Nov 2012 00:19:45 +0000 (UTC) (envelope-from prvs=1652892d21=killing@multiplay.co.uk) Received: from mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) by mx1.freebsd.org (Postfix) with ESMTP id 805E88FC08; Thu, 1 Nov 2012 00:19:43 +0000 (UTC) Received: from r2d2 ([188.220.16.49]) by mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) (MDaemon PRO v10.0.4) with ESMTP id md50000903417.msg; Thu, 01 Nov 2012 00:19:42 +0000 X-Spam-Processed: mail1.multiplay.co.uk, Thu, 01 Nov 2012 00:19:42 +0000 (not processed: message from valid local sender) X-MDRemoteIP: 188.220.16.49 X-Return-Path: prvs=1652892d21=killing@multiplay.co.uk X-Envelope-From: killing@multiplay.co.uk Message-ID: From: "Steven Hartland" To: "Peter Jeremy" References: <27087376D1C14132A3CC1B4016912F6D@multiplay.co.uk> <20121031212346.GL3309@server.rulingia.com> Subject: Re: ZFS corruption due to lack of space? 
Date: Thu, 1 Nov 2012 00:19:38 -0000 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="iso-8859-1"; reply-type=response Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157 Cc: freebsd-fs@freebsd.org, freebsd-stable@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 01 Nov 2012 00:19:46 -0000 ----- Original Message ----- From: "Steven Hartland" To: "Peter Jeremy" Cc: ; Sent: Thursday, November 01, 2012 12:09 AM Subject: Re: ZFS corruption due to lack of space? > On 2012-Oct-31 17:25:09 -0000, Steven Hartland wrote: >>>Been running some tests on new hardware here to verify all >>>is good. One of the tests was to fill the zfs array which >>>seems like its totally corrupted the tank. >> >>I've accidently "filled" a pool, and had multiple processes try to >>write to the full pool, without either emptying the free space reserve >>(so I could still delete the offending files) or corrupting the pool. > > Same here but its the first time I've had ZIL in place at the time so > wondering if that may be playing a factor. > >> Had you tried to read/write the raw disks before you tried the >> ZFS testing? > > Yes, didn't see any issues but then it wasn't checksuming so tbh I > wouldn't have noticed if it was silently corrupting data. > >>Do you have compression and/or dedupe enabled on the pool? > > Nope bog standard raidz2 no additional settings > >>>1. Given the information it seems like the multiple writes filling >>>the disk may have caused metadata corruption? >> >> I don't recall seeing this reported before. > > Nore me and we've been using ZFS for years, but never filled a pool > with such known simultanious access + ZIL before > >>>2. Is there anyway to stop the scrub? >> >>Other than freeing up some space, I don't think so. If this is a test >>pool that you don't need, you could try destroying it and re-creating >>it - that may be quicker and easier than recovering the existing pool. > > Artems trick of cat /dev/null > /tank2/ worked and I've now > managed to stop the scrub :) > >>>3. Surely low space should never prevent stopping a scrub? >> >> As Artem noted, ZFS is a copy-on-write filesystem. It is supposed to >> reserve some free space to allow metadata updates (stop scrubs, delete >> files, etc) even when it is "full" but I have seen reports of this not >> working correctly in the past. A truncate-in-place may work. > > Yes it did thanks, but as you said if this metadata update was failing > due to out of space lends credability to the fact that the same lack of > space and hence failure to update metadata could have also caused the > corruption in the first place. > > Its interesting to note that the zpool is reporting pleanty of free space > even when the root zfs volume was showing 0, so you would expect there > to be pleanty of space for it be able to stop the scrub but it appears > not which is definitely interesting and could point to the underlying > cause? 
> > zpool list tank2 > NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT > tank2 19T 18.7T 304G 98% 1.00x ONLINE - > > zfs list tank2 > NAME USED AVAIL REFER MOUNTPOINT > tank2 13.3T 0 13.3T /tank2 > > Current state is:- > scan: scrub in progress since Wed Oct 31 16:13:53 2012 > 1.64T scanned out of 18.7T at 62.8M/s, 79h12m to go > 280M repaired, 8.76% done > > Something else that was interesting is while the scrub was running > devd was using a good amount of CPU 40% of a 3.3Ghz core, which I've > never seen before. Any ideas why its usage would be so high? In case its useful here's the output from a zdb tank2 so far:- zdb tank2 Cached configuration: version: 28 name: 'tank2' state: 0 txg: 39502 pool_guid: 15779146362913479443 hostid: 1751781486 vdev_children: 3 vdev_tree: type: 'root' id: 0 guid: 15779146362913479443 create_txg: 4 children[0]: type: 'raidz' id: 0 guid: 8518972900227438019 nparity: 2 metaslab_array: 33 metaslab_shift: 37 ashift: 9 asize: 21004116295680 is_log: 0 create_txg: 4 children[0]: type: 'disk' id: 0 guid: 15577380450172137060 path: '/dev/mfisyspd0' phys_path: '/dev/mfisyspd0' whole_disk: 1 DTL: 236 create_txg: 4 children[1]: type: 'disk' id: 1 guid: 16940350228793267704 path: '/dev/mfisyspd1' phys_path: '/dev/mfisyspd1' whole_disk: 1 DTL: 235 create_txg: 4 children[2]: type: 'disk' id: 2 guid: 9264743178245473794 path: '/dev/mfisyspd2' phys_path: '/dev/mfisyspd2' whole_disk: 1 DTL: 234 create_txg: 4 children[3]: type: 'disk' id: 3 guid: 432716341673487166 path: '/dev/mfisyspd3' phys_path: '/dev/mfisyspd3' whole_disk: 1 DTL: 233 create_txg: 4 children[4]: type: 'disk' id: 4 guid: 18217760646550913544 path: '/dev/mfisyspd4' phys_path: '/dev/mfisyspd4' whole_disk: 1 DTL: 232 create_txg: 4 children[5]: type: 'disk' id: 5 guid: 6964614355298004256 path: '/dev/mfisyspd5' phys_path: '/dev/mfisyspd5' whole_disk: 1 DTL: 231 create_txg: 4 children[6]: type: 'disk' id: 6 guid: 4397961270160308034 path: '/dev/mfisyspd6' phys_path: '/dev/mfisyspd6' whole_disk: 1 DTL: 230 create_txg: 4 children[1]: type: 'disk' id: 1 guid: 13757670211012452260 path: '/dev/mfisyspd7p3' phys_path: '/dev/mfisyspd7p3' whole_disk: 1 metaslab_array: 32 metaslab_shift: 27 ashift: 9 asize: 14125891584 is_log: 1 DTL: 237 create_txg: 4 children[2]: type: 'disk' id: 2 guid: 7315839509249482920 path: '/dev/mfisyspd8p3' phys_path: '/dev/mfisyspd8p3' whole_disk: 1 metaslab_array: 31 metaslab_shift: 27 ashift: 9 asize: 14125891584 is_log: 1 DTL: 229 create_txg: 4 MOS Configuration: version: 28 name: 'tank2' state: 0 txg: 39502 pool_guid: 15779146362913479443 hostid: 1751781486 vdev_children: 3 vdev_tree: type: 'root' id: 0 guid: 15779146362913479443 create_txg: 4 children[0]: type: 'raidz' id: 0 guid: 8518972900227438019 nparity: 2 metaslab_array: 33 metaslab_shift: 37 ashift: 9 asize: 21004116295680 is_log: 0 create_txg: 4 children[0]: type: 'disk' id: 0 guid: 15577380450172137060 path: '/dev/mfisyspd0' phys_path: '/dev/mfisyspd0' whole_disk: 1 DTL: 236 create_txg: 4 children[1]: type: 'disk' id: 1 guid: 16940350228793267704 path: '/dev/mfisyspd1' phys_path: '/dev/mfisyspd1' whole_disk: 1 DTL: 235 create_txg: 4 children[2]: type: 'disk' id: 2 guid: 9264743178245473794 path: '/dev/mfisyspd2' phys_path: '/dev/mfisyspd2' whole_disk: 1 DTL: 234 create_txg: 4 children[3]: type: 'disk' id: 3 guid: 432716341673487166 path: '/dev/mfisyspd3' phys_path: '/dev/mfisyspd3' whole_disk: 1 DTL: 233 create_txg: 4 children[4]: type: 'disk' id: 4 guid: 18217760646550913544 path: '/dev/mfisyspd4' phys_path: '/dev/mfisyspd4' 
whole_disk: 1 DTL: 232 create_txg: 4 children[5]: type: 'disk' id: 5 guid: 6964614355298004256 path: '/dev/mfisyspd5' phys_path: '/dev/mfisyspd5' whole_disk: 1 DTL: 231 create_txg: 4 children[6]: type: 'disk' id: 6 guid: 4397961270160308034 path: '/dev/mfisyspd6' phys_path: '/dev/mfisyspd6' whole_disk: 1 DTL: 230 create_txg: 4 children[1]: type: 'disk' id: 1 guid: 13757670211012452260 path: '/dev/mfisyspd7p3' phys_path: '/dev/mfisyspd7p3' whole_disk: 1 metaslab_array: 32 metaslab_shift: 27 ashift: 9 asize: 14125891584 is_log: 1 DTL: 237 create_txg: 4 children[2]: type: 'disk' id: 2 guid: 7315839509249482920 path: '/dev/mfisyspd8p3' phys_path: '/dev/mfisyspd8p3' whole_disk: 1 metaslab_array: 31 metaslab_shift: 27 ashift: 9 asize: 14125891584 is_log: 1 DTL: 229 create_txg: 4 Uberblock: magic = 0000000000bab10c version = 28 txg = 39547 guid_sum = 6486691012039134504 timestamp = 1351727718 UTC = Wed Oct 31 23:55:18 2012 All DDTs are empty Metaslabs: vdev 0 metaslabs 152 offset spacemap free --------------- ------------------- --------------- ------------- metaslab 0 offset 0 spacemap 36 free 1.41G segments 4170 maxsize 1.38G freepct 1% metaslab 1 offset 2000000000 spacemap 65 free 1.45G segments 4508 maxsize 1.40G freepct 1% metaslab 2 offset 4000000000 spacemap 70 free 1.44G segments 5723 maxsize 1.39G freepct 1% metaslab 3 offset 6000000000 spacemap 77 free 1.41G segments 6635 maxsize 1.35G freepct 1% metaslab 4 offset 8000000000 spacemap 80 free 1.46G segments 6408 maxsize 1.40G freepct 1% metaslab 5 offset a000000000 spacemap 81 free 1.47G segments 6480 maxsize 1.42G freepct 1% metaslab 6 offset c000000000 spacemap 82 free 1.48G segments 6684 maxsize 1.42G freepct 1% metaslab 7 offset e000000000 spacemap 83 free 1.48G segments 6873 maxsize 1.42G freepct 1% metaslab 8 offset 10000000000 spacemap 84 free 1.48G segments 7298 maxsize 1.42G freepct 1% metaslab 9 offset 12000000000 spacemap 85 free 1.43G segments 6699 maxsize 1.37G freepct 1% metaslab 10 offset 14000000000 spacemap 88 free 1.50G segments 6781 maxsize 1.44G freepct 1% metaslab 11 offset 16000000000 spacemap 89 free 1.50G segments 6434 maxsize 1.44G freepct 1% metaslab 12 offset 18000000000 spacemap 90 free 1.51G segments 7188 maxsize 1.44G freepct 1% metaslab 13 offset 1a000000000 spacemap 91 free 1.49G segments 6712 maxsize 1.42G freepct 1% metaslab 14 offset 1c000000000 spacemap 92 free 1.52G segments 6810 maxsize 1.46G freepct 1% metaslab 15 offset 1e000000000 spacemap 93 free 1.52G segments 8306 maxsize 1.41G freepct 1% metaslab 16 offset 20000000000 spacemap 94 free 1.49G segments 21881 maxsize 660M freepct 1% metaslab 17 offset 22000000000 spacemap 97 free 1.50G segments 9590 maxsize 1.32G freepct 1% metaslab 18 offset 24000000000 spacemap 98 free 1.53G segments 6921 maxsize 1.47G freepct 1% metaslab 19 offset 26000000000 spacemap 99 free 1.53G segments 7982 maxsize 1.46G freepct 1% metaslab 20 offset 28000000000 spacemap 100 free 1.55G segments 7943 maxsize 1.48G freepct 1% metaslab 21 offset 2a000000000 spacemap 101 free 1.54G segments 8049 maxsize 1.47G freepct 1% metaslab 22 offset 2c000000000 spacemap 102 free 1.54G segments 8205 maxsize 1.46G freepct 1% metaslab 23 offset 2e000000000 spacemap 103 free 1.53G segments 11339 maxsize 1.37G freepct 1% metaslab 24 offset 30000000000 spacemap 104 free 1.55G segments 11536 maxsize 1.38G freepct 1% metaslab 25 offset 32000000000 spacemap 105 free 1.58G segments 7281 maxsize 1.50G freepct 1% metaslab 26 offset 34000000000 spacemap 108 free 1.57G segments 7917 maxsize 1.49G 
freepct 1% metaslab 27 offset 36000000000 spacemap 109 free 1.59G segments 8446 maxsize 1.51G freepct 1% metaslab 28 offset 38000000000 spacemap 110 free 1.59G segments 8437 maxsize 1.51G freepct 1% metaslab 29 offset 3a000000000 spacemap 35 free 1.60G segments 22991 maxsize 1.21G freepct 1% metaslab 30 offset 3c000000000 spacemap 111 free 1.57G segments 8358 maxsize 1.49G freepct 1% metaslab 31 offset 3e000000000 spacemap 112 free 1.62G segments 7724 maxsize 1.53G freepct 1% metaslab 32 offset 40000000000 spacemap 113 free 1.62G segments 8314 maxsize 1.53G freepct 1% metaslab 33 offset 42000000000 spacemap 114 free 1.61G segments 8294 maxsize 1.52G freepct 1% metaslab 34 offset 44000000000 spacemap 115 free 1.63G segments 8422 maxsize 1.54G freepct 1% metaslab 35 offset 46000000000 spacemap 116 free 1.60G segments 8125 maxsize 1.51G freepct 1% metaslab 36 offset 48000000000 spacemap 117 free 1.62G segments 8048 maxsize 1.53G freepct 1% metaslab 37 offset 4a000000000 spacemap 118 free 1.60G segments 8648 maxsize 1.51G freepct 1% metaslab 38 offset 4c000000000 spacemap 119 free 1.66G segments 8316 maxsize 1.57G freepct 1% metaslab 39 offset 4e000000000 spacemap 87 free 1.62G segments 27153 maxsize 1.05G freepct 1% metaslab 40 offset 50000000000 spacemap 121 free 1.66G segments 9314 maxsize 1.56G freepct 1% metaslab 41 offset 52000000000 spacemap 122 free 1.64G segments 9334 maxsize 1.56G freepct 1% metaslab 42 offset 54000000000 spacemap 123 free 1.67G segments 10391 maxsize 1.51G freepct 1% metaslab 43 offset 56000000000 spacemap 126 free 1.65G segments 12514 maxsize 1.49G freepct 1% metaslab 44 offset 58000000000 spacemap 127 free 1.67G segments 13441 maxsize 1.50G freepct 1% metaslab 45 offset 5a000000000 spacemap 129 free 1.70G segments 13288 maxsize 1.54G freepct 1% metaslab 46 offset 5c000000000 spacemap 96 free 1.71G segments 27184 maxsize 1.07G freepct 1% metaslab 47 offset 5e000000000 spacemap 130 free 1.69G segments 10019 maxsize 1.61G freepct 1% metaslab 48 offset 60000000000 spacemap 133 free 1.69G segments 13025 maxsize 1.53G freepct 1% metaslab 49 offset 62000000000 spacemap 135 free 1.71G segments 10562 maxsize 1.63G freepct 1% metaslab 50 offset 64000000000 spacemap 136 free 1.74G segments 9827 maxsize 1.66G freepct 1% metaslab 51 offset 66000000000 spacemap 137 free 1.73G segments 10206 maxsize 1.65G freepct 1% metaslab 52 offset 68000000000 spacemap 138 free 1.75G segments 9747 maxsize 1.67G freepct 1% metaslab 53 offset 6a000000000 spacemap 139 free 1.76G segments 14248 maxsize 1.57G freepct 1% metaslab 54 offset 6c000000000 spacemap 107 free 1.76G segments 29803 maxsize 987M freepct 1% metaslab 55 offset 6e000000000 spacemap 142 free 1.76G segments 9068 maxsize 1.68G freepct 1% metaslab 56 offset 70000000000 spacemap 143 free 1.76G segments 10561 maxsize 1.68G freepct 1% metaslab 57 offset 72000000000 spacemap 144 free 1.78G segments 10234 maxsize 1.70G freepct 1% metaslab 58 offset 74000000000 spacemap 34 free 1.79G segments 12737 maxsize 1.49G freepct 1% metaslab 59 offset 76000000000 spacemap 145 free 1.80G segments 10211 maxsize 1.71G freepct 1% metaslab 60 offset 78000000000 spacemap 146 free 1.77G segments 10696 maxsize 1.68G freepct 1% metaslab 61 offset 7a000000000 spacemap 147 free 1.81G segments 10934 maxsize 1.71G freepct 1% metaslab 62 offset 7c000000000 spacemap 148 free 1.81G segments 8698 maxsize 1.73G freepct 1% metaslab 63 offset 7e000000000 spacemap 149 free 1.82G segments 9165 maxsize 1.74G freepct 1% metaslab 64 offset 80000000000 spacemap 152 free 
1.83G segments 9388 maxsize 1.74G freepct 1% metaslab 65 offset 82000000000 spacemap 154 free 1.84G segments 11321 maxsize 1.74G freepct 1% metaslab 66 offset 84000000000 spacemap 155 free 1.85G segments 10040 maxsize 1.76G freepct 1% metaslab 67 offset 86000000000 spacemap 156 free 1.86G segments 10531 maxsize 1.77G freepct 1% metaslab 68 offset 88000000000 spacemap 86 free 1.84G segments 8518 maxsize 1.73G freepct 1% metaslab 69 offset 8a000000000 spacemap 120 free 1.87G segments 18100 maxsize 1.51G freepct 1% metaslab 70 offset 8c000000000 spacemap 157 free 1.88G segments 12773 maxsize 1.70G freepct 1% metaslab 71 offset 8e000000000 spacemap 159 free 1.89G segments 11443 maxsize 1.79G freepct 1% metaslab 72 offset 90000000000 spacemap 125 free 1.90G segments 13633 maxsize 1.72G freepct 1% metaslab 73 offset 92000000000 spacemap 160 free 1.91G segments 10724 maxsize 1.81G freepct 1% metaslab 74 offset 94000000000 spacemap 161 free 1.92G segments 10550 maxsize 1.77G freepct 1% metaslab 75 offset 96000000000 spacemap 162 free 1.92G segments 10027 maxsize 1.83G freepct 1% metaslab 76 offset 98000000000 spacemap 132 free 1.93G segments 16007 maxsize 1.69G freepct 1% metaslab 77 offset 9a000000000 spacemap 164 free 1.94G segments 9721 maxsize 1.84G freepct 1% metaslab 78 offset 9c000000000 spacemap 134 free 1.95G segments 26262 maxsize 1.35G freepct 1% metaslab 79 offset 9e000000000 spacemap 165 free 1.95G segments 7968 maxsize 1.88G freepct 1% metaslab 80 offset a0000000000 spacemap 166 free 1.97G segments 7757 maxsize 1.89G freepct 1% metaslab 81 offset a2000000000 spacemap 167 free 1.97G segments 9206 maxsize 1.89G freepct 1% metaslab 82 offset a4000000000 spacemap 168 free 1.98G segments 9225 maxsize 1.89G freepct 1% metaslab 83 offset a6000000000 spacemap 106 free 1.99G segments 24197 maxsize 1.43G freepct 1% metaslab 84 offset a8000000000 spacemap 169 free 2.00G segments 9637 maxsize 1.91G freepct 1% metaslab 85 offset aa000000000 spacemap 170 free 2.01G segments 10167 maxsize 1.92G freepct 1% metaslab 86 offset ac000000000 spacemap 171 free 2.02G segments 12180 maxsize 1.85G freepct 1% metaslab 87 offset ae000000000 spacemap 174 free 2.02G segments 9716 maxsize 1.93G freepct 1% metaslab 88 offset b0000000000 spacemap 175 free 2.04G segments 10583 maxsize 1.94G freepct 1% metaslab 89 offset b2000000000 spacemap 176 free 2.05G segments 9935 maxsize 1.95G freepct 1% metaslab 90 offset b4000000000 spacemap 177 free 2.06G segments 10459 maxsize 1.96G freepct 1% metaslab 91 offset b6000000000 spacemap 178 free 2.07G segments 9396 maxsize 1.98G freepct 1% metaslab 92 offset b8000000000 spacemap 179 free 2.07G segments 8301 maxsize 1.96G freepct 1% metaslab 93 offset ba000000000 spacemap 151 free 2.08G segments 17800 maxsize 1.52G freepct 1% metaslab 94 offset bc000000000 spacemap 181 free 2.10G segments 10951 maxsize 2.00G freepct 1% metaslab 95 offset be000000000 spacemap 182 free 2.11G segments 11002 maxsize 2.01G freepct 1% metaslab 96 offset c0000000000 spacemap 183 free 2.12G segments 10855 maxsize 2.01G freepct 1% metaslab 97 offset c2000000000 spacemap 95 free 2.13G segments 13168 maxsize 1.96G freepct 1% metaslab 98 offset c4000000000 spacemap 184 free 2.13G segments 9408 maxsize 2.04G freepct 1% metaslab 99 offset c6000000000 spacemap 185 free 2.15G segments 10694 maxsize 2.04G freepct 1% metaslab 100 offset c8000000000 spacemap 186 free 2.16G segments 10563 maxsize 2.05G freepct 1% metaslab 101 offset ca000000000 spacemap 187 free 2.17G segments 11059 maxsize 2.06G freepct 1% 
metaslab 102 offset cc000000000 spacemap 188 free 2.18G segments 11516 maxsize 2.07G freepct 1% metaslab 103 offset ce000000000 spacemap 189 free 2.19G segments 10700 maxsize 2.08G freepct 1% metaslab 104 offset d0000000000 spacemap 163 free 2.20G segments 7869 maxsize 1.77G freepct 1% metaslab 105 offset d2000000000 spacemap 131 free 2.22G segments 11941 maxsize 2.08G freepct 1% metaslab 106 offset d4000000000 spacemap 190 free 9.24G segments 11407 maxsize 2.11G freepct 7% metaslab 107 offset d6000000000 spacemap 141 free 2.24G segments 13038 maxsize 2.00G freepct 1% metaslab 108 offset d8000000000 spacemap 192 free 9.7G segments 10472 maxsize 2.13G freepct 7% metaslab 109 offset da000000000 spacemap 193 free 8.61G segments 11389 maxsize 2.14G freepct 6% metaslab 110 offset dc000000000 spacemap 194 free 6.34G segments 10513 maxsize 2.16G freepct 4% metaslab 111 offset de000000000 spacemap 195 free 7.58G segments 10168 maxsize 2.17G freepct 5% metaslab 112 offset e0000000000 spacemap 197 free 7.14G segments 6922 maxsize 2.22G freepct 5% metaslab 113 offset e2000000000 spacemap 198 free 2.30G segments 6757 maxsize 2.24G freepct 1% metaslab 114 offset e4000000000 spacemap 199 free 2.32G segments 7008 maxsize 2.26G freepct 1% metaslab 115 offset e6000000000 spacemap 200 free 2.33G segments 6518 maxsize 2.27G freepct 1% metaslab 116 offset e8000000000 spacemap 173 free 2.35G segments 13857 maxsize 1.92G freepct 1% metaslab 117 offset ea000000000 spacemap 201 free 2.36G segments 7124 maxsize 2.30G freepct 1% metaslab 118 offset ec000000000 spacemap 202 free 2.37G segments 6936 maxsize 2.31G freepct 1% metaslab 119 offset ee000000000 spacemap 203 free 2.38G segments 6679 maxsize 2.32G freepct 1% metaslab 120 offset f0000000000 spacemap 204 free 2.39G segments 6818 maxsize 2.34G freepct 1% metaslab 121 offset f2000000000 spacemap 205 free 2.41G segments 7423 maxsize 2.34G freepct 1% metaslab 122 offset f4000000000 spacemap 150 free 2.42G segments 15678 maxsize 2.14G freepct 1% metaslab 123 offset f6000000000 spacemap 158 free 2.43G segments 9980 maxsize 2.28G freepct 1% metaslab 124 offset f8000000000 spacemap 180 free 2.45G segments 11702 maxsize 1.71G freepct 1% metaslab 125 offset fa000000000 spacemap 206 free 2.46G segments 7070 maxsize 2.40G freepct 1% metaslab 126 offset fc000000000 spacemap 124 free 2.47G segments 11485 maxsize 2.31G freepct 1% metaslab 127 offset fe000000000 spacemap 128 free 2.49G segments 2051 maxsize 2.42G freepct 1% metaslab 128 offset 100000000000 spacemap 207 free 2.50G segments 7309 maxsize 2.44G freepct 1% metaslab 129 offset 102000000000 spacemap 208 free 2.52G segments 7151 maxsize 2.46G freepct 1% metaslab 130 offset 104000000000 spacemap 209 free 2.52G segments 6041 maxsize 2.47G freepct 1% metaslab 131 offset 106000000000 spacemap 210 free 2.54G segments 6910 maxsize 2.47G freepct 1% metaslab 132 offset 108000000000 spacemap 211 free 2.56G segments 6816 maxsize 2.50G freepct 1% metaslab 133 offset 10a000000000 spacemap 212 free 2.57G segments 6182 maxsize 2.51G freepct 2% metaslab 134 offset 10c000000000 spacemap 213 free 2.59G segments 7541 maxsize 2.52G freepct 2% metaslab 135 offset 10e000000000 spacemap 214 free 2.61G segments 7810 maxsize 2.53G freepct 2% metaslab 136 offset 110000000000 spacemap 215 free 2.62G segments 6822 maxsize 2.56G freepct 2% metaslab 137 offset 112000000000 spacemap 191 free 9.39G segments 10891 maxsize 2.28G freepct 7% metaslab 138 offset 114000000000 spacemap 216 free 2.65G segments 7295 maxsize 2.58G freepct 2% metaslab 139 
offset 116000000000 spacemap 217 free 2.67G segments 7435 maxsize 2.60G freepct 2% metaslab 140 offset 118000000000 spacemap 218 free 2.68G segments 6952 maxsize 2.62G freepct 2% metaslab 141 offset 11a000000000 spacemap 196 free 2.70G segments 5975 maxsize 2.37G freepct 2% metaslab 142 offset 11c000000000 spacemap 219 free 2.72G segments 7547 maxsize 2.65G freepct 2% metaslab 143 offset 11e000000000 spacemap 220 free 2.74G segments 7570 maxsize 2.67G freepct 2% metaslab 144 offset 120000000000 spacemap 221 free 2.75G segments 7321 maxsize 2.69G freepct 2% metaslab 145 offset 122000000000 spacemap 172 free 2.77G segments 3490 maxsize 2.68G freepct 2% metaslab 146 offset 124000000000 spacemap 222 free 2.79G segments 7448 maxsize 2.72G freepct 2% metaslab 147 offset 126000000000 spacemap 223 free 2.81G segments 7309 maxsize 2.74G freepct 2% metaslab 148 offset 128000000000 spacemap 224 free 2.82G segments 7367 maxsize 2.75G freepct 2% metaslab 149 offset 12a000000000 spacemap 225 free 2.84G segments 7249 maxsize 2.77G freepct 2% metaslab 150 offset 12c000000000 spacemap 226 free 2.86G segments 8091 maxsize 2.78G freepct 2% metaslab 151 offset 12e000000000 spacemap 153 free 2.88G segments 2771 maxsize 2.74G freepct 2% vdev 1 metaslabs 105 offset spacemap free --------------- ------------------- --------------- ------------- metaslab 0 offset 0 spacemap 38 free 128M segments 1 maxsize 128M freepct 99% metaslab 1 offset 8000000 spacemap 39 free 128M segments 1 maxsize 128M freepct 100% metaslab 2 offset 10000000 spacemap 42 free 128M segments 1 maxsize 128M freepct 100% metaslab 3 offset 18000000 spacemap 41 free 128M segments 1 maxsize 128M freepct 100% metaslab 4 offset 20000000 spacemap 51 free 128M segments 1 maxsize 128M freepct 100% metaslab 5 offset 28000000 spacemap 50 free 128M segments 1 maxsize 128M freepct 100% metaslab 6 offset 30000000 spacemap 49 free 128M segments 1 maxsize 128M freepct 100% metaslab 7 offset 38000000 spacemap 48 free 128M segments 1 maxsize 128M freepct 100% metaslab 8 offset 40000000 spacemap 47 free 128M segments 1 maxsize 128M freepct 100% metaslab 9 offset 48000000 spacemap 46 free 128M segments 1 maxsize 128M freepct 100% metaslab 10 offset 50000000 spacemap 45 free 128M segments 1 maxsize 128M freepct 100% metaslab 11 offset 58000000 spacemap 59 free 128M segments 1 maxsize 128M freepct 100% metaslab 12 offset 60000000 spacemap 62 free 128M segments 1 maxsize 128M freepct 100% metaslab 13 offset 68000000 spacemap 61 free 128M segments 1 maxsize 128M freepct 100% metaslab 14 offset 70000000 spacemap 67 free 128M segments 1 maxsize 128M freepct 100% metaslab 15 offset 78000000 spacemap 66 free 128M segments 1 maxsize 128M freepct 100% metaslab 16 offset 80000000 spacemap 74 free 128M segments 1 maxsize 128M freepct 100% metaslab 17 offset 88000000 spacemap 73 free 128M segments 1 maxsize 128M freepct 100% metaslab 18 offset 90000000 spacemap 75 free 128M segments 1 maxsize 128M freepct 100% metaslab 19 offset 98000000 spacemap 78 free 128M segments 1 maxsize 128M freepct 100% metaslab 20 offset a0000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 21 offset a8000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 22 offset b0000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 23 offset b8000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 24 offset c0000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 25 offset c8000000 spacemap 0 free 128M segments 1 
maxsize 128M freepct 100% metaslab 26 offset d0000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 27 offset d8000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 28 offset e0000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 29 offset e8000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 30 offset f0000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 31 offset f8000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 32 offset 100000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 33 offset 108000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 34 offset 110000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 35 offset 118000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 36 offset 120000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 37 offset 128000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 38 offset 130000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 39 offset 138000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 40 offset 140000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 41 offset 148000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 42 offset 150000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 43 offset 158000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 44 offset 160000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 45 offset 168000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 46 offset 170000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 47 offset 178000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 48 offset 180000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 49 offset 188000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 50 offset 190000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 51 offset 198000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 52 offset 1a0000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 53 offset 1a8000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 54 offset 1b0000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 55 offset 1b8000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 56 offset 1c0000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 57 offset 1c8000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 58 offset 1d0000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 59 offset 1d8000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 60 offset 1e0000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 61 offset 1e8000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 62 offset 1f0000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 63 offset 1f8000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 64 offset 200000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 65 offset 208000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 66 offset 210000000 spacemap 0 free 128M 
segments 1 maxsize 128M freepct 100% metaslab 67 offset 218000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 68 offset 220000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 69 offset 228000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 70 offset 230000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 71 offset 238000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 72 offset 240000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 73 offset 248000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 74 offset 250000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 75 offset 258000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 76 offset 260000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 77 offset 268000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 78 offset 270000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 79 offset 278000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 80 offset 280000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 81 offset 288000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 82 offset 290000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 83 offset 298000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 84 offset 2a0000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 85 offset 2a8000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 86 offset 2b0000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 87 offset 2b8000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 88 offset 2c0000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 89 offset 2c8000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 90 offset 2d0000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 91 offset 2d8000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 92 offset 2e0000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 93 offset 2e8000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 94 offset 2f0000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 95 offset 2f8000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 96 offset 300000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 97 offset 308000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 98 offset 310000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 99 offset 318000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 100 offset 320000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 101 offset 328000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 102 offset 330000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 103 offset 338000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 104 offset 340000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% vdev 2 metaslabs 105 offset spacemap free --------------- ------------------- --------------- ------------- metaslab 0 offset 0 spacemap 37 free 128M segments 1 maxsize 128M freepct 100% metaslab 1 offset 
8000000 spacemap 40 free 128M segments 1 maxsize 128M freepct 100% metaslab 2 offset 10000000 spacemap 44 free 128M segments 1 maxsize 128M freepct 100% metaslab 3 offset 18000000 spacemap 43 free 128M segments 1 maxsize 128M freepct 100% metaslab 4 offset 20000000 spacemap 58 free 128M segments 1 maxsize 128M freepct 100% metaslab 5 offset 28000000 spacemap 57 free 128M segments 1 maxsize 128M freepct 100% metaslab 6 offset 30000000 spacemap 56 free 128M segments 1 maxsize 128M freepct 100% metaslab 7 offset 38000000 spacemap 55 free 128M segments 1 maxsize 128M freepct 100% metaslab 8 offset 40000000 spacemap 54 free 128M segments 1 maxsize 128M freepct 100% metaslab 9 offset 48000000 spacemap 53 free 128M segments 1 maxsize 128M freepct 100% metaslab 10 offset 50000000 spacemap 52 free 128M segments 1 maxsize 128M freepct 100% metaslab 11 offset 58000000 spacemap 60 free 128M segments 1 maxsize 128M freepct 100% metaslab 12 offset 60000000 spacemap 64 free 128M segments 1 maxsize 128M freepct 100% metaslab 13 offset 68000000 spacemap 63 free 128M segments 1 maxsize 128M freepct 100% metaslab 14 offset 70000000 spacemap 69 free 128M segments 1 maxsize 128M freepct 100% metaslab 15 offset 78000000 spacemap 68 free 128M segments 1 maxsize 128M freepct 100% metaslab 16 offset 80000000 spacemap 72 free 128M segments 1 maxsize 128M freepct 100% metaslab 17 offset 88000000 spacemap 71 free 128M segments 1 maxsize 128M freepct 100% metaslab 18 offset 90000000 spacemap 76 free 128M segments 1 maxsize 128M freepct 100% metaslab 19 offset 98000000 spacemap 79 free 128M segments 1 maxsize 128M freepct 100% metaslab 20 offset a0000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 21 offset a8000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 22 offset b0000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 23 offset b8000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 24 offset c0000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 25 offset c8000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 26 offset d0000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 27 offset d8000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 28 offset e0000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 29 offset e8000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 30 offset f0000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 31 offset f8000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 32 offset 100000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 33 offset 108000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 34 offset 110000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 35 offset 118000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 36 offset 120000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 37 offset 128000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 38 offset 130000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 39 offset 138000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 40 offset 140000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 41 offset 148000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 42 offset 
150000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 43 offset 158000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 44 offset 160000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 45 offset 168000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 46 offset 170000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 47 offset 178000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 48 offset 180000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 49 offset 188000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 50 offset 190000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 51 offset 198000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 52 offset 1a0000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 53 offset 1a8000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 54 offset 1b0000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 55 offset 1b8000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 56 offset 1c0000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 57 offset 1c8000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 58 offset 1d0000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 59 offset 1d8000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 60 offset 1e0000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 61 offset 1e8000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 62 offset 1f0000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 63 offset 1f8000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 64 offset 200000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 65 offset 208000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 66 offset 210000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 67 offset 218000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 68 offset 220000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 69 offset 228000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 70 offset 230000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 71 offset 238000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 72 offset 240000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 73 offset 248000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 74 offset 250000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 75 offset 258000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 76 offset 260000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 77 offset 268000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 78 offset 270000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 79 offset 278000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 80 offset 280000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 81 offset 288000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 82 offset 290000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% 
metaslab 83 offset 298000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 84 offset 2a0000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 85 offset 2a8000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 86 offset 2b0000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 87 offset 2b8000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 88 offset 2c0000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 89 offset 2c8000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 90 offset 2d0000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 91 offset 2d8000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 92 offset 2e0000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 93 offset 2e8000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 94 offset 2f0000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 95 offset 2f8000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 96 offset 300000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 97 offset 308000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 98 offset 310000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 99 offset 318000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 100 offset 320000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 101 offset 328000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 102 offset 330000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 103 offset 338000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% metaslab 104 offset 340000000 spacemap 0 free 128M segments 1 maxsize 128M freepct 100% Dataset mos [META], ID 0, cr_txg 4, 459M, 235 objects Object lvl iblk dblk dsize lsize %full type 0 2 16K 16K 256K 128K 91.80 DMU dnode 1 1 16K 16K 19.0K 32K 100.00 object directory 2 1 16K 512 0 512 0.00 DSL directory 3 1 16K 512 3.00K 512 100.00 DSL props 4 1 16K 512 3.00K 512 100.00 DSL directory child map 5 1 16K 512 0 512 0.00 DSL directory 6 1 16K 512 3.00K 512 100.00 DSL props 7 1 16K 512 3.00K 512 100.00 DSL directory child map 8 1 16K 512 0 512 0.00 DSL directory 9 1 16K 512 3.00K 512 100.00 DSL props 10 1 16K 512 3.00K 512 100.00 DSL directory child map 11 1 16K 128K 0 128K 0.00 bpobj 12 1 16K 512 0 512 0.00 DSL directory 13 1 16K 512 3.00K 512 100.00 DSL props 14 1 16K 512 3.00K 512 100.00 DSL directory child map 15 1 16K 512 0 512 0.00 DSL dataset 16 1 16K 512 3.00K 512 100.00 DSL dataset snap map 17 1 16K 512 3.00K 512 100.00 DSL deadlist map 18 1 16K 512 0 512 0.00 DSL dataset 19 1 16K 512 3.00K 512 100.00 DSL deadlist map 20 1 16K 128K 0 128K 0.00 bpobj 21 1 16K 512 0 512 0.00 DSL dataset 22 1 16K 512 3.00K 512 100.00 DSL dataset snap map 23 1 16K 512 3.00K 512 100.00 DSL deadlist map 24 1 16K 128K 0 128K 0.00 bpobj 25 1 16K 512 3.00K 512 100.00 DSL dataset next clones 26 1 16K 512 3.00K 512 100.00 DSL dir clones 27 1 16K 16K 51.0K 16K 100.00 packed nvlist 28 1 16K 16K 51.0K 16K 100.00 bpobj (Z=uncompressed) 29 1 16K 128K 16K 128K 100.00 SPA history 30 1 16K 16K 6.50K 16K 100.00 packed nvlist 31 1 16K 512 3.00K 512 100.00 object array 32 1 16K 512 3.00K 512 100.00 object array 33 1 16K 512 9.5K 1.50K 100.00 object array 34 2 16K 4K 467K 176K 100.00 SPA space map 35 2 16K 4K 851K 296K 100.00 
SPA space map 36 2 16K 4K 275K 88.0K 100.00 SPA space map 37 1 16K 4K 0 4K 0.00 SPA space map 38 1 16K 4K 3.00K 4K 100.00 SPA space map 39 1 16K 4K 0 4K 0.00 SPA space map 40 1 16K 4K 0 4K 0.00 SPA space map 41 1 16K 4K 0 4K 0.00 SPA space map 42 1 16K 4K 0 4K 0.00 SPA space map 43 1 16K 4K 0 4K 0.00 SPA space map 44 1 16K 4K 0 4K 0.00 SPA space map 45 1 16K 4K 0 4K 0.00 SPA space map 46 1 16K 4K 0 4K 0.00 SPA space map 47 1 16K 4K 3.00K 4K 100.00 SPA space map 48 1 16K 4K 0 4K 0.00 SPA space map 49 1 16K 4K 0 4K 0.00 SPA space map 50 1 16K 4K 0 4K 0.00 SPA space map 51 1 16K 4K 0 4K 0.00 SPA space map 52 1 16K 4K 0 4K 0.00 SPA space map 53 1 16K 4K 0 4K 0.00 SPA space map 54 1 16K 4K 3.00K 4K 100.00 SPA space map 55 1 16K 4K 0 4K 0.00 SPA space map 56 1 16K 4K 0 4K 0.00 SPA space map 57 1 16K 4K 0 4K 0.00 SPA space map 58 1 16K 4K 0 4K 0.00 SPA space map 59 1 16K 4K 0 4K 0.00 SPA space map 60 1 16K 4K 0 4K 0.00 SPA space map 61 1 16K 4K 0 4K 0.00 SPA space map 62 1 16K 4K 0 4K 0.00 SPA space map 63 1 16K 4K 0 4K 0.00 SPA space map 64 1 16K 4K 0 4K 0.00 SPA space map 65 2 16K 4K 291K 92.0K 100.00 SPA space map 66 1 16K 4K 0 4K 0.00 SPA space map 67 1 16K 4K 0 4K 0.00 SPA space map 68 1 16K 4K 0 4K 0.00 SPA space map 69 1 16K 4K 0 4K 0.00 SPA space map 70 2 16K 4K 311K 100K 100.00 SPA space map 71 1 16K 4K 0 4K 0.00 SPA space map 72 1 16K 4K 0 4K 0.00 SPA space map 73 1 16K 4K 0 4K 0.00 SPA space map 74 1 16K 4K 0 4K 0.00 SPA space map 75 1 16K 4K 0 4K 0.00 SPA space map 76 1 16K 4K 0 4K 0.00 SPA space map 77 2 16K 4K 336K 104K 100.00 SPA space map 78 1 16K 4K 0 4K 0.00 SPA space map 79 1 16K 4K 0 4K 0.00 SPA space map 80 2 16K 4K 330K 104K 100.00 SPA space map 81 2 16K 4K 333K 104K 100.00 SPA space map 82 2 16K 4K 339K 108K 100.00 SPA space map 83 2 16K 4K 343K 108K 100.00 SPA space map 84 2 16K 4K 368K 116K 100.00 SPA space map 85 2 16K 4K 339K 108K 100.00 SPA space map 86 2 16K 4K 384K 120K 100.00 SPA space map 87 2 16K 4K 1.15M 368K 100.00 SPA space map 88 2 16K 4K 346K 108K 100.00 SPA space map 89 2 16K 4K 339K 108K 100.00 SPA space map 90 2 16K 4K 349K 108K 100.00 SPA space map 91 2 16K 4K 339K 108K 100.00 SPA space map 92 2 16K 4K 365K 116K 100.00 SPA space map 93 2 16K 4K 391K 120K 100.00 SPA space map 94 2 16K 4K 867K 268K 100.00 SPA space map 95 2 16K 4K 506K 156K 100.00 SPA space map 96 2 16K 4K 858K 268K 100.00 SPA space map 97 2 16K 4K 490K 152K 100.00 SPA space map 98 2 16K 4K 346K 112K 100.00 SPA space map 99 2 16K 4K 371K 116K 100.00 SPA space map 100 2 16K 4K 362K 112K 100.00 SPA space map 101 2 16K 4K 384K 120K 100.00 SPA space map 102 2 16K 4K 400K 124K 100.00 SPA space map 103 2 16K 4K 522K 160K 100.00 SPA space map 104 2 16K 4K 477K 148K 100.00 SPA space map 105 2 16K 4K 346K 112K 100.00 SPA space map 106 2 16K 4K 774K 240K 100.00 SPA space map 107 2 16K 4K 906K 312K 100.00 SPA space map 108 2 16K 4K 359K 112K 100.00 SPA space map 109 2 16K 4K 387K 120K 100.00 SPA space map 110 2 16K 4K 387K 120K 100.00 SPA space map 111 2 16K 4K 413K 128K 100.00 SPA space map 112 2 16K 4K 375K 120K 100.00 SPA space map 113 2 16K 4K 378K 116K 100.00 SPA space map 114 2 16K 4K 410K 128K 100.00 SPA space map 115 2 16K 4K 400K 124K 100.00 SPA space map 116 2 16K 4K 381K 116K 100.00 SPA space map 117 2 16K 4K 387K 120K 100.00 SPA space map 118 2 16K 4K 397K 124K 100.00 SPA space map 119 2 16K 4K 394K 120K 100.00 SPA space map 120 2 16K 4K 826K 256K 100.00 SPA space map 121 2 16K 4K 407K 124K 100.00 SPA space map 122 2 16K 4K 429K 132K 100.00 SPA space map 123 2 16K 4K 451K 140K 100.00 SPA 
space map 124 2 16K 4K 461K 144K 100.00 SPA space map 125 2 16K 4K 519K 160K 100.00 SPA space map 126 2 16K 4K 477K 148K 100.00 SPA space map 127 2 16K 4K 605K 188K 100.00 SPA space map 128 2 16K 4K 227K 100K 100.00 SPA space map 129 2 16K 4K 528K 164K 100.00 SPA space map 130 2 16K 4K 426K 132K 100.00 SPA space map 131 2 16K 4K 528K 164K 100.00 SPA space map 132 2 16K 4K 589K 184K 100.00 SPA space map 133 2 16K 4K 605K 188K 100.00 SPA space map 134 2 16K 4K 790K 276K 100.00 SPA space map 135 2 16K 4K 439K 136K 100.00 SPA space map 136 2 16K 4K 423K 132K 100.00 SPA space map 137 2 16K 4K 455K 140K 100.00 SPA space map 138 2 16K 4K 410K 128K 100.00 SPA space map 139 2 16K 4K 528K 164K 100.00 SPA space map 141 2 16K 4K 531K 164K 100.00 SPA space map 142 2 16K 4K 397K 124K 100.00 SPA space map 143 2 16K 4K 439K 136K 100.00 SPA space map 144 2 16K 4K 423K 132K 100.00 SPA space map 145 2 16K 4K 419K 132K 100.00 SPA space map 146 2 16K 4K 477K 148K 100.00 SPA space map 147 2 16K 4K 461K 144K 100.00 SPA space map 148 2 16K 4K 384K 120K 100.00 SPA space map 149 2 16K 4K 403K 124K 100.00 SPA space map 150 2 16K 4K 560K 172K 100.00 SPA space map 151 2 16K 4K 586K 212K 100.00 SPA space map 152 2 16K 4K 407K 128K 100.00 SPA space map 153 2 16K 4K 183K 88.0K 100.00 SPA space map 154 2 16K 4K 506K 156K 100.00 SPA space map 155 2 16K 4K 506K 156K 100.00 SPA space map 156 2 16K 4K 432K 136K 100.00 SPA space map 157 2 16K 4K 499K 156K 100.00 SPA space map 158 2 16K 4K 442K 140K 100.00 SPA space map 159 2 16K 4K 448K 140K 100.00 SPA space map 160 2 16K 4K 435K 136K 100.00 SPA space map 161 2 16K 4K 471K 148K 100.00 SPA space map 162 2 16K 4K 435K 140K 100.00 SPA space map 163 2 16K 4K 343K 136K 100.00 SPA space map 164 2 16K 4K 426K 132K 100.00 SPA space map 165 2 16K 4K 365K 124K 100.00 SPA space map 166 2 16K 4K 371K 120K 100.00 SPA space map 167 2 16K 4K 416K 136K 100.00 SPA space map 168 2 16K 4K 426K 132K 100.00 SPA space map 169 2 16K 4K 419K 132K 100.00 SPA space map 170 2 16K 4K 461K 144K 100.00 SPA space map 171 2 16K 4K 541K 156K 100.00 SPA space map 172 2 16K 4K 237K 100K 100.00 SPA space map 173 2 16K 4K 531K 176K 100.00 SPA space map 174 2 16K 4K 426K 136K 100.00 SPA space map 175 2 16K 4K 445K 140K 100.00 SPA space map 176 2 16K 4K 426K 136K 100.00 SPA space map 177 2 16K 4K 451K 140K 100.00 SPA space map 178 2 16K 4K 423K 136K 100.00 SPA space map 179 2 16K 4K 413K 136K 100.00 SPA space map 180 2 16K 4K 439K 164K 100.00 SPA space map 181 2 16K 4K 471K 148K 100.00 SPA space map 182 2 16K 4K 458K 144K 100.00 SPA space map 183 2 16K 4K 471K 148K 100.00 SPA space map 184 2 16K 4K 419K 132K 100.00 SPA space map 185 2 16K 4K 490K 152K 100.00 SPA space map 186 2 16K 4K 445K 140K 100.00 SPA space map 187 2 16K 4K 464K 144K 100.00 SPA space map 188 2 16K 4K 467K 148K 100.00 SPA space map 189 2 16K 4K 451K 140K 100.00 SPA space map 190 2 16K 4K 522K 164K 100.00 SPA space map 191 2 16K 4K 503K 144K 100.00 SPA space map 192 2 16K 4K 544K 156K 100.00 SPA space map 193 2 16K 4K 535K 168K 100.00 SPA space map 194 2 16K 4K 487K 152K 100.00 SPA space map 195 2 16K 4K 429K 136K 100.00 SPA space map 196 2 16K 4K 291K 120K 100.00 SPA space map 197 2 16K 4K 339K 112K 100.00 SPA space map 198 2 16K 4K 359K 124K 100.00 SPA space map 199 2 16K 4K 311K 120K 100.00 SPA space map 200 2 16K 4K 314K 116K 100.00 SPA space map 201 2 16K 4K 336K 128K 100.00 SPA space map 202 2 16K 4K 317K 120K 100.00 SPA space map 203 2 16K 4K 327K 120K 100.00 SPA space map 204 2 16K 4K 346K 124K 100.00 SPA space map 205 2 16K 4K 323K 124K 
100.00 SPA space map 206 2 16K 4K 339K 116K 100.00 SPA space map 207 2 16K 4K 349K 124K 100.00 SPA space map 208 2 16K 4K 333K 124K 100.00 SPA space map 209 2 16K 4K 291K 112K 100.00 SPA space map 210 2 16K 4K 336K 120K 100.00 SPA space map 211 2 16K 4K 320K 120K 100.00 SPA space map 212 2 16K 4K 291K 112K 100.00 SPA space map 213 2 16K 4K 349K 128K 100.00 SPA space map 214 2 16K 4K 397K 132K 100.00 SPA space map 215 2 16K 4K 330K 116K 100.00 SPA space map 216 2 16K 4K 320K 120K 100.00 SPA space map 217 2 16K 4K 394K 132K 100.00 SPA space map 218 2 16K 4K 336K 120K 100.00 SPA space map 219 2 16K 4K 327K 124K 100.00 SPA space map 220 2 16K 4K 349K 128K 100.00 SPA space map 221 2 16K 4K 391K 128K 100.00 SPA space map 222 2 16K 4K 339K 124K 100.00 SPA space map 223 2 16K 4K 339K 124K 100.00 SPA space map 224 2 16K 4K 365K 124K 100.00 SPA space map 225 2 16K 4K 346K 124K 100.00 SPA space map 226 2 16K 4K 400K 136K 100.00 SPA space map 228 3 16K 16K 395M 433M 99.95 persistent error log 229 1 16K 4K 0 4K 0.00 SPA space map 230 1 16K 4K 0 4K 0.00 SPA space map 231 1 16K 4K 0 4K 0.00 SPA space map 232 1 16K 4K 0 4K 0.00 SPA space map 233 1 16K 4K 0 4K 0.00 SPA space map 234 1 16K 4K 0 4K 0.00 SPA space map 235 1 16K 4K 0 4K 0.00 SPA space map 236 1 16K 4K 0 4K 0.00 SPA space map 237 1 16K 4K 0 4K 0.00 SPA space map Dataset tank2 [ZPL], ID 21, cr_txg 1, 13.3T, 37 objects ZIL header: claim_txg 0, claim_blk_seq 0, claim_lr_seq 0 replay_seq 0, flags 0x0 Object lvl iblk dblk dsize lsize %full type 0 7 16K 16K 40.5K 32K 57.81 DMU dnode -1 1 16K 512 2K 512 100.00 ZFS user/group used -2 1 16K 512 2K 512 100.00 ZFS user/group used 1 1 16K 512 2K 512 100.00 ZFS master node 2 1 16K 512 2K 512 100.00 SA master node 3 1 16K 512 2K 512 100.00 ZFS delete queue 4 1 16K 1.50K 2K 1.50K 100.00 ZFS directory 5 1 16K 1.50K 2K 1.50K 100.00 SA attr registration 6 1 16K 16K 10.5K 32K 100.00 SA attr layouts 7 1 16K 512 2K 512 100.00 ZFS directory 8 5 16K 128K 510G 510G 100.00 ZFS plain file 9 5 16K 128K 476G 476G 100.00 ZFS plain file 10 5 16K 128K 473G 473G 100.00 ZFS plain file 11 5 16K 128K 467G 467G 100.00 ZFS plain file 12 5 16K 128K 428G 428G 100.00 ZFS plain file 13 5 16K 128K 455G 455G 100.00 ZFS plain file 14 5 16K 128K 478G 478G 100.00 ZFS plain file 15 5 16K 128K 517G 517G 100.00 ZFS plain file 16 5 16K 128K 487G 487G 100.00 ZFS plain file 17 5 16K 128K 513G 513G 100.00 ZFS plain file 18 5 16K 128K 489G 489G 100.00 ZFS plain file 19 5 16K 128K 494G 493G 100.00 ZFS plain file 20 5 16K 128K 492G 492G 100.00 ZFS plain file 21 5 16K 128K 488G 487G 100.00 ZFS plain file 22 1 16K 1K 2K 1K 100.00 ZFS directory 23 4 16K 128K 107G 107G 100.00 ZFS plain file 24 4 16K 128K 92.4G 92.4G 100.00 ZFS plain file 25 4 16K 128K 97.2G 97.2G 100.00 ZFS plain file 26 4 16K 128K 0 128K 0.00 ZFS plain file 27 4 16K 128K 149G 149G 100.00 ZFS plain file 28 4 16K 128K 221G 221G 100.00 ZFS plain file 29 4 16K 128K 93.8G 93.8G 100.00 ZFS plain file 30 4 16K 128K 66.0G 66.0G 100.00 ZFS plain file 31 5 16K 128K 5.74T 5.74T 100.00 ZFS plain file 32 4 16K 128K 48.0G 48.0G 100.00 ZFS plain file 33 4 16K 128K 12.0G 12.0G 100.00 ZFS plain file 34 4 16K 128K 11.5G 11.5G 100.00 ZFS plain file 35 4 16K 128K 11.6G 11.5G 100.00 ZFS plain file 36 4 16K 128K 29.9G 29.8G 100.00 ZFS plain file 37 1 16K 512 0 512 0.00 ZFS plain file ================================================ This e.mail is private and confidential between Multiplay (UK) Ltd. and the person or entity to whom it is addressed. 
In the event of misdirection, the recipient is prohibited from using, copying, printing or otherwise disseminating it or any information contained in it. In the event of misdirection, illegible or incomplete transmission please telephone +44 845 868 1337 or return the E.mail to postmaster@multiplay.co.uk.

From owner-freebsd-fs@FreeBSD.ORG Thu Nov 1 02:56:19 2012
Return-Path:
Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id E9240596; Thu, 1 Nov 2012 02:56:19 +0000 (UTC) (envelope-from prvs=1652892d21=killing@multiplay.co.uk)
Received: from mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) by mx1.freebsd.org (Postfix) with ESMTP id 5C3598FC08; Thu, 1 Nov 2012 02:56:16 +0000 (UTC)
Received: from r2d2 ([188.220.16.49]) by mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) (MDaemon PRO v10.0.4) with ESMTP id md50000904511.msg; Thu, 01 Nov 2012 02:56:14 +0000
X-Spam-Processed: mail1.multiplay.co.uk, Thu, 01 Nov 2012 02:56:14 +0000 (not processed: message from valid local sender)
X-MDRemoteIP: 188.220.16.49
X-Return-Path: prvs=1652892d21=killing@multiplay.co.uk
X-Envelope-From: killing@multiplay.co.uk
Message-ID: <9DB937FEA7634C4BAC49EF5823F93CA3@multiplay.co.uk>
From: "Steven Hartland"
To: "Steven Hartland" , "Peter Jeremy"
References: <27087376D1C14132A3CC1B4016912F6D@multiplay.co.uk> <20121031212346.GL3309@server.rulingia.com>
Subject: Re: ZFS corruption due to lack of space?
Date: Thu, 1 Nov 2012 02:56:15 -0000
MIME-Version: 1.0
Content-Type: text/plain; format=flowed; charset="iso-8859-1"; reply-type=response
Content-Transfer-Encoding: 7bit
X-Priority: 3
X-MSMail-Priority: Normal
X-Mailer: Microsoft Outlook Express 6.00.2900.5931
X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157
Cc: freebsd-fs@freebsd.org, freebsd-stable@freebsd.org
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.14
Precedence: list
List-Id: Filesystems
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Thu, 01 Nov 2012 02:56:20 -0000

----- Original Message -----
From: "Steven Hartland"
To: "Peter Jeremy"
Cc: ;
Sent: Thursday, November 01, 2012 12:19 AM
Subject: Re: ZFS corruption due to lack of space?

>
> ----- Original Message -----
> From: "Steven Hartland"
> To: "Peter Jeremy"
> Cc: ;
> Sent: Thursday, November 01, 2012 12:09 AM
> Subject: Re: ZFS corruption due to lack of space?
>
>
>> On 2012-Oct-31 17:25:09 -0000, Steven Hartland wrote:
>>>>Been running some tests on new hardware here to verify all
>>>>is good. One of the tests was to fill the zfs array, which
>>>>seems like it's totally corrupted the tank.
>>>
>>>I've accidentally "filled" a pool, and had multiple processes try to
>>>write to the full pool, without either emptying the free space reserve
>>>(so I could still delete the offending files) or corrupting the pool.
>>
>> Same here, but it's the first time I've had a ZIL in place at the time,
>> so I'm wondering if that may be playing a factor.
>>
>>> Had you tried to read/write the raw disks before you tried the
>>> ZFS testing?
>>
>> Yes, I didn't see any issues, but then it wasn't checksumming, so tbh I
>> wouldn't have noticed if it was silently corrupting data.
>>
>>>Do you have compression and/or dedupe enabled on the pool?
>>
>> Nope, bog-standard raidz2, no additional settings.
>>
>>>>1. Given the information it seems like the multiple writes filling
>>>>the disk may have caused metadata corruption?
>>>
>>> I don't recall seeing this reported before.
>>
>> Nor me, and we've been using ZFS for years, but we've never filled a pool
>> with such known simultaneous access + ZIL before.
>>
>>>>2. Is there any way to stop the scrub?
>>>
>>>Other than freeing up some space, I don't think so. If this is a test
>>>pool that you don't need, you could try destroying it and re-creating
>>>it - that may be quicker and easier than recovering the existing pool.
>>
>> Artem's trick of cat /dev/null > /tank2/ worked and I've now
>> managed to stop the scrub :)
>>
>>>>3. Surely low space should never prevent stopping a scrub?
>>>
>>> As Artem noted, ZFS is a copy-on-write filesystem. It is supposed to
>>> reserve some free space to allow metadata updates (stop scrubs, delete
>>> files, etc) even when it is "full", but I have seen reports of this not
>>> working correctly in the past. A truncate-in-place may work.
>>
>> Yes it did, thanks, but as you said, the fact that this metadata update
>> was failing due to lack of space lends credibility to the idea that the
>> same lack of space, and hence failure to update metadata, could also have
>> caused the corruption in the first place.
>>
>> It's interesting to note that the zpool is reporting plenty of free space
>> even though the root zfs volume was showing 0, so you would expect there
>> to be plenty of space for it to be able to stop the scrub, but apparently
>> not, which is definitely interesting and could point to the underlying
>> cause.
>>
>> zpool list tank2
>> NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
>> tank2    19T  18.7T   304G    98%  1.00x  ONLINE  -
>>
>> zfs list tank2
>> NAME    USED  AVAIL  REFER  MOUNTPOINT
>> tank2  13.3T      0  13.3T  /tank2
>>
>> Current state is:-
>>   scan: scrub in progress since Wed Oct 31 16:13:53 2012
>>     1.64T scanned out of 18.7T at 62.8M/s, 79h12m to go
>>     280M repaired, 8.76% done
>>
>> Something else that was interesting is that while the scrub was running,
>> devd was using a good amount of CPU, 40% of a 3.3GHz core, which I've
>> never seen before. Any ideas why its usage would be so high?
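For reference, the workaround discussed above can be written out as a couple of shell commands. This is only a minimal sketch, assuming a hypothetical large file /tank2/bigfile (the original message elides the actual file name): on a completely full copy-on-write pool an unlink can itself fail for want of free blocks, whereas truncating an existing file in place returns its data blocks while touching comparatively little metadata.

    # truncate in place; same effect as "cat /dev/null > /tank2/bigfile"
    # (/tank2/bigfile is a hypothetical name, not from the thread)
    : > /tank2/bigfile

    # with some blocks freed, the scrub can be stopped
    zpool scrub -s tank2

    # compare raw pool space against dataset-visible space
    zpool list tank2
    zfs list tank2

Note that the two listings are not expected to agree on this pool: zpool list reports raw space including raidz2 parity (19T, with 304G free above), while zfs list reports usable space after parity, so a 7-disk raidz2 exposes only roughly 5/7 of the raw figures, and part of what remains is the reserve Peter mentions for metadata updates.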
> > > In case its useful here's the output from a zdb tank2 so far:- > zdb tank2 > > Cached configuration: > version: 28 > name: 'tank2' > state: 0 > txg: 39502 > pool_guid: 15779146362913479443 > hostid: 1751781486 > vdev_children: 3 > vdev_tree: > type: 'root' > id: 0 > guid: 15779146362913479443 > create_txg: 4 > children[0]: > type: 'raidz' > id: 0 > guid: 8518972900227438019 > nparity: 2 > metaslab_array: 33 > metaslab_shift: 37 > ashift: 9 > asize: 21004116295680 > is_log: 0 > create_txg: 4 > children[0]: > type: 'disk' > id: 0 > guid: 15577380450172137060 > path: '/dev/mfisyspd0' > phys_path: '/dev/mfisyspd0' > whole_disk: 1 > DTL: 236 > create_txg: 4 > children[1]: > type: 'disk' > id: 1 > guid: 16940350228793267704 > path: '/dev/mfisyspd1' > phys_path: '/dev/mfisyspd1' > whole_disk: 1 > DTL: 235 > create_txg: 4 > children[2]: > type: 'disk' > id: 2 > guid: 9264743178245473794 > path: '/dev/mfisyspd2' > phys_path: '/dev/mfisyspd2' > whole_disk: 1 > DTL: 234 > create_txg: 4 > children[3]: > type: 'disk' > id: 3 > guid: 432716341673487166 > path: '/dev/mfisyspd3' > phys_path: '/dev/mfisyspd3' > whole_disk: 1 > DTL: 233 > create_txg: 4 > children[4]: > type: 'disk' > id: 4 > guid: 18217760646550913544 > path: '/dev/mfisyspd4' > phys_path: '/dev/mfisyspd4' > whole_disk: 1 > DTL: 232 > create_txg: 4 > children[5]: > type: 'disk' > id: 5 > guid: 6964614355298004256 > path: '/dev/mfisyspd5' > phys_path: '/dev/mfisyspd5' > whole_disk: 1 > DTL: 231 > create_txg: 4 > children[6]: > type: 'disk' > id: 6 > guid: 4397961270160308034 > path: '/dev/mfisyspd6' > phys_path: '/dev/mfisyspd6' > whole_disk: 1 > DTL: 230 > create_txg: 4 > children[1]: > type: 'disk' > id: 1 > guid: 13757670211012452260 > path: '/dev/mfisyspd7p3' > phys_path: '/dev/mfisyspd7p3' > whole_disk: 1 > metaslab_array: 32 > metaslab_shift: 27 > ashift: 9 > asize: 14125891584 > is_log: 1 > DTL: 237 > create_txg: 4 > children[2]: > type: 'disk' > id: 2 > guid: 7315839509249482920 > path: '/dev/mfisyspd8p3' > phys_path: '/dev/mfisyspd8p3' > whole_disk: 1 > metaslab_array: 31 > metaslab_shift: 27 > ashift: 9 > asize: 14125891584 > is_log: 1 > DTL: 229 > create_txg: 4 > > MOS Configuration: > version: 28 > name: 'tank2' > state: 0 > txg: 39502 > pool_guid: 15779146362913479443 > hostid: 1751781486 > vdev_children: 3 > vdev_tree: > type: 'root' > id: 0 > guid: 15779146362913479443 > create_txg: 4 > children[0]: > type: 'raidz' > id: 0 > guid: 8518972900227438019 > nparity: 2 > metaslab_array: 33 > metaslab_shift: 37 > ashift: 9 > asize: 21004116295680 > is_log: 0 > create_txg: 4 > children[0]: > type: 'disk' > id: 0 > guid: 15577380450172137060 > path: '/dev/mfisyspd0' > phys_path: '/dev/mfisyspd0' > whole_disk: 1 > DTL: 236 > create_txg: 4 > children[1]: > type: 'disk' > id: 1 > guid: 16940350228793267704 > path: '/dev/mfisyspd1' > phys_path: '/dev/mfisyspd1' > whole_disk: 1 > DTL: 235 > create_txg: 4 > children[2]: > type: 'disk' > id: 2 > guid: 9264743178245473794 > path: '/dev/mfisyspd2' > phys_path: '/dev/mfisyspd2' > whole_disk: 1 > DTL: 234 > create_txg: 4 > children[3]: > type: 'disk' > id: 3 > guid: 432716341673487166 > path: '/dev/mfisyspd3' > phys_path: '/dev/mfisyspd3' > whole_disk: 1 > DTL: 233 > create_txg: 4 > children[4]: > type: 'disk' > id: 4 > guid: 18217760646550913544 > path: '/dev/mfisyspd4' > phys_path: '/dev/mfisyspd4' > whole_disk: 1 > DTL: 232 > create_txg: 4 > children[5]: > type: 'disk' > id: 5 > guid: 6964614355298004256 > path: '/dev/mfisyspd5' > phys_path: '/dev/mfisyspd5' > whole_disk: 1 > DTL: 231 > 
create_txg: 4 > children[6]: > type: 'disk' > id: 6 > guid: 4397961270160308034 > path: '/dev/mfisyspd6' > phys_path: '/dev/mfisyspd6' > whole_disk: 1 > DTL: 230 > create_txg: 4 > children[1]: > type: 'disk' > id: 1 > guid: 13757670211012452260 > path: '/dev/mfisyspd7p3' > phys_path: '/dev/mfisyspd7p3' > whole_disk: 1 > metaslab_array: 32 > metaslab_shift: 27 > ashift: 9 > asize: 14125891584 > is_log: 1 > DTL: 237 > create_txg: 4 > children[2]: > type: 'disk' > id: 2 > guid: 7315839509249482920 > path: '/dev/mfisyspd8p3' > phys_path: '/dev/mfisyspd8p3' > whole_disk: 1 > metaslab_array: 31 > metaslab_shift: 27 > ashift: 9 > asize: 14125891584 > is_log: 1 > DTL: 229 > create_txg: 4 > > Uberblock: > magic = 0000000000bab10c > version = 28 > txg = 39547 > guid_sum = 6486691012039134504 > timestamp = 1351727718 UTC = Wed Oct 31 23:55:18 2012 > > All DDTs are empty > > Metaslabs: > vdev 0 > metaslabs 152 offset spacemap > free --------------- ------------------- --------------- ------------- > metaslab 0 offset 0 spacemap 36 free 1.41G > segments 4170 maxsize 1.38G freepct 1% > metaslab 1 offset 2000000000 spacemap 65 free 1.45G > segments 4508 maxsize 1.40G freepct 1% > metaslab 2 offset 4000000000 spacemap 70 free 1.44G > segments 5723 maxsize 1.39G freepct 1% > metaslab 3 offset 6000000000 spacemap 77 free 1.41G > segments 6635 maxsize 1.35G freepct 1% > metaslab 4 offset 8000000000 spacemap 80 free 1.46G > segments 6408 maxsize 1.40G freepct 1% > metaslab 5 offset a000000000 spacemap 81 free 1.47G > segments 6480 maxsize 1.42G freepct 1% > metaslab 6 offset c000000000 spacemap 82 free 1.48G > segments 6684 maxsize 1.42G freepct 1% > metaslab 7 offset e000000000 spacemap 83 free 1.48G > segments 6873 maxsize 1.42G freepct 1% > metaslab 8 offset 10000000000 spacemap 84 free 1.48G > segments 7298 maxsize 1.42G freepct 1% > metaslab 9 offset 12000000000 spacemap 85 free 1.43G > segments 6699 maxsize 1.37G freepct 1% > metaslab 10 offset 14000000000 spacemap 88 free 1.50G > segments 6781 maxsize 1.44G freepct 1% > metaslab 11 offset 16000000000 spacemap 89 free 1.50G > segments 6434 maxsize 1.44G freepct 1% > metaslab 12 offset 18000000000 spacemap 90 free 1.51G > segments 7188 maxsize 1.44G freepct 1% > metaslab 13 offset 1a000000000 spacemap 91 free 1.49G > segments 6712 maxsize 1.42G freepct 1% > metaslab 14 offset 1c000000000 spacemap 92 free 1.52G > segments 6810 maxsize 1.46G freepct 1% > metaslab 15 offset 1e000000000 spacemap 93 free 1.52G > segments 8306 maxsize 1.41G freepct 1% > metaslab 16 offset 20000000000 spacemap 94 free 1.49G > segments 21881 maxsize 660M freepct 1% > metaslab 17 offset 22000000000 spacemap 97 free 1.50G > segments 9590 maxsize 1.32G freepct 1% > metaslab 18 offset 24000000000 spacemap 98 free 1.53G > segments 6921 maxsize 1.47G freepct 1% > metaslab 19 offset 26000000000 spacemap 99 free 1.53G > segments 7982 maxsize 1.46G freepct 1% > metaslab 20 offset 28000000000 spacemap 100 free 1.55G > segments 7943 maxsize 1.48G freepct 1% > metaslab 21 offset 2a000000000 spacemap 101 free 1.54G > segments 8049 maxsize 1.47G freepct 1% > metaslab 22 offset 2c000000000 spacemap 102 free 1.54G > segments 8205 maxsize 1.46G freepct 1% > metaslab 23 offset 2e000000000 spacemap 103 free 1.53G > segments 11339 maxsize 1.37G freepct 1% > metaslab 24 offset 30000000000 spacemap 104 free 1.55G > segments 11536 maxsize 1.38G freepct 1% > metaslab 25 offset 32000000000 spacemap 105 free 1.58G > segments 7281 maxsize 1.50G freepct 1% > metaslab 26 offset 34000000000 spacemap 108 free 
1.57G > segments 7917 maxsize 1.49G freepct 1% > metaslab 27 offset 36000000000 spacemap 109 free 1.59G > segments 8446 maxsize 1.51G freepct 1% > metaslab 28 offset 38000000000 spacemap 110 free 1.59G > segments 8437 maxsize 1.51G freepct 1% > metaslab 29 offset 3a000000000 spacemap 35 free 1.60G > segments 22991 maxsize 1.21G freepct 1% > metaslab 30 offset 3c000000000 spacemap 111 free 1.57G > segments 8358 maxsize 1.49G freepct 1% > metaslab 31 offset 3e000000000 spacemap 112 free 1.62G > segments 7724 maxsize 1.53G freepct 1% > metaslab 32 offset 40000000000 spacemap 113 free 1.62G > segments 8314 maxsize 1.53G freepct 1% > metaslab 33 offset 42000000000 spacemap 114 free 1.61G > segments 8294 maxsize 1.52G freepct 1% > metaslab 34 offset 44000000000 spacemap 115 free 1.63G > segments 8422 maxsize 1.54G freepct 1% > metaslab 35 offset 46000000000 spacemap 116 free 1.60G > segments 8125 maxsize 1.51G freepct 1% > metaslab 36 offset 48000000000 spacemap 117 free 1.62G > segments 8048 maxsize 1.53G freepct 1% > metaslab 37 offset 4a000000000 spacemap 118 free 1.60G > segments 8648 maxsize 1.51G freepct 1% > metaslab 38 offset 4c000000000 spacemap 119 free 1.66G > segments 8316 maxsize 1.57G freepct 1% > metaslab 39 offset 4e000000000 spacemap 87 free 1.62G > segments 27153 maxsize 1.05G freepct 1% > metaslab 40 offset 50000000000 spacemap 121 free 1.66G > segments 9314 maxsize 1.56G freepct 1% > metaslab 41 offset 52000000000 spacemap 122 free 1.64G > segments 9334 maxsize 1.56G freepct 1% > metaslab 42 offset 54000000000 spacemap 123 free 1.67G > segments 10391 maxsize 1.51G freepct 1% > metaslab 43 offset 56000000000 spacemap 126 free 1.65G > segments 12514 maxsize 1.49G freepct 1% > metaslab 44 offset 58000000000 spacemap 127 free 1.67G > segments 13441 maxsize 1.50G freepct 1% > metaslab 45 offset 5a000000000 spacemap 129 free 1.70G > segments 13288 maxsize 1.54G freepct 1% > metaslab 46 offset 5c000000000 spacemap 96 free 1.71G > segments 27184 maxsize 1.07G freepct 1% > metaslab 47 offset 5e000000000 spacemap 130 free 1.69G > segments 10019 maxsize 1.61G freepct 1% > metaslab 48 offset 60000000000 spacemap 133 free 1.69G > segments 13025 maxsize 1.53G freepct 1% > metaslab 49 offset 62000000000 spacemap 135 free 1.71G > segments 10562 maxsize 1.63G freepct 1% > metaslab 50 offset 64000000000 spacemap 136 free 1.74G > segments 9827 maxsize 1.66G freepct 1% > metaslab 51 offset 66000000000 spacemap 137 free 1.73G > segments 10206 maxsize 1.65G freepct 1% > metaslab 52 offset 68000000000 spacemap 138 free 1.75G > segments 9747 maxsize 1.67G freepct 1% > metaslab 53 offset 6a000000000 spacemap 139 free 1.76G > segments 14248 maxsize 1.57G freepct 1% > metaslab 54 offset 6c000000000 spacemap 107 free 1.76G > segments 29803 maxsize 987M freepct 1% > metaslab 55 offset 6e000000000 spacemap 142 free 1.76G > segments 9068 maxsize 1.68G freepct 1% > metaslab 56 offset 70000000000 spacemap 143 free 1.76G > segments 10561 maxsize 1.68G freepct 1% > metaslab 57 offset 72000000000 spacemap 144 free 1.78G > segments 10234 maxsize 1.70G freepct 1% > metaslab 58 offset 74000000000 spacemap 34 free 1.79G > segments 12737 maxsize 1.49G freepct 1% > metaslab 59 offset 76000000000 spacemap 145 free 1.80G > segments 10211 maxsize 1.71G freepct 1% > metaslab 60 offset 78000000000 spacemap 146 free 1.77G > segments 10696 maxsize 1.68G freepct 1% > metaslab 61 offset 7a000000000 spacemap 147 free 1.81G > segments 10934 maxsize 1.71G freepct 1% > metaslab 62 offset 7c000000000 spacemap 148 free 1.81G > 
segments 8698 maxsize 1.73G freepct 1% > metaslab 63 offset 7e000000000 spacemap 149 free 1.82G > segments 9165 maxsize 1.74G freepct 1% > metaslab 64 offset 80000000000 spacemap 152 free 1.83G > segments 9388 maxsize 1.74G freepct 1% > metaslab 65 offset 82000000000 spacemap 154 free 1.84G > segments 11321 maxsize 1.74G freepct 1% > metaslab 66 offset 84000000000 spacemap 155 free 1.85G > segments 10040 maxsize 1.76G freepct 1% > metaslab 67 offset 86000000000 spacemap 156 free 1.86G > segments 10531 maxsize 1.77G freepct 1% > metaslab 68 offset 88000000000 spacemap 86 free 1.84G > segments 8518 maxsize 1.73G freepct 1% > metaslab 69 offset 8a000000000 spacemap 120 free 1.87G > segments 18100 maxsize 1.51G freepct 1% > metaslab 70 offset 8c000000000 spacemap 157 free 1.88G > segments 12773 maxsize 1.70G freepct 1% > metaslab 71 offset 8e000000000 spacemap 159 free 1.89G > segments 11443 maxsize 1.79G freepct 1% > metaslab 72 offset 90000000000 spacemap 125 free 1.90G > segments 13633 maxsize 1.72G freepct 1% > metaslab 73 offset 92000000000 spacemap 160 free 1.91G > segments 10724 maxsize 1.81G freepct 1% > metaslab 74 offset 94000000000 spacemap 161 free 1.92G > segments 10550 maxsize 1.77G freepct 1% > metaslab 75 offset 96000000000 spacemap 162 free 1.92G > segments 10027 maxsize 1.83G freepct 1% > metaslab 76 offset 98000000000 spacemap 132 free 1.93G > segments 16007 maxsize 1.69G freepct 1% > metaslab 77 offset 9a000000000 spacemap 164 free 1.94G > segments 9721 maxsize 1.84G freepct 1% > metaslab 78 offset 9c000000000 spacemap 134 free 1.95G > segments 26262 maxsize 1.35G freepct 1% > metaslab 79 offset 9e000000000 spacemap 165 free 1.95G > segments 7968 maxsize 1.88G freepct 1% > metaslab 80 offset a0000000000 spacemap 166 free 1.97G > segments 7757 maxsize 1.89G freepct 1% > metaslab 81 offset a2000000000 spacemap 167 free 1.97G > segments 9206 maxsize 1.89G freepct 1% > metaslab 82 offset a4000000000 spacemap 168 free 1.98G > segments 9225 maxsize 1.89G freepct 1% > metaslab 83 offset a6000000000 spacemap 106 free 1.99G > segments 24197 maxsize 1.43G freepct 1% > metaslab 84 offset a8000000000 spacemap 169 free 2.00G > segments 9637 maxsize 1.91G freepct 1% > metaslab 85 offset aa000000000 spacemap 170 free 2.01G > segments 10167 maxsize 1.92G freepct 1% > metaslab 86 offset ac000000000 spacemap 171 free 2.02G > segments 12180 maxsize 1.85G freepct 1% > metaslab 87 offset ae000000000 spacemap 174 free 2.02G > segments 9716 maxsize 1.93G freepct 1% > metaslab 88 offset b0000000000 spacemap 175 free 2.04G > segments 10583 maxsize 1.94G freepct 1% > metaslab 89 offset b2000000000 spacemap 176 free 2.05G > segments 9935 maxsize 1.95G freepct 1% > metaslab 90 offset b4000000000 spacemap 177 free 2.06G > segments 10459 maxsize 1.96G freepct 1% > metaslab 91 offset b6000000000 spacemap 178 free 2.07G > segments 9396 maxsize 1.98G freepct 1% > metaslab 92 offset b8000000000 spacemap 179 free 2.07G > segments 8301 maxsize 1.96G freepct 1% > metaslab 93 offset ba000000000 spacemap 151 free 2.08G > segments 17800 maxsize 1.52G freepct 1% > metaslab 94 offset bc000000000 spacemap 181 free 2.10G > segments 10951 maxsize 2.00G freepct 1% > metaslab 95 offset be000000000 spacemap 182 free 2.11G > segments 11002 maxsize 2.01G freepct 1% > metaslab 96 offset c0000000000 spacemap 183 free 2.12G > segments 10855 maxsize 2.01G freepct 1% > metaslab 97 offset c2000000000 spacemap 95 free 2.13G > segments 13168 maxsize 1.96G freepct 1% > metaslab 98 offset c4000000000 spacemap 184 free 2.13G > 
segments 9408 maxsize 2.04G freepct 1% > metaslab 99 offset c6000000000 spacemap 185 free 2.15G > segments 10694 maxsize 2.04G freepct 1% > metaslab 100 offset c8000000000 spacemap 186 free 2.16G > segments 10563 maxsize 2.05G freepct 1% > metaslab 101 offset ca000000000 spacemap 187 free 2.17G > segments 11059 maxsize 2.06G freepct 1% > metaslab 102 offset cc000000000 spacemap 188 free 2.18G > segments 11516 maxsize 2.07G freepct 1% > metaslab 103 offset ce000000000 spacemap 189 free 2.19G > segments 10700 maxsize 2.08G freepct 1% > metaslab 104 offset d0000000000 spacemap 163 free 2.20G > segments 7869 maxsize 1.77G freepct 1% > metaslab 105 offset d2000000000 spacemap 131 free 2.22G > segments 11941 maxsize 2.08G freepct 1% > metaslab 106 offset d4000000000 spacemap 190 free 9.24G > segments 11407 maxsize 2.11G freepct 7% > metaslab 107 offset d6000000000 spacemap 141 free 2.24G > segments 13038 maxsize 2.00G freepct 1% > metaslab 108 offset d8000000000 spacemap 192 free 9.7G > segments 10472 maxsize 2.13G freepct 7% > metaslab 109 offset da000000000 spacemap 193 free 8.61G > segments 11389 maxsize 2.14G freepct 6% > metaslab 110 offset dc000000000 spacemap 194 free 6.34G > segments 10513 maxsize 2.16G freepct 4% > metaslab 111 offset de000000000 spacemap 195 free 7.58G > segments 10168 maxsize 2.17G freepct 5% > metaslab 112 offset e0000000000 spacemap 197 free 7.14G > segments 6922 maxsize 2.22G freepct 5% > metaslab 113 offset e2000000000 spacemap 198 free 2.30G > segments 6757 maxsize 2.24G freepct 1% > metaslab 114 offset e4000000000 spacemap 199 free 2.32G > segments 7008 maxsize 2.26G freepct 1% > metaslab 115 offset e6000000000 spacemap 200 free 2.33G > segments 6518 maxsize 2.27G freepct 1% > metaslab 116 offset e8000000000 spacemap 173 free 2.35G > segments 13857 maxsize 1.92G freepct 1% > metaslab 117 offset ea000000000 spacemap 201 free 2.36G > segments 7124 maxsize 2.30G freepct 1% > metaslab 118 offset ec000000000 spacemap 202 free 2.37G > segments 6936 maxsize 2.31G freepct 1% > metaslab 119 offset ee000000000 spacemap 203 free 2.38G > segments 6679 maxsize 2.32G freepct 1% > metaslab 120 offset f0000000000 spacemap 204 free 2.39G > segments 6818 maxsize 2.34G freepct 1% > metaslab 121 offset f2000000000 spacemap 205 free 2.41G > segments 7423 maxsize 2.34G freepct 1% > metaslab 122 offset f4000000000 spacemap 150 free 2.42G > segments 15678 maxsize 2.14G freepct 1% > metaslab 123 offset f6000000000 spacemap 158 free 2.43G > segments 9980 maxsize 2.28G freepct 1% > metaslab 124 offset f8000000000 spacemap 180 free 2.45G > segments 11702 maxsize 1.71G freepct 1% > metaslab 125 offset fa000000000 spacemap 206 free 2.46G > segments 7070 maxsize 2.40G freepct 1% > metaslab 126 offset fc000000000 spacemap 124 free 2.47G > segments 11485 maxsize 2.31G freepct 1% > metaslab 127 offset fe000000000 spacemap 128 free 2.49G > segments 2051 maxsize 2.42G freepct 1% > metaslab 128 offset 100000000000 spacemap 207 free 2.50G > segments 7309 maxsize 2.44G freepct 1% > metaslab 129 offset 102000000000 spacemap 208 free 2.52G > segments 7151 maxsize 2.46G freepct 1% > metaslab 130 offset 104000000000 spacemap 209 free 2.52G > segments 6041 maxsize 2.47G freepct 1% > metaslab 131 offset 106000000000 spacemap 210 free 2.54G > segments 6910 maxsize 2.47G freepct 1% > metaslab 132 offset 108000000000 spacemap 211 free 2.56G > segments 6816 maxsize 2.50G freepct 1% > metaslab 133 offset 10a000000000 spacemap 212 free 2.57G > segments 6182 maxsize 2.51G freepct 2% > metaslab 134 offset 
10c000000000 spacemap 213 free 2.59G > segments 7541 maxsize 2.52G freepct 2% > metaslab 135 offset 10e000000000 spacemap 214 free 2.61G > segments 7810 maxsize 2.53G freepct 2% > metaslab 136 offset 110000000000 spacemap 215 free 2.62G > segments 6822 maxsize 2.56G freepct 2% > metaslab 137 offset 112000000000 spacemap 191 free 9.39G > segments 10891 maxsize 2.28G freepct 7% > metaslab 138 offset 114000000000 spacemap 216 free 2.65G > segments 7295 maxsize 2.58G freepct 2% > metaslab 139 offset 116000000000 spacemap 217 free 2.67G > segments 7435 maxsize 2.60G freepct 2% > metaslab 140 offset 118000000000 spacemap 218 free 2.68G > segments 6952 maxsize 2.62G freepct 2% > metaslab 141 offset 11a000000000 spacemap 196 free 2.70G > segments 5975 maxsize 2.37G freepct 2% > metaslab 142 offset 11c000000000 spacemap 219 free 2.72G > segments 7547 maxsize 2.65G freepct 2% > metaslab 143 offset 11e000000000 spacemap 220 free 2.74G > segments 7570 maxsize 2.67G freepct 2% > metaslab 144 offset 120000000000 spacemap 221 free 2.75G > segments 7321 maxsize 2.69G freepct 2% > metaslab 145 offset 122000000000 spacemap 172 free 2.77G > segments 3490 maxsize 2.68G freepct 2% > metaslab 146 offset 124000000000 spacemap 222 free 2.79G > segments 7448 maxsize 2.72G freepct 2% > metaslab 147 offset 126000000000 spacemap 223 free 2.81G > segments 7309 maxsize 2.74G freepct 2% > metaslab 148 offset 128000000000 spacemap 224 free 2.82G > segments 7367 maxsize 2.75G freepct 2% > metaslab 149 offset 12a000000000 spacemap 225 free 2.84G > segments 7249 maxsize 2.77G freepct 2% > metaslab 150 offset 12c000000000 spacemap 226 free 2.86G > segments 8091 maxsize 2.78G freepct 2% > metaslab 151 offset 12e000000000 spacemap 153 free 2.88G > segments 2771 maxsize 2.74G freepct 2% > > vdev 1 > metaslabs 105 offset spacemap > free --------------- ------------------- --------------- ------------- > metaslab 0 offset 0 spacemap 38 free 128M > segments 1 maxsize 128M freepct 99% > metaslab 1 offset 8000000 spacemap 39 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 2 offset 10000000 spacemap 42 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 3 offset 18000000 spacemap 41 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 4 offset 20000000 spacemap 51 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 5 offset 28000000 spacemap 50 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 6 offset 30000000 spacemap 49 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 7 offset 38000000 spacemap 48 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 8 offset 40000000 spacemap 47 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 9 offset 48000000 spacemap 46 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 10 offset 50000000 spacemap 45 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 11 offset 58000000 spacemap 59 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 12 offset 60000000 spacemap 62 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 13 offset 68000000 spacemap 61 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 14 offset 70000000 spacemap 67 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 15 offset 78000000 spacemap 66 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 16 offset 80000000 spacemap 74 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 17 offset 88000000 spacemap 73 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 18 offset 90000000 spacemap 
75 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 19 offset 98000000 spacemap 78 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 20 offset a0000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 21 offset a8000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 22 offset b0000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 23 offset b8000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 24 offset c0000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 25 offset c8000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 26 offset d0000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 27 offset d8000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 28 offset e0000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 29 offset e8000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 30 offset f0000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 31 offset f8000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 32 offset 100000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 33 offset 108000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 34 offset 110000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 35 offset 118000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 36 offset 120000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 37 offset 128000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 38 offset 130000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 39 offset 138000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 40 offset 140000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 41 offset 148000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 42 offset 150000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 43 offset 158000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 44 offset 160000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 45 offset 168000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 46 offset 170000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 47 offset 178000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 48 offset 180000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 49 offset 188000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 50 offset 190000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 51 offset 198000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 52 offset 1a0000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 53 offset 1a8000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 54 offset 1b0000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 55 offset 1b8000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 56 offset 1c0000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 57 offset 1c8000000 spacemap 0 free 128M > 
segments 1 maxsize 128M freepct 100% > metaslab 58 offset 1d0000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 59 offset 1d8000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 60 offset 1e0000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 61 offset 1e8000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 62 offset 1f0000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 63 offset 1f8000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 64 offset 200000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 65 offset 208000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 66 offset 210000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 67 offset 218000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 68 offset 220000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 69 offset 228000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 70 offset 230000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 71 offset 238000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 72 offset 240000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 73 offset 248000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 74 offset 250000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 75 offset 258000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 76 offset 260000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 77 offset 268000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 78 offset 270000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 79 offset 278000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 80 offset 280000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 81 offset 288000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 82 offset 290000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 83 offset 298000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 84 offset 2a0000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 85 offset 2a8000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 86 offset 2b0000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 87 offset 2b8000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 88 offset 2c0000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 89 offset 2c8000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 90 offset 2d0000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 91 offset 2d8000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 92 offset 2e0000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 93 offset 2e8000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 94 offset 2f0000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 95 offset 2f8000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 96 offset 300000000 spacemap 0 free 128M > 
segments 1 maxsize 128M freepct 100% > metaslab 97 offset 308000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 98 offset 310000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 99 offset 318000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 100 offset 320000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 101 offset 328000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 102 offset 330000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 103 offset 338000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 104 offset 340000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > > vdev 2 > metaslabs 105 offset spacemap > free --------------- ------------------- --------------- ------------- > metaslab 0 offset 0 spacemap 37 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 1 offset 8000000 spacemap 40 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 2 offset 10000000 spacemap 44 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 3 offset 18000000 spacemap 43 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 4 offset 20000000 spacemap 58 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 5 offset 28000000 spacemap 57 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 6 offset 30000000 spacemap 56 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 7 offset 38000000 spacemap 55 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 8 offset 40000000 spacemap 54 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 9 offset 48000000 spacemap 53 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 10 offset 50000000 spacemap 52 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 11 offset 58000000 spacemap 60 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 12 offset 60000000 spacemap 64 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 13 offset 68000000 spacemap 63 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 14 offset 70000000 spacemap 69 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 15 offset 78000000 spacemap 68 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 16 offset 80000000 spacemap 72 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 17 offset 88000000 spacemap 71 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 18 offset 90000000 spacemap 76 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 19 offset 98000000 spacemap 79 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 20 offset a0000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 21 offset a8000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 22 offset b0000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 23 offset b8000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 24 offset c0000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 25 offset c8000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 26 offset d0000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 27 offset d8000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 28 offset e0000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 29 offset e8000000 spacemap 0 free 128M > 
segments 1 maxsize 128M freepct 100% > metaslab 30 offset f0000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 31 offset f8000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 32 offset 100000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 33 offset 108000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 34 offset 110000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 35 offset 118000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 36 offset 120000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 37 offset 128000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 38 offset 130000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 39 offset 138000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 40 offset 140000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 41 offset 148000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 42 offset 150000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 43 offset 158000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 44 offset 160000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 45 offset 168000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 46 offset 170000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 47 offset 178000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 48 offset 180000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 49 offset 188000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 50 offset 190000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 51 offset 198000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 52 offset 1a0000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 53 offset 1a8000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 54 offset 1b0000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 55 offset 1b8000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 56 offset 1c0000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 57 offset 1c8000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 58 offset 1d0000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 59 offset 1d8000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 60 offset 1e0000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 61 offset 1e8000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 62 offset 1f0000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 63 offset 1f8000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 64 offset 200000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 65 offset 208000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 66 offset 210000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 67 offset 218000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 68 offset 220000000 spacemap 0 free 128M > 
segments 1 maxsize 128M freepct 100% > metaslab 69 offset 228000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 70 offset 230000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 71 offset 238000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 72 offset 240000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 73 offset 248000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 74 offset 250000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 75 offset 258000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 76 offset 260000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 77 offset 268000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 78 offset 270000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 79 offset 278000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 80 offset 280000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 81 offset 288000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 82 offset 290000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 83 offset 298000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 84 offset 2a0000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 85 offset 2a8000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 86 offset 2b0000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 87 offset 2b8000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 88 offset 2c0000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 89 offset 2c8000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 90 offset 2d0000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 91 offset 2d8000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 92 offset 2e0000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 93 offset 2e8000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 94 offset 2f0000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 95 offset 2f8000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 96 offset 300000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 97 offset 308000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 98 offset 310000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 99 offset 318000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 100 offset 320000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 101 offset 328000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 102 offset 330000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 103 offset 338000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > metaslab 104 offset 340000000 spacemap 0 free 128M > segments 1 maxsize 128M freepct 100% > > Dataset mos [META], ID 0, cr_txg 4, 459M, 235 objects > > Object lvl iblk dblk dsize lsize %full type > 0 2 16K 16K 256K 128K 91.80 DMU dnode > 1 1 16K 16K 19.0K 32K 100.00 object directory > 2 1 16K 512 0 512 0.00 DSL directory > 3 
1 16K 512 3.00K 512 100.00 DSL props > 4 1 16K 512 3.00K 512 100.00 DSL directory child map > 5 1 16K 512 0 512 0.00 DSL directory > 6 1 16K 512 3.00K 512 100.00 DSL props > 7 1 16K 512 3.00K 512 100.00 DSL directory child map > 8 1 16K 512 0 512 0.00 DSL directory > 9 1 16K 512 3.00K 512 100.00 DSL props > 10 1 16K 512 3.00K 512 100.00 DSL directory child map > 11 1 16K 128K 0 128K 0.00 bpobj > 12 1 16K 512 0 512 0.00 DSL directory > 13 1 16K 512 3.00K 512 100.00 DSL props > 14 1 16K 512 3.00K 512 100.00 DSL directory child map > 15 1 16K 512 0 512 0.00 DSL dataset > 16 1 16K 512 3.00K 512 100.00 DSL dataset snap map > 17 1 16K 512 3.00K 512 100.00 DSL deadlist map > 18 1 16K 512 0 512 0.00 DSL dataset > 19 1 16K 512 3.00K 512 100.00 DSL deadlist map > 20 1 16K 128K 0 128K 0.00 bpobj > 21 1 16K 512 0 512 0.00 DSL dataset > 22 1 16K 512 3.00K 512 100.00 DSL dataset snap map > 23 1 16K 512 3.00K 512 100.00 DSL deadlist map > 24 1 16K 128K 0 128K 0.00 bpobj > 25 1 16K 512 3.00K 512 100.00 DSL dataset next clones > 26 1 16K 512 3.00K 512 100.00 DSL dir clones > 27 1 16K 16K 51.0K 16K 100.00 packed nvlist > 28 1 16K 16K 51.0K 16K 100.00 bpobj (Z=uncompressed) > 29 1 16K 128K 16K 128K 100.00 SPA history > 30 1 16K 16K 6.50K 16K 100.00 packed nvlist > 31 1 16K 512 3.00K 512 100.00 object array > 32 1 16K 512 3.00K 512 100.00 object array > 33 1 16K 512 9.5K 1.50K 100.00 object array > 34 2 16K 4K 467K 176K 100.00 SPA space map > 35 2 16K 4K 851K 296K 100.00 SPA space map > 36 2 16K 4K 275K 88.0K 100.00 SPA space map > 37 1 16K 4K 0 4K 0.00 SPA space map > 38 1 16K 4K 3.00K 4K 100.00 SPA space map > 39 1 16K 4K 0 4K 0.00 SPA space map > 40 1 16K 4K 0 4K 0.00 SPA space map > 41 1 16K 4K 0 4K 0.00 SPA space map > 42 1 16K 4K 0 4K 0.00 SPA space map > 43 1 16K 4K 0 4K 0.00 SPA space map > 44 1 16K 4K 0 4K 0.00 SPA space map > 45 1 16K 4K 0 4K 0.00 SPA space map > 46 1 16K 4K 0 4K 0.00 SPA space map > 47 1 16K 4K 3.00K 4K 100.00 SPA space map > 48 1 16K 4K 0 4K 0.00 SPA space map > 49 1 16K 4K 0 4K 0.00 SPA space map > 50 1 16K 4K 0 4K 0.00 SPA space map > 51 1 16K 4K 0 4K 0.00 SPA space map > 52 1 16K 4K 0 4K 0.00 SPA space map > 53 1 16K 4K 0 4K 0.00 SPA space map > 54 1 16K 4K 3.00K 4K 100.00 SPA space map > 55 1 16K 4K 0 4K 0.00 SPA space map > 56 1 16K 4K 0 4K 0.00 SPA space map > 57 1 16K 4K 0 4K 0.00 SPA space map > 58 1 16K 4K 0 4K 0.00 SPA space map > 59 1 16K 4K 0 4K 0.00 SPA space map > 60 1 16K 4K 0 4K 0.00 SPA space map > 61 1 16K 4K 0 4K 0.00 SPA space map > 62 1 16K 4K 0 4K 0.00 SPA space map > 63 1 16K 4K 0 4K 0.00 SPA space map > 64 1 16K 4K 0 4K 0.00 SPA space map > 65 2 16K 4K 291K 92.0K 100.00 SPA space map > 66 1 16K 4K 0 4K 0.00 SPA space map > 67 1 16K 4K 0 4K 0.00 SPA space map > 68 1 16K 4K 0 4K 0.00 SPA space map > 69 1 16K 4K 0 4K 0.00 SPA space map > 70 2 16K 4K 311K 100K 100.00 SPA space map > 71 1 16K 4K 0 4K 0.00 SPA space map > 72 1 16K 4K 0 4K 0.00 SPA space map > 73 1 16K 4K 0 4K 0.00 SPA space map > 74 1 16K 4K 0 4K 0.00 SPA space map > 75 1 16K 4K 0 4K 0.00 SPA space map > 76 1 16K 4K 0 4K 0.00 SPA space map > 77 2 16K 4K 336K 104K 100.00 SPA space map > 78 1 16K 4K 0 4K 0.00 SPA space map > 79 1 16K 4K 0 4K 0.00 SPA space map > 80 2 16K 4K 330K 104K 100.00 SPA space map > 81 2 16K 4K 333K 104K 100.00 SPA space map > 82 2 16K 4K 339K 108K 100.00 SPA space map > 83 2 16K 4K 343K 108K 100.00 SPA space map > 84 2 16K 4K 368K 116K 100.00 SPA space map > 85 2 16K 4K 339K 108K 100.00 SPA space map > 86 2 16K 4K 384K 120K 100.00 SPA space map > 87 2 16K 4K 1.15M 368K 
100.00 SPA space map > 88 2 16K 4K 346K 108K 100.00 SPA space map > 89 2 16K 4K 339K 108K 100.00 SPA space map > 90 2 16K 4K 349K 108K 100.00 SPA space map > 91 2 16K 4K 339K 108K 100.00 SPA space map > 92 2 16K 4K 365K 116K 100.00 SPA space map > 93 2 16K 4K 391K 120K 100.00 SPA space map > 94 2 16K 4K 867K 268K 100.00 SPA space map > 95 2 16K 4K 506K 156K 100.00 SPA space map > 96 2 16K 4K 858K 268K 100.00 SPA space map > 97 2 16K 4K 490K 152K 100.00 SPA space map > 98 2 16K 4K 346K 112K 100.00 SPA space map > 99 2 16K 4K 371K 116K 100.00 SPA space map > 100 2 16K 4K 362K 112K 100.00 SPA space map > 101 2 16K 4K 384K 120K 100.00 SPA space map > 102 2 16K 4K 400K 124K 100.00 SPA space map > 103 2 16K 4K 522K 160K 100.00 SPA space map > 104 2 16K 4K 477K 148K 100.00 SPA space map > 105 2 16K 4K 346K 112K 100.00 SPA space map > 106 2 16K 4K 774K 240K 100.00 SPA space map > 107 2 16K 4K 906K 312K 100.00 SPA space map > 108 2 16K 4K 359K 112K 100.00 SPA space map > 109 2 16K 4K 387K 120K 100.00 SPA space map > 110 2 16K 4K 387K 120K 100.00 SPA space map > 111 2 16K 4K 413K 128K 100.00 SPA space map > 112 2 16K 4K 375K 120K 100.00 SPA space map > 113 2 16K 4K 378K 116K 100.00 SPA space map > 114 2 16K 4K 410K 128K 100.00 SPA space map > 115 2 16K 4K 400K 124K 100.00 SPA space map > 116 2 16K 4K 381K 116K 100.00 SPA space map > 117 2 16K 4K 387K 120K 100.00 SPA space map > 118 2 16K 4K 397K 124K 100.00 SPA space map > 119 2 16K 4K 394K 120K 100.00 SPA space map > 120 2 16K 4K 826K 256K 100.00 SPA space map > 121 2 16K 4K 407K 124K 100.00 SPA space map > 122 2 16K 4K 429K 132K 100.00 SPA space map > 123 2 16K 4K 451K 140K 100.00 SPA space map > 124 2 16K 4K 461K 144K 100.00 SPA space map > 125 2 16K 4K 519K 160K 100.00 SPA space map > 126 2 16K 4K 477K 148K 100.00 SPA space map > 127 2 16K 4K 605K 188K 100.00 SPA space map > 128 2 16K 4K 227K 100K 100.00 SPA space map > 129 2 16K 4K 528K 164K 100.00 SPA space map > 130 2 16K 4K 426K 132K 100.00 SPA space map > 131 2 16K 4K 528K 164K 100.00 SPA space map > 132 2 16K 4K 589K 184K 100.00 SPA space map > 133 2 16K 4K 605K 188K 100.00 SPA space map > 134 2 16K 4K 790K 276K 100.00 SPA space map > 135 2 16K 4K 439K 136K 100.00 SPA space map > 136 2 16K 4K 423K 132K 100.00 SPA space map > 137 2 16K 4K 455K 140K 100.00 SPA space map > 138 2 16K 4K 410K 128K 100.00 SPA space map > 139 2 16K 4K 528K 164K 100.00 SPA space map > 141 2 16K 4K 531K 164K 100.00 SPA space map > 142 2 16K 4K 397K 124K 100.00 SPA space map > 143 2 16K 4K 439K 136K 100.00 SPA space map > 144 2 16K 4K 423K 132K 100.00 SPA space map > 145 2 16K 4K 419K 132K 100.00 SPA space map > 146 2 16K 4K 477K 148K 100.00 SPA space map > 147 2 16K 4K 461K 144K 100.00 SPA space map > 148 2 16K 4K 384K 120K 100.00 SPA space map > 149 2 16K 4K 403K 124K 100.00 SPA space map > 150 2 16K 4K 560K 172K 100.00 SPA space map > 151 2 16K 4K 586K 212K 100.00 SPA space map > 152 2 16K 4K 407K 128K 100.00 SPA space map > 153 2 16K 4K 183K 88.0K 100.00 SPA space map > 154 2 16K 4K 506K 156K 100.00 SPA space map > 155 2 16K 4K 506K 156K 100.00 SPA space map > 156 2 16K 4K 432K 136K 100.00 SPA space map > 157 2 16K 4K 499K 156K 100.00 SPA space map > 158 2 16K 4K 442K 140K 100.00 SPA space map > 159 2 16K 4K 448K 140K 100.00 SPA space map > 160 2 16K 4K 435K 136K 100.00 SPA space map > 161 2 16K 4K 471K 148K 100.00 SPA space map > 162 2 16K 4K 435K 140K 100.00 SPA space map > 163 2 16K 4K 343K 136K 100.00 SPA space map > 164 2 16K 4K 426K 132K 100.00 SPA space map > 165 2 16K 4K 365K 124K 100.00 SPA space map > 
166 2 16K 4K 371K 120K 100.00 SPA space map > 167 2 16K 4K 416K 136K 100.00 SPA space map > 168 2 16K 4K 426K 132K 100.00 SPA space map > 169 2 16K 4K 419K 132K 100.00 SPA space map > 170 2 16K 4K 461K 144K 100.00 SPA space map > 171 2 16K 4K 541K 156K 100.00 SPA space map > 172 2 16K 4K 237K 100K 100.00 SPA space map > 173 2 16K 4K 531K 176K 100.00 SPA space map > 174 2 16K 4K 426K 136K 100.00 SPA space map > 175 2 16K 4K 445K 140K 100.00 SPA space map > 176 2 16K 4K 426K 136K 100.00 SPA space map > 177 2 16K 4K 451K 140K 100.00 SPA space map > 178 2 16K 4K 423K 136K 100.00 SPA space map > 179 2 16K 4K 413K 136K 100.00 SPA space map > 180 2 16K 4K 439K 164K 100.00 SPA space map > 181 2 16K 4K 471K 148K 100.00 SPA space map > 182 2 16K 4K 458K 144K 100.00 SPA space map > 183 2 16K 4K 471K 148K 100.00 SPA space map > 184 2 16K 4K 419K 132K 100.00 SPA space map > 185 2 16K 4K 490K 152K 100.00 SPA space map > 186 2 16K 4K 445K 140K 100.00 SPA space map > 187 2 16K 4K 464K 144K 100.00 SPA space map > 188 2 16K 4K 467K 148K 100.00 SPA space map > 189 2 16K 4K 451K 140K 100.00 SPA space map > 190 2 16K 4K 522K 164K 100.00 SPA space map > 191 2 16K 4K 503K 144K 100.00 SPA space map > 192 2 16K 4K 544K 156K 100.00 SPA space map > 193 2 16K 4K 535K 168K 100.00 SPA space map > 194 2 16K 4K 487K 152K 100.00 SPA space map > 195 2 16K 4K 429K 136K 100.00 SPA space map > 196 2 16K 4K 291K 120K 100.00 SPA space map > 197 2 16K 4K 339K 112K 100.00 SPA space map > 198 2 16K 4K 359K 124K 100.00 SPA space map > 199 2 16K 4K 311K 120K 100.00 SPA space map > 200 2 16K 4K 314K 116K 100.00 SPA space map > 201 2 16K 4K 336K 128K 100.00 SPA space map > 202 2 16K 4K 317K 120K 100.00 SPA space map > 203 2 16K 4K 327K 120K 100.00 SPA space map > 204 2 16K 4K 346K 124K 100.00 SPA space map > 205 2 16K 4K 323K 124K 100.00 SPA space map > 206 2 16K 4K 339K 116K 100.00 SPA space map > 207 2 16K 4K 349K 124K 100.00 SPA space map > 208 2 16K 4K 333K 124K 100.00 SPA space map > 209 2 16K 4K 291K 112K 100.00 SPA space map > 210 2 16K 4K 336K 120K 100.00 SPA space map > 211 2 16K 4K 320K 120K 100.00 SPA space map > 212 2 16K 4K 291K 112K 100.00 SPA space map > 213 2 16K 4K 349K 128K 100.00 SPA space map > 214 2 16K 4K 397K 132K 100.00 SPA space map > 215 2 16K 4K 330K 116K 100.00 SPA space map > 216 2 16K 4K 320K 120K 100.00 SPA space map > 217 2 16K 4K 394K 132K 100.00 SPA space map > 218 2 16K 4K 336K 120K 100.00 SPA space map > 219 2 16K 4K 327K 124K 100.00 SPA space map > 220 2 16K 4K 349K 128K 100.00 SPA space map > 221 2 16K 4K 391K 128K 100.00 SPA space map > 222 2 16K 4K 339K 124K 100.00 SPA space map > 223 2 16K 4K 339K 124K 100.00 SPA space map > 224 2 16K 4K 365K 124K 100.00 SPA space map > 225 2 16K 4K 346K 124K 100.00 SPA space map > 226 2 16K 4K 400K 136K 100.00 SPA space map > 228 3 16K 16K 395M 433M 99.95 persistent error log > 229 1 16K 4K 0 4K 0.00 SPA space map > 230 1 16K 4K 0 4K 0.00 SPA space map > 231 1 16K 4K 0 4K 0.00 SPA space map > 232 1 16K 4K 0 4K 0.00 SPA space map > 233 1 16K 4K 0 4K 0.00 SPA space map > 234 1 16K 4K 0 4K 0.00 SPA space map > 235 1 16K 4K 0 4K 0.00 SPA space map > 236 1 16K 4K 0 4K 0.00 SPA space map > 237 1 16K 4K 0 4K 0.00 SPA space map > > Dataset tank2 [ZPL], ID 21, cr_txg 1, 13.3T, 37 objects > > ZIL header: claim_txg 0, claim_blk_seq 0, claim_lr_seq 0 replay_seq 0, flags 0x0 > > > Object lvl iblk dblk dsize lsize %full type > 0 7 16K 16K 40.5K 32K 57.81 DMU dnode > -1 1 16K 512 2K 512 100.00 ZFS user/group used > -2 1 16K 512 2K 512 100.00 ZFS user/group used > 1 1 16K 512 
2K 512 100.00 ZFS master node > 2 1 16K 512 2K 512 100.00 SA master node > 3 1 16K 512 2K 512 100.00 ZFS delete queue > 4 1 16K 1.50K 2K 1.50K 100.00 ZFS directory > 5 1 16K 1.50K 2K 1.50K 100.00 SA attr registration > 6 1 16K 16K 10.5K 32K 100.00 SA attr layouts > 7 1 16K 512 2K 512 100.00 ZFS directory > 8 5 16K 128K 510G 510G 100.00 ZFS plain file > 9 5 16K 128K 476G 476G 100.00 ZFS plain file > 10 5 16K 128K 473G 473G 100.00 ZFS plain file > 11 5 16K 128K 467G 467G 100.00 ZFS plain file > 12 5 16K 128K 428G 428G 100.00 ZFS plain file > 13 5 16K 128K 455G 455G 100.00 ZFS plain file > 14 5 16K 128K 478G 478G 100.00 ZFS plain file > 15 5 16K 128K 517G 517G 100.00 ZFS plain file > 16 5 16K 128K 487G 487G 100.00 ZFS plain file > 17 5 16K 128K 513G 513G 100.00 ZFS plain file > 18 5 16K 128K 489G 489G 100.00 ZFS plain file > 19 5 16K 128K 494G 493G 100.00 ZFS plain file > 20 5 16K 128K 492G 492G 100.00 ZFS plain file > 21 5 16K 128K 488G 487G 100.00 ZFS plain file > 22 1 16K 1K 2K 1K 100.00 ZFS directory > 23 4 16K 128K 107G 107G 100.00 ZFS plain file > 24 4 16K 128K 92.4G 92.4G 100.00 ZFS plain file > 25 4 16K 128K 97.2G 97.2G 100.00 ZFS plain file > 26 4 16K 128K 0 128K 0.00 ZFS plain file > 27 4 16K 128K 149G 149G 100.00 ZFS plain file > 28 4 16K 128K 221G 221G 100.00 ZFS plain file > 29 4 16K 128K 93.8G 93.8G 100.00 ZFS plain file > 30 4 16K 128K 66.0G 66.0G 100.00 ZFS plain file > 31 5 16K 128K 5.74T 5.74T 100.00 ZFS plain file > 32 4 16K 128K 48.0G 48.0G 100.00 ZFS plain file > 33 4 16K 128K 12.0G 12.0G 100.00 ZFS plain file > 34 4 16K 128K 11.5G 11.5G 100.00 ZFS plain file > 35 4 16K 128K 11.6G 11.5G 100.00 ZFS plain file > 36 4 16K 128K 29.9G 29.8G 100.00 ZFS plain file > 37 1 16K 512 0 512 0.00 ZFS plain file After a long time waiting zdb started spaffing lots and lots of :- zdb_blkptr_cb: Got error 122 reading <21, 8, 0, ea072> DVA[0]=<0:23c1e196400:2d000> [L0 ZFS plain file] fletcher4 uncompressed LE contiguous unique single size=20000L/20000P birth=4507L/4507P fill=1 cksum=3f98d68a2453:fe4561392aca231:c327b800ff4c8ad5:25a834a4ff84cf0e -- skipping zdb_blkptr_cb: Got error 122 reading <21, 8, 0, ea073> DVA[0]=<0:23c1e169400:2d000> [L0 ZFS plain file] fletcher4 uncompressed LE contiguous unique single size=20000L/20000P birth=4507L/4507P fill=1 cksum=3ff53073c410:ff578ea3d9add37:56c79527f899fddd:c0f9a2751d3d4fe1 -- skipping zdb_blkptr_cb: Got error 122 reading <21, 8, 0, ea074> DVA[0]=<0:23c1e1c3400:2d000> [L0 ZFS plain file] fletcher4 uncompressed LE contiguous unique single size=20000L/20000P birth=4507L/4507P fill=1 cksum=403f20ed2647:100a131a2349650e:76e7f9a70f6d0b24:7329de5d1dfbd68b -- skipping zdb_blkptr_cb: Got error 122 reading <21, 8, 0, ea075> DVA[0]=<0:23c1e1f0400:2d000> [L0 ZFS plain file] fletcher4 uncompressed LE contiguous unique single size=20000L/20000P birth=4507L/4507P fill=1 cksum=3fa72784b6f2:fee8a882bc6849b:e90a52ef28fe1baa:8c84eb7b51c6d0ad -- skipping zdb_blkptr_cb: Got error 122 reading <21, 8, 0, ea076> DVA[0]=<0:23c1e21d400:2d000> [L0 ZFS plain file] fletcher4 uncompressed LE contiguous unique single size=20000L/20000P birth=4507L/4507P fill=1 cksum=4043be27b6cd:101d869585000e3c:52d438d1f3f14883:1f3e84a837d38701 -- skipping zdb_blkptr_cb: Got error 122 reading <21, 8, 0, ea077> DVA[0]=<0:23c1e24a400:2d000> [L0 ZFS plain file] fletcher4 uncompressed LE contiguous unique single size=20000L/20000P birth=4507L/4507P fill=1 cksum=402ba31a743e:10055a4e82d009fc:a6e97a0ebbaf4c80:6e60c0e410cf15e7 -- skipping Regards Steve 
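For reference, each tuple in the zdb_blkptr_cb lines has the form <objset, object, level, blkid>, so <21, 8, 0, ea072> points at dataset 21 (tank2 in the listing above), object 8, a level-0 block; error 122 on FreeBSD's ZFS port generally corresponds to a checksum mismatch (ECKSUM) rather than a device read failure. A rough sketch of how the affected object could be inspected further follows; the pool/dataset name tank2 and object number 8 are taken from the listing above, the rest is illustrative only and not a verified recipe:

# zdb -ddddd tank2 8      (dump dnode and block pointer detail for object 8 in dataset tank2)
# zdb -b tank2            (traverse the pool and report block statistics, flagging blocks that cannot be verified)
# zpool status -v tank2   (after a scrub, list any files the pool itself has marked as having unrecoverable errors)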
================================================ This e.mail is private and confidential between Multiplay (UK) Ltd. and the person or entity to whom it is addressed. In the event of misdirection, the recipient is prohibited from using, copying, printing or otherwise disseminating it or any information contained in it. In the event of misdirection, illegible or incomplete transmission please telephone +44 845 868 1337 or return the E.mail to postmaster@multiplay.co.uk. From owner-freebsd-fs@FreeBSD.ORG Thu Nov 1 03:11:09 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id EBEC3A77 for ; Thu, 1 Nov 2012 03:11:09 +0000 (UTC) (envelope-from freebsd@penx.com) Received: from btw.pki2.com (btw.pki2.com [IPv6:2001:470:a:6fd::2]) by mx1.freebsd.org (Postfix) with ESMTP id 95BE38FC08 for ; Thu, 1 Nov 2012 03:11:09 +0000 (UTC) Received: from [127.0.0.1] (localhost [127.0.0.1]) by btw.pki2.com (8.14.5/8.14.5) with ESMTP id qA13B5jD051542; Wed, 31 Oct 2012 20:11:05 -0700 (PDT) (envelope-from freebsd@penx.com) Subject: Re: ZFS RaidZ-2 problems From: Dennis Glatting To: Zaphod Beeblebrox In-Reply-To: References: <508F98F9.3040604@fletchermoorland.co.uk> <1351598684.88435.19.camel@btw.pki2.com> <508FE643.4090107@fletchermoorland.co.uk> <5090010A.4050109@fletchermoorland.co.uk> Content-Type: text/plain; charset="us-ascii" Date: Wed, 31 Oct 2012 20:11:05 -0700 Message-ID: <1351739465.25936.5.camel@btw.pki2.com> Mime-Version: 1.0 X-Mailer: Evolution 2.32.1 FreeBSD GNOME Team Port Content-Transfer-Encoding: 7bit X-yoursite-MailScanner-Information: Dennis Glatting X-yoursite-MailScanner-ID: qA13B5jD051542 X-yoursite-MailScanner: Found to be clean X-MailScanner-From: freebsd@penx.com Cc: freebsd-fs@freebsd.org, Ronald Klop X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 01 Nov 2012 03:11:10 -0000 On Wed, 2012-10-31 at 13:58 -0400, Zaphod Beeblebrox wrote: > I'd start off by saying "smart is your friend." Install smartmontools > and study the somewhat opaque "smartctl -a /dev/mydisk" output > carefully. Try running a short and/or long test, too. Many times the > disk can tell you what the problem is. If too many blocks are being > replaced, your drive is dying. If the drive sees errors in commands > it receives, the cable or the controller are at fault. ZFS itself > does _exceptionally_ well at trying to use what it has. > > I'll also say that bad power supplies make for bad disks. Replacing a > power supply has often been the solution to bad disk problems I've > had. Disks are sensitive to under voltage problems. Brown-outs can > exacerbate this problem. My parents live out where power is very > flaky. Cheap UPSs didn't help much ... but a good power supply can > make all the difference. > To be clear, I am unsure whether my problem was the power supply or the wiring -- it could have been a flaky connector in the strand. I simply replaced it all. I had a 1,000W power supply drawing ~400W on the intake. Assuming 80% efficiency, the power supply should have had plenty of ummpf left. Regardless, the new power supply was cheap compared to my frustration. :) > But I've also had bad controllers of late, too. My most recent > problem had my 9-disk raidZ1 array loose a disk. Smartctl said that > it was loosing blocks fast, so I RMA'd the disk. 
When the new disk > came, the array just wouldn't heal... it kept losing the disks > attached to a certain controller. Now it's possible the controller > was bad before the disk had died ... or that it died during the first > attempt at resilver ... or that FreeBSD drivers don't like it anymore > ... I don't know. > > My solution was to get two more 4 drive "pro box" SATA enclosures. > They use a 1-to-4 SATA breakout and the 6 motherboard ports I have are > a revision of the ICH11 intel chipset that supports SATA port > replication (I already had two of these boxes). In this manner I > could remove the defective controller and put all disks onto the > motherboard ICH11 (it actually also allowed me to later expand the > array... but that's not part of this story). > > The upshot was that I now had all the disks present for a raidZ array, > but tonnes of errors had occurred when there were not enough disks. > zpool status -v listed hundreds of thousands of files and directories > that were "bad" or lost. But I'd seen this before and started a > scrub. The result of the scrub was: perfect recovery. Actually... it > took a 2nd scrub --- I don't know why. It was happy after the 1st > scrub, but then some checksum errors were found --- and then fixed, so > I scrubbed again ... and that fixed it. > > How does it do it? Unlike other RAID systems, ZFS can tell a bad > block from a good one. When it is asked to re-recover after really > bad multiple failures, it can tell if a block is good or not. This > means that it can choose among alternate or partially recovered > versions and get the right one. Certainly, my above experience would > have been a dead array ... or an array with much loss if I had used > any other RAID technology. > > What does this mean? Well... one thing it means is that for > non-essential systems (say my home media array), using cheap > technology is less risky. None of these is enterprise level > technology, but none of it costs anywhere near what enterprise level costs, > either. 
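The scrub-and-recheck cycle described above maps onto a handful of stock commands. A minimal sketch, assuming a pool simply named tank (the pool name is a placeholder, not taken from the thread):

# zpool status -v tank   (per-device read/write/checksum counters, plus any files with permanent errors)
# zpool scrub tank       (walk every block and repair from redundancy where the checksums allow it)
# zpool status -v tank   (re-check once the scrub finishes)
# zpool clear tank       (reset the error counters once the pool reports healthy)

As noted above, a second scrub is sometimes needed before the checksum counters stay clean.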
> _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Thu Nov 1 08:50:56 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 92AA7C7A for ; Thu, 1 Nov 2012 08:50:56 +0000 (UTC) (envelope-from ronald-freebsd8@klop.yi.org) Received: from smarthost1.greenhost.nl (smarthost1.greenhost.nl [195.190.28.78]) by mx1.freebsd.org (Postfix) with ESMTP id 16EC68FC12 for ; Thu, 1 Nov 2012 08:50:56 +0000 (UTC) Received: from smtp.greenhost.nl ([213.108.104.138]) by smarthost1.greenhost.nl with esmtps (TLS1.0:RSA_AES_256_CBC_SHA1:32) (Exim 4.69) (envelope-from ) id 1TTqUC-0002rD-P4 for freebsd-fs@freebsd.org; Thu, 01 Nov 2012 09:50:49 +0100 Received: from [81.21.138.17] (helo=ronaldradial.versatec.local) by smtp.greenhost.nl with esmtpsa (TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.72) (envelope-from ) id 1TTqUB-0002km-Er for freebsd-fs@freebsd.org; Thu, 01 Nov 2012 09:50:47 +0100 Content-Type: text/plain; charset=utf-8; format=flowed; delsp=yes To: freebsd-fs@freebsd.org Subject: Re: geli device istgt References: <9A757AF2CA7F204A8F2444FFC5C27C301CADCD4C@Exchange2010.Skynet.local> Date: Thu, 01 Nov 2012 09:50:46 +0100 MIME-Version: 1.0 Content-Transfer-Encoding: 8bit From: "Ronald Klop" Message-ID: In-Reply-To: <9A757AF2CA7F204A8F2444FFC5C27C301CADCD4C@Exchange2010.Skynet.local> User-Agent: Opera Mail/12.02 (Win32) X-Virus-Scanned: by clamav at smarthost1.samage.net X-Spam-Level: - X-Spam-Score: -1.1 X-Spam-Status: No, score=-1.1 required=5.0 tests=BAYES_05 autolearn=disabled version=3.2.5 X-Scan-Signature: 1fac81774939798d1cdb19633eb460de X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 01 Nov 2012 08:50:56 -0000 On Wed, 31 Oct 2012 21:17:25 +0100, Johannes Mäulen wrote: > Hi there, > I'm new here, "Hello" :). Hopefully it's the right mailing list... > I've setup a Freebsd 9.0-RELEASE machine to handle my storage devices. > I'd like to "share" encrypted partitions via iscsi. But, I'd like to > take the encryption take place on the iscsi-target(-machine). The > machine is equipped with a aes-ni capable cpu, which I'd like to use. I > set up a geli device, but whenever I try to use it as target I get > errors like: > > /usr/local/etc/rc.d/istgt start > Starting istgt. > istgt version 0.5 (20121028) > normal mode > using kqueue > using host atomic > LU1 HDD UNIT > LU1: LUN0 file=/dev/da0p1.eli, size=1499976953856 > LU1: LUN0 2929642488 blocks, 512 bytes/block > istgt_lu_disk.c: 330:istgt_lu_disk_allocate_raw: ***ERROR*** > lu_disk_read() failed > istgt_lu_disk.c: 650:istgt_lu_disk_init: ***ERROR*** LU1: LUN0: allocate > error > istgt_lu.c:2091:istgt_lu_init_unit: ***ERROR*** LU1: lu_disk_init() > failed > istgt_lu.c:2166:istgt_lu_init: ***ERROR*** LU1: lu_init_unit() failed > istgt.c:2799:main: ***ERROR*** istgt_lu_init() failed > /usr/local/etc/rc.d/istgt: WARNING: failed to start istgt > > Could somebody help me with that? > If I try to start istgt with an unencrypted partition everything works > as expected. > > Kind regards > > Johannes Can you provide some information about how you setup geli and istg? 
And does the geli partition work locally without istgt? Ronald. From owner-freebsd-fs@FreeBSD.ORG Thu Nov 1 09:29:01 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 8D1EC60B for ; Thu, 1 Nov 2012 09:29:01 +0000 (UTC) (envelope-from paul-freebsd@fletchermoorland.co.uk) Received: from hercules.mthelicon.com (hercules.mthelicon.com [66.90.118.40]) by mx1.freebsd.org (Postfix) with ESMTP id 510BF8FC0A for ; Thu, 1 Nov 2012 09:29:00 +0000 (UTC) Received: from demophon.fletchermoorland.co.uk (hydra.fletchermoorland.co.uk [78.33.209.59] (may be forged)) (authenticated bits=0) by hercules.mthelicon.com (8.14.5/8.14.5) with ESMTP id qA19SqYX011127; Thu, 1 Nov 2012 09:28:53 GMT (envelope-from paul-freebsd@fletchermoorland.co.uk) Message-ID: <509240D3.7070607@fletchermoorland.co.uk> Date: Thu, 01 Nov 2012 09:28:51 +0000 From: Paul Wootton User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:12.0) Gecko/20120530 Thunderbird/12.0.1 MIME-Version: 1.0 To: Zaphod Beeblebrox Subject: Re: ZFS RaidZ-2 problems References: <508F98F9.3040604@fletchermoorland.co.uk> <1351598684.88435.19.camel@btw.pki2.com> <508FE643.4090107@fletchermoorland.co.uk> <5090010A.4050109@fletchermoorland.co.uk> In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 01 Nov 2012 09:29:01 -0000 On 10/31/12 17:58, Zaphod Beeblebrox wrote: > I'd start off by saying "smart is your friend." Install smartmontools > and study the somewhat opaque "smartctl -a /dev/mydisk" output > carefully. Try running a short and/or long test, too. Many times the > disk can tell you what the problem is. If too many blocks are being > replaced, your drive is dying. If the drive sees errors in commands > it receives, the cable or the controller are at fault. ZFS itself > does _exceptionally_ well at trying to use what it has. I already run SmartMonTools regularly. I do have a one of my drives that is starting to go bad. The drive that keeps disconnecting actually looks on on SMART (when it's connected). I normally also run a period scrub every few days (I've been caught out a few times before) > I'll also say that bad power supplies make for bad disks. Replacing a > power supply has often been the solution to bad disk problems I've > had. Disks are sensitive to under voltage problems. Brown-outs can > exacerbate this problem. My parents live out where power is very > flaky. Cheap UPSs didn't help much ... but a good power supply can > make all the difference. Maybe... I will not run out a bad power supply > But I've also had bad controllers of late, too. My most recent > problem had my 9-disk raidZ1 array loose a disk. Smartctl said that > it was loosing blocks fast, so I RMA'd the disk. When the new disk > came, the array just wouldn't heal... it kept loosing the disks > attached to a certain controller. Now it's possible the controller > was bad before the disk had died ... or that it died during the first > attempt at resilver ... or that FreeBSD drivers don't like it anymore > ... I don't know. > > My solution was to get two more 4 drive "pro box" SATA enclosures. 
> They use a 1-to-4 SATA breakout and the 6 motherboard ports I have are > a revision of the ICH11 intel chipset that supports SATA port > replication (I already had two of these boxes). In this manner I > could remove the defective controller and put all disks onto the > motherboard ICH11 (it actually also allowed me to later expand the > array... but that's not part of this story). Again maybe... It might be a controller or cable. It could actually be the drive. I am not worried about the hardware side. I can replace the disks, cables, controllers and power supply without any problems. As I said before, the issue I have is that I have a 9-disk RAIDZ-2 pool with only 1 disk showing as offline and the pool is showing as faulted. If the power supply was bouncing and a drive was giving bad data, I would expect ZFS to report that 2 drives were faulted (1 offline and 1 corrupt). Is there a way with ZDB that I can see why the pool is showing as faulted? Can it tell me which drives it thinks are bad, or has bad data? Paul From owner-freebsd-fs@FreeBSD.ORG Thu Nov 1 13:29:41 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 3903E4B9; Thu, 1 Nov 2012 13:29:41 +0000 (UTC) (envelope-from prvs=1652892d21=killing@multiplay.co.uk) Received: from mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) by mx1.freebsd.org (Postfix) with ESMTP id 950578FC16; Thu, 1 Nov 2012 13:29:40 +0000 (UTC) Received: from r2d2 ([188.220.16.49]) by mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) (MDaemon PRO v10.0.4) with ESMTP id md50000910332.msg; Thu, 01 Nov 2012 13:29:36 +0000 X-Spam-Processed: mail1.multiplay.co.uk, Thu, 01 Nov 2012 13:29:36 +0000 (not processed: message from valid local sender) X-MDRemoteIP: 188.220.16.49 X-Return-Path: prvs=1652892d21=killing@multiplay.co.uk X-Envelope-From: killing@multiplay.co.uk Message-ID: From: "Steven Hartland" To: "Steven Hartland" , "Peter Jeremy" References: <27087376D1C14132A3CC1B4016912F6D@multiplay.co.uk> <20121031212346.GL3309@server.rulingia.com> <9DB937FEA7634C4BAC49EF5823F93CA3@multiplay.co.uk> Subject: Re: ZFS corruption due to lack of space? Date: Thu, 1 Nov 2012 13:29:34 -0000 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="iso-8859-1"; reply-type=response Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157 Cc: freebsd-fs@freebsd.org, freebsd-stable@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 01 Nov 2012 13:29:41 -0000 After destroying and re-creating the pool and then writing zeros to the disk in multiple files without filling the fs I've managed to reproduce the corruption again so we can rule out full disk as the cause. I'm now testing different scenarios to try and identify the culprit; the first test is removing the SSD ZIL and cache disks. Suspects: HW issues (memory, cables, MB, disks), driver issue (not used mfi on tbolt 2208 based cards before). Regards Steve ================================================ This e.mail is private and confidential between Multiplay (UK) Ltd. and the person or entity to whom it is addressed. 
In the event of misdirection, the recipient is prohibited from using, copying, printing or otherwise disseminating it or any information contained in it. In the event of misdirection, illegible or incomplete transmission please telephone +44 845 868 1337 or return the E.mail to postmaster@multiplay.co.uk. From owner-freebsd-fs@FreeBSD.ORG Thu Nov 1 19:02:36 2012 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id E2575742; Thu, 1 Nov 2012 19:02:36 +0000 (UTC) (envelope-from eadler@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) by mx1.freebsd.org (Postfix) with ESMTP id B0CA38FC0A; Thu, 1 Nov 2012 19:02:36 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.5/8.14.5) with ESMTP id qA1J2aRP096665; Thu, 1 Nov 2012 19:02:36 GMT (envelope-from eadler@freefall.freebsd.org) Received: (from eadler@localhost) by freefall.freebsd.org (8.14.5/8.14.5/Submit) id qA1J2alS096661; Thu, 1 Nov 2012 19:02:36 GMT (envelope-from eadler) Date: Thu, 1 Nov 2012 19:02:36 GMT Message-Id: <201211011902.qA1J2alS096661@freefall.freebsd.org> To: eadler@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: eadler@FreeBSD.org Subject: Re: kern/173254: [zfs] [patch] Upgrade requests used in ZFS trim map based on ashift X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 01 Nov 2012 19:02:37 -0000 Old Synopsis: Upgrade requests used in ZFS trim map based on ashift (patch included) New Synopsis: [zfs] [patch] Upgrade requests used in ZFS trim map based on ashift Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: eadler Responsible-Changed-When: Thu Nov 1 19:02:08 UTC 2012 Responsible-Changed-Why: set synopsis and assign http://www.freebsd.org/cgi/query-pr.cgi?pr=173254 From owner-freebsd-fs@FreeBSD.ORG Thu Nov 1 20:07:55 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 2312D143 for ; Thu, 1 Nov 2012 20:07:55 +0000 (UTC) (envelope-from zbeeble@gmail.com) Received: from mail-la0-f54.google.com (mail-la0-f54.google.com [209.85.215.54]) by mx1.freebsd.org (Postfix) with ESMTP id 5E0498FC08 for ; Thu, 1 Nov 2012 20:07:53 +0000 (UTC) Received: by mail-la0-f54.google.com with SMTP id e12so2687425lag.13 for ; Thu, 01 Nov 2012 13:07:53 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=79Tm/czvDF7LIbxwpNCZyPK9GhBdcEwcPcCXl9NhJYU=; b=TOcpnWDqewz/r4pQ/in7DycAfsKpfGVOSeitLTFdum8E/xIh3vQSIaxlYgN1BUs3/r nh0f4RMWTjQNQKEgFk2GIgV+/h5aU94w6uPVsNWKd44JseesTVV6g7LZrbkNfeIVMRm5 CFtpm/ji5/MllIACrBLiIaG3QbGMkrUfYv/rtIglVDCxqfY0QAjfSonCYCvxBh8p/2zf a6wdYxWb3PjaWIrjRSTTs0Os1kd9flFBAPoYKSy5Xe1Krq/LvUjr4vmPMkQdDpCgVNyp C+zM6cvhR3V3MB46rbdml977/rP/AVk82NCYmijIdGkUiw+OomNKfNh3Wt0wJceTGrsI dIGA== MIME-Version: 1.0 Received: by 10.152.108.37 with SMTP id hh5mr38157986lab.52.1351800473090; Thu, 01 Nov 2012 13:07:53 -0700 (PDT) Received: by 10.112.49.138 with HTTP; Thu, 1 Nov 2012 13:07:52 -0700 (PDT) In-Reply-To: <1351739465.25936.5.camel@btw.pki2.com> References: <508F98F9.3040604@fletchermoorland.co.uk> 
<1351598684.88435.19.camel@btw.pki2.com> <508FE643.4090107@fletchermoorland.co.uk> <5090010A.4050109@fletchermoorland.co.uk> <1351739465.25936.5.camel@btw.pki2.com> Date: Thu, 1 Nov 2012 16:07:52 -0400 Message-ID: Subject: Re: ZFS RaidZ-2 problems From: Zaphod Beeblebrox To: Dennis Glatting Content-Type: text/plain; charset=ISO-8859-1 Cc: freebsd-fs@freebsd.org, Ronald Klop X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 01 Nov 2012 20:07:55 -0000 On Wed, Oct 31, 2012 at 11:11 PM, Dennis Glatting wrote: > To be clear, I am unsure whether my problem was the power supply or the > wiring -- it could have been a flaky connector in the strand. I simply > replaced it all. > > I had a 1,000W power supply drawing ~400W on the intake. Assuming 80% > efficiency, the power supply should have had plenty of ummpf left. > Regardless, the new power supply was cheap compared to my > frustration. :) Well... to test the power supply, you really need to "scope" the power that the drive uses... likely 12V. Bad wires can also be of effect here, but not everyone has a scope. It's not about the % of available capacity in the case of a bad power supply unit, it's about the quality of the unit itself. What _I_ was talking about was the input to the unit. Roughly, as I understand it, switching power supplies work by "filling" the capacitors that "float" the voltage rails by diverting power from the incoming sine wave to the capacitor when it is above a threshold. A quality supply that is lightly loaded might run several seconds without power before the capacitor drains to the point of shutting off. The "quality" of the power supply comes in removing the waveform from the voltage rail that would otherwise result. Now... my suspicion of the parent's power problems come in the form of the power supply unit's reaction to brown-outs that are not severe enough to trigger the inexpensive (off-line) UPS. If the result of the brown-out is a dip in the voltage on the 12V rail while (coincidentally) the drive is writing to the disk... this is where the drive starts to loose sectors fairly rapidly. To put it more graphically: good power + cheap power supply unit == mostly working computer. bad power + cheap power supply unit == dead disks. good power supply units are generally a good idea. 
From owner-freebsd-fs@FreeBSD.ORG Thu Nov 1 20:44:05 2012 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id A34817A1 for ; Thu, 1 Nov 2012 20:44:05 +0000 (UTC) (envelope-from lists@jnielsen.net) Received: from ns1.jnielsen.net (secure.freebsdsolutions.net [69.55.234.48]) by mx1.freebsd.org (Postfix) with ESMTP id 80BF18FC0C for ; Thu, 1 Nov 2012 20:44:04 +0000 (UTC) Received: from [10.10.1.32] (office.betterlinux.com [199.58.199.60]) (authenticated bits=0) by ns1.jnielsen.net (8.14.4/8.14.4) with ESMTP id qA1KhpTq060551 (version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=NOT) for ; Thu, 1 Nov 2012 16:43:51 -0400 (EDT) (envelope-from lists@jnielsen.net) Content-Type: text/plain; charset=iso-8859-1 Mime-Version: 1.0 (Mac OS X Mail 6.2 \(1499\)) Subject: Re: creating a bootable ZFS image From: John Nielsen In-Reply-To: <50919783.5060807@FreeBSD.org> Date: Thu, 1 Nov 2012 14:43:54 -0600 Content-Transfer-Encoding: quoted-printable Message-Id: References: <50919783.5060807@FreeBSD.org> To: "freebsd-fs@freebsd.org" X-Mailer: Apple Mail (2.1499) X-DCC-x.dcc-servers-Metrics: ns1.jnielsen.net 104; Body=1 Fuz1=1 Fuz2=1 X-Virus-Scanned: clamav-milter 0.97.5 at ns1.jnielsen.net X-Virus-Status: Clean X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 01 Nov 2012 20:44:05 -0000 On Oct 31, 2012, at 3:26 PM, Andriy Gapon wrote: > on 31/10/2012 22:01 John Nielsen said the following: >> Is it possible to boot from an exported filesystem? > It is possible in head since recently. And soon it will be possible in stable/[89]. Good to know, thanks! For this project I mainly want to stick with release versions but I will watch for this in 9-STABLE and eventually 9.2. 
JN From owner-freebsd-fs@FreeBSD.ORG Thu Nov 1 20:54:23 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 0AB23A64 for ; Thu, 1 Nov 2012 20:54:23 +0000 (UTC) (envelope-from lists@jnielsen.net) Received: from ns1.jnielsen.net (secure.freebsdsolutions.net [69.55.234.48]) by mx1.freebsd.org (Postfix) with ESMTP id C26598FC0A for ; Thu, 1 Nov 2012 20:54:22 +0000 (UTC) Received: from [10.10.1.32] (office.betterlinux.com [199.58.199.60]) (authenticated bits=0) by ns1.jnielsen.net (8.14.4/8.14.4) with ESMTP id qA1KsJox088986 (version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=NOT) for ; Thu, 1 Nov 2012 16:54:19 -0400 (EDT) (envelope-from lists@jnielsen.net) Content-Type: text/plain; charset=us-ascii Mime-Version: 1.0 (Mac OS X Mail 6.2 \(1499\)) Subject: Re: creating a bootable ZFS image From: John Nielsen In-Reply-To: <20121031233007.57aea90b@fabiankeil.de> Date: Thu, 1 Nov 2012 14:54:21 -0600 Content-Transfer-Encoding: quoted-printable Message-Id: References: <20121031233007.57aea90b@fabiankeil.de> To: freebsd-fs@freebsd.org X-Mailer: Apple Mail (2.1499) X-DCC-x.dcc-servers-Metrics: ns1.jnielsen.net 104; Body=1 Fuz1=1 Fuz2=1 X-Virus-Scanned: clamav-milter 0.97.5 at ns1.jnielsen.net X-Virus-Status: Clean X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 01 Nov 2012 20:54:23 -0000 On Oct 31, 2012, at 4:30 PM, Fabian Keil wrote: > John Nielsen wrote: > >> What I am doing instead is creating the pool with -o failmode=continue, >> installing, unmounting everything, then forcibly detaching the md >> device. This gives me an image I can use, and it boots and runs fine. >> Unfortunately, that leaves me with a defunct pool on the build host >> until I reboot it. Anything I try to do to the pool (destroy, offline, >> export, etc) returns "cannot open 'zfsroot': pool I/O is currently >> suspended." (With the default failmode=wait, it's even worse since any >> command that tries to touch the pool never returns.) The pool state is >> "UNAVAIL" and the device state is "REMOVED". Once the build host is >> rebooted the device state changes to UNAVAIL and zpool destroy works as >> expected. > > Did you try "zpool clear [-F] $pool" after reattaching the md? > > It often works for me in situations where other zpool subcommands > just hang like you described above. Thanks for the response. I haven't tried that since I don't want to reattach [a copy of] the md if I don't have to. However, this suggestion prompted me to come up with the following, which will work until something better comes along. It takes advantage of ZFS on the build host to make a temporary snapshot of the zfs where the image file is located. Could also be adapted to use a zvol instead of an image file. # zfs unmount imageroot (and its children) ... modify mount points, etc for target ... # zfs snapshot buildhostpool/images@mkimage_tmp # zpool destroy imageroot # mdconfig -d -u ${MD} # zfs rollback buildhostpool/images@mkimage_tmp # zfs destroy buildhostpool/images@mkimage_tmp No orphaned zpool or busy md device on the build host, but all the bits are still intact on the image after rolling back the snapshot. 
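The zvol variant mentioned above would presumably look something like the following; the volume name target0 and the sizes/paths are placeholders, and this is only a sketch of the idea, not a tested recipe:

# zfs create -V 4G buildhostpool/images/target0                       (zvol appears at /dev/zvol/buildhostpool/images/target0)
# zpool create imageroot /dev/zvol/buildhostpool/images/target0       (install into imageroot, adjust target mountpoints, then...)
# zpool export imageroot
# dd if=/dev/zvol/buildhostpool/images/target0 of=/images/target0.img bs=1m
# zfs destroy buildhostpool/images/target0

With a zvol there is no md device to yank out from under the pool, so a plain zpool export should be enough and the suspended-pool state never arises, although layering a pool on top of a zvol in the host's own pool has its own caveats.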
Thanks, JN From owner-freebsd-fs@FreeBSD.ORG Thu Nov 1 22:25:06 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 6809B50D for ; Thu, 1 Nov 2012 22:25:06 +0000 (UTC) (envelope-from yanegomi@gmail.com) Received: from mail-ob0-f182.google.com (mail-ob0-f182.google.com [209.85.214.182]) by mx1.freebsd.org (Postfix) with ESMTP id 213168FC0A for ; Thu, 1 Nov 2012 22:25:05 +0000 (UTC) Received: by mail-ob0-f182.google.com with SMTP id wc20so3854772obb.13 for ; Thu, 01 Nov 2012 15:25:05 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:date:message-id:subject:from:to:content-type; bh=X49ZvcS8i5JacVUYMB9VPpDIdJL8qf3xuYFByPIWDfg=; b=GZZ6r7K0BxEvVWmSUV2kS/LPEc4EEOD5SFaHGLEuKhKhtw50740mrOCq601OfEtAcU 5cwatRfY5nmJ6OmkdPzGenRD7PLbQ0Yk2HiyHvdI/VUadDf6J91t8+EdK+aiUdKGAs48 JPj4JhDxlagAjuOJZDSh5dXJGWCVSx+XB7CbZ48bMDZCVUst6UGHLpkb13PFdofuf4Je xgPTntl///tZ13WZN3fDzKVtj2JeeSmy4H5GhkM2wDqMbg56b43yrEzcbOBbFU6C3QPd ydiLCIgu+Q7SaqPBRT0WI1KbW5tUF3xVCxsLhVLcLlfxK0BMOTSUH3f0K8pyLAya1kYf 191w== MIME-Version: 1.0 Received: by 10.182.172.74 with SMTP id ba10mr34627427obc.83.1351808705432; Thu, 01 Nov 2012 15:25:05 -0700 (PDT) Received: by 10.76.143.33 with HTTP; Thu, 1 Nov 2012 15:25:05 -0700 (PDT) Date: Thu, 1 Nov 2012 15:25:05 -0700 Message-ID: Subject: Inconsistent/potentially incorrect behavior with relative lookups via chdir(2) on UFS/ZFS From: Garrett Cooper To: FreeBSD FS Content-Type: multipart/mixed; boundary=e89a8f839f6fb10b5f04cd767d72 X-Content-Filtered-By: Mailman/MimeDel 2.1.14 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 01 Nov 2012 22:25:06 -0000 --e89a8f839f6fb10b5f04cd767d72 Content-Type: text/plain; charset=ISO-8859-1 Hi, Just doing some interop testing on UFS/ZFS to develop a baseline for filesystem behavior, and I noticed some inconsistencies with the ENOENT requirement in chdir(2) when dealing with relative ".." paths (dot-dot lookups). In particular... 1. I would have expected chdir('.') to have failed in UFS/ZFS with ENOENT if '.' wasn't present, but it didn't. 2. I would have expected chdir('..') to have failed in ZFS with ENOENT if '..' wasn't present, but it didn't. Sidenote: python doesn't do any special handling with os.chdir, per Modules/posixmodule.c (I checked). The full test I ran is included below. Thoughts? Thanks, -Garrett # uname -a FreeBSD forza.west.isilon.com 9.1-PRERELEASE FreeBSD 9.1-PRERELEASE #0 r240770: Thu Sep 20 19:28:45 PDT 2012 gcooper@forza.west.isilon.com:/usr/obj/usr/src/sys/FORZA amd64 UFS: # (set -x; rm -Rf *; for mc in '' 1; do export MUSICAL_CHAIRS=$mc; for d in '' parent/child; do export BASEDIR=$d; python ~gcooper/pull_rug.py; env CHROOT=1 python ~gcooper/pull_rug.py; done; done) + cd /tmp/foo/bar/baz/ + rm -Rf child parent + for mc in ''\'''\''' 1 + export MUSICAL_CHAIRS= + MUSICAL_CHAIRS= + for d in ''\'''\''' parent/child + export BASEDIR= + BASEDIR= + python /home/gcooper/pull_rug.py did not fail with chdir(.) cwd after is indeterminate + env CHROOT=1 python /home/gcooper/pull_rug.py did not fail with chdir(.) cwd after is indeterminate + for d in ''\'''\''' parent/child + export BASEDIR=parent/child + BASEDIR=parent/child + python /home/gcooper/pull_rug.py did not fail with chdir(.) 
cwd after is indeterminate + env CHROOT=1 python /home/gcooper/pull_rug.py did not fail with chdir(.) cwd after is indeterminate + for mc in ''\'''\''' 1 + export MUSICAL_CHAIRS=1 + MUSICAL_CHAIRS=1 + for d in ''\'''\''' parent/child + export BASEDIR= + BASEDIR= + python /home/gcooper/pull_rug.py [parent,before] inode is: 9985 [parent,after] inode is: 9988 [child] inode from fstat is: 9985 [child] inode from stat is: 9988 did not fail with chdir(.) cwd after is indeterminate + env CHROOT=1 python /home/gcooper/pull_rug.py [parent,before] inode is: 9989 [parent,after] inode is: 9985 [child] inode from fstat is: 4 did not fail with chdir(.) cwd after is indeterminate + for d in ''\'''\''' parent/child + export BASEDIR=parent/child + BASEDIR=parent/child + python /home/gcooper/pull_rug.py [parent,before] inode is: 5 [parent,after] inode is: 8 [child] inode from fstat is: 5 [child] inode from stat is: 8 did not fail with chdir(.) cwd after is indeterminate + env CHROOT=1 python /home/gcooper/pull_rug.py [parent,before] inode is: 10 [parent,after] inode is: 6 [child] inode from fstat is: 4 did not fail with chdir(.) cwd after is indeterminate ZFS: # (set -x; rm -Rf *; for mc in '' 1; do export MUSICAL_CHAIRS=$mc; for d in '' parent/child; do export BASEDIR=$d; python ~gcooper/pull_rug.py; env CHROOT=1 python ~gcooper/pull_rug.py; done; done) + cd /root/foo/bar/baz/ + rm -Rf child parent + for mc in ''\'''\''' 1 + export MUSICAL_CHAIRS= + MUSICAL_CHAIRS= + for d in ''\'''\''' parent/child + export BASEDIR= + BASEDIR= + python /home/gcooper/pull_rug.py did not fail with chdir(.) did not fail with chdir(../..) cwd after is /root/foo/bar + env CHROOT=1 python /home/gcooper/pull_rug.py did not fail with chdir(.) did not fail with chdir(../..) cwd after is / + for d in ''\'''\''' parent/child + export BASEDIR=parent/child + BASEDIR=parent/child + python /home/gcooper/pull_rug.py did not fail with chdir(.) cwd after is indeterminate + env CHROOT=1 python /home/gcooper/pull_rug.py did not fail with chdir(.) cwd after is indeterminate + for mc in ''\'''\''' 1 + export MUSICAL_CHAIRS=1 + MUSICAL_CHAIRS=1 + for d in ''\'''\''' parent/child + export BASEDIR= + BASEDIR= + python /home/gcooper/pull_rug.py [parent,before] inode is: 3688787 [parent,after] inode is: 3688789 [child] inode from fstat is: 3688787 [child] inode from stat is: 3688789 did not fail with chdir(.) did not fail with chdir(../..) cwd after is /root/foo/bar + env CHROOT=1 python /home/gcooper/pull_rug.py [parent,before] inode is: 3688790 [parent,after] inode is: 3688792 [child] inode from fstat is: 4 did not fail with chdir(.) did not fail with chdir(../..) cwd after is / + for d in ''\'''\''' parent/child + export BASEDIR=parent/child + BASEDIR=parent/child + python /home/gcooper/pull_rug.py [parent,before] inode is: 3688794 [parent,after] inode is: 3688797 [child] inode from fstat is: 3688794 [child] inode from stat is: 3688797 did not fail with chdir(.) cwd after is indeterminate + env CHROOT=1 python /home/gcooper/pull_rug.py [parent,before] inode is: 3688799 [parent,after] inode is: 3688802 [child] inode from fstat is: 4 did not fail with chdir(.) 
cwd after is indeterminate --e89a8f839f6fb10b5f04cd767d72 Content-Type: application/octet-stream; name="pull_rug.py" Content-Disposition: attachment; filename="pull_rug.py" Content-Transfer-Encoding: base64 X-Attachment-Id: f_h90fv7kp0 IyEvdXNyL2Jpbi9lbnYgcHl0aG9uCgppbXBvcnQgZXJybm8KaW1wb3J0IG9zCmltcG9ydCBzaHV0 aWwKaW1wb3J0IHNpZ25hbAppbXBvcnQgc3lzCmltcG9ydCB0cmFjZWJhY2sKaW1wb3J0IHRpbWUK CkNIUk9PVCA9IG9zLmdldGVudignQ0hST09UJykKCmJhc2VkaXIgPSBvcy5lbnZpcm9uLmdldCgn QkFTRURJUicpIG9yICdjaGlsZCcKc3RhcnRpbmdfcHQgPSBDSFJPT1QgYW5kICcvJyBvciBvcy5w YXRoLmpvaW4ob3MuZ2V0Y3dkKCksIGJhc2VkaXIpCmZpbmFsX2Rlc3RpbmF0aW9uID0gb3MucGF0 aC5qb2luKGJhc2VkaXIsICdjaGlsZDEnKQppZiBvcy5wYXRoLmV4aXN0cyhiYXNlZGlyKToKICAg IHNodXRpbC5ybXRyZWUoYmFzZWRpci5zcGxpdChvcy5wYXRoLnNlcClbMF0pCm9zLm1ha2VkaXJz KGJhc2VkaXIsIDA3MDApCm9zLnN5bWxpbmsoJy4uL2NoaWxkJywgZmluYWxfZGVzdGluYXRpb24p CmNoaWxkX3BpZCA9IG9zLmZvcmsoKQppZiBjaGlsZF9waWQ6CiAgICB0aW1lLnNsZWVwKDUpCiAg ICB0cnk6CiAgICAgICAgaWYgb3MuZ2V0ZW52KCdNVVNJQ0FMX0NIQUlSUycpOgogICAgICAgICAg ICBwcmludCAnW3BhcmVudCxiZWZvcmVdIGlub2RlIGlzOicsIG9zLnN0YXQoYmFzZWRpcikuc3Rf aW5vCiAgICAgICAgc2h1dGlsLnJtdHJlZShiYXNlZGlyLnNwbGl0KG9zLnBhdGguc2VwKVswXSkK ICAgICAgICBpZiBvcy5nZXRlbnYoJ01VU0lDQUxfQ0hBSVJTJyk6CiAgICAgICAgICAgIG9zLm1h a2VkaXJzKGJhc2VkaXIsIDA3MDApCiAgICAgICAgICAgIHByaW50ICdbcGFyZW50LGFmdGVyXSBp bm9kZSBpczonLCBvcy5zdGF0KGJhc2VkaXIpLnN0X2lubwogICAgZXhjZXB0OgogICAgICAgIHN5 cy5zdGRvdXQud3JpdGUodHJhY2ViYWNrLnByaW50X2V4YygpKQogICAgICAgIG9zLmtpbGwoY2hp bGRfcGlkLCBzaWduYWwuU0lHS0lMTCkKICAgICAgICBzeXMuZXhpdCgyKQplbHNlOgoKICAgIGZk ID0gb3Mub3BlbihzdGFydGluZ19wdCwgb3MuT19SRE9OTFkpCiAgICBpZiBDSFJPT1Q6CiAgICAg ICAgb3MuY2hyb290KCcuJykKICAgIG9zLmNoZGlyKGZpbmFsX2Rlc3RpbmF0aW9uKQoKICAgIHRp bWUuc2xlZXAoMTApCiAgICBpZiBvcy5nZXRlbnYoJ01VU0lDQUxfQ0hBSVJTJyk6CiAgICAgICAg IyBDYW4ndCBkbyB0aGlzIGluIHB5dGhvbiBiZWNhdXNlIG9zLnBhdGgue2FicyxyZWx9cGF0aC9v cy5zdGF0IGZhaWxzCiAgICAgICAgIyBpZiB0aGUgcGF0aCBpcyBtaXNzaW5nIDspLgogICAgICAg IHByaW50ICdbY2hpbGRdIGlub2RlIGZyb20gZnN0YXQgaXM6Jywgb3MuZnN0YXQoZmQpLnN0X2lu bwogICAgICAgIG9zLnN5c3RlbSgnZWNobyBcW2NoaWxkXF0gaW5vZGUgZnJvbSBzdGF0IGlzOiBg c3RhdCAtZiAlJWkgJXNgJwogICAgICAgICAgICAgICAgICAlIChzdGFydGluZ19wdCwgKSkKCiAg ICBzaWduYWwuYWxhcm0oNSkKICAgIHRyeToKICAgICAgICBwcmludCBvcy5nZXRjd2QoKQogICAg ICAgIHByaW50ICdkaWQgbm90IGZhaWwgd2hlbiBjYWxsaW5nIGdldGN3ZCgpJwogICAgZXhjZXB0 IE9TRXJyb3IgYXMgb3NlOgogICAgICAgIGlmIG9zZS5lcnJubyAhPSBlcnJuby5FTk9FTlQ6CiAg ICAgICAgICAgIHJhaXNlCiAgICB0cnk6CiAgICAgICAgb3MuY2hkaXIoJy4nKQogICAgICAgIHBy aW50ICdkaWQgbm90IGZhaWwgd2l0aCBjaGRpciguKScKICAgIGV4Y2VwdCBPU0Vycm9yIGFzIG9z ZToKICAgICAgICBpZiBvc2UuZXJybm8gIT0gZXJybm8uRU5PRU5UOgogICAgICAgICAgICByYWlz ZQogICAgdHJ5OgogICAgICAgIG9zLmNoZGlyKCcuLi8uLicpCiAgICAgICAgcHJpbnQgJ2RpZCBu b3QgZmFpbCB3aXRoIGNoZGlyKC4uLy4uKScKICAgIGV4Y2VwdCBPU0Vycm9yIGFzIG9zZToKICAg ICAgICBpZiBvc2UuZXJybm8gIT0gZXJybm8uRU5PRU5UOgogICAgICAgICAgICByYWlzZQogICAg dHJ5OgogICAgICAgIHByaW50ICdjd2QgYWZ0ZXIgaXMnLCBvcy5nZXRjd2QoKQogICAgZXhjZXB0 IE9TRXJyb3IgYXMgb3NlOgogICAgICAgIHByaW50ICdpbmRldGVybWluYXRlJwogICAgc2lnbmFs LmFsYXJtKDApCiAgICBvcy5fZXhpdCgwKQoKc3lzLmV4aXQob3Mud2FpdHBpZChjaGlsZF9waWQs IDApWzFdKQo= --e89a8f839f6fb10b5f04cd767d72-- From owner-freebsd-fs@FreeBSD.ORG Thu Nov 1 22:44:12 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 35E398EB; Thu, 1 Nov 2012 22:44:12 +0000 (UTC) (envelope-from peter@rulingia.com) Received: from vps.rulingia.com (host-122-100-2-194.octopus.com.au [122.100.2.194]) by mx1.freebsd.org (Postfix) with ESMTP id 
9A3508FC15; Thu, 1 Nov 2012 22:44:10 +0000 (UTC) Received: from server.rulingia.com (c220-239-241-202.belrs5.nsw.optusnet.com.au [220.239.241.202]) by vps.rulingia.com (8.14.5/8.14.5) with ESMTP id qA1Mi2lf050486 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK); Fri, 2 Nov 2012 09:44:03 +1100 (EST) (envelope-from peter@rulingia.com) X-Bogosity: Ham, spamicity=0.000000 Received: from server.rulingia.com (localhost.rulingia.com [127.0.0.1]) by server.rulingia.com (8.14.5/8.14.5) with ESMTP id qA1MhuRW012570 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Fri, 2 Nov 2012 09:43:56 +1100 (EST) (envelope-from peter@server.rulingia.com) Received: (from peter@localhost) by server.rulingia.com (8.14.5/8.14.5/Submit) id qA1MhtUi012569; Fri, 2 Nov 2012 09:43:55 +1100 (EST) (envelope-from peter) Date: Fri, 2 Nov 2012 09:43:55 +1100 From: Peter Jeremy To: Steven Hartland Subject: Re: ZFS corruption due to lack of space? Message-ID: <20121101224355.GS3309@server.rulingia.com> References: <27087376D1C14132A3CC1B4016912F6D@multiplay.co.uk> <20121031212346.GL3309@server.rulingia.com> <9DB937FEA7634C4BAC49EF5823F93CA3@multiplay.co.uk> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="nO3oAMapP4dBpMZi" Content-Disposition: inline In-Reply-To: X-PGP-Key: http://www.rulingia.com/keys/peter.pgp User-Agent: Mutt/1.5.21 (2010-09-15) Cc: freebsd-fs@freebsd.org, freebsd-stable@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 01 Nov 2012 22:44:12 -0000 --nO3oAMapP4dBpMZi Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On 2012-Nov-01 13:29:34 -0000, Steven Hartland wr= ote: >After destroying and re-creating the pool and then writing >zeros to the disk in multiple files without filling the fs >I've manged to reproduce the corruption again so we can >rule out full disk as the cause. Many years ago, I wrote a simple utility that fills a raw disk with a pseudo-random sequence and then verifies it. This sort of tool can be useful for detecting the presence of silent data corruption (or disk address wraparound). >Suspects: HW issues (memory, cables, MB, disks), driver issue >(not used mfi on tbolt 2208 based cards before). There has been a recent thread about various strange behaviours from LSI controllers and it has been stated that (at least for the 2008) the card firmware _must_ match the FreeBSD driver version. 
See http://lists.freebsd.org/pipermail/freebsd-stable/2012-August/069205.html --=20 Peter Jeremy --nO3oAMapP4dBpMZi Content-Type: application/pgp-signature -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.19 (FreeBSD) iEYEARECAAYFAlCS+ysACgkQ/opHv/APuIcMYQCgrirpHq1OO7Sc3kXoK2/MSk1x nWsAoKHR3EhxBVgFcYUBJa6v13sKOok0 =CVoP -----END PGP SIGNATURE----- --nO3oAMapP4dBpMZi-- From owner-freebsd-fs@FreeBSD.ORG Thu Nov 1 23:20:01 2012 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 8C1C4FAB for ; Thu, 1 Nov 2012 23:20:01 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) by mx1.freebsd.org (Postfix) with ESMTP id 716358FC0C for ; Thu, 1 Nov 2012 23:20:01 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.5/8.14.5) with ESMTP id qA1NK1mj023770 for ; Thu, 1 Nov 2012 23:20:01 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.5/8.14.5/Submit) id qA1NK13O023769; Thu, 1 Nov 2012 23:20:01 GMT (envelope-from gnats) Date: Thu, 1 Nov 2012 23:20:01 GMT Message-Id: <201211012320.qA1NK13O023769@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: "Steven Hartland" Subject: Re: kern/173254: [zfs] [patch] Upgrade requests used in ZFS trim map based on ashift X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list Reply-To: Steven Hartland List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 01 Nov 2012 23:20:01 -0000 The following reply was made to PR kern/173254; it has been noted by GNATS. From: "Steven Hartland" To: , Cc: Subject: Re: kern/173254: [zfs] [patch] Upgrade requests used in ZFS trim map based on ashift Date: Thu, 1 Nov 2012 23:15:56 -0000 This is a multi-part message in MIME format. ------=_NextPart_000_0AC6_01CDB886.D8C9BA80 Content-Type: text/plain; format=flowed; charset="Windows-1252"; reply-type=original Content-Transfer-Encoding: 7bit Updated patched which simplifies / optimises logic ================================================ This e.mail is private and confidential between Multiplay (UK) Ltd. and the person or entity to whom it is addressed. In the event of misdirection, the recipient is prohibited from using, copying, printing or otherwise disseminating it or any information contained in it. In the event of misdirection, illegible or incomplete transmission please telephone +44 845 868 1337 or return the E.mail to postmaster@multiplay.co.uk. 
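The patch attached below leans on the P2ROUNDUP macro (from the Solaris-derived sysmacros.h) to extend each request to a whole ashift-sized block. For readers unfamiliar with it, the arithmetic is simply "round up to the next multiple of a power of two"; a tiny standalone illustration in Python, not part of the patch:

    def p2roundup(x, align):
        # same arithmetic as P2ROUNDUP(x, align): round x up to the
        # next multiple of the power-of-two value 'align'
        return -(-x & -align)

    for size in (512, 4096, 6000):
        # with ashift=12 (4 KiB blocks): 512 -> 4096, 4096 -> 4096, 6000 -> 8192
        print size, '->', p2roundup(size, 1 << 12)

Rounding a 512-byte free request up to the full 4 KiB block is what lets adjacent small frees merge into one contiguous range in the trim map.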
------=_NextPart_000_0AC6_01CDB886.D8C9BA80 Content-Type: text/plain; format=flowed; name="zz-zfstrim-block-perf.txt"; reply-type=original Content-Transfer-Encoding: quoted-printable Content-Disposition: attachment; filename="zz-zfstrim-block-perf.txt" Upgrades trim free request sizes before inserting them into to free map,=0A= making range consolidation much more effective particularly for small=0A= deletes.=0A= =0A= This reduces memory used by the free map as well as reducing the number=0A= of bio requests down to geom required to process all deletes.=0A= =0A= In tests this achieved a factor of 10 reduction of trim ranges / geom=0A= call downs.=0A= --- sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c 2012-10-25 = 13:01:17.556311206 +0000=0A= +++ sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c 2012-10-25 = 13:48:39.280408543 +0000=0A= @@ -2325,7 +2325,7 @@=0A= =0A= /*=0A= * = =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=0A= - * Read and write to physical devices=0A= + * Read, write and delete to physical devices=0A= * = =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=0A= */=0A= static int=0A= --- sys/cddl/contrib/opensolaris/uts/common/fs/zfs/trim_map.c 2012-10-25 = 13:01:17.544310799 +0000=0A= +++ sys/cddl/contrib/opensolaris/uts/common/fs/zfs/trim_map.c 2012-10-25 = 14:41:49.391313700 +0000=0A= @@ -270,7 +270,15 @@=0A= return;=0A= =0A= mutex_enter(&tm->tm_lock);=0A= - trim_map_free_locked(tm, zio->io_offset, zio->io_offset + zio->io_size,=0A= + /*=0A= + * Upgrade size based on ashift which would be done by=0A= + * zio_vdev_io_start later anyway.=0A= + *=0A= + * This makes free range consolidation much more effective=0A= + * than it would otherwise be.=0A= + */=0A= + trim_map_free_locked(tm, zio->io_offset, zio->io_offset + =0A= + P2ROUNDUP(zio->io_size, 1ULL << vd->vdev_top->vdev_ashift),=0A= vd->vdev_spa->spa_syncing_txg);=0A= mutex_exit(&tm->tm_lock);=0A= }=0A= @@ -288,7 +301,14 @@=0A= return (B_TRUE);=0A= =0A= start =3D zio->io_offset;=0A= - end =3D start + zio->io_size;=0A= + /*=0A= + * Upgrade size based on ashift which would be done by=0A= + * zio_vdev_io_start later anyway.=0A= + *=0A= + * This ensures that entire blocks are invalidated by=0A= + * writes=0A= + */=0A= + end =3D start + P2ROUNDUP(zio->io_size, 1ULL << = vd->vdev_top->vdev_ashift);=0A= tsearch.ts_start =3D start;=0A= tsearch.ts_end =3D end;=0A= =0A= ------=_NextPart_000_0AC6_01CDB886.D8C9BA80-- From owner-freebsd-fs@FreeBSD.ORG Thu Nov 1 23:36:14 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id EB89318E; Thu, 1 Nov 2012 23:36:13 +0000 (UTC) (envelope-from prvs=1652892d21=killing@multiplay.co.uk) Received: from mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) by mx1.freebsd.org (Postfix) with ESMTP id 4C7078FC0A; Thu, 1 Nov 2012 23:36:12 +0000 (UTC) Received: from r2d2 ([188.220.16.49]) by mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) (MDaemon PRO v10.0.4) with ESMTP id md50000920148.msg; Thu, 01 Nov 2012 23:36:11 +0000 X-Spam-Processed: mail1.multiplay.co.uk, Thu, 01 Nov 2012 23:36:11 +0000 (not processed: message 
from valid local sender) X-MDRemoteIP: 188.220.16.49 X-Return-Path: prvs=1652892d21=killing@multiplay.co.uk X-Envelope-From: killing@multiplay.co.uk Message-ID: <5846347C20554E549FA512C1D59F6427@multiplay.co.uk> From: "Steven Hartland" To: , , "Peter Jeremy" References: <27087376D1C14132A3CC1B4016912F6D@multiplay.co.uk> <20121031212346.GL3309@server.rulingia.com> <9DB937FEA7634C4BAC49EF5823F93CA3@multiplay.co.uk> Subject: Re: mfi corrupts JBOD disks >2TB due to LBA overflow (was: ZFS corruption due to lack of space?) Date: Thu, 1 Nov 2012 23:36:13 -0000 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="iso-8859-1"; reply-type=response Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 01 Nov 2012 23:36:14 -0000 Ok after revisiting all the facts and spotting that the corruption only seemed to happen after my zpool was nearly full I came up with a wild idea, could the corruption be being caused by writes after 2TB? A few command lines latter and this was confirmed writes to the 3TB disks under mfi are wrapping at 2TB!!! Steps to prove:- 1. zero out block 1 on the disk dd if=/dev/zero bs=512 count=1 of=/dev/mfisyspd0 1+0 records in 1+0 records out 512 bytes transferred in 0.000728 secs (703171 bytes/sec) 2. confirm the first block is zeros dd if=/dev/mfisyspd0 bs=512 count=1 | hexdump -C 1+0 records in 1+0 records out 512 bytes transferred in 0.000250 secs (2047172 bytes/sec) 00000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| * 00000200 3. write 1 block random after the 2TB boundary dd if=/dev/random bs=512 count=1 of=/dev/mfisyspd0 oseek=4294967296 1+0 records in 1+0 records out 512 bytes transferred in 0.000717 secs (714162 bytes/sec) 4. first block of the disk now contains random data dd if=/dev/mfisyspd0 bs=512 count=8 | hexdump -C 00000000 9c d1 d2 1d 9f 2c fc 30 ab 09 7a f7 64 16 2a 58 |.....,.0..z.d.*X| 00000010 18 27 9d 1f ae 4d 27 53 1a 50 e7 c1 b1 3a 9b e4 |.'...M'S.P...:..| 00000020 c3 7c d0 25 83 e2 bd 85 33 f2 33 8e 71 55 70 7c |.|.%....3.3.qUp|| 00000030 8c 15 af 55 f6 88 8d 6e 40 1c f3 1a 5c e7 80 4b |...U...n@...\..K| ... Looking at the driver code the problem is that IO on syspd disks aka JBOD is always done using 10 byte CDB commands in mfi_build_syspdio. This is clearly a serious problem as it results in total corruption on disks > 2^32 sectors when sectors above 2^32 are accessed. The fix doesn't seem too hard and I think I've already got a basic version working, just needs more testing need. The bug also effects kernel mfi_dump_blocks but thats less likely to trigger due to how its used. Will create PR when I've finished testing and am happy with the patch, but wanted to let others know in the mean time given how serious the bug is. Regards Steve ================================================ This e.mail is private and confidential between Multiplay (UK) Ltd. and the person or entity to whom it is addressed. In the event of misdirection, the recipient is prohibited from using, copying, printing or otherwise disseminating it or any information contained in it. 
In the event of misdirection, illegible or incomplete transmission please telephone +44 845 868 1337 or return the E.mail to postmaster@multiplay.co.uk. From owner-freebsd-fs@FreeBSD.ORG Fri Nov 2 09:30:07 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 5D2F7DF4; Fri, 2 Nov 2012 09:30:07 +0000 (UTC) (envelope-from prvs=1653c05a59=killing@multiplay.co.uk) Received: from mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) by mx1.freebsd.org (Postfix) with ESMTP id B92F98FC08; Fri, 2 Nov 2012 09:30:06 +0000 (UTC) Received: from r2d2 ([188.220.16.49]) by mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) (MDaemon PRO v10.0.4) with ESMTP id md50000924482.msg; Fri, 02 Nov 2012 09:30:03 +0000 X-Spam-Processed: mail1.multiplay.co.uk, Fri, 02 Nov 2012 09:30:03 +0000 (not processed: message from valid local sender) X-MDRemoteIP: 188.220.16.49 X-Return-Path: prvs=1653c05a59=killing@multiplay.co.uk X-Envelope-From: killing@multiplay.co.uk Message-ID: From: "Steven Hartland" To: "Peter Jeremy" References: <27087376D1C14132A3CC1B4016912F6D@multiplay.co.uk> <20121031212346.GL3309@server.rulingia.com> <9DB937FEA7634C4BAC49EF5823F93CA3@multiplay.co.uk> <20121101224355.GS3309@server.rulingia.com> Subject: Re: ZFS corruption due to lack of space? Date: Fri, 2 Nov 2012 09:30:04 -0000 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="iso-8859-1"; reply-type=original Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157 Cc: freebsd-fs@freebsd.org, freebsd-stable@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 02 Nov 2012 09:30:07 -0000 ----- Original Message ----- From: "Peter Jeremy" On 2012-Nov-01 13:29:34 -0000, Steven Hartland wrote: >After destroying and re-creating the pool and then writing >zeros to the disk in multiple files without filling the fs >I've manged to reproduce the corruption again so we can >rule out full disk as the cause. > Many years ago, I wrote a simple utility that fills a raw disk with > a pseudo-random sequence and then verifies it. This sort of tool > can be useful for detecting the presence of silent data corruption > (or disk address wraparound). Sounds useful, got a link? >Suspects: HW issues (memory, cables, MB, disks), driver issue >(not used mfi on tbolt 2208 based cards before). > There has been a recent thread about various strange behaviours from > LSI controllers and it has been stated that (at least for the 2008) > the card firmware _must_ match the FreeBSD driver version. See > http://lists.freebsd.org/pipermail/freebsd-stable/2012-August/069205.html Yer thats for mps, not aware of a corrilation for mfi unfortunately :( Regards Steve ================================================ This e.mail is private and confidential between Multiplay (UK) Ltd. and the person or entity to whom it is addressed. In the event of misdirection, the recipient is prohibited from using, copying, printing or otherwise disseminating it or any information contained in it. In the event of misdirection, illegible or incomplete transmission please telephone +44 845 868 1337 or return the E.mail to postmaster@multiplay.co.uk. 
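No link to the fill-and-verify utility appears in this thread, but the idea is straightforward: write a deterministic pseudo-random stream across the whole raw device, then regenerate the same stream and compare it against what reads back, which catches both silent corruption and LBA wraparound. A rough, hypothetical sketch follows (this is not Peter's tool; it overwrites every byte of the device named on the command line, and being pure Python it is far too slow for real burn-in use):

    #!/usr/bin/env python
    import os, random, sys

    dev = sys.argv[1]        # e.g. /dev/mfisyspd0 -- ALL data on it is destroyed
    bs = 64 * 1024           # keep this a multiple of the sector size
    seed = 0x5eed

    def blocks():
        # deterministic for a fixed seed, so the verify pass regenerates
        # exactly the bytes the write pass produced
        r = random.Random(seed)
        while True:
            yield ''.join(chr(r.randrange(256)) for _ in xrange(bs))

    # write pass: fill the device until it reports end-of-device
    fd = os.open(dev, os.O_WRONLY)
    gen = blocks()
    written = 0
    try:
        while True:
            written += os.write(fd, gen.next())
    except OSError:          # ENOSPC once the end of the device is reached
        pass
    os.close(fd)

    # verify pass: regenerate the same stream and compare
    fd = os.open(dev, os.O_RDONLY)
    gen = blocks()
    checked = 0
    while checked < written:
        want = gen.next()
        got = os.read(fd, bs)
        if not got or got != want[:len(got)]:
            print 'MISMATCH at byte offset', checked
            sys.exit(1)
        checked += len(got)
    os.close(fd)
    print 'verified', checked, 'bytes'

If a controller wraps writes at some boundary, as described earlier in this thread for the 3TB syspd disks, the verify pass would fail very early on, at the first block that was silently overwritten near the start of the device.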
From owner-freebsd-fs@FreeBSD.ORG Fri Nov 2 10:33:00 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 8E79DE9F; Fri, 2 Nov 2012 10:33:00 +0000 (UTC) (envelope-from prvs=1653c05a59=killing@multiplay.co.uk) Received: from mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) by mx1.freebsd.org (Postfix) with ESMTP id AA8118FC0C; Fri, 2 Nov 2012 10:32:59 +0000 (UTC) Received: from r2d2 ([188.220.16.49]) by mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) (MDaemon PRO v10.0.4) with ESMTP id md50000924892.msg; Fri, 02 Nov 2012 10:32:56 +0000 X-Spam-Processed: mail1.multiplay.co.uk, Fri, 02 Nov 2012 10:32:56 +0000 (not processed: message from valid local sender) X-MDRemoteIP: 188.220.16.49 X-Return-Path: prvs=1653c05a59=killing@multiplay.co.uk X-Envelope-From: killing@multiplay.co.uk Message-ID: <1A4A7E2C4EDD4BAC8F21275BA647782A@multiplay.co.uk> From: "Steven Hartland" To: , , References: <27087376D1C14132A3CC1B4016912F6D@multiplay.co.uk> <20121031212346.GL3309@server.rulingia.com> <9DB937FEA7634C4BAC49EF5823F93CA3@multiplay.co.uk> <5846347C20554E549FA512C1D59F6427@multiplay.co.uk> Subject: Re: mfi corrupts JBOD disks >2TB due to LBA overflow (was: ZFS corruption due to lack of space?) Date: Fri, 2 Nov 2012 10:32:56 -0000 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="iso-8859-1"; reply-type=response Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 02 Nov 2012 10:33:00 -0000 Copying in freebsd-scsi@ for visability. ----- Original Message ----- From: "Steven Hartland" > Ok after revisiting all the facts and spotting that > the corruption only seemed to happen after my zpool > was nearly full I came up with a wild idea, could > the corruption be being caused by writes after 2TB? > > A few command lines latter and this was confirmed > writes to the 3TB disks under mfi are wrapping at > 2TB!!! > > Steps to prove:- > 1. zero out block 1 on the disk > dd if=/dev/zero bs=512 count=1 of=/dev/mfisyspd0 > 1+0 records in > 1+0 records out > 512 bytes transferred in 0.000728 secs (703171 bytes/sec) > > 2. confirm the first block is zeros > dd if=/dev/mfisyspd0 bs=512 count=1 | hexdump -C > 1+0 records in > 1+0 records out > 512 bytes transferred in 0.000250 secs (2047172 bytes/sec) > 00000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| > * > 00000200 > > 3. write 1 block random after the 2TB boundary > dd if=/dev/random bs=512 count=1 of=/dev/mfisyspd0 oseek=4294967296 > 1+0 records in > 1+0 records out > 512 bytes transferred in 0.000717 secs (714162 bytes/sec) > > 4. first block of the disk now contains random data > dd if=/dev/mfisyspd0 bs=512 count=8 | hexdump -C > 00000000 9c d1 d2 1d 9f 2c fc 30 ab 09 7a f7 64 16 2a 58 |.....,.0..z.d.*X| > 00000010 18 27 9d 1f ae 4d 27 53 1a 50 e7 c1 b1 3a 9b e4 |.'...M'S.P...:..| > 00000020 c3 7c d0 25 83 e2 bd 85 33 f2 33 8e 71 55 70 7c |.|.%....3.3.qUp|| > 00000030 8c 15 af 55 f6 88 8d 6e 40 1c f3 1a 5c e7 80 4b |...U...n@...\..K| > ... 
> > Looking at the driver code the problem is that IO on syspd > disks aka JBOD is always done using 10 byte CDB commands > in mfi_build_syspdio. This is clearly a serious problem as > it results in total corruption on disks > 2^32 sectors > when sectors above 2^32 are accessed. > > The fix doesn't seem too hard and I think I've already > got a basic version working, just needs more testing need. > > The bug also effects kernel mfi_dump_blocks but thats > less likely to trigger due to how its used. > > Will create PR when I've finished testing and am happy > with the patch, but wanted to let others know in the > mean time given how serious the bug is. PR which includes a patch which fixes this issue is:- http://www.freebsd.org/cgi/query-pr.cgi?pr=173291 Given its critical nature I would strongly advise this gets MFC'ed to all branches ASAP. While someone is looking at this would be good to get the following mfi related PR's I've submitted could also be committed as well ;-) Add deviceid to mfi disk startup output http://www.freebsd.org/cgi/query-pr.cgi?pr=173290 Improvements to mfi support including foreign disks / configs in mfiutil http://www.freebsd.org/cgi/query-pr.cgi?pr=172091 Regards Steve ================================================ This e.mail is private and confidential between Multiplay (UK) Ltd. and the person or entity to whom it is addressed. In the event of misdirection, the recipient is prohibited from using, copying, printing or otherwise disseminating it or any information contained in it. In the event of misdirection, illegible or incomplete transmission please telephone +44 845 868 1337 or return the E.mail to postmaster@multiplay.co.uk. From owner-freebsd-fs@FreeBSD.ORG Sat Nov 3 10:39:00 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 8F481A62 for ; Sat, 3 Nov 2012 10:39:00 +0000 (UTC) (envelope-from peter@rulingia.com) Received: from vps.rulingia.com (host-122-100-2-194.octopus.com.au [122.100.2.194]) by mx1.freebsd.org (Postfix) with ESMTP id 1F6D68FC08 for ; Sat, 3 Nov 2012 10:38:59 +0000 (UTC) Received: from server.rulingia.com (c220-239-241-202.belrs5.nsw.optusnet.com.au [220.239.241.202]) by vps.rulingia.com (8.14.5/8.14.5) with ESMTP id qA3Acu8A070477 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK); Sat, 3 Nov 2012 21:38:56 +1100 (EST) (envelope-from peter@rulingia.com) X-Bogosity: Ham, spamicity=0.000000 Received: from server.rulingia.com (localhost.rulingia.com [127.0.0.1]) by server.rulingia.com (8.14.5/8.14.5) with ESMTP id qA3AcooG096861 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Sat, 3 Nov 2012 21:38:50 +1100 (EST) (envelope-from peter@server.rulingia.com) Received: (from peter@localhost) by server.rulingia.com (8.14.5/8.14.5/Submit) id qA3Aco3H096860; Sat, 3 Nov 2012 21:38:50 +1100 (EST) (envelope-from peter) Date: Sat, 3 Nov 2012 21:38:49 +1100 From: Peter Jeremy To: Garrett Cooper Subject: Re: Inconsistent/potentially incorrect behavior with relative lookups via chdir(2) on UFS/ZFS Message-ID: <20121103103849.GD12996@server.rulingia.com> References: MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="tjCHc7DPkfUGtrlw" Content-Disposition: inline In-Reply-To: X-PGP-Key: http://www.rulingia.com/keys/peter.pgp User-Agent: Mutt/1.5.21 (2010-09-15) Cc: FreeBSD FS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: 
list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 03 Nov 2012 10:39:00 -0000 --tjCHc7DPkfUGtrlw Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On 2012-Nov-01 15:25:05 -0700, Garrett Cooper wrote: > Just doing some interop testing on UFS/ZFS to develop a baseline for >filesystem behavior, and I noticed some inconsistencies with the ENOENT >requirement in chdir(2) when dealing with relative ".." paths (dot-dot >lookups). In particular... > 1. I would have expected chdir('.') to have failed in UFS/ZFS with >ENOENT if '.' wasn't present, but it didn't. > 2. I would have expected chdir('..') to have failed in ZFS with ENOENT >if '..' wasn't present, but it didn't. > Sidenote: python doesn't do any special handling with os.chdir, per >Modules/posixmodule.c (I checked). > The full test I ran is included below. Whilst playing with the above, I've found some wierd timing issues with UFS+SU: $ mkdir -p ~/p/q/r;sync $ cd p/q/r $ rm -r ~/p; while date; do sleep 3 ; ls -al;done Sat 3 Nov 2012 21:29:19 EST total 2 drwxr-xr-x 0 peter jeremy 512 3 Nov 21:28 . drwxr-xr-x 0 peter jeremy 0 3 Nov 21:29 .. Sat 3 Nov 2012 21:29:22 EST total 2 drwxr-xr-x 0 peter jeremy 512 3 Nov 21:28 . drwxr-xr-x 0 peter jeremy 0 3 Nov 21:29 .. Sat 3 Nov 2012 21:29:25 EST total 2 drwxr-xr-x 0 peter jeremy 512 3 Nov 21:28 . drwxr-xr-x 0 peter jeremy 0 3 Nov 21:29 .. Sat 3 Nov 2012 21:29:28 EST total 2 drwxr-xr-x 0 peter jeremy 512 3 Nov 21:28 . drwxr-xr-x 0 peter jeremy 0 3 Nov 21:29 .. Sat 3 Nov 2012 21:29:31 EST total 2 drwxr-xr-x 0 peter jeremy 512 3 Nov 21:28 . drwxr-xr-x 0 peter jeremy 0 3 Nov 21:29 .. Sat 3 Nov 2012 21:29:34 EST total 2 drwxr-xr-x 0 peter jeremy 512 3 Nov 21:28 . drwxr-xr-x 0 peter jeremy 0 3 Nov 21:29 .. Sat 3 Nov 2012 21:29:37 EST total 2 drwxr-xr-x 0 peter jeremy 512 3 Nov 21:28 . drwxr-xr-x 0 peter jeremy 0 3 Nov 21:29 .. Sat 3 Nov 2012 21:29:40 EST total 2 drwxr-xr-x 0 peter jeremy 512 3 Nov 21:28 . drwxr-xr-x 0 peter jeremy 0 3 Nov 21:29 .. Sat 3 Nov 2012 21:29:43 EST total 2 drwxr-xr-x 0 peter jeremy 512 3 Nov 21:28 . drwxr-xr-x 0 peter jeremy 0 3 Nov 21:29 .. 
Sat 3 Nov 2012 21:29:46 EST total 0 Sat 3 Nov 2012 21:29:49 EST --=20 Peter Jeremy --tjCHc7DPkfUGtrlw Content-Type: application/pgp-signature -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.19 (FreeBSD) iEYEARECAAYFAlCU9DkACgkQ/opHv/APuIeaLwCgkB8JgG9tSfydF3USTldqC2lF qMgAnidEXpy1og79YMoP/EredHHaDHfI =pnFO -----END PGP SIGNATURE----- --tjCHc7DPkfUGtrlw-- From owner-freebsd-fs@FreeBSD.ORG Sat Nov 3 18:17:43 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 30135BDC for ; Sat, 3 Nov 2012 18:17:43 +0000 (UTC) (envelope-from fox@cyberfoxfire.com) Received: from mail-oa0-f54.google.com (mail-oa0-f54.google.com [209.85.219.54]) by mx1.freebsd.org (Postfix) with ESMTP id DCC418FC08 for ; Sat, 3 Nov 2012 18:17:42 +0000 (UTC) Received: by mail-oa0-f54.google.com with SMTP id n9so5787326oag.13 for ; Sat, 03 Nov 2012 11:17:36 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20120113; h=message-id:date:from:user-agent:mime-version:to:subject :content-type:content-transfer-encoding:x-gm-message-state; bh=wg6y5uNPMDnzPg+89+Ig91sRd3uCtYfIMl61lzdxwaQ=; b=GAsfo9sVjYDi/b71C1kpJHfr2EymedrVbQOZLFmkP1Tx14LcRgBdVppynaFR++L85r Qsuc6aQZxNyIlvxyAINWRQV9MFXuMnPr/QLg0nTUlfTAY36WTDd+hXU9L5Pz8MvXHvXW /9u5VbaD2tDLwkU6kMKFWotDm54FhYwbN5SL/Ojbhcg8RqQ2r6xTSMoxlTkPgwhraUmO 4c4+wXxz0RVwAHDBm0fKCQeKs7Mp4afGInpkUMd7hrbHLOoZHtO1h4nUaHxf0yNEc+6g +cHYJnY96qyDyNverF3TwzvfwPhNK9QN9G4IcDMxTM1NRxFRGogbq7HTF9XxTZbNRaxW OUXg== Received: by 10.60.169.243 with SMTP id ah19mr4173015oec.127.1351966656370; Sat, 03 Nov 2012 11:17:36 -0700 (PDT) Received: from [10.99.99.9] ([67.11.63.242]) by mx.google.com with ESMTPS id hc1sm12648483obc.7.2012.11.03.11.17.34 (version=SSLv3 cipher=OTHER); Sat, 03 Nov 2012 11:17:35 -0700 (PDT) Message-ID: <50955FB1.2070800@cyberfoxfire.com> Date: Sat, 03 Nov 2012 13:17:21 -0500 From: Fox F User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:16.0) Gecko/20121026 Thunderbird/16.0.2 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: Convert standalone zpool to RAID1 with data in place Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Gm-Message-State: ALoCoQlQA5vyrI1zArzh0nxFUOjZ5hengUGAQDNBitP7e+3knVWL5DHOWOJzmebQropC7STrgSz2 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 03 Nov 2012 18:17:43 -0000 Hello, My intention is to have a zpool with two drives (1TB and 2TB) striped. I suppose I'd create it as such: zpool create zp_test disk1 disk2 I would then create a zfs filesystem on that zpool and add data to it. Then, I would want to mirror this data on another identical striped vdev. The question is, what is the order of operations for creating the second striped vdev and adding it as a mirror to the first one, and how do I do that in such a way that the data on disk1/2 gets mirrored to the new addition? I hope I am making sense. 
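One common way to end up with that layout, sketched here with placeholder device names (disk3 and disk4, each at least as large as the member it pairs with): ZFS mirroring is arranged per top-level vdev rather than per pool, so instead of mirroring the second stripe against the first as a unit, a new disk is attached to each existing single-disk vdev. The attach converts that vdev into a two-way mirror, and the resilver copies the existing data in place automatically:

# zpool create zp_test disk1 disk2    (the original two-disk stripe)
# zpool attach zp_test disk1 disk3    (disk3 becomes a mirror of the 1TB member)
# zpool attach zp_test disk2 disk4    (disk4 becomes a mirror of the 2TB member)
# zpool status zp_test                (wait for both resilvers to complete)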
-F From owner-freebsd-fs@FreeBSD.ORG Sat Nov 3 20:56:06 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 3771B417 for ; Sat, 3 Nov 2012 20:56:06 +0000 (UTC) (envelope-from yanegomi@gmail.com) Received: from mail-ob0-f182.google.com (mail-ob0-f182.google.com [209.85.214.182]) by mx1.freebsd.org (Postfix) with ESMTP id DE8D78FC16 for ; Sat, 3 Nov 2012 20:56:05 +0000 (UTC) Received: by mail-ob0-f182.google.com with SMTP id wc20so5783046obb.13 for ; Sat, 03 Nov 2012 13:56:05 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=ofxwR4Xsm4tzscJvxTSDymE3G6PnLVDwdc7hl5SKQeo=; b=FHShCZ6ti8nDFX0skIwsDHX6fcESG/E03Kou6ywIXueNEnb5KrJ1cETbkSZ9SmTqYE Rir8X3RcVM9NzV5YortzpJFJjw+jbMRuRYMajWI+CUVEhm/s4vdw7i6NX2/7pC7EpU/I seIxW5XLZDKLFqjmYu0WKg/mtxzlh9l6OVPZwyCCHX8wM6qLfAI5Ecw1JzuwgTVU9jVo KpUTvxEv7pD5borsSL7pzgE7o4U2O/dcjzZD3iagoW4F+e8N2bKV0xQjdvvb6dSIWPHg 55zNsXue4j0sJx+0K/TXT3Zkg42HsdGkJptgVW35PYundbUJHfGJLy/+n8FxJHOrMn1F NXOA== MIME-Version: 1.0 Received: by 10.60.169.170 with SMTP id af10mr4487622oec.17.1351976164997; Sat, 03 Nov 2012 13:56:04 -0700 (PDT) Received: by 10.76.143.33 with HTTP; Sat, 3 Nov 2012 13:56:04 -0700 (PDT) In-Reply-To: <20121103103849.GD12996@server.rulingia.com> References: <20121103103849.GD12996@server.rulingia.com> Date: Sat, 3 Nov 2012 13:56:04 -0700 Message-ID: Subject: Re: Inconsistent/potentially incorrect behavior with relative lookups via chdir(2) on UFS/ZFS From: Garrett Cooper To: Peter Jeremy Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.14 Cc: FreeBSD FS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 03 Nov 2012 20:56:06 -0000 On Sat, Nov 3, 2012 at 3:38 AM, Peter Jeremy wrote: > On 2012-Nov-01 15:25:05 -0700, Garrett Cooper wrote: > > Just doing some interop testing on UFS/ZFS to develop a baseline for > >filesystem behavior, and I noticed some inconsistencies with the ENOENT > >requirement in chdir(2) when dealing with relative ".." paths (dot-dot > >lookups). In particular... > > 1. I would have expected chdir('.') to have failed in UFS/ZFS with > >ENOENT if '.' wasn't present, but it didn't. > > 2. I would have expected chdir('..') to have failed in ZFS with ENOENT > >if '..' wasn't present, but it didn't. > > Sidenote: python doesn't do any special handling with os.chdir, per > >Modules/posixmodule.c (I checked). > > The full test I ran is included below. > > Whilst playing with the above, I've found some wierd timing issues > with UFS+SU: > > $ mkdir -p ~/p/q/r;sync > $ cd p/q/r > $ rm -r ~/p; while date; do sleep 3 ; ls -al;done > Sat 3 Nov 2012 21:29:19 EST > total 2 > drwxr-xr-x 0 peter jeremy 512 3 Nov 21:28 . > drwxr-xr-x 0 peter jeremy 0 3 Nov 21:29 .. > Sat 3 Nov 2012 21:29:22 EST > total 2 > drwxr-xr-x 0 peter jeremy 512 3 Nov 21:28 . > drwxr-xr-x 0 peter jeremy 0 3 Nov 21:29 .. > Sat 3 Nov 2012 21:29:25 EST > total 2 > drwxr-xr-x 0 peter jeremy 512 3 Nov 21:28 . > drwxr-xr-x 0 peter jeremy 0 3 Nov 21:29 .. > Sat 3 Nov 2012 21:29:28 EST > total 2 > drwxr-xr-x 0 peter jeremy 512 3 Nov 21:28 . > drwxr-xr-x 0 peter jeremy 0 3 Nov 21:29 .. > Sat 3 Nov 2012 21:29:31 EST > total 2 > drwxr-xr-x 0 peter jeremy 512 3 Nov 21:28 . 
> drwxr-xr-x 0 peter jeremy 0 3 Nov 21:29 .. > Sat 3 Nov 2012 21:29:34 EST > total 2 > drwxr-xr-x 0 peter jeremy 512 3 Nov 21:28 . > drwxr-xr-x 0 peter jeremy 0 3 Nov 21:29 .. > Sat 3 Nov 2012 21:29:37 EST > total 2 > drwxr-xr-x 0 peter jeremy 512 3 Nov 21:28 . > drwxr-xr-x 0 peter jeremy 0 3 Nov 21:29 .. > Sat 3 Nov 2012 21:29:40 EST > total 2 > drwxr-xr-x 0 peter jeremy 512 3 Nov 21:28 . > drwxr-xr-x 0 peter jeremy 0 3 Nov 21:29 .. > Sat 3 Nov 2012 21:29:43 EST > total 2 > drwxr-xr-x 0 peter jeremy 512 3 Nov 21:28 . > drwxr-xr-x 0 peter jeremy 0 3 Nov 21:29 .. > Sat 3 Nov 2012 21:29:46 EST > total 0 > Sat 3 Nov 2012 21:29:49 EST > Interesting; with SU-J or SUJ (my testing was with the default: SUJ)? Thanks! -Garrett
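For anyone repeating the test above, a quick way to tell whether a given UFS filesystem is running plain soft updates or soft updates with journaling (SU+J) is to look at the superblock settings; the device name below is a placeholder:

# tunefs -p /dev/ada0p2        (summarises the current tunables, including whether
                                soft updates and soft update journaling are enabled)
# dumpfs /dev/ada0p2 | head    (the superblock flags field likewise records whether
                                soft updates and journaling are in effect)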