From owner-freebsd-fs@freebsd.org Sun Feb 14 08:41:34 2016
Date: Sun, 14 Feb 2016 16:40:22 +0800 (AWST)
From: David Adam <zanchey@ucc.gu.uwa.edu.au>
To: Tom Curry
Cc: FreeBSD Filesystems
Subject: Re: Poor ZFS+NFSv3 read/write performance and panic
On Mon, 8 Feb 2016, Tom Curry wrote:
> On Sun, Feb 7, 2016 at 11:58 AM, David Adam wrote:
> >
> > Just wondering if anyone has any idea how to identify which devices are
> > implicated in ZFS' vdev_deadman(). I have updated the firmware on the
> > mps(4) card that has our disks attached but that hasn't helped.
>
> I too ran into this problem and spent quite some time troubleshooting
> hardware. For me it turned out to be not hardware at all, but software:
> specifically the ZFS ARC. Looking at your stack I see some ARC reclaim up
> top, so it's possible you're running into the same issue. There is a
> monster of a PR that details this here:
> https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=187594
>
> If you would like to test this theory, the fastest way is to limit the
> ARC by adding the following to /boot/loader.conf and rebooting:
>
> vfs.zfs.arc_max="24G"
>
> Replace 24G with what makes sense for your system; aim for 3/4 of total
> memory for starters. If this solves the problem there are more scientific
> routes to a permanent fix: one would be applying the patch in the PR
> above, another a more finely tuned arc_max value.

Thanks Tom - this certainly did sound promising, but setting the ARC to
11G of our 16G of RAM didn't help. `zfs-stats` confirmed that the ARC was
the expected size and that there was still 461 MB of RAM free.

We'll keep looking!
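[For reference, the knob discussed above is a boot-time tunable, not a runtime sysctl write; a minimal sketch of the setting tried here (the 11G value from this thread, not a recommendation) as it would appear in /boot/loader.conf:]

```
# /boot/loader.conf
# Cap the ZFS ARC. 11G on this 16G box; pick a value for your own system.
vfs.zfs.arc_max="11G"
```

After a reboot, `sysctl vfs.zfs.arc_max` should report the cap in bytes, and `sysctl kstat.zfs.misc.arcstats.size` (or zfs-stats) the ARC's actual current size.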
David Adam
zanchey@ucc.gu.uwa.edu.au

From owner-freebsd-fs@freebsd.org Sun Feb 14 12:59:50 2016
Date: Sun, 14 Feb 2016 19:59:30 +0700
From: Tinker <tinkr@openmailbox.org>
To: freebsd-stable@freebsd.org, freebsd-scsi@freebsd.org, freebsd-fs@freebsd.org
Subject: MRSAS driver/LSI MegaRaid 92XX-93XX admin question: When one of the Raid's physical drives break, how is it reported in the logs?
( ** Extremely sorry for crossposting! It was unclear where this RAID adapter question belongs; please clarify and I'll keep to one single list. Posted to all of stable@, scsi@ and fs@. )

Hi,

When you run one of the mrsas cards such as an Avago (LSI) MegaRaid 9361 or 9266, and one of the physical RAID drives or CacheCade drives eventually breaks, how is this reported to the FreeBSD host's dmesg or syslog?

I don't have the hardware in place to check this myself. On the other hand, someone among you may have extremely deep experience, in particular because this card is so common, which is why I ask here.

I understand that as long as at least one underlying copy of the data is accessible, the RAID card will direct all access to that copy, so when it comes to keeping I/O working without interruption, the LSI card does a great job.

At some point an SSD or HDD will break down, either completely (it won't connect, and its SMART interface says the drive is consumed) or more discreetly, by taking tons of time for its operations. My best understanding is that the RAID card will automatically take those drives out of use, transparently.

Now to the main point: as admin, it's great to be informed when this happens, i.e. when an underlying physical RAID disk or a CacheCade disk is taken out of use or otherwise malfunctions. Does the mrsas driver output this into dmesg or syslog somehow?
Reading https://svnweb.freebsd.org/base/stable/10/sys/dev/mrsas/mrsas.c?revision=284267&view=markup , the card seems to have an "event log" that the driver downloads from the card in plain text (??), but I don't understand from the source code where that information is channeled. And of course I can't see what that event log would contain in those cases.

(mfiutil has a "show events" argument, but mfiutil is only for the related "mfi" driver, which does not work for the 92XX and 93XX cards. Still, I'd be interested to know how it reports a broken drive.)

http://www.cisco.com/c/dam/en/us/td/docs/unified_computing/ucs/3rd-party/lsi/mrsas/userguide/LSI_MR_SAS_SW_UG.pdf on page 305, that is section "A.2 Event Messages" - I don't know which LSI chip this document covers, but it does not clearly list an event message for when an individual underlying disk has broken; I don't even see any event for when a hot spare is taken into use!

You who have the experience, can you clarify please?
Thanks :D

Tinker

From owner-freebsd-fs@freebsd.org Sun Feb 14 15:13:48 2016
Date: Sun, 14 Feb 2016 22:13:31 +0700
From: Tinker <tinkr@openmailbox.org>
To: freebsd-stable@freebsd.org, freebsd-scsi@freebsd.org, freebsd-fs@freebsd.org
Subject: Re: MRSAS driver/LSI MegaRaid 92XX-93XX admin question: When one of the Raid's physical drives break, how is it reported in the logs?
(Will send any followup from now on only to freebsd-scsi@ .)

Did some additional research and found that a disk failure is indeed reported in mrsas's "event log". So my final question is: how do you extract it into userland (in the absence of an "mfiutil", such as the MFI driver has)? Details below. Thanks.

On 2016-02-14 19:59, Tinker wrote:
[...]
> http://www.cisco.com/c/dam/en/us/td/docs/unified_computing/ucs/3rd-party/lsi/mrsas/userguide/LSI_MR_SAS_SW_UG.pdf
> on page 305, that is section "A.2 Event Messages" - I don't know for
> what LSI chip this document is, but, it does not list particular event
> message very clearly for when an individual underlying disk would have
> broken, I don't even see any event for when a hot spare would be taken
> in use!

Wait - this page:
https://www.schirmacher.de/display/Linux/Replace+failed+disk+in+MegaRAID+array
(and also http://serverfault.com/questions/485147/drive-is-failing-but-lsi-megaraid-controller-does-not-detect-it )
gives an example of how the host system learns about broken disks:

Code: 0x00000051 .. Event Description: State change on VD 00/1 from OPTIMAL(3) to DEGRADED(2)
Code: 0x00000072 .. Event Description: State change on PD 05(e0xfc/s0) from ONLINE(18) to FAILED(11)

(An unclean disk failure seems to be shown as:)

Code: 0x00000071 .. Event Description: Unexpected sense: PD 05(e0xfc/s0) Path 4433221103000000, CDB: 2e 00 3a 38 1b c7 00 00 01 00, Sense: b/00/00

And this version of the LSI documentation
http://hwraid.le-vert.net/raw-attachment/wiki/LSIMegaRAIDSAS/megacli_user_guide.pdf
gives a clearer definition of the physical and virtual drive states in "1.4.16 Physical Drive States" and "1.4.17 Virtual Disk States" on pages 1-11 to 1-12.

So, as we can see, a physical drive breaking would:

 * mark the physical drive "FAILED"
 * mark the Virtual Drive (that is, the logical exported drive) "DEGRADED" (from "OPTIMAL")

So it was indeed the card's "event log" that contains this info. The last question, then, is only *where* FreeBSD's mrsas driver sends its event log.

From owner-freebsd-fs@freebsd.org Sun Feb 14 15:26:26 2016
Date: Sun, 14 Feb 2016 16:26:26 +0100
From: Kurt Jaeger <lists@opsec.eu>
To: Tinker
Cc: freebsd-stable@freebsd.org, freebsd-scsi@freebsd.org, freebsd-fs@freebsd.org
Subject: Re: MRSAS driver/LSI MegaRaid 92XX-93XX admin question: When one of the Raid's physical drives break, how is it reported in the logs?
Hi!

> So my final question then is, how do you extract it into userland (in
> the absence of an "mfiutil" as the MFI driver has)?

They renamed the util to StorCLI; it looks very similar to the old tw_cli, and can be downloaded from
http://www.avagotech.com/products/server-storage/raid-controllers/megaraid-sas-9266-8i#downloads
as MR_SAS_StorCLI_1-16-06.zip. Unpacking it yields storcli_all_os.zip, unpacking that yields storcli_all_os/FreeBSD/storcli64.tar, and finally unpacking that gives:

$ file storcli64
storcli64: ELF 64-bit LSB executable, x86-64, version 1 (FreeBSD), statically linked, for FreeBSD 7.4, stripped

which at least looks like it might work with the MRSAS controller.

--
pi@opsec.eu            +49 171 3101372                    4 years to go !
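[To make Kurt's pointer concrete: a sketch of the kind of invocations the StorCLI user guide documents for inspecting drive state and the controller event log. Syntax taken from the guide; not verified against this particular card or the mrsas driver.]

```
# Controller, virtual-drive and physical-drive summary
./storcli64 /c0 show

# State of every physical drive behind controller 0 (Online/Failed/...)
./storcli64 /c0 /eall /sall show

# Dump the controller's event log to a file
./storcli64 /c0 show events file=events.txt
```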
From owner-freebsd-fs@freebsd.org Sun Feb 14 21:00:07 2016
Date: Sun, 14 Feb 2016 21:00:07 +0000
From: bugzilla-noreply@FreeBSD.org
To: freebsd-fs@FreeBSD.org
Subject: Problem reports for freebsd-fs@FreeBSD.org that need special attention

To view an individual PR, use:
  https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=(Bug Id).

The following is a listing of current problems submitted by FreeBSD users, which need special attention. These represent problem reports covering all versions including experimental development code and obsolete releases.
Status      | Bug Id | Description
------------+--------+---------------------------------------------------
New         | 203492 | mount_unionfs -o below causes panic
Open        | 136470 | [nfs] Cannot mount / in read-only, over NFS
Open        | 139651 | [nfs] mount(8): read-only remount of NFS volume d
Open        | 144447 | [zfs] sharenfs fsunshare() & fsshare_main() non f

4 problems total for which you should take action.

From owner-freebsd-fs@freebsd.org Mon Feb 15 01:20:01 2016
Date: Sun, 14 Feb 2016 20:20:00 -0500
From: Tom Curry <thomasrcurry@gmail.com>
To: David Adam
Cc: FreeBSD Filesystems
Subject: Re: Poor ZFS+NFSv3 read/write performance and panic

On Sun, Feb 14, 2016 at 3:40 AM, David Adam wrote:
> On Mon, 8 Feb 2016, Tom Curry wrote:
> > On Sun, Feb 7, 2016 at 11:58 AM, David Adam wrote:
> > >
> > > Just wondering if anyone has any idea how to identify which devices are
> > > implicated in ZFS' vdev_deadman(). I have updated the firmware on the
> > > mps(4) card that has our disks attached but that hasn't helped.
> >
> > I too ran into this problem and spent quite some time troubleshooting
> > hardware. For me it turned out to be not hardware at all, but software:
> > specifically the ZFS ARC. Looking at your stack I see some ARC reclaim up
> > top, so it's possible you're running into the same issue. There is a
> > monster of a PR that details this here:
> > https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=187594
> >
> > If you would like to test this theory, the fastest way is to limit the
> > ARC by adding the following to /boot/loader.conf and rebooting:
> >
> > vfs.zfs.arc_max="24G"
> >
> > Replace 24G with what makes sense for your system; aim for 3/4 of total
> > memory for starters. If this solves the problem there are more scientific
> > routes to a permanent fix: one would be applying the patch in the PR
> > above, another a more finely tuned arc_max value.
>
> Thanks Tom - this certainly did sound promising, but setting the ARC to
> 11G of our 16G of RAM didn't help. `zfs-stats` confirmed that the ARC was
> the expected size and that there was still 461 MB of RAM free.
>
> We'll keep looking!
>
> David Adam
> zanchey@ucc.gu.uwa.edu.au

Did the system still panic, or did it merely degrade in performance? When performance heads south, are you swapping?

From owner-freebsd-fs@freebsd.org Mon Feb 15 10:19:09 2016
Date: Mon, 15 Feb 2016 21:18:59 +1100
From: Andrew Reilly <areilly@bigpond.net.au>
To: freebsd-fs@freebsd.org
Subject: Hours of tiny transfers at the end of a ZFS resilver?

Hi Filesystem experts,

I have a question about the nature of ZFS and the resilvering that occurs after a drive replacement in a raidz array.

I have a fairly simple home file server that (by way of gradually replaced pieces and upgrades) has effectively been doing great service since, well, forever, but its last re-build replaced its main UFS file systems with a four-drive ZFS raidz pool. It's been going very nicely over the years, and now it's full, so I've nearly finished replacing its 1TB drives with new 4TB ones. I'm doing that the slow way, replacing one at a time and resilvering before going on to the next, because that only requires a minute or two of down-time for each drive swap. Replacing the whole array and restoring from backup would have had the system off-line for many hours (I guess).
Now, one thing that I didn't realise at the start of this process was that the zpool has the original 512B sector size baked in at a fairly low level, so it is using some sort of work-around for the fact that the new drives actually have 4096B sectors (although they lie about that in smartctl -i queries). The four new drives appear to smartctl as:

Model Family:     HGST Deskstar NAS
Device Model:     HGST HDN724040ALE640
Serial Number:    PK1334PEHYSZ6S
LU WWN Device Id: 5 000cca 250dba043
Firmware Version: MJAOA5E0
User Capacity:    4,000,787,030,016 bytes [4.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    7200 rpm
Form Factor:      3.5 inches
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ATA8-ACS T13/1699-D revision 4
SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Mon Feb 15 20:57:30 2016 AEDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

They show up in zpool status as:

  pool: tank
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Mon Feb 15 07:19:45 2016
        3.12T scanned out of 3.23T at 67.4M/s, 0h29m to go
        798G resilvered, 96.48% done
config:

        NAME                        STATE     READ WRITE CKSUM
        tank                        DEGRADED     0     0     0
          raidz1-0                  DEGRADED     0     0     0
            ada0p1                  ONLINE       0     0     0  block size: 512B configured, 4096B native
            ada1p1                  ONLINE       0     0     0  block size: 512B configured, 4096B native
            ada3p1                  ONLINE       0     0     0  block size: 512B configured, 4096B native
            replacing-3             DEGRADED     0     0     0
              17520966596084542745  UNAVAIL      0     0     0  was /dev/ada4p1/old
              ada4p1                ONLINE       0     0     0  block size: 512B configured, 4096B native  (resilvering)

errors: No known data errors

While clearly sub-optimal, I expect that the performance will still be good enough for my purposes: I can build a new, properly aligned file system when I do the next re-build.
The odd thing is that after charging through the resilver using large blocks (around 64k according to systat), when it gets near the end, as it is now, the process drags on for hours with millions of tiny, sub-2K transfers. Here are the CPU and disk sections of the systat -vmstat output right now (load average 0.28; CPU 2.8% sys, 0.5% intr, 3.9% user, 92.9% idle):

Disks   da0  ada0  ada1  ada2  ada3  ada4
KB/t   0.00  1.41  1.46  0.00  1.39  1.47
tps       0   173   138     0   176   151
MB/s   0.00  0.24  0.20  0.00  0.24  0.22
%busy     0    18    15     0    17    99

So there's a problem with the zpool status output: it's predicting half an hour to go based on the averaged 67M/s over the whole drive, not the <2MB/s that it's actually doing and will probably continue to do for several hours, if tonight goes the same way as last night. Last night zpool status said "0h05m to go" for more than three hours, before I gave up waiting to start the next drive.

Is this expected behaviour, or something bad and peculiar about my system? I'm confused about how ZFS really works, given this state.
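[For what it's worth, the misleading estimate is easy to reproduce from the numbers above; a quick sketch, assuming zpool status simply divides the bytes remaining by the whole-resilver average rate, with binary units:]

```python
def eta_seconds(scanned_tib, total_tib, rate_mib_s):
    """ETA the way zpool status appears to compute it:
    bytes remaining divided by the average rate so far."""
    remaining_mib = (total_tib - scanned_tib) * 1024 * 1024  # TiB -> MiB
    return remaining_mib / rate_mib_s

# "3.12T scanned out of 3.23T at 67.4M/s" -> zpool's "0h29m to go"
print(eta_seconds(3.12, 3.23, 67.4) / 60)   # ~28.5 minutes

# ...but at the ~2 MB/s the disks are actually doing in this tail phase,
# the realistic figure is more like 16 hours:
print(eta_seconds(3.12, 3.23, 2.0) / 3600)  # ~16 hours
```

The estimate uses the average rate over the entire resilver, so once the drive drops to tiny transfers the displayed ETA stops tracking reality entirely.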
I had thought that the zpool layer did parity calculation in big 256k-ish stripes across the drives, and that the zfs filesystem layer coped with that large block size because it had lots of caching and wrote everything in log-structured fashion. Clearly that mental model must be incorrect, because then it would only ever be doing large transfers. Is there anywhere I could go to find a nice write-up of how ZFS is working?

Cheers,

--
Andrew

From owner-freebsd-fs@freebsd.org Mon Feb 15 14:29:13 2016
Reply-To: lev@FreeBSD.org
From: Lev Serebryakov <lev@FreeBSD.org>
To: freebsd-fs@freebsd.org, freebsd-stable@freebsd.org
Subject: ZFS ARC vs Inactive memory on 10-STABLE: is it Ok?
Date: Mon, 15 Feb 2016 17:29:08 +0300

I have a mostly-storage server with 8GB of physical RAM and a 9TB (5x2TB HDD) raidz ZFS pool (so, about 6.5TB usable space).

ARC is limited to 3GB by vfs.zfs.arc_max.

This server runs Samba (of course), the CrashPlan backup client (Linux Java!), and a torrent client (transmission-daemon).

And I'm noticing this regularly ("screenshot" of top(1)):

Mem: 1712M Active, 3965M Inact, 2066M Wired, 137M Cache, 822M Buf, 4688K Free
ARC: 421M Total, 132M MFU, 54M MRU, 1040K Anon, 7900K Header, 227M Other
Swap: 4096M Total, 248M Used, 3848M Free, 6% Inuse

As you can see, there are almost 4G of Inactive memory and only 421M of ARC!

Is this Ok? Why has Inactive memory (non-dirty buffers?) pressed the ARC out of memory?
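[A quick cross-check of that top(1) line; a sketch assuming, as FreeBSD's top does, that Buf overlaps Wired and so shouldn't be counted separately:]

```python
# Memory classes from the top(1) output above, in MiB
mem_mib = {"Active": 1712, "Inact": 3965, "Wired": 2066, "Cache": 137}
free_mib = 4688 / 1024.0  # "4688K Free"

total_gib = (sum(mem_mib.values()) + free_mib) / 1024.0
print(round(total_gib, 2))  # ~7.7 -- roughly the 8 GB of physical RAM
```

So nothing is unaccounted for: the classes sum to roughly the machine's physical RAM, and Inact alone holds about half of it, which is memory the capped ARC is competing with rather than using.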
--
// Lev Serebryakov

From owner-freebsd-fs@freebsd.org Mon Feb 15 14:55:42 2016
Date: Mon, 15 Feb 2016 09:55:41 -0500
Subject: Re: ZFS ARC vs Inactive memory on 10-STABLE: is it Ok?
From: Mark Saad To: lev@freebsd.org Cc: freebsd-fs@freebsd.org, FreeBSD-Stable ML Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.20 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 15 Feb 2016 14:55:42 -0000 On Mon, Feb 15, 2016 at 9:29 AM, Lev Serebryakov wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA512 > > > I have a mostly-storage server with 8GB of physical RAM and a 9TB (5x2TB > HDD) raidz ZFS pool (so, about 6.5TB usable space). > > ARC is limited to 3GB by vfs.zfs.arc_max. > > This server runs Samba (of course), CrashPlan backup client (Linux > Java!), and torrent client (transmission-daemon). > > Wow someone else as crazy as I was. :) > And I'm noticing this regularly ("screenshot" of top(1)): > > Mem: 1712M Active, 3965M Inact, 2066M Wired, 137M Cache, 822M Buf, > 4688K Free > ARC: 421M Total, 132M MFU, 54M MRU, 1040K Anon, 7900K Header, 227M Other > Swap: 4096M Total, 248M Used, 3848M Free, 6% Inuse > > As you can see, there are almost 4G of Inactive memory and only 421M > of ARC! > > Is this OK? Why is Inactive memory (non-dirty buffers?) pushing the ARC > out of memory? > > Lev So I ran a similar setup on 10.1-RELEASE with 40TB in a RAID 1+0-like zpool. My top looked similar, but it's been a while, and I had 24G of RAM with a 12G ARC max. I always wondered what was going on here, but I suspected it was due to an interaction of Java and ARC eviction. The CrashPlan app is terrible and would "start doing something new" and look hung. Disk I/O went to hell, etc. Then things would settle down and start chugging away. Keep in mind CrashPlan would take about a month to back up 2TB of changes on this thing. I eventually convinced management to move to an automated tape library and a normal backup client (NetBackup) for the backups.
Also I abandoned this project about 18 months ago too . - -- > // Lev Serebryakov > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v2 > > iQJ8BAEBCgBmBQJWweC0XxSAAAAAAC4AKGlzc3Vlci1mcHJAbm90YXRpb25zLm9w > ZW5wZ3AuZmlmdGhob3JzZW1hbi5uZXRGOTZEMUNBMEI1RjQzMThCNjc0QjMzMEFF > QUIwM0M1OEJGREM0NzhGAAoJEOqwPFi/3EePRTYP/RHajhE+EuvX3fCppShb/BSg > vpJZ8F1jeInIOVXe/XLw07jht04uquTXHsMvw6F0J+WIIqsCld53q1bfj4CWAnl6 > 4TjULTZYUWANv3wK6KxItEN5eMmDEPOW6Eqls57OSCFcZA/32hyf/Y15Nec0L6JD > sd8wpqUvQs0zb//frbUpjIRcfoVSMO2ip4doGPDtBv9IcE/kDz78IcmU9By2deXU > IJE8Xlg2hDY+f/NhTR2sCuwtCSvpL9/mBztffYqsKQsAm8oIn0Sz9mNdjVzUR+rN > lF4GoxcWf6c3HEM/LF4+dgOdb058YwO4amyUI7GoBSFBQq3OlJzvomGeOi2vPAvC > BkWxOWOcWsmEwfk1b22k00yNAjvaXQsCx6r2L/6vyrAtoQ0moXF4Rks8+MLFRUTu > FFke93UUPRQPXBdrBtlnFpXX6jpmlEm7g9pazarGc4hteYOKpvHajFvNvAB7RswI > NQL70+QfLBgtaA5683scCuURNptStf/RfvhwjW/o5DPNLv+NHnT+nPk64MTDuaZD > 4z9Kcj088KjB++xt9c6BXuCS4zlkyUhas5cNGG+SxupZajtIuaCBTeUv0QwjnDH5 > Pnu44Xe4MCvpDSt9odICdzytxO6yzwL7mLj70o2SsPs2ijN1w/fOlNqS46bekmJ/ > MtvVwObCRnoDg3aMRUL0 > =In6V > -----END PGP SIGNATURE----- > _______________________________________________ > freebsd-fs@freebsd.org mailing list > https://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > -- mark saad | nonesuch@longcount.org From owner-freebsd-fs@freebsd.org Mon Feb 15 15:05:50 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id D7117AA9590 for ; Mon, 15 Feb 2016 15:05:50 +0000 (UTC) (envelope-from paul@kraus-haus.org) Received: from mail-yw0-x22a.google.com (mail-yw0-x22a.google.com [IPv6:2607:f8b0:4002:c05::22a]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 9B918125B for ; Mon, 15 Feb 2016 
15:05:50 +0000 (UTC) (envelope-from paul@kraus-haus.org) Received: by mail-yw0-x22a.google.com with SMTP id g127so116262245ywf.2 for ; Mon, 15 Feb 2016 07:05:50 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=kraus-haus-org.20150623.gappssmtp.com; s=20150623; h=subject:mime-version:content-type:from:in-reply-to:date:cc :content-transfer-encoding:message-id:references:to; bh=bhSgh0e37Uq4AkncxmR0jZyfz10rIsm9dwZCULi4XtQ=; b=lQBSPgoGVTkcsuGbM3Te3bmkypKcCxi/CS/lmnYX4X+jyPj4kLThYyy11j6S3uz3Gi CN5CcV3m4/PGyr6dzbIZ1XyAhD0n1fV2jW/gmyef8WY/yuYUCybtySHThayKSKrmTIYg DC5DFKJhKz13p9leGMoaGhLAiYhrqMs+r1qBbR4Th7KA52JypKdrGuT8WQMdWUcO4HGq maugPUEFaB9z2lL9dfGcDOz41EshpHj2I9YLMypzYCHMcpcmgOVIQJNtpO+ams01jF0O nYRDvyA3C+WnEihiKUDBA1yKycf/VPA4HodXGDnMuHEbbR+03/kYRmxegze0y5T2heEe Q6tw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:subject:mime-version:content-type:from :in-reply-to:date:cc:content-transfer-encoding:message-id:references :to; bh=bhSgh0e37Uq4AkncxmR0jZyfz10rIsm9dwZCULi4XtQ=; b=N65semmTyqP/uHqXljWhxyzeS/gv5UTj0U0g6317rQcVL3hL789MngKx7YXt2zKcm7 VV26jkTc1sZDsAuXMADjpoWMsOBPryvkXrTTOQ0nNVV5diATlJOUcNTgjF12fxYsa7MH M8ZbZX9n/fyI2Hwe+3JD8W0hzQhOVYItEPQqx6cru+KlZO5apSZmiNH0Bnl4qASA779g UEq8Wy3B+yhRcNl+XG1XY+Z0IKK/tpkWPf0DplJhtL/JQ13P8wJ03YjbRpKr5YJqjJc+ b21suEGL+JWjc4HXIgHlTOPZ1X7NtOXLxuUmEeK8Z8O/0dxi/6gDHQ4Vf+sRkVdpofIN Lqlg== X-Gm-Message-State: AG10YORYtcp014aVtqARhrkQWZIJ1X7m4iTu9k5+yEQFES2nCizSCCHHjPUB/gVCEh68EA== X-Received: by 10.129.56.87 with SMTP id f84mr9046014ywa.14.1455548749549; Mon, 15 Feb 2016 07:05:49 -0800 (PST) Received: from [192.168.2.137] (pool-100-4-209-221.albyny.fios.verizon.net. [100.4.209.221]) by smtp.gmail.com with ESMTPSA id p189sm20883753ywc.9.2016.02.15.07.05.48 (version=TLS1 cipher=ECDHE-RSA-AES128-SHA bits=128/128); Mon, 15 Feb 2016 07:05:48 -0800 (PST) Subject: Re: Hours of tiny transfers at the end of a ZFS resilver? 
Mime-Version: 1.0 (Mac OS X Mail 7.3 \(1878.6\)) Content-Type: text/plain; charset=windows-1252 From: Paul Kraus In-Reply-To: <120226C8-3003-4334-9F5F-882CCB0D28C5@bigpond.net.au> Date: Mon, 15 Feb 2016 10:05:45 -0500 Cc: freebsd-fs@freebsd.org Content-Transfer-Encoding: quoted-printable Message-Id: <44B57B63-C9C5-4166-8737-D4866E6A9D08@kraus-haus.org> References: <120226C8-3003-4334-9F5F-882CCB0D28C5@bigpond.net.au> To: Andrew Reilly X-Mailer: Apple Mail (2.1878.6) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 15 Feb 2016 15:05:51 -0000 On Feb 15, 2016, at 5:18, Andrew Reilly wrote: > Hi Filesystem experts, > > I have a question about the nature of ZFS and the resilvering > that occurs after a drive replacement from a raidz array. How many snapshots do you have? I have seen this behavior on pools with many snapshots and ongoing creation of snapshots during the resilver. The resilver gets to somewhere above 95% (usually 99.xxx% for me) and then slows to a crawl, often for days. Most of the ZFS pools I manage have automated jobs to create hourly snapshots, so I am always creating snapshots. More below... > > I have a fairly simple home file server that (by way of > have had the system off-line for many hours (I guess). > > Now, one thing that I didn't realise at the start of this > process was that the zpool has the original 512B sector size > baked in at a fairly low level, so it is using some sort of > work-around for the fact that the new drives actually have 4096B > sectors (although they lie about that in smartctl -i queries): Running 4K native drives in a 512B pool will cause a performance hit. When I ran into this I rebuilt the pool from scratch as a 4K native pool.
If there is at least one 4K native drive in a given vdev the vdev will be created native 4K (at least under FreeBSD 10.x). My home server has a pool of mixed 512B and 4K drives. I made sure each vdev was built 4K. The code in the drive that emulates 512B behavior has not been very fast, and that is the crux of the performance issues. I just had to rebuild a pool because the 2TB WD Red Pro are 4K while the 2TB WD RE are 512B. > While clearly sub-optimal, I expect that the performance will > still be good enough for my purposes: I can build a new, > properly aligned file system when I do the next re-build. > > The odd thing is that after charging through the resilver using > large blocks (around 64k according to systat), when they get to > the end, as this one is now, the process drags on for hours with > millions of tiny, sub-2K transfers: Yup. The resilver process walks through the transaction groups (TXG), replaying them onto the new (replacement) drive. This is different from other traditional resync methods. It also means that the early TXGs will be large (as you loaded data) and then the size of the TXGs will vary with the size of the data written. > So there's a problem with the zpool status output: it's > predicting half an hour to go based on the averaged 67M/s over > the whole drive, not the <2MB/s that it's actually doing, and > will probably continue to do so for several hours, if tonight > goes the same way as last night. Last night zpool status said > "0h05m to go" for more than three hours, before I gave up > waiting to start the next drive. Yup, the code that estimates time to go is based on the overall average transfer rate, not the current one. In my experience the transfer rate peaks somewhere in the middle of the resilver. > Is this expected behaviour, or something bad and peculiar about > my system? Expected? I'm not sure if the designers of ZFS expected this behavior :-) But it is the typical behavior and is correct.
> I'm confused about how ZFS really works, given this state. I > had thought that the zpool layer did parity calculation in big > 256k-ish stripes across the drives, and the zfs filesystem layer > coped with that large block size because it had lots of caching > and wrote everything in log-structure. Clearly that mental > model must be incorrect, because then it would only ever be > doing large transfers. Anywhere I could go to find a nice > write-up of how ZFS is working? You really can't think about ZFS the same way as older systems, with a volume manager and a filesystem; they are fully integrated. For example, stripe size (across all the top level vdevs) is dynamic, changing with each write operation. I believe that it tries to include every top level vdev in each write operation. In your case that does not apply as you only have one top level vdev, but note that performance really scales with the number of top level vdevs more than the number of drives per vdev. Also note that striping within a RAIDz vdev is separate from the top level vdev striping. Take a look here: http://blog.delphix.com/matt/2014/06/06/zfs-stripe-width/ for a good discussion of ZFS striping for RAIDz vdevs. And don't forget to follow the links at the bottom of the page for more details. P.S. For performance it is generally recommended to use mirrors while for capacity use RAIDz, all tempered by the mean time to data loss (MTTDL) you need. Hint, a 3-way mirror has about the same MTTDL as a RAIDz2.
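[Editorial aside: to make the 512B-vs-4K point above concrete, ashift is fixed per vdev at creation time and sets the smallest allocation the vdev will ever make. The zdb/diskinfo commands in the comments are standard FreeBSD tools, with placeholder pool/device names; the executable part is just the power-of-two arithmetic.]

```shell
# ashift is baked into each vdev when it is created. On a live system:
#   zdb -C tank | grep ashift     # the pool's recorded value ("tank" is a placeholder)
#   diskinfo -v /dev/ada0         # GEOM sectorsize / stripesize for a drive
# The minimum allocation a vdev makes is 2^ashift bytes:
min_alloc() { echo $((1 << $1)); }
echo "ashift=9  -> $(min_alloc 9) byte minimum allocation"    # 512B-era pool
echo "ashift=12 -> $(min_alloc 12) byte minimum allocation"   # 4K-native pool
```

A 512B-era pool (ashift=9) keeps issuing 512-byte-granularity I/O even after the drives are swapped for 4K units, which is exactly the emulation penalty described above.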
-- Paul Kraus paul@kraus-haus.org From owner-freebsd-fs@freebsd.org Mon Feb 15 15:07:48 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id AD0C5AA9693; Mon, 15 Feb 2016 15:07:48 +0000 (UTC) (envelope-from lev@FreeBSD.org) Received: from onlyone.friendlyhosting.spb.ru (onlyone.friendlyhosting.spb.ru [46.4.40.135]) by mx1.freebsd.org (Postfix) with ESMTP id 7AA2C1321; Mon, 15 Feb 2016 15:07:48 +0000 (UTC) (envelope-from lev@FreeBSD.org) Received: from [127.0.0.1] (unknown [89.113.128.32]) (Authenticated sender: lev@serebryakov.spb.ru) by onlyone.friendlyhosting.spb.ru (Postfix) with ESMTPSA id 67AF3A054; Mon, 15 Feb 2016 18:07:39 +0300 (MSK) Reply-To: lev@FreeBSD.org Subject: Re: ZFS ARC vs Inactive memory on 10-STABLE: is it Ok? References: <56C1E0B4.5080201@FreeBSD.org> To: Mark Saad Cc: freebsd-fs@freebsd.org, FreeBSD-Stable ML From: Lev Serebryakov Organization: FreeBSD Message-ID: <56C1E9BA.3080504@FreeBSD.org> Date: Mon, 15 Feb 2016 18:07:38 +0300 User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:38.0) Gecko/20100101 Thunderbird/38.5.1 MIME-Version: 1.0 In-Reply-To: Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 15 Feb 2016 15:07:48 -0000 -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 On 15.02.2016 17:55, Mark Saad wrote: > Wow someone else as crazy as I was. :) ... > I eventually convinced management to move to an automated tape > library and a normal backup client It is my own home NAS, and I need to back up about 2TB offsite (you could call me paranoid, yes). CrashPlan is almost the only offering on the market I could afford.
I will be happy to use tarsnap or rsync.net, for example, but it is too expensive for me :( - -- // Lev Serebryakov -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iQJ8BAEBCgBmBQJWwem6XxSAAAAAAC4AKGlzc3Vlci1mcHJAbm90YXRpb25zLm9w ZW5wZ3AuZmlmdGhob3JzZW1hbi5uZXRGOTZEMUNBMEI1RjQzMThCNjc0QjMzMEFF QUIwM0M1OEJGREM0NzhGAAoJEOqwPFi/3EePIqwP/iefNVwjlBfyPVuIXKtlldPP CHLgdUaUSb+Nj67WJ20aKxw5nBHoJtRi60Ao1azutzYVEJdMDLGCX7ExqBAoMreY sDZlKfTQViYlMhFhXch0AddEcticKrIl24D8a2WAaKLsnFSNY/U4XEd/jYl/TUVd kbPbN+9Tufr/xzkilwiQJ7M1jKgq2PfPF6mrezPPRkvJGuBHpQPSOMRnlylQmw4K wpBxroepNohcSIdLHOKXD6nGt9vaIF7vWycjF/IGEoWi/mmyjjR8eqBGHc6t3kZt +yktp9q56TdPgh4EgfivoQyFiQwhlcUOB6HbrXyTSXXhpTKcy0/KpeiNUwnnp6j/ Qlm1xJPJnrw1mUj5i0790h4ZuFfurfFf7cL79RL9ZQHr7os5a5A5jNQlX2+GKSgD J5eraHgYiio4a3d805wsvCETJjZGjBn0Jk5YANuodAcZyzo66RffFQomuWMlSe13 Et9NXmWT6rrNhxVC7BJv8zhK7Xy7YhIBiY3xONbpxcX7bVJt/1LIha44/Ft/BoLp rlnJITPQZPn2FUwILQz4D+caJqkEeELpGd6Q385RLEHlf++izyT3MXvJl1MAhnYS RoByj7qdy1EfSi/C0uVdxRHwk+tItySpgTVQ+5gm6T8B1dDs+noXednC6j72i+Xf L14BPz8y5/9rCR93XhdP =5Vz8 -----END PGP SIGNATURE----- From owner-freebsd-fs@freebsd.org Mon Feb 15 17:13:33 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 4D966AA944E for ; Mon, 15 Feb 2016 17:13:33 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 3EB371BF4 for ; Mon, 15 Feb 2016 17:13:33 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.15.2/8.15.2) with ESMTP id u1FHDXJr035448 for ; Mon, 15 Feb 2016 17:13:33 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: 
freebsd-fs@FreeBSD.org Subject: [Bug 200663] zfs allow/unallow doesn't show numeric UID when the ID no longer exists in the password file Date: Mon, 15 Feb 2016 17:13:33 +0000 X-Bugzilla-Reason: CC X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 11.0-CURRENT X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: ler@lerctr.org X-Bugzilla-Status: New X-Bugzilla-Resolution: X-Bugzilla-Priority: --- X-Bugzilla-Assigned-To: freebsd-bugs@FreeBSD.org X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: quoted-printable X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 15 Feb 2016 17:13:33 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=3D200663 --- Comment #3 from Larry Rosenman --- any news on this? 
--=20 You are receiving this mail because: You are on the CC list for the bug.= From owner-freebsd-fs@freebsd.org Mon Feb 15 20:33:13 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 92001AA862F for ; Mon, 15 Feb 2016 20:33:13 +0000 (UTC) (envelope-from rpokala@mac.com) Received: from mr11p00im-asmtp003.me.com (mr11p00im-asmtp003.me.com [17.110.69.254]) (using TLSv1.2 with cipher DHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 812CB1CE0 for ; Mon, 15 Feb 2016 20:33:13 +0000 (UTC) (envelope-from rpokala@mac.com) Received: from [172.20.10.3] (unknown [172.56.38.124]) by mr11p00im-asmtp003.me.com (Oracle Communications Messaging Server 7.0.5.36.0 64bit (built Sep 8 2015)) with ESMTPSA id <0O2L003QIVR69O00@mr11p00im-asmtp003.me.com> for freebsd-fs@freebsd.org; Mon, 15 Feb 2016 20:33:07 +0000 (GMT) X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10432:,, definitions=2016-02-15_10:,, signatures=0 X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 spamscore=0 clxscore=1011 suspectscore=0 malwarescore=0 phishscore=0 adultscore=0 bulkscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.0.1-1510270003 definitions=main-1602150340 User-Agent: Microsoft-MacOutlook/0.0.0.160109 Date: Mon, 15 Feb 2016 12:33:04 -0800 Subject: Re: Hours of tiny transfers at the end of a ZFS resilver? From: Ravi Pokala Sender: "Pokala, Ravi" To: "freebsd-fs@freebsd.org" Message-id: <8E04E52A-2635-4253-8140-F69495D7D0A6@panasas.com> Thread-topic: Hours of tiny transfers at the end of a ZFS resilver? 
MIME-version: 1.0 Content-type: text/plain; charset=UTF-8 Content-transfer-encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 15 Feb 2016 20:33:13 -0000 >Date: Mon, 15 Feb 2016 21:18:59 +1100 >From: Andrew Reilly >To: freebsd-fs@freebsd.org >Subject: Hours of tiny transfers at the end of a ZFS resilver? >Message-ID: <120226C8-3003-4334-9F5F-882CCB0D28C5@bigpond.net.au> >Content-Type: text/plain; charset=us-ascii > >Hi Filesystem experts, Hi Andrew, I am in no way, shape, or form a filesystem expert. :-) I *am*, however, an ATA drive expert. I wanted to clarify something you said, because it seems to be a common misunderstanding. >... the new drives actually have 4096B sectors (although they lie about that in smartctl -i queries): They're not lying. >Sector Sizes: 512 bytes logical, 4096 bytes physical Right there - "Sector Size: ... 4096 bytes physical". This 512B logical / 4KB physical scheme is called AF-512e, and is a documented, standard format. https://en.wikipedia.org/wiki/Advanced_Format#512e The intent is to allow backwards-compatibility with software going back decades, which only knows about 512-byte sectors. Such software would treat an AF-512e drive the same as any other drive. The trade-off is performance, because the drive has to transparently perform read-modify-write operations (for sub-4096B writes), and read the full 4096B physical sector even if only a single 512B logical sector was requested. (Ditto if a properly-sized but un-aligned request were made.) I know GEOM reports both the logical and physical sector sizes (as the provider's "sectorsize" and "stripesize", respectively), and I know that the ATACAM driver is populating them correctly based on the drive's IDENTIFY_DEVICE information.
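[Editorial aside: the read-modify-write condition described above can be stated precisely. A 512e drive services a write without RMW only when both the offset and the length are multiples of the 4096-byte physical sector; a small illustrative sketch, where the helper name is made up for this example:]

```shell
# Does a 512e drive need a read-modify-write for a given write request?
# Arguments: byte offset, byte length. RMW is avoided only when both
# ends of the request fall on 4096-byte physical sector boundaries.
needs_rmw() {
    off=$1
    len=$2
    if [ $((off % 4096)) -eq 0 ] && [ $((len % 4096)) -eq 0 ]; then
        echo no
    else
        echo yes
    fi
}
echo "4K write at offset 0:    rmw=$(needs_rmw 0 4096)"
echo "4K write at offset 512:  rmw=$(needs_rmw 512 4096)"
echo "single 512B write:       rmw=$(needs_rmw 0 512)"
```

Reads are cheaper: the drive simply reads the whole physical sector and returns the requested slice, so misalignment costs read amplification rather than an RMW cycle.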
(My very first submission to the project was to fix some bugs in this code; r262886, 2014-03-07.) [*] I *don't* know if ZFS does the right thing automatically; it might not be able to determine what "the right thing" is in all cases. I leave answering that to the actual ZFS experts. :-) -Ravi (rpokala@) [*] This is probably a good segue into discussing why we even have the ADA_Q_4K quirk, and whether we should get rid of it...? --rp From owner-freebsd-fs@freebsd.org Mon Feb 15 21:08:42 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 7F70FAA968D for ; Mon, 15 Feb 2016 21:08:42 +0000 (UTC) (envelope-from killing@multiplay.co.uk) Received: from mail-wm0-x235.google.com (mail-wm0-x235.google.com [IPv6:2a00:1450:400c:c09::235]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 2179F10E5 for ; Mon, 15 Feb 2016 21:08:41 +0000 (UTC) (envelope-from killing@multiplay.co.uk) Received: by mail-wm0-x235.google.com with SMTP id g62so77086459wme.1 for ; Mon, 15 Feb 2016 13:08:41 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=multiplay-co-uk.20150623.gappssmtp.com; s=20150623; h=subject:to:references:from:message-id:date:user-agent:mime-version :in-reply-to:content-type:content-transfer-encoding; bh=lbq2bmY+rEZzV3bJkZPlMaJOOp/DL8mOndJnQ4O4ZA0=; b=XhP4ONcZ4tFlwss5qww7S1vGGqLap4sDwe6Jmqf8KGYFJw61Y++JaBcX0ACYXHvA6B XysFrY5Hyo79BnPsresrCUD+wPf8W6lArYiLfpR55jhUisQKg/iSRNv2ITEX2pQpweF0 F6r4Jna3WOfqux5KK/UkeM3jbnkvR63cd+SFC7/NH+WOvbSKXl3bKgwcJjvjW2n+5Nw2 /MkN71MtI9wKTaHYRTf6MFzfqXwuxK02NJ1RjJlnf6GaiKAI6KP3QcsHNeVl+7h7kz3b +WEjikopccd5QjcAyADdegtBQKv2T6z7aKLmvnNEOcFGZSTr5HGHUF3mUhaWiLchs9IE Kutg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; 
s=20130820; h=x-gm-message-state:subject:to:references:from:message-id:date :user-agent:mime-version:in-reply-to:content-type :content-transfer-encoding; bh=lbq2bmY+rEZzV3bJkZPlMaJOOp/DL8mOndJnQ4O4ZA0=; b=g9PO2MHpHC33FWvm5gP8fEbPDQkloGoZ0J/x6qk3UVgj52dYfvoztZy8Mr+LBwH2U4 3C5TPrZMRLnUBAMU5pGSmhZvYv/33Ij8i9FiL+Z3wKwFs0i9I8sZem/KSQUchhDfC1fY s6ectEex3x1bqc07S9Ko19Vec9WiQQ7fbKfEJBYMMFUVhzsnCipPl5+N7lcXvj5BLq2e g0CdE5R61Kg2bix+Yd8MPv9hmH42/vHqVYZtZaC2pLxhsNMi1WgdcW9Q4px088vEts5u dzFdIIM4U0F7Mix+ojxbEbkwYAoBtmMpwWxLagurFIeFxhrf73nrckm82zR5dfDYgoiy VIUA== X-Gm-Message-State: AG10YOR2C1jU0fQ2FOE7SuYV3YhUBwdHxa8BQF8ZRxn/oy5rMqBCxRU2ypBpPNPC0pqZgRue X-Received: by 10.28.212.9 with SMTP id l9mr15646122wmg.75.1455570520075; Mon, 15 Feb 2016 13:08:40 -0800 (PST) Received: from [10.10.1.58] (liv3d.labs.multiplay.co.uk. [82.69.141.171]) by smtp.gmail.com with ESMTPSA id et11sm27072530wjc.30.2016.02.15.13.08.38 for (version=TLSv1/SSLv3 cipher=OTHER); Mon, 15 Feb 2016 13:08:38 -0800 (PST) Subject: Re: Hours of tiny transfers at the end of a ZFS resilver? To: freebsd-fs@freebsd.org References: <8E04E52A-2635-4253-8140-F69495D7D0A6@panasas.com> From: Steven Hartland Message-ID: <56C23E5B.7060207@multiplay.co.uk> Date: Mon, 15 Feb 2016 21:08:43 +0000 User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:38.0) Gecko/20100101 Thunderbird/38.5.1 MIME-Version: 1.0 In-Reply-To: <8E04E52A-2635-4253-8140-F69495D7D0A6@panasas.com> Content-Type: text/plain; charset=windows-1252; format=flowed Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 15 Feb 2016 21:08:42 -0000 On 15/02/2016 20:33, Ravi Pokala wrote: >> Date: Mon, 15 Feb 2016 21:18:59 +1100 >> From: Andrew Reilly >> To: freebsd-fs@freebsd.org >> Subject: Hours of tiny transfers at the end of a ZFS resilver? 
>> Message-ID: <120226C8-3003-4334-9F5F-882CCB0D28C5@bigpond.net.au> >> Content-Type: text/plain; charset=us-ascii >> >> Hi Filesystem experts, > Hi Andrew, > > I am in no way, shape, or form a filesystem expert. :-) I *am*, however, an ATA drive expert. I wanted to clarify something you said, because it seems to be a common misunderstanding. > >> ... the new drives actually have 4096B sectors (although they lie about that in smartctl -i queries): > They're not lying. > >> Sector Sizes: 512 bytes logical, 4096 bytes physical > Right there - "Sector Size: ... 4096 bytes physical". This is 512B logical / 4KB physical scheme is called AF-512e, and is a documented, standard format. > > https://en.wikipedia.org/wiki/Advanced_Format#512e > > The intent is to allow backwards-compatibility with software going back decades, which only knows about 512-byte sectors. Such software would treat an AF-512e drive the same as any other drive. The trade-off is performance, because the drive has to transparently perform read-modify-write operations (for sub-4096B writes), and read the full 4096B physical sector even if only a single 512B logical sector was requested. (Ditto if a properly-sized but un-aligned request were made.) > > I know GEOM reports both the logical and physical sector sizes (as the provider's "sectorsize" and "stripesize", respectively), and I know that the ATACAM driver is populating them correctly based on the drive's IDENTIFY_DEVICE information. (My very first submission to the project was to fix some bugs in this code; r262886, 2014-03-07.) [*] > > I *don't* know if ZFS does the right thing automatically; it might not be able to determine what "the right thing" is in all cases. I leave answering that to the actual ZFS experts. :-) Yes this was added nearly 2 1/2 years ago by r254591 > > -Ravi (rpokala@) > > [*] This is probably a good segue into discussing why we even have the ADA_Q_4K quirk, and whether we should get rid of it...? 
--rp The 4K quirk exists because a large number of devices don't report 4K correctly, instead reporting 512 for both the logical and physical sector size even when the physical sector size is actually 4K or larger. Regards Steve From owner-freebsd-fs@freebsd.org Mon Feb 15 21:35:10 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 751FDAA8495 for ; Mon, 15 Feb 2016 21:35:10 +0000 (UTC) (envelope-from gpalmer@freebsd.org) Received: from mail.in-addr.com (mail.in-addr.com [IPv6:2a01:4f8:191:61e8::2525:2525]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 4421524D for ; Mon, 15 Feb 2016 21:35:10 +0000 (UTC) (envelope-from gpalmer@freebsd.org) Received: from gjp by mail.in-addr.com with local (Exim 4.86 (FreeBSD)) (envelope-from ) id 1aVQnS-0005o8-7p for freebsd-fs@freebsd.org; Mon, 15 Feb 2016 21:35:06 +0000 Date: Mon, 15 Feb 2016 21:35:06 +0000 From: Gary Palmer To: freebsd-fs@freebsd.org Subject: Re: Hours of tiny transfers at the end of a ZFS resilver?
Message-ID: <20160215213506.GB28757@in-addr.com> References: <8E04E52A-2635-4253-8140-F69495D7D0A6@panasas.com> <56C23E5B.7060207@multiplay.co.uk> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <56C23E5B.7060207@multiplay.co.uk> X-SA-Exim-Connect-IP: X-SA-Exim-Mail-From: gpalmer@freebsd.org X-SA-Exim-Scanned: No (on mail.in-addr.com); SAEximRunCond expanded to false X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 15 Feb 2016 21:35:10 -0000 On Mon, Feb 15, 2016 at 09:08:43PM +0000, Steven Hartland wrote: > > > On 15/02/2016 20:33, Ravi Pokala wrote: > >> Date: Mon, 15 Feb 2016 21:18:59 +1100 > >> From: Andrew Reilly > >> To: freebsd-fs@freebsd.org > >> Subject: Hours of tiny transfers at the end of a ZFS resilver? > >> Message-ID: <120226C8-3003-4334-9F5F-882CCB0D28C5@bigpond.net.au> > >> Content-Type: text/plain; charset=us-ascii > >> > >> Hi Filesystem experts, > > Hi Andrew, > > > > I am in no way, shape, or form a filesystem expert. :-) I *am*, however, an ATA drive expert. I wanted to clarify something you said, because it seems to be a common misunderstanding. > > > >> ... the new drives actually have 4096B sectors (although they lie about that in smartctl -i queries): > > They're not lying. > > > >> Sector Sizes: 512 bytes logical, 4096 bytes physical > > Right there - "Sector Size: ... 4096 bytes physical". This is 512B logical / 4KB physical scheme is called AF-512e, and is a documented, standard format. > > > > https://en.wikipedia.org/wiki/Advanced_Format#512e > > > > The intent is to allow backwards-compatibility with software going back decades, which only knows about 512-byte sectors. Such software would treat an AF-512e drive the same as any other drive. 
The trade-off is performance, because the drive has to transparently perform read-modify-write operations (for sub-4096B writes), and read the full 4096B physical sector even if only a single 512B logical sector was requested. (Ditto if a properly-sized but un-aligned request were made.) > > > > I know GEOM reports both the logical and physical sector sizes (as the provider's "sectorsize" and "stripesize", respectively), and I know that the ATACAM driver is populating them correctly based on the drive's IDENTIFY_DEVICE information. (My very first submission to the project was to fix some bugs in this code; r262886, 2014-03-07.) [*] > > > > I *don't* know if ZFS does the right thing automatically; it might not be able to determine what "the right thing" is in all cases. I leave answering that to the actual ZFS experts. :-) > Yes this was added nearly 2 1/2 years ago by r254591 It should be noted that ZFS can do the right thing only at pool creation time. Once the pool has been created the sector size of the underlying disks is baked in and can only be changed by creating a new pool on the advanced format disks (or forcing the larger ashift value when you initially create the pool, even if the disks are really 512 byte sector drives) Regards, Gary > > > > -Ravi (rpokala@) > > > > [*] This is probably a good segue into discussing why we even have the ADA_Q_4K quirk, and whether we should get rid of it...? --rp > The 4k quirks exists because a large amount of devices don't report 4k > correctly instead just reporting 512 for both logical and physical even > when they are actually 4k or larger physical sector size. 
> > Regards > Steve > _______________________________________________ > freebsd-fs@freebsd.org mailing list > https://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@freebsd.org Mon Feb 15 23:26:09 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id F2E65AA98EC for ; Mon, 15 Feb 2016 23:26:08 +0000 (UTC) (envelope-from demis@yandex.ru) Received: from forward14p.cmail.yandex.net (forward14p.cmail.yandex.net [IPv6:2a02:6b8:0:1465::be]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "forwards.mail.yandex.net", Issuer "Yandex CA" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 8CE131E33 for ; Mon, 15 Feb 2016 23:26:08 +0000 (UTC) (envelope-from demis@yandex.ru) Received: from web3g.yandex.ru (web3g.yandex.ru [95.108.252.103]) by forward14p.cmail.yandex.net (Yandex) with ESMTP id CAAC721BA9 for ; Tue, 16 Feb 2016 02:26:03 +0300 (MSK) Received: from web3g.yandex.ru (localhost [127.0.0.1]) by web3g.yandex.ru (Yandex) with ESMTP id 5F61C39627F3; Tue, 16 Feb 2016 02:26:03 +0300 (MSK) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yandex.ru; s=mail; t=1455578763; bh=UxwSChaGc00vs/fI0Dp7Z5zU47UF42eCJlvtvKwNCYs=; h=From:To:Subject:Date; b=t/jrpfiXjE5FeUC52Rb8N15hEdY9ski0ENyC7JsimRUcnw2Z/j4x84t/Xdq3ZcwQi jIFalG1h5NrwH6hv49XF8Unf6Oe6ljL9QxKVYNeVp6Rbf8dVd4MDkORdDWius3KRp5 /tce0VKMf/cWMbHxfIT93SnRhz2fmC+sNUUO2eaU= Received: by web3g.yandex.ru with HTTP; Tue, 16 Feb 2016 02:26:00 +0300 From: DemIS To: freebsd-fs@freebsd.org Subject: Kernel panic zio.c, line: 270 FreeBSD 10.2 (or 10.3) MIME-Version: 1.0 Message-Id: <1061671455578760@web3g.yandex.ru> X-Mailer: Yamail [ http://yandex.ru ] 5.0 Date: Tue, 16 Feb 2016 02:26:00 +0300 Content-Transfer-Encoding: 8bit Content-Type: text/plain; 
charset=koi8-r X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 15 Feb 2016 23:26:09 -0000 Does anyone know about this problem? Server: SuperMicro Model:SYS-6026T-6RF+, MB:X8DTU-6F+, RAM 24 DDR3, two XEON RAM: KVR1333D3E9S/4G - DDR3, 1333MHz, ECC, CL9, X8, 1.5V, Unbuffered, DIMM Version: uname -a FreeBSD teo.some.loc 10.2-RELEASE-p12 FreeBSD 10.2-RELEASE-p12 #0: Sat Feb 13 18:04:04 MSK 2016 demis@teo.some.loc:/usr/obj/usr/src/sys/TEO amd64 (the problem persists on GENERIC and custom kernel configs too!) Memtest86+ v.4.40 (ECC mode) test - OK. Every disk was checked too (physically with mhdd, logically with zpool scrub, plus an additional check by an external data-recovery company). No errors. Part of df -H Filesystem Size Used Avail Capacity Mounted on hdd/usr/wf 6,6T 4,1T 2,5T 62% /hdd/usr/wf zpool status hdd pool: hdd state: ONLINE status: Some supported features are not enabled on the pool. The pool can still be used, but some features are unavailable. action: Enable all features using 'zpool upgrade'. Once this is done, the pool may no longer be accessible by software that does not support the features. See zpool-features(7) for details. scan: scrub repaired 0 in 14h57m with 0 errors on Thu Feb 11 03:35:43 2016 config: NAME STATE READ WRITE CKSUM hdd ONLINE 0 0 0 raidz2-0 ONLINE 0 0 0 mfid1p1 ONLINE 0 0 0 mfid2p1 ONLINE 0 0 0 mfid3p1 ONLINE 0 0 0 mfid4p1 ONLINE 0 0 0 mfid5p1 ONLINE 0 0 0 errors: No known data errors hdd is my ZFS volume. When I run a command like: rm /hdd/usr/some/path/to/file or rm /hdd/usr/some/path/to/folder or chown root:wheel /hdd/usr/some/path/to/file or chown root:wheel /hdd/usr/some/path/to/folder or setfacl ... to /hdd/usr/some/path/to/file I get a kernel panic: GNU gdb 6.1.1 [FreeBSD] Copyright 2004 Free Software Foundation, Inc. 
GDB is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type "show copying" to see the conditions. There is absolutely no warranty for GDB. Type "show warranty" for details. This GDB was configured as "amd64-marcel-freebsd"... Unread portion of the kernel message buffer: panic: solaris assert: c < (1ULL << 24) >> 9 (0x7fffffffffffff < 0x8000), file: /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c, line: 270 cpuid = 9 KDB: stack backtrace: #0 0xffffffff80984ef0 at kdb_backtrace+0x60 #1 0xffffffff80948aa6 at vpanic+0x126 #2 0xffffffff80948973 at panic+0x43 #3 0xffffffff81c0222f at assfail3+0x2f #4 0xffffffff81aa9d40 at zio_buf_alloc+0x50 #5 0xffffffff81a2b9f8 at arc_get_data_buf+0x358 #6 0xffffffff81a2e20a at arc_read+0x1ea #7 0xffffffff81a3669c at dbuf_read+0x6ac #8 0xffffffff81a3d8bf at dmu_spill_hold_existing+0xbf #9 0xffffffff81a70dd7 at sa_attr_op+0x167 #10 0xffffffff81a72ffb at sa_lookup+0x4b #11 0xffffffff81abc82a at zfs_rmnode+0x2ba #12 0xffffffff81ada58e at zfs_freebsd_reclaim+0x4e #13 0xffffffff80e73537 at VOP_RECLAIM_APV+0xa7 #14 0xffffffff809ec5b4 at vgonel+0x1b4 #15 0xffffffff809eca49 at vrecycle+0x59 #16 0xffffffff81ada52d at zfs_freebsd_inactive+0xd #17 0xffffffff80e73427 at VOP_INACTIVE_APV+0xa7 Uptime: 9m31s Dumping 1286 out of 24543 MB:..2%..12%..22%..32%..42%..51%..61%..71% (CTRL-C to abort) ..81%..91% Reading symbols from /boot/kernel/zfs.ko.symbols...done. Loaded symbols for /boot/kernel/zfs.ko.symbols Reading symbols from /boot/kernel/opensolaris.ko.symbols...done. Loaded symbols for /boot/kernel/opensolaris.ko.symbols Reading symbols from /boot/kernel/if_lagg.ko.symbols...done. Loaded symbols for /boot/kernel/if_lagg.ko.symbols Reading symbols from /boot/kernel/ums.ko.symbols...done. Loaded symbols for /boot/kernel/ums.ko.symbols Reading symbols from /boot/kernel/ipfw.ko.symbols...done. 
Loaded symbols for /boot/kernel/ipfw.ko.symbols #0 doadump (textdump=) at pcpu.h:219 219 pcpu.h: No such file or directory. in pcpu.h (kgdb) bt #0 doadump (textdump=) at pcpu.h:219 #1 0xffffffff80948702 in kern_reboot (howto=260) at /usr/src/sys/kern/kern_shutdown.c:451 #2 0xffffffff80948ae5 in vpanic (fmt=, ap=) at /usr/src/sys/kern/kern_shutdown.c:758 #3 0xffffffff80948973 in panic (fmt=0x0) at /usr/src/sys/kern/kern_shutdown.c:687 #4 0xffffffff81c0222f in assfail3 (a=, lv=, op=, rv=, f=, l=) at /usr/src/sys/modules/opensolaris/../../cddl/compat/opensolaris/kern/opensolaris_cmn_err.c:91 #5 0xffffffff81aa9d40 in zio_buf_alloc (size=0) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:270 #6 0xffffffff81a2b9f8 in arc_get_data_buf (buf=) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:2898 #7 0xffffffff81a2e20a in arc_read (pio=0xfffff80011791730, spa=0xfffff80011579000, bp=0xfffffe000aee7980, done=0xffffffff81a3a2d0 , private=0xfffff8002244b000, priority=ZIO_PRIORITY_SYNC_READ, zio_flags=-528866606, arc_flags=0xfffffe06727fb3c4, zb=0xffffffff81a3a2d0) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:1551 #8 0xffffffff81a3669c in dbuf_read (db=0xfffff8002244b000, zio=0x0, flags=6) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dbuf.c:573 #9 0xffffffff81a3d8bf in dmu_spill_hold_existing (bonus=0xfffff800223bed20, tag=0x0, dbp=0xfffff800919966b8) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu.c:333 #10 0xffffffff81a70dd7 in sa_attr_op (hdl=0xfffff80091996690, bulk=0xfffffe06727fb528, count=1, data_op=SA_LOOKUP, tx=0x0) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/sa.c:310 #11 0xffffffff81a72ffb in sa_lookup (hdl=0xfffff80091996690, attr=, buf=, buflen=) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/sa.c:1441 #12 0xffffffff81abc82a in 
zfs_rmnode (zp=0xfffff80091993730) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_dir.c:633 #13 0xffffffff81ada58e in zfs_freebsd_reclaim (ap=) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c:6569 #14 0xffffffff80e73537 in VOP_RECLAIM_APV (vop=, a=) at vnode_if.c:2019 #15 0xffffffff809ec5b4 in vgonel (vp=0xfffff800111733b0) at vnode_if.h:830 #16 0xffffffff809eca49 in vrecycle (vp=0xfffff800111733b0) at /usr/src/sys/kern/vfs_subr.c:2703 #17 0xffffffff81ada52d in zfs_freebsd_inactive (ap=) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c:6540 #18 0xffffffff80e73427 in VOP_INACTIVE_APV (vop=, a=) at vnode_if.c:1953 #19 0xffffffff809eb382 in vinactive (vp=0xfffff800111733b0, td=0xfffff800113d8000) at vnode_if.h:807 #20 0xffffffff809eb772 in vputx (vp=0xfffff800111733b0, func=2) at /usr/src/sys/kern/vfs_subr.c:2306 #21 0xffffffff809f401e in kern_rmdirat (td=, fd=, path=, pathseg=) at /usr/src/sys/kern/vfs_syscalls.c:3842 #22 0xffffffff80d4b3e7 in amd64_syscall (td=0xfffff800113d8000, traced=0) at subr_syscall.c:134 #23 0xffffffff80d30acb in Xfast_syscall () at /usr/src/sys/amd64/amd64/exception.S:396 #24 0x00000008008914ea in ?? () Previous frame inner to this frame (corrupt stack?) Current language: auto; currently minimal If setup FreeBSD 10.3 BETA (on GENERIC or custom kernel config): Copyright 2004 Free Software Foundation, Inc. GDB is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type "show copying" to see the conditions. There is absolutely no warranty for GDB. Type "show warranty" for details. This GDB was configured as "amd64-marcel-freebsd"... 
Unread portion of the kernel message buffer: panic: solaris assert: c < (1ULL << 24) >> 9 (0x7fffffffffffff < 0x8000), file: /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c, line: 273 cpuid = 13 KDB: stack backtrace: #0 0xffffffff8098f000 at kdb_backtrace+0x60 #1 0xffffffff80951d06 at vpanic+0x126 #2 0xffffffff80951bd3 at panic+0x43 #3 0xffffffff81e0022f at assfail3+0x2f #4 0xffffffff81cacc70 at zio_buf_alloc+0x50 #5 0xffffffff81c2b8f2 at arc_get_data_buf+0x262 #6 0xffffffff81c2b657 at arc_buf_alloc+0xc7 #7 0xffffffff81c2d601 at arc_read+0x1c1 #8 0xffffffff81c36ce9 at dbuf_read+0x6b9 #9 0xffffffff81c3e415 at dmu_spill_hold_existing+0xc5 #10 0xffffffff81c73707 at sa_attr_op+0x167 #11 0xffffffff81c75972 at sa_lookup+0x52 #12 0xffffffff81cbf8da at zfs_rmnode+0x2ba #13 0xffffffff81cdd75e at zfs_freebsd_reclaim+0x4e #14 0xffffffff80e81c27 at VOP_RECLAIM_APV+0xa7 #15 0xffffffff809f9581 at vgonel+0x221 #16 0xffffffff809f9a19 at vrecycle+0x59 #17 0xffffffff81cdd6fd at zfs_freebsd_inactive+0xd Uptime: 11m11s Dumping 1368 out of 24542 MB:..2%..11%..22%..31%..41%..51%..61%..71%..81%..91% Reading symbols from /boot/kernel/if_lagg.ko.symbols...done. Loaded symbols for /boot/kernel/if_lagg.ko.symbols Reading symbols from /boot/kernel/aio.ko.symbols...done. Loaded symbols for /boot/kernel/aio.ko.symbols Reading symbols from /boot/kernel/ichsmb.ko.symbols...done. Loaded symbols for /boot/kernel/ichsmb.ko.symbols Reading symbols from /boot/kernel/smbus.ko.symbols...done. Loaded symbols for /boot/kernel/smbus.ko.symbols Reading symbols from /boot/kernel/ipmi.ko.symbols...done. Loaded symbols for /boot/kernel/ipmi.ko.symbols Reading symbols from /boot/kernel/zfs.ko.symbols...done. Loaded symbols for /boot/kernel/zfs.ko.symbols Reading symbols from /boot/kernel/opensolaris.ko.symbols...done. Loaded symbols for /boot/kernel/opensolaris.ko.symbols Reading symbols from /boot/kernel/ums.ko.symbols...done. 
Loaded symbols for /boot/kernel/ums.ko.symbols Reading symbols from /boot/kernel/ipfw.ko.symbols...done. Loaded symbols for /boot/kernel/ipfw.ko.symbols #0 doadump (textdump=) at pcpu.h:219 219 pcpu.h: No such file or directory. in pcpu.h (kgdb) backtrace #0 doadump (textdump=) at pcpu.h:219 #1 0xffffffff80951962 in kern_reboot (howto=260) at /usr/src/sys/kern/kern_shutdown.c:486 #2 0xffffffff80951d45 in vpanic (fmt=, ap=) at /usr/src/sys/kern/kern_shutdown.c:889 #3 0xffffffff80951bd3 in panic (fmt=0x0) at /usr/src/sys/kern/kern_shutdown.c:818 #4 0xffffffff81e0022f in assfail3 (a=, lv=, op=, rv=, f=, l=) at /usr/src/sys/modules/opensolaris/../../cddl/compat/opensolaris/kern/opensolaris_cmn_err.c:91 #5 0xffffffff81cacc70 in zio_buf_alloc (size=0) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:273 #6 0xffffffff81c2b8f2 in arc_get_data_buf (buf=) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:3880 #7 0xffffffff81c2b657 in arc_buf_alloc (spa=, size=, tag=0x0, type=) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:2057 #8 0xffffffff81c2d601 in arc_read (pio=0xfffff8000fad03b0, spa=0xfffff8000f63d000, bp=0xfffffe000e509980, done=0xffffffff81c3aed0 , private=0xfffff8000fdd6360, priority=ZIO_PRIORITY_SYNC_READ, zio_flags=-2117882160, arc_flags=0xfffffe02925483c4, zb=0xfffff8000fdd6360) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:4397 #9 0xffffffff81c36ce9 in dbuf_read (db=0xfffff8000fdd6360, zio=0x0, flags=) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dbuf.c:682 #10 0xffffffff81c3e415 in dmu_spill_hold_existing (bonus=0xfffff8001f312438, tag=0x0, dbp=0xfffff80062d4e7d0) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu.c:333 #11 0xffffffff81c73707 in sa_attr_op (hdl=0xfffff80062d4e770, bulk=0xfffffe0292548528, count=1, data_op=SA_LOOKUP, tx=0x0) at 
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/sa.c:305 #12 0xffffffff81c75972 in sa_lookup (hdl=0xfffff80062d4e770, attr=, buf=, buflen=) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/sa.c:1443 #13 0xffffffff81cbf8da in zfs_rmnode (zp=0xfffff80062d4c8a0) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_dir.c:633 #14 0xffffffff81cdd75e in zfs_freebsd_reclaim (ap=) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c:6619 #15 0xffffffff80e81c27 in VOP_RECLAIM_APV (vop=, a=) at vnode_if.c:2019 #16 0xffffffff809f9581 in vgonel (vp=0xfffff8000f1beb10) at vnode_if.h:830 #17 0xffffffff809f9a19 in vrecycle (vp=0xfffff8000f1beb10) at /usr/src/sys/kern/vfs_subr.c:2951 #18 0xffffffff81cdd6fd in zfs_freebsd_inactive (ap=) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c:6590 #19 0xffffffff80e81b17 in VOP_INACTIVE_APV (vop=, a=) at vnode_if.c:1953 #20 0xffffffff809f8322 in vinactive (vp=0xfffff8000f1beb10, td=0xfffff8000f9f34b0) at vnode_if.h:807 #21 0xffffffff809f8712 in vputx (vp=0xfffff8000f1beb10, func=2) at /usr/src/sys/kern/vfs_subr.c:2547 #22 0xffffffff80a0137e in kern_rmdirat (td=, fd=, path=, pathseg=) at /usr/src/sys/kern/vfs_syscalls.c:3964 #23 0xffffffff80d574bf in amd64_syscall (td=0xfffff8000f9f34b0, traced=0) at subr_syscall.c:141 #24 0xffffffff80d3c72b in Xfast_syscall () at /usr/src/sys/amd64/amd64/exception.S:396 #25 0x000000080089458a in ?? () Previous frame inner to this frame (corrupt stack?) Current language: auto; currently minimal The crashing folder (or file) has strange permissions: d---------+ 3 anna domain users 3 10 10:32 01-Projcts d---------+ 2 anna domain users 2 8 21:46 02-Text How can I fix this kernel panic? 
From owner-freebsd-fs@freebsd.org Mon Feb 15 23:55:10 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 5EA30AAA375 for ; Mon, 15 Feb 2016 23:55:10 +0000 (UTC) (envelope-from killing@multiplay.co.uk) Received: from mail-wm0-x22e.google.com (mail-wm0-x22e.google.com [IPv6:2a00:1450:400c:c09::22e]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 0F103D23 for ; Mon, 15 Feb 2016 23:55:09 +0000 (UTC) (envelope-from killing@multiplay.co.uk) Received: by mail-wm0-x22e.google.com with SMTP id a4so76536394wme.1 for ; Mon, 15 Feb 2016 15:55:09 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=multiplay-co-uk.20150623.gappssmtp.com; s=20150623; h=subject:to:references:from:message-id:date:user-agent:mime-version :in-reply-to:content-type:content-transfer-encoding; bh=X5rGmyKKWUldLS2KjDgnuf/7Y58Jt6wzwzDUpNpEBWg=; b=rBB6E7F5nyZ3BLJ5q8pfuTyzgJZM7SB5A9f3dygAaaCtODfhirVcSIXxvwdeLZ/NEU GjdXdvP5tZVacBZHtTsyLIV8tG6hTUO4qiHbRCYXjrH/EaGC0RbLB7vf5hrfSzQ0ky4d /mvmQEI2OgE2bF8hITqvpKyL6ISA0in86kd3W49SlyFCLeJngDPPyF7LReqdV/wserJH Rvrzy7UqxS9MMYIKpysr8M/2Wued/vB9/4xOtbwz/26dCBBxthsx7b2R+AOjRCkFwcBg jxN9Xv3PxhWNeBVMEGgEM+NdxM22aVOudix0Ky0pqznErIjf00lYUNE4f7uKxRq8lmex wx3w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:subject:to:references:from:message-id:date :user-agent:mime-version:in-reply-to:content-type :content-transfer-encoding; bh=X5rGmyKKWUldLS2KjDgnuf/7Y58Jt6wzwzDUpNpEBWg=; b=R+xZsNZ0oNbYCaJUxai+TJpCirLImPoD6GIPYGmaZx4ulLtDgI+5qkSU/pZNUxmAGm zph1CNO/PhfdpRBPhsj4XhwJRUURKp6MOC9AfYrGsT24MHliW1vKtDnNUH3vdtwxn2oH EFe1y4FkfW6r1NGy77ANoUVHsXCHyGt2RCQFdl0a3tJ4Ng722bEu6W59pbjpmvQCCtpf 
IwamXtJnoKjFSgGwzhqIUBm8ZXT8XCdfmItYKu0HJUKoMcZL4rH4rGijbm5m8LzzB2W2 bg/s1GMgRCk2wzqSy+HxqRDrgJACDCdtPFgBrYcz+EEGTIKVZ8ry1P1SkNsnKy+gNhmC iXNw== X-Gm-Message-State: AG10YORjsRQNKqHKNakx6Y98HGsupz2iJg98UiaLFz7KJdYK2e467R8HeOofuzjv0NV1U1Ta X-Received: by 10.194.236.233 with SMTP id ux9mr1618674wjc.161.1455580507293; Mon, 15 Feb 2016 15:55:07 -0800 (PST) Received: from [10.10.1.58] (liv3d.labs.multiplay.co.uk. [82.69.141.171]) by smtp.gmail.com with ESMTPSA id b203sm17800765wmh.8.2016.02.15.15.55.05 for (version=TLSv1/SSLv3 cipher=OTHER); Mon, 15 Feb 2016 15:55:06 -0800 (PST) Subject: Re: Kernel panic zio.c, line: 270 FreeBSD 10.2 (or 10.3) To: freebsd-fs@freebsd.org References: <1061671455578760@web3g.yandex.ru> From: Steven Hartland Message-ID: <56C2655F.9010809@multiplay.co.uk> Date: Mon, 15 Feb 2016 23:55:11 +0000 User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:38.0) Gecko/20100101 Thunderbird/38.5.1 MIME-Version: 1.0 In-Reply-To: <1061671455578760@web3g.yandex.ru> Content-Type: text/plain; charset=koi8-r; format=flowed Content-Transfer-Encoding: 8bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 15 Feb 2016 23:55:10 -0000 That sounds like you have some pool data corruption, from the 10.3 version dump can you print out the following: 1. frame 8: bp and size 2. frame 6: buf->b_hdr On 15/02/2016 23:26, DemIS wrote: > Any one knows about problem? > Server: SuperMicro Model:SYS-6026T-6RF+, MB:X8DTU-6F+, RAM 24 DDR3, two XEON > RAM: KVR1333D3E9S/4G - DDR3, 1333MHz, ECC, CL9, X8, 1.5V, Unbuffered, DIMM > Version:uname -a > > FreeBSD teo.some.loc 10.2-RELEASE-p12 FreeBSD 10.2-RELEASE-p12 #0: Sat Feb 13 18:04:04 MSK 2016 demis@teo.some.loc:/usr/obj/usr/src/sys/TEO amd64 > (on GENERIC or custom kernel config persist too !!!) > > Memtest86+ v.4.40 (ECC mode) test - OK. 
> Every disk checked too (physically - mhdd, logically - zpool scrub, and additional checkit in external company recovery disk). No errors. > > Part of df -H > Filesystem Size Used Avail Capacity Mounted on > hdd/usr/wf 6,6T 4,1T 2,5T 62% /hdd/usr/wf > > zpool status hdd > pool: hdd > state: ONLINE > status: Some supported features are not enabled on the pool. The pool can > still be used, but some features are unavailable. > action: Enable all features using 'zpool upgrade'. Once this is done, > the pool may no longer be accessible by software that does not support > the features. See zpool-features(7) for details. > scan: scrub repaired 0 in 14h57m with 0 errors on Thu Feb 11 03:35:43 2016 > config: > > NAME STATE READ WRITE CKSUM > hdd ONLINE 0 0 0 > raidz2-0 ONLINE 0 0 0 > mfid1p1 ONLINE 0 0 0 > mfid2p1 ONLINE 0 0 0 > mfid3p1 ONLINE 0 0 0 > mfid4p1 ONLINE 0 0 0 > mfid5p1 ONLINE 0 0 0 > > errors: No known data errors > > hdd - is My zfs volume. > When I run command like: > rm /hdd/usr/some/path/to/file > or > rm /hdd/usr/some/path/to/folder > or > chown root:wheel /hdd/usr/some/path/to/file > or > chown root:wheel /hdd/usr/some/path/to/folder > or > setfacl ... to /hdd/usr/some/path/to/file > > I'm get kernel panic: > GNU gdb 6.1.1 [FreeBSD] > Copyright 2004 Free Software Foundation, Inc. > GDB is free software, covered by the GNU General Public License, and you are > welcome to change it and/or distribute copies of it under certain conditions. > Type "show copying" to see the conditions. > There is absolutely no warranty for GDB. Type "show warranty" for details. > This GDB was configured as "amd64-marcel-freebsd"... 
> > Unread portion of the kernel message buffer: > panic: solaris assert: c < (1ULL << 24) >> 9 (0x7fffffffffffff < 0x8000), file: /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c, line: 270 > cpuid = 9 > KDB: stack backtrace: > #0 0xffffffff80984ef0 at kdb_backtrace+0x60 > #1 0xffffffff80948aa6 at vpanic+0x126 > #2 0xffffffff80948973 at panic+0x43 > #3 0xffffffff81c0222f at assfail3+0x2f > #4 0xffffffff81aa9d40 at zio_buf_alloc+0x50 > #5 0xffffffff81a2b9f8 at arc_get_data_buf+0x358 > #6 0xffffffff81a2e20a at arc_read+0x1ea > #7 0xffffffff81a3669c at dbuf_read+0x6ac > #8 0xffffffff81a3d8bf at dmu_spill_hold_existing+0xbf > #9 0xffffffff81a70dd7 at sa_attr_op+0x167 > #10 0xffffffff81a72ffb at sa_lookup+0x4b > #11 0xffffffff81abc82a at zfs_rmnode+0x2ba > #12 0xffffffff81ada58e at zfs_freebsd_reclaim+0x4e > #13 0xffffffff80e73537 at VOP_RECLAIM_APV+0xa7 > #14 0xffffffff809ec5b4 at vgonel+0x1b4 > #15 0xffffffff809eca49 at vrecycle+0x59 > #16 0xffffffff81ada52d at zfs_freebsd_inactive+0xd > #17 0xffffffff80e73427 at VOP_INACTIVE_APV+0xa7 > Uptime: 9m31s > Dumping 1286 out of 24543 MB:..2%..12%..22%..32%..42%..51%..61%..71% (CTRL-C to abort) ..81%..91% > > Reading symbols from /boot/kernel/zfs.ko.symbols...done. > Loaded symbols for /boot/kernel/zfs.ko.symbols > Reading symbols from /boot/kernel/opensolaris.ko.symbols...done. > Loaded symbols for /boot/kernel/opensolaris.ko.symbols > Reading symbols from /boot/kernel/if_lagg.ko.symbols...done. > Loaded symbols for /boot/kernel/if_lagg.ko.symbols > Reading symbols from /boot/kernel/ums.ko.symbols...done. > Loaded symbols for /boot/kernel/ums.ko.symbols > Reading symbols from /boot/kernel/ipfw.ko.symbols...done. > Loaded symbols for /boot/kernel/ipfw.ko.symbols > #0 doadump (textdump=) at pcpu.h:219 > 219 pcpu.h: No such file or directory. 
> in pcpu.h > (kgdb) bt > #0 doadump (textdump=) at pcpu.h:219 > #1 0xffffffff80948702 in kern_reboot (howto=260) at /usr/src/sys/kern/kern_shutdown.c:451 > #2 0xffffffff80948ae5 in vpanic (fmt=, ap=) at /usr/src/sys/kern/kern_shutdown.c:758 > #3 0xffffffff80948973 in panic (fmt=0x0) at /usr/src/sys/kern/kern_shutdown.c:687 > #4 0xffffffff81c0222f in assfail3 (a=, lv=, op=, rv=, > f=, l=) at /usr/src/sys/modules/opensolaris/../../cddl/compat/opensolaris/kern/opensolaris_cmn_err.c:91 > #5 0xffffffff81aa9d40 in zio_buf_alloc (size=0) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:270 > #6 0xffffffff81a2b9f8 in arc_get_data_buf (buf=) > at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:2898 > #7 0xffffffff81a2e20a in arc_read (pio=0xfffff80011791730, spa=0xfffff80011579000, bp=0xfffffe000aee7980, done=0xffffffff81a3a2d0 , > private=0xfffff8002244b000, priority=ZIO_PRIORITY_SYNC_READ, zio_flags=-528866606, arc_flags=0xfffffe06727fb3c4, zb=0xffffffff81a3a2d0) > at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:1551 > #8 0xffffffff81a3669c in dbuf_read (db=0xfffff8002244b000, zio=0x0, flags=6) > at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dbuf.c:573 > #9 0xffffffff81a3d8bf in dmu_spill_hold_existing (bonus=0xfffff800223bed20, tag=0x0, dbp=0xfffff800919966b8) > at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu.c:333 > #10 0xffffffff81a70dd7 in sa_attr_op (hdl=0xfffff80091996690, bulk=0xfffffe06727fb528, count=1, data_op=SA_LOOKUP, tx=0x0) > at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/sa.c:310 > #11 0xffffffff81a72ffb in sa_lookup (hdl=0xfffff80091996690, attr=, buf=, buflen=) > at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/sa.c:1441 > #12 0xffffffff81abc82a in zfs_rmnode (zp=0xfffff80091993730) at 
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_dir.c:633 > #13 0xffffffff81ada58e in zfs_freebsd_reclaim (ap=) > at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c:6569 > #14 0xffffffff80e73537 in VOP_RECLAIM_APV (vop=, a=) at vnode_if.c:2019 > #15 0xffffffff809ec5b4 in vgonel (vp=0xfffff800111733b0) at vnode_if.h:830 > #16 0xffffffff809eca49 in vrecycle (vp=0xfffff800111733b0) at /usr/src/sys/kern/vfs_subr.c:2703 > #17 0xffffffff81ada52d in zfs_freebsd_inactive (ap=) > at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c:6540 > #18 0xffffffff80e73427 in VOP_INACTIVE_APV (vop=, a=) at vnode_if.c:1953 > #19 0xffffffff809eb382 in vinactive (vp=0xfffff800111733b0, td=0xfffff800113d8000) at vnode_if.h:807 > #20 0xffffffff809eb772 in vputx (vp=0xfffff800111733b0, func=2) at /usr/src/sys/kern/vfs_subr.c:2306 > #21 0xffffffff809f401e in kern_rmdirat (td=, fd=, path=, pathseg=) > at /usr/src/sys/kern/vfs_syscalls.c:3842 > #22 0xffffffff80d4b3e7 in amd64_syscall (td=0xfffff800113d8000, traced=0) at subr_syscall.c:134 > #23 0xffffffff80d30acb in Xfast_syscall () at /usr/src/sys/amd64/amd64/exception.S:396 > #24 0x00000008008914ea in ?? () > Previous frame inner to this frame (corrupt stack?) > Current language: auto; currently minimal > > If setup FreeBSD 10.3 BETA (on GENERIC or custom kernel config): > Copyright 2004 Free Software Foundation, Inc. > GDB is free software, covered by the GNU General Public License, and you are > welcome to change it and/or distribute copies of it under certain conditions. > Type "show copying" to see the conditions. > There is absolutely no warranty for GDB. Type "show warranty" for details. > This GDB was configured as "amd64-marcel-freebsd"... 
> > Unread portion of the kernel message buffer: > panic: solaris assert: c < (1ULL << 24) >> 9 (0x7fffffffffffff < 0x8000), file: /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c, line: 273 > cpuid = 13 > KDB: stack backtrace: > #0 0xffffffff8098f000 at kdb_backtrace+0x60 > #1 0xffffffff80951d06 at vpanic+0x126 > #2 0xffffffff80951bd3 at panic+0x43 > #3 0xffffffff81e0022f at assfail3+0x2f > #4 0xffffffff81cacc70 at zio_buf_alloc+0x50 > #5 0xffffffff81c2b8f2 at arc_get_data_buf+0x262 > #6 0xffffffff81c2b657 at arc_buf_alloc+0xc7 > #7 0xffffffff81c2d601 at arc_read+0x1c1 > #8 0xffffffff81c36ce9 at dbuf_read+0x6b9 > #9 0xffffffff81c3e415 at dmu_spill_hold_existing+0xc5 > #10 0xffffffff81c73707 at sa_attr_op+0x167 > #11 0xffffffff81c75972 at sa_lookup+0x52 > #12 0xffffffff81cbf8da at zfs_rmnode+0x2ba > #13 0xffffffff81cdd75e at zfs_freebsd_reclaim+0x4e > #14 0xffffffff80e81c27 at VOP_RECLAIM_APV+0xa7 > #15 0xffffffff809f9581 at vgonel+0x221 > #16 0xffffffff809f9a19 at vrecycle+0x59 > #17 0xffffffff81cdd6fd at zfs_freebsd_inactive+0xd > Uptime: 11m11s > Dumping 1368 out of 24542 MB:..2%..11%..22%..31%..41%..51%..61%..71%..81%..91% > > Reading symbols from /boot/kernel/if_lagg.ko.symbols...done. > Loaded symbols for /boot/kernel/if_lagg.ko.symbols > Reading symbols from /boot/kernel/aio.ko.symbols...done. > Loaded symbols for /boot/kernel/aio.ko.symbols > Reading symbols from /boot/kernel/ichsmb.ko.symbols...done. > Loaded symbols for /boot/kernel/ichsmb.ko.symbols > Reading symbols from /boot/kernel/smbus.ko.symbols...done. > Loaded symbols for /boot/kernel/smbus.ko.symbols > Reading symbols from /boot/kernel/ipmi.ko.symbols...done. > Loaded symbols for /boot/kernel/ipmi.ko.symbols > Reading symbols from /boot/kernel/zfs.ko.symbols...done. > Loaded symbols for /boot/kernel/zfs.ko.symbols > Reading symbols from /boot/kernel/opensolaris.ko.symbols...done. 
> Loaded symbols for /boot/kernel/opensolaris.ko.symbols > Reading symbols from /boot/kernel/ums.ko.symbols...done. > Loaded symbols for /boot/kernel/ums.ko.symbols > Reading symbols from /boot/kernel/ipfw.ko.symbols...done. > Loaded symbols for /boot/kernel/ipfw.ko.symbols > #0 doadump (textdump=) at pcpu.h:219 > 219 pcpu.h: No such file or directory. > in pcpu.h > (kgdb) backtrace > #0 doadump (textdump=) at pcpu.h:219 > #1 0xffffffff80951962 in kern_reboot (howto=260) at /usr/src/sys/kern/kern_shutdown.c:486 > #2 0xffffffff80951d45 in vpanic (fmt=, ap=) at /usr/src/sys/kern/kern_shutdown.c:889 > #3 0xffffffff80951bd3 in panic (fmt=0x0) at /usr/src/sys/kern/kern_shutdown.c:818 > #4 0xffffffff81e0022f in assfail3 (a=, lv=, op=, rv=, f=, > l=) at /usr/src/sys/modules/opensolaris/../../cddl/compat/opensolaris/kern/opensolaris_cmn_err.c:91 > #5 0xffffffff81cacc70 in zio_buf_alloc (size=0) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:273 > #6 0xffffffff81c2b8f2 in arc_get_data_buf (buf=) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:3880 > #7 0xffffffff81c2b657 in arc_buf_alloc (spa=, size=, tag=0x0, type=) > at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:2057 > #8 0xffffffff81c2d601 in arc_read (pio=0xfffff8000fad03b0, spa=0xfffff8000f63d000, bp=0xfffffe000e509980, done=0xffffffff81c3aed0 , private=0xfffff8000fdd6360, > priority=ZIO_PRIORITY_SYNC_READ, zio_flags=-2117882160, arc_flags=0xfffffe02925483c4, zb=0xfffff8000fdd6360) > at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:4397 > #9 0xffffffff81c36ce9 in dbuf_read (db=0xfffff8000fdd6360, zio=0x0, flags=) > at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dbuf.c:682 > #10 0xffffffff81c3e415 in dmu_spill_hold_existing (bonus=0xfffff8001f312438, tag=0x0, dbp=0xfffff80062d4e7d0) > at 
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu.c:333 > #11 0xffffffff81c73707 in sa_attr_op (hdl=0xfffff80062d4e770, bulk=0xfffffe0292548528, count=1, data_op=SA_LOOKUP, tx=0x0) > at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/sa.c:305 > #12 0xffffffff81c75972 in sa_lookup (hdl=0xfffff80062d4e770, attr=, buf=, buflen=) > at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/sa.c:1443 > #13 0xffffffff81cbf8da in zfs_rmnode (zp=0xfffff80062d4c8a0) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_dir.c:633 > #14 0xffffffff81cdd75e in zfs_freebsd_reclaim (ap=) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c:6619 > #15 0xffffffff80e81c27 in VOP_RECLAIM_APV (vop=, a=) at vnode_if.c:2019 > #16 0xffffffff809f9581 in vgonel (vp=0xfffff8000f1beb10) at vnode_if.h:830 > #17 0xffffffff809f9a19 in vrecycle (vp=0xfffff8000f1beb10) at /usr/src/sys/kern/vfs_subr.c:2951 > #18 0xffffffff81cdd6fd in zfs_freebsd_inactive (ap=) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c:6590 > #19 0xffffffff80e81b17 in VOP_INACTIVE_APV (vop=, a=) at vnode_if.c:1953 > #20 0xffffffff809f8322 in vinactive (vp=0xfffff8000f1beb10, td=0xfffff8000f9f34b0) at vnode_if.h:807 > #21 0xffffffff809f8712 in vputx (vp=0xfffff8000f1beb10, func=2) at /usr/src/sys/kern/vfs_subr.c:2547 > #22 0xffffffff80a0137e in kern_rmdirat (td=, fd=, path=, pathseg=) > at /usr/src/sys/kern/vfs_syscalls.c:3964 > #23 0xffffffff80d574bf in amd64_syscall (td=0xfffff8000f9f34b0, traced=0) at subr_syscall.c:141 > #24 0xffffffff80d3c72b in Xfast_syscall () at /usr/src/sys/amd64/amd64/exception.S:396 > #25 0x000000080089458a in ?? () > Previous frame inner to this frame (corrupt stack?) 
> Current language: auto; currently minimal > > The crashing folder (or file) has strange permissions: > d---------+ 3 anna domain users 3 10 10:32 01-Projcts > d---------+ 2 anna domain users 2 8 21:46 02-Text > > How can I fix this kernel panic? > > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > https://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@freebsd.org Tue Feb 16 00:46:52 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 80AB1AA8C2A for ; Tue, 16 Feb 2016 00:46:52 +0000 (UTC) (envelope-from demis@yandex.ru) Received: from forward18p.cmail.yandex.net (forward18p.cmail.yandex.net [IPv6:2a02:6b8:0:1465::ab]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "forwards.mail.yandex.net", Issuer "Yandex CA" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 23FC114F4 for ; Tue, 16 Feb 2016 00:46:51 +0000 (UTC) (envelope-from demis@yandex.ru) Received: from web2g.yandex.ru (web2g.yandex.ru [95.108.252.102]) by forward18p.cmail.yandex.net (Yandex) with ESMTP id 6D09321682; Tue, 16 Feb 2016 03:46:38 +0300 (MSK) Received: from web2g.yandex.ru (localhost [127.0.0.1]) by web2g.yandex.ru (Yandex) with ESMTP id D2E6F48621AD; Tue, 16 Feb 2016 03:46:37 +0300 (MSK) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yandex.ru; s=mail; t=1455583598; bh=0BpSXshlGZlnB3noVHXAsgFV8PT/f+lUdOrkKQVpLKg=; h=From:To:In-Reply-To:References:Subject:Date; b=n2qr3PH9W1X5ibkD0Dg5A0PSPVf7E92Evw7s1jfmesPoHigFeOys8Yd8BBXeii4Xe QFL7fGgicmPjvVGfY1DRGHcmxsFTO7y9nkBvYMFpbs8Jdtwb1rQC7tnIJO7FXcRVQV 5V6+Sg7NwCLu1/kl/efNw+K1IgNmMwZOtnUUxNEs= Received: by web2g.yandex.ru with HTTP; Tue, 16 Feb 2016 03:46:35 +0300 From: DemIS To: Steven Hartland , "freebsd-fs@freebsd.org" In-Reply-To: 
<56C2655F.9010809@multiplay.co.uk> References: <1061671455578760@web3g.yandex.ru> <56C2655F.9010809@multiplay.co.uk> Subject: Re: Kernel panic zio.c, line: 270 FreeBSD 10.2 (or 10.3) MIME-Version: 1.0 Message-Id: <1076701455583595@web2g.yandex.ru> X-Mailer: Yamail [ http://yandex.ru ] 5.0 Date: Tue, 16 Feb 2016 03:46:35 +0300 Content-Transfer-Encoding: 8bit Content-Type: text/plain; charset=koi8-r X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 16 Feb 2016 00:46:52 -0000 Today I rolled back to version 10.2. Users are working on the server, and reinstalling 10.3 takes me four hours, so it is only possible on weekends. But version 10.2 shows the same effect. 16.02.2016, 02:55, "Steven Hartland" : > That sounds like you have some pool data corruption, from the 10.3 > version dump can you print out the following: > 1. frame 8: bp and size > 2. frame 6: buf->b_hdr > > On 15/02/2016 23:26, DemIS wrote: >> Does anyone know about this problem? >> Server: SuperMicro Model:SYS-6026T-6RF+, MB:X8DTU-6F+, RAM 24 DDR3, two XEON >> RAM: KVR1333D3E9S/4G - DDR3, 1333MHz, ECC, CL9, X8, 1.5V, Unbuffered, DIMM >> Version: uname -a >> >> FreeBSD teo.some.loc 10.2-RELEASE-p12 FreeBSD 10.2-RELEASE-p12 #0: Sat Feb 13 18:04:04 MSK 2016 demis@teo.some.loc:/usr/obj/usr/src/sys/TEO amd64 >> (this persists on GENERIC and on a custom kernel config too!) >> >> Memtest86+ v.4.40 (ECC mode) test - OK. >> Every disk was checked as well (physically with mhdd, logically with zpool scrub, plus an additional check at an external disk-recovery company). No errors. >> >> Part of df -H >> Filesystem Size Used Avail Capacity Mounted on >> hdd/usr/wf 6,6T 4,1T 2,5T 62% /hdd/usr/wf >> >> zpool status hdd >> pool: hdd >> state: ONLINE >> status: Some supported features are not enabled on the pool. The pool can >> still be used, but some features are unavailable. 
>> action: Enable all features using 'zpool upgrade'. Once this is done, >> the pool may no longer be accessible by software that does not support >> the features. See zpool-features(7) for details. >> scan: scrub repaired 0 in 14h57m with 0 errors on Thu Feb 11 03:35:43 2016 >> config: >> >> NAME STATE READ WRITE CKSUM >> hdd ONLINE 0 0 0 >> raidz2-0 ONLINE 0 0 0 >> mfid1p1 ONLINE 0 0 0 >> mfid2p1 ONLINE 0 0 0 >> mfid3p1 ONLINE 0 0 0 >> mfid4p1 ONLINE 0 0 0 >> mfid5p1 ONLINE 0 0 0 >> >> errors: No known data errors >> >> hdd is my ZFS pool. >> When I run a command like: >> rm /hdd/usr/some/path/to/file >> or >> rm /hdd/usr/some/path/to/folder >> or >> chown root:wheel /hdd/usr/some/path/to/file >> or >> chown root:wheel /hdd/usr/some/path/to/folder >> or >> setfacl ... to /hdd/usr/some/path/to/file >> >> I get a kernel panic: >> GNU gdb 6.1.1 [FreeBSD] >> Copyright 2004 Free Software Foundation, Inc. >> GDB is free software, covered by the GNU General Public License, and you are >> welcome to change it and/or distribute copies of it under certain conditions. >> Type "show copying" to see the conditions. >> There is absolutely no warranty for GDB. Type "show warranty" for details. >> This GDB was configured as "amd64-marcel-freebsd"... 
>> >> Unread portion of the kernel message buffer: >> panic: solaris assert: c < (1ULL << 24) >> 9 (0x7fffffffffffff < 0x8000), file: /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c, line: 270 >> cpuid = 9 >> KDB: stack backtrace: >> #0 0xffffffff80984ef0 at kdb_backtrace+0x60 >> #1 0xffffffff80948aa6 at vpanic+0x126 >> #2 0xffffffff80948973 at panic+0x43 >> #3 0xffffffff81c0222f at assfail3+0x2f >> #4 0xffffffff81aa9d40 at zio_buf_alloc+0x50 >> #5 0xffffffff81a2b9f8 at arc_get_data_buf+0x358 >> #6 0xffffffff81a2e20a at arc_read+0x1ea >> #7 0xffffffff81a3669c at dbuf_read+0x6ac >> #8 0xffffffff81a3d8bf at dmu_spill_hold_existing+0xbf >> #9 0xffffffff81a70dd7 at sa_attr_op+0x167 >> #10 0xffffffff81a72ffb at sa_lookup+0x4b >> #11 0xffffffff81abc82a at zfs_rmnode+0x2ba >> #12 0xffffffff81ada58e at zfs_freebsd_reclaim+0x4e >> #13 0xffffffff80e73537 at VOP_RECLAIM_APV+0xa7 >> #14 0xffffffff809ec5b4 at vgonel+0x1b4 >> #15 0xffffffff809eca49 at vrecycle+0x59 >> #16 0xffffffff81ada52d at zfs_freebsd_inactive+0xd >> #17 0xffffffff80e73427 at VOP_INACTIVE_APV+0xa7 >> Uptime: 9m31s >> Dumping 1286 out of 24543 MB:..2%..12%..22%..32%..42%..51%..61%..71% (CTRL-C to abort) ..81%..91% >> >> Reading symbols from /boot/kernel/zfs.ko.symbols...done. >> Loaded symbols for /boot/kernel/zfs.ko.symbols >> Reading symbols from /boot/kernel/opensolaris.ko.symbols...done. >> Loaded symbols for /boot/kernel/opensolaris.ko.symbols >> Reading symbols from /boot/kernel/if_lagg.ko.symbols...done. >> Loaded symbols for /boot/kernel/if_lagg.ko.symbols >> Reading symbols from /boot/kernel/ums.ko.symbols...done. >> Loaded symbols for /boot/kernel/ums.ko.symbols >> Reading symbols from /boot/kernel/ipfw.ko.symbols...done. >> Loaded symbols for /boot/kernel/ipfw.ko.symbols >> #0 doadump (textdump=) at pcpu.h:219 >> 219 pcpu.h: No such file or directory. 
>> in pcpu.h >> (kgdb) bt >> #0 doadump (textdump=) at pcpu.h:219 >> #1 0xffffffff80948702 in kern_reboot (howto=260) at /usr/src/sys/kern/kern_shutdown.c:451 >> #2 0xffffffff80948ae5 in vpanic (fmt=, ap=) at /usr/src/sys/kern/kern_shutdown.c:758 >> #3 0xffffffff80948973 in panic (fmt=0x0) at /usr/src/sys/kern/kern_shutdown.c:687 >> #4 0xffffffff81c0222f in assfail3 (a=, lv=, op=, rv=, >> f=, l=) at /usr/src/sys/modules/opensolaris/../../cddl/compat/opensolaris/kern/opensolaris_cmn_err.c:91 >> #5 0xffffffff81aa9d40 in zio_buf_alloc (size=0) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:270 >> #6 0xffffffff81a2b9f8 in arc_get_data_buf (buf=) >> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:2898 >> #7 0xffffffff81a2e20a in arc_read (pio=0xfffff80011791730, spa=0xfffff80011579000, bp=0xfffffe000aee7980, done=0xffffffff81a3a2d0 , >> private=0xfffff8002244b000, priority=ZIO_PRIORITY_SYNC_READ, zio_flags=-528866606, arc_flags=0xfffffe06727fb3c4, zb=0xffffffff81a3a2d0) >> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:1551 >> #8 0xffffffff81a3669c in dbuf_read (db=0xfffff8002244b000, zio=0x0, flags=6) >> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dbuf.c:573 >> #9 0xffffffff81a3d8bf in dmu_spill_hold_existing (bonus=0xfffff800223bed20, tag=0x0, dbp=0xfffff800919966b8) >> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu.c:333 >> #10 0xffffffff81a70dd7 in sa_attr_op (hdl=0xfffff80091996690, bulk=0xfffffe06727fb528, count=1, data_op=SA_LOOKUP, tx=0x0) >> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/sa.c:310 >> #11 0xffffffff81a72ffb in sa_lookup (hdl=0xfffff80091996690, attr=, buf=, buflen=) >> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/sa.c:1441 >> #12 0xffffffff81abc82a in zfs_rmnode (zp=0xfffff80091993730) at 
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_dir.c:633 >> #13 0xffffffff81ada58e in zfs_freebsd_reclaim (ap=) >> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c:6569 >> #14 0xffffffff80e73537 in VOP_RECLAIM_APV (vop=, a=) at vnode_if.c:2019 >> #15 0xffffffff809ec5b4 in vgonel (vp=0xfffff800111733b0) at vnode_if.h:830 >> #16 0xffffffff809eca49 in vrecycle (vp=0xfffff800111733b0) at /usr/src/sys/kern/vfs_subr.c:2703 >> #17 0xffffffff81ada52d in zfs_freebsd_inactive (ap=) >> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c:6540 >> #18 0xffffffff80e73427 in VOP_INACTIVE_APV (vop=, a=) at vnode_if.c:1953 >> #19 0xffffffff809eb382 in vinactive (vp=0xfffff800111733b0, td=0xfffff800113d8000) at vnode_if.h:807 >> #20 0xffffffff809eb772 in vputx (vp=0xfffff800111733b0, func=2) at /usr/src/sys/kern/vfs_subr.c:2306 >> #21 0xffffffff809f401e in kern_rmdirat (td=, fd=, path=, pathseg=) >> at /usr/src/sys/kern/vfs_syscalls.c:3842 >> #22 0xffffffff80d4b3e7 in amd64_syscall (td=0xfffff800113d8000, traced=0) at subr_syscall.c:134 >> #23 0xffffffff80d30acb in Xfast_syscall () at /usr/src/sys/amd64/amd64/exception.S:396 >> #24 0x00000008008914ea in ?? () >> Previous frame inner to this frame (corrupt stack?) >> Current language: auto; currently minimal >> >> If setup FreeBSD 10.3 BETA (on GENERIC or custom kernel config): >> Copyright 2004 Free Software Foundation, Inc. >> GDB is free software, covered by the GNU General Public License, and you are >> welcome to change it and/or distribute copies of it under certain conditions. >> Type "show copying" to see the conditions. >> There is absolutely no warranty for GDB. Type "show warranty" for details. >> This GDB was configured as "amd64-marcel-freebsd"... 
>> >> Unread portion of the kernel message buffer: >> panic: solaris assert: c < (1ULL << 24) >> 9 (0x7fffffffffffff < 0x8000), file: /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c, line: 273 >> cpuid = 13 >> KDB: stack backtrace: >> #0 0xffffffff8098f000 at kdb_backtrace+0x60 >> #1 0xffffffff80951d06 at vpanic+0x126 >> #2 0xffffffff80951bd3 at panic+0x43 >> #3 0xffffffff81e0022f at assfail3+0x2f >> #4 0xffffffff81cacc70 at zio_buf_alloc+0x50 >> #5 0xffffffff81c2b8f2 at arc_get_data_buf+0x262 >> #6 0xffffffff81c2b657 at arc_buf_alloc+0xc7 >> #7 0xffffffff81c2d601 at arc_read+0x1c1 >> #8 0xffffffff81c36ce9 at dbuf_read+0x6b9 >> #9 0xffffffff81c3e415 at dmu_spill_hold_existing+0xc5 >> #10 0xffffffff81c73707 at sa_attr_op+0x167 >> #11 0xffffffff81c75972 at sa_lookup+0x52 >> #12 0xffffffff81cbf8da at zfs_rmnode+0x2ba >> #13 0xffffffff81cdd75e at zfs_freebsd_reclaim+0x4e >> #14 0xffffffff80e81c27 at VOP_RECLAIM_APV+0xa7 >> #15 0xffffffff809f9581 at vgonel+0x221 >> #16 0xffffffff809f9a19 at vrecycle+0x59 >> #17 0xffffffff81cdd6fd at zfs_freebsd_inactive+0xd >> Uptime: 11m11s >> Dumping 1368 out of 24542 MB:..2%..11%..22%..31%..41%..51%..61%..71%..81%..91% >> >> Reading symbols from /boot/kernel/if_lagg.ko.symbols...done. >> Loaded symbols for /boot/kernel/if_lagg.ko.symbols >> Reading symbols from /boot/kernel/aio.ko.symbols...done. >> Loaded symbols for /boot/kernel/aio.ko.symbols >> Reading symbols from /boot/kernel/ichsmb.ko.symbols...done. >> Loaded symbols for /boot/kernel/ichsmb.ko.symbols >> Reading symbols from /boot/kernel/smbus.ko.symbols...done. >> Loaded symbols for /boot/kernel/smbus.ko.symbols >> Reading symbols from /boot/kernel/ipmi.ko.symbols...done. >> Loaded symbols for /boot/kernel/ipmi.ko.symbols >> Reading symbols from /boot/kernel/zfs.ko.symbols...done. >> Loaded symbols for /boot/kernel/zfs.ko.symbols >> Reading symbols from /boot/kernel/opensolaris.ko.symbols...done. 
>> Loaded symbols for /boot/kernel/opensolaris.ko.symbols >> Reading symbols from /boot/kernel/ums.ko.symbols...done. >> Loaded symbols for /boot/kernel/ums.ko.symbols >> Reading symbols from /boot/kernel/ipfw.ko.symbols...done. >> Loaded symbols for /boot/kernel/ipfw.ko.symbols >> #0 doadump (textdump=) at pcpu.h:219 >> 219 pcpu.h: No such file or directory. >> in pcpu.h >> (kgdb) backtrace >> #0 doadump (textdump=) at pcpu.h:219 >> #1 0xffffffff80951962 in kern_reboot (howto=260) at /usr/src/sys/kern/kern_shutdown.c:486 >> #2 0xffffffff80951d45 in vpanic (fmt=, ap=) at /usr/src/sys/kern/kern_shutdown.c:889 >> #3 0xffffffff80951bd3 in panic (fmt=0x0) at /usr/src/sys/kern/kern_shutdown.c:818 >> #4 0xffffffff81e0022f in assfail3 (a=, lv=, op=, rv=, f=, >> l=) at /usr/src/sys/modules/opensolaris/../../cddl/compat/opensolaris/kern/opensolaris_cmn_err.c:91 >> #5 0xffffffff81cacc70 in zio_buf_alloc (size=0) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:273 >> #6 0xffffffff81c2b8f2 in arc_get_data_buf (buf=) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:3880 >> #7 0xffffffff81c2b657 in arc_buf_alloc (spa=, size=, tag=0x0, type=) >> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:2057 >> #8 0xffffffff81c2d601 in arc_read (pio=0xfffff8000fad03b0, spa=0xfffff8000f63d000, bp=0xfffffe000e509980, done=0xffffffff81c3aed0 , private=0xfffff8000fdd6360, >> priority=ZIO_PRIORITY_SYNC_READ, zio_flags=-2117882160, arc_flags=0xfffffe02925483c4, zb=0xfffff8000fdd6360) >> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:4397 >> #9 0xffffffff81c36ce9 in dbuf_read (db=0xfffff8000fdd6360, zio=0x0, flags=) >> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dbuf.c:682 >> #10 0xffffffff81c3e415 in dmu_spill_hold_existing (bonus=0xfffff8001f312438, tag=0x0, dbp=0xfffff80062d4e7d0) >> at 
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu.c:333 >> #11 0xffffffff81c73707 in sa_attr_op (hdl=0xfffff80062d4e770, bulk=0xfffffe0292548528, count=1, data_op=SA_LOOKUP, tx=0x0) >> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/sa.c:305 >> #12 0xffffffff81c75972 in sa_lookup (hdl=0xfffff80062d4e770, attr=, buf=, buflen=) >> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/sa.c:1443 >> #13 0xffffffff81cbf8da in zfs_rmnode (zp=0xfffff80062d4c8a0) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_dir.c:633 >> #14 0xffffffff81cdd75e in zfs_freebsd_reclaim (ap=) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c:6619 >> #15 0xffffffff80e81c27 in VOP_RECLAIM_APV (vop=, a=) at vnode_if.c:2019 >> #16 0xffffffff809f9581 in vgonel (vp=0xfffff8000f1beb10) at vnode_if.h:830 >> #17 0xffffffff809f9a19 in vrecycle (vp=0xfffff8000f1beb10) at /usr/src/sys/kern/vfs_subr.c:2951 >> #18 0xffffffff81cdd6fd in zfs_freebsd_inactive (ap=) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c:6590 >> #19 0xffffffff80e81b17 in VOP_INACTIVE_APV (vop=, a=) at vnode_if.c:1953 >> #20 0xffffffff809f8322 in vinactive (vp=0xfffff8000f1beb10, td=0xfffff8000f9f34b0) at vnode_if.h:807 >> #21 0xffffffff809f8712 in vputx (vp=0xfffff8000f1beb10, func=2) at /usr/src/sys/kern/vfs_subr.c:2547 >> #22 0xffffffff80a0137e in kern_rmdirat (td=, fd=, path=, pathseg=) >> at /usr/src/sys/kern/vfs_syscalls.c:3964 >> #23 0xffffffff80d574bf in amd64_syscall (td=0xfffff8000f9f34b0, traced=0) at subr_syscall.c:141 >> #24 0xffffffff80d3c72b in Xfast_syscall () at /usr/src/sys/amd64/amd64/exception.S:396 >> #25 0x000000080089458a in ?? () >> Previous frame inner to this frame (corrupt stack?) 
>> Current language: auto; currently minimal >> >> The crashing folder (or file) has strange permissions: >> d---------+ 3 anna domain users 3 10 10:32 01-Projcts >> d---------+ 2 anna domain users 2 8 21:46 02-Text >> >> How can I fix this kernel panic? >> >> _______________________________________________ >> freebsd-fs@freebsd.org mailing list >> https://lists.freebsd.org/mailman/listinfo/freebsd-fs >> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > https://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@freebsd.org Tue Feb 16 01:15:25 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id ABCC9AA9A16 for ; Tue, 16 Feb 2016 01:15:25 +0000 (UTC) (envelope-from killing@multiplay.co.uk) Received: from mail-wm0-x236.google.com (mail-wm0-x236.google.com [IPv6:2a00:1450:400c:c09::236]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 5675111D2 for ; Tue, 16 Feb 2016 01:15:25 +0000 (UTC) (envelope-from killing@multiplay.co.uk) Received: by mail-wm0-x236.google.com with SMTP id c200so137246408wme.0 for ; Mon, 15 Feb 2016 17:15:25 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=multiplay-co-uk.20150623.gappssmtp.com; s=20150623; h=subject:to:references:from:message-id:date:user-agent:mime-version :in-reply-to:content-type:content-transfer-encoding; bh=pcwqVKbKy9RyLiNijCsIXuNJ1UDokKKMsBehG/hbRo4=; b=xxm906IEKJpETpqCpn5v2wvIjd34w/WBCRskAH0rsnMN6rwxKhBTBAbKewJb0jT2js S1HkLRHLhfvtU/yxp1JFrMkDY0dtuJIjihMb0Oe6typ8inBB3sx/TQ36H9j0VZhddS8u 
m7ZIBx0LMLPkOL1iO4WK4CCO6jubsYnV/4TqZflYx60mJ1yNZ7rdy80lmfQ36uDdeiFD xk6Hoo5UVIBLd0/s54OrvbxiAJ5Y9cr5G+egKYIqZ9u7z0hTdZzzYWncdWjNmgTYBipk 3UdDRqTR28va3cQeNY3Vi2KB65LoNiQXPjrERmfuOo/xa4h5GPyPo9KVOOLOFxcZ+fLq iQxw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:subject:to:references:from:message-id:date :user-agent:mime-version:in-reply-to:content-type :content-transfer-encoding; bh=pcwqVKbKy9RyLiNijCsIXuNJ1UDokKKMsBehG/hbRo4=; b=U1Rdpq5vAwT8nEWv64u1ZqPiaRe04GaaVPN67qogPe+MZgQS0pjNDpzvDac4J7viNh Fp44SQF/kNzsokwR5Q+9b9YLvfFhUYa23l6uOuKEjnuVQhHSUbJG4TVRj7N9WrQKYz+s b96svl2cmsvgWHHkRx1qUOOyo2hdhyfEBv2Gp50BfsHmfhfsJvymWIAAzbm21qrz9iYX ANHiF5NY4e4u3TIc0tSPIHnvf3R36VF8k3DTTZw4tbh/3RxUgBE+LP9B4Hg9mmC3GGu9 nFCiVMvqRCjriIWohjQnQjxACSXRFJom3+MVTTRsHZk0Zf+r2cyfmqjRDd9UkxtatUIT SQsw== X-Gm-Message-State: AG10YOQk8VBiqzl8GHwC+rqObTjV6cyTTnuShohuEVTJJzncjmDBwr5/it2BQ+A1X7qHlp0X X-Received: by 10.194.246.35 with SMTP id xt3mr18644088wjc.57.1455585323223; Mon, 15 Feb 2016 17:15:23 -0800 (PST) Received: from [10.10.1.58] (liv3d.labs.multiplay.co.uk. 
[82.69.141.171]) by smtp.gmail.com with ESMTPSA id e19sm18090643wmd.1.2016.02.15.17.15.21 (version=TLSv1/SSLv3 cipher=OTHER); Mon, 15 Feb 2016 17:15:21 -0800 (PST) Subject: Re: Kernel panic zio.c, line: 270 FreeBSD 10.2 (or 10.3) To: DemIS , "freebsd-fs@freebsd.org" References: <1061671455578760@web3g.yandex.ru> <56C2655F.9010809@multiplay.co.uk> <1076701455583595@web2g.yandex.ru> From: Steven Hartland Message-ID: <56C2782E.2010404@multiplay.co.uk> Date: Tue, 16 Feb 2016 01:15:26 +0000 User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:38.0) Gecko/20100101 Thunderbird/38.5.1 MIME-Version: 1.0 In-Reply-To: <1076701455583595@web2g.yandex.ru> Content-Type: text/plain; charset=koi8-r; format=flowed Content-Transfer-Encoding: 8bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 16 Feb 2016 01:15:25 -0000 You don't need the system live; just the kernel and the crash dump are enough to get those values. On 16/02/2016 00:46, DemIS wrote: > Today I rolled back to version 10.2. Users are working on the server, and reinstalling 10.3 takes me four hours, so it is only possible on weekends. But version 10.2 shows the same effect. > > 16.02.2016, 02:55, "Steven Hartland" : >> That sounds like you have some pool data corruption, from the 10.3 >> version dump can you print out the following: >> 1. frame 8: bp and size >> 2. frame 6: buf->b_hdr >> >> On 15/02/2016 23:26, DemIS wrote: >>> Does anyone know about this problem? >>> Server: SuperMicro Model:SYS-6026T-6RF+, MB:X8DTU-6F+, RAM 24 DDR3, two XEON >>> RAM: KVR1333D3E9S/4G - DDR3, 1333MHz, ECC, CL9, X8, 1.5V, Unbuffered, DIMM >>> Version: uname -a >>> >>> FreeBSD teo.some.loc 10.2-RELEASE-p12 FreeBSD 10.2-RELEASE-p12 #0: Sat Feb 13 18:04:04 MSK 2016 demis@teo.some.loc:/usr/obj/usr/src/sys/TEO amd64 >>> (this persists on GENERIC and on a custom kernel config too!) 
>>> >>> Memtest86+ v.4.40 (ECC mode) test - OK. >>> Every disk was checked as well (physically with mhdd, logically with zpool scrub, plus an additional check at an external disk-recovery company). No errors. >>> >>> Part of df -H >>> Filesystem Size Used Avail Capacity Mounted on >>> hdd/usr/wf 6,6T 4,1T 2,5T 62% /hdd/usr/wf >>> >>> zpool status hdd >>> pool: hdd >>> state: ONLINE >>> status: Some supported features are not enabled on the pool. The pool can >>> still be used, but some features are unavailable. >>> action: Enable all features using 'zpool upgrade'. Once this is done, >>> the pool may no longer be accessible by software that does not support >>> the features. See zpool-features(7) for details. >>> scan: scrub repaired 0 in 14h57m with 0 errors on Thu Feb 11 03:35:43 2016 >>> config: >>> >>> NAME STATE READ WRITE CKSUM >>> hdd ONLINE 0 0 0 >>> raidz2-0 ONLINE 0 0 0 >>> mfid1p1 ONLINE 0 0 0 >>> mfid2p1 ONLINE 0 0 0 >>> mfid3p1 ONLINE 0 0 0 >>> mfid4p1 ONLINE 0 0 0 >>> mfid5p1 ONLINE 0 0 0 >>> >>> errors: No known data errors >>> >>> hdd is my ZFS pool. >>> When I run a command like: >>> rm /hdd/usr/some/path/to/file >>> or >>> rm /hdd/usr/some/path/to/folder >>> or >>> chown root:wheel /hdd/usr/some/path/to/file >>> or >>> chown root:wheel /hdd/usr/some/path/to/folder >>> or >>> setfacl ... to /hdd/usr/some/path/to/file >>> >>> I get a kernel panic: >>> GNU gdb 6.1.1 [FreeBSD] >>> Copyright 2004 Free Software Foundation, Inc. >>> GDB is free software, covered by the GNU General Public License, and you are >>> welcome to change it and/or distribute copies of it under certain conditions. >>> Type "show copying" to see the conditions. >>> There is absolutely no warranty for GDB. Type "show warranty" for details. >>> This GDB was configured as "amd64-marcel-freebsd"... 
>>> >>> Unread portion of the kernel message buffer: >>> panic: solaris assert: c < (1ULL << 24) >> 9 (0x7fffffffffffff < 0x8000), file: /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c, line: 270 >>> cpuid = 9 >>> KDB: stack backtrace: >>> #0 0xffffffff80984ef0 at kdb_backtrace+0x60 >>> #1 0xffffffff80948aa6 at vpanic+0x126 >>> #2 0xffffffff80948973 at panic+0x43 >>> #3 0xffffffff81c0222f at assfail3+0x2f >>> #4 0xffffffff81aa9d40 at zio_buf_alloc+0x50 >>> #5 0xffffffff81a2b9f8 at arc_get_data_buf+0x358 >>> #6 0xffffffff81a2e20a at arc_read+0x1ea >>> #7 0xffffffff81a3669c at dbuf_read+0x6ac >>> #8 0xffffffff81a3d8bf at dmu_spill_hold_existing+0xbf >>> #9 0xffffffff81a70dd7 at sa_attr_op+0x167 >>> #10 0xffffffff81a72ffb at sa_lookup+0x4b >>> #11 0xffffffff81abc82a at zfs_rmnode+0x2ba >>> #12 0xffffffff81ada58e at zfs_freebsd_reclaim+0x4e >>> #13 0xffffffff80e73537 at VOP_RECLAIM_APV+0xa7 >>> #14 0xffffffff809ec5b4 at vgonel+0x1b4 >>> #15 0xffffffff809eca49 at vrecycle+0x59 >>> #16 0xffffffff81ada52d at zfs_freebsd_inactive+0xd >>> #17 0xffffffff80e73427 at VOP_INACTIVE_APV+0xa7 >>> Uptime: 9m31s >>> Dumping 1286 out of 24543 MB:..2%..12%..22%..32%..42%..51%..61%..71% (CTRL-C to abort) ..81%..91% >>> >>> Reading symbols from /boot/kernel/zfs.ko.symbols...done. >>> Loaded symbols for /boot/kernel/zfs.ko.symbols >>> Reading symbols from /boot/kernel/opensolaris.ko.symbols...done. >>> Loaded symbols for /boot/kernel/opensolaris.ko.symbols >>> Reading symbols from /boot/kernel/if_lagg.ko.symbols...done. >>> Loaded symbols for /boot/kernel/if_lagg.ko.symbols >>> Reading symbols from /boot/kernel/ums.ko.symbols...done. >>> Loaded symbols for /boot/kernel/ums.ko.symbols >>> Reading symbols from /boot/kernel/ipfw.ko.symbols...done. >>> Loaded symbols for /boot/kernel/ipfw.ko.symbols >>> #0 doadump (textdump=) at pcpu.h:219 >>> 219 pcpu.h: No such file or directory. 
>>> in pcpu.h >>> (kgdb) bt >>> #0 doadump (textdump=) at pcpu.h:219 >>> #1 0xffffffff80948702 in kern_reboot (howto=260) at /usr/src/sys/kern/kern_shutdown.c:451 >>> #2 0xffffffff80948ae5 in vpanic (fmt=, ap=) at /usr/src/sys/kern/kern_shutdown.c:758 >>> #3 0xffffffff80948973 in panic (fmt=0x0) at /usr/src/sys/kern/kern_shutdown.c:687 >>> #4 0xffffffff81c0222f in assfail3 (a=, lv=, op=, rv=, >>> f=, l=) at /usr/src/sys/modules/opensolaris/../../cddl/compat/opensolaris/kern/opensolaris_cmn_err.c:91 >>> #5 0xffffffff81aa9d40 in zio_buf_alloc (size=0) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:270 >>> #6 0xffffffff81a2b9f8 in arc_get_data_buf (buf=) >>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:2898 >>> #7 0xffffffff81a2e20a in arc_read (pio=0xfffff80011791730, spa=0xfffff80011579000, bp=0xfffffe000aee7980, done=0xffffffff81a3a2d0 , >>> private=0xfffff8002244b000, priority=ZIO_PRIORITY_SYNC_READ, zio_flags=-528866606, arc_flags=0xfffffe06727fb3c4, zb=0xffffffff81a3a2d0) >>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:1551 >>> #8 0xffffffff81a3669c in dbuf_read (db=0xfffff8002244b000, zio=0x0, flags=6) >>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dbuf.c:573 >>> #9 0xffffffff81a3d8bf in dmu_spill_hold_existing (bonus=0xfffff800223bed20, tag=0x0, dbp=0xfffff800919966b8) >>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu.c:333 >>> #10 0xffffffff81a70dd7 in sa_attr_op (hdl=0xfffff80091996690, bulk=0xfffffe06727fb528, count=1, data_op=SA_LOOKUP, tx=0x0) >>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/sa.c:310 >>> #11 0xffffffff81a72ffb in sa_lookup (hdl=0xfffff80091996690, attr=, buf=, buflen=) >>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/sa.c:1441 >>> #12 0xffffffff81abc82a in zfs_rmnode (zp=0xfffff80091993730) at 
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_dir.c:633 >>> #13 0xffffffff81ada58e in zfs_freebsd_reclaim (ap=) >>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c:6569 >>> #14 0xffffffff80e73537 in VOP_RECLAIM_APV (vop=, a=) at vnode_if.c:2019 >>> #15 0xffffffff809ec5b4 in vgonel (vp=0xfffff800111733b0) at vnode_if.h:830 >>> #16 0xffffffff809eca49 in vrecycle (vp=0xfffff800111733b0) at /usr/src/sys/kern/vfs_subr.c:2703 >>> #17 0xffffffff81ada52d in zfs_freebsd_inactive (ap=) >>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c:6540 >>> #18 0xffffffff80e73427 in VOP_INACTIVE_APV (vop=, a=) at vnode_if.c:1953 >>> #19 0xffffffff809eb382 in vinactive (vp=0xfffff800111733b0, td=0xfffff800113d8000) at vnode_if.h:807 >>> #20 0xffffffff809eb772 in vputx (vp=0xfffff800111733b0, func=2) at /usr/src/sys/kern/vfs_subr.c:2306 >>> #21 0xffffffff809f401e in kern_rmdirat (td=, fd=, path=, pathseg=) >>> at /usr/src/sys/kern/vfs_syscalls.c:3842 >>> #22 0xffffffff80d4b3e7 in amd64_syscall (td=0xfffff800113d8000, traced=0) at subr_syscall.c:134 >>> #23 0xffffffff80d30acb in Xfast_syscall () at /usr/src/sys/amd64/amd64/exception.S:396 >>> #24 0x00000008008914ea in ?? () >>> Previous frame inner to this frame (corrupt stack?) >>> Current language: auto; currently minimal >>> >>> If setup FreeBSD 10.3 BETA (on GENERIC or custom kernel config): >>> Copyright 2004 Free Software Foundation, Inc. >>> GDB is free software, covered by the GNU General Public License, and you are >>> welcome to change it and/or distribute copies of it under certain conditions. >>> Type "show copying" to see the conditions. >>> There is absolutely no warranty for GDB. Type "show warranty" for details. >>> This GDB was configured as "amd64-marcel-freebsd"... 
>>> >>> Unread portion of the kernel message buffer: >>> panic: solaris assert: c < (1ULL << 24) >> 9 (0x7fffffffffffff < 0x8000), file: /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c, line: 273 >>> cpuid = 13 >>> KDB: stack backtrace: >>> #0 0xffffffff8098f000 at kdb_backtrace+0x60 >>> #1 0xffffffff80951d06 at vpanic+0x126 >>> #2 0xffffffff80951bd3 at panic+0x43 >>> #3 0xffffffff81e0022f at assfail3+0x2f >>> #4 0xffffffff81cacc70 at zio_buf_alloc+0x50 >>> #5 0xffffffff81c2b8f2 at arc_get_data_buf+0x262 >>> #6 0xffffffff81c2b657 at arc_buf_alloc+0xc7 >>> #7 0xffffffff81c2d601 at arc_read+0x1c1 >>> #8 0xffffffff81c36ce9 at dbuf_read+0x6b9 >>> #9 0xffffffff81c3e415 at dmu_spill_hold_existing+0xc5 >>> #10 0xffffffff81c73707 at sa_attr_op+0x167 >>> #11 0xffffffff81c75972 at sa_lookup+0x52 >>> #12 0xffffffff81cbf8da at zfs_rmnode+0x2ba >>> #13 0xffffffff81cdd75e at zfs_freebsd_reclaim+0x4e >>> #14 0xffffffff80e81c27 at VOP_RECLAIM_APV+0xa7 >>> #15 0xffffffff809f9581 at vgonel+0x221 >>> #16 0xffffffff809f9a19 at vrecycle+0x59 >>> #17 0xffffffff81cdd6fd at zfs_freebsd_inactive+0xd >>> Uptime: 11m11s >>> Dumping 1368 out of 24542 MB:..2%..11%..22%..31%..41%..51%..61%..71%..81%..91% >>> >>> Reading symbols from /boot/kernel/if_lagg.ko.symbols...done. >>> Loaded symbols for /boot/kernel/if_lagg.ko.symbols >>> Reading symbols from /boot/kernel/aio.ko.symbols...done. >>> Loaded symbols for /boot/kernel/aio.ko.symbols >>> Reading symbols from /boot/kernel/ichsmb.ko.symbols...done. >>> Loaded symbols for /boot/kernel/ichsmb.ko.symbols >>> Reading symbols from /boot/kernel/smbus.ko.symbols...done. >>> Loaded symbols for /boot/kernel/smbus.ko.symbols >>> Reading symbols from /boot/kernel/ipmi.ko.symbols...done. >>> Loaded symbols for /boot/kernel/ipmi.ko.symbols >>> Reading symbols from /boot/kernel/zfs.ko.symbols...done. >>> Loaded symbols for /boot/kernel/zfs.ko.symbols >>> Reading symbols from /boot/kernel/opensolaris.ko.symbols...done. 
>>> Loaded symbols for /boot/kernel/opensolaris.ko.symbols >>> Reading symbols from /boot/kernel/ums.ko.symbols...done. >>> Loaded symbols for /boot/kernel/ums.ko.symbols >>> Reading symbols from /boot/kernel/ipfw.ko.symbols...done. >>> Loaded symbols for /boot/kernel/ipfw.ko.symbols >>> #0 doadump (textdump=) at pcpu.h:219 >>> 219 pcpu.h: No such file or directory. >>> in pcpu.h >>> (kgdb) backtrace >>> #0 doadump (textdump=) at pcpu.h:219 >>> #1 0xffffffff80951962 in kern_reboot (howto=260) at /usr/src/sys/kern/kern_shutdown.c:486 >>> #2 0xffffffff80951d45 in vpanic (fmt=, ap=) at /usr/src/sys/kern/kern_shutdown.c:889 >>> #3 0xffffffff80951bd3 in panic (fmt=0x0) at /usr/src/sys/kern/kern_shutdown.c:818 >>> #4 0xffffffff81e0022f in assfail3 (a=, lv=, op=, rv=, f=, >>> l=) at /usr/src/sys/modules/opensolaris/../../cddl/compat/opensolaris/kern/opensolaris_cmn_err.c:91 >>> #5 0xffffffff81cacc70 in zio_buf_alloc (size=0) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:273 >>> #6 0xffffffff81c2b8f2 in arc_get_data_buf (buf=) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:3880 >>> #7 0xffffffff81c2b657 in arc_buf_alloc (spa=, size=, tag=0x0, type=) >>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:2057 >>> #8 0xffffffff81c2d601 in arc_read (pio=0xfffff8000fad03b0, spa=0xfffff8000f63d000, bp=0xfffffe000e509980, done=0xffffffff81c3aed0 , private=0xfffff8000fdd6360, >>> priority=ZIO_PRIORITY_SYNC_READ, zio_flags=-2117882160, arc_flags=0xfffffe02925483c4, zb=0xfffff8000fdd6360) >>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:4397 >>> #9 0xffffffff81c36ce9 in dbuf_read (db=0xfffff8000fdd6360, zio=0x0, flags=) >>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dbuf.c:682 >>> #10 0xffffffff81c3e415 in dmu_spill_hold_existing (bonus=0xfffff8001f312438, tag=0x0, dbp=0xfffff80062d4e7d0) >>> at 
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu.c:333 >>> #11 0xffffffff81c73707 in sa_attr_op (hdl=0xfffff80062d4e770, bulk=0xfffffe0292548528, count=1, data_op=SA_LOOKUP, tx=0x0) >>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/sa.c:305 >>> #12 0xffffffff81c75972 in sa_lookup (hdl=0xfffff80062d4e770, attr=, buf=, buflen=) >>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/sa.c:1443 >>> #13 0xffffffff81cbf8da in zfs_rmnode (zp=0xfffff80062d4c8a0) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_dir.c:633 >>> #14 0xffffffff81cdd75e in zfs_freebsd_reclaim (ap=) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c:6619 >>> #15 0xffffffff80e81c27 in VOP_RECLAIM_APV (vop=, a=) at vnode_if.c:2019 >>> #16 0xffffffff809f9581 in vgonel (vp=0xfffff8000f1beb10) at vnode_if.h:830 >>> #17 0xffffffff809f9a19 in vrecycle (vp=0xfffff8000f1beb10) at /usr/src/sys/kern/vfs_subr.c:2951 >>> #18 0xffffffff81cdd6fd in zfs_freebsd_inactive (ap=) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c:6590 >>> #19 0xffffffff80e81b17 in VOP_INACTIVE_APV (vop=, a=) at vnode_if.c:1953 >>> #20 0xffffffff809f8322 in vinactive (vp=0xfffff8000f1beb10, td=0xfffff8000f9f34b0) at vnode_if.h:807 >>> #21 0xffffffff809f8712 in vputx (vp=0xfffff8000f1beb10, func=2) at /usr/src/sys/kern/vfs_subr.c:2547 >>> #22 0xffffffff80a0137e in kern_rmdirat (td=, fd=, path=, pathseg=) >>> at /usr/src/sys/kern/vfs_syscalls.c:3964 >>> #23 0xffffffff80d574bf in amd64_syscall (td=0xfffff8000f9f34b0, traced=0) at subr_syscall.c:141 >>> #24 0xffffffff80d3c72b in Xfast_syscall () at /usr/src/sys/amd64/amd64/exception.S:396 >>> #25 0x000000080089458a in ?? () >>> Previous frame inner to this frame (corrupt stack?) 
>>> Current language: auto; currently minimal
>>>
>>> The crash leaves folders (and files) with strange permissions:
>>> d---------+ 3 anna domain users 3 10 10:32 01-Projcts
>>> d---------+ 2 anna domain users 2 8 21:46 02-Text
>>>
>>> How can I fix this kernel panic?
>>>
>>> _______________________________________________
>>> freebsd-fs@freebsd.org mailing list
>>> https://lists.freebsd.org/mailman/listinfo/freebsd-fs
>>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
>> _______________________________________________
>> freebsd-fs@freebsd.org mailing list
>> https://lists.freebsd.org/mailman/listinfo/freebsd-fs
>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"

From owner-freebsd-fs@freebsd.org Tue Feb 16 01:30:22 2016
Return-Path: 
Delivered-To: freebsd-fs@mailman.ysv.freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 77569AAA119 for ; Tue, 16 Feb 2016 01:30:22 +0000 (UTC) (envelope-from paul@kraus-haus.org)
Received: from mail-yk0-x235.google.com (mail-yk0-x235.google.com [IPv6:2607:f8b0:4002:c07::235]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 3911418FC for ; Tue, 16 Feb 2016 01:30:21 +0000 (UTC) (envelope-from paul@kraus-haus.org)
Received: by mail-yk0-x235.google.com with SMTP id u9so67329643ykd.1 for ; Mon, 15 Feb 2016 17:30:21 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=kraus-haus-org.20150623.gappssmtp.com; s=20150623; h=subject:mime-version:content-type:from:in-reply-to:date:cc :content-transfer-encoding:message-id:references:to; bh=Nzq2VGaEGyfOuMFpQcAugE/Os7Yaz7VQxq18tzvTk8I=; b=Jkm2nsOT/HGlTC1e6t0ZBpXNvNwkNRFiEh3YhcnKPSohw/tMiA7458Ke6mFBAaei+6 gTTq7BgHHYhgHx+SckNTEUnTT73EEaoCXGXkkVzaKPeMoqIbLzORNZw/UBC3GEJskTgq
mGtvb3VZBSzcjw2m0nbjqTd2Dqhs1vj7lz/3FVagcF1U1aJ7+qtA7tNAFXeS+4rnefMV Fuj1j3ZNGquKVRZ+RevmJs+zYRZ30kOzvR4fU1kx0d+w27YIsXfm6m3V0HitnZHG8ig4 PWXvlrkHt4zXEi3AIn3sB6VaIr9/RJjP7yUXieYBZyeCzMUEXzee8apKosIVffTWLCp8 iOZQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:subject:mime-version:content-type:from :in-reply-to:date:cc:content-transfer-encoding:message-id:references :to; bh=Nzq2VGaEGyfOuMFpQcAugE/Os7Yaz7VQxq18tzvTk8I=; b=NDYpnK743MHPqm6J1W8k7pny6c0sGL14J7C0A8DYSdgGA4dED/kjbajqqjfwnrgbZX igeuybAOzSFyFI1KBnTLAFWHiDEQxAJlyl50vvXGejbWzs/iBoRomnpcTRacue+IWHSj 5gmWEUfQdj1j2Q2pDUKZ6mEppPJyXIiP6i7G5/3n1pVQT2KIaD/oXqXyekb/GT2izZdG 4JR0S8EvF/qEL5F0VW6/VscasBj02Es/0myukneweai4w3gG2JXTNrcrk1P+caHyFQYv mIZeZmZ3ysDe5jTzWDvGt0ExxseOoVMz7ZO8JizZQsFThvtmmzeyW+HQwdUoGuhWnerf zS2Q== X-Gm-Message-State: AG10YORAVamh6bnOxHaOpOXL0bQrnO+sqlpiSApR2FyJmDQ55Fw9KTjR42KhZHJ9ky3/BA== X-Received: by 10.37.59.72 with SMTP id i69mr10796743yba.30.1455586220934; Mon, 15 Feb 2016 17:30:20 -0800 (PST) Received: from [192.168.2.138] (pool-100-4-209-221.albyny.fios.verizon.net. [100.4.209.221]) by smtp.gmail.com with ESMTPSA id h200sm22659810ywc.44.2016.02.15.17.30.18 (version=TLS1 cipher=ECDHE-RSA-AES128-SHA bits=128/128); Mon, 15 Feb 2016 17:30:18 -0800 (PST) Subject: Re: Hours of tiny transfers at the end of a ZFS resilver? 
Mime-Version: 1.0 (Mac OS X Mail 7.3 \(1878.6\))
Content-Type: text/plain; charset=us-ascii
From: Paul Kraus
In-Reply-To: <20160215213506.GB28757@in-addr.com>
Date: Mon, 15 Feb 2016 20:30:10 -0500
Cc: freebsd-fs@freebsd.org
Content-Transfer-Encoding: quoted-printable
Message-Id: 
References: <8E04E52A-2635-4253-8140-F69495D7D0A6@panasas.com> <56C23E5B.7060207@multiplay.co.uk> <20160215213506.GB28757@in-addr.com>
To: Gary Palmer
X-Mailer: Apple Mail (2.1878.6)
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.20
Precedence: list
List-Id: Filesystems
List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: ,
X-List-Received-Date: Tue, 16 Feb 2016 01:30:22 -0000

On Feb 15, 2016, at 16:35, Gary Palmer wrote:

> It should be noted that ZFS can do the right thing only at pool creation
> time. Once the pool has been created the sector size of the underlying
> disks is baked in and can only be changed by creating a new pool on
> the advanced format disks (or forcing the larger ashift value when
> you initially create the pool, even if the disks are really 512 byte
> sector drives)

Is it baked in at the pool layer or the vdev layer? I thought the ashift was set on a vdev by vdev basis.
-- Paul Kraus paul@kraus-haus.org From owner-freebsd-fs@freebsd.org Tue Feb 16 02:28:09 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 9C8CBAA91E8 for ; Tue, 16 Feb 2016 02:28:09 +0000 (UTC) (envelope-from fjwcash@gmail.com) Received: from mail-ig0-x22a.google.com (mail-ig0-x22a.google.com [IPv6:2607:f8b0:4001:c05::22a]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 66C1C1638; Tue, 16 Feb 2016 02:28:09 +0000 (UTC) (envelope-from fjwcash@gmail.com) Received: by mail-ig0-x22a.google.com with SMTP id y8so87445620igp.0; Mon, 15 Feb 2016 18:28:09 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=g/qopAxE1ZLZ4EndWxLaEAeSyFZb7H8ALGq0j+sS3us=; b=wBLs2tHEEunpuLHmDoXQqXCaRaCJxBdO2PTa62dhXCSBEovMEPw2ZyMiZwtGvw93Vq 0q2u4SS8bkYTlUGNTOE2uYJBfSQ81rmWLIgfS+sN/hHo3lhNiV1q+J6bYqhAg+5UIB5m 67y6QgwwwhwHZZpT35lA9qdBx3sv8xpwbBFCLmaq8iv4tO2jyfbwLouVsfjB0iKOF18S Wwt4/WEhT93UdY1yKkXdRFcYNirqZeGAV/SfBRNiIqb9DbqJsvB9tvImzo3ULPLPSl2B E9NFhNzK9beiZ2Wv1RUYScvMQTT5iz7nkvw+lzAgV8kPFMrPOD84uOzhpXqU8wuPlFac kbOQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:mime-version:in-reply-to:references:date :message-id:subject:from:to:cc:content-type; bh=g/qopAxE1ZLZ4EndWxLaEAeSyFZb7H8ALGq0j+sS3us=; b=MLp2sl7o3cG+A1b4jyaovc7XBuW+xJl40paj23nZhSGFnTCr9eIsLuQ3pGC3N+aORs tr0nAHZt2PX8GZuOQILF2QUuRFI8GsiaYQj/0FD22UFLLydtEXtnZO1XQSzUhgYbRFKv ZXKKYSW1SXodEdw2SQinb3O8nGk2pYwrMYF/GZgqLk8eQd+UbR/9XOPcaGqlNSJeAjOE G4QHYULcZRVNnhEQgVJVeUXzmTihqo8YmY+TqyHXap1xryZTRDxAUVSk+0zfU++0S1OY 
65/9cFxRMez6uo0ihXM44MiB8MwOpLz4HxChj7IonI1NGiWgHQAH8kUZoiiONPNOT3bN GJMA==
X-Gm-Message-State: AG10YOS4qLR/9cGMqYgaYsoWSMkwC9yVeruIqX4bLmZcBdeY3iPwcV+Zb21lYIkxgw5z4thj9u6vcjVWqauIcQ==
MIME-Version: 1.0
X-Received: by 10.50.142.42 with SMTP id rt10mr16288450igb.14.1455589688798; Mon, 15 Feb 2016 18:28:08 -0800 (PST)
Received: by 10.107.140.130 with HTTP; Mon, 15 Feb 2016 18:28:08 -0800 (PST)
Received: by 10.107.140.130 with HTTP; Mon, 15 Feb 2016 18:28:08 -0800 (PST)
In-Reply-To: 
References: <8E04E52A-2635-4253-8140-F69495D7D0A6@panasas.com> <56C23E5B.7060207@multiplay.co.uk> <20160215213506.GB28757@in-addr.com>
Date: Mon, 15 Feb 2016 18:28:08 -0800
Message-ID: 
Subject: Re: Hours of tiny transfers at the end of a ZFS resilver?
From: Freddie Cash
To: Paul Kraus
Cc: Gary Palmer , FreeBSD Filesystems
Content-Type: text/plain; charset=UTF-8
X-Content-Filtered-By: Mailman/MimeDel 2.1.20
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.20
Precedence: list
List-Id: Filesystems
List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: ,
X-List-Received-Date: Tue, 16 Feb 2016 02:28:09 -0000

On Feb 15, 2016 5:30 PM, "Paul Kraus" wrote:
>
> On Feb 15, 2016, at 16:35, Gary Palmer wrote:
>
> > It should be noted that ZFS can do the right thing only at pool creation
> > time. Once the pool has been created the sector size of the underlying
> > disks is baked in and can only be changed by creating a new pool on
> > the advanced format disks (or forcing the larger ashift value when
> > you initially create the pool, even if the disks are really 512 byte
> > sector drives)
>
> Is it baked in at the pool layer or the vdev layer ? I thought the ashift was set on a vdev by vdev basis.

The ashift property is set per vdev, and is set when the vdev is created. You can have multiple different ashift values in a single pool, although it may be detrimental to performance.
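To make the per-vdev behavior above concrete, here is a small sketch of what ashift implies for allocation (illustrative Python, not ZFS code; the `allocated` helper is made up):

```python
# Illustrative only: ashift is log2 of the smallest allocation a vdev
# makes, fixed when the vdev is created. 'allocated' is a made-up helper.
def allocated(ashift: int, write_bytes: int) -> int:
    """Bytes a vdev actually allocates for a write, rounded up to 2^ashift."""
    unit = 1 << ashift
    return -(-write_bytes // unit) * unit  # ceiling division

assert allocated(9, 512) == 512                 # ashift=9 (512n): no overhead
assert allocated(12, 512) == 4096               # ashift=12 forced: 8x inflation for tiny blocks
assert allocated(12, 128 * 1024) == 128 * 1024  # large blocks: no penalty
```

This is also why mixing ashift values across vdevs in one pool is possible but can skew performance: writes land on vdevs with different allocation granularity.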
You can see the ashift value via "zdb poolname | grep ashift" Cheers, Freddie From owner-freebsd-fs@freebsd.org Tue Feb 16 03:31:36 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 2EC7CAAAC04 for ; Tue, 16 Feb 2016 03:31:36 +0000 (UTC) (envelope-from rpokala@mac.com) Received: from mr11p00im-asmtp001.me.com (mr11p00im-asmtp001.me.com [17.110.69.252]) (using TLSv1.2 with cipher DHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 1E471632 for ; Tue, 16 Feb 2016 03:31:36 +0000 (UTC) (envelope-from rpokala@mac.com) Received: from [172.20.10.3] (unknown [172.56.39.94]) by mr11p00im-asmtp001.me.com (Oracle Communications Messaging Server 7.0.5.36.0 64bit (built Sep 8 2015)) with ESMTPSA id <0O2M0082RF4MA030@mr11p00im-asmtp001.me.com> for freebsd-fs@freebsd.org; Tue, 16 Feb 2016 03:31:35 +0000 (GMT) X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10432:,, definitions=2016-02-16_03:,, signatures=0 X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 spamscore=0 clxscore=1015 suspectscore=0 malwarescore=0 phishscore=0 adultscore=0 bulkscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.0.1-1510270003 definitions=main-1602160063 User-Agent: Microsoft-MacOutlook/0.0.0.160109 Date: Mon, 15 Feb 2016 19:31:32 -0800 Subject: Re: Hours of tiny transfers at the end of a ZFS resilver? From: Ravi Pokala Sender: "Pokala, Ravi" To: "freebsd-fs@freebsd.org" Message-id: <871E2D0C-C131-407E-A982-6AFE896901F6@panasas.com> Thread-topic: Hours of tiny transfers at the end of a ZFS resilver? 
MIME-version: 1.0
Content-type: text/plain; charset=UTF-8
Content-transfer-encoding: 7bit
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.20
Precedence: list
List-Id: Filesystems
List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: ,
X-List-Received-Date: Tue, 16 Feb 2016 03:31:36 -0000

>Date: Mon, 15 Feb 2016 21:08:43 +0000
>From: Steven Hartland
>To: freebsd-fs@freebsd.org
>Subject: Re: Hours of tiny transfers at the end of a ZFS resilver?
>Message-ID: <56C23E5B.7060207@multiplay.co.uk>
>Content-Type: text/plain; charset=windows-1252; format=flowed

Hi Steve,

>>[*] This is probably a good segue into discussing why we even have the ADA_Q_4K quirk, and whether we should get rid of it...? --rp

>The 4k quirk exists because a large number of devices don't report 4k correctly, instead just reporting 512 for both logical and physical even when their physical sector size is actually 4k or larger.

If true, that's a gross violation of ATA, and I would consider that a disqualifying firmware bug. After over a decade at a storage vendor, I've seen some *really* stupid firmware issues, but lying about the sector size would be a new low. :-(

Are we sure that they are really, truly claiming to be 512n rather than AF 512e, and that we aren't simply mis-parsing the sector sizes due to the aforementioned kernel bugs?

If someone running -CURRENT has a drive which has the ADA_Q_4K quirk, could you paste the output of `geom disk list $DRIVE'?
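For reference, the logical/physical sector relationship a drive is supposed to advertise lives in ATA IDENTIFY DEVICE word 106. A rough sketch of the decoding, with the field layout taken from the ACS specification (the function and the sample word values are illustrative, not FreeBSD code):

```python
def sectors_per_physical(word106: int) -> int:
    """Decode ATA IDENTIFY DEVICE word 106 (field layout per the ACS spec).

    Returns logical sectors per physical sector, or 1 when the field is
    unset/invalid -- i.e. the drive is claiming plain 512n.
    """
    if (word106 >> 14) & 0x3 != 0b01:   # valid only if bit 15 = 0 and bit 14 = 1
        return 1
    if not (word106 & (1 << 13)):       # bit 13: multiple logical per physical
        return 1
    return 1 << (word106 & 0xF)         # bits 3:0: log2 of the ratio

# A well-behaved AF 512e drive (4K physical, 512B logical) sets bits 14
# and 13 and reports 3 in the low nibble:
assert sectors_per_physical(0x6003) == 8
# A drive reporting nothing at all decodes as 512n -- the "lying" case
# discussed above is indistinguishable from a true 512n drive:
assert sectors_per_physical(0x0000) == 1
```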
Thanks, Ravi (rpokala@) From owner-freebsd-fs@freebsd.org Tue Feb 16 07:08:16 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id AC5F6AA94C1 for ; Tue, 16 Feb 2016 07:08:16 +0000 (UTC) (envelope-from namaedayo00@gmail.com) Received: from mail-yw0-x236.google.com (mail-yw0-x236.google.com [IPv6:2607:f8b0:4002:c05::236]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 758051166 for ; Tue, 16 Feb 2016 07:08:16 +0000 (UTC) (envelope-from namaedayo00@gmail.com) Received: by mail-yw0-x236.google.com with SMTP id u200so132536322ywf.0 for ; Mon, 15 Feb 2016 23:08:16 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:from:date:message-id:subject:to:content-type :content-transfer-encoding; bh=5Z7BZYbxuJvhubUObZGL0y4CkF/J8B4q8NOgixcBw+0=; b=bJ/e6DZ3PWRqyAF5w36AT8Jg+YLF014WZBa3J+19yK4IZtF1kdQRFlMAmFhyq1Y3iu IcbGHgSMKrBkAALVnnBP+QWIXWIY0OReg8c6KGNZUPf8JtW0PgGLS28mgjb+yxxUFGym GEW5TO+ZFegEMDfEooUyOLl/RIknQhcSjthHMJJU9cadqMiZUvLckL2s7/D2FkP2V3yC oU7ox3dWVyXkNhVqDVPK0/O0nN3DX1MCapqMCiJ01LO73z+94W69swSVXMrbKY0rgjeG LHEsB/EUqu65cqYxOfR1VbwDszLWgew+h1gAQUuWakfpqMwTiCKUe5edDzHAsg7n1XPu TaaA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:mime-version:from:date:message-id:subject:to :content-type:content-transfer-encoding; bh=5Z7BZYbxuJvhubUObZGL0y4CkF/J8B4q8NOgixcBw+0=; b=OdxKt1jAlm73KL/DmFmXwSStFiJj9doFZN3cGwcljnrzCYDF6a7PK15tuRQPYjiY2V rnHSlIqXpmBh3+yjlKWcPAZ1grcFj2AG/Zql1mjVlbuPDIeD5ZJ8OX8OJlyaxa8bOTcv Jxg1U+U/o/dCsLxdazFntQXiv8uUME9HPWHPUF4Nx1lGQgUnaNM2ybib2d+TJNWeiR1x zJheVaqbFPmxwPgLy5wEmatnNGdt+C4Tho2kXHecX2/7sDSWUVmc5jCVzdTu3C3ToXy1 
7Gaeqep70PYPzWfiN7D24tbcfbxrSVPyb1jSBEhc9hEvdKUjqzE4kRCyZEDmhilp4PuT 9nWg==
X-Gm-Message-State: AG10YOT/2yqxbUQGLWoSGFwyNxS/vzT5cBsK6cOV7+YXEYadWMySnQDoSZChu/MwgvPRuhed/poLiQ/GJTaapA==
X-Received: by 10.13.217.129 with SMTP id b123mr12302777ywe.297.1455606495457; Mon, 15 Feb 2016 23:08:15 -0800 (PST)
MIME-Version: 1.0
Received: by 10.37.202.2 with HTTP; Mon, 15 Feb 2016 23:07:56 -0800 (PST)
From: namaedayo
Date: Tue, 16 Feb 2016 16:07:56 +0900
Message-ID: 
Subject: Free space on ZFS is not reported correctly(?)
To: freebsd-fs@freebsd.org
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.20
Precedence: list
List-Id: Filesystems
List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: ,
X-List-Received-Date: Tue, 16 Feb 2016 07:08:16 -0000

I'm Japanese. My English is not at a native level, so if I say anything wrong, please don't be offended; apologies for any clumsy phrasing, as it was produced with Google Translate.

Free space on ZFS is not reported correctly: "zpool list -v" shows sufficient free space, but datasets other than zvols cannot use it.
Chassis : HP ProLiant ML110 G7
CPU : Xeon E3-1220
Memory : 32GB ECC
HDD : TOSHIBA DT01ACA300 x4
SSD : Intel SSD DC S3500 80GB, ZIL 8GB, L2ARC 16GB (L2ARC has been temporarily manually disabled by D2764)

$ uname -a
FreeBSD namaedayo-hp.local 10.2-RELEASE-p8 FreeBSD 10.2-RELEASE-p8 #14 r292713: Fri Dec 25 11:49:13 JST 2015 root@namaedayo-hp.local:/usr/obj/usr/src/sys/NAMAEDAYO amd64

$ zpool list -v
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
zpool0 5.44T 4.58T 881G - 23% 84% 2.08x ONLINE -
  mirror 2.72T 2.29T 441G - 24% 84%
    ada0p2 - - - - - -
    ada2p2 - - - - - -
  mirror 2.72T 2.29T 441G - 23% 84%
    ada1p2 - - - - - -
    ada3p2 - - - - - -
  ada4p2 7.94G 1.51M 7.94G - 33% 0%

$ zfs list
NAME USED AVAIL REFER MOUNTPOINT
zpool0 5.25T 45.3G 216K /zpool0
zpool0/FreeBSD-Buckup 169G 45.3G 169G /zpool0/FreeBSD-Buckup
zpool0/FreeBSD-Root 862G 45.3G 810G /
zpool0/Ubuntu-14-04_server 206G 230G 21.9G -
zpool0/Ubuntu_Desktop 82.5G 121G 7.18G -
zpool0/Ubuntu_Desktop-i386 135G 115G 48.9G -
zpool0/freebsd-swap 66.0G 95.2G 16.1G -
zpool0/namae-iSCSI 82.5G 68.5G 59.3G -
zpool0/namaedayo_work 351G 45.3G 346G /zpool0/namaedayo_work
zpool0/test-dedup 53.8G 45.3G 53.8G /zpool0/test-dedup
zpool0/test-lz4 47.8G 45.3G 47.8G /zpool0/test-lz4
(Omitted because too much)

$ df -h
Filesystem Size Used Avail Capacity Mounted on
zpool0/FreeBSD-Root 855G 810G 45G 95% /
devfs 1.0K 1.0K 0B 100% /dev
linprocfs 4.0K 4.0K 0B 100% /compat/linux/proc
linsysfs 4.0K 4.0K 0B 100% /compat/linux/sys
fdescfs 1.0K 1.0K 0B 100% /dev/fd
procfs 4.0K 4.0K 0B 100% /proc
zpool0/FreeBSD-Buckup 214G 169G 45G 79% /zpool0/FreeBSD-Buckup
zpool0/namaedayo_work 391G 346G 45G 88% /zpool0/namaedayo_work
zpool0/test-dedup 99G 54G 45G 54% /zpool0/test-dedup
zpool0/test-lz4 93G 48G 45G 51% /zpool0/test-lz4
(Omitted because too much)

$ gpart show
=> 34 5860533101 ada0 GPT (2.7T)
   34 6 - free - (3.0K)
   40 128 1 freebsd-boot (64K)
   168 5860532960 2 freebsd-zfs (2.7T)
   5860533128 7 - free - (3.5K)
=> 34 5860533101 ada1 GPT (2.7T)
   34 6 - free - (3.0K)
   40 128 1 freebsd-boot (64K)
   168 5860532960 2 freebsd-zfs (2.7T)
   5860533128 7 - free - (3.5K)
=> 34 5860533101 ada2 GPT (2.7T)
   34 6 - free - (3.0K)
   40 128 1 freebsd-boot (64K)
   168 5860532960 2 freebsd-zfs (2.7T)
   5860533128 7 - free - (3.5K)
=> 34 5860533101 ada3 GPT (2.7T)
   34 6 - free - (3.0K)
   40 128 1 freebsd-boot (64K)
   168 5860532960 2 freebsd-zfs (2.7T)
   5860533128 7 - free - (3.5K)
=> 34 156301421 ada4 GPT (75G)
   34 6 - free - (3.0K)
   40 1024 1 freebsd-boot (512K)
   1064 16777216 2 freebsd-zfs (8.0G)
   16778280 33554432 3 freebsd-zfs (16G)
   50332712 105968743 - free - (51G)

# zdb -C zpool0 | grep 'ashift'
ashift: 12

$ zfs get compressratio zpool0
NAME PROPERTY VALUE SOURCE
zpool0 compressratio 1.13x -

$ zpool status
pool: zpool0
state: ONLINE
scan: resilvered 1.78T in 3h27m with 0 errors on Tue Jul 15 10:11:44 2014
config:

NAME STATE READ WRITE CKSUM
zpool0 ONLINE 0 0 0
  mirror-0 ONLINE 0 0 0
    ada0p2 ONLINE 0 0 0
    ada2p2 ONLINE 0 0 0
  mirror-1 ONLINE 0 0 0
    ada1p2 ONLINE 0 0 0
    ada3p2 ONLINE 0 0 0
logs
  ada4p2 ONLINE 0 0 0

errors: No known data errors

From owner-freebsd-fs@freebsd.org Tue Feb 16 08:08:27 2016
Return-Path: 
Delivered-To: freebsd-fs@mailman.ysv.freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 4853BAAAE13 for ; Tue, 16 Feb 2016 08:08:27 +0000 (UTC) (envelope-from demis@yandex.ru)
Received: from forward13j.cmail.yandex.net (forward13j.cmail.yandex.net [IPv6:2a02:6b8:0:1630::b3]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "forwards.mail.yandex.net", Issuer "Yandex CA" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id DC80818CA for ; Tue, 16 Feb 2016 08:08:26 +0000 (UTC) (envelope-from demis@yandex.ru)
Received: from web11j.yandex.ru (web11j.yandex.ru [IPv6:2a02:6b8:0:1619::311]) by forward13j.cmail.yandex.net (Yandex) with ESMTP id 05FA521786; Tue, 16 Feb 2016 11:08:22 +0300 (MSK)
Received:
from web11j.yandex.ru (localhost [127.0.0.1]) by web11j.yandex.ru (Yandex) with ESMTP id 2964614A0E74; Tue, 16 Feb 2016 11:08:21 +0300 (MSK) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yandex.ru; s=mail; t=1455610102; bh=GkYm6QTIh6xULJE0o/vlPaqKG2xac1pmsj2A7G3nMEQ=; h=From:To:In-Reply-To:References:Subject:Date; b=Ln8epDi/ZvK7P34q5g0ubkckGXU29pJ3WfTVI6J7g0wt2EbsZ1x41nNWnrNTgjyHD NrufLvqiCYwal69yx/YXfoKXu1rTXT4ugij/sNXk2vwQPa1oQpnuqYfXlPokCETcV1 WoCe7OAqBEu+gLyifsRrghC6TJhVu/TIBBo1ILeY= Received: by web11j.yandex.ru with HTTP; Tue, 16 Feb 2016 11:08:21 +0300 From: DemIS To: Steven Hartland , "freebsd-fs@freebsd.org" In-Reply-To: <56C2782E.2010404@multiplay.co.uk> References: <1061671455578760@web3g.yandex.ru> <56C2655F.9010809@multiplay.co.uk> <1076701455583595@web2g.yandex.ru> <56C2782E.2010404@multiplay.co.uk> Subject: Re: Kernel panic zio.c, line: 270 FreeBSD 10.2 (or 10.3) MIME-Version: 1.0 Message-Id: <2311371455610101@web11j.yandex.ru> X-Mailer: Yamail [ http://yandex.ru ] 5.0 Date: Tue, 16 Feb 2016 11:08:21 +0300 Content-Transfer-Encoding: 8bit Content-Type: text/plain; charset=koi8-r X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 16 Feb 2016 08:08:27 -0000 frame 8 #8 0xffffffff81c3669c in dbuf_read (db=0xfffff8013ca46380, zio=0x0, flags=6) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dbuf.c:573 573 (void) arc_read(zio, db->db_objset->os_spa, db->db_blkptr, Current language: auto; currently minimal frame 6 #6 0xffffffff81c2b9f8 in arc_get_data_buf (buf=) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:2898 2898 buf->b_data = zio_buf_alloc(size); 16.02.2016, 04:15, "Steven Hartland" : > You don't need the system live just the kernel and the crash dump to get > those values. 
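For context, the asserted values in the panic message ("0x7fffffffffffff < 0x8000") are exactly what falls out of zio_buf_alloc() being handed size == 0, since the unsigned size_t subtraction wraps around. A sketch of the arithmetic (illustrative Python, not the kernel code):

```python
# Illustrative re-creation of the failing check in zio_buf_alloc():
# c = (size - 1) >> SPA_MINBLOCKSHIFT, computed in size_t (64-bit unsigned).
SPA_MINBLOCKSHIFT = 9    # 512-byte minimum block
SPA_MAXBLOCKSHIFT = 24   # value taken from the assert text: (1ULL << 24)
size = 0                 # the bogus size the backtrace shows

c = ((size - 1) & (2**64 - 1)) >> SPA_MINBLOCKSHIFT  # unsigned wraparound
limit = (1 << SPA_MAXBLOCKSHIFT) >> SPA_MINBLOCKSHIFT

assert c == 0x7FFFFFFFFFFFFF   # left side of the panic message
assert limit == 0x8000         # right side of the panic message
assert not c < limit           # so the VERIFY fires and the kernel panics
```

In other words, the panic itself is a symptom of something upstream (here, the SA/spill-block lookup) computing a zero buffer size, not of the allocator misbehaving.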
> > On 16/02/2016 00:46, DemIS wrote:
>> Today I fell back to version 10.2. Users work on the server, and I need four hours to reinstall 10.3,
>> so that is only possible on weekends. But the same thing happens on version 10.2.
>>
>> 16.02.2016, 02:55, "Steven Hartland" :
>>> That sounds like you have some pool data corruption, from the 10.3
>>> version dump can you print out the following:
>>> 1. frame 8: bp and size
>>> 2. frame 6: buf->b_hdr
>>>
>>> On 15/02/2016 23:26, DemIS wrote:
>>>> Does anyone know about this problem?
>>>> Server: SuperMicro Model:SYS-6026T-6RF+, MB:X8DTU-6F+, RAM 24 DDR3, two XEON
>>>> RAM: KVR1333D3E9S/4G - DDR3, 1333MHz, ECC, CL9, X8, 1.5V, Unbuffered, DIMM
>>>> Version: uname -a
>>>>
>>>> FreeBSD teo.some.loc 10.2-RELEASE-p12 FreeBSD 10.2-RELEASE-p12 #0: Sat Feb 13 18:04:04 MSK 2016 demis@teo.some.loc:/usr/obj/usr/src/sys/TEO amd64
>>>> (the panic persists on GENERIC or a custom kernel config too!!!)
>>>>
>>>> Memtest86+ v.4.40 (ECC mode) test - OK.
>>>> Every disk has been checked too (physically with MHDD, logically with zpool scrub, plus an additional check at an external disk-recovery company). No errors.
>>>>
>>>> Part of df -H
>>>> Filesystem Size Used Avail Capacity Mounted on
>>>> hdd/usr/wf 6,6T 4,1T 2,5T 62% /hdd/usr/wf
>>>>
>>>> zpool status hdd
>>>> pool: hdd
>>>> state: ONLINE
>>>> status: Some supported features are not enabled on the pool. The pool can
>>>> still be used, but some features are unavailable.
>>>> action: Enable all features using 'zpool upgrade'. Once this is done,
>>>> the pool may no longer be accessible by software that does not support
>>>> the features. See zpool-features(7) for details.
>>>> scan: scrub repaired 0 in 14h57m with 0 errors on Thu Feb 11 03:35:43 2016
>>>> config:
>>>>
>>>> NAME STATE READ WRITE CKSUM
>>>> hdd ONLINE 0 0 0
>>>> raidz2-0 ONLINE 0 0 0
>>>> mfid1p1 ONLINE 0 0 0
>>>> mfid2p1 ONLINE 0 0 0
>>>> mfid3p1 ONLINE 0 0 0
>>>> mfid4p1 ONLINE 0 0 0
>>>> mfid5p1 ONLINE 0 0 0
>>>>
>>>> errors: No known data errors
>>>>
>>>> hdd is my ZFS volume.
>>>> When I run a command like:
>>>> rm /hdd/usr/some/path/to/file
>>>> or
>>>> rm /hdd/usr/some/path/to/folder
>>>> or
>>>> chown root:wheel /hdd/usr/some/path/to/file
>>>> or
>>>> chown root:wheel /hdd/usr/some/path/to/folder
>>>> or
>>>> setfacl ... to /hdd/usr/some/path/to/file
>>>>
>>>> I get a kernel panic:
>>>> GNU gdb 6.1.1 [FreeBSD]
>>>> Copyright 2004 Free Software Foundation, Inc.
>>>> GDB is free software, covered by the GNU General Public License, and you are
>>>> welcome to change it and/or distribute copies of it under certain conditions.
>>>> Type "show copying" to see the conditions.
>>>> There is absolutely no warranty for GDB. Type "show warranty" for details.
>>>> This GDB was configured as "amd64-marcel-freebsd"...
>>>> >>>> Unread portion of the kernel message buffer: >>>> panic: solaris assert: c < (1ULL << 24) >> 9 (0x7fffffffffffff < 0x8000), file: /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c, line: 270 >>>> cpuid = 9 >>>> KDB: stack backtrace: >>>> #0 0xffffffff80984ef0 at kdb_backtrace+0x60 >>>> #1 0xffffffff80948aa6 at vpanic+0x126 >>>> #2 0xffffffff80948973 at panic+0x43 >>>> #3 0xffffffff81c0222f at assfail3+0x2f >>>> #4 0xffffffff81aa9d40 at zio_buf_alloc+0x50 >>>> #5 0xffffffff81a2b9f8 at arc_get_data_buf+0x358 >>>> #6 0xffffffff81a2e20a at arc_read+0x1ea >>>> #7 0xffffffff81a3669c at dbuf_read+0x6ac >>>> #8 0xffffffff81a3d8bf at dmu_spill_hold_existing+0xbf >>>> #9 0xffffffff81a70dd7 at sa_attr_op+0x167 >>>> #10 0xffffffff81a72ffb at sa_lookup+0x4b >>>> #11 0xffffffff81abc82a at zfs_rmnode+0x2ba >>>> #12 0xffffffff81ada58e at zfs_freebsd_reclaim+0x4e >>>> #13 0xffffffff80e73537 at VOP_RECLAIM_APV+0xa7 >>>> #14 0xffffffff809ec5b4 at vgonel+0x1b4 >>>> #15 0xffffffff809eca49 at vrecycle+0x59 >>>> #16 0xffffffff81ada52d at zfs_freebsd_inactive+0xd >>>> #17 0xffffffff80e73427 at VOP_INACTIVE_APV+0xa7 >>>> Uptime: 9m31s >>>> Dumping 1286 out of 24543 MB:..2%..12%..22%..32%..42%..51%..61%..71% (CTRL-C to abort) ..81%..91% >>>> >>>> Reading symbols from /boot/kernel/zfs.ko.symbols...done. >>>> Loaded symbols for /boot/kernel/zfs.ko.symbols >>>> Reading symbols from /boot/kernel/opensolaris.ko.symbols...done. >>>> Loaded symbols for /boot/kernel/opensolaris.ko.symbols >>>> Reading symbols from /boot/kernel/if_lagg.ko.symbols...done. >>>> Loaded symbols for /boot/kernel/if_lagg.ko.symbols >>>> Reading symbols from /boot/kernel/ums.ko.symbols...done. >>>> Loaded symbols for /boot/kernel/ums.ko.symbols >>>> Reading symbols from /boot/kernel/ipfw.ko.symbols...done. >>>> Loaded symbols for /boot/kernel/ipfw.ko.symbols >>>> #0 doadump (textdump=) at pcpu.h:219 >>>> 219 pcpu.h: No such file or directory. 
>>>> in pcpu.h >>>> (kgdb) bt >>>> #0 doadump (textdump=) at pcpu.h:219 >>>> #1 0xffffffff80948702 in kern_reboot (howto=260) at /usr/src/sys/kern/kern_shutdown.c:451 >>>> #2 0xffffffff80948ae5 in vpanic (fmt=, ap=) at /usr/src/sys/kern/kern_shutdown.c:758 >>>> #3 0xffffffff80948973 in panic (fmt=0x0) at /usr/src/sys/kern/kern_shutdown.c:687 >>>> #4 0xffffffff81c0222f in assfail3 (a=, lv=, op=, rv=, >>>> f=, l=) at /usr/src/sys/modules/opensolaris/../../cddl/compat/opensolaris/kern/opensolaris_cmn_err.c:91 >>>> #5 0xffffffff81aa9d40 in zio_buf_alloc (size=0) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:270 >>>> #6 0xffffffff81a2b9f8 in arc_get_data_buf (buf=) >>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:2898 >>>> #7 0xffffffff81a2e20a in arc_read (pio=0xfffff80011791730, spa=0xfffff80011579000, bp=0xfffffe000aee7980, done=0xffffffff81a3a2d0 , >>>> private=0xfffff8002244b000, priority=ZIO_PRIORITY_SYNC_READ, zio_flags=-528866606, arc_flags=0xfffffe06727fb3c4, zb=0xffffffff81a3a2d0) >>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:1551 >>>> #8 0xffffffff81a3669c in dbuf_read (db=0xfffff8002244b000, zio=0x0, flags=6) >>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dbuf.c:573 >>>> #9 0xffffffff81a3d8bf in dmu_spill_hold_existing (bonus=0xfffff800223bed20, tag=0x0, dbp=0xfffff800919966b8) >>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu.c:333 >>>> #10 0xffffffff81a70dd7 in sa_attr_op (hdl=0xfffff80091996690, bulk=0xfffffe06727fb528, count=1, data_op=SA_LOOKUP, tx=0x0) >>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/sa.c:310 >>>> #11 0xffffffff81a72ffb in sa_lookup (hdl=0xfffff80091996690, attr=, buf=, buflen=) >>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/sa.c:1441 >>>> #12 0xffffffff81abc82a in zfs_rmnode 
(zp=0xfffff80091993730) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_dir.c:633 >>>> #13 0xffffffff81ada58e in zfs_freebsd_reclaim (ap=) >>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c:6569 >>>> #14 0xffffffff80e73537 in VOP_RECLAIM_APV (vop=, a=) at vnode_if.c:2019 >>>> #15 0xffffffff809ec5b4 in vgonel (vp=0xfffff800111733b0) at vnode_if.h:830 >>>> #16 0xffffffff809eca49 in vrecycle (vp=0xfffff800111733b0) at /usr/src/sys/kern/vfs_subr.c:2703 >>>> #17 0xffffffff81ada52d in zfs_freebsd_inactive (ap=) >>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c:6540 >>>> #18 0xffffffff80e73427 in VOP_INACTIVE_APV (vop=, a=) at vnode_if.c:1953 >>>> #19 0xffffffff809eb382 in vinactive (vp=0xfffff800111733b0, td=0xfffff800113d8000) at vnode_if.h:807 >>>> #20 0xffffffff809eb772 in vputx (vp=0xfffff800111733b0, func=2) at /usr/src/sys/kern/vfs_subr.c:2306 >>>> #21 0xffffffff809f401e in kern_rmdirat (td=, fd=, path=, pathseg=) >>>> at /usr/src/sys/kern/vfs_syscalls.c:3842 >>>> #22 0xffffffff80d4b3e7 in amd64_syscall (td=0xfffff800113d8000, traced=0) at subr_syscall.c:134 >>>> #23 0xffffffff80d30acb in Xfast_syscall () at /usr/src/sys/amd64/amd64/exception.S:396 >>>> #24 0x00000008008914ea in ?? () >>>> Previous frame inner to this frame (corrupt stack?) >>>> Current language: auto; currently minimal >>>> >>>> If setup FreeBSD 10.3 BETA (on GENERIC or custom kernel config): >>>> Copyright 2004 Free Software Foundation, Inc. >>>> GDB is free software, covered by the GNU General Public License, and you are >>>> welcome to change it and/or distribute copies of it under certain conditions. >>>> Type "show copying" to see the conditions. >>>> There is absolutely no warranty for GDB. Type "show warranty" for details. >>>> This GDB was configured as "amd64-marcel-freebsd"... 
>>>> >>>> Unread portion of the kernel message buffer: >>>> panic: solaris assert: c < (1ULL << 24) >> 9 (0x7fffffffffffff < 0x8000), file: /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c, line: 273 >>>> cpuid = 13 >>>> KDB: stack backtrace: >>>> #0 0xffffffff8098f000 at kdb_backtrace+0x60 >>>> #1 0xffffffff80951d06 at vpanic+0x126 >>>> #2 0xffffffff80951bd3 at panic+0x43 >>>> #3 0xffffffff81e0022f at assfail3+0x2f >>>> #4 0xffffffff81cacc70 at zio_buf_alloc+0x50 >>>> #5 0xffffffff81c2b8f2 at arc_get_data_buf+0x262 >>>> #6 0xffffffff81c2b657 at arc_buf_alloc+0xc7 >>>> #7 0xffffffff81c2d601 at arc_read+0x1c1 >>>> #8 0xffffffff81c36ce9 at dbuf_read+0x6b9 >>>> #9 0xffffffff81c3e415 at dmu_spill_hold_existing+0xc5 >>>> #10 0xffffffff81c73707 at sa_attr_op+0x167 >>>> #11 0xffffffff81c75972 at sa_lookup+0x52 >>>> #12 0xffffffff81cbf8da at zfs_rmnode+0x2ba >>>> #13 0xffffffff81cdd75e at zfs_freebsd_reclaim+0x4e >>>> #14 0xffffffff80e81c27 at VOP_RECLAIM_APV+0xa7 >>>> #15 0xffffffff809f9581 at vgonel+0x221 >>>> #16 0xffffffff809f9a19 at vrecycle+0x59 >>>> #17 0xffffffff81cdd6fd at zfs_freebsd_inactive+0xd >>>> Uptime: 11m11s >>>> Dumping 1368 out of 24542 MB:..2%..11%..22%..31%..41%..51%..61%..71%..81%..91% >>>> >>>> Reading symbols from /boot/kernel/if_lagg.ko.symbols...done. >>>> Loaded symbols for /boot/kernel/if_lagg.ko.symbols >>>> Reading symbols from /boot/kernel/aio.ko.symbols...done. >>>> Loaded symbols for /boot/kernel/aio.ko.symbols >>>> Reading symbols from /boot/kernel/ichsmb.ko.symbols...done. >>>> Loaded symbols for /boot/kernel/ichsmb.ko.symbols >>>> Reading symbols from /boot/kernel/smbus.ko.symbols...done. >>>> Loaded symbols for /boot/kernel/smbus.ko.symbols >>>> Reading symbols from /boot/kernel/ipmi.ko.symbols...done. >>>> Loaded symbols for /boot/kernel/ipmi.ko.symbols >>>> Reading symbols from /boot/kernel/zfs.ko.symbols...done. 
>>>> Loaded symbols for /boot/kernel/zfs.ko.symbols >>>> Reading symbols from /boot/kernel/opensolaris.ko.symbols...done. >>>> Loaded symbols for /boot/kernel/opensolaris.ko.symbols >>>> Reading symbols from /boot/kernel/ums.ko.symbols...done. >>>> Loaded symbols for /boot/kernel/ums.ko.symbols >>>> Reading symbols from /boot/kernel/ipfw.ko.symbols...done. >>>> Loaded symbols for /boot/kernel/ipfw.ko.symbols >>>> #0 doadump (textdump=) at pcpu.h:219 >>>> 219 pcpu.h: No such file or directory. >>>> in pcpu.h >>>> (kgdb) backtrace >>>> #0 doadump (textdump=) at pcpu.h:219 >>>> #1 0xffffffff80951962 in kern_reboot (howto=260) at /usr/src/sys/kern/kern_shutdown.c:486 >>>> #2 0xffffffff80951d45 in vpanic (fmt=, ap=) at /usr/src/sys/kern/kern_shutdown.c:889 >>>> #3 0xffffffff80951bd3 in panic (fmt=0x0) at /usr/src/sys/kern/kern_shutdown.c:818 >>>> #4 0xffffffff81e0022f in assfail3 (a=, lv=, op=, rv=, f=, >>>> l=) at /usr/src/sys/modules/opensolaris/../../cddl/compat/opensolaris/kern/opensolaris_cmn_err.c:91 >>>> #5 0xffffffff81cacc70 in zio_buf_alloc (size=0) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:273 >>>> #6 0xffffffff81c2b8f2 in arc_get_data_buf (buf=) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:3880 >>>> #7 0xffffffff81c2b657 in arc_buf_alloc (spa=, size=, tag=0x0, type=) >>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:2057 >>>> #8 0xffffffff81c2d601 in arc_read (pio=0xfffff8000fad03b0, spa=0xfffff8000f63d000, bp=0xfffffe000e509980, done=0xffffffff81c3aed0 , private=0xfffff8000fdd6360, >>>> priority=ZIO_PRIORITY_SYNC_READ, zio_flags=-2117882160, arc_flags=0xfffffe02925483c4, zb=0xfffff8000fdd6360) >>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:4397 >>>> #9 0xffffffff81c36ce9 in dbuf_read (db=0xfffff8000fdd6360, zio=0x0, flags=) >>>> at 
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dbuf.c:682 >>>> #10 0xffffffff81c3e415 in dmu_spill_hold_existing (bonus=0xfffff8001f312438, tag=0x0, dbp=0xfffff80062d4e7d0) >>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu.c:333 >>>> #11 0xffffffff81c73707 in sa_attr_op (hdl=0xfffff80062d4e770, bulk=0xfffffe0292548528, count=1, data_op=SA_LOOKUP, tx=0x0) >>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/sa.c:305 >>>> #12 0xffffffff81c75972 in sa_lookup (hdl=0xfffff80062d4e770, attr=, buf=, buflen=) >>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/sa.c:1443 >>>> #13 0xffffffff81cbf8da in zfs_rmnode (zp=0xfffff80062d4c8a0) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_dir.c:633 >>>> #14 0xffffffff81cdd75e in zfs_freebsd_reclaim (ap=) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c:6619 >>>> #15 0xffffffff80e81c27 in VOP_RECLAIM_APV (vop=, a=) at vnode_if.c:2019 >>>> #16 0xffffffff809f9581 in vgonel (vp=0xfffff8000f1beb10) at vnode_if.h:830 >>>> #17 0xffffffff809f9a19 in vrecycle (vp=0xfffff8000f1beb10) at /usr/src/sys/kern/vfs_subr.c:2951 >>>> #18 0xffffffff81cdd6fd in zfs_freebsd_inactive (ap=) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c:6590 >>>> #19 0xffffffff80e81b17 in VOP_INACTIVE_APV (vop=, a=) at vnode_if.c:1953 >>>> #20 0xffffffff809f8322 in vinactive (vp=0xfffff8000f1beb10, td=0xfffff8000f9f34b0) at vnode_if.h:807 >>>> #21 0xffffffff809f8712 in vputx (vp=0xfffff8000f1beb10, func=2) at /usr/src/sys/kern/vfs_subr.c:2547 >>>> #22 0xffffffff80a0137e in kern_rmdirat (td=, fd=, path=, pathseg=) >>>> at /usr/src/sys/kern/vfs_syscalls.c:3964 >>>> #23 0xffffffff80d574bf in amd64_syscall (td=0xfffff8000f9f34b0, traced=0) at subr_syscall.c:141 >>>> #24 0xffffffff80d3c72b in Xfast_syscall () at 
/usr/src/sys/amd64/amd64/exception.S:396 >>>> #25 0x000000080089458a in ?? () >>>> Previous frame inner to this frame (corrupt stack?) >>>> Current language: auto; currently minimal >>>> >>>> Crash folder (or file) have strange rights: >>>> d---------+ 3 anna domain users 3 10 10:32 01-Projcts >>>> d---------+ 2 anna domain users 2 8 21:46 02-Text >>>> >>>> How correct kernel panic? >>>> >>>> _______________________________________________ >>>> freebsd-fs@freebsd.org mailing list >>>> https://lists.freebsd.org/mailman/listinfo/freebsd-fs >>>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" >>> _______________________________________________ >>> freebsd-fs@freebsd.org mailing list >>> https://lists.freebsd.org/mailman/listinfo/freebsd-fs >>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@freebsd.org Tue Feb 16 09:09:16 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 22D3EAA825B for ; Tue, 16 Feb 2016 09:09:16 +0000 (UTC) (envelope-from killing@multiplay.co.uk) Received: from mail-wm0-x230.google.com (mail-wm0-x230.google.com [IPv6:2a00:1450:400c:c09::230]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id B7F071A42 for ; Tue, 16 Feb 2016 09:09:15 +0000 (UTC) (envelope-from killing@multiplay.co.uk) Received: by mail-wm0-x230.google.com with SMTP id c200so149445172wme.0 for ; Tue, 16 Feb 2016 01:09:15 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=multiplay-co-uk.20150623.gappssmtp.com; s=20150623; h=subject:to:references:from:message-id:date:user-agent:mime-version :in-reply-to:content-type:content-transfer-encoding; bh=fQA6Sdh5LFMW10vym2/vyESbN0H1Cw7Ry3m4QU2zZJs=; 
b=B2MsIv5/nQNIfYuSDl/p+4TCOlB2YpuiNZ04TxJRh8FCc39VUFbVe6Yhi1Kc3134VY AeN+EoczmbrzORgZq4eNyCxc+/u1HovAnW5J78EtBUd86y7vRtNiMVvctbR5Po00qm0y y7WGih31m+JxCuGWUD/KCltfc8ExdkVnNa16aM+npsJme9SdDae4iERpj1tYt9vyc7yi xo1qcAJKCSA/hLXHoUox/lVIrBMRhFZqXRkrtnfK4bWzs9NYgeMw8Ru7avFHimvVj9FW MTCyt2xPneIPBOqCceJczBkIiSN9NKHZSEEK4+y60M3Hluy31AMUfz4HZ/0cLZchOU9B ENzw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:subject:to:references:from:message-id:date :user-agent:mime-version:in-reply-to:content-type :content-transfer-encoding; bh=fQA6Sdh5LFMW10vym2/vyESbN0H1Cw7Ry3m4QU2zZJs=; b=DDDP40ZzbzXZJCkSE/lD8zeW1CNDFJ3VPSuAmoYDdkSD+GYTT1ulS4zGxnhSVCNHMr S1OI3WHsfk7UlQitl9bJP9Uwy/QTAs5dwZfr2wCB3/tutM1mJSLz7TULRWhzkX11Dsl9 fDzzcJf6XGc5KONisbyzmbGHXDJOo21Tj8nfSsqlqJJhlzkX7U07w9m6/QENKwVxmuge ScKX7QULGAWpeJVmYZ6c8dhahx8rpyJoCJJcqpdiz5QQVSsdsO3j2D3YHjzwyomxU5Hj rFlfcJzNjnmI0csXLQS/58RVThgBVi5Z0M+JUmv+sdP6OPbTdDF9a1GpDU88tNwSMclz YC6g== X-Gm-Message-State: AG10YORcdIOTMtuhuJsdbCALX1KRK1tg2dRwLQQizcaYAM0maNdoERN5pT9gb5zAt0Uoi8V6 X-Received: by 10.28.127.5 with SMTP id a5mr18602310wmd.32.1455613753722; Tue, 16 Feb 2016 01:09:13 -0800 (PST) Received: from [10.10.1.58] (liv3d.labs.multiplay.co.uk. [82.69.141.171]) by smtp.gmail.com with ESMTPSA id gt7sm29286953wjc.1.2016.02.16.01.09.12 for (version=TLSv1/SSLv3 cipher=OTHER); Tue, 16 Feb 2016 01:09:12 -0800 (PST) Subject: Re: Hours of tiny transfers at the end of a ZFS resilver? 
To: freebsd-fs@freebsd.org References: <871E2D0C-C131-407E-A982-6AFE896901F6@panasas.com> From: Steven Hartland Message-ID: <56C2E73D.2010405@multiplay.co.uk> Date: Tue, 16 Feb 2016 09:09:17 +0000 User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:38.0) Gecko/20100101 Thunderbird/38.5.1 MIME-Version: 1.0 In-Reply-To: <871E2D0C-C131-407E-A982-6AFE896901F6@panasas.com> Content-Type: text/plain; charset=windows-1252; format=flowed Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 16 Feb 2016 09:09:16 -0000 On 16/02/2016 03:31, Ravi Pokala wrote: >> Date: Mon, 15 Feb 2016 21:08:43 +0000 >> From: Steven Hartland >> To: freebsd-fs@freebsd.org >> Subject: Re: Hours of tiny transfers at the end of a ZFS resilver? >> Message-ID: <56C23E5B.7060207@multiplay.co.uk> >> Content-Type: text/plain; charset=windows-1252; format=flowed > Hi Steve, > >>> [*] This is probably a good segue into discussing why we even have the ADA_Q_4K quirk, and whether we should get rid of it...? --rp >> The 4k quirks exists because a large amount of devices don't report 4k correctly instead just reporting 512 for both logical and physical even when they are actually 4k or larger physical sector size. > If true, that's a gross violation of ATA, and I would consider that a disqualifying firmware bug. After over a decade at a storage vendor, I've seen some *really* stupid firmware issues, but lying about the sector size would be a new low. :-( > > Are we sure that they are really, truly claiming to be 512n rather than AF-512e, rather than us mis-parsing the sector sizes due to the aforementioned kernel bugs? If someone running -CURRENT has a drive which has the ADA_Q_4K quirk, could you paste the output of `geom disk list $DRIVE'? 
> My head box doesn't have the feature that would do what you're after, but here's what camcontrol says for one such device:

camcontrol identify ada0
pass1: ATA8-ACS SATA 3.x device
pass1: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)

protocol              ATA/ATAPI-8 SATA 3.x
device model          Corsair Force GS
firmware revision     5.05A
serial number         1304790400009741003E
WWN                   0000000000000000
cylinders             16383
heads                 16
sectors/track         63
sector size           logical 512, physical 512, offset 0
LBA supported         250069680 sectors
LBA48 supported       250069680 sectors
PIO supported         PIO4
DMA supported         WDMA2 UDMA6
media RPM             non-rotating

This is 4k underlying; SSDs are far and away the worst culprits for this.

Regards
Steve

From owner-freebsd-fs@freebsd.org Tue Feb 16 09:11:00 2016
Return-Path:
Delivered-To: freebsd-fs@mailman.ysv.freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 44FE0AA84A4 for ; Tue, 16 Feb 2016 09:11:00 +0000 (UTC) (envelope-from killing@multiplay.co.uk)
Received: from mail-wm0-x233.google.com (mail-wm0-x233.google.com [IPv6:2a00:1450:400c:c09::233]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id CCE241CE8 for ; Tue, 16 Feb 2016 09:10:59 +0000 (UTC) (envelope-from killing@multiplay.co.uk)
Received: by mail-wm0-x233.google.com with SMTP id g62so142453804wme.0 for ; Tue, 16 Feb 2016 01:10:59 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=multiplay-co-uk.20150623.gappssmtp.com; s=20150623; h=subject:to:references:from:message-id:date:user-agent:mime-version :in-reply-to:content-type:content-transfer-encoding; bh=4gvxJrtIy9DpZvUdkmkjz37ksovRzDMerJGZq5r2/rs=; b=KoWlIFYCIKW6e4v5JZhPc3YNzxMH2jYdszqhzVsDgkzRJAtqz19gfRJ4O7V7c1+KcX 1Vdkkr5dyda+DMw1Q1b8eUZ9+k7B0QDqC0Apbt41ZySya/8LGS2UHH6HCyUcVqpb8ecv
y40yDNd4IaRz+nLW4EJOjCaX3sc9r5bczi3gQtPjsRVre/NBcGOp6nqM/9heQVN6u/sU h1oMhYznEiruOJJdXsSxYcJbc4cCD5TvwzTYAmDqRYwYb/oC7R7bwU9H8GIrjiIK1HYr Ups1VnzXIoLPS0vw/JPU3goqn4H1OCSGQL0xA/b2ciyhQOhqLVhLCjh6uZ0tNdrn65+y DJnQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:subject:to:references:from:message-id:date :user-agent:mime-version:in-reply-to:content-type :content-transfer-encoding; bh=4gvxJrtIy9DpZvUdkmkjz37ksovRzDMerJGZq5r2/rs=; b=kZC0Z4pl8qc92RwJ4V4q7YKlXmTbMi5+22+AylzZeYzD/NDR8XCBQa6aVvLTqmQFq6 CZDAiZMpNVWHkRXcZbzuXT0pDa97PAtJp/zd1I2zd56PZLbIpFvGD2R1wn9XeJDfcGd0 Wuk/ux3PtmkumOvHlSt3Gl/WP51bqMAzwg3Ftt+3X2EY3Z3z9axQfnp69zRKh49QKTIy QCVB3jqJhwipwPcSpH+9ZVsTX45T0LfIYOTgtpbu6iK9u7d4EqdKUTXrWuXNo4jjMmdz ORzSeZsyF6NuOYq+u1yseNLLQD4ce/Z9I5aC1DUddIm3t6XXAQtrIbFvrjgo+yd7M50J 4lEg== X-Gm-Message-State: AG10YOQXD24EPoPoIev+1osCbIBXw0hldSRNdWMUeQmbWlLcX9G4vCQCYsnTL4R68YmpI8j+ X-Received: by 10.28.6.139 with SMTP id 133mr18415880wmg.84.1455613858280; Tue, 16 Feb 2016 01:10:58 -0800 (PST) Received: from [10.10.1.58] (liv3d.labs.multiplay.co.uk. 
[82.69.141.171]) by smtp.gmail.com with ESMTPSA id t3sm29267276wjz.11.2016.02.16.01.10.56 (version=TLSv1/SSLv3 cipher=OTHER); Tue, 16 Feb 2016 01:10:57 -0800 (PST) Subject: Re: Kernel panic zio.c, line: 270 FreeBSD 10.2 (or 10.3) To: DemIS , "freebsd-fs@freebsd.org" References: <1061671455578760@web3g.yandex.ru> <56C2655F.9010809@multiplay.co.uk> <1076701455583595@web2g.yandex.ru> <56C2782E.2010404@multiplay.co.uk> <2311371455610101@web11j.yandex.ru> From: Steven Hartland Message-ID: <56C2E7A6.9090004@multiplay.co.uk> Date: Tue, 16 Feb 2016 09:11:02 +0000 User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:38.0) Gecko/20100101 Thunderbird/38.5.1 MIME-Version: 1.0 In-Reply-To: <2311371455610101@web11j.yandex.ru> Content-Type: text/plain; charset=koi8-r; format=flowed Content-Transfer-Encoding: 8bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 16 Feb 2016 09:11:00 -0000 I need the values of the vars specified, so you'll need to: print bp If it reports just an address try: print *bp etc. On 16/02/2016 08:08, DemIS wrote: > frame 8 > #8 0xffffffff81c3669c in dbuf_read (db=0xfffff8013ca46380, zio=0x0, flags=6) > at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dbuf.c:573 > 573 (void) arc_read(zio, db->db_objset->os_spa, db->db_blkptr, > Current language: auto; currently minimal > > frame 6 > #6 0xffffffff81c2b9f8 in arc_get_data_buf (buf=) > at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:2898 > 2898 buf->b_data = zio_buf_alloc(size); > > > 16.02.2016, 04:15, "Steven Hartland" : >> You don't need the system live just the kernel and the crash dump to get >> those values. >> >> On 16/02/2016 00:46, DemIS wrote: >>> Today I fell back to version 10.2 . Users work on the server . To reinstall 10.3 I need 4:00 . >>> Therefore, it is possible only on weekends . 
But this has the same effect on the version 10.2 >>> >>> 16.02.2016, 02:55, "Steven Hartland" : >>>> That sounds like you have some pool data corruption, from the 10.3 >>>> version dump can you print out the following: >>>> 1. frame 8: bp and size >>>> 2. frame 6: buf->b_hdr >>>> >>>> On 15/02/2016 23:26, DemIS wrote: >>>>> Any one knows about problem? >>>>> Server: SuperMicro Model:SYS-6026T-6RF+, MB:X8DTU-6F+, RAM 24 DDR3, two XEON >>>>> RAM: KVR1333D3E9S/4G - DDR3, 1333MHz, ECC, CL9, X8, 1.5V, Unbuffered, DIMM >>>>> Version:uname -a >>>>> >>>>> FreeBSD teo.some.loc 10.2-RELEASE-p12 FreeBSD 10.2-RELEASE-p12 #0: Sat Feb 13 18:04:04 MSK 2016 demis@teo.some.loc:/usr/obj/usr/src/sys/TEO amd64 >>>>> (on GENERIC or custom kernel config persist too !!!) >>>>> >>>>> Memtest86+ v.4.40 (ECC mode) test - OK. >>>>> Every disk checked too (physically - mhdd, logically - zpool scrub, and additional checkit in external company recovery disk). No errors. >>>>> >>>>> Part of df -H >>>>> Filesystem Size Used Avail Capacity Mounted on >>>>> hdd/usr/wf 6,6T 4,1T 2,5T 62% /hdd/usr/wf >>>>> >>>>> zpool status hdd >>>>> pool: hdd >>>>> state: ONLINE >>>>> status: Some supported features are not enabled on the pool. The pool can >>>>> still be used, but some features are unavailable. >>>>> action: Enable all features using 'zpool upgrade'. Once this is done, >>>>> the pool may no longer be accessible by software that does not support >>>>> the features. See zpool-features(7) for details. >>>>> scan: scrub repaired 0 in 14h57m with 0 errors on Thu Feb 11 03:35:43 2016 >>>>> config: >>>>> >>>>> NAME STATE READ WRITE CKSUM >>>>> hdd ONLINE 0 0 0 >>>>> raidz2-0 ONLINE 0 0 0 >>>>> mfid1p1 ONLINE 0 0 0 >>>>> mfid2p1 ONLINE 0 0 0 >>>>> mfid3p1 ONLINE 0 0 0 >>>>> mfid4p1 ONLINE 0 0 0 >>>>> mfid5p1 ONLINE 0 0 0 >>>>> >>>>> errors: No known data errors >>>>> >>>>> hdd - is My zfs volume. 
>>>>> When I run command like: >>>>> rm /hdd/usr/some/path/to/file >>>>> or >>>>> rm /hdd/usr/some/path/to/folder >>>>> or >>>>> chown root:wheel /hdd/usr/some/path/to/file >>>>> or >>>>> chown root:wheel /hdd/usr/some/path/to/folder >>>>> or >>>>> setfacl ... to /hdd/usr/some/path/to/file >>>>> >>>>> I'm get kernel panic: >>>>> GNU gdb 6.1.1 [FreeBSD] >>>>> Copyright 2004 Free Software Foundation, Inc. >>>>> GDB is free software, covered by the GNU General Public License, and you are >>>>> welcome to change it and/or distribute copies of it under certain conditions. >>>>> Type "show copying" to see the conditions. >>>>> There is absolutely no warranty for GDB. Type "show warranty" for details. >>>>> This GDB was configured as "amd64-marcel-freebsd"... >>>>> >>>>> Unread portion of the kernel message buffer: >>>>> panic: solaris assert: c < (1ULL << 24) >> 9 (0x7fffffffffffff < 0x8000), file: /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c, line: 270 >>>>> cpuid = 9 >>>>> KDB: stack backtrace: >>>>> #0 0xffffffff80984ef0 at kdb_backtrace+0x60 >>>>> #1 0xffffffff80948aa6 at vpanic+0x126 >>>>> #2 0xffffffff80948973 at panic+0x43 >>>>> #3 0xffffffff81c0222f at assfail3+0x2f >>>>> #4 0xffffffff81aa9d40 at zio_buf_alloc+0x50 >>>>> #5 0xffffffff81a2b9f8 at arc_get_data_buf+0x358 >>>>> #6 0xffffffff81a2e20a at arc_read+0x1ea >>>>> #7 0xffffffff81a3669c at dbuf_read+0x6ac >>>>> #8 0xffffffff81a3d8bf at dmu_spill_hold_existing+0xbf >>>>> #9 0xffffffff81a70dd7 at sa_attr_op+0x167 >>>>> #10 0xffffffff81a72ffb at sa_lookup+0x4b >>>>> #11 0xffffffff81abc82a at zfs_rmnode+0x2ba >>>>> #12 0xffffffff81ada58e at zfs_freebsd_reclaim+0x4e >>>>> #13 0xffffffff80e73537 at VOP_RECLAIM_APV+0xa7 >>>>> #14 0xffffffff809ec5b4 at vgonel+0x1b4 >>>>> #15 0xffffffff809eca49 at vrecycle+0x59 >>>>> #16 0xffffffff81ada52d at zfs_freebsd_inactive+0xd >>>>> #17 0xffffffff80e73427 at VOP_INACTIVE_APV+0xa7 >>>>> Uptime: 9m31s >>>>> Dumping 1286 out of 24543 
MB:..2%..12%..22%..32%..42%..51%..61%..71% (CTRL-C to abort) ..81%..91% >>>>> >>>>> Reading symbols from /boot/kernel/zfs.ko.symbols...done. >>>>> Loaded symbols for /boot/kernel/zfs.ko.symbols >>>>> Reading symbols from /boot/kernel/opensolaris.ko.symbols...done. >>>>> Loaded symbols for /boot/kernel/opensolaris.ko.symbols >>>>> Reading symbols from /boot/kernel/if_lagg.ko.symbols...done. >>>>> Loaded symbols for /boot/kernel/if_lagg.ko.symbols >>>>> Reading symbols from /boot/kernel/ums.ko.symbols...done. >>>>> Loaded symbols for /boot/kernel/ums.ko.symbols >>>>> Reading symbols from /boot/kernel/ipfw.ko.symbols...done. >>>>> Loaded symbols for /boot/kernel/ipfw.ko.symbols >>>>> #0 doadump (textdump=) at pcpu.h:219 >>>>> 219 pcpu.h: No such file or directory. >>>>> in pcpu.h >>>>> (kgdb) bt >>>>> #0 doadump (textdump=) at pcpu.h:219 >>>>> #1 0xffffffff80948702 in kern_reboot (howto=260) at /usr/src/sys/kern/kern_shutdown.c:451 >>>>> #2 0xffffffff80948ae5 in vpanic (fmt=, ap=) at /usr/src/sys/kern/kern_shutdown.c:758 >>>>> #3 0xffffffff80948973 in panic (fmt=0x0) at /usr/src/sys/kern/kern_shutdown.c:687 >>>>> #4 0xffffffff81c0222f in assfail3 (a=, lv=, op=, rv=, >>>>> f=, l=) at /usr/src/sys/modules/opensolaris/../../cddl/compat/opensolaris/kern/opensolaris_cmn_err.c:91 >>>>> #5 0xffffffff81aa9d40 in zio_buf_alloc (size=0) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:270 >>>>> #6 0xffffffff81a2b9f8 in arc_get_data_buf (buf=) >>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:2898 >>>>> #7 0xffffffff81a2e20a in arc_read (pio=0xfffff80011791730, spa=0xfffff80011579000, bp=0xfffffe000aee7980, done=0xffffffff81a3a2d0 , >>>>> private=0xfffff8002244b000, priority=ZIO_PRIORITY_SYNC_READ, zio_flags=-528866606, arc_flags=0xfffffe06727fb3c4, zb=0xffffffff81a3a2d0) >>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:1551 >>>>> #8 0xffffffff81a3669c in dbuf_read 
(db=0xfffff8002244b000, zio=0x0, flags=6) >>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dbuf.c:573 >>>>> #9 0xffffffff81a3d8bf in dmu_spill_hold_existing (bonus=0xfffff800223bed20, tag=0x0, dbp=0xfffff800919966b8) >>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu.c:333 >>>>> #10 0xffffffff81a70dd7 in sa_attr_op (hdl=0xfffff80091996690, bulk=0xfffffe06727fb528, count=1, data_op=SA_LOOKUP, tx=0x0) >>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/sa.c:310 >>>>> #11 0xffffffff81a72ffb in sa_lookup (hdl=0xfffff80091996690, attr=, buf=, buflen=) >>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/sa.c:1441 >>>>> #12 0xffffffff81abc82a in zfs_rmnode (zp=0xfffff80091993730) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_dir.c:633 >>>>> #13 0xffffffff81ada58e in zfs_freebsd_reclaim (ap=) >>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c:6569 >>>>> #14 0xffffffff80e73537 in VOP_RECLAIM_APV (vop=, a=) at vnode_if.c:2019 >>>>> #15 0xffffffff809ec5b4 in vgonel (vp=0xfffff800111733b0) at vnode_if.h:830 >>>>> #16 0xffffffff809eca49 in vrecycle (vp=0xfffff800111733b0) at /usr/src/sys/kern/vfs_subr.c:2703 >>>>> #17 0xffffffff81ada52d in zfs_freebsd_inactive (ap=) >>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c:6540 >>>>> #18 0xffffffff80e73427 in VOP_INACTIVE_APV (vop=, a=) at vnode_if.c:1953 >>>>> #19 0xffffffff809eb382 in vinactive (vp=0xfffff800111733b0, td=0xfffff800113d8000) at vnode_if.h:807 >>>>> #20 0xffffffff809eb772 in vputx (vp=0xfffff800111733b0, func=2) at /usr/src/sys/kern/vfs_subr.c:2306 >>>>> #21 0xffffffff809f401e in kern_rmdirat (td=, fd=, path=, pathseg=) >>>>> at /usr/src/sys/kern/vfs_syscalls.c:3842 >>>>> #22 0xffffffff80d4b3e7 in amd64_syscall (td=0xfffff800113d8000, traced=0) at subr_syscall.c:134 
>>>>> #23 0xffffffff80d30acb in Xfast_syscall () at /usr/src/sys/amd64/amd64/exception.S:396 >>>>> #24 0x00000008008914ea in ?? () >>>>> Previous frame inner to this frame (corrupt stack?) >>>>> Current language: auto; currently minimal >>>>> >>>>> If setup FreeBSD 10.3 BETA (on GENERIC or custom kernel config): >>>>> Copyright 2004 Free Software Foundation, Inc. >>>>> GDB is free software, covered by the GNU General Public License, and you are >>>>> welcome to change it and/or distribute copies of it under certain conditions. >>>>> Type "show copying" to see the conditions. >>>>> There is absolutely no warranty for GDB. Type "show warranty" for details. >>>>> This GDB was configured as "amd64-marcel-freebsd"... >>>>> >>>>> Unread portion of the kernel message buffer: >>>>> panic: solaris assert: c < (1ULL << 24) >> 9 (0x7fffffffffffff < 0x8000), file: /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c, line: 273 >>>>> cpuid = 13 >>>>> KDB: stack backtrace: >>>>> #0 0xffffffff8098f000 at kdb_backtrace+0x60 >>>>> #1 0xffffffff80951d06 at vpanic+0x126 >>>>> #2 0xffffffff80951bd3 at panic+0x43 >>>>> #3 0xffffffff81e0022f at assfail3+0x2f >>>>> #4 0xffffffff81cacc70 at zio_buf_alloc+0x50 >>>>> #5 0xffffffff81c2b8f2 at arc_get_data_buf+0x262 >>>>> #6 0xffffffff81c2b657 at arc_buf_alloc+0xc7 >>>>> #7 0xffffffff81c2d601 at arc_read+0x1c1 >>>>> #8 0xffffffff81c36ce9 at dbuf_read+0x6b9 >>>>> #9 0xffffffff81c3e415 at dmu_spill_hold_existing+0xc5 >>>>> #10 0xffffffff81c73707 at sa_attr_op+0x167 >>>>> #11 0xffffffff81c75972 at sa_lookup+0x52 >>>>> #12 0xffffffff81cbf8da at zfs_rmnode+0x2ba >>>>> #13 0xffffffff81cdd75e at zfs_freebsd_reclaim+0x4e >>>>> #14 0xffffffff80e81c27 at VOP_RECLAIM_APV+0xa7 >>>>> #15 0xffffffff809f9581 at vgonel+0x221 >>>>> #16 0xffffffff809f9a19 at vrecycle+0x59 >>>>> #17 0xffffffff81cdd6fd at zfs_freebsd_inactive+0xd >>>>> Uptime: 11m11s >>>>> Dumping 1368 out of 24542 MB:..2%..11%..22%..31%..41%..51%..61%..71%..81%..91% 
>>>>> >>>>> Reading symbols from /boot/kernel/if_lagg.ko.symbols...done. >>>>> Loaded symbols for /boot/kernel/if_lagg.ko.symbols >>>>> Reading symbols from /boot/kernel/aio.ko.symbols...done. >>>>> Loaded symbols for /boot/kernel/aio.ko.symbols >>>>> Reading symbols from /boot/kernel/ichsmb.ko.symbols...done. >>>>> Loaded symbols for /boot/kernel/ichsmb.ko.symbols >>>>> Reading symbols from /boot/kernel/smbus.ko.symbols...done. >>>>> Loaded symbols for /boot/kernel/smbus.ko.symbols >>>>> Reading symbols from /boot/kernel/ipmi.ko.symbols...done. >>>>> Loaded symbols for /boot/kernel/ipmi.ko.symbols >>>>> Reading symbols from /boot/kernel/zfs.ko.symbols...done. >>>>> Loaded symbols for /boot/kernel/zfs.ko.symbols >>>>> Reading symbols from /boot/kernel/opensolaris.ko.symbols...done. >>>>> Loaded symbols for /boot/kernel/opensolaris.ko.symbols >>>>> Reading symbols from /boot/kernel/ums.ko.symbols...done. >>>>> Loaded symbols for /boot/kernel/ums.ko.symbols >>>>> Reading symbols from /boot/kernel/ipfw.ko.symbols...done. >>>>> Loaded symbols for /boot/kernel/ipfw.ko.symbols >>>>> #0 doadump (textdump=) at pcpu.h:219 >>>>> 219 pcpu.h: No such file or directory. 
>>>>> in pcpu.h >>>>> (kgdb) backtrace >>>>> #0 doadump (textdump=) at pcpu.h:219 >>>>> #1 0xffffffff80951962 in kern_reboot (howto=260) at /usr/src/sys/kern/kern_shutdown.c:486 >>>>> #2 0xffffffff80951d45 in vpanic (fmt=, ap=) at /usr/src/sys/kern/kern_shutdown.c:889 >>>>> #3 0xffffffff80951bd3 in panic (fmt=0x0) at /usr/src/sys/kern/kern_shutdown.c:818 >>>>> #4 0xffffffff81e0022f in assfail3 (a=, lv=, op=, rv=, f=, >>>>> l=) at /usr/src/sys/modules/opensolaris/../../cddl/compat/opensolaris/kern/opensolaris_cmn_err.c:91 >>>>> #5 0xffffffff81cacc70 in zio_buf_alloc (size=0) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:273 >>>>> #6 0xffffffff81c2b8f2 in arc_get_data_buf (buf=) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:3880 >>>>> #7 0xffffffff81c2b657 in arc_buf_alloc (spa=, size=, tag=0x0, type=) >>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:2057 >>>>> #8 0xffffffff81c2d601 in arc_read (pio=0xfffff8000fad03b0, spa=0xfffff8000f63d000, bp=0xfffffe000e509980, done=0xffffffff81c3aed0 , private=0xfffff8000fdd6360, >>>>> priority=ZIO_PRIORITY_SYNC_READ, zio_flags=-2117882160, arc_flags=0xfffffe02925483c4, zb=0xfffff8000fdd6360) >>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:4397 >>>>> #9 0xffffffff81c36ce9 in dbuf_read (db=0xfffff8000fdd6360, zio=0x0, flags=) >>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dbuf.c:682 >>>>> #10 0xffffffff81c3e415 in dmu_spill_hold_existing (bonus=0xfffff8001f312438, tag=0x0, dbp=0xfffff80062d4e7d0) >>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu.c:333 >>>>> #11 0xffffffff81c73707 in sa_attr_op (hdl=0xfffff80062d4e770, bulk=0xfffffe0292548528, count=1, data_op=SA_LOOKUP, tx=0x0) >>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/sa.c:305 >>>>> #12 0xffffffff81c75972 in 
sa_lookup (hdl=0xfffff80062d4e770, attr=, buf=, buflen=) >>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/sa.c:1443 >>>>> #13 0xffffffff81cbf8da in zfs_rmnode (zp=0xfffff80062d4c8a0) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_dir.c:633 >>>>> #14 0xffffffff81cdd75e in zfs_freebsd_reclaim (ap=) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c:6619 >>>>> #15 0xffffffff80e81c27 in VOP_RECLAIM_APV (vop=, a=) at vnode_if.c:2019 >>>>> #16 0xffffffff809f9581 in vgonel (vp=0xfffff8000f1beb10) at vnode_if.h:830 >>>>> #17 0xffffffff809f9a19 in vrecycle (vp=0xfffff8000f1beb10) at /usr/src/sys/kern/vfs_subr.c:2951 >>>>> #18 0xffffffff81cdd6fd in zfs_freebsd_inactive (ap=) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c:6590 >>>>> #19 0xffffffff80e81b17 in VOP_INACTIVE_APV (vop=, a=) at vnode_if.c:1953 >>>>> #20 0xffffffff809f8322 in vinactive (vp=0xfffff8000f1beb10, td=0xfffff8000f9f34b0) at vnode_if.h:807 >>>>> #21 0xffffffff809f8712 in vputx (vp=0xfffff8000f1beb10, func=2) at /usr/src/sys/kern/vfs_subr.c:2547 >>>>> #22 0xffffffff80a0137e in kern_rmdirat (td=, fd=, path=, pathseg=) >>>>> at /usr/src/sys/kern/vfs_syscalls.c:3964 >>>>> #23 0xffffffff80d574bf in amd64_syscall (td=0xfffff8000f9f34b0, traced=0) at subr_syscall.c:141 >>>>> #24 0xffffffff80d3c72b in Xfast_syscall () at /usr/src/sys/amd64/amd64/exception.S:396 >>>>> #25 0x000000080089458a in ?? () >>>>> Previous frame inner to this frame (corrupt stack?) >>>>> Current language: auto; currently minimal >>>>> >>>>> Crash folder (or file) have strange rights: >>>>> d---------+ 3 anna domain users 3 10 10:32 01-Projcts >>>>> d---------+ 2 anna domain users 2 8 21:46 02-Text >>>>> >>>>> How correct kernel panic? 
>>>>> >>>>> _______________________________________________ >>>>> freebsd-fs@freebsd.org mailing list >>>>> https://lists.freebsd.org/mailman/listinfo/freebsd-fs >>>>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" >>>> _______________________________________________ >>>> freebsd-fs@freebsd.org mailing list >>>> https://lists.freebsd.org/mailman/listinfo/freebsd-fs >>>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@freebsd.org Tue Feb 16 09:58:29 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 3683AAA9E5B for ; Tue, 16 Feb 2016 09:58:29 +0000 (UTC) (envelope-from demis@yandex.ru) Received: from forward9p.cmail.yandex.net (forward9p.cmail.yandex.net [IPv6:2a02:6b8:0:1465::101]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "forwards.mail.yandex.net", Issuer "Yandex CA" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id CAF1B1491 for ; Tue, 16 Feb 2016 09:58:28 +0000 (UTC) (envelope-from demis@yandex.ru) Received: from web18g.yandex.ru (web18g.yandex.ru [IPv6:2a02:6b8:0:1402::28]) by forward9p.cmail.yandex.net (Yandex) with ESMTP id 64E2121688; Tue, 16 Feb 2016 12:58:25 +0300 (MSK) Received: from web18g.yandex.ru (localhost [127.0.0.1]) by web18g.yandex.ru (Yandex) with ESMTP id A148B42A016D; Tue, 16 Feb 2016 12:58:24 +0300 (MSK) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yandex.ru; s=mail; t=1455616705; bh=Kz8vo1MVvUBXUbWPo3duS/ad2/sYjFsaAH058AE83EM=; h=From:To:In-Reply-To:References:Subject:Date; b=HlLZ0Z4eKy9vBJt1ustN3uSnBFGRmgo+6DIBd+LtebLa5QLy3029hMe7hV5hlTWCP FzCe4i5r4a4e9taGsK2oeRCPc1utQh84MDoFt62hmTko9AKu1bX0PXXfunVbxIfeD3 rFa22r4PJGNFqP2m6HaqD/N1GhEOzBEiQWlSEg4o= Received: by web18g.yandex.ru with HTTP; Tue, 16 Feb 2016 12:58:22 +0300 From: DemIS To: Steven Hartland , 
"freebsd-fs@freebsd.org" In-Reply-To: <56C2E7A6.9090004@multiplay.co.uk> References: <1061671455578760@web3g.yandex.ru> <56C2655F.9010809@multiplay.co.uk> <1076701455583595@web2g.yandex.ru> <56C2782E.2010404@multiplay.co.uk> <2311371455610101@web11j.yandex.ru> <56C2E7A6.9090004@multiplay.co.uk> Subject: Re: Kernel panic zio.c, line: 270 FreeBSD 10.2 (or 10.3) MIME-Version: 1.0 Message-Id: <2998351455616702@web18g.yandex.ru> X-Mailer: Yamail [ http://yandex.ru ] 5.0 Date: Tue, 16 Feb 2016 12:58:22 +0300 Content-Transfer-Encoding: 8bit Content-Type: text/plain; charset=koi8-r X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 16 Feb 2016 09:58:29 -0000 (kgdb) print bp No symbol "bp" in current context. (kgdb) print *bp No symbol "bp" in current context. 16.02.2016, 12:10, "Steven Hartland" : > I need the values of the vars specified, so you'll need to: > print bp > If it reports just an address try: > print *bp > etc. > > On 16/02/2016 08:08, DemIS wrote: >> frame 8 >> #8 0xffffffff81c3669c in dbuf_read (db=0xfffff8013ca46380, zio=0x0, flags=6) >> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dbuf.c:573 >> 573 (void) arc_read(zio, db->db_objset->os_spa, db->db_blkptr, >> Current language: auto; currently minimal >> >> frame 6 >> #6 0xffffffff81c2b9f8 in arc_get_data_buf (buf=) >> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:2898 >> 2898 buf->b_data = zio_buf_alloc(size); >> >> 16.02.2016, 04:15, "Steven Hartland" : >>> You don't need the system live just the kernel and the crash dump to get >>> those values. >>> >>> On 16/02/2016 00:46, DemIS wrote: >>>> Today I fell back to version 10.2 . Users work on the server . To reinstall 10.3 I need 4:00 . >>>> Therefore, it is possible only on weekends . 
But this has the same effect on the version 10.2 >>>> >>>> 16.02.2016, 02:55, "Steven Hartland" : >>>>> That sounds like you have some pool data corruption, from the 10.3 >>>>> version dump can you print out the following: >>>>> 1. frame 8: bp and size >>>>> 2. frame 6: buf->b_hdr >>>>> >>>>> On 15/02/2016 23:26, DemIS wrote: >>>>>> Any one knows about problem? >>>>>> Server: SuperMicro Model:SYS-6026T-6RF+, MB:X8DTU-6F+, RAM 24 DDR3, two XEON >>>>>> RAM: KVR1333D3E9S/4G - DDR3, 1333MHz, ECC, CL9, X8, 1.5V, Unbuffered, DIMM >>>>>> Version:uname -a >>>>>> >>>>>> FreeBSD teo.some.loc 10.2-RELEASE-p12 FreeBSD 10.2-RELEASE-p12 #0: Sat Feb 13 18:04:04 MSK 2016 demis@teo.some.loc:/usr/obj/usr/src/sys/TEO amd64 >>>>>> (on GENERIC or custom kernel config persist too !!!) >>>>>> >>>>>> Memtest86+ v.4.40 (ECC mode) test - OK. >>>>>> Every disk checked too (physically - mhdd, logically - zpool scrub, and additional checkit in external company recovery disk). No errors. >>>>>> >>>>>> Part of df -H >>>>>> Filesystem Size Used Avail Capacity Mounted on >>>>>> hdd/usr/wf 6,6T 4,1T 2,5T 62% /hdd/usr/wf >>>>>> >>>>>> zpool status hdd >>>>>> pool: hdd >>>>>> state: ONLINE >>>>>> status: Some supported features are not enabled on the pool. The pool can >>>>>> still be used, but some features are unavailable. >>>>>> action: Enable all features using 'zpool upgrade'. Once this is done, >>>>>> the pool may no longer be accessible by software that does not support >>>>>> the features. See zpool-features(7) for details. >>>>>> scan: scrub repaired 0 in 14h57m with 0 errors on Thu Feb 11 03:35:43 2016 >>>>>> config: >>>>>> >>>>>> NAME STATE READ WRITE CKSUM >>>>>> hdd ONLINE 0 0 0 >>>>>> raidz2-0 ONLINE 0 0 0 >>>>>> mfid1p1 ONLINE 0 0 0 >>>>>> mfid2p1 ONLINE 0 0 0 >>>>>> mfid3p1 ONLINE 0 0 0 >>>>>> mfid4p1 ONLINE 0 0 0 >>>>>> mfid5p1 ONLINE 0 0 0 >>>>>> >>>>>> errors: No known data errors >>>>>> >>>>>> hdd - is My zfs volume. 
>>>>>> When I run command like: >>>>>> rm /hdd/usr/some/path/to/file >>>>>> or >>>>>> rm /hdd/usr/some/path/to/folder >>>>>> or >>>>>> chown root:wheel /hdd/usr/some/path/to/file >>>>>> or >>>>>> chown root:wheel /hdd/usr/some/path/to/folder >>>>>> or >>>>>> setfacl ... to /hdd/usr/some/path/to/file >>>>>> >>>>>> I'm get kernel panic: >>>>>> GNU gdb 6.1.1 [FreeBSD] >>>>>> Copyright 2004 Free Software Foundation, Inc. >>>>>> GDB is free software, covered by the GNU General Public License, and you are >>>>>> welcome to change it and/or distribute copies of it under certain conditions. >>>>>> Type "show copying" to see the conditions. >>>>>> There is absolutely no warranty for GDB. Type "show warranty" for details. >>>>>> This GDB was configured as "amd64-marcel-freebsd"... >>>>>> >>>>>> Unread portion of the kernel message buffer: >>>>>> panic: solaris assert: c < (1ULL << 24) >> 9 (0x7fffffffffffff < 0x8000), file: /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c, line: 270 >>>>>> cpuid = 9 >>>>>> KDB: stack backtrace: >>>>>> #0 0xffffffff80984ef0 at kdb_backtrace+0x60 >>>>>> #1 0xffffffff80948aa6 at vpanic+0x126 >>>>>> #2 0xffffffff80948973 at panic+0x43 >>>>>> #3 0xffffffff81c0222f at assfail3+0x2f >>>>>> #4 0xffffffff81aa9d40 at zio_buf_alloc+0x50 >>>>>> #5 0xffffffff81a2b9f8 at arc_get_data_buf+0x358 >>>>>> #6 0xffffffff81a2e20a at arc_read+0x1ea >>>>>> #7 0xffffffff81a3669c at dbuf_read+0x6ac >>>>>> #8 0xffffffff81a3d8bf at dmu_spill_hold_existing+0xbf >>>>>> #9 0xffffffff81a70dd7 at sa_attr_op+0x167 >>>>>> #10 0xffffffff81a72ffb at sa_lookup+0x4b >>>>>> #11 0xffffffff81abc82a at zfs_rmnode+0x2ba >>>>>> #12 0xffffffff81ada58e at zfs_freebsd_reclaim+0x4e >>>>>> #13 0xffffffff80e73537 at VOP_RECLAIM_APV+0xa7 >>>>>> #14 0xffffffff809ec5b4 at vgonel+0x1b4 >>>>>> #15 0xffffffff809eca49 at vrecycle+0x59 >>>>>> #16 0xffffffff81ada52d at zfs_freebsd_inactive+0xd >>>>>> #17 0xffffffff80e73427 at VOP_INACTIVE_APV+0xa7 >>>>>> Uptime: 9m31s 
>>>>>> Dumping 1286 out of 24543 MB:..2%..12%..22%..32%..42%..51%..61%..71% (CTRL-C to abort) ..81%..91% >>>>>> >>>>>> Reading symbols from /boot/kernel/zfs.ko.symbols...done. >>>>>> Loaded symbols for /boot/kernel/zfs.ko.symbols >>>>>> Reading symbols from /boot/kernel/opensolaris.ko.symbols...done. >>>>>> Loaded symbols for /boot/kernel/opensolaris.ko.symbols >>>>>> Reading symbols from /boot/kernel/if_lagg.ko.symbols...done. >>>>>> Loaded symbols for /boot/kernel/if_lagg.ko.symbols >>>>>> Reading symbols from /boot/kernel/ums.ko.symbols...done. >>>>>> Loaded symbols for /boot/kernel/ums.ko.symbols >>>>>> Reading symbols from /boot/kernel/ipfw.ko.symbols...done. >>>>>> Loaded symbols for /boot/kernel/ipfw.ko.symbols >>>>>> #0 doadump (textdump=) at pcpu.h:219 >>>>>> 219 pcpu.h: No such file or directory. >>>>>> in pcpu.h >>>>>> (kgdb) bt >>>>>> #0 doadump (textdump=) at pcpu.h:219 >>>>>> #1 0xffffffff80948702 in kern_reboot (howto=260) at /usr/src/sys/kern/kern_shutdown.c:451 >>>>>> #2 0xffffffff80948ae5 in vpanic (fmt=, ap=) at /usr/src/sys/kern/kern_shutdown.c:758 >>>>>> #3 0xffffffff80948973 in panic (fmt=0x0) at /usr/src/sys/kern/kern_shutdown.c:687 >>>>>> #4 0xffffffff81c0222f in assfail3 (a=, lv=, op=, rv=, >>>>>> f=, l=) at /usr/src/sys/modules/opensolaris/../../cddl/compat/opensolaris/kern/opensolaris_cmn_err.c:91 >>>>>> #5 0xffffffff81aa9d40 in zio_buf_alloc (size=0) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:270 >>>>>> #6 0xffffffff81a2b9f8 in arc_get_data_buf (buf=) >>>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:2898 >>>>>> #7 0xffffffff81a2e20a in arc_read (pio=0xfffff80011791730, spa=0xfffff80011579000, bp=0xfffffe000aee7980, done=0xffffffff81a3a2d0 , >>>>>> private=0xfffff8002244b000, priority=ZIO_PRIORITY_SYNC_READ, zio_flags=-528866606, arc_flags=0xfffffe06727fb3c4, zb=0xffffffff81a3a2d0) >>>>>> at 
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:1551 >>>>>> #8 0xffffffff81a3669c in dbuf_read (db=0xfffff8002244b000, zio=0x0, flags=6) >>>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dbuf.c:573 >>>>>> #9 0xffffffff81a3d8bf in dmu_spill_hold_existing (bonus=0xfffff800223bed20, tag=0x0, dbp=0xfffff800919966b8) >>>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu.c:333 >>>>>> #10 0xffffffff81a70dd7 in sa_attr_op (hdl=0xfffff80091996690, bulk=0xfffffe06727fb528, count=1, data_op=SA_LOOKUP, tx=0x0) >>>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/sa.c:310 >>>>>> #11 0xffffffff81a72ffb in sa_lookup (hdl=0xfffff80091996690, attr=, buf=, buflen=) >>>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/sa.c:1441 >>>>>> #12 0xffffffff81abc82a in zfs_rmnode (zp=0xfffff80091993730) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_dir.c:633 >>>>>> #13 0xffffffff81ada58e in zfs_freebsd_reclaim (ap=) >>>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c:6569 >>>>>> #14 0xffffffff80e73537 in VOP_RECLAIM_APV (vop=, a=) at vnode_if.c:2019 >>>>>> #15 0xffffffff809ec5b4 in vgonel (vp=0xfffff800111733b0) at vnode_if.h:830 >>>>>> #16 0xffffffff809eca49 in vrecycle (vp=0xfffff800111733b0) at /usr/src/sys/kern/vfs_subr.c:2703 >>>>>> #17 0xffffffff81ada52d in zfs_freebsd_inactive (ap=) >>>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c:6540 >>>>>> #18 0xffffffff80e73427 in VOP_INACTIVE_APV (vop=, a=) at vnode_if.c:1953 >>>>>> #19 0xffffffff809eb382 in vinactive (vp=0xfffff800111733b0, td=0xfffff800113d8000) at vnode_if.h:807 >>>>>> #20 0xffffffff809eb772 in vputx (vp=0xfffff800111733b0, func=2) at /usr/src/sys/kern/vfs_subr.c:2306 >>>>>> #21 0xffffffff809f401e in kern_rmdirat (td=, fd=, path=, pathseg=) 
>>>>>> at /usr/src/sys/kern/vfs_syscalls.c:3842 >>>>>> #22 0xffffffff80d4b3e7 in amd64_syscall (td=0xfffff800113d8000, traced=0) at subr_syscall.c:134 >>>>>> #23 0xffffffff80d30acb in Xfast_syscall () at /usr/src/sys/amd64/amd64/exception.S:396 >>>>>> #24 0x00000008008914ea in ?? () >>>>>> Previous frame inner to this frame (corrupt stack?) >>>>>> Current language: auto; currently minimal >>>>>> >>>>>> If setup FreeBSD 10.3 BETA (on GENERIC or custom kernel config): >>>>>> Copyright 2004 Free Software Foundation, Inc. >>>>>> GDB is free software, covered by the GNU General Public License, and you are >>>>>> welcome to change it and/or distribute copies of it under certain conditions. >>>>>> Type "show copying" to see the conditions. >>>>>> There is absolutely no warranty for GDB. Type "show warranty" for details. >>>>>> This GDB was configured as "amd64-marcel-freebsd"... >>>>>> >>>>>> Unread portion of the kernel message buffer: >>>>>> panic: solaris assert: c < (1ULL << 24) >> 9 (0x7fffffffffffff < 0x8000), file: /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c, line: 273 >>>>>> cpuid = 13 >>>>>> KDB: stack backtrace: >>>>>> #0 0xffffffff8098f000 at kdb_backtrace+0x60 >>>>>> #1 0xffffffff80951d06 at vpanic+0x126 >>>>>> #2 0xffffffff80951bd3 at panic+0x43 >>>>>> #3 0xffffffff81e0022f at assfail3+0x2f >>>>>> #4 0xffffffff81cacc70 at zio_buf_alloc+0x50 >>>>>> #5 0xffffffff81c2b8f2 at arc_get_data_buf+0x262 >>>>>> #6 0xffffffff81c2b657 at arc_buf_alloc+0xc7 >>>>>> #7 0xffffffff81c2d601 at arc_read+0x1c1 >>>>>> #8 0xffffffff81c36ce9 at dbuf_read+0x6b9 >>>>>> #9 0xffffffff81c3e415 at dmu_spill_hold_existing+0xc5 >>>>>> #10 0xffffffff81c73707 at sa_attr_op+0x167 >>>>>> #11 0xffffffff81c75972 at sa_lookup+0x52 >>>>>> #12 0xffffffff81cbf8da at zfs_rmnode+0x2ba >>>>>> #13 0xffffffff81cdd75e at zfs_freebsd_reclaim+0x4e >>>>>> #14 0xffffffff80e81c27 at VOP_RECLAIM_APV+0xa7 >>>>>> #15 0xffffffff809f9581 at vgonel+0x221 >>>>>> #16 
0xffffffff809f9a19 at vrecycle+0x59 >>>>>> #17 0xffffffff81cdd6fd at zfs_freebsd_inactive+0xd >>>>>> Uptime: 11m11s >>>>>> Dumping 1368 out of 24542 MB:..2%..11%..22%..31%..41%..51%..61%..71%..81%..91% >>>>>> >>>>>> Reading symbols from /boot/kernel/if_lagg.ko.symbols...done. >>>>>> Loaded symbols for /boot/kernel/if_lagg.ko.symbols >>>>>> Reading symbols from /boot/kernel/aio.ko.symbols...done. >>>>>> Loaded symbols for /boot/kernel/aio.ko.symbols >>>>>> Reading symbols from /boot/kernel/ichsmb.ko.symbols...done. >>>>>> Loaded symbols for /boot/kernel/ichsmb.ko.symbols >>>>>> Reading symbols from /boot/kernel/smbus.ko.symbols...done. >>>>>> Loaded symbols for /boot/kernel/smbus.ko.symbols >>>>>> Reading symbols from /boot/kernel/ipmi.ko.symbols...done. >>>>>> Loaded symbols for /boot/kernel/ipmi.ko.symbols >>>>>> Reading symbols from /boot/kernel/zfs.ko.symbols...done. >>>>>> Loaded symbols for /boot/kernel/zfs.ko.symbols >>>>>> Reading symbols from /boot/kernel/opensolaris.ko.symbols...done. >>>>>> Loaded symbols for /boot/kernel/opensolaris.ko.symbols >>>>>> Reading symbols from /boot/kernel/ums.ko.symbols...done. >>>>>> Loaded symbols for /boot/kernel/ums.ko.symbols >>>>>> Reading symbols from /boot/kernel/ipfw.ko.symbols...done. >>>>>> Loaded symbols for /boot/kernel/ipfw.ko.symbols >>>>>> #0 doadump (textdump=) at pcpu.h:219 >>>>>> 219 pcpu.h: No such file or directory. 
>>>>>> in pcpu.h >>>>>> (kgdb) backtrace >>>>>> #0 doadump (textdump=) at pcpu.h:219 >>>>>> #1 0xffffffff80951962 in kern_reboot (howto=260) at /usr/src/sys/kern/kern_shutdown.c:486 >>>>>> #2 0xffffffff80951d45 in vpanic (fmt=, ap=) at /usr/src/sys/kern/kern_shutdown.c:889 >>>>>> #3 0xffffffff80951bd3 in panic (fmt=0x0) at /usr/src/sys/kern/kern_shutdown.c:818 >>>>>> #4 0xffffffff81e0022f in assfail3 (a=, lv=, op=, rv=, f=, >>>>>> l=) at /usr/src/sys/modules/opensolaris/../../cddl/compat/opensolaris/kern/opensolaris_cmn_err.c:91 >>>>>> #5 0xffffffff81cacc70 in zio_buf_alloc (size=0) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:273 >>>>>> #6 0xffffffff81c2b8f2 in arc_get_data_buf (buf=) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:3880 >>>>>> #7 0xffffffff81c2b657 in arc_buf_alloc (spa=, size=, tag=0x0, type=) >>>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:2057 >>>>>> #8 0xffffffff81c2d601 in arc_read (pio=0xfffff8000fad03b0, spa=0xfffff8000f63d000, bp=0xfffffe000e509980, done=0xffffffff81c3aed0 , private=0xfffff8000fdd6360, >>>>>> priority=ZIO_PRIORITY_SYNC_READ, zio_flags=-2117882160, arc_flags=0xfffffe02925483c4, zb=0xfffff8000fdd6360) >>>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:4397 >>>>>> #9 0xffffffff81c36ce9 in dbuf_read (db=0xfffff8000fdd6360, zio=0x0, flags=) >>>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dbuf.c:682 >>>>>> #10 0xffffffff81c3e415 in dmu_spill_hold_existing (bonus=0xfffff8001f312438, tag=0x0, dbp=0xfffff80062d4e7d0) >>>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu.c:333 >>>>>> #11 0xffffffff81c73707 in sa_attr_op (hdl=0xfffff80062d4e770, bulk=0xfffffe0292548528, count=1, data_op=SA_LOOKUP, tx=0x0) >>>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/sa.c:305 >>>>>> #12 
0xffffffff81c75972 in sa_lookup (hdl=0xfffff80062d4e770, attr=, buf=, buflen=) >>>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/sa.c:1443 >>>>>> #13 0xffffffff81cbf8da in zfs_rmnode (zp=0xfffff80062d4c8a0) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_dir.c:633 >>>>>> #14 0xffffffff81cdd75e in zfs_freebsd_reclaim (ap=) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c:6619 >>>>>> #15 0xffffffff80e81c27 in VOP_RECLAIM_APV (vop=, a=) at vnode_if.c:2019 >>>>>> #16 0xffffffff809f9581 in vgonel (vp=0xfffff8000f1beb10) at vnode_if.h:830 >>>>>> #17 0xffffffff809f9a19 in vrecycle (vp=0xfffff8000f1beb10) at /usr/src/sys/kern/vfs_subr.c:2951 >>>>>> #18 0xffffffff81cdd6fd in zfs_freebsd_inactive (ap=) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c:6590 >>>>>> #19 0xffffffff80e81b17 in VOP_INACTIVE_APV (vop=, a=) at vnode_if.c:1953 >>>>>> #20 0xffffffff809f8322 in vinactive (vp=0xfffff8000f1beb10, td=0xfffff8000f9f34b0) at vnode_if.h:807 >>>>>> #21 0xffffffff809f8712 in vputx (vp=0xfffff8000f1beb10, func=2) at /usr/src/sys/kern/vfs_subr.c:2547 >>>>>> #22 0xffffffff80a0137e in kern_rmdirat (td=, fd=, path=, pathseg=) >>>>>> at /usr/src/sys/kern/vfs_syscalls.c:3964 >>>>>> #23 0xffffffff80d574bf in amd64_syscall (td=0xfffff8000f9f34b0, traced=0) at subr_syscall.c:141 >>>>>> #24 0xffffffff80d3c72b in Xfast_syscall () at /usr/src/sys/amd64/amd64/exception.S:396 >>>>>> #25 0x000000080089458a in ?? () >>>>>> Previous frame inner to this frame (corrupt stack?) >>>>>> Current language: auto; currently minimal >>>>>> >>>>>> Crash folder (or file) have strange rights: >>>>>> d---------+ 3 anna domain users 3 10 10:32 01-Projcts >>>>>> d---------+ 2 anna domain users 2 8 21:46 02-Text >>>>>> >>>>>> How correct kernel panic? 
>>>>>> >>>>>> _______________________________________________ >>>>>> freebsd-fs@freebsd.org mailing list >>>>>> https://lists.freebsd.org/mailman/listinfo/freebsd-fs >>>>>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" >>>>> _______________________________________________ >>>>> freebsd-fs@freebsd.org mailing list >>>>> https://lists.freebsd.org/mailman/listinfo/freebsd-fs >>>>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@freebsd.org Tue Feb 16 21:23:27 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id D12ECAABEE8 for ; Tue, 16 Feb 2016 21:23:27 +0000 (UTC) (envelope-from demis@yandex.ru) Received: from forward11o.cmail.yandex.net (forward11o.cmail.yandex.net [IPv6:2a02:6b8:0:1a72::1e1]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "forwards.mail.yandex.net", Issuer "Yandex CA" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 6EF2AD6B for ; Tue, 16 Feb 2016 21:23:27 +0000 (UTC) (envelope-from demis@yandex.ru) Received: from web13o.yandex.ru (web13o.yandex.ru [95.108.205.113]) by forward11o.cmail.yandex.net (Yandex) with ESMTP id 1FE3620BFE; Wed, 17 Feb 2016 00:23:24 +0300 (MSK) Received: from web13o.yandex.ru (localhost [127.0.0.1]) by web13o.yandex.ru (Yandex) with ESMTP id 8EEE5488220A; Wed, 17 Feb 2016 00:23:23 +0300 (MSK) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yandex.ru; s=mail; t=1455657803; bh=cDbWD7loRvOnFnYr4FLFEbrGXl4msnXfdOUq+Aoc1qg=; h=From:To:In-Reply-To:References:Subject:Date; b=d4vU7r+lwd4GBMsVYoxfRWpjJjL2AZETDtWiEI13IalcCSNWXVOH+5FO+dR4uzGpq xL6a9VIIpleTVnPzbrCJfE2WGDPwFWBEH/R9yFs5kVjRu/6kuc/bCJOvmlXfRfXJ7U Hv4EIKswDyJS0LWPM2v+eBFCPXec8lE6LNmqK8dk= Received: by web13o.yandex.ru with HTTP; Wed, 17 Feb 2016 00:23:22 +0300 From: DemIS To: Steven Hartland , 
"freebsd-fs@freebsd.org" In-Reply-To: <56C2E7A6.9090004@multiplay.co.uk> References: <1061671455578760@web3g.yandex.ru> <56C2655F.9010809@multiplay.co.uk> <1076701455583595@web2g.yandex.ru> <56C2782E.2010404@multiplay.co.uk> <2311371455610101@web11j.yandex.ru> <56C2E7A6.9090004@multiplay.co.uk> Subject: Re: Kernel panic zio.c, line: 270 FreeBSD 10.2 (or 10.3) MIME-Version: 1.0 Message-Id: <1903871455657802@web13o.yandex.ru> X-Mailer: Yamail [ http://yandex.ru ] 5.0 Date: Wed, 17 Feb 2016 00:23:22 +0300 Content-Transfer-Encoding: 8bit Content-Type: text/plain; charset=koi8-r X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 16 Feb 2016 21:23:28 -0000 New information: After 37 hours works zdb -cc -AAA hdd , but why zpool scrub hdd not have any error? Traversing all blocks to verify checksums and verify nothing leaked ... loading space map for vdev 0 of 1, metaslab 108 of 109 ... 
815G completed ( 49MB/s) estimated time remaining: 37hr 00min 56sec zdb_blkptr_cb: Got error 122 reading <9586, 8417926, 0, 0> -- skipping 815G completed ( 49MB/s) estimated time remaining: 37hr 02min 59sec zdb_blkptr_cb: Got error 122 reading <9697, 8417983, 0, 0> -- skipping 7.14T completed ( 66MB/s) estimated time remaining: 0hr 00min 00sec Error counts: errno count 122 2 leaked space: vdev 0, offset 0xaf19dc51000, size 12288 leaked space: vdev 0, offset 0xaf19dc4b000, size 12288 leaked space: vdev 0, offset 0xaf19dc5d000, size 12288 leaked space: vdev 0, offset 0xaf19dc6f000, size 36864 leaked space: vdev 0, offset 0xaf19dc63000, size 12288 leaked space: vdev 0, offset 0xaf19dc57000, size 12288 leaked space: vdev 0, offset 0xaf19dc90000, size 12288 leaked space: vdev 0, offset 0xaf19dca2000, size 12288 leaked space: vdev 0, offset 0xaf19dc99000, size 12288 leaked space: vdev 0, offset 0xaf19dc81000, size 12288 leaked space: vdev 0, offset 0xaf19dcb4000, size 12288 leaked space: vdev 0, offset 0xaf19dcc3000, size 12288 leaked space: vdev 0, offset 0xaf19dcba000, size 12288 leaked space: vdev 0, offset 0xaf19dccf000, size 12288 leaked space: vdev 0, offset 0xaf19dcc9000, size 12288 leaked space: vdev 0, offset 0xaf19dcde000, size 12288 leaked space: vdev 0, offset 0xaf19dcf3000, size 12288 leaked space: vdev 0, offset 0xaf19dced000, size 12288 leaked space: vdev 0, offset 0xaf19dce7000, size 12288 leaked space: vdev 0, offset 0xaf19dcd5000, size 24576 leaked space: vdev 0, offset 0xaf19dcae000, size 12288 leaked space: vdev 0, offset 0xaf19dd08000, size 12288 leaked space: vdev 0, offset 0xaf19dd1a000, size 12288 leaked space: vdev 0, offset 0xaf19dd11000, size 24576 leaked space: vdev 0, offset 0xaf19dd32000, size 12288 leaked space: vdev 0, offset 0xaf19dd3e000, size 12288 leaked space: vdev 0, offset 0xaf19dd38000, size 12288 leaked space: vdev 0, offset 0xaf19dd20000, size 24576 leaked space: vdev 0, offset 0xaf19dd4d000, size 24576 leaked space: vdev 0, 
offset 0xaf19dd68000, size 12288 leaked space: vdev 0, offset 0xaf19dd5f000, size 24576 leaked space: vdev 0, offset 0xaf19dd77000, size 12288 leaked space: vdev 0, offset 0xaf19dd86000, size 12288 leaked space: vdev 0, offset 0xaf19dd80000, size 12288 leaked space: vdev 0, offset 0xaf19dd6e000, size 24576 leaked space: vdev 0, offset 0xaf19dd47000, size 12288 leaked space: vdev 0, offset 0xaf19dd92000, size 12288 leaked space: vdev 0, offset 0xaf19dda1000, size 12288 leaked space: vdev 0, offset 0xaf19dd9b000, size 12288 leaked space: vdev 0, offset 0xaf2e440b000, size 12288 leaked space: vdev 0, offset 0xaf2e441d000, size 12288 leaked space: vdev 0, offset 0xaf2e447d000, size 12288 leaked space: vdev 0, offset 0xaf2e4429000, size 24576 leaked space: vdev 0, offset 0xaf2e4411000, size 12288 leaked space: vdev 0, offset 0xaf19dda7000, size 12288 leaked space: vdev 0, offset 0xaf19dd8c000, size 12288 leaked space: vdev 0, offset 0xaf19dcff000, size 12288 leaked space: vdev 0, offset 0xd33cddfc000, size 12288 leaked space: vdev 0, offset 0xd33cddf6000, size 12288 leaked space: vdev 0, offset 0xd33cde0e000, size 12288 leaked space: vdev 0, offset 0xd33cde08000, size 12288 leaked space: vdev 0, offset 0xd33cde02000, size 12288 leaked space: vdev 0, offset 0xd33cde2c000, size 12288 leaked space: vdev 0, offset 0xd33cde5f000, size 12288 leaked space: vdev 0, offset 0xd33cde44000, size 12288 leaked space: vdev 0, offset 0xd33cde3b000, size 12288 leaked space: vdev 0, offset 0xd33cde1a000, size 36864 leaked space: vdev 0, offset 0xd33cde71000, size 12288 leaked space: vdev 0, offset 0xd33cde80000, size 12288 leaked space: vdev 0, offset 0xd33cde77000, size 12288 leaked space: vdev 0, offset 0xd33cde8c000, size 12288 leaked space: vdev 0, offset 0xd33cde86000, size 12288 leaked space: vdev 0, offset 0xd33cde9b000, size 12288 leaked space: vdev 0, offset 0xd33cdeaa000, size 12288 leaked space: vdev 0, offset 0xd33cdeb0000, size 12288 leaked space: vdev 0, offset 
0xd33cdea4000, size 12288 leaked space: vdev 0, offset 0xd33cde92000, size 24576 leaked space: vdev 0, offset 0xd33cde6b000, size 12288 leaked space: vdev 0, offset 0xd33cded7000, size 12288 leaked space: vdev 0, offset 0xd33cdee9000, size 12288 leaked space: vdev 0, offset 0xd33cdee0000, size 24576 leaked space: vdev 0, offset 0xd33cdf01000, size 12288 leaked space: vdev 0, offset 0xd33cdf0d000, size 12288 leaked space: vdev 0, offset 0xd33cdf07000, size 12288 leaked space: vdev 0, offset 0xd33cdeef000, size 24576 leaked space: vdev 0, offset 0xd33cdf1c000, size 24576 leaked space: vdev 0, offset 0xd33cdf37000, size 12288 leaked space: vdev 0, offset 0xd33cdf2e000, size 24576 leaked space: vdev 0, offset 0xd33cdf16000, size 12288 leaked space: vdev 0, offset 0xd33cdf46000, size 12288 leaked space: vdev 0, offset 0xd33cdf55000, size 12288 leaked space: vdev 0, offset 0xd33cdf4f000, size 12288 leaked space: vdev 0, offset 0xd33cdf61000, size 12288 leaked space: vdev 0, offset 0xd33cdf6a000, size 24576 leaked space: vdev 0, offset 0xd33cdf5b000, size 12288 leaked space: vdev 0, offset 0xd3530d71000, size 12288 leaked space: vdev 0, offset 0xd3530d83000, size 12288 leaked space: vdev 0, offset 0xd3532043000, size 12288 leaked space: vdev 0, offset 0xd3530d8f000, size 24576 leaked space: vdev 0, offset 0xd3530d77000, size 12288 leaked space: vdev 0, offset 0xd33cdf76000, size 12288 leaked space: vdev 0, offset 0xd33cdf3d000, size 24576 leaked space: vdev 0, offset 0xd33cdece000, size 12288 block traversal size 7855262306304 != alloc 7855263682560 (leaked 1376256) bp count: 39832472 ganged count: 0 bp logical: 4631741891584 avg: 116280 bp physical: 4595599106048 avg: 115373 compression: 1.01 bp allocated: 7855262306304 avg: 197207 compression: 0.59 bp deduped: 0 ref>1: 0 deduplication: 1.00 SPA allocated: 7855263682560 used: 52.44% Dittoed blocks on same vdev: 3448060 16.02.2016, 12:10, "Steven Hartland" : > I need the values of the vars specified, so you'll need to: > 
print bp > If it reports just an address try: > print *bp > etc. > > On 16/02/2016 08:08, DemIS wrote: >> frame 8 >> #8 0xffffffff81c3669c in dbuf_read (db=0xfffff8013ca46380, zio=0x0, flags=6) >> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dbuf.c:573 >> 573 (void) arc_read(zio, db->db_objset->os_spa, db->db_blkptr, >> Current language: auto; currently minimal >> >> frame 6 >> #6 0xffffffff81c2b9f8 in arc_get_data_buf (buf=) >> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:2898 >> 2898 buf->b_data = zio_buf_alloc(size); >> >> 16.02.2016, 04:15, "Steven Hartland" : >>> You don't need the system live just the kernel and the crash dump to get >>> those values. >>> >>> On 16/02/2016 00:46, DemIS wrote: >>>> Today I fell back to version 10.2 . Users work on the server . To reinstall 10.3 I need 4:00 . >>>> Therefore, it is possible only on weekends . But this has the same effect on the version 10.2 >>>> >>>> 16.02.2016, 02:55, "Steven Hartland" : >>>>> That sounds like you have some pool data corruption, from the 10.3 >>>>> version dump can you print out the following: >>>>> 1. frame 8: bp and size >>>>> 2. frame 6: buf->b_hdr >>>>> >>>>> On 15/02/2016 23:26, DemIS wrote: >>>>>> Any one knows about problem? >>>>>> Server: SuperMicro Model:SYS-6026T-6RF+, MB:X8DTU-6F+, RAM 24 DDR3, two XEON >>>>>> RAM: KVR1333D3E9S/4G - DDR3, 1333MHz, ECC, CL9, X8, 1.5V, Unbuffered, DIMM >>>>>> Version:uname -a >>>>>> >>>>>> FreeBSD teo.some.loc 10.2-RELEASE-p12 FreeBSD 10.2-RELEASE-p12 #0: Sat Feb 13 18:04:04 MSK 2016 demis@teo.some.loc:/usr/obj/usr/src/sys/TEO amd64 >>>>>> (on GENERIC or custom kernel config persist too !!!) >>>>>> >>>>>> Memtest86+ v.4.40 (ECC mode) test - OK. >>>>>> Every disk checked too (physically - mhdd, logically - zpool scrub, and additional checkit in external company recovery disk). No errors. 
>>>>>> >>>>>> Part of df -H >>>>>> Filesystem Size Used Avail Capacity Mounted on >>>>>> hdd/usr/wf 6,6T 4,1T 2,5T 62% /hdd/usr/wf >>>>>> >>>>>> zpool status hdd >>>>>> pool: hdd >>>>>> state: ONLINE >>>>>> status: Some supported features are not enabled on the pool. The pool can >>>>>> still be used, but some features are unavailable. >>>>>> action: Enable all features using 'zpool upgrade'. Once this is done, >>>>>> the pool may no longer be accessible by software that does not support >>>>>> the features. See zpool-features(7) for details. >>>>>> scan: scrub repaired 0 in 14h57m with 0 errors on Thu Feb 11 03:35:43 2016 >>>>>> config: >>>>>> >>>>>> NAME STATE READ WRITE CKSUM >>>>>> hdd ONLINE 0 0 0 >>>>>> raidz2-0 ONLINE 0 0 0 >>>>>> mfid1p1 ONLINE 0 0 0 >>>>>> mfid2p1 ONLINE 0 0 0 >>>>>> mfid3p1 ONLINE 0 0 0 >>>>>> mfid4p1 ONLINE 0 0 0 >>>>>> mfid5p1 ONLINE 0 0 0 >>>>>> >>>>>> errors: No known data errors >>>>>> >>>>>> hdd - is My zfs volume. >>>>>> When I run command like: >>>>>> rm /hdd/usr/some/path/to/file >>>>>> or >>>>>> rm /hdd/usr/some/path/to/folder >>>>>> or >>>>>> chown root:wheel /hdd/usr/some/path/to/file >>>>>> or >>>>>> chown root:wheel /hdd/usr/some/path/to/folder >>>>>> or >>>>>> setfacl ... to /hdd/usr/some/path/to/file >>>>>> >>>>>> I'm get kernel panic: >>>>>> GNU gdb 6.1.1 [FreeBSD] >>>>>> Copyright 2004 Free Software Foundation, Inc. >>>>>> GDB is free software, covered by the GNU General Public License, and you are >>>>>> welcome to change it and/or distribute copies of it under certain conditions. >>>>>> Type "show copying" to see the conditions. >>>>>> There is absolutely no warranty for GDB. Type "show warranty" for details. >>>>>> This GDB was configured as "amd64-marcel-freebsd"... 
>>>>>> >>>>>> Unread portion of the kernel message buffer: >>>>>> panic: solaris assert: c < (1ULL << 24) >> 9 (0x7fffffffffffff < 0x8000), file: /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c, line: 270 >>>>>> cpuid = 9 >>>>>> KDB: stack backtrace: >>>>>> #0 0xffffffff80984ef0 at kdb_backtrace+0x60 >>>>>> #1 0xffffffff80948aa6 at vpanic+0x126 >>>>>> #2 0xffffffff80948973 at panic+0x43 >>>>>> #3 0xffffffff81c0222f at assfail3+0x2f >>>>>> #4 0xffffffff81aa9d40 at zio_buf_alloc+0x50 >>>>>> #5 0xffffffff81a2b9f8 at arc_get_data_buf+0x358 >>>>>> #6 0xffffffff81a2e20a at arc_read+0x1ea >>>>>> #7 0xffffffff81a3669c at dbuf_read+0x6ac >>>>>> #8 0xffffffff81a3d8bf at dmu_spill_hold_existing+0xbf >>>>>> #9 0xffffffff81a70dd7 at sa_attr_op+0x167 >>>>>> #10 0xffffffff81a72ffb at sa_lookup+0x4b >>>>>> #11 0xffffffff81abc82a at zfs_rmnode+0x2ba >>>>>> #12 0xffffffff81ada58e at zfs_freebsd_reclaim+0x4e >>>>>> #13 0xffffffff80e73537 at VOP_RECLAIM_APV+0xa7 >>>>>> #14 0xffffffff809ec5b4 at vgonel+0x1b4 >>>>>> #15 0xffffffff809eca49 at vrecycle+0x59 >>>>>> #16 0xffffffff81ada52d at zfs_freebsd_inactive+0xd >>>>>> #17 0xffffffff80e73427 at VOP_INACTIVE_APV+0xa7 >>>>>> Uptime: 9m31s >>>>>> Dumping 1286 out of 24543 MB:..2%..12%..22%..32%..42%..51%..61%..71% (CTRL-C to abort) ..81%..91% >>>>>> >>>>>> Reading symbols from /boot/kernel/zfs.ko.symbols...done. >>>>>> Loaded symbols for /boot/kernel/zfs.ko.symbols >>>>>> Reading symbols from /boot/kernel/opensolaris.ko.symbols...done. >>>>>> Loaded symbols for /boot/kernel/opensolaris.ko.symbols >>>>>> Reading symbols from /boot/kernel/if_lagg.ko.symbols...done. >>>>>> Loaded symbols for /boot/kernel/if_lagg.ko.symbols >>>>>> Reading symbols from /boot/kernel/ums.ko.symbols...done. >>>>>> Loaded symbols for /boot/kernel/ums.ko.symbols >>>>>> Reading symbols from /boot/kernel/ipfw.ko.symbols...done. 
>>>>>> Loaded symbols for /boot/kernel/ipfw.ko.symbols >>>>>> #0 doadump (textdump=) at pcpu.h:219 >>>>>> 219 pcpu.h: No such file or directory. >>>>>> in pcpu.h >>>>>> (kgdb) bt >>>>>> #0 doadump (textdump=) at pcpu.h:219 >>>>>> #1 0xffffffff80948702 in kern_reboot (howto=260) at /usr/src/sys/kern/kern_shutdown.c:451 >>>>>> #2 0xffffffff80948ae5 in vpanic (fmt=, ap=) at /usr/src/sys/kern/kern_shutdown.c:758 >>>>>> #3 0xffffffff80948973 in panic (fmt=0x0) at /usr/src/sys/kern/kern_shutdown.c:687 >>>>>> #4 0xffffffff81c0222f in assfail3 (a=, lv=, op=, rv=, >>>>>> f=, l=) at /usr/src/sys/modules/opensolaris/../../cddl/compat/opensolaris/kern/opensolaris_cmn_err.c:91 >>>>>> #5 0xffffffff81aa9d40 in zio_buf_alloc (size=0) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:270 >>>>>> #6 0xffffffff81a2b9f8 in arc_get_data_buf (buf=) >>>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:2898 >>>>>> #7 0xffffffff81a2e20a in arc_read (pio=0xfffff80011791730, spa=0xfffff80011579000, bp=0xfffffe000aee7980, done=0xffffffff81a3a2d0 , >>>>>> private=0xfffff8002244b000, priority=ZIO_PRIORITY_SYNC_READ, zio_flags=-528866606, arc_flags=0xfffffe06727fb3c4, zb=0xffffffff81a3a2d0) >>>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:1551 >>>>>> #8 0xffffffff81a3669c in dbuf_read (db=0xfffff8002244b000, zio=0x0, flags=6) >>>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dbuf.c:573 >>>>>> #9 0xffffffff81a3d8bf in dmu_spill_hold_existing (bonus=0xfffff800223bed20, tag=0x0, dbp=0xfffff800919966b8) >>>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu.c:333 >>>>>> #10 0xffffffff81a70dd7 in sa_attr_op (hdl=0xfffff80091996690, bulk=0xfffffe06727fb528, count=1, data_op=SA_LOOKUP, tx=0x0) >>>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/sa.c:310 >>>>>> #11 0xffffffff81a72ffb in sa_lookup 
(hdl=0xfffff80091996690, attr=, buf=, buflen=) >>>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/sa.c:1441 >>>>>> #12 0xffffffff81abc82a in zfs_rmnode (zp=0xfffff80091993730) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_dir.c:633 >>>>>> #13 0xffffffff81ada58e in zfs_freebsd_reclaim (ap=) >>>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c:6569 >>>>>> #14 0xffffffff80e73537 in VOP_RECLAIM_APV (vop=, a=) at vnode_if.c:2019 >>>>>> #15 0xffffffff809ec5b4 in vgonel (vp=0xfffff800111733b0) at vnode_if.h:830 >>>>>> #16 0xffffffff809eca49 in vrecycle (vp=0xfffff800111733b0) at /usr/src/sys/kern/vfs_subr.c:2703 >>>>>> #17 0xffffffff81ada52d in zfs_freebsd_inactive (ap=) >>>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c:6540 >>>>>> #18 0xffffffff80e73427 in VOP_INACTIVE_APV (vop=, a=) at vnode_if.c:1953 >>>>>> #19 0xffffffff809eb382 in vinactive (vp=0xfffff800111733b0, td=0xfffff800113d8000) at vnode_if.h:807 >>>>>> #20 0xffffffff809eb772 in vputx (vp=0xfffff800111733b0, func=2) at /usr/src/sys/kern/vfs_subr.c:2306 >>>>>> #21 0xffffffff809f401e in kern_rmdirat (td=, fd=, path=, pathseg=) >>>>>> at /usr/src/sys/kern/vfs_syscalls.c:3842 >>>>>> #22 0xffffffff80d4b3e7 in amd64_syscall (td=0xfffff800113d8000, traced=0) at subr_syscall.c:134 >>>>>> #23 0xffffffff80d30acb in Xfast_syscall () at /usr/src/sys/amd64/amd64/exception.S:396 >>>>>> #24 0x00000008008914ea in ?? () >>>>>> Previous frame inner to this frame (corrupt stack?) >>>>>> Current language: auto; currently minimal >>>>>> >>>>>> If setup FreeBSD 10.3 BETA (on GENERIC or custom kernel config): >>>>>> Copyright 2004 Free Software Foundation, Inc. >>>>>> GDB is free software, covered by the GNU General Public License, and you are >>>>>> welcome to change it and/or distribute copies of it under certain conditions. >>>>>> Type "show copying" to see the conditions. 
>>>>>> There is absolutely no warranty for GDB. Type "show warranty" for details. >>>>>> This GDB was configured as "amd64-marcel-freebsd"... >>>>>> >>>>>> Unread portion of the kernel message buffer: >>>>>> panic: solaris assert: c < (1ULL << 24) >> 9 (0x7fffffffffffff < 0x8000), file: /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c, line: 273 >>>>>> cpuid = 13 >>>>>> KDB: stack backtrace: >>>>>> #0 0xffffffff8098f000 at kdb_backtrace+0x60 >>>>>> #1 0xffffffff80951d06 at vpanic+0x126 >>>>>> #2 0xffffffff80951bd3 at panic+0x43 >>>>>> #3 0xffffffff81e0022f at assfail3+0x2f >>>>>> #4 0xffffffff81cacc70 at zio_buf_alloc+0x50 >>>>>> #5 0xffffffff81c2b8f2 at arc_get_data_buf+0x262 >>>>>> #6 0xffffffff81c2b657 at arc_buf_alloc+0xc7 >>>>>> #7 0xffffffff81c2d601 at arc_read+0x1c1 >>>>>> #8 0xffffffff81c36ce9 at dbuf_read+0x6b9 >>>>>> #9 0xffffffff81c3e415 at dmu_spill_hold_existing+0xc5 >>>>>> #10 0xffffffff81c73707 at sa_attr_op+0x167 >>>>>> #11 0xffffffff81c75972 at sa_lookup+0x52 >>>>>> #12 0xffffffff81cbf8da at zfs_rmnode+0x2ba >>>>>> #13 0xffffffff81cdd75e at zfs_freebsd_reclaim+0x4e >>>>>> #14 0xffffffff80e81c27 at VOP_RECLAIM_APV+0xa7 >>>>>> #15 0xffffffff809f9581 at vgonel+0x221 >>>>>> #16 0xffffffff809f9a19 at vrecycle+0x59 >>>>>> #17 0xffffffff81cdd6fd at zfs_freebsd_inactive+0xd >>>>>> Uptime: 11m11s >>>>>> Dumping 1368 out of 24542 MB:..2%..11%..22%..31%..41%..51%..61%..71%..81%..91% >>>>>> >>>>>> Reading symbols from /boot/kernel/if_lagg.ko.symbols...done. >>>>>> Loaded symbols for /boot/kernel/if_lagg.ko.symbols >>>>>> Reading symbols from /boot/kernel/aio.ko.symbols...done. >>>>>> Loaded symbols for /boot/kernel/aio.ko.symbols >>>>>> Reading symbols from /boot/kernel/ichsmb.ko.symbols...done. >>>>>> Loaded symbols for /boot/kernel/ichsmb.ko.symbols >>>>>> Reading symbols from /boot/kernel/smbus.ko.symbols...done. 
>>>>>> Loaded symbols for /boot/kernel/smbus.ko.symbols >>>>>> Reading symbols from /boot/kernel/ipmi.ko.symbols...done. >>>>>> Loaded symbols for /boot/kernel/ipmi.ko.symbols >>>>>> Reading symbols from /boot/kernel/zfs.ko.symbols...done. >>>>>> Loaded symbols for /boot/kernel/zfs.ko.symbols >>>>>> Reading symbols from /boot/kernel/opensolaris.ko.symbols...done. >>>>>> Loaded symbols for /boot/kernel/opensolaris.ko.symbols >>>>>> Reading symbols from /boot/kernel/ums.ko.symbols...done. >>>>>> Loaded symbols for /boot/kernel/ums.ko.symbols >>>>>> Reading symbols from /boot/kernel/ipfw.ko.symbols...done. >>>>>> Loaded symbols for /boot/kernel/ipfw.ko.symbols >>>>>> #0 doadump (textdump=) at pcpu.h:219 >>>>>> 219 pcpu.h: No such file or directory. >>>>>> in pcpu.h >>>>>> (kgdb) backtrace >>>>>> #0 doadump (textdump=) at pcpu.h:219 >>>>>> #1 0xffffffff80951962 in kern_reboot (howto=260) at /usr/src/sys/kern/kern_shutdown.c:486 >>>>>> #2 0xffffffff80951d45 in vpanic (fmt=, ap=) at /usr/src/sys/kern/kern_shutdown.c:889 >>>>>> #3 0xffffffff80951bd3 in panic (fmt=0x0) at /usr/src/sys/kern/kern_shutdown.c:818 >>>>>> #4 0xffffffff81e0022f in assfail3 (a=, lv=, op=, rv=, f=, >>>>>> l=) at /usr/src/sys/modules/opensolaris/../../cddl/compat/opensolaris/kern/opensolaris_cmn_err.c:91 >>>>>> #5 0xffffffff81cacc70 in zio_buf_alloc (size=0) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:273 >>>>>> #6 0xffffffff81c2b8f2 in arc_get_data_buf (buf=) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:3880 >>>>>> #7 0xffffffff81c2b657 in arc_buf_alloc (spa=, size=, tag=0x0, type=) >>>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:2057 >>>>>> #8 0xffffffff81c2d601 in arc_read (pio=0xfffff8000fad03b0, spa=0xfffff8000f63d000, bp=0xfffffe000e509980, done=0xffffffff81c3aed0 , private=0xfffff8000fdd6360, >>>>>> priority=ZIO_PRIORITY_SYNC_READ, zio_flags=-2117882160, 
arc_flags=0xfffffe02925483c4, zb=0xfffff8000fdd6360) >>>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:4397 >>>>>> #9 0xffffffff81c36ce9 in dbuf_read (db=0xfffff8000fdd6360, zio=0x0, flags=) >>>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dbuf.c:682 >>>>>> #10 0xffffffff81c3e415 in dmu_spill_hold_existing (bonus=0xfffff8001f312438, tag=0x0, dbp=0xfffff80062d4e7d0) >>>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu.c:333 >>>>>> #11 0xffffffff81c73707 in sa_attr_op (hdl=0xfffff80062d4e770, bulk=0xfffffe0292548528, count=1, data_op=SA_LOOKUP, tx=0x0) >>>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/sa.c:305 >>>>>> #12 0xffffffff81c75972 in sa_lookup (hdl=0xfffff80062d4e770, attr=, buf=, buflen=) >>>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/sa.c:1443 >>>>>> #13 0xffffffff81cbf8da in zfs_rmnode (zp=0xfffff80062d4c8a0) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_dir.c:633 >>>>>> #14 0xffffffff81cdd75e in zfs_freebsd_reclaim (ap=) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c:6619 >>>>>> #15 0xffffffff80e81c27 in VOP_RECLAIM_APV (vop=, a=) at vnode_if.c:2019 >>>>>> #16 0xffffffff809f9581 in vgonel (vp=0xfffff8000f1beb10) at vnode_if.h:830 >>>>>> #17 0xffffffff809f9a19 in vrecycle (vp=0xfffff8000f1beb10) at /usr/src/sys/kern/vfs_subr.c:2951 >>>>>> #18 0xffffffff81cdd6fd in zfs_freebsd_inactive (ap=) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c:6590 >>>>>> #19 0xffffffff80e81b17 in VOP_INACTIVE_APV (vop=, a=) at vnode_if.c:1953 >>>>>> #20 0xffffffff809f8322 in vinactive (vp=0xfffff8000f1beb10, td=0xfffff8000f9f34b0) at vnode_if.h:807 >>>>>> #21 0xffffffff809f8712 in vputx (vp=0xfffff8000f1beb10, func=2) at /usr/src/sys/kern/vfs_subr.c:2547 >>>>>> #22 0xffffffff80a0137e 
in kern_rmdirat (td=, fd=, path=, pathseg=) >>>>>> at /usr/src/sys/kern/vfs_syscalls.c:3964 >>>>>> #23 0xffffffff80d574bf in amd64_syscall (td=0xfffff8000f9f34b0, traced=0) at subr_syscall.c:141 >>>>>> #24 0xffffffff80d3c72b in Xfast_syscall () at /usr/src/sys/amd64/amd64/exception.S:396 >>>>>> #25 0x000000080089458a in ?? () >>>>>> Previous frame inner to this frame (corrupt stack?) >>>>>> Current language: auto; currently minimal >>>>>> >>>>>> The crash folder (or file) has strange permissions: >>>>>> d---------+ 3 anna domain users 3 10 10:32 01-Projcts >>>>>> d---------+ 2 anna domain users 2 8 21:46 02-Text >>>>>> >>>>>> How can I fix this kernel panic? >>>>>> >>>>>> _______________________________________________ >>>>>> freebsd-fs@freebsd.org mailing list >>>>>> https://lists.freebsd.org/mailman/listinfo/freebsd-fs >>>>>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" >>>>> _______________________________________________ >>>>> freebsd-fs@freebsd.org mailing list >>>>> https://lists.freebsd.org/mailman/listinfo/freebsd-fs >>>>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@freebsd.org Wed Feb 17 01:09:35 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 37E54AAAAEF for ; Wed, 17 Feb 2016 01:09:35 +0000 (UTC) (envelope-from nowakpl@platinum.linux.pl) Received: from platinum.linux.pl (platinum.biol.amu.edu.pl [150.254.122.114]) by mx1.freebsd.org (Postfix) with ESMTP id D559510F2 for ; Wed, 17 Feb 2016 01:09:33 +0000 (UTC) (envelope-from nowakpl@platinum.linux.pl) Received: by platinum.linux.pl (Postfix, from userid 87) id CD4F35081B4; Wed, 17 Feb 2016 02:04:20 +0100 (CET) X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on platinum.linux.pl X-Spam-Level: X-Spam-Status: No, score=-1.3 required=3.0 tests=ALL_TRUSTED,AWL autolearn=disabled version=3.4.0
Received: from [10.255.1.11] (c38-073.client.duna.pl [83.151.38.73]) by platinum.linux.pl (Postfix) with ESMTPA id 3C19A5081B1 for ; Wed, 17 Feb 2016 02:04:20 +0100 (CET) Subject: Re: Free space on ZFS is not reported correctly(?) To: freebsd-fs@freebsd.org References: From: Adam Nowacki X-Enigmail-Draft-Status: N1110 Message-ID: <56C3C6FF.2050804@platinum.linux.pl> Date: Wed, 17 Feb 2016 02:03:59 +0100 User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:38.0) Gecko/20100101 Thunderbird/38.5.1 MIME-Version: 1.0 In-Reply-To: Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 8bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 17 Feb 2016 01:09:35 -0000 On 2016-02-16 08:07, namaedayo wrote: > I'm Japanese. > My English is not at a native level, so if I say something wrong, > please don't be offended. > Sorry for the clumsy phrasing; it comes from Google translation. > > Free space on ZFS is not reported correctly. > According to "zpool list -v" there is sufficient disk space, but datasets other > than zvols cannot use it. By default zvols have the refreservation property set to the zvol size. You can set it to 'none' to make unused zvol disk space available to other datasets.
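As a sketch of that suggestion (the dataset name here is hypothetical, and the commands need root on a machine with the pool imported, so they are shown for illustration only):

```shell
# Show the space ZFS currently reserves for a zvol (dataset name is illustrative)
zfs get refreservation,volsize zpool0/some-zvol

# Drop the reservation so other datasets can use the unused space;
# the zvol becomes thin-provisioned, so writes to it can fail if the
# pool later fills up
zfs set refreservation=none zpool0/some-zvol
```

That trade-off is the thing to keep in mind: the AVAIL numbers in zfs list go up, but a full pool can then make writes to the zvol fail.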
> Chassis : HP Prolient ML110 G7 > CPU : Xeon E3-1220 > Memory : 32GB ECC > HDD : TOSHIBA DT01ACA300 x4 > SSD : Intel SSD DC S3500 80GB > > ZIL 8GB, L2ARC 16GB (L2ARC has been temporarily manually disabled by D2764) > > $ uname -a > FreeBSD namaedayo-hp.local 10.2-RELEASE-p8 FreeBSD 10.2-RELEASE-p8 #14 > r292713: Fri Dec 25 11:49:13 JST 2015 > root@namaedayo-hp.local:/usr/obj/usr/src/sys/NAMAEDAYO amd64 > > $ zpool list -v > NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT > zpool0 5.44T 4.58T 881G - 23% 84% 2.08x ONLINE - > mirror 2.72T 2.29T 441G - 24% 84% > ada0p2 - - - - - - > ada2p2 - - - - - - > mirror 2.72T 2.29T 441G - 23% 84% > ada1p2 - - - - - - > ada3p2 - - - - - - > ada4p2 7.94G 1.51M 7.94G - 33% 0% > > $ zfs list > NAME USED AVAIL REFER > MOUNTPOINT > zpool0 5.25T 45.3G 216K /zpool0 > zpool0/FreeBSD-Buckup 169G 45.3G 169G > /zpool0/FreeBSD-Buckup > zpool0/FreeBSD-Root 862G 45.3G 810G / > zpool0/Ubuntu-14-04_server 206G 230G 21.9G - > zpool0/Ubuntu_Desktop 82.5G 121G 7.18G - > zpool0/Ubuntu_Desktop-i386 135G 115G 48.9G - > zpool0/freebsd-swap 66.0G 95.2G 16.1G - > zpool0/namae-iSCSI 82.5G 68.5G 59.3G - > zpool0/namaedayo_work 351G 45.3G 346G > /zpool0/namaedayo_work > zpool0/test-dedup 53.8G 45.3G 53.8G > /zpool0/test-dedup > zpool0/test-lz4 47.8G 45.3G 47.8G > /zpool0/test-lz4 > (Omitted because too much) > > $ df -h > Filesystem Size Used > Avail Capacity Mounted on > zpool0/FreeBSD-Root 855G 810G > 45G 95% / > devfs 1.0K 1.0K > 0B 100% /dev > linprocfs 4.0K 4.0K > 0B 100% /compat/linux/proc > linsysfs 4.0K 4.0K > 0B 100% /compat/linux/sys > fdescfs 1.0K 1.0K > 0B 100% /dev/fd > procfs 4.0K 4.0K > 0B 100% /proc > zpool0/FreeBSD-Buckup 214G 169G > 45G 79% /zpool0/FreeBSD-Buckup > zpool0/namaedayo_work 391G 346G > 45G 88% /zpool0/namaedayo_work > zpool0/test-dedup 99G 54G > 45G 54% /zpool0/test-dedup > zpool0/test-lz4 93G 48G > 45G 51% /zpool0/test-lz4 > (Omitted because too much) > > $ gpart show > => 34 5860533101 ada0 GPT (2.7T) > 34 6 - 
free - (3.0K) > 40 128 1 freebsd-boot (64K) > 168 5860532960 2 freebsd-zfs (2.7T) > 5860533128 7 - free - (3.5K) > > => 34 5860533101 ada1 GPT (2.7T) > 34 6 - free - (3.0K) > 40 128 1 freebsd-boot (64K) > 168 5860532960 2 freebsd-zfs (2.7T) > 5860533128 7 - free - (3.5K) > > => 34 5860533101 ada2 GPT (2.7T) > 34 6 - free - (3.0K) > 40 128 1 freebsd-boot (64K) > 168 5860532960 2 freebsd-zfs (2.7T) > 5860533128 7 - free - (3.5K) > > => 34 5860533101 ada3 GPT (2.7T) > 34 6 - free - (3.0K) > 40 128 1 freebsd-boot (64K) > 168 5860532960 2 freebsd-zfs (2.7T) > 5860533128 7 - free - (3.5K) > > => 34 156301421 ada4 GPT (75G) > 34 6 - free - (3.0K) > 40 1024 1 freebsd-boot (512K) > 1064 16777216 2 freebsd-zfs (8.0G) > 16778280 33554432 3 freebsd-zfs (16G) > 50332712 105968743 - free - (51G) > > # zdb -C zpool0 | grep 'ashift' > ashift: 12 > > $ zfs get compressratio zpool0 > NAME PROPERTY VALUE SOURCE > zpool0 compressratio 1.13x - > > $ zpool status > pool: zpool0 > state: ONLINE > scan: resilvered 1.78T in 3h27m with 0 errors on Tue Jul 15 10:11:44 2014 > config: > > NAME STATE READ WRITE CKSUM > zpool0 ONLINE 0 0 0 > mirror-0 ONLINE 0 0 0 > ada0p2 ONLINE 0 0 0 > ada2p2 ONLINE 0 0 0 > mirror-1 ONLINE 0 0 0 > ada1p2 ONLINE 0 0 0 > ada3p2 ONLINE 0 0 0 > logs > ada4p2 ONLINE 0 0 0 > > errors: No known data errors > _______________________________________________ > freebsd-fs@freebsd.org mailing list > https://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@freebsd.org Wed Feb 17 07:04:06 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 36FABAAA744 for ; Wed, 17 Feb 2016 07:04:06 +0000 (UTC) (envelope-from namaedayo00@gmail.com) Received: from mail-yw0-x234.google.com (mail-yw0-x234.google.com [IPv6:2607:f8b0:4002:c05::234]) (using 
TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id EFDE0FA9 for ; Wed, 17 Feb 2016 07:04:05 +0000 (UTC) (envelope-from namaedayo00@gmail.com) Received: by mail-yw0-x234.google.com with SMTP id e63so6312014ywc.3 for ; Tue, 16 Feb 2016 23:04:05 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:from:date:message-id:subject:to:content-type; bh=CcRtjv9zBEeYk1VW3/6siBB+0KuSPsA23HK2lFdJs5s=; b=hrlso37EuRKPvYOHbM6bA0oXJoQSybntQPhnMDabEF6j1+O3O8m8AWHIZG0mgLOXbq f2tdFpTDnYJuUWBXzBs1MoSw1EolUjbbEsAxO3EZhk2/DcEpxfVrzDj3PB/AmbWa4Y/0 a9sg1DF8Pd6eBNpfBy3K6LV0mXZNEBvfGPr8qv2Cepk96s/hx7UlVLNIr/Wu4MHr2nAK j5QihmfmPJ3LBWAxeP3pSpktjXkfO3153CSRD4SI7zMZHfklw+MTHXjgoMBJGq4Kn2lE GdNVb/X9MO7PX1Qy6ZNqHminIiDQzKjlfs/1ug/XYIA988HcPYIYkxz3P6nl+hw/UKGV TOeg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:mime-version:from:date:message-id:subject:to :content-type; bh=CcRtjv9zBEeYk1VW3/6siBB+0KuSPsA23HK2lFdJs5s=; b=YgcUwAAiQatLDH+MDC/wUwCxK1GsR0pGskqbly98dG4qZYjewfs+tiwDiUdf8POWlg FefSGPEcT77yQBrUzpA/AwG/bnzGnBL5R1BCSEAKjKirovolmcr7/XmAqf5lPqfmnFGb el+4WEfHoYp1R6hGIkICxNHbZx2gXuxTYCLxrUcBoTMwkFT1XMKKLBQTCHC5EwXgabd/ MZpbsONY18EDr0fMvqPspdO2f6yzLPjj4r4CTCQElwh0s1K4H2L7uiqhr9r/qLiG/lSV xV0G6GZpRg/NE4Hf+D/zBU06WHv1T/TH5IyQuvFI6eIgxOrnXyub+TAxHU8tm+lP5B6a 1h3g== X-Gm-Message-State: AG10YOSN6Fq2Fem5CIs9JU7UlV689SBdCqMfo4rLUPmoJepU3K0YfoxFpDqfdZ/tj27ALzP/sX87g92DfKiwGA== X-Received: by 10.13.195.196 with SMTP id f187mr15697148ywd.196.1455692645166; Tue, 16 Feb 2016 23:04:05 -0800 (PST) MIME-Version: 1.0 Received: by 10.37.202.2 with HTTP; Tue, 16 Feb 2016 23:03:45 -0800 (PST) From: namaedayo Date: Wed, 17 Feb 2016 16:03:45 +0900 Message-ID: Subject: Re: Free space on ZFS is not reported correctly(?) 
[Resolved] To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=UTF-8 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 17 Feb 2016 07:04:06 -0000 Thank you for the reply. I can't thank you enough! Adam Nowacki wrote: > By default zvols have refreservation property set to zvol size. You can > set it to 'none' to make unused zvol disk space available to other datasets. dedup is enabled on "zpool0/test-dedup" for experimental purposes. I am aware of the risks of dedup; kind people have given me that advice many times. I would be glad if LZMA2 compression were implemented in ZFS. kpneal wrote: > Using dedup takes a surprisingly large amount of memory. Unless you only have a small amount of data to dedup you probably shouldn't have dedup turned on with only 32GB of memory. [root@namaedayo-hp ~]# zfs get refreservation zpool0/namae-iSCSI NAME PROPERTY VALUE SOURCE zpool0/namae-iSCSI refreservation 82.5G local # zfs set refreservation=none zpool0/InnoDB # zfs set refreservation=none zpool0/FreeBSD-10-1_i386-icc # zfs set refreservation=none zpool0/FreeBSD-10-1_i386-icc_zfs # zfs set refreservation=none zpool0/Ubuntu-14-04_server # zfs set refreservation=none zpool0/Ubuntu_Desktop # zfs set refreservation=none zpool0/namae-iSCSI # zfs set refreservation=none zpool0/fedora-21-i686-LXDE # zfs set refreservation=none zpool0/Ubuntu_Desktop $ zfs list NAME USED AVAIL REFER MOUNTPOINT zpool0 4.73T 580G 216K /zpool0 zpool0/FreeBSD-Buckup 169G 580G 169G /zpool0/FreeBSD-Buckup zpool0/FreeBSD-Root 864G 580G 811G / zpool0/Ubuntu-14-04_server 21.9G 580G 21.9G - zpool0/Ubuntu_Desktop 7.18G 580G 7.18G - zpool0/Ubuntu_Desktop-i386 135G 650G 48.9G - zpool0/freebsd-swap 66.0G 630G 16.1G - zpool0/namae-iSCSI 59.3G 580G 59.3G - zpool0/namaedayo_work 351G 580G 346G /zpool0/namaedayo_work zpool0/test-dedup 53.8G 580G 53.8G
/zpool0/test-dedup zpool0/test-lz4 47.8G 580G 47.8G /zpool0/test-lz4 $ df -h Filesystem Size Used Avail Capacity Mounted on zpool0/FreeBSD-Root 1.4T 811G 580G 58% / devfs 1.0K 1.0K 0B 100% /dev linprocfs 4.0K 4.0K 0B 100% /compat/linux/proc linsysfs 4.0K 4.0K 0B 100% /compat/linux/sys fdescfs 1.0K 1.0K 0B 100% /dev/fd procfs 4.0K 4.0K 0B 100% /proc From owner-freebsd-fs@freebsd.org Thu Feb 18 12:09:38 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 81797AAC55D for ; Thu, 18 Feb 2016 12:09:38 +0000 (UTC) (envelope-from demis@yandex.ru) Received: from forward13m.cmail.yandex.net (forward13m.cmail.yandex.net [IPv6:2a02:6b8:b030::9a]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "forwards.mail.yandex.net", Issuer "Yandex CA" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 1CBFE186C for ; Thu, 18 Feb 2016 12:09:38 +0000 (UTC) (envelope-from demis@yandex.ru) Received: from web22m.yandex.ru (web22m.yandex.ru [37.140.138.113]) by forward13m.cmail.yandex.net (Yandex) with ESMTP id E60E621EBE; Thu, 18 Feb 2016 15:08:15 +0300 (MSK) Received: from web22m.yandex.ru (localhost [127.0.0.1]) by web22m.yandex.ru (Yandex) with ESMTP id CCC2B761ADD; Thu, 18 Feb 2016 15:08:14 +0300 (MSK) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yandex.ru; s=mail; t=1455797295; bh=kFNRe28R5TSiIP02WEwsNlNs6kfJVB5VtzKDXF2tuN4=; h=From:To:In-Reply-To:References:Subject:Date; b=Qw445DPaz3L5jbhwTXS87W2uKCngPYSIYbDOVX8VvggVVhmrewn8yhR6ygoSz0vds qkh4e0daxxYAHCP/BNxV00yytjJ4xCpP91P2EtvWf9lhNhwDiJkUYcbIb8dC/FE89V /KJr6kCN3xekoI3yip+NPhhQQZji5Fpj7WxRaH+c= Received: by web22m.yandex.ru with HTTP; Thu, 18 Feb 2016 15:08:14 +0300 From: DemIS To: Steven Hartland , "freebsd-fs@freebsd.org" In-Reply-To: <56C2E7A6.9090004@multiplay.co.uk> References: <1061671455578760@web3g.yandex.ru> 
<56C2655F.9010809@multiplay.co.uk> <1076701455583595@web2g.yandex.ru> <56C2782E.2010404@multiplay.co.uk> <2311371455610101@web11j.yandex.ru> <56C2E7A6.9090004@multiplay.co.uk> Subject: Re: Kernel panic zio.c, line: 270 FreeBSD 10.2 (or 10.3) MIME-Version: 1.0 Message-Id: <3757751455797294@web22m.yandex.ru> X-Mailer: Yamail [ http://yandex.ru ] 5.0 Date: Thu, 18 Feb 2016 15:08:14 +0300 Content-Transfer-Encoding: 8bit Content-Type: text/plain; charset=koi8-r X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 18 Feb 2016 12:09:38 -0000 Hello. Sorry, I am writing via Google Translate. I could not get the information you requested. With sysctl -w debug.minidump=0 the dump takes about fifteen minutes, but when I run kgdb /usr/obj/usr/src/sys/TEO/kernel.debug /usr/crash/vmcore.last it reports "Cannot access memory at address 0xfffff8063fffffb8". I cannot understand why this is happening. I need more time to get the *bp for you. With minidump enabled, the values are as I wrote earlier. 16.02.2016, 12:10, "Steven Hartland" : > I need the values of the vars specified, so you'll need to: > print bp > If it reports just an address try: > print *bp > etc. > > On 16/02/2016 08:08, DemIS wrote: >> frame 8 >> #8 0xffffffff81c3669c in dbuf_read (db=0xfffff8013ca46380, zio=0x0, flags=6) >> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dbuf.c:573 >> 573 (void) arc_read(zio, db->db_objset->os_spa, db->db_blkptr, >> Current language: auto; currently minimal >> >> frame 6 >> #6 0xffffffff81c2b9f8 in arc_get_data_buf (buf=) >> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:2898 >> 2898 buf->b_data = zio_buf_alloc(size); >> >> 16.02.2016, 04:15, "Steven Hartland" : >>> You don't need the system live just the kernel and the crash dump to get >>> those values.
>>> >>> On 16/02/2016 00:46, DemIS wrote: >>>> Today I rolled back to version 10.2. Users work on this server, and reinstalling 10.3 takes me four hours, >>>> so it is only possible on weekends. But version 10.2 shows the same behaviour. >>>> >>>> 16.02.2016, 02:55, "Steven Hartland" : >>>>> That sounds like you have some pool data corruption, from the 10.3 >>>>> version dump can you print out the following: >>>>> 1. frame 8: bp and size >>>>> 2. frame 6: buf->b_hdr >>>>> >>>>> On 15/02/2016 23:26, DemIS wrote: >>>>>> Does anyone know about this problem? >>>>>> Server: SuperMicro Model:SYS-6026T-6RF+, MB:X8DTU-6F+, RAM 24 DDR3, two XEON >>>>>> RAM: KVR1333D3E9S/4G - DDR3, 1333MHz, ECC, CL9, X8, 1.5V, Unbuffered, DIMM >>>>>> Version: uname -a >>>>>> >>>>>> FreeBSD teo.some.loc 10.2-RELEASE-p12 FreeBSD 10.2-RELEASE-p12 #0: Sat Feb 13 18:04:04 MSK 2016 demis@teo.some.loc:/usr/obj/usr/src/sys/TEO amd64 >>>>>> (the panic persists on GENERIC or a custom kernel config too!) >>>>>> >>>>>> Memtest86+ v.4.40 (ECC mode) test - OK. >>>>>> Every disk was checked too (physically with MHDD, logically with zpool scrub, plus an additional check by an external recovery company). No errors. >>>>>> >>>>>> Part of df -H >>>>>> Filesystem Size Used Avail Capacity Mounted on >>>>>> hdd/usr/wf 6,6T 4,1T 2,5T 62% /hdd/usr/wf >>>>>> >>>>>> zpool status hdd >>>>>> pool: hdd >>>>>> state: ONLINE >>>>>> status: Some supported features are not enabled on the pool. The pool can >>>>>> still be used, but some features are unavailable. >>>>>> action: Enable all features using 'zpool upgrade'. Once this is done, >>>>>> the pool may no longer be accessible by software that does not support >>>>>> the features. See zpool-features(7) for details.
>>>>>> scan: scrub repaired 0 in 14h57m with 0 errors on Thu Feb 11 03:35:43 2016 >>>>>> config: >>>>>> >>>>>> NAME STATE READ WRITE CKSUM >>>>>> hdd ONLINE 0 0 0 >>>>>> raidz2-0 ONLINE 0 0 0 >>>>>> mfid1p1 ONLINE 0 0 0 >>>>>> mfid2p1 ONLINE 0 0 0 >>>>>> mfid3p1 ONLINE 0 0 0 >>>>>> mfid4p1 ONLINE 0 0 0 >>>>>> mfid5p1 ONLINE 0 0 0 >>>>>> >>>>>> errors: No known data errors >>>>>> >>>>>> hdd is my ZFS volume. >>>>>> When I run a command like: >>>>>> rm /hdd/usr/some/path/to/file >>>>>> or >>>>>> rm /hdd/usr/some/path/to/folder >>>>>> or >>>>>> chown root:wheel /hdd/usr/some/path/to/file >>>>>> or >>>>>> chown root:wheel /hdd/usr/some/path/to/folder >>>>>> or >>>>>> setfacl ... to /hdd/usr/some/path/to/file >>>>>> >>>>>> I get a kernel panic: >>>>>> GNU gdb 6.1.1 [FreeBSD] >>>>>> Copyright 2004 Free Software Foundation, Inc. >>>>>> GDB is free software, covered by the GNU General Public License, and you are >>>>>> welcome to change it and/or distribute copies of it under certain conditions. >>>>>> Type "show copying" to see the conditions.
>>>>>> >>>>>> Unread portion of the kernel message buffer: >>>>>> panic: solaris assert: c < (1ULL << 24) >> 9 (0x7fffffffffffff < 0x8000), file: /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c, line: 270 >>>>>> cpuid = 9 >>>>>> KDB: stack backtrace: >>>>>> #0 0xffffffff80984ef0 at kdb_backtrace+0x60 >>>>>> #1 0xffffffff80948aa6 at vpanic+0x126 >>>>>> #2 0xffffffff80948973 at panic+0x43 >>>>>> #3 0xffffffff81c0222f at assfail3+0x2f >>>>>> #4 0xffffffff81aa9d40 at zio_buf_alloc+0x50 >>>>>> #5 0xffffffff81a2b9f8 at arc_get_data_buf+0x358 >>>>>> #6 0xffffffff81a2e20a at arc_read+0x1ea >>>>>> #7 0xffffffff81a3669c at dbuf_read+0x6ac >>>>>> #8 0xffffffff81a3d8bf at dmu_spill_hold_existing+0xbf >>>>>> #9 0xffffffff81a70dd7 at sa_attr_op+0x167 >>>>>> #10 0xffffffff81a72ffb at sa_lookup+0x4b >>>>>> #11 0xffffffff81abc82a at zfs_rmnode+0x2ba >>>>>> #12 0xffffffff81ada58e at zfs_freebsd_reclaim+0x4e >>>>>> #13 0xffffffff80e73537 at VOP_RECLAIM_APV+0xa7 >>>>>> #14 0xffffffff809ec5b4 at vgonel+0x1b4 >>>>>> #15 0xffffffff809eca49 at vrecycle+0x59 >>>>>> #16 0xffffffff81ada52d at zfs_freebsd_inactive+0xd >>>>>> #17 0xffffffff80e73427 at VOP_INACTIVE_APV+0xa7 >>>>>> Uptime: 9m31s >>>>>> Dumping 1286 out of 24543 MB:..2%..12%..22%..32%..42%..51%..61%..71% (CTRL-C to abort) ..81%..91% >>>>>> >>>>>> Reading symbols from /boot/kernel/zfs.ko.symbols...done. >>>>>> Loaded symbols for /boot/kernel/zfs.ko.symbols >>>>>> Reading symbols from /boot/kernel/opensolaris.ko.symbols...done. >>>>>> Loaded symbols for /boot/kernel/opensolaris.ko.symbols >>>>>> Reading symbols from /boot/kernel/if_lagg.ko.symbols...done. >>>>>> Loaded symbols for /boot/kernel/if_lagg.ko.symbols >>>>>> Reading symbols from /boot/kernel/ums.ko.symbols...done. >>>>>> Loaded symbols for /boot/kernel/ums.ko.symbols >>>>>> Reading symbols from /boot/kernel/ipfw.ko.symbols...done. 
>>>>>> Loaded symbols for /boot/kernel/ipfw.ko.symbols >>>>>> #0 doadump (textdump=) at pcpu.h:219 >>>>>> 219 pcpu.h: No such file or directory. >>>>>> in pcpu.h >>>>>> (kgdb) bt >>>>>> #0 doadump (textdump=) at pcpu.h:219 >>>>>> #1 0xffffffff80948702 in kern_reboot (howto=260) at /usr/src/sys/kern/kern_shutdown.c:451 >>>>>> #2 0xffffffff80948ae5 in vpanic (fmt=, ap=) at /usr/src/sys/kern/kern_shutdown.c:758 >>>>>> #3 0xffffffff80948973 in panic (fmt=0x0) at /usr/src/sys/kern/kern_shutdown.c:687 >>>>>> #4 0xffffffff81c0222f in assfail3 (a=, lv=, op=, rv=, >>>>>> f=, l=) at /usr/src/sys/modules/opensolaris/../../cddl/compat/opensolaris/kern/opensolaris_cmn_err.c:91 >>>>>> #5 0xffffffff81aa9d40 in zio_buf_alloc (size=0) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:270 >>>>>> #6 0xffffffff81a2b9f8 in arc_get_data_buf (buf=) >>>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:2898 >>>>>> #7 0xffffffff81a2e20a in arc_read (pio=0xfffff80011791730, spa=0xfffff80011579000, bp=0xfffffe000aee7980, done=0xffffffff81a3a2d0 , >>>>>> private=0xfffff8002244b000, priority=ZIO_PRIORITY_SYNC_READ, zio_flags=-528866606, arc_flags=0xfffffe06727fb3c4, zb=0xffffffff81a3a2d0) >>>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:1551 >>>>>> #8 0xffffffff81a3669c in dbuf_read (db=0xfffff8002244b000, zio=0x0, flags=6) >>>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dbuf.c:573 >>>>>> #9 0xffffffff81a3d8bf in dmu_spill_hold_existing (bonus=0xfffff800223bed20, tag=0x0, dbp=0xfffff800919966b8) >>>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu.c:333 >>>>>> #10 0xffffffff81a70dd7 in sa_attr_op (hdl=0xfffff80091996690, bulk=0xfffffe06727fb528, count=1, data_op=SA_LOOKUP, tx=0x0) >>>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/sa.c:310 >>>>>> #11 0xffffffff81a72ffb in sa_lookup 
(hdl=0xfffff80091996690, attr=, buf=, buflen=) >>>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/sa.c:1441 >>>>>> #12 0xffffffff81abc82a in zfs_rmnode (zp=0xfffff80091993730) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_dir.c:633 >>>>>> #13 0xffffffff81ada58e in zfs_freebsd_reclaim (ap=) >>>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c:6569 >>>>>> #14 0xffffffff80e73537 in VOP_RECLAIM_APV (vop=, a=) at vnode_if.c:2019 >>>>>> #15 0xffffffff809ec5b4 in vgonel (vp=0xfffff800111733b0) at vnode_if.h:830 >>>>>> #16 0xffffffff809eca49 in vrecycle (vp=0xfffff800111733b0) at /usr/src/sys/kern/vfs_subr.c:2703 >>>>>> #17 0xffffffff81ada52d in zfs_freebsd_inactive (ap=) >>>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c:6540 >>>>>> #18 0xffffffff80e73427 in VOP_INACTIVE_APV (vop=, a=) at vnode_if.c:1953 >>>>>> #19 0xffffffff809eb382 in vinactive (vp=0xfffff800111733b0, td=0xfffff800113d8000) at vnode_if.h:807 >>>>>> #20 0xffffffff809eb772 in vputx (vp=0xfffff800111733b0, func=2) at /usr/src/sys/kern/vfs_subr.c:2306 >>>>>> #21 0xffffffff809f401e in kern_rmdirat (td=, fd=, path=, pathseg=) >>>>>> at /usr/src/sys/kern/vfs_syscalls.c:3842 >>>>>> #22 0xffffffff80d4b3e7 in amd64_syscall (td=0xfffff800113d8000, traced=0) at subr_syscall.c:134 >>>>>> #23 0xffffffff80d30acb in Xfast_syscall () at /usr/src/sys/amd64/amd64/exception.S:396 >>>>>> #24 0x00000008008914ea in ?? () >>>>>> Previous frame inner to this frame (corrupt stack?) >>>>>> Current language: auto; currently minimal >>>>>> >>>>>> If setup FreeBSD 10.3 BETA (on GENERIC or custom kernel config): >>>>>> Copyright 2004 Free Software Foundation, Inc. >>>>>> GDB is free software, covered by the GNU General Public License, and you are >>>>>> welcome to change it and/or distribute copies of it under certain conditions. >>>>>> Type "show copying" to see the conditions. 
>>>>>> There is absolutely no warranty for GDB. Type "show warranty" for details. >>>>>> This GDB was configured as "amd64-marcel-freebsd"... >>>>>> >>>>>> Unread portion of the kernel message buffer: >>>>>> panic: solaris assert: c < (1ULL << 24) >> 9 (0x7fffffffffffff < 0x8000), file: /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c, line: 273 >>>>>> cpuid = 13 >>>>>> KDB: stack backtrace: >>>>>> #0 0xffffffff8098f000 at kdb_backtrace+0x60 >>>>>> #1 0xffffffff80951d06 at vpanic+0x126 >>>>>> #2 0xffffffff80951bd3 at panic+0x43 >>>>>> #3 0xffffffff81e0022f at assfail3+0x2f >>>>>> #4 0xffffffff81cacc70 at zio_buf_alloc+0x50 >>>>>> #5 0xffffffff81c2b8f2 at arc_get_data_buf+0x262 >>>>>> #6 0xffffffff81c2b657 at arc_buf_alloc+0xc7 >>>>>> #7 0xffffffff81c2d601 at arc_read+0x1c1 >>>>>> #8 0xffffffff81c36ce9 at dbuf_read+0x6b9 >>>>>> #9 0xffffffff81c3e415 at dmu_spill_hold_existing+0xc5 >>>>>> #10 0xffffffff81c73707 at sa_attr_op+0x167 >>>>>> #11 0xffffffff81c75972 at sa_lookup+0x52 >>>>>> #12 0xffffffff81cbf8da at zfs_rmnode+0x2ba >>>>>> #13 0xffffffff81cdd75e at zfs_freebsd_reclaim+0x4e >>>>>> #14 0xffffffff80e81c27 at VOP_RECLAIM_APV+0xa7 >>>>>> #15 0xffffffff809f9581 at vgonel+0x221 >>>>>> #16 0xffffffff809f9a19 at vrecycle+0x59 >>>>>> #17 0xffffffff81cdd6fd at zfs_freebsd_inactive+0xd >>>>>> Uptime: 11m11s >>>>>> Dumping 1368 out of 24542 MB:..2%..11%..22%..31%..41%..51%..61%..71%..81%..91% >>>>>> >>>>>> Reading symbols from /boot/kernel/if_lagg.ko.symbols...done. >>>>>> Loaded symbols for /boot/kernel/if_lagg.ko.symbols >>>>>> Reading symbols from /boot/kernel/aio.ko.symbols...done. >>>>>> Loaded symbols for /boot/kernel/aio.ko.symbols >>>>>> Reading symbols from /boot/kernel/ichsmb.ko.symbols...done. >>>>>> Loaded symbols for /boot/kernel/ichsmb.ko.symbols >>>>>> Reading symbols from /boot/kernel/smbus.ko.symbols...done. 
>>>>>> Loaded symbols for /boot/kernel/smbus.ko.symbols >>>>>> Reading symbols from /boot/kernel/ipmi.ko.symbols...done. >>>>>> Loaded symbols for /boot/kernel/ipmi.ko.symbols >>>>>> Reading symbols from /boot/kernel/zfs.ko.symbols...done. >>>>>> Loaded symbols for /boot/kernel/zfs.ko.symbols >>>>>> Reading symbols from /boot/kernel/opensolaris.ko.symbols...done. >>>>>> Loaded symbols for /boot/kernel/opensolaris.ko.symbols >>>>>> Reading symbols from /boot/kernel/ums.ko.symbols...done. >>>>>> Loaded symbols for /boot/kernel/ums.ko.symbols >>>>>> Reading symbols from /boot/kernel/ipfw.ko.symbols...done. >>>>>> Loaded symbols for /boot/kernel/ipfw.ko.symbols >>>>>> #0 doadump (textdump=) at pcpu.h:219 >>>>>> 219 pcpu.h: No such file or directory. >>>>>> in pcpu.h >>>>>> (kgdb) backtrace >>>>>> #0 doadump (textdump=) at pcpu.h:219 >>>>>> #1 0xffffffff80951962 in kern_reboot (howto=260) at /usr/src/sys/kern/kern_shutdown.c:486 >>>>>> #2 0xffffffff80951d45 in vpanic (fmt=, ap=) at /usr/src/sys/kern/kern_shutdown.c:889 >>>>>> #3 0xffffffff80951bd3 in panic (fmt=0x0) at /usr/src/sys/kern/kern_shutdown.c:818 >>>>>> #4 0xffffffff81e0022f in assfail3 (a=, lv=, op=, rv=, f=, >>>>>> l=) at /usr/src/sys/modules/opensolaris/../../cddl/compat/opensolaris/kern/opensolaris_cmn_err.c:91 >>>>>> #5 0xffffffff81cacc70 in zio_buf_alloc (size=0) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:273 >>>>>> #6 0xffffffff81c2b8f2 in arc_get_data_buf (buf=) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:3880 >>>>>> #7 0xffffffff81c2b657 in arc_buf_alloc (spa=, size=, tag=0x0, type=) >>>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:2057 >>>>>> #8 0xffffffff81c2d601 in arc_read (pio=0xfffff8000fad03b0, spa=0xfffff8000f63d000, bp=0xfffffe000e509980, done=0xffffffff81c3aed0 , private=0xfffff8000fdd6360, >>>>>> priority=ZIO_PRIORITY_SYNC_READ, zio_flags=-2117882160, 
arc_flags=0xfffffe02925483c4, zb=0xfffff8000fdd6360) >>>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:4397 >>>>>> #9 0xffffffff81c36ce9 in dbuf_read (db=0xfffff8000fdd6360, zio=0x0, flags=) >>>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dbuf.c:682 >>>>>> #10 0xffffffff81c3e415 in dmu_spill_hold_existing (bonus=0xfffff8001f312438, tag=0x0, dbp=0xfffff80062d4e7d0) >>>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu.c:333 >>>>>> #11 0xffffffff81c73707 in sa_attr_op (hdl=0xfffff80062d4e770, bulk=0xfffffe0292548528, count=1, data_op=SA_LOOKUP, tx=0x0) >>>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/sa.c:305 >>>>>> #12 0xffffffff81c75972 in sa_lookup (hdl=0xfffff80062d4e770, attr=, buf=, buflen=) >>>>>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/sa.c:1443 >>>>>> #13 0xffffffff81cbf8da in zfs_rmnode (zp=0xfffff80062d4c8a0) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_dir.c:633 >>>>>> #14 0xffffffff81cdd75e in zfs_freebsd_reclaim (ap=) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c:6619 >>>>>> #15 0xffffffff80e81c27 in VOP_RECLAIM_APV (vop=, a=) at vnode_if.c:2019 >>>>>> #16 0xffffffff809f9581 in vgonel (vp=0xfffff8000f1beb10) at vnode_if.h:830 >>>>>> #17 0xffffffff809f9a19 in vrecycle (vp=0xfffff8000f1beb10) at /usr/src/sys/kern/vfs_subr.c:2951 >>>>>> #18 0xffffffff81cdd6fd in zfs_freebsd_inactive (ap=) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c:6590 >>>>>> #19 0xffffffff80e81b17 in VOP_INACTIVE_APV (vop=, a=) at vnode_if.c:1953 >>>>>> #20 0xffffffff809f8322 in vinactive (vp=0xfffff8000f1beb10, td=0xfffff8000f9f34b0) at vnode_if.h:807 >>>>>> #21 0xffffffff809f8712 in vputx (vp=0xfffff8000f1beb10, func=2) at /usr/src/sys/kern/vfs_subr.c:2547 >>>>>> #22 0xffffffff80a0137e 
in kern_rmdirat (td=, fd=, path=, pathseg=) >>>>>> at /usr/src/sys/kern/vfs_syscalls.c:3964 >>>>>> #23 0xffffffff80d574bf in amd64_syscall (td=0xfffff8000f9f34b0, traced=0) at subr_syscall.c:141 >>>>>> #24 0xffffffff80d3c72b in Xfast_syscall () at /usr/src/sys/amd64/amd64/exception.S:396 >>>>>> #25 0x000000080089458a in ?? () >>>>>> Previous frame inner to this frame (corrupt stack?) >>>>>> Current language: auto; currently minimal >>>>>> >>>>>> The crashed folders (or files) have strange permissions: >>>>>> d---------+ 3 anna domain users 3 10 10:32 01-Projcts >>>>>> d---------+ 2 anna domain users 2 8 21:46 02-Text >>>>>> >>>>>> How can I fix this kernel panic? >>>>>> >>>>>> _______________________________________________ >>>>>> freebsd-fs@freebsd.org mailing list >>>>>> https://lists.freebsd.org/mailman/listinfo/freebsd-fs >>>>>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" >>>>> _______________________________________________ >>>>> freebsd-fs@freebsd.org mailing list >>>>> https://lists.freebsd.org/mailman/listinfo/freebsd-fs >>>>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@freebsd.org Thu Feb 18 20:37:06 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 299BFAACB3E for ; Thu, 18 Feb 2016 20:37:06 +0000 (UTC) (envelope-from nonesuch@longcount.org) Received: from mail-yk0-x230.google.com (mail-yk0-x230.google.com [IPv6:2607:f8b0:4002:c07::230]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id E99B6ACB for ; Thu, 18 Feb 2016 20:37:05 +0000 (UTC) (envelope-from nonesuch@longcount.org) Received: by mail-yk0-x230.google.com with SMTP id u9so26324942ykd.1 for ; Thu, 18 Feb 2016 12:37:05 -0800 (PST) DKIM-Signature: v=1;
a=rsa-sha256; c=relaxed/relaxed; d=longcount-org.20150623.gappssmtp.com; s=20150623; h=mime-version:date:message-id:subject:from:to:content-type; bh=pd5BpvCNButsU2WaWssTS1UzMjj19Kizkx5LSdWcdw4=; b=0QZyVGbHSx6hLRqq4XU8bm7unTFe7p+6aH19e0gHUGZuDFOn4eKNfFqH5DwDQCiLSU f5AzMWvlcS7ohAfHkZCWhxShAy0INyZPvl0stREtU4+e91R0qdGch7KLuE0FtuJYa5St 3xqnUOA13dsFYjDs5zl3x8FCQSXeLGp1C4cV96u9dOT0o+qcb9A09ihPkYTb11ypavSo D8d5PrZ+wd9+7EBE1E7j59UNjz60G+w/k4Hi6m/Y2a3y+hLd/tKnhwjdRyLHR0rJMqEi ID8rfe6NFthnXK/ESsGx8rtbvLU13i7ePL8zsX59yKDqbyozygogDoxkZ/p9mkrr/mzW vVfQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:mime-version:date:message-id:subject:from:to :content-type; bh=pd5BpvCNButsU2WaWssTS1UzMjj19Kizkx5LSdWcdw4=; b=Dw1mgjE5CBecXH5TTc0ANJMXeXRmPOmPR25YYuv4eOB7R9LqLnXkc9PY6ZQhXUZ4G8 fDe+s0R1e11+hhZ//UKcOgPdyhaJpaFnmZerAh3Mvyy4buOLLpaGrApKKsxvvBu8M6S5 RIkaXEvoRjDyT5Y0rV5K+ZB3VknKihbuyqfk5iisr4MpCbZoY8M09bEUGrA/wd4RyZrG 5dfGUj+EG83/5ZFb1tdR2sbMd1cadJIovnu8tNE+IGcjgyTvT/zT8fvMK8ePpVC+TJT0 WZ2zuqzX5Xtb47hyYgFXCfyuLkeuqKAiAnYyW8K33L4EWJvGIRhVHzYnrjUu2Rs4VDSf uArA== X-Gm-Message-State: AG10YORElgb3B6fV8Ae5RVW+a2c2TsEEpjG2jsKeG4pQp143/iQTIuks7MeePivjR3JaqBtF+KjIMa5EFpnSVA== MIME-Version: 1.0 X-Received: by 10.37.45.67 with SMTP id t64mr5971656ybt.170.1455827824764; Thu, 18 Feb 2016 12:37:04 -0800 (PST) Received: by 10.13.214.74 with HTTP; Thu, 18 Feb 2016 12:37:04 -0800 (PST) X-Originating-IP: [38.104.68.66] Date: Thu, 18 Feb 2016 15:37:04 -0500 Message-ID: Subject: Zvol receive removes properties on import From: Mark Saad To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.20 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 18 Feb 2016 20:37:06 -0000 All, I am not sure if this is a bug or if this is intended.
Here is my use case. On 10-STABLE amd64, built last week, I have an application that directly uses a zvol for storage. To make things easy to manage I depend on the refreservation; I don't want to thin-provision the zvol, i.e. one without a refreservation. In 10-STABLE this is the default mode of operation.

The short version of the issue is this: when I send the zvol to another server and import it, or send it to a file and delete the original and reimport it locally, I lose the refreservation. The newly imported / received zvol is now thin-provisioned, or better put, has no space guarantee on the zpool for the zvol I just imported.

On FreeBSD 10-STABLE I do the following tasks:

root@rangefinder:~ # zfs create -V 10G zroot/testvol

make my app do some work on /dev/zvol/zroot/testvol, make a few snapshots of this data:

root@rangefinder:~ # zfs snapshot zroot/testvol@snap0
...

Then I need to send the zvol to another server and work on it there. I first check to see what props are set on the zvol:

root@rangefinder:~ # zfs get all zroot/testvol
NAME           PROPERTY              VALUE                  SOURCE
zroot/testvol  type                  volume                 -
zroot/testvol  creation              Thu Feb 18 20:13 2016  -
zroot/testvol  used                  10.3G                  -
zroot/testvol  available             885G                   -
zroot/testvol  referenced            64K                    -
zroot/testvol  compressratio         1.00x                  -
zroot/testvol  reservation           none                   default
zroot/testvol  volsize               10G                    local
zroot/testvol  volblocksize          8K                     -
zroot/testvol  checksum              on                     default
zroot/testvol  compression           lz4                    inherited from zroot
zroot/testvol  readonly              off                    default
zroot/testvol  copies                1                      default
zroot/testvol  refreservation        10.3G                  local
zroot/testvol  primarycache          all                    default
zroot/testvol  secondarycache        all                    default
zroot/testvol  usedbysnapshots       0                      -
zroot/testvol  usedbydataset         64K                    -
zroot/testvol  usedbychildren        0                      -
zroot/testvol  usedbyrefreservation  10.3G                  -
zroot/testvol  logbias               latency                default
zroot/testvol  dedup                 off                    default
zroot/testvol  mlslabel                                     -
zroot/testvol  sync                  standard               default
zroot/testvol  refcompressratio      1.00x                  -
zroot/testvol  written               64K                    -
zroot/testvol  logicalused           30K                    -
zroot/testvol  logicalreferenced     30K                    -
zroot/testvol  volmode               default                default
zroot/testvol  snapshot_limit        none                   default
zroot/testvol  snapshot_count        none                   default
zroot/testvol  redundant_metadata    all                    default

I send it and re-import it on the other server:

root@rangefinder:~ # zfs send zroot/testvol@snap3 | gzip -7 > /archive/testvol@snap3.zvol.gz
root@rangefinder:~ # zfs destroy -r zroot/testvol

ssh magic ....

root@depthfinder:~ # gzcat /import/testvol@snap3.zvol.gz | zfs receive zroot/testvol

Then I check the props on the new box to ensure it's all good:

root@depthfinder:~ # zfs get all zroot/testvol
NAME           PROPERTY              VALUE                  SOURCE
zroot/testvol  type                  volume                 -
zroot/testvol  creation              Thu Feb 18 20:20 2016  -
zroot/testvol  used                  1.02G                  -
zroot/testvol  available             883G                   -
zroot/testvol  referenced            1.02G                  -
zroot/testvol  compressratio         1.00x                  -
zroot/testvol  reservation           none                   default
zroot/testvol  volsize               10G                    local
zroot/testvol  volblocksize          8K                     -
zroot/testvol  checksum              on                     default
zroot/testvol  compression           lz4                    inherited from zroot
zroot/testvol  readonly              off                    default
zroot/testvol  copies                1                      default
zroot/testvol  refreservation        none                   default
zroot/testvol  primarycache          all                    default
zroot/testvol  secondarycache        all                    default
zroot/testvol  usedbysnapshots       0                      -
zroot/testvol  usedbydataset         1.02G                  -
zroot/testvol  usedbychildren        0                      -
zroot/testvol  usedbyrefreservation  0                      -
zroot/testvol  logbias               latency                default
zroot/testvol  dedup                 off                    default
zroot/testvol  mlslabel                                     -
zroot/testvol  sync                  standard               default
zroot/testvol  refcompressratio      1.00x                  -
zroot/testvol  written               0                      -
zroot/testvol  logicalused           1.01G                  -
zroot/testvol  logicalreferenced     1.01G                  -
zroot/testvol  volmode               default                default
zroot/testvol  snapshot_limit        none                   default
zroot/testvol  snapshot_count        none                   default

However, at this point I notice we no longer have a refreservation.
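For what it's worth, a plain `zfs send` stream carries the data but not the locally set refreservation, so the receiving side ends up thin-provisioned. A hedged sketch of two possible workarounds, reusing the dataset names and 10G size from the example above (whether a replication stream preserves refreservation varies by ZFS version, so verify the result either way):

```shell
# Workaround 1: re-apply the space guarantee by hand after the receive.
gzcat /import/testvol@snap3.zvol.gz | zfs receive zroot/testvol
zfs set refreservation=10G zroot/testvol

# Workaround 2: send a replication stream (-R), which tries to carry
# locally set properties along with the data.
zfs send -R zroot/testvol@snap3 | ssh depthfinder zfs receive zroot/testvol

# In both cases, confirm the guarantee is back:
zfs get refreservation zroot/testvol
```

Checking `usedbyrefreservation` afterwards should show the expected ~10G charge against the pool again.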
-- mark saad | nonesuch@longcount.org From owner-freebsd-fs@freebsd.org Fri Feb 19 06:23:16 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 54EF1AAD007 for ; Fri, 19 Feb 2016 06:23:16 +0000 (UTC) (envelope-from rpokala@mac.com) Received: from mr11p00im-asmtp004.me.com (mr11p00im-asmtp004.me.com [17.110.69.135]) (using TLSv1.2 with cipher DHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 42AE21C63 for ; Fri, 19 Feb 2016 06:23:16 +0000 (UTC) (envelope-from rpokala@mac.com) Received: from [172.20.10.3] (unknown [172.56.41.194]) by mr11p00im-asmtp004.me.com (Oracle Communications Messaging Server 7.0.5.36.0 64bit (built Sep 8 2015)) with ESMTPSA id <0O2S006U172PN730@mr11p00im-asmtp004.me.com> for freebsd-fs@freebsd.org; Fri, 19 Feb 2016 06:23:14 +0000 (GMT) X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10432:,, definitions=2016-02-19_05:,, signatures=0 X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 spamscore=0 clxscore=1015 suspectscore=0 malwarescore=0 phishscore=0 adultscore=0 bulkscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.0.1-1510270003 definitions=main-1602190088 User-Agent: Microsoft-MacOutlook/0.0.0.160109 Date: Thu, 18 Feb 2016 22:23:10 -0800 Subject: Re: Hours of tiny transfers at the end of a ZFS resilver? From: Ravi Pokala Sender: "Pokala, Ravi" To: "freebsd-fs@freebsd.org" Message-id: <75A39599-A668-4A2E-9956-98479A28930A@panasas.com> Thread-topic: Hours of tiny transfers at the end of a ZFS resilver? 
MIME-version: 1.0 Content-type: text/plain; charset=UTF-8 Content-transfer-encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 19 Feb 2016 06:23:16 -0000 >Date: Tue, 16 Feb 2016 09:09:17 +0000 >From: Steven Hartland >To: freebsd-fs@freebsd.org >Subject: Re: Hours of tiny transfers at the end of a ZFS resilver? >Message-ID: <56C2E73D.2010405@multiplay.co.uk> >Content-Type: text/plain; charset=windows-1252; format=flowed > >This is 4k underlying, SSD's are by far and away the worst culprits for this. On the one hand, you're right - SSDs actually *do* lie about their 512n/AF-512e/AF-4Kn designation. On the other hand, SSD erase blocks - and even program blocks - are larger than 4KB, so neither AF-* designation would be correct anyway. And to some extent, it doesn't matter for SSDs, since everything is so heavily indirected anyway. 
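Since drives can't be trusted to report a useful sector size, FreeBSD lets you check what the device claims and force a minimum allocation size yourself. A small sketch (the device name is an assumption, and the sysctl only exists on newer releases, roughly 10.1 and later):

```shell
# See what the SSD reports; many report 512-byte sectors regardless of
# their internal page/erase-block geometry:
diskinfo -v /dev/ada0 | egrep 'sectorsize|stripesize'

# Force ZFS to use at least 4K (2^12) allocations on newly created or
# attached vdevs, whatever the drive claims:
sysctl vfs.zfs.min_auto_ashift=12
```

This only affects vdevs created after the sysctl is set; an existing pool keeps the ashift it was created with.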
-Ravi (rpokala@) From owner-freebsd-fs@freebsd.org Fri Feb 19 06:55:09 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 28CD7AAC0B3 for ; Fri, 19 Feb 2016 06:55:09 +0000 (UTC) (envelope-from zanchey@ucc.gu.uwa.edu.au) Received: from mail-ext-sout1.uwa.edu.au (mail-ext-sout1.uwa.edu.au [130.95.128.72]) (using TLSv1 with cipher RC4-SHA (128/128 bits)) (Client CN "IronPort Appliance Demo Certificate", Issuer "IronPort Appliance Demo Certificate" (not verified)) by mx1.freebsd.org (Postfix) with ESMTPS id 5DC781D62 for ; Fri, 19 Feb 2016 06:55:07 +0000 (UTC) (envelope-from zanchey@ucc.gu.uwa.edu.au) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: A2DtBADMu8ZW/8+AX4JUCoQMbaZSAQEBAQEBBpVBIYI8gzACgiEBAQEBAQFlJ4RCAQEEOjoFEAsOCi4sKwYTiBoOt1qEHgEBAQEGAQEBAQEXBIVKgj2CRoQLBQ0JhEkFjh2IaIVXiWKHaIUvjkdiggIZgVVdAYcCAR6BGgEBAQ X-IPAS-Result: A2DtBADMu8ZW/8+AX4JUCoQMbaZSAQEBAQEBBpVBIYI8gzACgiEBAQEBAQFlJ4RCAQEEOjoFEAsOCi4sKwYTiBoOt1qEHgEBAQEGAQEBAQEXBIVKgj2CRoQLBQ0JhEkFjh2IaIVXiWKHaIUvjkdiggIZgVVdAYcCAR6BGgEBAQ X-IronPort-AV: E=Sophos;i="5.22,469,1449504000"; d="scan'208";a="200827212" Received: from f5-new.net.uwa.edu.au (HELO mooneye.ucc.gu.uwa.edu.au) ([130.95.128.207]) by mail-ext-out1.uwa.edu.au with ESMTP/TLS/ADH-AES256-SHA; 19 Feb 2016 14:54:58 +0800 Received: by mooneye.ucc.gu.uwa.edu.au (Postfix, from userid 801) id F13E33C04E; Fri, 19 Feb 2016 14:54:58 +0800 (AWST) Received: from motsugo.ucc.gu.uwa.edu.au (motsugo.ucc.gu.uwa.edu.au [130.95.13.7]) by mooneye.ucc.gu.uwa.edu.au (Postfix) with ESMTP id C2F6F3C04E; Fri, 19 Feb 2016 14:54:58 +0800 (AWST) Received: by motsugo.ucc.gu.uwa.edu.au (Postfix, from userid 11251) id BABB120083; Fri, 19 Feb 2016 14:54:58 +0800 (AWST) Received: from localhost (localhost [127.0.0.1]) by motsugo.ucc.gu.uwa.edu.au (Postfix) with ESMTP id B5D2420081; Fri, 19 Feb 2016 14:54:58 
+0800 (AWST) Date: Fri, 19 Feb 2016 14:54:58 +0800 (AWST) From: David Adam To: Tom Curry cc: FreeBSD Filesystems Subject: Re: Poor ZFS+NFSv3 read/write performance and panic In-Reply-To: Message-ID: References: User-Agent: Alpine 2.11 (DEB 23 2013-08-11) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 19 Feb 2016 06:55:09 -0000 On Sun, 14 Feb 2016, Tom Curry wrote: > On Sun, Feb 14, 2016 at 3:40 AM, David Adam > wrote: > > > On Mon, 8 Feb 2016, Tom Curry wrote: > > > On Sun, Feb 7, 2016 at 11:58 AM, David Adam > > > wrote: > > > > > > > Just wondering if anyone has any idea how to identify which devices are > > > > implicated in ZFS' vdev_deadman(). I have updated the firmware on the > > > > mps(4) card that has our disks attached but that hasn't helped. > > > > > > I too ran into this problem and spent quite some time troubleshooting > > > hardware. For me it turns out it was not hardware at all, but software. > > > Specifically the ZFS ARC. Looking at your stack I see some arc reclaim up > > > top, it's possible you're running into the same issue. There is a monster > > > of a PR that details this here > > > https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=187594 > > > > > > If you would like to test this theory out, the fastest way is to limit > > the > > > ARC by adding the following to /boot/loader.conf and rebooting > > > vfs.zfs.arc_max="24G" > > > > > Thanks Tom - this certainly did sound promising, but setting the ARC to > > 11G of our 16G of RAM didn't help. `zfs-stats` confirmed that the ARC was > > the expected size and that there was still 461 MB of RAM free. > > Did the system still panic or did it merely degrade in performance? When > performance heads south are you swapping? 
I had booted back into a GENERIC kernel, so it slowed down and then deadlocked - no network traffic and no response on the console. I've never actually managed to capture the panic with a GENERIC kernel, only with one built with DDB/WITNESS/DIAGNOSTIC. My colleagues tended to try and reboot the server before it got to that stage (and then ask who was going to install Linux). It seems to be fixed now but I have committed a mortal sin and changed two things at once - upgraded to 10.3-BETA1 (as suggested by jwd@ off-list) but also dropped the ARC size further to 10G. If I can make it happen again, I'll certainly be asking for more help and will see what the swap state is. Thanks to everyone who replied on and off list. David Adam Wheel Group University Computer Club, The University of Western Australia zanchey@ucc.gu.uwa.edu.au From owner-freebsd-fs@freebsd.org Fri Feb 19 11:07:03 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id F24ABAAEF24 for ; Fri, 19 Feb 2016 11:07:03 +0000 (UTC) (envelope-from n.corvini@gmail.com) Received: from mail-ig0-x22f.google.com (mail-ig0-x22f.google.com [IPv6:2607:f8b0:4001:c05::22f]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id C16FB1AF7 for ; Fri, 19 Feb 2016 11:07:03 +0000 (UTC) (envelope-from n.corvini@gmail.com) Received: by mail-ig0-x22f.google.com with SMTP id y8so36060840igp.0 for ; Fri, 19 Feb 2016 03:07:03 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:date:message-id:subject:from:to; bh=QKy5xG3ZUvwvmCPriA0DxQxSlh28x8FsNYUlFDH5SFw=; b=n/CLQgXo70cPDuFW5r+l9AUoyxPzORXONSCh6Vn+WN+TZmIDvXu7TM/h0g/oTr8ZaZ VoZBkd/an4K7UDtKoBZ6GUwK3xJ0N5QO4kbSzEfdXCEPnLVGVeEA755WfRdzoHrBPkPS 
RzmwgLMQPntrHYY7rfVojz89OflLWunzoBS68sC3WqVeFQVXl2qu6FhDbcKwG5Y6Tz1K ravDX7T5zCtqw77L0ixmVHScuJ0koGiMWIgE5+800+pzRNBWCX86NcfcXD5rcrdcUNfM IByrt7PAqrO3G/xVNg63Jgx5nLi2n0IMHs5l6eo5ROddXfscpHylEDJFneKRfpBoECUk aTdQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:mime-version:date:message-id:subject:from:to; bh=QKy5xG3ZUvwvmCPriA0DxQxSlh28x8FsNYUlFDH5SFw=; b=hVun1X5qE6Z32r4kbuFiP0UcKT8rpBO0m7qbFCmMf4z95vhn6yPKn7COElYqzbAMsh W1uDf+Z/Z7MiZKkV2hYZBCdJKPkMp86emcSJYWLil1lS3pcXwUAqLRRPqRQdFC6pBqJQ y+CGWtv/GG1m5WyIfNiPDAE7uouSOrPQGIg+lTy+P/2eputK4qY0q4dGd+RnXiQAOsQK jZdyivc63TqEJBAqCToWRo2aKRYb9fvkTfd2xfebsm9usy12nZs4k7u2lIEaZm4tYcjI sXYyk4L8P2Xm/ioGGU+il86JTZaKt4X6dnzGXTIz95OK+n/YK/u69IJjMvSD9e9PwQs/ qAZA== X-Gm-Message-State: AG10YOQ2QbGaABBllgwCLO5CCJb331gQOi68Z0YyiqqE3ExPGH4q3hPn40B/JxtQoXuvQEygiK1K/MVG3HpTpA== MIME-Version: 1.0 X-Received: by 10.50.131.201 with SMTP id oo9mr8345808igb.68.1455880023205; Fri, 19 Feb 2016 03:07:03 -0800 (PST) Received: by 10.107.136.166 with HTTP; Fri, 19 Feb 2016 03:07:03 -0800 (PST) Date: Fri, 19 Feb 2016 12:07:03 +0100 Message-ID: Subject: Zfs heavy io writing | zfskern txg_thread_enter From: =?UTF-8?Q?Niccol=C3=B2_Corvini?= To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.20 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 19 Feb 2016 11:07:04 -0000 Hi, first time here! We are having a problem with a server running FreeBSD 9.1 with ZFS on a single SATA drive. Since a few days ago, in the morning the system becomes really slow due to really heavy I/O writing. We investigated and we think it might start at night, maybe correlated to crondaily (standard), but we are not sure. After a few hours the situation returns to normal.
Any help is much appreciated.

The machine is an Intel Xeon E5-2620 with 36GB of RAM; the HDD is 2TB and is half full.

gstat output:

 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
   13    135     21    641  256.7    108   6410   41.4  128.8| ada0
    0      0      0      0    0.0      0      0    0.0    0.0| ada0p1
   13    135     21    641  256.7    108   6410   41.7  128.8| ada0p2
    0      0      0      0    0.0      0      0    0.0    0.0| cd0
    0      0      0      0    0.0      0      0    0.0    0.0| gptid/3c0de011-4f37-11e5-8217-3085a91c3292
    0      0      0      0    0.0      0      0    0.0    0.0| zvol/zroot/swap
   13    135     21    641  256.7    108   6410   41.7  128.9| gpt/disk1

Using top -m io shows that the process responsible is [zfskern{txg_thread_enter}].

top -m io output:

  PID  JID USERNAME  VCSW IVCSW  READ WRITE FAULT TOTAL PERCENT COMMAND
    3    0 root        14     1     0    37     0    37  30.33% [zfskern{txg_thread_enter}]
49866  215 7070        26     2     0     5     0     5   4.10% postgres: stats collector process (postgres)
99901    5 70          42     0     0     4     0     4   3.28% postgres: promeditec promeditec.osr.test 192.168.0.246(278
24820  199 www         10     0     7     0     0     7   5.74% [jsvc{jsvc}]
33869  212 88          19     2     0     2     0     2   1.64% [mysqld{mysqld}]
93400    0 root        13     0    10     0     0    10   8.20% [find]
89407  215 7070        10     0     0     1     0     1   0.82% postgres: alfresco alfconservazione.dotcom.ts.it 192.168.0
15776    5 70          11     0     0     4     0     4   3.28% postgres: stats collector process (postgres)
33869  212 88          10     0     0     3     0     3   2.46% [mysqld{mysqld}]
33869  212 88           2     0     0    11     0    11   9.02% [mysqld{mysqld}]
18685  198 root         5     0     0     2     0     2   1.64% /usr/sbin/syslogd -s
15852  214 70           4     1     0     1     0     1   0.82% postgres: alfresco alfcomunets.dotcom.ts.it 192.168.0.212(
98335  120 root        11     0    29     0     0    29  23.77% find /var/log -name messages.* -mtime -2
16128  214 70           8     0     0     1     0     1   0.82% postgres: alfresco alfaxErre8 192.168.0.208(50558) (postg
 1116  198 root        10     0     0     1     0     1   0.82% sendmail: ./u1J9k90d001112 local: client DATA status (send
 1120  198 root         7     0     0     4     0     4   3.28% mail.local -l

Using procstat -kk on the zfskern pid shows:

  PID    TID COMM             TDNAME           KSTACK
    3 100129 zfskern          arc_reclaim_thre mi_switch sleepq_timedwait _cv_timedwait arc_reclaim_thread fork_exit fork_trampoline
    3 100130 zfskern          l2arc_feed_threa mi_switch sleepq_timedwait
_cv_timedwait l2arc_feed_thread fork_exit fork_trampoline 3 100504 zfskern txg_thread_enter mi_switch sleepq_wait _cv_wait txg_thread_wait txg_quiesce_thread fork_exit fork_trampoline 3 100505 zfskern txg_thread_enter mi_switch sleepq_wait _cv_wait zio_wait dsl_pool_sync spa_sync txg_sync_thread fork_exit fork_trampoline 3 100506 zfskern zvol zroot/swap mi_switch sleepq_wait _sleep zvol_geom_worker fork_exit fork_trampoline systat -vmstat 7 users Load 0.50 0.62 1.46 Feb 19 11:46 Mem:KB REAL VIRTUAL VN PAGER SWAP PAGER Tot Share Tot Share Free in out in out Act 41318k 199492 1251183k 588716 2254748 count All 46844k 229048 -1968M 892088 pages Proc: Interrupts r p d s w Csw Trp Sys Int Sof Flt cow 2836 total 4k 8516 393 12k 236 1445 11k 11281 zfod atkbd0 1 ozfod acpi0 9 4.4%Sys 0.0%Intr 0.3%User 0.0%Nice 95.4%Idle %ozfod ehci0 17 | | | | | | | | | | | daefr ehci1 23 == 11171 prcfr 79 cpu0:timer dtbuf 11265 totfr isci0 264 Namei Name-cache Dir-cache 1095774 desvn react 24 em0:rx 0 Calls hits % hits % 409282 numvn pdwak 16 em0:tx 0 78 41 53 273943 frevn pdpgs em0:link intrn 196 ahci0 278 Disks ada0 cd0 pass0 pass1 20445132 wire 267 cpu21:time KB/t 2.55 0.00 0.00 0.00 37317552 act 55 cpu13:time tps 223 0 0 0 4948708 inact 86 cpu5:timer MB/s 0.56 0.00 0.00 0.00 884804 cache 24 cpu12:time %busy 94 0 0 0 1370012 free 63 cpu10:time buf 306 cpu19:time 63 cpu11:time 55 cpu14:time 86 cpu9:timer 71 cpu18:time 86 cpu3:timer 47 cpu23:time 55 cpu6:timer 55 cpu22:time 71 cpu2:timer 39 cpu20:time zpool status: pool: zroot state: ONLINE scan: scrub repaired 0 in 3h46m with 0 errors on Wed Nov 4 21:54:44 2015 config: NAME STATE READ WRITE CKSUM zroot ONLINE 0 0 0 gpt/disk1 ONLINE 0 0 0 errors: No known data errors From owner-freebsd-fs@freebsd.org Fri Feb 19 11:37:35 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id BEF49AADE47 for ; Fri, 
19 Feb 2016 11:37:35 +0000 (UTC) (envelope-from killing@multiplay.co.uk) Received: from mail-wm0-x236.google.com (mail-wm0-x236.google.com [IPv6:2a00:1450:400c:c09::236]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 595231BFC for ; Fri, 19 Feb 2016 11:37:35 +0000 (UTC) (envelope-from killing@multiplay.co.uk) Received: by mail-wm0-x236.google.com with SMTP id c200so71288688wme.0 for ; Fri, 19 Feb 2016 03:37:35 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=multiplay-co-uk.20150623.gappssmtp.com; s=20150623; h=subject:to:references:from:message-id:date:user-agent:mime-version :in-reply-to:content-transfer-encoding; bh=rwuNwV46ythJgxi1VZXpjciqhfWF5ueFs3dmQEoWCW8=; b=bNC8jZp7kLQYO8vufHoOTgenpYDuma7RK0bZNfpjJGLwwIfJ5w8O9UiT+mR5SMFJjL B2PY8FqGefR13e/0os8vbUEvp/r+G5nV/b5pR22Mztr9Of4GGqRk6NccukQ7f3H6M6NX XgOeootKkDQeVpcxZCpLCbjzx9evsY7s62xGnjSogbX+sAWxcWY6KByqKHDMJrLSl/OJ Wb7KUR5lE2FFSI/TArG0AP/3EA/YXzAp5lZonhBW1vhsqcibeahfZg9eSz5pqKQuX+cP ZyLthCku1FT7yml9hLHTaU4QL89wQHbb82FX3CHSpwcq69QE3WNw02OZhBfdhY1hcTxk DkSg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:subject:to:references:from:message-id:date :user-agent:mime-version:in-reply-to:content-transfer-encoding; bh=rwuNwV46ythJgxi1VZXpjciqhfWF5ueFs3dmQEoWCW8=; b=WhTdZAgdCWCmocB6SXJOfbdnjzRffOWS4bLAdNrcAsd2FhyRV7GrfyWrnnqohskJvp 2BWlKJV1KIpdB+I6iNPXr4lte6Kb+w9uY6WOjh0rVaWfecOnvYa3/orHWz4hiaTdgB3L EbUqB4vAscBvium8eiWoNa/y8BHf/qc0Rw/pJ1pOWXb4PMeGPwz9nYOFygVa68/ITUJO 2sA3r3A7vKHef2QNPk6TP6nG3WM1jHtrStgwm/bhHI1eetRqpRm93fXVs6EMbABJ/svf ACfEIUEEIxXOg4kfe9fT1kq+s+FTXEwUVzRnvdQk2Aus4prIX0+1H9epbQOJdwmRMMXw gTrg== X-Gm-Message-State: AG10YORCjpDHGksEM4zMgGpi8eH7RlkCuOZnMTGSb5JK4/f2Ki+TDFqkH4Yw8ib3zso4J7ss X-Received: by 10.28.218.145 with SMTP id r139mr9371267wmg.52.1455881853600; Fri, 19 Feb 2016 
03:37:33 -0800 (PST) Received: from [10.10.1.58] (liv3d.labs.multiplay.co.uk. [82.69.141.171]) by smtp.gmail.com with ESMTPSA id g126sm7230498wmf.16.2016.02.19.03.37.32 for (version=TLSv1/SSLv3 cipher=OTHER); Fri, 19 Feb 2016 03:37:32 -0800 (PST) Subject: Re: Zfs heavy io writing | zfskern txg_thread_enter To: freebsd-fs@freebsd.org References: From: Steven Hartland Message-ID: <56C6FE7C.7080700@multiplay.co.uk> Date: Fri, 19 Feb 2016 11:37:32 +0000 User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:38.0) Gecko/20100101 Thunderbird/38.6.0 MIME-Version: 1.0 In-Reply-To: Content-Type: text/plain; charset=windows-1252; format=flowed Content-Transfer-Encoding: 8bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 19 Feb 2016 11:37:35 -0000 Please try 10.3-BETA2; FreeBSD 9.1 is so old it's impossible to comment on its performance. On 19/02/2016 11:07, Niccolò Corvini wrote: > Hi, first time here! > We are having a problem with a server running FreeBsd 9.1 with ZFS on a > single sata drive. Since a few days ago, in the morning the system becomes > really slow due of a really heavy io writing. We investigated and we think > it might start at night, maybe correlated to to crondaily (standard) but we > are not sure. After a few hours the situation returns to normal. > Any help is much appreciated > The machine is a Intel Xeon E5-2620 with 36GB of RAM, the HDD is a 2TB an > is half full.
> gstat output: > > L(q) ops/s r/s kBps ms/r w/s kBps ms/w %busy Name > 13 135 21 641 256.7 108 6410 41.4 128.8| ada0 > 0 0 0 0 0.0 0 0 0.0 0.0| ada0p1 > 13 135 21 641 256.7 108 6410 41.7 128.8| ada0p2 > 0 0 0 0 0.0 0 0 0.0 0.0| cd0 > 0 0 0 0 0.0 0 0 0.0 0.0| > gptid/3c0de011-4f37-11e5-8217-3085a91c3292 > 0 0 0 0 0.0 0 0 0.0 0.0| > zvol/zroot/swap > 13 135 21 641 256.7 108 6410 41.7 128.9| gpt/disk1 > > Using top -m io shows that the responsible is [zfskern{txg_thread_enter}] > top -m io output: > > PID JID USERNAME VCSW IVCSW READ WRITE FAULT TOTAL PERCENT COMMAND > 3 0 root 14 1 0 37 0 37 30.33% > [zfskern{txg_thread_enter}] > 49866 215 7070 26 2 0 5 0 5 4.10% > postgres: stats collector process (postgres) > 99901 5 70 42 0 0 4 0 4 3.28% > postgres: promeditec promeditec.osr.test 192.168.0.246(278 > 24820 199 www 10 0 7 0 0 7 5.74% > [jsvc{jsvc}] > 33869 212 88 19 2 0 2 0 2 1.64% > [mysqld{mysqld}] > 93400 0 root 13 0 10 0 0 10 8.20% [find] > 89407 215 7070 10 0 0 1 0 1 0.82% > postgres: alfresco alfconservazione.dotcom.ts.it 192.168.0 > 15776 5 70 11 0 0 4 0 4 3.28% > postgres: stats collector process (postgres) > 33869 212 88 10 0 0 3 0 3 2.46% > [mysqld{mysqld}] > 33869 212 88 2 0 0 11 0 11 9.02% > [mysqld{mysqld}] > 18685 198 root 5 0 0 2 0 2 1.64% > /usr/sbin/syslogd -s > 15852 214 70 4 1 0 1 0 1 0.82% > postgres: alfresco alfcomunets.dotcom.ts.it 192.168.0.212( > 98335 120 root 11 0 29 0 0 29 23.77% find > /var/log -name messages.* -mtime -2 > 16128 214 70 8 0 0 1 0 1 0.82% > postgres: alfresco alfaxErre8 192.168.0.208(50558) (postg > 1116 198 root 10 0 0 1 0 1 0.82% > sendmail: ./u1J9k90d001112 local: client DATA status (send > 1120 198 root 7 0 0 4 0 4 3.28% > mail.local -l > > Using procstat -kk on the zfskern pid shows: > > PID TID COMM TDNAME KSTACK > 3 100129 zfskern arc_reclaim_thre mi_switch sleepq_timedwait > _cv_timedwait arc_reclaim_thread fork_exit fork_trampoline > 3 100130 zfskern l2arc_feed_threa mi_switch sleepq_timedwait > _cv_timedwait 
l2arc_feed_thread fork_exit fork_trampoline > 3 100504 zfskern txg_thread_enter mi_switch sleepq_wait > _cv_wait txg_thread_wait txg_quiesce_thread fork_exit fork_trampoline > 3 100505 zfskern txg_thread_enter mi_switch sleepq_wait > _cv_wait zio_wait dsl_pool_sync spa_sync txg_sync_thread fork_exit > fork_trampoline > 3 100506 zfskern zvol zroot/swap mi_switch sleepq_wait _sleep > zvol_geom_worker fork_exit fork_trampoline > > systat -vmstat > > 7 users Load 0.50 0.62 1.46 Feb 19 11:46 > > Mem:KB REAL VIRTUAL VN PAGER SWAP > PAGER > Tot Share Tot Share Free in out in > out > Act 41318k 199492 1251183k 588716 2254748 count > All 46844k 229048 -1968M 892088 pages > Proc: Interrupts > r p d s w Csw Trp Sys Int Sof Flt cow 2836 total > 4k 8516 393 12k 236 1445 11k 11281 zfod > atkbd0 1 > ozfod acpi0 > 9 > 4.4%Sys 0.0%Intr 0.3%User 0.0%Nice 95.4%Idle %ozfod ehci0 > 17 > | | | | | | | | | | | daefr ehci1 > 23 > == 11171 prcfr 79 > cpu0:timer > dtbuf 11265 totfr isci0 > 264 > Namei Name-cache Dir-cache 1095774 desvn react 24 > em0:rx 0 > Calls hits % hits % 409282 numvn pdwak 16 > em0:tx 0 > 78 41 53 273943 frevn pdpgs > em0:link > intrn 196 ahci0 > 278 > Disks ada0 cd0 pass0 pass1 20445132 wire 267 > cpu21:time > KB/t 2.55 0.00 0.00 0.00 37317552 act 55 > cpu13:time > tps 223 0 0 0 4948708 inact 86 > cpu5:timer > MB/s 0.56 0.00 0.00 0.00 884804 cache 24 > cpu12:time > %busy 94 0 0 0 1370012 free 63 > cpu10:time > buf 306 > cpu19:time > 63 > cpu11:time > 55 > cpu14:time > 86 > cpu9:timer > 71 > cpu18:time > 86 > cpu3:timer > 47 > cpu23:time > 55 > cpu6:timer > 55 > cpu22:time > 71 > cpu2:timer > 39 > cpu20:time > > zpool status: > pool: zroot > state: ONLINE > scan: scrub repaired 0 in 3h46m with 0 errors on Wed Nov 4 21:54:44 2015 > config: > > NAME STATE READ WRITE CKSUM > zroot ONLINE 0 0 0 > gpt/disk1 ONLINE 0 0 0 > > errors: No known data errors > _______________________________________________ > freebsd-fs@freebsd.org mailing list > 
https://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@freebsd.org Fri Feb 19 11:58:34 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 4415CAAC921 for ; Fri, 19 Feb 2016 11:58:34 +0000 (UTC) (envelope-from michelle@sorbs.net) Received: from hades.sorbs.net (mail.sorbs.net [67.231.146.200]) by mx1.freebsd.org (Postfix) with ESMTP id 397561596 for ; Fri, 19 Feb 2016 11:58:33 +0000 (UTC) (envelope-from michelle@sorbs.net) MIME-version: 1.0 Content-transfer-encoding: 8BIT Content-type: text/plain; charset=ISO-8859-1 Received: from isux.com (firewall.isux.com [213.165.190.213]) by hades.sorbs.net (Oracle Communications Messaging Server 7.0.5.29.0 64bit (built Jul 9 2013)) with ESMTPSA id <0O2S00F2WMXA9Q00@hades.sorbs.net> for freebsd-fs@freebsd.org; Fri, 19 Feb 2016 04:05:36 -0800 (PST) Message-id: <56C70365.1050800@sorbs.net> Date: Fri, 19 Feb 2016 12:58:29 +0100 From: Michelle Sullivan User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X; en-US; rv:1.8.1.24) Gecko/20100301 SeaMonkey/1.1.19 To: =?ISO-8859-1?Q?Niccol=F2_Corvini?= Cc: freebsd-fs@freebsd.org Subject: Re: Zfs heavy io writing | zfskern txg_thread_enter References: In-reply-to: X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 19 Feb 2016 11:58:34 -0000 Niccolò Corvini wrote: > Hi, first time here! > We are having a problem with a server running FreeBsd 9.1 with ZFS on a > You should upgrade to a supported version first...
9.3 would probably be the best (rather than 10.x) as it's still supported and uses the same ABI (i.e. you shouldn't need to reinstall all your ports/packages - though you should because it sometimes breaks things - at least check for broken things :) .)

If you're not familiar with it, "freebsd-update -r 9.3-RELEASE upgrade" will help you do it without too many problems.

> single sata drive. Since a few days ago, in the morning the system becomes
> really slow due of a really heavy io writing. We investigated and we think
> it might start at night, maybe correlated to to crondaily (standard) but we
> are not sure. After a few hours the situation returns to normal.
>
Yeah, this sounds like something I am quite familiar with... It's the security check cronjob that runs every day... it's looking for any setuid/setgid files, new/modified files, etc. across all file systems.

> Any help is much appreciated
> The machine is a Intel Xeon E5-2620 with 36GB of RAM, the HDD is a 2TB an
> is half full.
> gstat output:
>
>   PID   JID USERNAME  VCSW IVCSW  READ WRITE FAULT TOTAL PERCENT COMMAND
>     3     0 root        14     1     0    37     0    37  30.33% [zfskern{txg_thread_enter}]
> 49866   215 7070        26     2     0     5     0     5   4.10%
> 93400     0 root        13     0    10     0     0    10   8.20% [find]
> 98335   120 root        11     0    29     0     0    29  23.77% find /var/log -name messages.* -mtime -2
> 16128   214 70           8     0     0     1     0     1   0.82% sendmail: ./u1J9k90d001112 local: client DATA status (send
>  1120   198 root         7     0     0     4     0     4   3.28% mail.local -l
>
You'll find it kicked off from /etc/periodic/daily/ with a config option you can find in /etc/defaults/periodic.conf (or /etc/periodic.conf or /etc/periodic.conf.local)... 
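[For reference, the override described above is only a line or two in /etc/periodic.conf. A minimal sketch, assuming the 9.x-era variable names from /etc/defaults/periodic.conf - knob names have moved between releases, so check your own defaults file first:

```shell
# /etc/periodic.conf -- local overrides for /etc/defaults/periodic.conf.
# Disable the filesystem-wide sweeps run by /etc/periodic/security/,
# which walk every inode on every mounted filesystem each night:
daily_status_security_chksetuid_enable="NO"   # skip the setuid/setgid file sweep
daily_status_security_neggrpperm_enable="NO"  # skip the negative-group-permission sweep
```

Disabling these trades the nightly I/O load for weaker auditing; a quick `grep chksetuid /etc/defaults/periodic.conf` confirms whether your release spells the knobs this way.]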
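[The freebsd-update run mentioned above is a multi-step procedure; a hedged outline of the usual sequence from freebsd-update(8). Since 9.1 to 9.3 stays within the same major version, third-party packages generally do not need rebuilding:

```shell
freebsd-update -r 9.3-RELEASE upgrade   # fetch the new release and merge config files
freebsd-update install                  # stage 1: install the new kernel
shutdown -r now                         # reboot onto the new kernel
freebsd-update install                  # stage 2, after the reboot: install the new userland
```

For a major-version jump (e.g. to 10.x) there is an additional step: rebuild or reinstall all ports/packages after stage 2, then run `freebsd-update install` once more to remove old shared libraries.]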
Regards, -- Michelle Sullivan http://www.mhix.org/ From owner-freebsd-fs@freebsd.org Fri Feb 19 12:29:40 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 08891AADFD7 for ; Fri, 19 Feb 2016 12:29:40 +0000 (UTC) (envelope-from killing@multiplay.co.uk) Received: from mail-wm0-x22e.google.com (mail-wm0-x22e.google.com [IPv6:2a00:1450:400c:c09::22e]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 9616719F6 for ; Fri, 19 Feb 2016 12:29:39 +0000 (UTC) (envelope-from killing@multiplay.co.uk) Received: by mail-wm0-x22e.google.com with SMTP id a4so68490060wme.1 for ; Fri, 19 Feb 2016 04:29:39 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=multiplay-co-uk.20150623.gappssmtp.com; s=20150623; h=subject:to:references:from:message-id:date:user-agent:mime-version :in-reply-to:content-transfer-encoding; bh=3a7490QJPajESWeUTEMOF93kQJdP+NobdJRolpjUdeI=; b=g+9dB2ujhVvfJY1dDCYpepcKylWCc2URgmre/7ckvXe+jmt+6wRhJaat2ryquZwlHf Y5xrD/3NdQ/BXPSNoXekrkWrYVW7tcXY8PvgTuBE3cWUlAo7YpWhnpcCrX7x8oI90SkY rucJR2qrqLBGyW0P8KijVhVK6keTgaxK8rN82Vi3aewTwWDJf97tNUVMheAeUpfD2VMY bARQjZKnIZ4et7V5K/eTLG/ZxQo6kLkmQgDjQrNkf3jnSZgJyNcuclx1+0PLfs9e8xb/ Ncdtl1xr3/YugZUJBw0hlez4Le1lRnp+Y6D6qfNLp7MvgAuSIdtKeWmrSYk2mN2bPxuE aE+Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:subject:to:references:from:message-id:date :user-agent:mime-version:in-reply-to:content-transfer-encoding; bh=3a7490QJPajESWeUTEMOF93kQJdP+NobdJRolpjUdeI=; b=jKSrebUFfiwqscftu21fC2sjhE75v2ca0w1+++hkz1KeejEis6seMar40LSHvA8RTK 2hmvggfw2ElrFC4tNfFziJzA22CDPgLy/tW6B0H5xQiNN6Z37j9m3ggqDColW+W7pKpl etCUsFV0z5KhZxsNyL9Kqn+fPv4EEQjDP7so7RsKDpgmWXYGFz4KNrFkkAob/yeNwhry 
MQY4jKLwMCXxr0HFoHgFmnr5IPsa20vd6gHkdeiFo6nsZPWau+NFnj/MvnTvA7o8lgiv 5htgSMjmYLlUxLM2gxYH69MObOEhMdxc0HaIEQLFVo75+sHdxfguxmVqrAFZXqQqHeGx yG9A== X-Gm-Message-State: AG10YOS3DHeuCmrtkEkG5oBnQHaTsJ0fzHOtd2CR6lBBMuqLx0p5E3mPvjll+1SL5Yz31kqz X-Received: by 10.28.129.10 with SMTP id c10mr8625229wmd.35.1455884978161; Fri, 19 Feb 2016 04:29:38 -0800 (PST) Received: from [10.10.1.58] (liv3d.labs.multiplay.co.uk. [82.69.141.171]) by smtp.gmail.com with ESMTPSA id s2sm11023038wjs.39.2016.02.19.04.29.36 for (version=TLSv1/SSLv3 cipher=OTHER); Fri, 19 Feb 2016 04:29:36 -0800 (PST) Subject: Re: Zfs heavy io writing | zfskern txg_thread_enter To: freebsd-fs@freebsd.org References: <56C70365.1050800@sorbs.net> From: Steven Hartland Message-ID: <56C70AB0.6050400@multiplay.co.uk> Date: Fri, 19 Feb 2016 12:29:36 +0000 User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:38.0) Gecko/20100101 Thunderbird/38.6.0 MIME-Version: 1.0 In-Reply-To: <56C70365.1050800@sorbs.net> Content-Type: text/plain; charset=windows-1252; format=flowed Content-Transfer-Encoding: 8bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 19 Feb 2016 12:29:40 -0000 On 19/02/2016 11:58, Michelle Sullivan wrote: > Niccol Corvini wrote: >> Hi, first time here! >> We are having a problem with a server running FreeBsd 9.1 with ZFS on a >> > You should upgrade to a supported version first... 9.3 would probably > be the best (rather than 10.x) as it's still supported and uses the same > ABI (ie you should need to reinstall all your ports/packages - though > you should because it sometimes breaks things - at least check for > broken things :) .) > > If you're not familiar "freebsd-update -r 9.3-RELEASE upgrade" will help > you do it without too many problems. 
9.3 is still ancient, and while "supported" it's not in active development, and to be blunt no one will be interested in helping to diagnose any actual issue on something so old.

10.x has a totally different ZFS IO scheduler, for example, so it behaves differently for most workloads.

> >> single sata drive. Since a few days ago, in the morning the system becomes
> >> really slow due of a really heavy io writing. We investigated and we think
> >> it might start at night, maybe correlated to to crondaily (standard) but we
> >> are not sure. After a few hours the situation returns to normal.
> >>
> > Yeah this sounds like something I am quite familiar with... It's the
> > security check cronjob that runs every day... its looking for any
> > setuid/setgid files, new/modified files...etc... across all file systems
This is quite likely, so updating to 10 may not fix the issue you're seeing on 9.x.

Be aware that 10.3-BETA2 has a known issue related to vnode memory usage which can be triggered by such workloads, so trying BETA3, which should address this, when it is released would be a good idea.
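[For anyone comparing the two schedulers, the 10.x per-vdev I/O queues and write throttle are exposed as sysctls. A sketch assuming a 10.x box - none of these OIDs exist on 9.x, which still uses the legacy vfs.zfs.vdev.min_pending/max_pending model:

```shell
# Per-vdev I/O scheduler queue limits on 10.x (read on a running system;
# settable as loader tunables or at runtime):
sysctl vfs.zfs.vdev.max_active
sysctl vfs.zfs.vdev.sync_read_min_active vfs.zfs.vdev.sync_read_max_active
sysctl vfs.zfs.vdev.async_write_min_active vfs.zfs.vdev.async_write_max_active

# Write throttle: how much dirty data may accumulate before writers are delayed
sysctl vfs.zfs.dirty_data_max vfs.zfs.delay_min_dirty_percent
```

Comparing these against the 9.x pending-queue tunables is one concrete way to see why the same workload schedules I/O so differently across the two release lines.]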
Regards Steve From owner-freebsd-fs@freebsd.org Fri Feb 19 13:06:29 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 68B04AAEEA9 for ; Fri, 19 Feb 2016 13:06:29 +0000 (UTC) (envelope-from michelle@sorbs.net) Received: from hades.sorbs.net (mail.sorbs.net [67.231.146.200]) by mx1.freebsd.org (Postfix) with ESMTP id 5D1991EDE for ; Fri, 19 Feb 2016 13:06:28 +0000 (UTC) (envelope-from michelle@sorbs.net) MIME-version: 1.0 Content-transfer-encoding: 8BIT Content-type: text/plain; charset=windows-1252 Received: from isux.com (firewall.isux.com [213.165.190.213]) by hades.sorbs.net (Oracle Communications Messaging Server 7.0.5.29.0 64bit (built Jul 9 2013)) with ESMTPSA id <0O2S00F3OQ2G9Q00@hades.sorbs.net> for freebsd-fs@freebsd.org; Fri, 19 Feb 2016 05:13:31 -0800 (PST) Message-id: <56C71350.3020602@sorbs.net> Date: Fri, 19 Feb 2016 14:06:24 +0100 From: Michelle Sullivan User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X; en-US; rv:1.8.1.24) Gecko/20100301 SeaMonkey/1.1.19 To: Steven Hartland Cc: freebsd-fs@freebsd.org Subject: Re: Zfs heavy io writing | zfskern txg_thread_enter References: <56C70365.1050800@sorbs.net> <56C70AB0.6050400@multiplay.co.uk> In-reply-to: <56C70AB0.6050400@multiplay.co.uk> X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 19 Feb 2016 13:06:29 -0000 Steven Hartland wrote: > > > On 19/02/2016 11:58, Michelle Sullivan wrote: >> Niccol Corvini wrote: >>> Hi, first time here! >>> We are having a problem with a server running FreeBsd 9.1 with ZFS on a >>> >> You should upgrade to a supported version first... 
9.3 would probably >> be the best (rather than 10.x) as it's still supported and uses the same >> ABI (ie you should need to reinstall all your ports/packages - though >> you should because it sometimes breaks things - at least check for >> broken things :) .) >> >> If you're not familiar "freebsd-update -r 9.3-RELEASE upgrade" will help >> you do it without too many problems. > 9.3 is still ancient, and while "supported" its not in active > development, and to be blunt no one will be interested in helping to > diagnose any actual issue on something so old. So supported is not really supported... Is that an official position? > > 10.x has a totally different ZFS IO scheduler for example, so its > differently for most workloads. But the user is on 9.x not 10.x and 10.x changes a lot more than just the ZFS IO scheduler. If this is a production machine, then an upgrade to 9.3 may be easier as it would require less regression testing.... Or is this another case of people don't run FreeBSD in production environments so it doesn't matter...? >> >>> single sata drive. Since a few days ago, in the morning the system >>> becomes >>> really slow due of a really heavy io writing. We investigated and we >>> think >>> it might start at night, maybe correlated to to crondaily (standard) >>> but we >>> are not sure. After a few hours the situation returns to normal. >>> >> Yeah this sounds like something I am quite familiar with... It's the >> security check cronjob that runs every day... its looking for any >> setuid/setgid files, new/modified files...etc... across all file systems > This is quite likely, so while updating to 10 may not fix the issue > running on 9.x. Which means you just told the user to do something that is not likely to fix the issue but will give them more problems to deal with so they might forget the original problem in the mean time... 
You know, this was the reason I was the Technical Lead for the support teams first in Europe and then in AsiaPAC for Netscape back in the day, and why I never worked for Microsoft... You diagnose the problem as best as possible, with as few changes to the system as possible at first; then, if all else fails or you come across evidence that points to a known bug that you know is fixed, you tell them to upgrade to the latest *supported* version.

>
> Be aware that 10.3-BETA2 has a known issue related to vnode memory
> usage which can be triggered by such workloads so trying BETA3 when
> released, which should address this would be a good idea.
>
...supported version... i.e. *NOT* a BETA release - especially if that beta release has other known issues that might well trigger on the very problem they indicated...!

Seriously, sorry to pick on you, but "Upgrade to 10.x Beta as it might help" is not the *first* answer anyone should give... you might as well have told him to upgrade to 11.... because that has as much chance of fixing the problem... 
Regards, -- Michelle Sullivan http://www.mhix.org/ From owner-freebsd-fs@freebsd.org Fri Feb 19 13:51:24 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id E30EDAAE0F5 for ; Fri, 19 Feb 2016 13:51:23 +0000 (UTC) (envelope-from killing@multiplay.co.uk) Received: from mail-wm0-x22c.google.com (mail-wm0-x22c.google.com [IPv6:2a00:1450:400c:c09::22c]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 8387718BE for ; Fri, 19 Feb 2016 13:51:23 +0000 (UTC) (envelope-from killing@multiplay.co.uk) Received: by mail-wm0-x22c.google.com with SMTP id c200so76628383wme.0 for ; Fri, 19 Feb 2016 05:51:22 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=multiplay-co-uk.20150623.gappssmtp.com; s=20150623; h=subject:to:references:cc:from:message-id:date:user-agent :mime-version:in-reply-to:content-type:content-transfer-encoding; bh=i5WCWXq8CjP1Gn47s++4VTJJPjct0tkUZThdIMRKcpw=; b=bw/dEFMZXTXcwVOhE5VLEzc9v0jSIIeBraWW6Sq0LZK6GgZ/Ft/vCp1T/jvrVDm+bk Qun5zojzL0FEhRcmMlwXIShiydzBL/3VvAfgKsgjd3Npl9zw5q+Y/3ZnXf60FOl/gjeW XAL592xe0UMe3OqGK7KuRq0eoXl94hSMXvCurFfgCGBTP5ow/WprAwihq107j7UNR8BD 2Ag3dV+g3xMVoFDvXtWq5AD7eKtXaPBKi/cdVNmNrrihPj9BU1FoGNm+YDqzPpyXZ0wQ yUITTr4OzXurTIpZgAMza3LWgXbJRGUkskH/4vmo/MhcUSaNReAuX0LnVD3++kZibAuk iV7Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:subject:to:references:cc:from:message-id:date :user-agent:mime-version:in-reply-to:content-type :content-transfer-encoding; bh=i5WCWXq8CjP1Gn47s++4VTJJPjct0tkUZThdIMRKcpw=; b=li+YuLz6F9UPHI8P+I5yl90UkyQO03I3wVXm4B73ukUEIMSYrFwwIGL9Y8Ihu7sXrH MGnR5g6Plo/iRdvKx2mfPBB9kM7gR1r6VROqG0gAW7fQUyrH5a/TIXcVe4hu4N4Ad/4C 
uI5x1RL1MAJZ/oVsseZ72L1T9Y+h4vasyb27+NUjd5Bp0U18Sj5pL0bOpy+oPzjYpZTQ /2NRw9MjH45mmMTdOIgRHWkTnniMNZUp9iVvGE6IJevV3HgOiL/aenojcWKxjw5ywWys L4HeHwpmsIHi+hKtr1sil1fl/V/w5W3goE3cDsZSuDhJDQ1HughiMZH66gFoHeV6ruHs hxFw== X-Gm-Message-State: AG10YORMR93nfuky2tzGh7WEyK8E4OltCJw2vlRDzLMZ2HgTOkZK9MXO8mm0gbL/UAJcK5b4 X-Received: by 10.194.133.201 with SMTP id pe9mr13213100wjb.101.1455889881514; Fri, 19 Feb 2016 05:51:21 -0800 (PST) Received: from [10.10.1.58] (liv3d.labs.multiplay.co.uk. [82.69.141.171]) by smtp.gmail.com with ESMTPSA id i5sm11328646wja.23.2016.02.19.05.51.19 (version=TLSv1/SSLv3 cipher=OTHER); Fri, 19 Feb 2016 05:51:19 -0800 (PST) Subject: Re: Zfs heavy io writing | zfskern txg_thread_enter To: Michelle Sullivan References: <56C70365.1050800@sorbs.net> <56C70AB0.6050400@multiplay.co.uk> <56C71350.3020602@sorbs.net> Cc: freebsd-fs@freebsd.org From: Steven Hartland Message-ID: <56C71DD8.3040505@multiplay.co.uk> Date: Fri, 19 Feb 2016 13:51:20 +0000 User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:38.0) Gecko/20100101 Thunderbird/38.6.0 MIME-Version: 1.0 In-Reply-To: <56C71350.3020602@sorbs.net> Content-Type: text/plain; charset=windows-1252; format=flowed Content-Transfer-Encoding: 8bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 19 Feb 2016 13:51:24 -0000 On 19/02/2016 13:06, Michelle Sullivan wrote: > Steven Hartland wrote: >> >> On 19/02/2016 11:58, Michelle Sullivan wrote: >>> Niccol Corvini wrote: >>>> Hi, first time here! >>>> We are having a problem with a server running FreeBsd 9.1 with ZFS on a >>>> >>> You should upgrade to a supported version first... 
9.3 would probably >>> be the best (rather than 10.x) as it's still supported and uses the same >>> ABI (ie you should need to reinstall all your ports/packages - though >>> you should because it sometimes breaks things - at least check for >>> broken things :) .) >>> >>> If you're not familiar "freebsd-update -r 9.3-RELEASE upgrade" will help >>> you do it without too many problems. >> 9.3 is still ancient, and while "supported" its not in active >> development, and to be blunt no one will be interested in helping to >> diagnose any actual issue on something so old. > So supported is not really supported... Is that an official position? Supported for 9.x, which is a "Legacy Release", I would say is supported for security and other critical issues only, which is the same for pretty much every project / product out there. > >> 10.x has a totally different ZFS IO scheduler for example, so its >> differently for most workloads. > But the user is on 9.x not 10.x and 10.x changes a lot more than just > the ZFS IO scheduler. If this is a production machine, then an upgrade > to 9.3 may be easier as it would require less regression testing.... Or > is this another case of people don't run FreeBSD in production > environments so it doesn't matter...? Yes but 9.x is already legacy and becomes unsupported in December of this year, so the process of migration to 10.x should be well on the way by now tbh. > >>>> single sata drive. Since a few days ago, in the morning the system >>>> becomes >>>> really slow due of a really heavy io writing. We investigated and we >>>> think >>>> it might start at night, maybe correlated to to crondaily (standard) >>>> but we >>>> are not sure. After a few hours the situation returns to normal. >>>> >>> Yeah this sounds like something I am quite familiar with... It's the >>> security check cronjob that runs every day... its looking for any >>> setuid/setgid files, new/modified files...etc... 
across all file systems >> This is quite likely, so while updating to 10 may not fix the issue >> running on 9.x. > Which means you just told the user to do something that is not likely to > fix the issue but will give them more problems to deal with so they > might forget the original problem in the mean time... You know this was > the reason I was the Technical Lead for the support teams first in > Europe then in AsiaPAC for Netscape back in the day, and why I never > worked for Microsoft... You diagnose the problem as best as possible > with as minimal changes to the system at first, then if all else fails > or you come across evidence that points to a known bug that you know is > fixed you tell them to upgrade to the latest *supported* version > >> Be aware that 10.3-BETA2 has a known issue related to vnode memory >> usage which can be triggered by such workloads so trying BETA3 when >> released, which should address this would be a good idea. >> > ...supported version... i.e. *NOT* a BETA release - especially if that > beta release has other known issues that might well trigger on the very > problem they indicated...! > > Seriously, sorry to pick on you, but "Upgrade to 10.x Beta as it might > help" is not the *first* answer anyone should give... you might as well > have told him to upgrade to 11.... because that has as much chance of > fixing the problem... Beta and soon to become RC, so no its nothing like 11. If no one bothered to update and test until release then you can't expect a good result at the end of the day now can you. There's a reason why people like Netflix run code close to head. Gleb Smirnoff iirc has Youtube vid talking about it, you should watch it. Obviously when updating to any new version, be that on mission critical system or dev test box, then the implications must be considered. 
At the end of the day it doesn't change the fact that if any investigation results in the issue requiring code changes, you're not going to get that running 9.x, and it may already be fixed, so you'll have wasted your time. Yes you could spend lots of time investigating the problem to come to that conclusion and be at a dead end or you could get on a version which is being actively developed and hence help that process along, which is something that's going to need to be done anyway, so why not take a step in the right direction. At the end of the day its a balancing act and something that only the user can decide given the relevant information. Regards Steve From owner-freebsd-fs@freebsd.org Fri Feb 19 14:06:17 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id C5BBDAAE4EF for ; Fri, 19 Feb 2016 14:06:17 +0000 (UTC) (envelope-from michelle@sorbs.net) Received: from hades.sorbs.net (mail.sorbs.net [67.231.146.200]) by mx1.freebsd.org (Postfix) with ESMTP id B9ABC1065 for ; Fri, 19 Feb 2016 14:06:16 +0000 (UTC) (envelope-from michelle@sorbs.net) MIME-version: 1.0 Content-transfer-encoding: 8BIT Content-type: text/plain; charset=windows-1252 Received: from isux.com (firewall.isux.com [213.165.190.213]) by hades.sorbs.net (Oracle Communications Messaging Server 7.0.5.29.0 64bit (built Jul 9 2013)) with ESMTPSA id <0O2S00F4CSU69Q00@hades.sorbs.net> for freebsd-fs@freebsd.org; Fri, 19 Feb 2016 06:13:20 -0800 (PST) Message-id: <56C72155.10500@sorbs.net> Date: Fri, 19 Feb 2016 15:06:13 +0100 From: Michelle Sullivan User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X; en-US; rv:1.8.1.24) Gecko/20100301 SeaMonkey/1.1.19 To: Steven Hartland Cc: freebsd-fs@freebsd.org Subject: Re: Zfs heavy io writing | zfskern txg_thread_enter References: <56C70365.1050800@sorbs.net> <56C70AB0.6050400@multiplay.co.uk> 
<56C71350.3020602@sorbs.net> <56C71DD8.3040505@multiplay.co.uk> In-reply-to: <56C71DD8.3040505@multiplay.co.uk> X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 19 Feb 2016 14:06:17 -0000 Steven Hartland wrote: > On 19/02/2016 13:06, Michelle Sullivan wrote: >> Steven Hartland wrote: >>> >>> On 19/02/2016 11:58, Michelle Sullivan wrote: >>>> Niccol Corvini wrote: >>>>> Hi, first time here! >>>>> We are having a problem with a server running FreeBsd 9.1 with ZFS >>>>> on a >>>>> >>>> You should upgrade to a supported version first... 9.3 would probably >>>> be the best (rather than 10.x) as it's still supported and uses the >>>> same >>>> ABI (ie you should need to reinstall all your ports/packages - though >>>> you should because it sometimes breaks things - at least check for >>>> broken things :) .) >>>> >>>> If you're not familiar "freebsd-update -r 9.3-RELEASE upgrade" will >>>> help >>>> you do it without too many problems. >>> 9.3 is still ancient, and while "supported" its not in active >>> development, and to be blunt no one will be interested in helping to >>> diagnose any actual issue on something so old. >> So supported is not really supported... Is that an official position? > Supported for 9.x, which is a "Legacy Release", I would say is > supported for security and other critical issues only, which is the > same for pretty much every project / product out there. Oh, it's legacy is it? Funny you should say that ... "Production = Legacy" then...? Oh wait... it's support ... lets look at what the "support lifecycle" says on it.. >>> 10.x has a totally different ZFS IO scheduler for example, so its >>> differently for most workloads. >> But the user is on 9.x not 10.x and 10.x changes a lot more than just >> the ZFS IO scheduler. 
If this is a production machine, then an upgrade >> to 9.3 may be easier as it would require less regression testing.... Or >> is this another case of people don't run FreeBSD in production >> environments so it doesn't matter...? > > Yes but 9.x is already legacy and becomes unsupported in December of > this year, so the process of migration to 10.x should be well on the > way by now tbh. So is 10.1 and 10.2 then... Oh so that will be why you're telling them to upgrade to a non-supported beta release....! Seriously...! (Sorry for the html email with jpegs embedded, hate it myself - especially to mailing lists, but a picture is worth a 1000 words especially when screen grabbed from a few minutes ago...) -- Michelle Sullivan http://www.mhix.org/ From owner-freebsd-fs@freebsd.org Fri Feb 19 14:09:56 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 91039AAE644 for ; Fri, 19 Feb 2016 14:09:56 +0000 (UTC) (envelope-from michelle@sorbs.net) Received: from hades.sorbs.net (mail.sorbs.net [67.231.146.200]) by mx1.freebsd.org (Postfix) with ESMTP id 84EF111C9 for ; Fri, 19 Feb 2016 14:09:55 +0000 (UTC) (envelope-from michelle@sorbs.net) MIME-version: 1.0 Content-transfer-encoding: 8BIT Content-type: text/plain; charset=windows-1252 Received: from isux.com (firewall.isux.com [213.165.190.213]) by hades.sorbs.net (Oracle Communications Messaging Server 7.0.5.29.0 64bit (built Jul 9 2013)) with ESMTPSA id <0O2S00F4IT089Q00@hades.sorbs.net> for freebsd-fs@freebsd.org; Fri, 19 Feb 2016 06:16:59 -0800 (PST) Message-id: <56C7222F.2090009@sorbs.net> Date: Fri, 19 Feb 2016 15:09:51 +0100 From: Michelle Sullivan User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X; en-US; rv:1.8.1.24) Gecko/20100301 SeaMonkey/1.1.19 To: Steven Hartland Cc: freebsd-fs@freebsd.org Subject: Re: Zfs heavy io writing | zfskern txg_thread_enter References: 
<56C70365.1050800@sorbs.net> <56C70AB0.6050400@multiplay.co.uk> <56C71350.3020602@sorbs.net> <56C71DD8.3040505@multiplay.co.uk> <56C72155.10500@sorbs.net> In-reply-to: <56C72155.10500@sorbs.net> X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 19 Feb 2016 14:09:56 -0000 Links to the grabs inline because the list seems to have converted html->plaintext Michelle Sullivan wrote: > Steven Hartland wrote: > >> On 19/02/2016 13:06, Michelle Sullivan wrote: >> >>> Steven Hartland wrote: >>> >>>> On 19/02/2016 11:58, Michelle Sullivan wrote: >>>> >>>>> Niccol Corvini wrote: >>>>> >>>>>> Hi, first time here! >>>>>> We are having a problem with a server running FreeBsd 9.1 with ZFS >>>>>> on a >>>>>> >>>>>> >>>>> You should upgrade to a supported version first... 9.3 would probably >>>>> be the best (rather than 10.x) as it's still supported and uses the >>>>> same >>>>> ABI (ie you should need to reinstall all your ports/packages - though >>>>> you should because it sometimes breaks things - at least check for >>>>> broken things :) .) >>>>> >>>>> If you're not familiar "freebsd-update -r 9.3-RELEASE upgrade" will >>>>> help >>>>> you do it without too many problems. >>>>> >>>> 9.3 is still ancient, and while "supported" its not in active >>>> development, and to be blunt no one will be interested in helping to >>>> diagnose any actual issue on something so old. >>>> >>> So supported is not really supported... Is that an official position? >>> >> Supported for 9.x, which is a "Legacy Release", I would say is >> supported for security and other critical issues only, which is the >> same for pretty much every project / product out there. >> > Oh, it's legacy is it? Funny you should say that ... "Production = > Legacy" then...? > > > ----> http://flashback.sorbs.net/packages/freebsd-page.jpg > > Oh wait... it's support ... 
lets look at what the "support lifecycle" > says on it.. > > > > ----> http://flashback.sorbs.net/packages/freebsd-support.jpg > > > > > >>>> 10.x has a totally different ZFS IO scheduler for example, so its >>>> differently for most workloads. >>>> >>> But the user is on 9.x not 10.x and 10.x changes a lot more than just >>> the ZFS IO scheduler. If this is a production machine, then an upgrade >>> to 9.3 may be easier as it would require less regression testing.... Or >>> is this another case of people don't run FreeBSD in production >>> environments so it doesn't matter...? >>> >> Yes but 9.x is already legacy and becomes unsupported in December of >> this year, so the process of migration to 10.x should be well on the >> way by now tbh. >> > > So is 10.1 and 10.2 then... Oh so that will be why you're telling them > to upgrade to a non-supported beta release....! > See: http://flashback.sorbs.net/packages/freebsd-support.jpg > Seriously...! > > > (Sorry for the html email with jpegs embedded, hate it myself - > especially to mailing lists, but a picture is worth a 1000 words > especially when screen grabbed from a few minutes ago...) 
> > -- Michelle Sullivan http://www.mhix.org/ From owner-freebsd-fs@freebsd.org Fri Feb 19 14:29:14 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 9A5A9AAEB3B for ; Fri, 19 Feb 2016 14:29:14 +0000 (UTC) (envelope-from michelle@sorbs.net) Received: from hades.sorbs.net (mail.sorbs.net [67.231.146.200]) by mx1.freebsd.org (Postfix) with ESMTP id 8E43E1C87 for ; Fri, 19 Feb 2016 14:29:13 +0000 (UTC) (envelope-from michelle@sorbs.net) MIME-version: 1.0 Content-transfer-encoding: 8BIT Content-type: text/plain; charset=windows-1252 Received: from isux.com (firewall.isux.com [213.165.190.213]) by hades.sorbs.net (Oracle Communications Messaging Server 7.0.5.29.0 64bit (built Jul 9 2013)) with ESMTPSA id <0O2S00F4STWF9Q00@hades.sorbs.net> for freebsd-fs@freebsd.org; Fri, 19 Feb 2016 06:36:17 -0800 (PST) Message-id: <56C726B6.5050108@sorbs.net> Date: Fri, 19 Feb 2016 15:29:10 +0100 From: Michelle Sullivan User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X; en-US; rv:1.8.1.24) Gecko/20100301 SeaMonkey/1.1.19 To: Steven Hartland Cc: freebsd-fs@freebsd.org Subject: Re: Zfs heavy io writing | zfskern txg_thread_enter References: <56C70365.1050800@sorbs.net> <56C70AB0.6050400@multiplay.co.uk> <56C71350.3020602@sorbs.net> <56C71DD8.3040505@multiplay.co.uk> <56C72155.10500@sorbs.net> <56C7222F.2090009@sorbs.net> In-reply-to: <56C7222F.2090009@sorbs.net> X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 19 Feb 2016 14:29:14 -0000 Michelle Sullivan wrote: > Links to the grabs inline because the list seems to have converted > html->plaintext > > Michelle Sullivan wrote: > >> Steven Hartland wrote: >> >> >>> On 19/02/2016 13:06, Michelle Sullivan wrote: >>> >>> >>>> Steven Hartland 
wrote: >>>> >>>> >>>>> On 19/02/2016 11:58, Michelle Sullivan wrote: >>>>> >>>>> >>>>>> Niccol Corvini wrote: >>>>>> >>>>>> >>>>>>> Hi, first time here! >>>>>>> We are having a problem with a server running FreeBsd 9.1 with ZFS >>>>>>> on a >>>>>>> >>>>>>> >>>>>>> >>>>>> You should upgrade to a supported version first... 9.3 would probably >>>>>> be the best (rather than 10.x) as it's still supported and uses the >>>>>> same >>>>>> ABI (ie you should need to reinstall all your ports/packages - though >>>>>> you should because it sometimes breaks things - at least check for >>>>>> broken things :) .) >>>>>> >>>>>> If you're not familiar "freebsd-update -r 9.3-RELEASE upgrade" will >>>>>> help >>>>>> you do it without too many problems. >>>>>> >>>>>> >>>>> 9.3 is still ancient, and while "supported" its not in active >>>>> development, and to be blunt no one will be interested in helping to >>>>> diagnose any actual issue on something so old. >>>>> >>>>> >>>> So supported is not really supported... Is that an official position? >>>> >>>> >>> Supported for 9.x, which is a "Legacy Release", I would say is >>> supported for security and other critical issues only, which is the >>> same for pretty much every project / product out there. >>> >>> >> Oh, it's legacy is it? Funny you should say that ... "Production = >> Legacy" then...? >> >> >> > ----> http://flashback.sorbs.net/packages/freebsd-page.jpg > And yes to reply to my own message before you do... I know what it says on the releases page ... but that's not what it says to anyone looking for information about what to download and install, or what is supported or not... I mean seriously if there is no support for 9.x why are you telling people it is an option for a new installation because its a current production version...? Why is it even listed on the support page as a 'currently supported' version? 
I mean it became obvious to some of us even though I never saw an official announcement that 9.x is being retired as quickly as possible therefore there will be no x.4 release unlike other releases... but at least update the documentation to say 9.x is no longer a production version and nothing except critical security issues will be fixed on the support page... because *that* is what you just indicated... Put it in the release documentation, announce it, so at least everyone will know for certain there is no point in asking for help.... Michelle -- Michelle Sullivan http://www.mhix.org/ From owner-freebsd-fs@freebsd.org Fri Feb 19 17:32:59 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 8AF99AAD4C9 for ; Fri, 19 Feb 2016 17:32:59 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 7B1381A3B for ; Fri, 19 Feb 2016 17:32:59 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.15.2/8.15.2) with ESMTP id u1JHWxa1086602 for ; Fri, 19 Feb 2016 17:32:59 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 206634] "panic: ncllock1" from FreeBSD client after NFSv4 server was taken offline and brought back to life; lots of spam about "protocol prob err=10006" Date: Fri, 19 Feb 2016 17:32:59 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 11.0-CURRENT X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me 
X-Bugzilla-Who: lev@FreeBSD.org X-Bugzilla-Status: New X-Bugzilla-Resolution: X-Bugzilla-Priority: --- X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: cc Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: quoted-printable X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 19 Feb 2016 17:32:59 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=206634 Lev A. Serebryakov changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |lev@FreeBSD.org --- Comment #3 from Lev A. Serebryakov --- I've got the same panic on a FreeBSD build from r295696, in VirtualBox. My NFS server was on-line, and the panic occurred when sources were updated from my local SVN mirror via NFS (file:// protocol). I have a core. bt is:
#0 doadump (textdump=0) at pcpu.h:221
#1 0xffffffff802f230b in db_dump (dummy=, dummy2=false, dummy3=0, dummy4=0x0) at /data/src/sys/ddb/db_command.c:533
#2 0xffffffff802f20fe in db_command (cmd_table=0x0) at /data/src/sys/ddb/db_command.c:440
#3 0xffffffff802f1e94 in db_command_loop () at /data/src/sys/ddb/db_command.c:493
#4 0xffffffff802f492b in db_trap (type=, code=0) at /data/src/sys/ddb/db_main.c:251
#5 0xffffffff804a7c43 in kdb_trap (type=3, code=0, tf=) at /data/src/sys/kern/subr_kdb.c:654
#6 0xffffffff806e0510 in trap (frame=0xfffffe023b618210) at /data/src/sys/amd64/amd64/trap.c:556
#7 0xffffffff806c21a7 in calltrap () at /data/src/sys/amd64/amd64/exception.S:234
#8 0xffffffff804a732b in kdb_enter (why=0xffffffff80795899 "panic", msg=0x400
) at cpufunc.h:63
#9 0xffffffff8046c06f in vpanic (fmt=, ap=) at /data/src/sys/kern/kern_shutdown.c:750
#10 0xffffffff8046c0d3 in panic (fmt=0xffffffff80b29c40 "\004") at /data/src/sys/kern/kern_shutdown.c:688
#11 0xffffffff803aeb37 in nfs_lock1 (ap=) at /data/src/sys/fs/nfsclient/nfs_clvnops.c:3372
#12 0xffffffff8073e020 in VOP_LOCK1_APV (vop=, a=<value optimized out>) at vnode_if.c:2083
#13 0xffffffff8052887a in _vn_lock (vp=0xfffff800ba28bb10, flags=, file=0xffffffff807a7901 "/data/src/sys/kern/vfs_subr.c", line=2476) at vnode_if.h:859
#14 0xffffffff80519953 in vget (vp=0xfffff800ba28bb10, flags=279040, td=0x0) at /data/src/sys/kern/vfs_subr.c:2476
#15 0xffffffff8050c51c in vfs_hash_get (mp=0xfffff800077e6990, hash=790540428, flags=, td=0x0, vpp=0xfffffe023b618568, fn=0xffffffff803b66e0 ) at /data/src/sys/kern/vfs_hash.c:89
#16 0xffffffff803b6f81 in nfscl_ngetreopen (mntp=0xfffff800077e6990, fhp=, fhsize=, td=0x0, npp=0xfffffe023b618670) at /data/src/sys/fs/nfsclient/nfs_clport.c:347
#17 0xffffffff80392bcd in nfscl_hasexpired (clp=0xfffffe00039d2000, clidrev=, p=0x0) at /data/src/sys/fs/nfsclient/nfs_clstate.c:4052
#18 0xffffffff803a0d0a in nfsrpc_read (vp=0xfffff800937fd760, uiop=0xfffffe023b6189d0, cred=0xfffff800b3286600, p=0x0, nap=0xfffffe023b618890, attrflagp=0xfffffe023b61895c) at /data/src/sys/fs/nfsclient/nfs_clrpcops.c:1381
#19 0xffffffff803af9fa in ncl_readrpc (vp=0xfffff800937fd760, uiop=0xfffffe023b6189d0, cred=0xfffff800b3195d00) at /data/src/sys/fs/nfsclient/nfs_clvnops.c:1381
#20 0xffffffff803ba638 in ncl_doio (vp=0xfffff800937fd760, bp=0xfffffe01f168aeb0, cr=0xfffffe023b617e40, td=, called_from_strategy=-510) at /data/src/sys/fs/nfsclient/nfs_clbio.c:1610
#21 0xffffffff803bc694 in nfssvc_iod (instance=) at /data/src/sys/fs/nfsclient/nfs_clnfsiod.c:302
#22 0xffffffff80436544 in fork_exit (callout=0xffffffff803bc420 , arg=0xffffffff80b28ec0, 
frame=0xfffffe023b618ac0) at /data/src/sys/kern/kern_fork.c:1034
#23 0xffffffff806c266e in fork_trampoline () at /data/src/sys/amd64/amd64/exception.S:609
-- You are receiving this mail because: You are the assignee for the bug. From owner-freebsd-fs@freebsd.org Fri Feb 19 17:37:22 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id DA198AAD5BA for ; Fri, 19 Feb 2016 17:37:22 +0000 (UTC) (envelope-from lev@FreeBSD.org) Received: from onlyone.friendlyhosting.spb.ru (onlyone.friendlyhosting.spb.ru [IPv6:2a01:4f8:131:60a2::2]) by mx1.freebsd.org (Postfix) with ESMTP id A99FE1CCF for ; Fri, 19 Feb 2016 17:37:22 +0000 (UTC) (envelope-from lev@FreeBSD.org) Received: from [127.0.0.1] (unknown [89.113.128.32]) (Authenticated sender: lev@serebryakov.spb.ru) by onlyone.friendlyhosting.spb.ru (Postfix) with ESMTPSA id 4521CA7FC for ; Fri, 19 Feb 2016 20:37:19 +0300 (MSK) Reply-To: lev@FreeBSD.org To: "freebsd-fs@freebsd.org" From: Lev Serebryakov Subject: Panic in NFS client on CURRENT Organization: FreeBSD Message-ID: <56C752CD.4090203@FreeBSD.org> Date: Fri, 19 Feb 2016 20:37:17 +0300 User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:38.0) Gecko/20100101 Thunderbird/38.6.0 MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 19 Feb 2016 17:37:22 -0000 -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 I've hit panic in NFSv4 client on fresh CURRENT, which looks like bug 206634. I have core saved and could provide additional information. 
- -- // Lev Serebryakov -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iQJ8BAEBCgBmBQJWx1LNXxSAAAAAAC4AKGlzc3Vlci1mcHJAbm90YXRpb25zLm9w ZW5wZ3AuZmlmdGhob3JzZW1hbi5uZXRGOTZEMUNBMEI1RjQzMThCNjc0QjMzMEFF QUIwM0M1OEJGREM0NzhGAAoJEOqwPFi/3EeP5RQP+gIqYjxZKpUk+sxAoE1Tds1K BOkBPeFPj38FuaWUFN3LmM2HH3DZpjjScHa3WUeNov7KNduBTnBB0QJbYUOmXThg 9ggOExQR+Cjci1YmBBa/8m8Naeik0wza0XmdbUlzx/qMyfpEMPmITZqq9X+/dcbb 7xxpwUTveR3mZc9z7yS6qFDN+oqUMILFF08jq3B+715My0f2urBSBNqP4CJ5lc/a sTNo2jTZG9PWug7blVIX03cX/hwVz9Wa3io+p5XNF+8ZGq2b+86sSPYf/6FjPWh3 g/pMfX+cVYiOVsyWewASnRKse4S2my5gZ4OTbtrnMjOhFPhDbacM65XBdKSBIQDK IXEsod+YvyMB+cBjvReyErLV0KoYkY/u5TzfaVuSAvKZ+MwuFSo57m6u2XSG/Z5f XBvcTElIvFf78ZAmqJ5heyJSgONUKEUDo17w74Um6l3d0gP2QZDVrx5QF6hLb7Tx ssepvE9DIEOijImBkn78QbxRaJzqXLYckxp1LPM0XtspbjoZImfLrCm8jE3RQJFs cIcNCE9iWR0DD1fgSl7C/eifljfkSxF8hKf601ZYtweXZ89OWCh/HP3O8TfS0Fj/ 9vOHdtC/AtCMQVHU1kdRyvjRCrpjVQV5WweCf1q641DSXuHkSIhWz2rfueX4I187 BUokRbH5pntYoubADtmN =y6WT -----END PGP SIGNATURE----- From owner-freebsd-fs@freebsd.org Fri Feb 19 18:08:48 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id AA9ADAAE2B9 for ; Fri, 19 Feb 2016 18:08:48 +0000 (UTC) (envelope-from mwlucas@mail.michaelwlucas.com) Received: from mailman.ysv.freebsd.org (mailman.ysv.freebsd.org [IPv6:2001:1900:2254:206a::50:5]) by mx1.freebsd.org (Postfix) with ESMTP id 9696E1142 for ; Fri, 19 Feb 2016 18:08:48 +0000 (UTC) (envelope-from mwlucas@mail.michaelwlucas.com) Received: by mailman.ysv.freebsd.org (Postfix) id 93CBDAAE2B8; Fri, 19 Feb 2016 18:08:48 +0000 (UTC) Delivered-To: fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 9364CAAE2B7 for ; Fri, 19 Feb 2016 18:08:48 +0000 (UTC) (envelope-from mwlucas@mail.michaelwlucas.com) Received: from mail.michaelwlucas.com 
(mail.michaelwlucas.com [104.236.197.233]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 3B9651141 for ; Fri, 19 Feb 2016 18:08:47 +0000 (UTC) (envelope-from mwlucas@mail.michaelwlucas.com) Received: from mail.michaelwlucas.com (localhost [127.0.0.1]) by mail.michaelwlucas.com (8.15.2/8.15.2) with ESMTP id u1JI7H5e046977 for ; Fri, 19 Feb 2016 13:07:21 -0500 (EST) (envelope-from mwlucas@mail.michaelwlucas.com) Received: (from mwlucas@localhost) by mail.michaelwlucas.com (8.15.2/8.15.2/Submit) id u1JI7Gc6046976 for fs@freebsd.org; Fri, 19 Feb 2016 13:07:16 -0500 (EST) (envelope-from mwlucas) Date: Fri, 19 Feb 2016 13:07:16 -0500 From: "Michael W. Lucas" To: fs@freebsd.org Subject: dtracing ZFS on FreeBSD Message-ID: <20160219180716.GA46881@mail.michaelwlucas.com> MIME-Version: 1.0 Content-Type: text/plain; charset=unknown-8bit Content-Disposition: inline Content-Transfer-Encoding: 8bit User-Agent: Mutt/1.5.24 (2015-08-30) X-Spam-Status: No, score=0.0 required=5.0 tests=UNPARSEABLE_RELAY, URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.1 X-Spam-Checker-Version: SpamAssassin 3.4.1 (2015-04-28) on mail.michaelwlucas.com X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (mail.michaelwlucas.com [127.0.0.1]); Fri, 19 Feb 2016 13:07:26 -0500 (EST) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 19 Feb 2016 18:08:48 -0000 Hi, I'm trying to get Adam Leventhal's dtrace script for measuring latency and number of operations on a pool (http://dtrace.org/blogs/ahl/2014/08/31/openzfs-tuning/) working. Asking for guidance here because it's a filesystem thing and the dtrace list is dead. 
The script is: #pragma D option aggpack #pragma D option quiet fbt::vdev_queue_max_async_writes:entry { self->spa = args[0]; } fbt::vdev_queue_max_async_writes:return /self->spa && self->spa->spa_name == $$1/ { @ = lquantize(args[1], 0, 30, 1); } tick-1s { printa(@); clear(@); } fbt::vdev_queue_max_async_writes:return /self->spa/ { self->spa = 0; } When I run it: # dtrace -s q.d zroot most lines look like this: min .--------------------------------. max | count < 0 : : >= 30 | 0 dtrace: 15857 dynamic variable drops with non-empty dirty list My reading of dtrace discussions says I'm losing data here. I suspect this is the data I'm actually interested in. Sometimes, the scale gets a marker on it. Pardon the weird characters: min .--------------------------------. max | count < 0 : █ : >= 30 | 3438 Or there's min .--------------------------------. max | count < 0 : ▁▂▃▅ : >= 30 | 19172 Any thoughts on why? Thanks, ==ml -- Michael W. Lucas - mwlucas@michaelwlucas.com, Twitter @mwlauthor http://www.MichaelWLucas.com/, http://blather.MichaelWLucas.com/ From owner-freebsd-fs@freebsd.org Fri Feb 19 18:51:22 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 564C1AAD7A1 for ; Fri, 19 Feb 2016 18:51:22 +0000 (UTC) (envelope-from thomasrcurry@gmail.com) Received: from mailman.ysv.freebsd.org (mailman.ysv.freebsd.org [IPv6:2001:1900:2254:206a::50:5]) by mx1.freebsd.org (Postfix) with ESMTP id 385551146 for ; Fri, 19 Feb 2016 18:51:22 +0000 (UTC) (envelope-from thomasrcurry@gmail.com) Received: by mailman.ysv.freebsd.org (Postfix) id 355A6AAD7A0; Fri, 19 Feb 2016 18:51:22 +0000 (UTC) Delivered-To: fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 34EBCAAD79B for ; Fri, 19 Feb 2016 18:51:22 +0000 (UTC) (envelope-from 
thomasrcurry@gmail.com) Received: from mail-ig0-x22c.google.com (mail-ig0-x22c.google.com [IPv6:2607:f8b0:4001:c05::22c]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 014161145 for ; Fri, 19 Feb 2016 18:51:22 +0000 (UTC) (envelope-from thomasrcurry@gmail.com) Received: by mail-ig0-x22c.google.com with SMTP id y8so45483999igp.0 for ; Fri, 19 Feb 2016 10:51:21 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc; bh=z3q4CZhrPdAZeITCCBjayJj38Sxv+zrqeit9GXCrx8M=; b=baA+/jlWdUyJU06XDs4dGfcjQ2E/OkQx55hOJW2LMGatPN6Dp/s0KFvYFnNrnGEnQV ua7Jk+dcJ1zP7PkASZhl+taH6KrVPGQlVtP/Eub/ENh/jkofZ0gGO+FHOpco3fmob9DW ad78WZit7adrsLj37ApLhvK0hVUudP+VtNgkZNTuNSCr/4HdZsaIjvEtz8LI6SN6zo3W EfP7AnKGSCGh3KM7MoTnRnUdXYHWL1wkRCnZXliERK+rCHTMr588zOGWcz81lwxjnC1p nA4u7C0FjXyas1oylbydW4G4/0qsxaXPAaLxOkcyCFrk7hBz5OaD544yC3WJorQmSoEy Mr/g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:mime-version:in-reply-to:references:date :message-id:subject:from:to:cc; bh=z3q4CZhrPdAZeITCCBjayJj38Sxv+zrqeit9GXCrx8M=; b=Ad35radx02RVhUwYo43TOoN7WmpuOV7JNaTO7tHjDOBYJQCpPZIYh+f/nOs5d8EGzP +tPeMFRTZm2nfJc6dwY9yRc2IBY/47/u43Tontu5Jj+S2x2GXZKMO9YbJiAs8zXz50Aj Dm7q6jS0nD+ElFnfbXAIXYAd3rS/adbtfvT9WOTjds6t5cxSvDxBttZ4NPi8pI9d08Ji oJo8Hl0ZwzYtMl//DPLaj4YIJgmLDPyhpGg9yTs/koMbJDTOZNvW7NFZzlI/ctDYpy1E yz7NAQO++YEmUViCCkePwG3HXiJGgLZqn8kPfhCWIRhxOS55JGj6/00RxCA8zf0JjMSl oDBQ== X-Gm-Message-State: AG10YOTj6wFgpy/RPaxWL2AHOPcX+Cipu7vv894CQOUHlqWQ1PPHRE9FYDYS4Rn3v9Kx06N0w7lAGSDhRXJ71A== MIME-Version: 1.0 X-Received: by 10.50.137.65 with SMTP id qg1mr10532416igb.28.1455907881339; Fri, 19 Feb 2016 10:51:21 -0800 (PST) Received: by 10.107.4.71 with HTTP; Fri, 19 Feb 2016 10:51:21 -0800 (PST) In-Reply-To: 
<20160219180716.GA46881@mail.michaelwlucas.com> References: <20160219180716.GA46881@mail.michaelwlucas.com> Date: Fri, 19 Feb 2016 13:51:21 -0500 Message-ID: Subject: Re: dtracing ZFS on FreeBSD From: Tom Curry To: "Michael W. Lucas" Cc: fs@freebsd.org Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.20 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 19 Feb 2016 18:51:22 -0000 On Fri, Feb 19, 2016 at 1:07 PM, Michael W. Lucas wrote: > Hi, > > I'm trying to get Adam Leventhal's dtrace script for measuring latency > and number of operations on a pool > (http://dtrace.org/blogs/ahl/2014/08/31/openzfs-tuning/). Asking for > guidance here because it's a filesystem thing and the dtrace list is > dead. > > The script is: > > #pragma D option aggpack > #pragma D option quiet > > fbt::vdev_queue_max_async_writes:entry > { > self->spa = args[0]; > } > fbt::vdev_queue_max_async_writes:return > /self->spa && self->spa->spa_name == $$1/ > { > @ = lquantize(args[1], 0, 30, 1); > } > > tick-1s > { > printa(@); > clear(@); > } > > fbt::vdev_queue_max_async_writes:return > /self->spa/ > { > self->spa = 0; > } > > When I run it: > > # dtrace -s q.d zroot > > most lines look like this: > > min .--------------------------------. max | count > < 0 : : >= 30 | 0 > dtrace: 15857 dynamic variable drops with non-empty dirty list > > My reading of dtrace discussions says I'm losing data here. I suspect > this is the data I'm actually interested in. > > Sometimes, the scale gets a marker on it. Pardon the weird characters: > > min .--------------------------------. max | count > < 0 : █ : >= 30 | 3438 > > > Or there's > > min .--------------------------------. max | count > < 0 : ▁▂▃▅ : >= 30 | 19172 > > Any thoughts on why? > > Thanks, > ==ml > > -- > Michael W. Lucas - mwlucas@michaelwlucas.com, Twitter @mwlauthor > http://www.MichaelWLucas.com/, http://blather.MichaelWLucas.com/ > > > For the variable drops you could try increasing the buffer size, I know I ran into this when I was tracing something very noisy and it definitely helped. #pragma D option bufsize=1m http://dtrace.org/guide/chp-buf.html From owner-freebsd-fs@freebsd.org Fri Feb 19 19:09:13 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id B5E0EAAE00C for ; Fri, 19 Feb 2016 19:09:13 +0000 (UTC) (envelope-from mwlucas@mail.michaelwlucas.com) Received: from mailman.ysv.freebsd.org (mailman.ysv.freebsd.org [IPv6:2001:1900:2254:206a::50:5]) by mx1.freebsd.org (Postfix) with ESMTP id A0B491CC9 for ; Fri, 19 Feb 2016 19:09:13 +0000 (UTC) (envelope-from mwlucas@mail.michaelwlucas.com) Received: by mailman.ysv.freebsd.org (Postfix) id 9DCBFAAE00B; Fri, 19 Feb 2016 19:09:13 +0000 (UTC) Delivered-To: fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 9D6E2AAE00A for ; Fri, 19 Feb 2016 19:09:13 +0000 (UTC) (envelope-from mwlucas@mail.michaelwlucas.com) Received: from mail.michaelwlucas.com (mail.michaelwlucas.com [104.236.197.233]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id ADBFD1CC5 for ; Fri, 19 Feb 2016 19:09:12 +0000 (UTC) (envelope-from mwlucas@mail.michaelwlucas.com) Received: from mail.michaelwlucas.com (localhost [127.0.0.1]) by mail.michaelwlucas.com (8.15.2/8.15.2) with ESMTP id u1JJ7rp1047520; Fri, 19 Feb 2016 14:07:54 -0500 (EST) (envelope-from 
mwlucas@mail.michaelwlucas.com) Received: (from mwlucas@localhost) by mail.michaelwlucas.com (8.15.2/8.15.2/Submit) id u1JJ7rC4047519; Fri, 19 Feb 2016 14:07:53 -0500 (EST) (envelope-from mwlucas) Date: Fri, 19 Feb 2016 14:07:53 -0500 From: "Michael W. Lucas" To: Tom Curry Cc: fs@freebsd.org Subject: Re: dtracing ZFS on FreeBSD Message-ID: <20160219190753.GA47502@mail.michaelwlucas.com> References: <20160219180716.GA46881@mail.michaelwlucas.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.5.24 (2015-08-30) X-Spam-Status: No, score=0.0 required=5.0 tests=UNPARSEABLE_RELAY, URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.1 X-Spam-Checker-Version: SpamAssassin 3.4.1 (2015-04-28) on mail.michaelwlucas.com X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (mail.michaelwlucas.com [127.0.0.1]); Fri, 19 Feb 2016 14:07:55 -0500 (EST) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 19 Feb 2016 19:09:13 -0000 On Fri, Feb 19, 2016 at 01:51:21PM -0500, Tom Curry wrote: > For the variable drops you could try increasing the buffer size, I know > I ran into this when I was tracing something very noisy and it > definitely helped. > #pragma D option bufsize=1m Cranking it up to the limit of 16m helps, thank you. But the problem still appears during load, just not as often. Any other suggestions, anyone? ==ml -- Michael W. 
Lucas - mwlucas@michaelwlucas.com, Twitter @mwlauthor http://www.MichaelWLucas.com/, http://blather.MichaelWLucas.com/ From owner-freebsd-fs@freebsd.org Fri Feb 19 19:14:25 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 32A3BAAE235 for ; Fri, 19 Feb 2016 19:14:25 +0000 (UTC) (envelope-from thomasrcurry@gmail.com) Received: from mailman.ysv.freebsd.org (mailman.ysv.freebsd.org [IPv6:2001:1900:2254:206a::50:5]) by mx1.freebsd.org (Postfix) with ESMTP id 147AE113B for ; Fri, 19 Feb 2016 19:14:25 +0000 (UTC) (envelope-from thomasrcurry@gmail.com) Received: by mailman.ysv.freebsd.org (Postfix) id 11709AAE234; Fri, 19 Feb 2016 19:14:25 +0000 (UTC) Delivered-To: fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id ECDEAAAE233 for ; Fri, 19 Feb 2016 19:14:24 +0000 (UTC) (envelope-from thomasrcurry@gmail.com) Received: from mail-ig0-x231.google.com (mail-ig0-x231.google.com [IPv6:2607:f8b0:4001:c05::231]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id B9328113A for ; Fri, 19 Feb 2016 19:14:24 +0000 (UTC) (envelope-from thomasrcurry@gmail.com) Received: by mail-ig0-x231.google.com with SMTP id 5so43278962igt.0 for ; Fri, 19 Feb 2016 11:14:24 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc; bh=BUa0WWs6dqta9E1JQvZYh30gvaT5By61zdVCpM78Ms4=; b=aZjhYcLtS+FW0CdIyIDHCAZMtUPsFmQUaLaFTduj4CilbOs3SZD6zuzEVrpVjAOdGm y/D2BoDjDOYrqOGSdFjG/FmmA5/0H5N8j09TIAwjjJoUPmRT7nIhKI/zmcsoedo+rP0l 7xuFFgODhLLn/kQXptdzZrG1s0TcI1dKyWCH3D59zlqZocdqqlx+XQ+qjcA013x/VBkD 
i75c809cnVjAiQG3fnNU+5qTAI5Xg3BNR5is4cgr2U/m3IdsxqGc17K6ZJdRbryfAkZG 5g+5sam0FgxHBVZVrfjOsNH5vwPOOBkEQovXNYEFhwl4KB6fktNuV3L86Jp8HSeySqdm sExw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:mime-version:in-reply-to:references:date :message-id:subject:from:to:cc; bh=BUa0WWs6dqta9E1JQvZYh30gvaT5By61zdVCpM78Ms4=; b=aipcoyzegJp5r32UT8gTVwHD70lO0Nj/OLJs0JJBrGus+NOFmMFpd4v4JO/HznaYs/ e/9G8znvLc5dHlF6o1FBJHaBuUx55GZINxvQ8LOVGfZ3rOG9MtT105smO9AaM6WSR8Ct Y80PdqHIyTuSrMeEYMnlkiKhm9pHD6JxiegJ0wl9R/l07sxPMSDvB6YYGQsTMAFph9Ke KssXTQpQN8927SNb4DwmcX1Owgr5rqeewhJQz1zXJP6gXeeQ1P8Ljw8Pav08RMwvybiJ K7iwM4KVRyDkEbVxwxEvdOkRh28tszYyJjdEE9cm3vc1EMVoajnq3LjaRRKeMYe3nP18 lz8Q== X-Gm-Message-State: AG10YOQzEhLsgT7oOyPK6EVdPoH2sSegqTRPf+kMwmgBdF53sDgDOMYPjHikczy/+ZL/P3xE172J5fys4JB9NQ== MIME-Version: 1.0 X-Received: by 10.50.137.65 with SMTP id qg1mr10629684igb.28.1455909264064; Fri, 19 Feb 2016 11:14:24 -0800 (PST) Received: by 10.107.4.71 with HTTP; Fri, 19 Feb 2016 11:14:24 -0800 (PST) In-Reply-To: <20160219190753.GA47502@mail.michaelwlucas.com> References: <20160219180716.GA46881@mail.michaelwlucas.com> <20160219190753.GA47502@mail.michaelwlucas.com> Date: Fri, 19 Feb 2016 14:14:24 -0500 Message-ID: Subject: Re: dtracing ZFS on FreeBSD From: Tom Curry To: "Michael W. Lucas" Cc: fs@freebsd.org Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.20 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 19 Feb 2016 19:14:25 -0000 On Fri, Feb 19, 2016 at 2:07 PM, Michael W. Lucas wrote: > On Fri, Feb 19, 2016 at 01:51:21PM -0500, Tom Curry wrote: > > For the variable drops you could try increasing the buffer size, I > know > > I ran into this when I was tracing something very noisy and it > > definitely helped. 
> > #pragma D option bufsize=1m > > Cranking it up to the limit of 16m helps, thank you. > > But the problem still appears during load, just not as often. > > Any other suggestions, anyone? > > One other thing to try in concert with the larger buffer, #pragma D option bufpolicy=ring In theory I would expect it not to drop, but the window of tracing would be smaller. From owner-freebsd-fs@freebsd.org Fri Feb 19 19:35:24 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 6D8B7AAEBDC for ; Fri, 19 Feb 2016 19:35:24 +0000 (UTC) (envelope-from mwlucas@mail.michaelwlucas.com) Received: from mailman.ysv.freebsd.org (mailman.ysv.freebsd.org [IPv6:2001:1900:2254:206a::50:5]) by mx1.freebsd.org (Postfix) with ESMTP id 571ED1444 for ; Fri, 19 Feb 2016 19:35:24 +0000 (UTC) (envelope-from mwlucas@mail.michaelwlucas.com) Received: by mailman.ysv.freebsd.org (Postfix) id 5606CAAEBDA; Fri, 19 Feb 2016 19:35:24 +0000 (UTC) Delivered-To: fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 54C03AAEBD8 for ; Fri, 19 Feb 2016 19:35:24 +0000 (UTC) (envelope-from mwlucas@mail.michaelwlucas.com) Received: from mail.michaelwlucas.com (mail.michaelwlucas.com [104.236.197.233]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 109531443 for ; Fri, 19 Feb 2016 19:35:23 +0000 (UTC) (envelope-from mwlucas@mail.michaelwlucas.com) Received: from mail.michaelwlucas.com (localhost [127.0.0.1]) by mail.michaelwlucas.com (8.15.2/8.15.2) with ESMTP id u1JJY6jN047669; Fri, 19 Feb 2016 14:34:07 -0500 (EST) (envelope-from mwlucas@mail.michaelwlucas.com) Received: (from mwlucas@localhost) by mail.michaelwlucas.com (8.15.2/8.15.2/Submit) id 
u1JJY6cm047668; Fri, 19 Feb 2016 14:34:06 -0500 (EST) (envelope-from mwlucas) Date: Fri, 19 Feb 2016 14:34:05 -0500 From: "Michael W. Lucas" To: Tom Curry Cc: fs@freebsd.org Subject: Re: dtracing ZFS on FreeBSD Message-ID: <20160219193405.GA47628@mail.michaelwlucas.com> References: <20160219180716.GA46881@mail.michaelwlucas.com> <20160219190753.GA47502@mail.michaelwlucas.com> MIME-Version: 1.0 Content-Type: text/plain; charset=unknown-8bit Content-Disposition: inline Content-Transfer-Encoding: 8bit In-Reply-To: User-Agent: Mutt/1.5.24 (2015-08-30) X-Spam-Status: No, score=0.0 required=5.0 tests=UNPARSEABLE_RELAY, URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.1 X-Spam-Checker-Version: SpamAssassin 3.4.1 (2015-04-28) on mail.michaelwlucas.com X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (mail.michaelwlucas.com [127.0.0.1]); Fri, 19 Feb 2016 14:34:08 -0500 (EST) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 19 Feb 2016 19:35:24 -0000 On Fri, Feb 19, 2016 at 02:14:24PM -0500, Tom Curry wrote: > One other thing to try in concert with the larger buffer, > #pragma D option bufpolicy=ring > In theory I would expect it not to drop, but the window of tracing > would be smaller. Still dropping, but thanks. Output is buffered, so I can't say which graphs the drops happened with. (If that's actually what the drop messages tell me, that is...) ==ml -- Michael W. 
Lucas - mwlucas@michaelwlucas.com, Twitter @mwlauthor http://www.MichaelWLucas.com/, http://blather.MichaelWLucas.com/ From owner-freebsd-fs@freebsd.org Fri Feb 19 22:48:01 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 1E030AA9A4B for ; Fri, 19 Feb 2016 22:48:01 +0000 (UTC) (envelope-from n.corvini@gmail.com) Received: from mail-io0-x22e.google.com (mail-io0-x22e.google.com [IPv6:2607:f8b0:4001:c06::22e]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id DC2101FA3 for ; Fri, 19 Feb 2016 22:48:00 +0000 (UTC) (envelope-from n.corvini@gmail.com) Received: by mail-io0-x22e.google.com with SMTP id 9so125109342iom.1 for ; Fri, 19 Feb 2016 14:48:00 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=aU/jNmXNmHkbnNcF9iQ0dhUkkCq7yXN7PaAl0Khj054=; b=yeDWLYB9QovLTAhCDkIJxog9tGBk5VV/XtbPufbEUNs6yfvEKMsVOAGdZ7R+aS4iiI g4aqcCNYvJ2kPut9R6DbJ3Fb9HowWZ+ijF6bLi2GyZRP3AMjJ1BJ7pFBAGOXoUiNJ8LN Ywzghe4If+/Pq82MmjHGEXqSirAMaxeTxUQu/NyT2cImFHyNLNlBPqW3WrVHECLJZ4aT WnNtKE9XCcoXPq+6qgWd+5b49ZrPfDvkTRRL2fivSzcmfBWMTL8sH7tvIyLNOrPRi1Lz B6qZojXDPTwDNHI7xdJyOH4rdwzDvCJhItH9dcoJoIn5Hpczk9uuEJm18/zqCqSq61kz Is2Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:mime-version:in-reply-to:references:date :message-id:subject:from:to:cc:content-type; bh=aU/jNmXNmHkbnNcF9iQ0dhUkkCq7yXN7PaAl0Khj054=; b=ZWvMQi51GV89ha7OGEFcRM4bu++A4gko+MKndKJZshOWry9XGrI32XG3f+/rNfE2ZS bIELq5kEOb7nvKWHum4WgTqV49WzsNRmlmWk7g5b1qsrnOzkicruEtVJugvh86Ft/7Hw gLcWXIT0HPExN7qPrkyNC/vcfIdcyGdKfblYBticXUOK37WtrChGe2h3EMPREumOMkUL 
g97TGTowQH2slV0Ww/Fj+nsSk8wnH38LUymYJLc7bytaiMkUnuSNQE+mPigFtvxbL43s YwQRPpWQoVjSOM7DPKiKA5mDqsAjtrLcoiDzmwRrLJfGlLLDiNYLvnZn9BGKc3P58NA+ yzGw== X-Gm-Message-State: AG10YOQPuLj3knVJnGoIyk4X3CVBWE2rgfqdBrWPPvy0WrKpMTFkBqU0LZAYioE+PT01Eg3Zn9SoDk/xvPdCsg== MIME-Version: 1.0 X-Received: by 10.107.17.32 with SMTP id z32mr19589328ioi.97.1455922080196; Fri, 19 Feb 2016 14:48:00 -0800 (PST) Received: by 10.107.136.166 with HTTP; Fri, 19 Feb 2016 14:48:00 -0800 (PST) Received: by 10.107.136.166 with HTTP; Fri, 19 Feb 2016 14:48:00 -0800 (PST) In-Reply-To: <20160219215033.GA25663@neutralgood.org> References: <56C70365.1050800@sorbs.net> <56C70AB0.6050400@multiplay.co.uk> <56C71350.3020602@sorbs.net> <56C71DD8.3040505@multiplay.co.uk> <56C72155.10500@sorbs.net> <56C7222F.2090009@sorbs.net> <56C726B6.5050108@sorbs.net> <20160219215033.GA25663@neutralgood.org> Date: Fri, 19 Feb 2016 23:48:00 +0100 Message-ID: Subject: Re: Zfs heavy io writing | zfskern txg_thread_enter From: =?UTF-8?Q?Niccol=C3=B2_Corvini?= To: kpneal@pobox.com Cc: Michelle Sullivan , freebsd-fs@freebsd.org Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.20 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 19 Feb 2016 22:48:01 -0000 My intention wasn't to start a flame war; we were just looking for some answers to the problem. The upgrade of the system is scheduled as soon as possible, but things like that take some time. Anyhow, thanks for the reply. Niccolò Corvini On 19 Feb 2016 10:50 PM, wrote: > On Fri, Feb 19, 2016 at 03:29:10PM +0100, Michelle Sullivan wrote: > > > > *snip* > > I'm willing to bet that support for 9.x would be better if there were > dedicated employees whose job was to support 9.x. But those employees > cost money. 
> > How about you balance out some of your complaining with some money to > hire people to fix the problems you seem to enjoy complaining about? > -- > "A method for inducing cats to exercise consists of directing a beam of > invisible light produced by a hand-held laser apparatus onto the floor ... > in the vicinity of the cat, then moving the laser ... in an irregular way > fascinating to cats,..." -- US patent 5443036, "Method of exercising a cat" > _______________________________________________ > freebsd-fs@freebsd.org mailing list > https://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@freebsd.org Fri Feb 19 22:50:31 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 1DE01AA9C4E for ; Fri, 19 Feb 2016 22:50:31 +0000 (UTC) (envelope-from truckman@FreeBSD.org) Received: from gw.catspoiler.org (unknown [IPv6:2602:304:b010:ef20::f2]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "gw.catspoiler.org", Issuer "gw.catspoiler.org" (not verified)) by mx1.freebsd.org (Postfix) with ESMTPS id 0639911CB for ; Fri, 19 Feb 2016 22:50:31 +0000 (UTC) (envelope-from truckman@FreeBSD.org) Received: from FreeBSD.org (mousie.catspoiler.org [192.168.101.2]) by gw.catspoiler.org (8.15.2/8.15.2) with ESMTP id u1JMoLjQ079393; Fri, 19 Feb 2016 14:50:25 -0800 (PST) (envelope-from truckman@FreeBSD.org) Message-Id: <201602192250.u1JMoLjQ079393@gw.catspoiler.org> Date: Fri, 19 Feb 2016 14:50:21 -0800 (PST) From: Don Lewis Subject: Re: Zfs heavy io writing | zfskern txg_thread_enter To: n.corvini@gmail.com cc: freebsd-fs@freebsd.org In-Reply-To: MIME-Version: 1.0 Content-Type: TEXT/plain; charset=iso-8859-1 Content-Transfer-Encoding: 8BIT X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 
Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 19 Feb 2016 22:50:31 -0000

On 19 Feb, Niccolò Corvini wrote:
> Hi, first time here!
> We are having a problem with a server running FreeBSD 9.1 with ZFS on a
> single SATA drive. Since a few days ago, in the morning the system becomes
> really slow due to really heavy I/O writes. We investigated and we think
> it might start at night, maybe correlated to crondaily (standard), but we
> are not sure. After a few hours the situation returns to normal.
> Any help is much appreciated.
> The machine is an Intel Xeon E5-2620 with 36GB of RAM; the HDD is 2TB and
> is half full.

The only way that you should get a lot of write traffic during the daily
periodic runs is if atime updates are enabled. I don't know what the
default was for FreeBSD 9.1, but recent versions of FreeBSD disable
atime updates on ZFS everywhere except /var/mail. What does

	zfs get -o all atime

say?

The other thing I was going to ask is how full the disk is. ZFS
performance degrades quite a bit when the disk gets close to full. At 50%
of capacity, you shouldn't be running into that problem.
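[Archive editor's note: Don's advice maps directly onto zfs(8) commands. A minimal, untested sketch follows; the pool name "tank" is a placeholder, not from this thread, and the commands need an imported pool and root privileges.]

```sh
# Show the atime property for every dataset in the pool, recursively:
zfs get -r atime tank

# If atime is "on", the nightly periodic(8) find(1) runs dirty metadata
# for every file they stat; disabling it stops that write traffic.
# Child datasets inherit the setting unless they override it:
zfs set atime=off tank
```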
From owner-freebsd-fs@freebsd.org Sat Feb 20 00:06:56 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 1DB43AAE708 for ; Sat, 20 Feb 2016 00:06:56 +0000 (UTC) (envelope-from michelle@sorbs.net) Received: from hades.sorbs.net (mail.sorbs.net [67.231.146.200]) by mx1.freebsd.org (Postfix) with ESMTP id 125AD39E for ; Sat, 20 Feb 2016 00:06:55 +0000 (UTC) (envelope-from michelle@sorbs.net) MIME-version: 1.0 Content-transfer-encoding: 7BIT Content-type: text/plain; CHARSET=US-ASCII Received: from isux.com (firewall.isux.com [213.165.190.213]) by hades.sorbs.net (Oracle Communications Messaging Server 7.0.5.29.0 64bit (built Jul 9 2013)) with ESMTPSA id <0O2T00FBEKN79Q00@hades.sorbs.net> for freebsd-fs@freebsd.org; Fri, 19 Feb 2016 16:13:58 -0800 (PST) Message-id: <56C7AE1A.8020708@sorbs.net> Date: Sat, 20 Feb 2016 01:06:50 +0100 From: Michelle Sullivan User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X; en-US; rv:1.8.1.24) Gecko/20100301 SeaMonkey/1.1.19 To: kpneal@pobox.com Cc: freebsd-fs@freebsd.org Subject: Re: Zfs heavy io writing | zfskern txg_thread_enter References: <56C70365.1050800@sorbs.net> <56C70AB0.6050400@multiplay.co.uk> <56C71350.3020602@sorbs.net> <56C71DD8.3040505@multiplay.co.uk> <56C72155.10500@sorbs.net> <56C7222F.2090009@sorbs.net> <56C726B6.5050108@sorbs.net> <20160219215033.GA25663@neutralgood.org> In-reply-to: <20160219215033.GA25663@neutralgood.org> X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 20 Feb 2016 00:06:56 -0000 kpneal@pobox.com wrote: > On Fri, Feb 19, 2016 at 03:29:10PM +0100, Michelle Sullivan wrote: > > > *snip* > > I'm willing to bet that support for 9.x would be better if there were > dedicated employees whose job was to 
support 9.x. But those employees > cost money. > > How about you balance out some of your complaining with some money to > hire people to fix the problems you seem to enjoy complaining about? > Yeah I was arranging that then there was the decision to nuke anyone not using pkgng ... that killed any chance of my employer making a sizable donation (cash or devs).. so completely out of my hands now... Regards, -- Michelle Sullivan http://www.mhix.org/ From owner-freebsd-fs@freebsd.org Sat Feb 20 00:12:14 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id EC938AAEA10 for ; Sat, 20 Feb 2016 00:12:14 +0000 (UTC) (envelope-from michelle@sorbs.net) Received: from hades.sorbs.net (mail.sorbs.net [67.231.146.200]) by mx1.freebsd.org (Postfix) with ESMTP id DEEA585D; Sat, 20 Feb 2016 00:12:14 +0000 (UTC) (envelope-from michelle@sorbs.net) MIME-version: 1.0 Content-transfer-encoding: 8BIT Content-type: text/plain; charset=ISO-8859-1 Received: from isux.com (firewall.isux.com [213.165.190.213]) by hades.sorbs.net (Oracle Communications Messaging Server 7.0.5.29.0 64bit (built Jul 9 2013)) with ESMTPSA id <0O2T00FBMKW29Q00@hades.sorbs.net>; Fri, 19 Feb 2016 16:19:18 -0800 (PST) Message-id: <56C7AF59.6060600@sorbs.net> Date: Sat, 20 Feb 2016 01:12:09 +0100 From: Michelle Sullivan User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X; en-US; rv:1.8.1.24) Gecko/20100301 SeaMonkey/1.1.19 To: Don Lewis Cc: n.corvini@gmail.com, freebsd-fs@freebsd.org Subject: Re: Zfs heavy io writing | zfskern txg_thread_enter References: <201602192250.u1JMoLjQ079393@gw.catspoiler.org> In-reply-to: <201602192250.u1JMoLjQ079393@gw.catspoiler.org> X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 20 Feb 2016 
00:12:15 -0000

Don Lewis wrote:
> On 19 Feb, Niccolò Corvini wrote:
>> Hi, first time here!
>> We are having a problem with a server running FreeBSD 9.1 with ZFS on a
>> single SATA drive. Since a few days ago, in the morning the system becomes
>> really slow due to really heavy I/O writes. We investigated and we think
>> it might start at night, maybe correlated to crondaily (standard), but we
>> are not sure. After a few hours the situation returns to normal.
>> Any help is much appreciated.
>> The machine is an Intel Xeon E5-2620 with 36GB of RAM; the HDD is 2TB and
>> is half full.
>
> The only way that you should get a lot of write traffic during the daily
> periodic runs is if atime updates are enabled. I don't know what the
> default was for FreeBSD 9.1, but recent versions of FreeBSD disable
> atime updates on ZFS everywhere except /var/mail. What does
>   zfs get -o all atime
> say?

On 9.2 and 9.3 it seems the default is (respectively):

NAME                                 PROPERTY  VALUE  RECEIVED  SOURCE
storage                              atime     on     -         default

NAME                                 PROPERTY  VALUE  RECEIVED  SOURCE
VirtualDisks                         atime     on     -         default
VirtualDisks/FreeBSD10.0amd64-Build  atime     -      -         -
VirtualDisks/FreeBSD10.0amd64-OS     atime     -      -         -
VirtualDisks/FreeBSD10.1amd64-Build  atime     -      -         -
VirtualDisks/FreeBSD10.1amd64-OS     atime     -      -         -
VirtualDisks/FreeBSD8.4-Build        atime     -      -         -
VirtualDisks/FreeBSD8.4-OS           atime     -      -         -
VirtualDisks/FreeBSD9.0-Build        atime     -      -         -
VirtualDisks/FreeBSD9.0-OS           atime     -      -         -
VirtualDisks/FreeBSD9.1-Build        atime     -      -         -
VirtualDisks/FreeBSD9.1-OS           atime     -      -         -
VirtualDisks/FreeBSD9.2amd64-Build   atime     -      -         -
VirtualDisks/FreeBSD9.2amd64-OS      atime     -      -         -
VirtualDisks/FreeBSD9.2i386-Build    atime     -      -         -
VirtualDisks/FreeBSD9.2i386-OS       atime     -      -         -
VirtualDisks/FreeBSD9.3amd64-Build   atime     -      -         -
VirtualDisks/FreeBSD9.3amd64-OS      atime     -      -         -
VirtualDisks/FreeBSD9.3i386-Build    atime     -      -         -
VirtualDisks/FreeBSD9.3i386-OS       atime     -      -         -

sorbs                                atime     on     -         default
sorbs/VirtualDisks                   atime     -      -         -

It's probably the same for Niccolò on the 9.1 system.
Best regards, Michelle -- Michelle Sullivan http://www.mhix.org/ From owner-freebsd-fs@freebsd.org Sat Feb 20 01:03:14 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 7CFDAAAD2CB for ; Sat, 20 Feb 2016 01:03:14 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) Received: from esa-jnhn.mail.uoguelph.ca (esa-jnhn.mail.uoguelph.ca [131.104.91.44]) by mx1.freebsd.org (Postfix) with ESMTP id 2CEF823A; Sat, 20 Feb 2016 01:03:13 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) IronPort-PHdr: 9a23:ed9/zRX5W2/ARYeAENUcQdLcAnTV8LGtZVwlr6E/grcLSJyIuqrYZheDt8tkgFKBZ4jH8fUM07OQ6PC/HzFZqs3d+Fk5M7VyFDY9wf0MmAIhBMPXQWbaF9XNKxIAIcJZSVV+9Gu6O0UGUOz3ZlnVv2HgpWVKQka3CwN5K6zPF5LIiIzvjqbpq8KVPlwD32b1SIgxBSv1hD2ZjtMRj4pmJ/R54TryiVwMRd5rw3h1L0mYhRf265T41pdi9yNNp6BprJYYAu2pN5g/GLhVEhwIKW04zvbH8x7ZQlih/HwZB18XmRkAJgHO7xX3W9+lqC7zvel51SyyIMr5UL0wQTTk5K49G0ygszsOKzNsqDKfscd3lq8O+B8= X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: A2CtBAAqusdW/61jaINehAxtBrpMgWgXCoUiSgKBehMBAQEBAQEBAWMngi2CFAEBAQMBAQEBICsgCwULAgEIGAICDRkCAicBCSYCDAcEARwEh3EIDq0HjmIBAQEBAQUBAQEBAQEBFQR7hReBdYJGhBYBAQVkgjSBOgWOHohphVeFIJFajkYCIQFAggIBGYFmHi4Hh0I0fQEBAQ X-IronPort-AV: E=Sophos;i="5.22,473,1449550800"; d="scan'208";a="267048638" Received: from nipigon.cs.uoguelph.ca (HELO zcs1.mail.uoguelph.ca) ([131.104.99.173]) by esa-jnhn.mail.uoguelph.ca with ESMTP; 19 Feb 2016 20:02:04 -0500 Received: from localhost (localhost [127.0.0.1]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id DA23615F5AF; Fri, 19 Feb 2016 20:02:04 -0500 (EST) Received: from zcs1.mail.uoguelph.ca ([127.0.0.1]) by localhost (zcs1.mail.uoguelph.ca [127.0.0.1]) (amavisd-new, port 10032) with ESMTP id 2jRamINdf-sX; Fri, 19 Feb 2016 20:02:04 -0500 (EST) Received: from localhost (localhost [127.0.0.1]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id 34ACE15F5B2; Fri, 19 Feb 2016 20:02:04 
-0500 (EST) X-Virus-Scanned: amavisd-new at zcs1.mail.uoguelph.ca Received: from zcs1.mail.uoguelph.ca ([127.0.0.1]) by localhost (zcs1.mail.uoguelph.ca [127.0.0.1]) (amavisd-new, port 10026) with ESMTP id Yxm5KIfhVIRj; Fri, 19 Feb 2016 20:02:04 -0500 (EST) Received: from zcs1.mail.uoguelph.ca (zcs1.mail.uoguelph.ca [172.17.95.18]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id 1122F15F5AF; Fri, 19 Feb 2016 20:02:04 -0500 (EST) Date: Fri, 19 Feb 2016 20:02:03 -0500 (EST) From: Rick Macklem To: lev@FreeBSD.org Cc: freebsd-fs@freebsd.org Message-ID: <1022369130.4303814.1455930123897.JavaMail.zimbra@uoguelph.ca> In-Reply-To: <56C752CD.4090203@FreeBSD.org> References: <56C752CD.4090203@FreeBSD.org> Subject: Re: Panic in NFS client on CURRENT MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [172.17.95.12] X-Mailer: Zimbra 8.0.9_GA_6191 (ZimbraWebClient - FF44 (Win)/8.0.9_GA_6191) Thread-Topic: Panic in NFS client on CURRENT Thread-Index: I9FlwW7TJMBKL2RpOCnW2+SvWn2F0A== X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 20 Feb 2016 01:03:14 -0000 Lev wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA512 > > > I've hit panic in NFSv4 client on fresh CURRENT, which looks like bug > 206634. > Did this happen after the server was taken "offline" (whatever the PR reported meant by "offline") or was the server simply running when the panic occurred? Basically, I'm asking if there was a server reboot or nfsd thread restart or some kind of network partition that would separate some client(s) from the server. OR Panic occurred during normal operation. rick > I have core saved and could provide additional information. 
> > - -- > // Lev Serebryakov > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v2 > > iQJ8BAEBCgBmBQJWx1LNXxSAAAAAAC4AKGlzc3Vlci1mcHJAbm90YXRpb25zLm9w > ZW5wZ3AuZmlmdGhob3JzZW1hbi5uZXRGOTZEMUNBMEI1RjQzMThCNjc0QjMzMEFF > QUIwM0M1OEJGREM0NzhGAAoJEOqwPFi/3EeP5RQP+gIqYjxZKpUk+sxAoE1Tds1K > BOkBPeFPj38FuaWUFN3LmM2HH3DZpjjScHa3WUeNov7KNduBTnBB0QJbYUOmXThg > 9ggOExQR+Cjci1YmBBa/8m8Naeik0wza0XmdbUlzx/qMyfpEMPmITZqq9X+/dcbb > 7xxpwUTveR3mZc9z7yS6qFDN+oqUMILFF08jq3B+715My0f2urBSBNqP4CJ5lc/a > sTNo2jTZG9PWug7blVIX03cX/hwVz9Wa3io+p5XNF+8ZGq2b+86sSPYf/6FjPWh3 > g/pMfX+cVYiOVsyWewASnRKse4S2my5gZ4OTbtrnMjOhFPhDbacM65XBdKSBIQDK > IXEsod+YvyMB+cBjvReyErLV0KoYkY/u5TzfaVuSAvKZ+MwuFSo57m6u2XSG/Z5f > XBvcTElIvFf78ZAmqJ5heyJSgONUKEUDo17w74Um6l3d0gP2QZDVrx5QF6hLb7Tx > ssepvE9DIEOijImBkn78QbxRaJzqXLYckxp1LPM0XtspbjoZImfLrCm8jE3RQJFs > cIcNCE9iWR0DD1fgSl7C/eifljfkSxF8hKf601ZYtweXZ89OWCh/HP3O8TfS0Fj/ > 9vOHdtC/AtCMQVHU1kdRyvjRCrpjVQV5WweCf1q641DSXuHkSIhWz2rfueX4I187 > BUokRbH5pntYoubADtmN > =y6WT > -----END PGP SIGNATURE----- > _______________________________________________ > freebsd-fs@freebsd.org mailing list > https://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@freebsd.org Sat Feb 20 01:58:24 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id E5D0CAAEE72 for ; Sat, 20 Feb 2016 01:58:23 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) Received: from esa-annu.net.uoguelph.ca (esa-annu.mail.uoguelph.ca [131.104.91.36]) by mx1.freebsd.org (Postfix) with ESMTP id 963021DAA; Sat, 20 Feb 2016 01:58:23 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) IronPort-PHdr: 
9a23:W/AzbB3QGM83i0t/smDT+DRfVm0co7zxezQtwd8ZsegVLvad9pjvdHbS+e9qxAeQG96LtLQZ0qGN7OjJYi8p39WoiDg6aptCVhsI2409vjcLJ4q7M3D9N+PgdCcgHc5PBxdP9nC/NlVJSo6lPwWB6kO74TNaIBjjLw09fr2zQd6NyZnunLvts7ToICx2xxOFKYtoKxu3qQiD/uI3uqBFbpgL9x3Sv3FTcP5Xz247bXianhL7+9vitMU7q3cY6Lod8JtEXLvSUb41QJZjIHIhKW9mytfssEz5TACMrl4VWWYSnx8AVxLA5Rr5Wpr0mjb9ufdw3DGae8b/G+NnEQ++5rtmHUe7wBwMMCQ0pTna X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: A2CtBACrx8dW/61jaINehAxtBrpMgWgXCoUiSgKBehMBAQEBAQEBAWMngi2CFAEBAQMBAQEBICsgCwULAgEIGAICDRkCAicBCSYCDAcEARwEh3EIDq0KjmQBAQEBAQUBAQEBAQEBFQR7hReBdYJGhBYBAQVkgjSBOgWOHohphVeFIJFajkYCIQFAggIagWYeLgeHQjR9AQEB X-IronPort-AV: E=Sophos;i="5.22,473,1449550800"; d="scan'208";a="268704028" Received: from nipigon.cs.uoguelph.ca (HELO zcs1.mail.uoguelph.ca) ([131.104.99.173]) by esa-annu.net.uoguelph.ca with ESMTP; 19 Feb 2016 20:58:22 -0500 Received: from localhost (localhost [127.0.0.1]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id 6D46A15F5AD; Fri, 19 Feb 2016 20:58:22 -0500 (EST) Received: from zcs1.mail.uoguelph.ca ([127.0.0.1]) by localhost (zcs1.mail.uoguelph.ca [127.0.0.1]) (amavisd-new, port 10032) with ESMTP id TpMzLxHRadOD; Fri, 19 Feb 2016 20:58:21 -0500 (EST) Received: from localhost (localhost [127.0.0.1]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id AD9BE15F5B2; Fri, 19 Feb 2016 20:58:21 -0500 (EST) X-Virus-Scanned: amavisd-new at zcs1.mail.uoguelph.ca Received: from zcs1.mail.uoguelph.ca ([127.0.0.1]) by localhost (zcs1.mail.uoguelph.ca [127.0.0.1]) (amavisd-new, port 10026) with ESMTP id IVs8Em9s68UE; Fri, 19 Feb 2016 20:58:21 -0500 (EST) Received: from zcs1.mail.uoguelph.ca (zcs1.mail.uoguelph.ca [172.17.95.18]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id 7337015F5AD; Fri, 19 Feb 2016 20:58:21 -0500 (EST) Date: Fri, 19 Feb 2016 20:58:21 -0500 (EST) From: Rick Macklem To: lev@FreeBSD.org Cc: freebsd-fs@freebsd.org Message-ID: <571539848.4340752.1455933501390.JavaMail.zimbra@uoguelph.ca> In-Reply-To: 
<56C752CD.4090203@FreeBSD.org> References: <56C752CD.4090203@FreeBSD.org> Subject: Re: Panic in NFS client on CURRENT MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [172.17.95.12] X-Mailer: Zimbra 8.0.9_GA_6191 (ZimbraWebClient - FF44 (Win)/8.0.9_GA_6191) Thread-Topic: Panic in NFS client on CURRENT Thread-Index: EtQjrJrUaWkim3H1cvthW9NvQL526w== X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 20 Feb 2016 01:58:24 -0000 Lev wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA512 > > > I've hit panic in NFSv4 client on fresh CURRENT, which looks like bug > 206634. > I took a look and this appears to have been introduced by r285632, which changed vfs_hash_get() to no longer VI_LOCK() the vnode. I'm not sure, but I think you can just delete/comment out the test for the VI_INTERLOCK and subsequent panic. (line#3371,3372 in sys/fs/nfsclient/nfs_clvnops.c in head). It should only happen when the server goes into state recovery mode after a server reboot or network partitioning from clients. (As I mentioned in PR#206634, these recoveries should be avoided whenever possible. If a scheduled restart of an NFSv4 server is done, all clients should be unmounted before the reboot, if at all practical.) However, I think the panic() is now just bogus and should be removed. rick > I have core saved and could provide additional information. 
> > - -- > // Lev Serebryakov > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v2 > > iQJ8BAEBCgBmBQJWx1LNXxSAAAAAAC4AKGlzc3Vlci1mcHJAbm90YXRpb25zLm9w > ZW5wZ3AuZmlmdGhob3JzZW1hbi5uZXRGOTZEMUNBMEI1RjQzMThCNjc0QjMzMEFF > QUIwM0M1OEJGREM0NzhGAAoJEOqwPFi/3EeP5RQP+gIqYjxZKpUk+sxAoE1Tds1K > BOkBPeFPj38FuaWUFN3LmM2HH3DZpjjScHa3WUeNov7KNduBTnBB0QJbYUOmXThg > 9ggOExQR+Cjci1YmBBa/8m8Naeik0wza0XmdbUlzx/qMyfpEMPmITZqq9X+/dcbb > 7xxpwUTveR3mZc9z7yS6qFDN+oqUMILFF08jq3B+715My0f2urBSBNqP4CJ5lc/a > sTNo2jTZG9PWug7blVIX03cX/hwVz9Wa3io+p5XNF+8ZGq2b+86sSPYf/6FjPWh3 > g/pMfX+cVYiOVsyWewASnRKse4S2my5gZ4OTbtrnMjOhFPhDbacM65XBdKSBIQDK > IXEsod+YvyMB+cBjvReyErLV0KoYkY/u5TzfaVuSAvKZ+MwuFSo57m6u2XSG/Z5f > XBvcTElIvFf78ZAmqJ5heyJSgONUKEUDo17w74Um6l3d0gP2QZDVrx5QF6hLb7Tx > ssepvE9DIEOijImBkn78QbxRaJzqXLYckxp1LPM0XtspbjoZImfLrCm8jE3RQJFs > cIcNCE9iWR0DD1fgSl7C/eifljfkSxF8hKf601ZYtweXZ89OWCh/HP3O8TfS0Fj/ > 9vOHdtC/AtCMQVHU1kdRyvjRCrpjVQV5WweCf1q641DSXuHkSIhWz2rfueX4I187 > BUokRbH5pntYoubADtmN > =y6WT > -----END PGP SIGNATURE----- > _______________________________________________ > freebsd-fs@freebsd.org mailing list > https://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@freebsd.org Sat Feb 20 02:30:44 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id C660DAAEE71 for ; Sat, 20 Feb 2016 02:30:44 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) Received: from esa-annu.net.uoguelph.ca (esa-annu.mail.uoguelph.ca [131.104.91.36]) by mx1.freebsd.org (Postfix) with ESMTP id 5D069E0A; Sat, 20 Feb 2016 02:30:43 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) IronPort-PHdr: 
9a23:ke8hSBR6SjC97js5WIqXGnFDINpsv+yvbD5Q0YIujvd0So/mwa64ZxCN2/xhgRfzUJnB7Loc0qyN4/+mBDVLvcjJmUtBWaIPfidNsd8RkQ0kDZzNImzAB9muURYHGt9fXkRu5XCxPBsdMs//Y1rPvi/6tmZKSV3BPAZ4bt74BpTVx5zukbvipNuOOk4U2nKUWvBbElaflU3prM4YgI9veO4a6yDihT92QdlQ3n5iPlmJnhzxtY+a9Z9n9DlM6bp6r5YTGfayQ6NtSbFGJBo8Pm0f3+GtsgPMHiWV4X5JaGQdkVJtCgPG6Bz/FsPrtyLxte5w3QGHOsLrQLQsWXKp5vE4G1fTlC4bOmthoynsgctqgfcDrQ== X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: A2B9AgDNzsdW/61jaINehAxtBro+DoFoFwqFIkoCgXkUAQEBAQEBAQFjJ4ItghQBAQEDAQEBASAEJyALBQsCAQgYERkCAgIlAQkmAgwHBAEcBIdxCA6tC45mAQEBAQEBAQEBAQEBAQEBAQEBAQEOBASGEoF1gkaEFgEBGhkWIII0gToFjh6IaYMIgk+FIJFajkYCHgFDggIagWYeLgeHQjR9AQEB X-IronPort-AV: E=Sophos;i="5.22,473,1449550800"; d="scan'208";a="268705748" Received: from nipigon.cs.uoguelph.ca (HELO zcs1.mail.uoguelph.ca) ([131.104.99.173]) by esa-annu.net.uoguelph.ca with ESMTP; 19 Feb 2016 21:30:42 -0500 Received: from localhost (localhost [127.0.0.1]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id A2C6C15F5AF; Fri, 19 Feb 2016 21:30:42 -0500 (EST) Received: from zcs1.mail.uoguelph.ca ([127.0.0.1]) by localhost (zcs1.mail.uoguelph.ca [127.0.0.1]) (amavisd-new, port 10032) with ESMTP id LPnduk4o75tM; Fri, 19 Feb 2016 21:30:42 -0500 (EST) Received: from localhost (localhost [127.0.0.1]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id EEFF115F5B2; Fri, 19 Feb 2016 21:30:41 -0500 (EST) X-Virus-Scanned: amavisd-new at zcs1.mail.uoguelph.ca Received: from zcs1.mail.uoguelph.ca ([127.0.0.1]) by localhost (zcs1.mail.uoguelph.ca [127.0.0.1]) (amavisd-new, port 10026) with ESMTP id FrFOW3NuMpy8; Fri, 19 Feb 2016 21:30:41 -0500 (EST) Received: from zcs1.mail.uoguelph.ca (zcs1.mail.uoguelph.ca [172.17.95.18]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id C4E7815F5AF; Fri, 19 Feb 2016 21:30:41 -0500 (EST) Date: Fri, 19 Feb 2016 21:30:41 -0500 (EST) From: Rick Macklem To: lev@FreeBSD.org Cc: freebsd-fs@freebsd.org Message-ID: <2475129.4356736.1455935441771.JavaMail.zimbra@uoguelph.ca> In-Reply-To: 
<56C752CD.4090203@FreeBSD.org> References: <56C752CD.4090203@FreeBSD.org> Subject: Re: Panic in NFS client on CURRENT MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="----=_Part_4356734_988038364.1455935441769" X-Originating-IP: [172.17.95.11] X-Mailer: Zimbra 8.0.9_GA_6191 (ZimbraWebClient - FF44 (Win)/8.0.9_GA_6191) Thread-Topic: Panic in NFS client on CURRENT Thread-Index: lxY3BcYsNy84Pn3gr63Z0dkBHcpfcw== X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 20 Feb 2016 02:30:44 -0000 ------=_Part_4356734_988038364.1455935441769 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit Lev wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA512 > > > I've hit panic in NFSv4 client on fresh CURRENT, which looks like bug > 206634. > Oops, along with commenting out the panic(), the VI_UNLOCK() needs to be deleted/commented out. The attached patch (not yet tested by me) should do it. (Btw, this only applies to head/current and not stable/10 or earlier.) rick > I have core saved and could provide additional information. 
> > - -- > // Lev Serebryakov > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v2 > > iQJ8BAEBCgBmBQJWx1LNXxSAAAAAAC4AKGlzc3Vlci1mcHJAbm90YXRpb25zLm9w > ZW5wZ3AuZmlmdGhob3JzZW1hbi5uZXRGOTZEMUNBMEI1RjQzMThCNjc0QjMzMEFF > QUIwM0M1OEJGREM0NzhGAAoJEOqwPFi/3EeP5RQP+gIqYjxZKpUk+sxAoE1Tds1K > BOkBPeFPj38FuaWUFN3LmM2HH3DZpjjScHa3WUeNov7KNduBTnBB0QJbYUOmXThg > 9ggOExQR+Cjci1YmBBa/8m8Naeik0wza0XmdbUlzx/qMyfpEMPmITZqq9X+/dcbb > 7xxpwUTveR3mZc9z7yS6qFDN+oqUMILFF08jq3B+715My0f2urBSBNqP4CJ5lc/a > sTNo2jTZG9PWug7blVIX03cX/hwVz9Wa3io+p5XNF+8ZGq2b+86sSPYf/6FjPWh3 > g/pMfX+cVYiOVsyWewASnRKse4S2my5gZ4OTbtrnMjOhFPhDbacM65XBdKSBIQDK > IXEsod+YvyMB+cBjvReyErLV0KoYkY/u5TzfaVuSAvKZ+MwuFSo57m6u2XSG/Z5f > XBvcTElIvFf78ZAmqJ5heyJSgONUKEUDo17w74Um6l3d0gP2QZDVrx5QF6hLb7Tx > ssepvE9DIEOijImBkn78QbxRaJzqXLYckxp1LPM0XtspbjoZImfLrCm8jE3RQJFs > cIcNCE9iWR0DD1fgSl7C/eifljfkSxF8hKf601ZYtweXZ89OWCh/HP3O8TfS0Fj/ > 9vOHdtC/AtCMQVHU1kdRyvjRCrpjVQV5WweCf1q641DSXuHkSIhWz2rfueX4I187 > BUokRbH5pntYoubADtmN > =y6WT > -----END PGP SIGNATURE----- > _______________________________________________ > freebsd-fs@freebsd.org mailing list > https://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > ------=_Part_4356734_988038364.1455935441769 Content-Type: text/x-patch; name=ncllockpanic.patch Content-Disposition: attachment; filename=ncllockpanic.patch Content-Transfer-Encoding: base64 LS0tIGZzL25mc2NsaWVudC9uZnNfY2x2bm9wcy5jLnNhdgkyMDE2LTAyLTE5IDE4OjAxOjQ3Ljc5 Njk2OTAwMCAtMDUwMAorKysgZnMvbmZzY2xpZW50L25mc19jbHZub3BzLmMJMjAxNi0wMi0xOSAx ODowNjoxNS4xMzU5MzMwMDAgLTA1MDAKQEAgLTMzNjgsMTEgKzMzNjgsOCBAQCBuZnNfbG9jazEo c3RydWN0IHZvcF9sb2NrMV9hcmdzICphcCkKIAkgKiB0aGVyZSBpc24ndCBhbnkgcmFjZSBwcm9i bGVtLgogCSAqLwogCWlmICgoYXAtPmFfZmxhZ3MgJiBMS19UWVBFX01BU0spID09IExLX0VYQ0xP VEhFUikgewotCQlpZiAoKGFwLT5hX2ZsYWdzICYgTEtfSU5URVJMT0NLKSA9PSAwKQotCQkJcGFu aWMoIm5jbGxvY2sxIik7CiAJCWlmICgodnAtPnZfaWZsYWcgJiBWSV9ET09NRUQpKQogCQkJZXJy 
b3IgPSBFTk9FTlQ7Ci0JCVZJX1VOTE9DSyh2cCk7CiAJCXJldHVybiAoZXJyb3IpOwogCX0KIAly ZXR1cm4gKF9sb2NrbWdyX2FyZ3ModnAtPnZfdm5sb2NrLCBhcC0+YV9mbGFncywgVklfTVRYKHZw KSwK ------=_Part_4356734_988038364.1455935441769-- From owner-freebsd-fs@freebsd.org Sat Feb 20 02:36:13 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 30035AAD208 for ; Sat, 20 Feb 2016 02:36:13 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 1E8171198 for ; Sat, 20 Feb 2016 02:36:13 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.15.2/8.15.2) with ESMTP id u1K2aCFS013541 for ; Sat, 20 Feb 2016 02:36:12 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 206634] "panic: ncllock1" from FreeBSD client after NFSv4 server was taken offline and brought back to life; lots of spam about "protocol prob err=10006" Date: Sat, 20 Feb 2016 02:36:13 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 11.0-CURRENT X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: rmacklem@FreeBSD.org X-Bugzilla-Status: In Progress X-Bugzilla-Resolution: X-Bugzilla-Priority: --- X-Bugzilla-Assigned-To: rmacklem@FreeBSD.org X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: assigned_to bug_status attachments.created Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: quoted-printable X-Bugzilla-URL: 
https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 20 Feb 2016 02:36:13 -0000

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=206634

Rick Macklem changed:

           What    |Removed                 |Added
----------------------------------------------------------------------------
       Assignee    |freebsd-fs@FreeBSD.org  |rmacklem@FreeBSD.org
         Status    |New                     |In Progress

--- Comment #4 from Rick Macklem ---
Created attachment 167209
  --> https://bugs.freebsd.org/bugzilla/attachment.cgi?id=167209&action=edit
Patch to delete panic that no longer applies to current/head

r285632 changed vfs_hash_get() so that it no longer calls vget() with
VI_LOCK() held. As such, this panic() and the VI_UNLOCK() should not be
done. The attached patch (not yet tested by me) makes this change.
(Note that this panic and patch only apply to head/current and not
stable/10 or earlier.)

Please let me know if you have an NFSv4 server crash/reboot after applying
the patch and whether or not it seems to work. I will try and test the
patch. I cannot commit it to head/current until mid-April.
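[Archive editor's note: for readability, the base64 MIME attachment (ncllockpanic.patch) posted earlier in this thread decodes to the following diff. It removes the LK_INTERLOCK check and panic("ncllock1"), and the VI_UNLOCK() call, exactly as Rick describes.]

```diff
--- fs/nfsclient/nfs_clvnops.c.sav	2016-02-19 18:01:47.796969000 -0500
+++ fs/nfsclient/nfs_clvnops.c	2016-02-19 18:06:15.135933000 -0500
@@ -3368,11 +3368,8 @@ nfs_lock1(struct vop_lock1_args *ap)
 	 * there isn't any race problem.
 	 */
 	if ((ap->a_flags & LK_TYPE_MASK) == LK_EXCLOTHER) {
-		if ((ap->a_flags & LK_INTERLOCK) == 0)
-			panic("ncllock1");
 		if ((vp->v_iflag & VI_DOOMED))
 			error = ENOENT;
-		VI_UNLOCK(vp);
 		return (error);
 	}
 	return (_lockmgr_args(vp->v_vnlock, ap->a_flags, VI_MTX(vp),
```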
--
You are receiving this mail because: You are the assignee for the bug.

From owner-freebsd-fs@freebsd.org Sat Feb 20 11:08:22 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id BD17FAAFEF0 for ; Sat, 20 Feb 2016 11:08:22 +0000 (UTC) (envelope-from lev@FreeBSD.org) Received: from onlyone.friendlyhosting.spb.ru (onlyone.friendlyhosting.spb.ru [IPv6:2a01:4f8:131:60a2::2]) by mx1.freebsd.org (Postfix) with ESMTP id 8A6C8E5B for ; Sat, 20 Feb 2016 11:08:22 +0000 (UTC) (envelope-from lev@FreeBSD.org) Received: from [127.0.0.1] (unknown [89.113.128.32]) (Authenticated sender: lev@serebryakov.spb.ru) by onlyone.friendlyhosting.spb.ru (Postfix) with ESMTPSA id 62683A91B; Sat, 20 Feb 2016 14:08:19 +0300 (MSK) Reply-To: lev@FreeBSD.org Subject: Re: Panic in NFS client on CURRENT References: <56C752CD.4090203@FreeBSD.org> <1022369130.4303814.1455930123897.JavaMail.zimbra@uoguelph.ca> To: Rick Macklem Cc: freebsd-fs@freebsd.org From: Lev Serebryakov Organization: FreeBSD Message-ID: <56C84922.8050803@FreeBSD.org> Date: Sat, 20 Feb 2016 14:08:18 +0300 User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:38.0) Gecko/20100101 Thunderbird/38.6.0 MIME-Version: 1.0 In-Reply-To: <1022369130.4303814.1455930123897.JavaMail.zimbra@uoguelph.ca> Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 20 Feb 2016 11:08:22 -0000 -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512

On 20.02.2016 04:02, Rick Macklem wrote:
>> Basically, I'm asking if there was a server reboot or nfsd thread
>> restart or some kind of network partition that would separate
>> some client(s) from the server. OR Panic occurred during normal
>> operation.
There was NO server reboot/restarts. MAYBE, this VM (where client runs) lost network connectivity for several seconds, but server itself was NOT stopped, restarted or rebooted. - -- // Lev Serebryakov -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iQJ7BAEBCgBmBQJWyEkiXxSAAAAAAC4AKGlzc3Vlci1mcHJAbm90YXRpb25zLm9w ZW5wZ3AuZmlmdGhob3JzZW1hbi5uZXRGOTZEMUNBMEI1RjQzMThCNjc0QjMzMEFF QUIwM0M1OEJGREM0NzhGAAoJEOqwPFi/3EePfCsP9AuK494J8cUft0MmvAly7yVw iF0R0joxwttp9t6qydMjlQfmj6yoX+UACFWWRBZGgGrS8K7PcGSsGFl5s/Bt1ylL lw3GDr7GsVNDOhG4ypwsiqI2Wq/PzFhBMUpuUq6A+kdqZVH1ApQFyDKrdWbvDQLx 9Dm6vvW/fx6W1PgJp4i2B8zSf4vz7s91JyPMXnN9IQNG/1H9WERudzx/2kp1ws9y wYCXVmsidMO9j0DQ4eVVSM2vSfc6VKgyjWhVeHguRXc5F3L5VGuoSXyzCkceC66r t+8MDYhrsm00hrkZyTO6s1KcC8OKrgZBr9p0UIM1oMaqo02DyWp7KfM1nDMW9FI6 IXsLaizPnnf7u+gGI2SllNXMaPvcREAxrnQDHKTdifKkpXrSroYYfJGmxAsRidmY 8nwZ1bytGeSHlTYSq1XTJLCWsSoM/o0Vgl+bGXvajWFkFT/GRGb5akWUBZhkzo7n TTpm0zrLuSvqWwRvqisoAuKW7QmCF2E0ei0E01TA3DDpF31dLOCApMq4t/UooT5h w25dTRpc+WPUEwKXSzZ90kPHmmoRz7dn8y6Oeb681GtqoauMBgVUuWhI7+sobRBy gcyIIpPB1Y0vteslzd5JDRUWcDUGg23fqRgax+J+motaNEXus2P6RxZTkq3DmOgO qpvv/BwLVn++rVnTNWY= =hihT -----END PGP SIGNATURE----- From owner-freebsd-fs@freebsd.org Sat Feb 20 11:42:08 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id BFFDCAAEDAF for ; Sat, 20 Feb 2016 11:42:08 +0000 (UTC) (envelope-from lev@FreeBSD.org) Received: from onlyone.friendlyhosting.spb.ru (onlyone.friendlyhosting.spb.ru [IPv6:2a01:4f8:131:60a2::2]) by mx1.freebsd.org (Postfix) with ESMTP id 8F6A41FD9 for ; Sat, 20 Feb 2016 11:42:08 +0000 (UTC) (envelope-from lev@FreeBSD.org) Received: from [127.0.0.1] (unknown [89.113.128.32]) (Authenticated sender: lev@serebryakov.spb.ru) by onlyone.friendlyhosting.spb.ru (Postfix) with ESMTPSA id E5F32A925; Sat, 20 Feb 2016 14:42:06 +0300 (MSK) Reply-To: lev@FreeBSD.org Subject: Re: Panic in NFS client on 
CURRENT References: <56C752CD.4090203@FreeBSD.org> <2475129.4356736.1455935441771.JavaMail.zimbra@uoguelph.ca> To: Rick Macklem Cc: freebsd-fs@freebsd.org From: Lev Serebryakov Organization: FreeBSD Message-ID: <56C8510F.6030008@FreeBSD.org> Date: Sat, 20 Feb 2016 14:42:07 +0300 User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:38.0) Gecko/20100101 Thunderbird/38.6.0 MIME-Version: 1.0 In-Reply-To: <2475129.4356736.1455935441771.JavaMail.zimbra@uoguelph.ca> Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 20 Feb 2016 11:42:08 -0000 -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 On 20.02.2016 05:30, Rick Macklem wrote: > The attached patch (not yet tested by me) should do it. (Btw, this > only applies to head/current and not stable/10 or earlier.) I'll give it a try after day or two and return with result. I'll try to imitate network partitioning with and without this patch, too. 
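[Archive editor's note: one simple way to imitate a short network partition on a FreeBSD test client is a temporary firewall rule. The server address below is a documentation placeholder and the rule number is arbitrary; this is an untested sketch, not part of the thread.]

```sh
# Drop all IP traffic from this client to the NFS server for a while
# (192.0.2.10 is a placeholder; substitute the real server address):
ipfw add 100 deny ip from me to 192.0.2.10

# Hold the "partition" long enough for NFS requests to time out:
sleep 60

# Restore connectivity and observe whether the client recovers or panics:
ipfw delete 100
```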
- -- // Lev Serebryakov -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iQJ8BAEBCgBmBQJWyFEOXxSAAAAAAC4AKGlzc3Vlci1mcHJAbm90YXRpb25zLm9w ZW5wZ3AuZmlmdGhob3JzZW1hbi5uZXRGOTZEMUNBMEI1RjQzMThCNjc0QjMzMEFF QUIwM0M1OEJGREM0NzhGAAoJEOqwPFi/3EePSL8QAJlENZzWM30e/AbLxp8KGdce yHW3LfQ9CShHfi1ydptmfH0MPQZ0Lu9CF4+2X4gdBzYLEF+GBWjjeNwuYJHovWNx MiUPy99EGexX8trU8ogT2z2OZvvsvStMx6v7ZT9wwJdM9QoNZQXuswW1lMxdsdau TRFSsvgjQWv6338YmhFdzCb1UhetPXdLvgDPfUy61+Iy7Rd8qQs/YN16fIk9OpM/ /rH78ldp0b9YFDBb7FdTrnwwc5KqZFMngocxYgHalaQY0eAK0IqZBVYm0820/MaY 3B50sa8kw6JZ5M3LTrNjY1DXkJCsZXOrUZ+znj4djkyxiHGk+dpBnqNTPRSwBvtm kSBvubNVUkNAxjZZsvb9+2aCtmb0TEFF1qw3S9wUdfH1TbtRDFjk13iCxpUYiOd7 epxVr4sNPFPDxtbrITNCo0+/GYclY7F02CZZ4vUU0lQTdprygBBE5vqriyybjsw+ VnJYk5UmrtTvPh4PIC0g1N+r3G2T1TbsmlGFEJmcOe1lcrmATSZtODQmodHZan0f hHx1XD2otYb+qnsjuIIQ4/9doS+4VfJ8jZ7gfUIembP0PIVD1KBMb2oYxfYvx8Qv eCUyBm5g68ftgN6807HrO05J7QnbVjv36lba8QLJOWSearfiADqngSKoxOLO4VBY Iic6tWD2a2A35U8f6poC =7T+q -----END PGP SIGNATURE----- From owner-freebsd-fs@freebsd.org Sat Feb 20 20:42:47 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id CB95AAAE018 for ; Sat, 20 Feb 2016 20:42:47 +0000 (UTC) (envelope-from lev@FreeBSD.org) Received: from onlyone.friendlyhosting.spb.ru (onlyone.friendlyhosting.spb.ru [46.4.40.135]) by mx1.freebsd.org (Postfix) with ESMTP id 97AD91A4F for ; Sat, 20 Feb 2016 20:42:47 +0000 (UTC) (envelope-from lev@FreeBSD.org) Received: from [127.0.0.1] (unknown [89.113.128.32]) (Authenticated sender: lev@serebryakov.spb.ru) by onlyone.friendlyhosting.spb.ru (Postfix) with ESMTPSA id 3340FA996 for ; Sat, 20 Feb 2016 23:42:38 +0300 (MSK) Reply-To: lev@FreeBSD.org To: freebsd-fs@freebsd.org From: Lev Serebryakov Subject: funny "zfs send -Rv" glitch Organization: FreeBSD Message-ID: <56C8CFBC.6010301@FreeBSD.org> Date: Sat, 20 Feb 2016 23:42:36 +0300 User-Agent: Mozilla/5.0 (Windows 
NT 6.1; WOW64; rv:38.0) Gecko/20100101 Thunderbird/38.6.0 MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 20 Feb 2016 20:42:47 -0000
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

I'm send'ing a ZFS dataset with many FSes and snapshots from one server to
another via "zfs send -Rv zpool@to-send". It looks like "-v" prints a header
row (TIME SENT SNAPSHOT) before every new snapshot it processes, but if the
snapshot in question is very small (takes less than a second to send?) it
doesn't print a DATA row for that snapshot, so the output looks like:

TIME SENT SNAPSHOT
TIME SENT SNAPSHOT
TIME SENT SNAPSHOT
TIME SENT SNAPSHOT
TIME SENT SNAPSHOT
TIME SENT SNAPSHOT
TIME SENT SNAPSHOT
TIME SENT SNAPSHOT
23:40:35 178G zroot/home/hosted/printnotdead@daily-2016-02-07_03.01.46--1m
23:40:36 178G zroot/home/hosted/printnotdead@daily-2016-02-07_03.01.46--1m
TIME SENT SNAPSHOT
TIME SENT SNAPSHOT
TIME SENT SNAPSHOT
TIME SENT SNAPSHOT
TIME SENT SNAPSHOT
TIME SENT SNAPSHOT

:)

- --
// Lev Serebryakov
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2

iQJ8BAEBCgBmBQJWyM+8XxSAAAAAAC4AKGlzc3Vlci1mcHJAbm90YXRpb25zLm9w
ZW5wZ3AuZmlmdGhob3JzZW1hbi5uZXRGOTZEMUNBMEI1RjQzMThCNjc0QjMzMEFF
QUIwM0M1OEJGREM0NzhGAAoJEOqwPFi/3EePpEEQAKO9fuMCODHGZhfreqd7IuFH
+7r2O4G34t6Lwi/GWUlybYrfRGR39JzS+uoEJUGGb46/ToMPK0aTAoScBlp01wyl
mLz52eI9RZvVo/SQ2nqbBrEX3gsS3KzX97wQyfoJws2GJYXZ1ADIjlM4VeYOegd+
2SFqEtR78o26y8b+rGPcYnV5QyNznVQ9s0njjGAMGd0dD213SX8XKKzATR1lczMt
U8y9VqkFSlqU2PhIOO3Y0J4MugRrtOHLslUyPltdprDl+wdAclb/uwEYczIVTM5u
f4F6YW9tvzaKjROtug2FgCGG3n2FgdR9nUfkr1xZTXAE9sksPzy+ubq7uRr9N9qN
hSW+qUJbZjlUQ7Uh2tdhDMsjUAGLEh0sm/jO52ViFcI0q9lDl8nFzW2+47vp46vH
ZzoWYCUuDAGeVpFT7mmh0fbCS+9XzldLavG11f//TS0Qfup4BcX+4sSu0Vm9G9s+
0D1yXlasOqzmBoAmFBK6spM7oqPI6/jgqAlz/VmehMA+dJTvCpJRFYG6UvFUQG/h
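[Editorial note: since only the DATA rows carry information, the duplicate header rows can be filtered out after the fact. The sketch below simulates a fragment of the reported output and drops any header row that no data row follows; the awk filter is an illustration, not part of zfs, and when used for real remember that `zfs send -v` reports its progress on stderr.]

```shell
# Simulate a fragment of the "zfs send -Rv" progress output, then filter it:
# hold each header row back and print it only when a data row follows.
printf '%s\n' \
  'TIME SENT SNAPSHOT' \
  'TIME SENT SNAPSHOT' \
  '23:40:35 178G zroot/home/hosted/printnotdead@daily-2016-02-07_03.01.46--1m' \
  '23:40:36 178G zroot/home/hosted/printnotdead@daily-2016-02-07_03.01.46--1m' \
  'TIME SENT SNAPSHOT' |
awk '/^TIME[ \t]+SENT[ \t]+SNAPSHOT/ { hdr = $0; next }  # buffer header rows
     { if (hdr != "") { print hdr; hdr = "" }; print }'  # flush only before data
```

In real use this would look like `zfs send -Rv zpool@to-send 2>&1 >/dev/null | awk '...'`, which keeps the replication stream on stdout untouched while filtering the progress lines.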
c5XTSePSvwLu8A14wSjnZMKIq0x/Ppwz3S5Jr2iMEtu5lxa13UKu9TRYEUgVeh01 2/L86JD/0OdUpQCQ2X18dZd6UDUmq+w8yuEGSEzKiYduSFxfAgV1Pjg19q2deIpv fC/5N6Byj0AkoW2b4AFv =kZki -----END PGP SIGNATURE----- From owner-freebsd-fs@freebsd.org Sat Feb 20 22:58:27 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id D5644AAF0C2 for ; Sat, 20 Feb 2016 22:58:27 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) Received: from esa-jnhn.mail.uoguelph.ca (esa-jnhn.mail.uoguelph.ca [131.104.91.44]) by mx1.freebsd.org (Postfix) with ESMTP id 56668D37; Sat, 20 Feb 2016 22:58:26 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) IronPort-PHdr: 9a23:gymhrBaj0uW/ZIFVobs4jLD/LSx+4OfEezUN459isYplN5qZpc+8bnLW6fgltlLVR4KTs6sC0LqJ9f26EjVeu96oizMrTt9lb1c9k8IYnggtUoauKHbQC7rUVRE8B9lIT1R//nu2YgB/Ecf6YEDO8DXptWZBUiv2OQc9HOnpAIma153xjLDtvcCJKFwW3nKUWvBbElaflU3prM4YgI9veO4a6yDihT92QdlQ3n5iPlmJnhzxtY+a9Z9n9DlM6bp6r5YTGfayQ6NtSbFGJBo8Pm0f3+GtsgPMHiWV4X5JaGQdkVJtCgPG6Bz/FsPrtyLxte5w3QGHOsLrQLQsWXKp5vE4G1fTlC4bOmthoynsgctqgfcDrQ== X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: A2DOAQCI7shW/61jaINehH+4NYITAQ2BaIYNAoFnFAEBAQEBAQEBYyeCLYIVAQEEIwRSEAIBCBgCAg0ZAgJXAogxrGiONQEBAQEGAgEde4UXgXWCRoQFARABBhaDAoE6BYdShkw9iCycUY5HAh4BAUKCAhqBZh6HaQgXHX0BAQE X-IronPort-AV: E=Sophos;i="5.22,478,1449550800"; d="scan'208";a="267131325" Received: from nipigon.cs.uoguelph.ca (HELO zcs1.mail.uoguelph.ca) ([131.104.99.173]) by esa-jnhn.mail.uoguelph.ca with ESMTP; 20 Feb 2016 17:58:25 -0500 Received: from localhost (localhost [127.0.0.1]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id 5348415F5B4; Sat, 20 Feb 2016 17:58:25 -0500 (EST) Received: from zcs1.mail.uoguelph.ca ([127.0.0.1]) by localhost (zcs1.mail.uoguelph.ca [127.0.0.1]) (amavisd-new, port 10032) with ESMTP id Yta-nqU_U3Ga; Sat, 20 Feb 2016 17:58:24 -0500 (EST) Received: from localhost (localhost [127.0.0.1]) by 
zcs1.mail.uoguelph.ca (Postfix) with ESMTP id 80B7015F5B8; Sat, 20 Feb 2016 17:58:24 -0500 (EST) X-Virus-Scanned: amavisd-new at zcs1.mail.uoguelph.ca Received: from zcs1.mail.uoguelph.ca ([127.0.0.1]) by localhost (zcs1.mail.uoguelph.ca [127.0.0.1]) (amavisd-new, port 10026) with ESMTP id gE0Yjo_hm-Ii; Sat, 20 Feb 2016 17:58:24 -0500 (EST) Received: from zcs1.mail.uoguelph.ca (zcs1.mail.uoguelph.ca [172.17.95.18]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id 64CF815F5B4; Sat, 20 Feb 2016 17:58:24 -0500 (EST) Date: Sat, 20 Feb 2016 17:58:24 -0500 (EST) From: Rick Macklem To: lev@FreeBSD.org Cc: freebsd-fs@freebsd.org Message-ID: <353969052.570755.1456009104365.JavaMail.zimbra@uoguelph.ca> In-Reply-To: <56C84922.8050803@FreeBSD.org> References: <56C752CD.4090203@FreeBSD.org> <1022369130.4303814.1455930123897.JavaMail.zimbra@uoguelph.ca> <56C84922.8050803@FreeBSD.org> Subject: Re: Panic in NFS client on CURRENT MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [172.17.95.10] X-Mailer: Zimbra 8.0.9_GA_6191 (ZimbraWebClient - FF44 (Win)/8.0.9_GA_6191) Thread-Topic: Panic in NFS client on CURRENT Thread-Index: JDQ9VGQ3TCkbUOS8TmKxloHynFS2UA== X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 20 Feb 2016 22:58:27 -0000 Lev wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA512 > > On 20.02.2016 04:02, Rick Macklem wrote: > > >> Basically, I'm asking if there was a server reboot or nfsd thread > >> restart or some kind of network partition that would separate > >> some client(s) from the server. OR Panic occurred during normal > >> operation. > There was NO server reboot/restarts. MAYBE, this VM (where client > runs) lost network connectivity for several seconds, but server itself > was NOT stopped, restarted or rebooted. 
>
Well, the stack trace you put in the PR showed a recovery from an expired
lease. This should only occur when the client is partitioned from the server
for more than a lease duration (120 sec on FreeBSD). Even a 120+ sec network
partition won't cause an expired-lease recovery unless a conflicting
open/lock request is made against a FreeBSD server. (A Linux server will
return NFS4ERR_EXPIRED as soon as the lease has expired without a renewal,
and Linux uses a lease of 60 sec, so it is easier to reproduce with a Linux
NFSv4 server if you happen to have one.)
--> So I don't know why it would go into a lease-expired recovery. (A
network partition of a few seconds shouldn't do it.)

I think the only way to know what caused this would be to have a packet
capture that started before the problem occurred. (Maybe your network setup
is somehow directing some RPC messages to the wrong place, or they're being
blocked by some firewall setup.) If you have an NFSv4.0 mount, you should
see a Renew RPC about once per minute (half a lease duration), which keeps
the lease from expiring. For NFSv4.1, it is an RPC with just a Sequence
operation, which should have the same effect.

Reproducing this shouldn't be easy (which is a good thing;-). It has been a
while, but it should take something like:
- Network-partition a client from the server while it has a file open, for
  several minutes. (It might also need to hold a byte-range lock on the
  file; I can't remember for sure if just an open is sufficient.)
- Try to open the same file on another client (and maybe acquire a
  conflicting byte-range lock).
--> This should result in a reply to the client of NFS4ERR_EXPIRED.

If you look at a packet trace in Wireshark, it is a server reply of
NFS4ERR_EXPIRED that tells the client to go into this recovery cycle.
Unfortunately I am away from home until April, so I don't have access to
Wireshark until then.
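[Editorial note: the renewal arithmetic in this message can be sanity-checked with a few lines of shell. The 120 s lease and the half-lease Renew interval are the values given here; the script itself is just an illustration.]

```shell
# The client renews every half-lease, so only a partition longer than the
# full lease can leave the server free to expire the client's state
# (surfacing as NFS4ERR_EXPIRED on a conflicting open/lock).
lease=120            # FreeBSD server lease duration, in seconds
renew=$((lease / 2)) # Renew RPC interval: half a lease duration

for partition in 5 90 180; do
    if [ "$partition" -gt "$lease" ]; then
        echo "partition of ${partition}s: no Renew fits, expiry possible"
    else
        echo "partition of ${partition}s: a Renew (every ${renew}s) still arrives in time"
    fi
done
```

With the 60 s Linux lease substituted, the same check flags the 90 s partition as well, matching the point that a Linux NFSv4 server is easier to trip.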
(I will try and reproduce a NFSERR_EXPIRED failure with the laptops I have with me, but I'm not sure if I can pull it off.) Btw, this type of recovery isn't specified by the RFC and can only recover opens and not byte range locks. Fyi, the recovery from a server reboot (or reload of nfsd.ko in a FreeBSD server) is specified by the RFC and can recover opens and locks. It starts with the server replying NFSERR_STALECLIENTID or NFSERR_STALESTATEID. Good luck with it, rick ps: I`ll email if I reproduce the NFSERR_EXPIRED and find any problem beyond the panic with fix already posted. > - -- > // Lev Serebryakov > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v2 > > iQJ7BAEBCgBmBQJWyEkiXxSAAAAAAC4AKGlzc3Vlci1mcHJAbm90YXRpb25zLm9w > ZW5wZ3AuZmlmdGhob3JzZW1hbi5uZXRGOTZEMUNBMEI1RjQzMThCNjc0QjMzMEFF > QUIwM0M1OEJGREM0NzhGAAoJEOqwPFi/3EePfCsP9AuK494J8cUft0MmvAly7yVw > iF0R0joxwttp9t6qydMjlQfmj6yoX+UACFWWRBZGgGrS8K7PcGSsGFl5s/Bt1ylL > lw3GDr7GsVNDOhG4ypwsiqI2Wq/PzFhBMUpuUq6A+kdqZVH1ApQFyDKrdWbvDQLx > 9Dm6vvW/fx6W1PgJp4i2B8zSf4vz7s91JyPMXnN9IQNG/1H9WERudzx/2kp1ws9y > wYCXVmsidMO9j0DQ4eVVSM2vSfc6VKgyjWhVeHguRXc5F3L5VGuoSXyzCkceC66r > t+8MDYhrsm00hrkZyTO6s1KcC8OKrgZBr9p0UIM1oMaqo02DyWp7KfM1nDMW9FI6 > IXsLaizPnnf7u+gGI2SllNXMaPvcREAxrnQDHKTdifKkpXrSroYYfJGmxAsRidmY > 8nwZ1bytGeSHlTYSq1XTJLCWsSoM/o0Vgl+bGXvajWFkFT/GRGb5akWUBZhkzo7n > TTpm0zrLuSvqWwRvqisoAuKW7QmCF2E0ei0E01TA3DDpF31dLOCApMq4t/UooT5h > w25dTRpc+WPUEwKXSzZ90kPHmmoRz7dn8y6Oeb681GtqoauMBgVUuWhI7+sobRBy > gcyIIpPB1Y0vteslzd5JDRUWcDUGg23fqRgax+J+motaNEXus2P6RxZTkq3DmOgO > qpvv/BwLVn++rVnTNWY= > =hihT > -----END PGP SIGNATURE----- > From owner-freebsd-fs@freebsd.org Sat Feb 20 23:03:33 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 2C2D4AAF34E for ; Sat, 20 Feb 2016 23:03:33 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) Received: from esa-annu.net.uoguelph.ca 
(esa-annu.mail.uoguelph.ca [131.104.91.36]) by mx1.freebsd.org (Postfix) with ESMTP id BDDFE10F2; Sat, 20 Feb 2016 23:03:31 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) IronPort-PHdr: 9a23:YGhAwR0DA75nr9PgsmDT+DRfVm0co7zxezQtwd8ZsegVLPad9pjvdHbS+e9qxAeQG96LtLQZ0aGP7fqocFdDyKjCmUhKSIZLWR4BhJdetC0bK+nBN3fGKuX3ZTcxBsVIWQwt1Xi6NU9IBJS2PAWK8TWM5DIfUi/yKRBybrysXNWC0ILqjavrpcebSj4LrQT+SIs6FA+xowTVu5teqqpZAYF19CH0pGBVcf9d32JiKAHbtR/94sCt4MwrqHwI6Lpyv/JHBKH3YYwWV7FVJg8KdWcv657Frx7GGDGO7XhUd2wdkR5FBkCR9hTzVZT1vy7Sq+1yxSSeJc2wRrliCmfq1LtiVBK90HRPDDU+6myC0sE= X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: A2DOAQB48MhW/61jaINehH+4NYITAQ2BaIYNAoFnFAEBAQEBAQEBYyeCLYIUAQEBBCNWDAQCAQgYAgINGQICVwKIMaxojjUBAQEBBgEBAQEBG3uFF4F1gkaEHRZOgjSBOgWOHohpnFGORwIeAQFCggIagWYeiCV9AQEB X-IronPort-AV: E=Sophos;i="5.22,478,1449550800"; d="scan'208";a="268785675" Received: from nipigon.cs.uoguelph.ca (HELO zcs1.mail.uoguelph.ca) ([131.104.99.173]) by esa-annu.net.uoguelph.ca with ESMTP; 20 Feb 2016 18:03:30 -0500 Received: from localhost (localhost [127.0.0.1]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id 8365E15F5B4; Sat, 20 Feb 2016 18:03:30 -0500 (EST) Received: from zcs1.mail.uoguelph.ca ([127.0.0.1]) by localhost (zcs1.mail.uoguelph.ca [127.0.0.1]) (amavisd-new, port 10032) with ESMTP id U8O8xuR5OyuO; Sat, 20 Feb 2016 18:03:30 -0500 (EST) Received: from localhost (localhost [127.0.0.1]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id E75A315F5B8; Sat, 20 Feb 2016 18:03:29 -0500 (EST) X-Virus-Scanned: amavisd-new at zcs1.mail.uoguelph.ca Received: from zcs1.mail.uoguelph.ca ([127.0.0.1]) by localhost (zcs1.mail.uoguelph.ca [127.0.0.1]) (amavisd-new, port 10026) with ESMTP id VFUkq_WyQsS7; Sat, 20 Feb 2016 18:03:29 -0500 (EST) Received: from zcs1.mail.uoguelph.ca (zcs1.mail.uoguelph.ca [172.17.95.18]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id CDDD615F5B4; Sat, 20 Feb 2016 18:03:29 -0500 (EST) Date: Sat, 20 Feb 2016 18:03:29 -0500 (EST) From: Rick Macklem To: 
lev@FreeBSD.org Cc: freebsd-fs@freebsd.org Message-ID: <796102748.572844.1456009409825.JavaMail.zimbra@uoguelph.ca> In-Reply-To: <56C84922.8050803@FreeBSD.org> References: <56C752CD.4090203@FreeBSD.org> <1022369130.4303814.1455930123897.JavaMail.zimbra@uoguelph.ca> <56C84922.8050803@FreeBSD.org> Subject: Re: Panic in NFS client on CURRENT MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [172.17.95.11] X-Mailer: Zimbra 8.0.9_GA_6191 (ZimbraWebClient - FF44 (Win)/8.0.9_GA_6191) Thread-Topic: Panic in NFS client on CURRENT Thread-Index: cSTJZAsR/gRMCfmoGisEauLbyF/iIQ== X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 20 Feb 2016 23:03:33 -0000
Oh, and if you are using jails, they are probably what is causing the
problem. One known jail-related issue is that the nfsuserd upcalls to
userland don't work because they don't come from 127.0.0.1. There are
probably others I am not aware of.

rick

----- Original Message -----
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA512
>
> On 20.02.2016 04:02, Rick Macklem wrote:
>
> >> Basically, I'm asking if there was a server reboot or nfsd thread
> >> restart or some kind of network partition that would separate
> >> some client(s) from the server. OR Panic occurred during normal
> >> operation.
> There was NO server reboot/restarts. MAYBE, this VM (where client
> runs) lost network connectivity for several seconds, but server itself
> was NOT stopped, restarted or rebooted.
> > - -- > // Lev Serebryakov > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v2 > > iQJ7BAEBCgBmBQJWyEkiXxSAAAAAAC4AKGlzc3Vlci1mcHJAbm90YXRpb25zLm9w > ZW5wZ3AuZmlmdGhob3JzZW1hbi5uZXRGOTZEMUNBMEI1RjQzMThCNjc0QjMzMEFF > QUIwM0M1OEJGREM0NzhGAAoJEOqwPFi/3EePfCsP9AuK494J8cUft0MmvAly7yVw > iF0R0joxwttp9t6qydMjlQfmj6yoX+UACFWWRBZGgGrS8K7PcGSsGFl5s/Bt1ylL > lw3GDr7GsVNDOhG4ypwsiqI2Wq/PzFhBMUpuUq6A+kdqZVH1ApQFyDKrdWbvDQLx > 9Dm6vvW/fx6W1PgJp4i2B8zSf4vz7s91JyPMXnN9IQNG/1H9WERudzx/2kp1ws9y > wYCXVmsidMO9j0DQ4eVVSM2vSfc6VKgyjWhVeHguRXc5F3L5VGuoSXyzCkceC66r > t+8MDYhrsm00hrkZyTO6s1KcC8OKrgZBr9p0UIM1oMaqo02DyWp7KfM1nDMW9FI6 > IXsLaizPnnf7u+gGI2SllNXMaPvcREAxrnQDHKTdifKkpXrSroYYfJGmxAsRidmY > 8nwZ1bytGeSHlTYSq1XTJLCWsSoM/o0Vgl+bGXvajWFkFT/GRGb5akWUBZhkzo7n > TTpm0zrLuSvqWwRvqisoAuKW7QmCF2E0ei0E01TA3DDpF31dLOCApMq4t/UooT5h > w25dTRpc+WPUEwKXSzZ90kPHmmoRz7dn8y6Oeb681GtqoauMBgVUuWhI7+sobRBy > gcyIIpPB1Y0vteslzd5JDRUWcDUGg23fqRgax+J+motaNEXus2P6RxZTkq3DmOgO > qpvv/BwLVn++rVnTNWY= > =hihT > -----END PGP SIGNATURE----- >
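[Editorial note: the jail issue Rick mentions boils down to a source-address check on the upcall. A toy sketch of the comparison involved follows; the function name and the jail address are made up, and the real nfsuserd performs this check in C over a socket.]

```shell
# nfsuserd only accepts upcalls whose source address is 127.0.0.1; from
# inside a jail the upcall carries the jail's own IP, so it is rejected.
upcall_allowed() {
    [ "$1" = "127.0.0.1" ]
}

upcall_allowed 127.0.0.1 && echo "upcall from host: accepted"
upcall_allowed 192.0.2.10 || echo "upcall from jail (192.0.2.10): rejected"
```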