From owner-freebsd-fs@freebsd.org Sun Aug 7 08:09:42 2016
From: Maxim Sobolev <sobomax@sippysoft.com>
Date: Sun, 7 Aug 2016 01:09:41 -0700
Subject: Re: Optimizing UFS 1/2 for non-rotating / compressed storage
To: Kirk McKusick
Cc: FreeBSD Filesystems
In-Reply-To: <201608041704.u74H47hb090342@chez.mckusick.com>

Thanks, Kirk, I hope you had a great time off down there!

So far we have settled on the following, which seems to pessimize compression
levels slightly but greatly reduces the size of an incremental upgrade done
with rsync after we change just a few files and re-pack:

newfs -n -b 65536 -f $((65536 / 2)) -m 0 -L "${FW_LABEL}" "/dev/${MD_UNIT}"

Unfortunately, 64k is the maximum block size we can get out of it (128k is
rejected), and we run out of inodes if we set the fragment size to 64k as
well. Is there a fundamental limitation on the size of the block? I am
curious to see how 128/32 might work, considering that a bigger block size
is preferred by the compressor. We'll try to play with the other options
too, as you've suggested.
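The arithmetic behind the -b/-f pair above can be sketched as a small shell script. It only computes and prints the newfs invocation; the FW_LABEL and MD_UNIT values below are hypothetical placeholders, and nothing is written to any disk:

```shell
#!/bin/sh
# Sketch only: compute and print the newfs command; do not run it.

BSIZE=65536               # 64k block size, the largest a stock kernel accepts
FSIZE=$((BSIZE / 2))      # 32k fragments leave enough inodes available
FW_LABEL="firmware"       # hypothetical label value
MD_UNIT="md0"             # hypothetical md(4) unit

# UFS requires the block/fragment ratio to be 1, 2, 4, or 8.
RATIO=$((BSIZE / FSIZE))
case "$RATIO" in
    1|2|4|8) ;;
    *) echo "invalid bsize/fsize ratio: $RATIO" >&2; exit 1 ;;
esac

echo "newfs -n -b ${BSIZE} -f ${FSIZE} -m 0 -L \"${FW_LABEL}\" /dev/${MD_UNIT}"
```

The same ratio check explains why 128/32 would be legal in principle (ratio 4), while anything past an 8:1 split is rejected by newfs.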
-Max

On Thu, Aug 4, 2016 at 10:04 AM, Kirk McKusick wrote:
>> From: Maxim Sobolev
>> Date: Wed, 20 Jul 2016 11:45:03 -0700
>> Subject: Optimizing UFS 1/2 for non-rotating / compressed storage
>> To: Kirk McKusick, FreeBSD Filesystems
>>
>> Hi Kirk et al,
>>
>> Do you by any chance have some hints on what parameters we need to set
>> in newfs to best fit the following criteria:
>>
>> 1. Minimize free-space fragmentation: we start with a huge array of
>> zeroes and want to end up with as small a number of contiguous zero
>> areas as possible (i.e., minimize free-space discontinuity).
>>
>> 2. Blocks that belong to the same file should be as contiguous as
>> possible "on disk".
>>
>> 3. Each individual file should preferably start at a block offset that
>> is a multiple of a certain pre-defined power-of-two size from the start
>> of the partition, e.g. 64k, 128k, etc.
>>
>> The file system in question is write-mostly. We create the image from
>> scratch every time and then populate it with installworld + pkg add.
>> Any free space is subsequently erased with
>> dd if=/dev/zero of=/myfs/bigfile; rm /myfs/bigfile, the filesystem is
>> unmounted, and the image is compressed. We also grossly over-provision
>> space: a 2GB UFS image is created, of which less than 1GB is used in
>> the end.
>>
>> Any hints would be appreciated. Thanks!
>>
>> -Maxim
>
> Just back from spending the month of July in Tasmania (Australia)
> and trying to get caught up on email...
>
> Unfortunately, UFS/FFS is not well designed for what you want to do.
> It splits the filesystem space up into "cylinder groups" and then
> tries to place the files evenly across the cylinder groups. At least
> it packs the files into the front of each cylinder group, so you
> will tend to get a big block of unallocated space at the end of
> each cylinder group.
>
> You could benefit from allocating the fewest number of cylinder
> groups possible, which is what newfs does by default. But you could
> help this along by creating a filesystem with no fragments (just
> full-sized blocks), as that keeps the bitmaps small (the bitmap needs
> one bit per possible fragment). I will note that going without
> fragments will blow up your disk usage if you have many small files,
> as a small file will use 8x as much space as it would if you had
> fragments.
>
> Use the `-e maxbpg' parameter to newfs (or tunefs after the fact)
> to set a huge value for contiguous blocks before being forced to
> move to a new cylinder group. Note that doing this will penalize
> your small-file read performance, so you may want to leave it
> alone.
>
> To get all files to start on a particular block boundary, set your
> filesystem block size to the starting offset boundary you desire
> (e.g., if you want files to start on a 32k offset, use a 32k block
> size for your filesystem). If you create a filesystem with no
> fragments, then all files will by definition start on a block
> boundary.
>
> Kirk McKusick

From owner-freebsd-fs@freebsd.org Sun Aug 7 17:29:16 2016
Message-Id: <201608071729.u77HTEYF087254@chez.mckusick.com>
From: Kirk McKusick
To: Maxim Sobolev
Subject: Re: Optimizing UFS 1/2 for non-rotating / compressed storage
Cc: FreeBSD Filesystems
Date: Sun, 07 Aug 2016 10:29:14 -0700

> From: Maxim Sobolev
> Date: Sun, 7 Aug 2016 01:09:41 -0700
> Subject: Re: Optimizing UFS 1/2 for non-rotating / compressed storage
>
> Thanks, Kirk,
>
> So far we have settled on the following, which seems to pessimize
> compression levels slightly but greatly reduces the size of an
> incremental upgrade done with rsync after we change just a few files
> and re-pack:
>
> newfs -n -b 65536 -f $((65536 / 2)) -m 0 -L "${FW_LABEL}" "/dev/${MD_UNIT}"
>
> Unfortunately, 64k is the maximum block size we can get out of it (128k
> is rejected), and we run out of inodes if we set the fragment size to
> 64k as well. Is there a fundamental limitation on the size of the block?
> I am curious to see how 128/32 might work, considering that a bigger
> block size is preferred by the compressor. We'll try to play with the
> other options too, as you've suggested.
>
> -Max

You can get more inodes by using the -i option to newfs. If you use
-i $((65536 / 2)) you should then be able to set the fragment size equal
to the block size.

The limit on the block size is set by the kernel; it is not an inherent
limitation of the filesystem. If you want to try a 128k block size, just
bump up MAXBSIZE in /sys/sys/param.h to 128k and buildworld. Note that
MAXBSIZE cannot exceed MAXPHYS, which is currently 128k. I would not
recommend trying to push MAXPHYS bigger, as that affects a *lot* of the
underlying I/O and VM subsystems.
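Kirk's two suggestions (raise the inode density with -i when the fragment size equals the block size, and bump MAXBSIZE for blocks above 64k) can be sketched as follows. This is a print-only sketch: the device name is a hypothetical placeholder, and the -i value simply mirrors the 32k-per-inode density the old 64k/32k layout gave:

```shell
#!/bin/sh
# Sketch only: prints the suggested newfs invocation, creates nothing.

BSIZE=65536               # block size
FSIZE=$BSIZE              # fragment size == block size ("no fragments")
DENSITY=$((65536 / 2))    # -i 32768: one inode per 32k of data space

echo "newfs -n -b ${BSIZE} -f ${FSIZE} -i ${DENSITY} -m 0 /dev/md0"
# For 128k blocks, MAXBSIZE in /sys/sys/param.h would first have to be
# raised to 131072 and the world rebuilt; MAXBSIZE cannot exceed MAXPHYS.
```

Without -i, newfs derives the inode count from the fragment count, so doubling the fragment size halves the inodes; pinning the density explicitly decouples the two knobs.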
Kirk McKusick

From owner-freebsd-fs@freebsd.org Mon Aug 8 06:32:01 2016
From: bugzilla-noreply@freebsd.org
To: freebsd-fs@FreeBSD.org
Subject: [Bug 211535] ZFS crash zap_leaf_array_create() in zap_leaf.c
Date: Mon, 08 Aug 2016 06:32:00 +0000

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=211535

Mark Linimon changed:

           What     |Removed                  |Added
----------------------------------------------------------------------
        Assignee    |freebsd-bugs@FreeBSD.org |freebsd-fs@FreeBSD.org

--
You are receiving this mail because:
You are the assignee for the bug.

From owner-freebsd-fs@freebsd.org Mon Aug 8 17:48:25 2016
From: "Mahmoud Al-Qudsi" <mqudsi@neosmart.net>
Subject: PR-211674, fuse_vnode and fuse_msgbuf leak in fusefs-ntfs
Date: Mon, 8 Aug 2016 12:48:15 -0500
Organization: NeoSmart Technologies

Hello,

Please forgive me if it is not correct form to discuss fusefs-ntfs on the
FreeBSD fs mailing list.

SUMMARY

Running on FreeBSD 10.3-RELEASE-p6/i386 with fuse compiled into the kernel
and with fusefs-ntfs 2016.2.22 installed, there is a fuse_vnode leak
(though it seems it may be more of a complete failure to reclaim vnodes)
resulting in quick resource exhaustion.

REPRODUCTION

This is easily reproduced with the following:

ntfs-3g /dev/xxx /mnt/yyyy
cd /mnt/yyyy
find . -exec touch {} \;

In another virtual terminal:

vmstat | head -n1; vmstat -m | sed 1d | sort -hk 3,3

ACTUAL RESULTS

fuse_vnode will continuously balloon and will not be reclaimed until the
filesystem is unmounted. (Likewise, fuse_msgbuf also balloons, but unlike
fuse_vnode it is never reclaimed. Separate PR?)

EXPECTED RESULTS

fuse_vnode entries should be reclaimed.

ADDITIONAL INFORMATION

Here's a snapshot of the fuse-related vmstat entries after this process:

fuse_vnode    36020   9005K  -  502349  256
fuse_msgbuf   58141  14895K  -  311095  256,512,1024,2048,4096,8192

Thank you,

Mahmoud Al-Qudsi
NeoSmart Technologies

From owner-freebsd-fs@freebsd.org Mon Aug 8 22:48:19 2016
From: Alan Somers <asomers@gmail.com>
Date: Mon, 8 Aug 2016 16:48:17 -0600
Subject: Re: some [big] changes to ZPL (ZFS<->VFS)
To: Andriy Gapon
Cc: FreeBSD Filesystems, FreeBSD Current

On r303834 I can no longer reproduce the problem.

-Alan

On Sat, Aug 6, 2016 at 5:05 AM, Andriy Gapon wrote:
> On 05/08/2016 23:31, Alan Somers wrote:
>> I'm not certain it's related, but on a head build at r303767 I see a
>> LOR and a reproducible panic that involve the snapdir code.
>
> Alan,
>
> thank you very much for the clear report and for the very easy
> reproduction scenario. I am not sure how I missed this simple and
> severe bug. Please try r303791; it should fix the problem.
>
> I believe that the LOR is not new and has been there since we started
> using distinct tags for .zfs special vnodes.
>
>> First, the LOR:
>> $ zpool destroy foo
>>
>> lock order reversal:
>>  1st 0xfffff800404c8b78 zfs (zfs) @
>>      /usr/home/alans/freebsd/head/sys/kern/vfs_mount.c:1244
>>  2nd 0xfffff800404c85f0 zfs_gfs (zfs_gfs) @
>>      /usr/home/alans/freebsd/head/sys/cddl/contrib/opensolaris/uts/common/fs/gfs.c:484
>> stack backtrace:
>> #0 0xffffffff80aa90b0 at witness_debugger+0x70
>> #1 0xffffffff80aa8fa4 at witness_checkorder+0xe54
>> #2 0xffffffff80a22072 at __lockmgr_args+0x4c2
>> #3 0xffffffff80af8e7c at vop_stdlock+0x3c
>> #4 0xffffffff81018880 at VOP_LOCK1_APV+0xe0
>> #5 0xffffffff80b19f2a at _vn_lock+0x9a
>> #6 0xffffffff821b9c53 at gfs_file_create+0x73
>> #7 0xffffffff821b9cfd at gfs_dir_create+0x1d
>> #8 0xffffffff8228aa07 at zfsctl_mknode_snapdir+0x47
>> #9 0xffffffff821ba1a5 at gfs_dir_lookup+0x185
>> #10 0xffffffff821ba68d at gfs_vop_lookup+0x1d
>> #11 0xffffffff82289a42 at zfsctl_root_lookup+0xf2
>> #12 0xffffffff8228a8c3 at zfsctl_umount_snapshots+0x83
>> #13 0xffffffff822a1d2b at zfs_umount+0x7b
>> #14 0xffffffff80b02a14 at dounmount+0x6f4
>> #15 0xffffffff80b0228d at sys_unmount+0x35d
>> #16 0xffffffff80ebbb7b at amd64_syscall+0x2db
>> #17 0xffffffff80e9b72b at Xfast_syscall+0xfb
>>
>> Here's the panic:
>> $ zpool create testpool da0
>> $ touch /testpool/testfile
>> $ zfs snapshot testpool@testsnap
>> $ cd /testpool/.zfs/snapshots
>>
>> Fatal trap 12: page fault while in kernel mode
>> cpuid = 2; apic id = 04
>> fault virtual address = 0x8
>> fault code            = supervisor read data, page not present
>> instruction pointer   = 0x20:0xffffffff80b19f1c
>> stack pointer         = 0x28:0xfffffe0b54bf7430
>> frame pointer         = 0x28:0xfffffe0b54bf74a0
>> code segment          = base 0x0, limit 0xfffff, type 0x1b
>>                       = DPL 0, pres 1, long 1, def32 0, gran 1
>> processor eflags      = interrupt enabled, resume, IOPL = 0
>> current process       = 966 (bash)
>> trap number           = 12
>> panic: page fault
>> cpuid = 2
>> KDB: stack backtrace:
>> db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame 0xfffffe0b54bf6fc0
>> vpanic() at vpanic+0x182/frame 0xfffffe0b54bf7040
>> panic() at panic+0x43/frame 0xfffffe0b54bf70a0
>> trap_fatal() at trap_fatal+0x351/frame 0xfffffe0b54bf7100
>> trap_pfault() at trap_pfault+0x1fd/frame 0xfffffe0b54bf7160
>> trap() at trap+0x284/frame 0xfffffe0b54bf7370
>> calltrap() at calltrap+0x8/frame 0xfffffe0b54bf7370
>> --- trap 0xc, rip = 0xffffffff80b19f1c, rsp = 0xfffffe0b54bf7440, rbp = 0xfffffe0b54bf74a0 ---
>> _vn_lock() at _vn_lock+0x8c/frame 0xfffffe0b54bf74a0
>> zfs_lookup() at zfs_lookup+0x50d/frame 0xfffffe0b54bf7540
>> zfs_freebsd_lookup() at zfs_freebsd_lookup+0x91/frame 0xfffffe0b54bf7680
>> VOP_CACHEDLOOKUP_APV() at VOP_CACHEDLOOKUP_APV+0xda/frame 0xfffffe0b54bf76b0
>> vfs_cache_lookup() at vfs_cache_lookup+0xd6/frame 0xfffffe0b54bf7710
>> VOP_LOOKUP_APV() at VOP_LOOKUP_APV+0xda/frame 0xfffffe0b54bf7740
>> lookup() at lookup+0x5a2/frame 0xfffffe0b54bf77d0
>> namei() at namei+0x5b2/frame 0xfffffe0b54bf7890
>> kern_statat() at kern_statat+0xa8/frame 0xfffffe0b54bf7a40
>> sys_stat() at sys_stat+0x2d/frame 0xfffffe0b54bf7ae0
>> amd64_syscall() at amd64_syscall+0x2db/frame 0xfffffe0b54bf7bf0
>> Xfast_syscall() at Xfast_syscall+0xfb/frame 0xfffffe0b54bf7bf0
>>
>> I can provide core files, test scripts, whatever you need. Thanks for
>> tackling this difficult problem.
>>
>> -Alan
>>
>> On Fri, Aug 5, 2016 at 12:36 AM, Andriy Gapon wrote:
>>> On 03/08/2016 17:25, Andriy Gapon wrote:
>>>> Another change that was not strictly required, and which is probably
>>>> too intrusive, is killing the support for case-insensitive
>>>> operations. My thinking was that the FreeBSD VFS does not provide
>>>> support for those anyway. But I'll probably restore the code, at
>>>> least in the bottom half of the ZPL, before committing the change.
>>>
>>> It turned out that most of the removed code was dead anyway, and it
>>> took just a few lines of code to restore support for case-insensitive
>>> filesystems. Filesystems with mixed case sensitivity behave exactly
>>> the same as case-sensitive filesystems, as has always been the case
>>> on FreeBSD.
>>>
>>> Anyway, the big change has just been committed:
>>> https://svnweb.freebsd.org/changeset/base/303763
>>> Please test away.
>>>
>>> Another note is that the filesystem name cache is now disabled for
>>> case-insensitive filesystems and filesystems with normalization other
>>> than none. That may hurt lookup performance, but should ensure
>>> correctness of operations.
>>>
>>> --
>>> Andriy Gapon
>>> _______________________________________________
>>> freebsd-current@freebsd.org mailing list
>>> https://lists.freebsd.org/mailman/listinfo/freebsd-current
>>> To unsubscribe, send any mail to "freebsd-current-unsubscribe@freebsd.org"
>
> --
> Andriy Gapon

From owner-freebsd-fs@freebsd.org Tue Aug 9 12:41:55 2016
From: Julien Cigar <julien@perdition.city>
To: freebsd-fs@freebsd.org
Subject: zpool cachefile
Date: Tue, 9 Aug 2016 14:41:43 +0200
Message-ID: <20160809124143.GE70364@mordor.lan>

Hello,

I'd like to prevent a zpool from being mounted at boot time, which is
achieved by setting the cachefile property of the zpool to "none". It
works as expected, but every time I'm issuing a

$> zpool import mypool

the cachefile property rolls back to the default value.. is it expected?
Do I need to do a zpool import -o cachefile=none mypool instead?

Thanks!

Julien

--
Julien Cigar
Belgian Biodiversity Platform (http://www.biodiversity.be)
PGP fingerprint: EEF9 F697 4B68 D275 7B11 6A25 B2BB 3710 A204 23C0
No trees were killed in the creation of this message.
However, many electrons were terribly inconvenienced.

From owner-freebsd-fs@freebsd.org Tue Aug 9 13:05:22 2016
From: Julien Cigar <julien@perdition.city>
To: freebsd-fs@freebsd.org
Subject: Re: zpool cachefile
Date: Tue, 9 Aug 2016 14:44:02 +0200
Message-ID: <20160809124401.GF70364@mordor.lan>
In-Reply-To: <20160809124143.GE70364@mordor.lan>

On Tue, Aug 09, 2016 at 02:41:43PM +0200, Julien Cigar wrote:
> Hello,
>
> I'd like to prevent a zpool from being mounted at boot time, which is
> achieved by setting the cachefile property of the zpool to "none". It
> works as expected, but every time I'm issuing a $> zpool import mypool
> the cachefile property rolls back to the default value.. is it expected?

Forgot the gist:
https://gist.github.com/silenius/60a2d915250f71ea6babaa9781bd628f

> Do I need to do a zpool import -o cachefile=none mypool instead?
>
> Thanks!
>
> Julien

--
Julien Cigar
Belgian Biodiversity Platform (http://www.biodiversity.be)
PGP fingerprint: EEF9 F697 4B68 D275 7B11 6A25 B2BB 3710 A204 23C0
No trees were killed in the creation of this message.
However, many electrons were terribly inconvenienced.
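The workflow the thread converges on can be sketched as a small print-only script (pool name "mypool" is taken from the thread; the commands are echoed rather than executed, since running them needs a real pool and root privileges):

```shell
#!/bin/sh
# Sketch only: print the commands instead of executing them.

POOL="mypool"

# Re-specify cachefile at import time, since a plain "zpool import"
# resets the property to its default value:
echo "zpool import -o cachefile=none ${POOL}"

# And verify the property afterwards:
echo "zpool get cachefile ${POOL}"
```

Passing the property with -o at import time avoids having to re-set it after every import.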
From owner-freebsd-fs@freebsd.org Tue Aug 9 13:56:22 2016
From: bugzilla-noreply@freebsd.org
To: freebsd-fs@FreeBSD.org
Subject: [Bug 210409] zfs: panic during boot
Date: Tue, 09 Aug 2016 13:56:22 +0000

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=210409

Andriy Gapon changed:

           What    |Removed                 |Added
----------------------------------------------------------------------------
         Resolution|FIXED                   |---
                 CC|                        |avg@FreeBSD.org
           Assignee|asomers@FreeBSD.org     |freebsd-fs@FreeBSD.org
             Status|Closed                  |Open

--- Comment #8 from Andriy Gapon ---
It doesn't seem that the problem was really caused by r300881, nor that it
was really fixed by r302058.
--
You are receiving this mail because:
You are the assignee for the bug.

From owner-freebsd-fs@freebsd.org Tue Aug 9 14:08:12 2016
Date: Tue, 9 Aug 2016 17:06:48 +0300
From: Andriy Gapon <avg@FreeBSD.org>
To: Julien Cigar <julien@perdition.city>, freebsd-fs@FreeBSD.org
Subject: Re: zpool cachefile
Message-ID: <81ee2466-7566-9a49-648a-296b3f38cb71@FreeBSD.org>
In-Reply-To: <20160809124143.GE70364@mordor.lan>

On 09/08/2016 15:41, Julien Cigar wrote:
> Hello,
>
> I'd like to prevent a zpool from being mounted at boot time, which is
> achieved by setting the cachefile property of the zpool to "none".
> It works as expected, but every time I'm issuing a $> zpool import mypool
> the cachefile property rolls back to the default value.. is that
> expected?

Yes.

> Do I need to do a zpool import -o cachefile=none mypool instead?

Yes.  Or you can use -R.

--
Andriy Gapon

From owner-freebsd-fs@freebsd.org Tue Aug 9 14:14:04 2016
Date: Tue, 9 Aug 2016 16:13:59 +0200
From: Julien Cigar <julien@perdition.city>
To: Andriy Gapon <avg@FreeBSD.org>
Cc: freebsd-fs@FreeBSD.org
Subject: Re: zpool cachefile
Message-ID: <20160809141359.GG70364@mordor.lan>
In-Reply-To: <81ee2466-7566-9a49-648a-296b3f38cb71@FreeBSD.org>

On Tue, Aug 09, 2016 at 05:06:48PM +0300, Andriy Gapon wrote:
> On 09/08/2016 15:41, Julien Cigar wrote:
> > I'd like to prevent a zpool from being mounted at boot time, which is
> > achieved by setting the cachefile property of the zpool to "none".
> > It works as expected, but every time I'm issuing a $> zpool import
> > mypool the cachefile property rolls back to the default value.. is
> > that expected?
>
> Yes.
>
> > Do I need to do a zpool import -o cachefile=none mypool instead?
>
> Yes.  Or you can use -R.

Got it, thanks!

--
Julien Cigar
Belgian Biodiversity Platform (http://www.biodiversity.be)
PGP fingerprint: EEF9 F697 4B68 D275 7B11 6A25 B2BB 3710 A204 23C0
No trees were killed in the creation of this message.
However, many electrons were terribly inconvenienced.
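The two import variants Andriy mentions can be sketched as a short shell transcript. This is illustrative only: it assumes a pool actually named mypool and an alternate root of /mnt, and must run against a real pool.

```shell
# Import without persisting the pool in the default cachefile
# (/boot/zfs/zpool.cache), so it will not be auto-imported at boot:
zpool import -o cachefile=none mypool

# Or import under an alternate root; per zpool(8), -R also sets
# cachefile=none unless another cachefile is given explicitly:
zpool import -R /mnt mypool

# Confirm the property after import:
zpool get cachefile mypool
```

Either way the setting applies only to this import; a later plain `zpool import mypool` reverts to the default cachefile, which is the behavior Julien observed.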
From owner-freebsd-fs@freebsd.org Tue Aug 9 18:02:16 2016
From: bugzilla-noreply@freebsd.org
To: freebsd-fs@FreeBSD.org
Subject: [Bug 204764] Filesystem deadlock, process in vodead state
Date: Tue, 09 Aug 2016 18:02:15 +0000

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=204764

Bryan Drewery changed:

           What    |Removed                 |Added
----------------------------------------------------------------------------
             Status|New                     |In Progress

--- Comment #30 from Bryan Drewery ---
https://lists.freebsd.org/pipermail/freebsd-stable/2016-August/085150.html

--
You are receiving this mail because:
You are the assignee for the bug.

From owner-freebsd-fs@freebsd.org Tue Aug 9 20:39:40 2016
From: bugzilla-noreply@freebsd.org
To: freebsd-fs@FreeBSD.org
Subject: [Bug 211491] System hangs after "Uptime" on reboot with iSCSI, zfs, and altroot
Date: Tue, 09 Aug 2016 20:39:40 +0000

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=211491

Xin LI changed:

           What    |Removed                    |Added
----------------------------------------------------------------------------
                 CC|                           |delphij@FreeBSD.org
           Assignee|freebsd-bugs@FreeBSD.org   |freebsd-fs@FreeBSD.org
           Severity|Affects Some People        |Affects Many People

--- Comment #10 from Xin LI ---
I noticed this too, but it is not 100% reproducible.  I don't have an iSCSI
setup, but I do have a zvol.  It was a fresh -CURRENT.
--
You are receiving this mail because:
You are the assignee for the bug.

From owner-freebsd-fs@freebsd.org Tue Aug 9 21:25:52 2016
From: bugzilla-noreply@freebsd.org
To: freebsd-fs@FreeBSD.org
Subject: [Bug 211491] System hangs after "Uptime" on reboot with iSCSI, zfs, and altroot
Date: Tue, 09 Aug 2016 21:25:51 +0000

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=211491

--- Comment #11 from Eric van Gyzen ---
I just reproduced this on 10.3-STABLE r303633.  I'll try to reproduce it on
10.3-RELEASE to see whether it would be a new regression in 11.0-RELEASE.

--
You are receiving this mail because:
You are the assignee for the bug.

From owner-freebsd-fs@freebsd.org Tue Aug 9 21:35:08 2016
From: bugzilla-noreply@freebsd.org
To: freebsd-fs@FreeBSD.org
Subject: [Bug 211491] System hangs after "Uptime" on reboot with iSCSI, zfs, and altroot
Date: Tue, 09 Aug 2016 21:35:07 +0000

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=211491

--- Comment #12 from Eric van Gyzen ---
I just reproduced this on 12-CURRENT r303626.  I'm now updating that machine
to the latest head.
--
You are receiving this mail because:
You are the assignee for the bug.

From owner-freebsd-fs@freebsd.org Tue Aug 9 23:10:18 2016
From: bugzilla-noreply@freebsd.org
To: freebsd-fs@FreeBSD.org
Subject: [Bug 211491] System hangs after "Uptime" on reboot with iSCSI, zfs, and altroot
Date: Tue, 09 Aug 2016 23:10:17 +0000

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=211491

Ngie Cooper changed:

           What    |Removed                    |Added
----------------------------------------------------------------------------
                 CC|freebsd-bugs@FreeBSD.org,  |ngie@FreeBSD.org
                   |freebsd-stable@FreeBSD.org |
           Severity|Affects Many People        |Affects Some People

--- Comment #13 from Ngie Cooper ---
Please don't add -current or -stable to bugs like this; it spams the lists
unnecessarily (this issue impacts users of iSCSI + ZFS, which seems a bit
niche right now).

--
You are receiving this mail because:
You are the assignee for the bug.

From owner-freebsd-fs@freebsd.org Tue Aug 9 23:21:44 2016
From: bugzilla-noreply@freebsd.org
To: freebsd-fs@FreeBSD.org
Subject: [Bug 211491] System hangs after "Uptime" on reboot with iSCSI, zfs, and altroot
Date: Tue, 09 Aug 2016 23:21:44 +0000

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=211491

--- Comment #14 from g_amanakis@yahoo.com ---
On my system ctld is enabled but there are no clients.  The bug persists
with ctld disabled, too.  There is also a bhyve VM with passthrough of an
onboard NIC, but that doesn't affect the bug.  The root filesystem is ZFS.
Summing up: no iSCSI, no altroot, ZFS on root.  I could reboot and shut down
the system normally on 10.3-RELEASE and 10.3-STABLE before upgrading to
11.0-BETA1, which is when I noticed the bug.  No kernel panic happens,
though.

Could I get more verbose logging during shutdown to see what is going on?
Most strikingly, after the system "hangs" on "Uptime ..." I can still
successfully ping one of the onboard ifaces, but not the VT-d one.
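On the "more verbose logging during shutdown" question: these are standard rc.conf(5) knobs, shown here as a guess at what would help narrow down where the shutdown sequence stops (whether they reveal this particular hang is an assumption).

```shell
# /etc/rc.conf additions -- make the rc(8) framework narrate what it is
# doing, both at boot and during the shutdown sequence:
rc_info="YES"     # print informational messages from rc scripts
rc_debug="YES"    # per-script debug output (implies more detail than rc_info)
```

If the rc scripts all finish and the hang is later (after "Uptime ..."), the stall is in the kernel side of reboot rather than userland, and a verbose boot (boot -v) may log more around that point.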
--
You are receiving this mail because:
You are the assignee for the bug.

From owner-freebsd-fs@freebsd.org Wed Aug 10 06:58:06 2016
From: Jordan Hubbard <jordanhubbard@icloud.com>
Subject: Re: CEPH + FreeBSD
Date: Tue, 09 Aug 2016 22:57:56 -0700
To: Willem Jan Withagen
Cc: freebsd-fs@freebsd.org
References: <5661752C.1090200@digiware.nl> <88732E11-8570-4D02-9374-3F1419EABC6F@icloud.com> <5664BC1C.6060207@digiware.nl>

> On Aug 5, 2016, at 2:47 AM, Willem Jan Withagen wrote:
>
> Biggest thing to get working for me ATM is ceph-disk, because that will
> get things to start installing.  Making it packageable, and something
> for others to start playing with.  And it is only going to work out if
> people start using it.

Hi Willem,

Those seem like reasonable priorities to me.  I would even say that the
port is the *first* priority, since it gives interested parties a quick
way to bootstrap the Ceph port on FreeBSD (IIRC, the build process itself
is a little arcane) and start checking it out.  I would also say that
ceph-disk is the right place to start, since it's a fairly low-level place
in the stack.
- Jordan

From owner-freebsd-fs@freebsd.org Wed Aug 10 09:54:05 2016
From: Ben RUBSON <ben.rubson@gmail.com>
Subject: [iSCSI] Trying to reach max disk throughput
Message-Id: <6B32251D-49B4-4E61-A5E8-08013B15C82B@gmail.com>
Date: Wed, 10 Aug 2016 11:54:01 +0200
To: freebsd-fs@freebsd.org

Hello,

I'm facing something strange with iSCSI: I can't manage to reach the
expected disk throughput using one (read or write) thread.
### Target : local disk throughput, one thread :

# dd if=/dev/da8 of=/dev/null bs=$((128*1024)) count=81920
10737418240 bytes transferred in 22.127838 secs (485244798 bytes/sec) - 462MB/s

### Initiator : network throughput to target, one thread :

# iperf -c 192.168.2.2 -t 30 -i 5 -P 1 -l 128KB
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 5.0 sec  19.5 GBytes  33.5 Gbits/sec
[  3]  5.0-10.0 sec  19.7 GBytes  33.9 Gbits/sec
[  3] 10.0-15.0 sec  19.6 GBytes  33.6 Gbits/sec
[  3] 15.0-20.0 sec  19.6 GBytes  33.7 Gbits/sec
[  3] 20.0-25.0 sec  19.8 GBytes  34.0 Gbits/sec
[  3] 25.0-30.0 sec  19.9 GBytes  34.2 Gbits/sec

### Initiator : network latency to target :

# ping -c 10 192.168.2.2
64 bytes from 192.168.2.2: icmp_seq=0 ttl=64 time=0.025 ms
64 bytes from 192.168.2.2: icmp_seq=1 ttl=64 time=0.024 ms
64 bytes from 192.168.2.2: icmp_seq=2 ttl=64 time=0.027 ms
64 bytes from 192.168.2.2: icmp_seq=3 ttl=64 time=0.021 ms
64 bytes from 192.168.2.2: icmp_seq=4 ttl=64 time=0.020 ms
64 bytes from 192.168.2.2: icmp_seq=5 ttl=64 time=0.025 ms
64 bytes from 192.168.2.2: icmp_seq=6 ttl=64 time=0.022 ms
64 bytes from 192.168.2.2: icmp_seq=7 ttl=64 time=0.020 ms
64 bytes from 192.168.2.2: icmp_seq=8 ttl=64 time=0.022 ms
64 bytes from 192.168.2.2: icmp_seq=9 ttl=64 time=0.023 ms
round-trip min/avg/max/stddev = 0.020/0.023/0.027/0.002 ms

### Initiator : iscsi disk throughput :

# dd if=/dev/da8 of=/dev/null bs=$((128*1024)) count=81920
10737418240 bytes transferred in 34.731815 secs (309152234 bytes/sec) - 295MB/s

With 2 parallel dd jobs : 345MB/s
With 4 parallel dd jobs : 502MB/s

### Questions :

Why such a difference ?
Where are the 167MB/s (462-295) lost ?

All CPUs, on both sides, are above 90% idle during these tests.

I tried to increase net.inet.tcp.sendbuf_max and net.inet.tcp.recvbuf_max.
I also increased SOCKBUF_SIZE in iscsid.h and ctld.h.
And tried HTCP as the TCP congestion control algorithm.
But with no luck.

Any idea ?
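For reference, the parallel-dd measurement above can be reproduced with a loop that splits one sequential read into disjoint ranges via skip=, so several 128 KiB requests are in flight at once. This is a self-contained sketch: it reads a scratch file so it runs anywhere, and the scratch size and job count are choices of this sketch; on the real system SRC would be /dev/da8 with much larger counts.

```shell
# Read SRC with JOBS concurrent dd "threads", each covering a disjoint
# range of 128 KiB blocks.
SRC=$(mktemp)                          # scratch file standing in for /dev/da8
dd if=/dev/zero of="$SRC" bs=131072 count=64 2>/dev/null   # 8 MiB of data
JOBS=4
TOTAL=64                               # total 128 KiB blocks to read
PER=$((TOTAL / JOBS))                  # blocks per reader
i=0
while [ "$i" -lt "$JOBS" ]; do
    dd if="$SRC" of=/dev/null bs=131072 count="$PER" skip=$((i * PER)) 2>/dev/null &
    i=$((i + 1))
done
wait                                   # all readers finished
echo "read $TOTAL blocks with $JOBS concurrent readers"
rm -f "$SRC"
```

Timing the whole loop (e.g. with time(1)) for JOBS=1, 2, 4 gives the single-stream versus aggregate numbers quoted above.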
Many thanks ! Ben From owner-freebsd-fs@freebsd.org Wed Aug 10 11:44:09 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 5DB1EBB5057 for ; Wed, 10 Aug 2016 11:44:09 +0000 (UTC) (envelope-from etnapierala@gmail.com) Received: from mail-wm0-x242.google.com (mail-wm0-x242.google.com [IPv6:2a00:1450:400c:c09::242]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id E4FB91D2F for ; Wed, 10 Aug 2016 11:44:08 +0000 (UTC) (envelope-from etnapierala@gmail.com) Received: by mail-wm0-x242.google.com with SMTP id o80so9012696wme.0 for ; Wed, 10 Aug 2016 04:44:08 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=sender:date:from:to:cc:subject:message-id:mail-followup-to :references:mime-version:content-disposition:in-reply-to:user-agent; bh=P6XjQKNezTiKy/Z9OXe1vTCnCNi0NoRgJ99ZEgFPfzQ=; b=tavNiPHi3IYXPeQ2N56rjZFkEel8RNT1x6DW6Yq0b3jo0JDBqlQmcTKVxDZpFUTuQH NlfD82dxlUi2E11nJdhU0FKfrb2LIgPf0EXOquCsCIwShzEBTzyFiOPKRd49n1MDlR9I U+Wcr60GJ5ufAdUygIXeAV+8zQbdjWhkdtmxGaFl51hVCNQTa8xyStxTr9lBmRbt15dD zx8zJhrXrPgJS+FiNZmEacAPvSjUsOOMBhQWC1dQB1Qt9XMXcbW0pYwunfKkQMzvr3Sz wBjjNzIY3QSx20Yq6c0U0oydeu5YXTeX1xrc1WSECaKf50zP1BHubDgXm9iFNRrp2D73 mgaA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:sender:date:from:to:cc:subject:message-id :mail-followup-to:references:mime-version:content-disposition :in-reply-to:user-agent; bh=P6XjQKNezTiKy/Z9OXe1vTCnCNi0NoRgJ99ZEgFPfzQ=; b=iHp6svMQgAgRhehkmN2gDomHQCAEri5baKUETpBpQHNE6ytMuwsVJAvW3YVlgOOcPE 9Y2DZfjdBVXMaCX4NIgH2r6qyE6v0Qdd/H16XVMr1W5daP4POyAdjgM23TfSKRr10Z8K kVyhn2oFqQAhM5LyOn60PnCueNKxlGZTWetxPQquuMNeiNWKvTx1nd6apkntzc6onTfy 
1AyzvQp3Mq9bk/EcO93d3Lo0ESe04UTHzlGLL09Nlhdx9cUjSTnVJcVcTH1aJ6sv9+LV /HE3QTJZgCUEMKiDHSm7xR9YZU2gkt+4Ig3hODd1eSYDqs9qu8QUWXiKDXrAJ1wLaNiH u0cQ== X-Gm-Message-State: AEkoouvV4V88Nw2Zluwj69PMF/Cx+3G+M8+Co64nlfKcKrbrqVX0KkpHpP9MVRdzmDCdMQ== X-Received: by 10.28.134.203 with SMTP id i194mr2919000wmd.22.1470829447510; Wed, 10 Aug 2016 04:44:07 -0700 (PDT) Received: from brick (abuf79.neoplus.adsl.tpnet.pl. [83.8.177.79]) by smtp.gmail.com with ESMTPSA id za2sm42607988wjb.34.2016.08.10.04.44.05 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Wed, 10 Aug 2016 04:44:06 -0700 (PDT) Sender: =?UTF-8?Q?Edward_Tomasz_Napiera=C5=82a?= Date: Wed, 10 Aug 2016 13:44:04 +0200 From: Edward Tomasz =?utf-8?Q?Napiera=C5=82a?= To: Ben RUBSON Cc: freebsd-fs@freebsd.org Subject: Re: [iSCSI] Trying to reach max disk throughput Message-ID: <20160810114404.GA80485@brick> Mail-Followup-To: Ben RUBSON , freebsd-fs@freebsd.org References: <6B32251D-49B4-4E61-A5E8-08013B15C82B@gmail.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <6B32251D-49B4-4E61-A5E8-08013B15C82B@gmail.com> User-Agent: Mutt/1.6.1 (2016-04-27) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.22 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 10 Aug 2016 11:44:09 -0000 On 0810T1154, Ben RUBSON wrote: > Hello, > > I'm facing something strange with iSCSI, I can't manage to reach the expected disk throughput using one (read or write) thread. [..] > ### Initiator : iscsi disk throughput : > > ## dd if=/dev/da8 of=/dev/null bs=$((128*1024)) count=81920 > 10737418240 bytes transferred in 34.731815 secs (309152234 bytes/sec) - 295MB/s > > With 2 parallel dd jobs : 345MB/s > With 4 parallel dd jobs : 502MB/s > > > > ### Questions : > > Why such a difference ? > Where are the 167MB/s (462-295) lost ? Network delays, I suppose. 
A single dd(1) would spend some time waiting for the data to get pushed over the network - due to delays (lag), not bandwidth. Having multiple ones makes it possible to compensate, by having multiple outstanding IO operations. From owner-freebsd-fs@freebsd.org Wed Aug 10 11:50:48 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 0A1BEBB50FF for ; Wed, 10 Aug 2016 11:50:48 +0000 (UTC) (envelope-from wjw@digiware.nl) Received: from smtp.digiware.nl (gtw.digiware.nl [IPv6:2001:4cb8:90:ffff::3]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id C34C71E67 for ; Wed, 10 Aug 2016 11:50:47 +0000 (UTC) (envelope-from wjw@digiware.nl) Received: from router.digiware.nl (localhost.digiware.nl [127.0.0.1]) by smtp.digiware.nl (Postfix) with ESMTP id 62E6025B85; Wed, 10 Aug 2016 13:50:44 +0200 (CEST) X-Virus-Scanned: amavisd-new at digiware.com Received: from smtp.digiware.nl ([127.0.0.1]) by router.digiware.nl (router.digiware.nl [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id j_lG8xI71OHe; Wed, 10 Aug 2016 13:50:43 +0200 (CEST) Received: from [192.168.101.139] (vpn.ecoracks.nl [176.74.240.1]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.digiware.nl (Postfix) with ESMTPSA id 967FD25B84; Wed, 10 Aug 2016 13:50:43 +0200 (CEST) Subject: Re: CEPH + FreeBSD To: Jordan Hubbard References: <5661752C.1090200@digiware.nl> <88732E11-8570-4D02-9374-3F1419EABC6F@icloud.com> <5664BC1C.6060207@digiware.nl> Cc: freebsd-fs@freebsd.org From: Willem Jan Withagen Message-ID: <09577148-59e1-50d9-1f52-965819532bd0@digiware.nl> Date: Wed, 10 Aug 2016 13:50:36 +0200 User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:45.0) Gecko/20100101 Thunderbird/45.2.0 MIME-Version: 1.0 
In-Reply-To: Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 8bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.22 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 10 Aug 2016 11:50:48 -0000

On 10-8-2016 07:57, Jordan Hubbard wrote:
>
>> On Aug 5, 2016, at 2:47 AM, Willem Jan Withagen wrote:
>>
>> Biggest thing to get working for me ATM is ceph-disk, because that will
>> get things to start installing. Making it packageable, and something for
>> others to start playing with. And it is only going to work out if people
>> start using it.
>
> Hi Willem,
>
> Those seem like reasonable priorities to me. I would even say that the
> port is the *first* priority since it gives interested parties a quick
> way to bootstrap the ceph port on FreeBSD (IIRC, the build process
> itself is a little arcane) and start checking it out. I would also say
> that ceph-disk is the right place to start since it’s a fairly low-level
> place in the stack.

Jordan,

I agree with all of that. I would not call the build arcane, but convoluted. That in itself does not make it less fun. The merge to CMake has made things a lot better, but you have to know which parts work and which don't.

Packaging is next on my list once I've ironed out most of the bugs in signaling and process termination.

ceph-disk is so loaded with low-level linuxisms that I'm contemplating ripping out the innards and building a ceph-disk-freebsd for the time being, so as not to get distracted by all this Linux mojo. Especially since the only store supported at the moment is filestore.
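For illustration only (nothing below is from the Ceph tree; the helper name and both commands are hypothetical placeholders): the kind of platform dispatch a ceph-disk-freebsd split would need, so FreeBSD code paths never reach Linux-only tools:

```shell
# Hypothetical helper: pick a disk-enumeration command per platform,
# instead of hard-coding Linux tools like lsblk or blkid everywhere.
list_disks_cmd() {
    case "$1" in
        FreeBSD) echo "geom disk list" ;;
        Linux)   echo "lsblk -d -o NAME,SIZE" ;;
        *)       echo "unsupported platform: $1" >&2; return 1 ;;
    esac
}

# Example: resolve the command for the running system.
os=$(uname -s)
list_disks_cmd "$os" || echo "no disk helper for $os"
```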
--WjW From owner-freebsd-fs@freebsd.org Wed Aug 10 13:10:53 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 45217BB46A5 for ; Wed, 10 Aug 2016 13:10:53 +0000 (UTC) (envelope-from julien@perdition.city) Received: from relay-b03.edpnet.be (relay-b03.edpnet.be [212.71.1.220]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "edpnet.email", Issuer "Go Daddy Secure Certificate Authority - G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id DE5C91B5D for ; Wed, 10 Aug 2016 13:10:52 +0000 (UTC) (envelope-from julien@perdition.city) X-ASG-Debug-ID: 1470834640-0a88181ce723fdd50001-3nHGF7 Received: from mordor.lan (213.211.139.72.dyn.edpnet.net [213.211.139.72]) by relay-b03.edpnet.be with ESMTP id Thc0tgChMlB29H2c (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NO); Wed, 10 Aug 2016 15:10:41 +0200 (CEST) X-Barracuda-Envelope-From: julien@perdition.city X-Barracuda-Effective-Source-IP: 213.211.139.72.dyn.edpnet.net[213.211.139.72] X-Barracuda-Apparent-Source-IP: 213.211.139.72 Date: Wed, 10 Aug 2016 15:10:40 +0200 From: Julien Cigar To: Ben RUBSON Cc: freebsd-fs@freebsd.org Subject: Re: HAST + ZFS + NFS + CARP Message-ID: <20160810131040.GH70364@mordor.lan> X-ASG-Orig-Subj: Re: HAST + ZFS + NFS + CARP References: <20160630144546.GB99997@mordor.lan> <71b8da1e-acb2-9d4e-5d11-20695aa5274a@internetx.com> <20160630153747.GB5695@mordor.lan> <63C07474-BDD5-42AA-BF4A-85A0E04D3CC2@gmail.com> <678321AB-A9F7-4890-A8C7-E20DFDC69137@gmail.com> <20160630185701.GD5695@mordor.lan> <6035AB85-8E62-4F0A-9FA8-125B31A7A387@gmail.com> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha512; protocol="application/pgp-signature"; boundary="KscVNZbUup0vZz0f" Content-Disposition: inline In-Reply-To: <6035AB85-8E62-4F0A-9FA8-125B31A7A387@gmail.com> User-Agent: Mutt/1.6.1 
(2016-04-27) X-Barracuda-Connect: 213.211.139.72.dyn.edpnet.net[213.211.139.72] X-Barracuda-Start-Time: 1470834640 X-Barracuda-Encrypted: ECDHE-RSA-AES256-GCM-SHA384 X-Barracuda-URL: https://212.71.1.220:443/cgi-mod/mark.cgi X-Barracuda-Scan-Msg-Size: 6236 X-Virus-Scanned: by bsmtpd at edpnet.be X-Barracuda-BRTS-Status: 1 X-Barracuda-Bayes: INNOCENT GLOBAL 0.5000 1.0000 0.7500 X-Barracuda-Spam-Score: 1.25 X-Barracuda-Spam-Status: No, SCORE=1.25 using global scores of TAG_LEVEL=1000.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=6.0 tests=BSF_SC1_TG070 X-Barracuda-Spam-Report: Code version 3.2, rules version 3.2.3.31900 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- 0.50 BSF_SC1_TG070 Custom Rule TG070 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.22 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 10 Aug 2016 13:10:53 -0000 --KscVNZbUup0vZz0f Content-Type: text/plain; charset=utf-8 Content-Disposition: inline Content-Transfer-Encoding: quoted-printable

On Sat, Jul 02, 2016 at 05:04:22PM +0200, Ben RUBSON wrote:
>
> > On 30 Jun 2016, at 20:57, Julien Cigar wrote:
> >
> > On Thu, Jun 30, 2016 at 11:32:17AM -0500, Chris Watson wrote:
> >>
> >> Sent from my iPhone 5
> >>
> >>>>
> >>>> Yes that's another option, so a zpool with two mirrors (local +
> >>>> exported iSCSI) ?
> >>>
> >>> Yes, you would then have a real time replication solution (as HAST), compared to ZFS send/receive which is not.
> >>> Depends on what you need :)
> >>>
> >>>>> ZFS would then know as soon as a disk is failing.
> >>
> >> So as an aside, but related, for those watching this from the peanut gallery and for the benefit of the OP, perhaps those that run with this setup might give some best practices and tips here in this thread on making this a good reliable setup.
I can see someone reading this thread and tossing two crappy Ethernet cards in a box and then complaining it doesn't work well.
> >
> > It would be more than welcome indeed..! I have the feeling that HAST
> > isn't that much used (but maybe I am wrong) and it is difficult to find
> > information on its reliability and concrete long-term use cases...
> >
> > Also the pros vs cons of HAST vs iSCSI
>
> I did some further testing today.
>
> # serverA, serverB :
> kern.iscsi.ping_timeout=5
> kern.iscsi.iscsid_timeout=5
> kern.iscsi.login_timeout=5
> kern.iscsi.fail_on_disconnection=1
>
> # Preparation :
> - serverB : let's make 2 iSCSI targets : rem3, rem4.
> - serverB : let's start ctld.
> - serverA : let's create a mirror pool made of 4 disks : loc1, loc2, rem3, rem4.
> - serverA : pool is healthy.
>
> # Test 1 :
> - serverA : put a lot of data into the pool ;
> - serverB : stop ctld ;
> - serverA : put a lot of data into the pool ;
> - serverB : start ctld ;
> - serverA : make all pool disks online : it works, pool is healthy.
>
> # Test 2 :
> - serverA : put a lot of data into the pool ;
> - serverA : export the pool ;
> - serverB : import the pool : it does not work, as ctld locks the disks ! Good news, nice protection (both servers won't be able to access the same disks at the same time).
> - serverB : stop ctld ;
> - serverB : import the pool : it works, 2 disks missing ;
> - serverA : let's make 2 iSCSI targets : rem1, rem2 ;
> - serverB : make all pool disks online : it works, pool is healthy.
>
> # Test 3 :
> - serverA : put a lot of data into the pool ;
> - serverB : stop ctld ;
> - serverA : put a lot of data into the pool ;
> - serverB : import the pool : it works, 2 disks missing ;
> - serverA : let's make 2 iSCSI targets : rem1, rem2 ;
> - serverB : make all pool disks online : it works, pool is healthy, but of course data written at step 3 is lost.
>
> # Test 4 :
> - serverA : put a lot of data into the pool ;
> - serverB : stop ctld ;
> - serverA : put a lot of data into the pool ;
> - serverA : export the pool ;
> - serverA : let's make 2 iSCSI targets : rem1, rem2 ;
> - serverB : import the pool : it works, pool is healthy, data written at step 3 is here.
>
> # Test 5 :
> - serverA : rsync a huge remote repo into the pool in the background ;
> - serverB : stop ctld ;
> - serverA : 2 disks missing, but rsync still runs flawlessly ;
> - serverB : start ctld ;
> - serverA : make all pool disks online : it works, pool is healthy.
> - serverB : ifconfig down ;
> - serverA : 2 disks missing, but rsync still runs flawlessly ;
> - serverB : ifconfig up ;
> - serverA : make all pool disks online : it works, pool is healthy.
> - serverB : power reset !
> - serverA : 2 disks missing, but rsync still runs flawlessly ;
> - serverB : let's wait for the server to be up ;
> - serverA : make all pool disks online : it works, pool is healthy.
>
> Quite happy with these tests actually :)

Hello,

So, after testing ZFS replication with zrep (which works more or less perfectly), I'm now experimenting with a ZFS + iSCSI solution on two small HP DL20s, with 2 disks in each.
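As an aside, the zrep-style send/receive replication mentioned above reduces to a periodic snapshot plus incremental send. A minimal sketch (the dataset name "tank/data" and host "filer2.prod.lan" are placeholders, and this has not been run here):

```shell
# Periodic incremental replication, run from cron on the master.
# "tank/data" and "filer2.prod.lan" are placeholder names.
fs=tank/data
dest=filer2.prod.lan
new=$(date -u +%Y%m%d%H%M%S)

# Newest existing snapshot, taken before we create the new one:
prev=$(zfs list -H -d 1 -t snapshot -o name -s creation "$fs" | tail -1 | cut -d@ -f2)

zfs snapshot "${fs}@${new}"
if [ -n "$prev" ]; then
    # Usual case: send only the delta since the previous snapshot.
    zfs send -i "@${prev}" "${fs}@${new}" | ssh "$dest" zfs receive -F "$fs"
else
    # First run: full send.
    zfs send "${fs}@${new}" | ssh "$dest" zfs receive -F "$fs"
fi
```

zrep adds bookkeeping (snapshot rotation, locking, failover roles) on top of exactly this pipeline.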
Machines are partitioned the same
(https://gist.github.com/silenius/d3fdcd52ab35957f37527af892615ca7)
with a zfs root
(https://gist.github.com/silenius/f347e90ab187495cdea6e3baf64b881b)

On filer2.prod.lan I have exported the two dedicated partitions
(/dev/da0p4 and /dev/da1p4) as an iSCSI target
(https://gist.github.com/silenius/8efda8334cb16cd779efff027ff5f3bd),
which are available on filer1.prod.lan as /dev/da3 and /dev/da4
(https://gist.github.com/silenius/f6746bc02ae1a5fb7e472e5f5334238b)

Then on filer1.prod.lan I made a zpool mirror over those 4 disks
(https://gist.github.com/silenius/eecd61ad07385e16b41b05e6d2373a9a)

Interfaces are configured as follows:
https://gist.github.com/silenius/4af55df446f82319eaf072049bc9a287
with "bge1" being the dedicated interface for iSCSI traffic, and "bge0" the "main" interface through which $clients access the filer (it has a floating IP 192.168.10.15). (I haven't made any network optimizations yet.)

Preliminary results are encouraging too, although I haven't tested under heavy write load yet. I did more or less what Ben did above, trying to corrupt the pool, and ... without success :)

I also checked manually with:

$> md5 -qs "$(find -s DIR -type f -print0 | xargs -0 md5 -q)"

to verify the integrity of the DIR I copied.

I also tried a basic failover scenario with
https://gist.github.com/silenius/b81e577f0f0a37bf7773ef15f7d05b5d
which seems to work atm.

To avoid a split-brain scenario I think it is also very important that the pool isn't automatically imported at boot (so setting cachefile=none).

Comments ?
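For reference, that layout with the no-auto-import safeguard might look like the following. This is a sketch only: the exact vdev pairing is in the gists above, and none of these commands have been run here.

```shell
# Pair each local partition with the iSCSI disk exported by the other
# head (da3/da4 on filer1), and set cachefile=none so the pool never
# lands in zpool.cache -- neither head auto-imports it at boot, which
# is the split-brain guard mentioned above.
zpool create -o cachefile=none tank \
    mirror /dev/da0p4 /dev/da3 \
    mirror /dev/da1p4 /dev/da4

# Failover then has to import explicitly, again without a cachefile:
zpool import -o cachefile=none tank
```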
:) Julien >=20 > Ben >=20 > _______________________________________________ > freebsd-fs@freebsd.org mailing list > https://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" --=20 Julien Cigar Belgian Biodiversity Platform (http://www.biodiversity.be) PGP fingerprint: EEF9 F697 4B68 D275 7B11 6A25 B2BB 3710 A204 23C0 No trees were killed in the creation of this message. However, many electrons were terribly inconvenienced. --KscVNZbUup0vZz0f Content-Type: application/pgp-signature; name="signature.asc" -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iQIcBAABCgAGBQJXqyfNAAoJELK7NxCiBCPAn60QAIBd0aHdlnnJEUXV/QxmtEoH 79RadAUJov0C2feea/SPBWaPXAqc/k5Q8BTLNABsSBu0LFcKzDDzVTbQ8l+JhvIn MkcMRICuiFMRYQvE3LNLthZAQrzTguJkcYYshTFvYmk6qSYpCWwPQvtlfw64a9bC Eclk7889otGlRRL3Bi44MshoGuCsFlh9jrNrKlSjQJxNZfO49UysejZxALIKSnw0 lp4J/ByT/AfcSNMjwBYxYPZ08jUiq1Fjo7CYQJuvQlBll/GxirRxTypQPxe7jGEf Ij9eKp/gyTNgOrn/i7TZj8LmLKtsYM4XOKjaMYnrA0yQ49+Ez331Ub9Fta20NyzU fPn0qusttxMy9nc9GZN4QabV5LI36p85cQjiibF22euuyB0jf+EwE6kqPdYGSYeH +5Nrl0hDln62RfXEihwJu0oqNta8/uFlCoEVBvZkZBf2rGJLc7yi+TqHq5jODv37 H/PFBpIYz+t0z8EzA8uZYLgQ3hATEz+z9+PaVaxYqGDbMehy+4o51GQ7O/aJjVoL bs3LxJzkiElNx+32lmWrq2gcdpn5ZZQTQQr0hV7Uzw/VoWQcz39C5gCEIKuT8us4 6OQ4Slgbrnb8Vx3Na0H1tGaH9T8+Nthn1GQpJGlSISH5e9FRMvsPzCg2vubZdhlm jZv2Dt8MqA9Bttrml/BG =lU8A -----END PGP SIGNATURE----- --KscVNZbUup0vZz0f-- From owner-freebsd-fs@freebsd.org Wed Aug 10 13:27:19 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 26995BB4D57 for ; Wed, 10 Aug 2016 13:27:19 +0000 (UTC) (envelope-from ben.rubson@gmail.com) Received: from mail-wm0-x242.google.com (mail-wm0-x242.google.com [IPv6:2a00:1450:400c:c09::242]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" 
(verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id AF3411752 for ; Wed, 10 Aug 2016 13:27:18 +0000 (UTC) (envelope-from ben.rubson@gmail.com) Received: by mail-wm0-x242.google.com with SMTP id o80so9543096wme.0 for ; Wed, 10 Aug 2016 06:27:18 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:subject:from:in-reply-to:date :content-transfer-encoding:message-id:references:to; bh=D2KVe/wbULAQMNJwit67vtxA3SAX6hCc5yUlSn7M6+o=; b=w8wc8o4ehyRjFc54zJ5yEo+K6BI5kpb8LoRri2o5ALjpZwcZIVVyIclZF0xj4Z9rHp 5jgY9k7qKj4+BHA5DAgPZ/fDpjJ7gKOj1D3Jva/nfUWSNDWtb88TZDhBqXAFhNZHNlY4 XcKS2nCbANF9mVsrXnmrR/8Ao+8hcDXh/mwnArCscyBqxZ/Ty7Zjp9qqx05guE80PpYM 0qtSA5/P2gNuruxhgkHZgaYNmrYI1Cts0jq11/OTPjtidsVwwalvDTw/HiO6jcx5ZGNH d3hQqaKN9dR6O6AU/GlpdAjPnlTHwCHaMG9FhPUEGWUyNNVa36bD6irGWRX/61JGsZB7 uPTQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:mime-version:subject:from:in-reply-to:date :content-transfer-encoding:message-id:references:to; bh=D2KVe/wbULAQMNJwit67vtxA3SAX6hCc5yUlSn7M6+o=; b=aH3kYruuS4a6HD+ndnXcgOkvfKnHVuVWvnxjrERNaWt33KdLtFlDytsF2RCIRYspUJ yDIvBHF+hsPtn8iVlMs03znvYHiv1ymFhGKkbsOwO8s9/SQ46J4WNdrR2K3aCDWDaSXW X99yVleBD11dFcfhOSyCHMv5XF2gWX55AXdhV4zNBL4TCfGZsLYivRFeyBbv7yiiXhgo 5wiwlFQcGNubcd8wXRQsDqosotVtpRZ8RusUh150UGLjVONryITFjJsAdETj3JI+bxLH EIKDLAHNb7QmG4SU1I9jaUD031lFMh2R9Jl8gYFXuLdZmtMRHwAKGmWoqh2hgZjyoxhr viNA== X-Gm-Message-State: AEkoousu9sCJGFoWakEKpr+t+/ZLqQ5jqK4V4o0Lvn2tNpRYYS70RcD4Yv7uJJ3rcoTOkQ== X-Received: by 10.194.148.81 with SMTP id tq17mr3946188wjb.67.1470835636828; Wed, 10 Aug 2016 06:27:16 -0700 (PDT) Received: from macbook-air-de-benjamin-1.home (LFbn-1-7077-85.w90-116.abo.wanadoo.fr. 
[90.116.246.85]) by smtp.gmail.com with ESMTPSA id o4sm43055788wjd.15.2016.08.10.06.27.16 for (version=TLS1 cipher=ECDHE-RSA-AES128-SHA bits=128/128); Wed, 10 Aug 2016 06:27:16 -0700 (PDT) Content-Type: text/plain; charset=utf-8 Mime-Version: 1.0 (Mac OS X Mail 9.3 \(3124\)) Subject: Re: [iSCSI] Trying to reach max disk throughput From: Ben RUBSON In-Reply-To: <20160810114404.GA80485@brick> Date: Wed, 10 Aug 2016 15:27:15 +0200 Content-Transfer-Encoding: quoted-printable Message-Id: <7CE3E62B-8251-4390-BD90-CF2F76F57CA7@gmail.com> References: <6B32251D-49B4-4E61-A5E8-08013B15C82B@gmail.com> <20160810114404.GA80485@brick> To: freebsd-fs@freebsd.org X-Mailer: Apple Mail (2.3124) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.22 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 10 Aug 2016 13:27:19 -0000

> On 10 Aug 2016, at 13:44, Edward Tomasz Napierała wrote:
>
> On 0810T1154, Ben RUBSON wrote:
>> Hello,
>>
>> I'm facing something strange with iSCSI, I can't manage to reach the expected disk throughput using one (read or write) thread.
>
> [..]
>
>> ### Initiator : iscsi disk throughput :
>>
>> ## dd if=/dev/da8 of=/dev/null bs=$((128*1024)) count=81920
>> 10737418240 bytes transferred in 34.731815 secs (309152234 bytes/sec) - 295MB/s
>>
>> With 2 parallel dd jobs : 345MB/s
>> With 4 parallel dd jobs : 502MB/s
>>
>> ### Questions :
>>
>> Why such a difference ?
>> Where are the 167MB/s (462-295) lost ?
>
> Network delays, I suppose.
I just saw that iSER is available in FreeBSD 11, let's install BETA4 and give it a try.

From owner-freebsd-fs@freebsd.org Wed Aug 10 14:55:52 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id BA467BB5962 for ; Wed, 10 Aug 2016 14:55:52 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id A94CC139B for ; Wed, 10 Aug 2016 14:55:52 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.15.2/8.15.2) with ESMTP id u7AEtqIp093333 for ; Wed, 10 Aug 2016 14:55:52 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 211491] System hangs after "Uptime" on reboot with ZFS Date: Wed, 10 Aug 2016 14:55:52 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 11.0-BETA3 X-Bugzilla-Keywords: needs-qa X-Bugzilla-Severity: Affects Many People X-Bugzilla-Who: vangyzen@freebsd.org X-Bugzilla-Status: Open X-Bugzilla-Resolution: X-Bugzilla-Priority: --- X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Flags: mfc-stable10? mfc-stable11?
X-Bugzilla-Changed-Fields: bug_severity short_desc Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: quoted-printable X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.22 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 10 Aug 2016 14:55:52 -0000

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=211491

Eric van Gyzen changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
           Severity|Affects Some People         |Affects Many People
            Summary|System hangs after "Uptime" |System hangs after "Uptime"
                   |on reboot with iSCSI, zfs,  |on reboot with ZFS
                   |and altroot                 |

--
You are receiving this mail because:
You are the assignee for the bug.

From owner-freebsd-fs@freebsd.org Wed Aug 10 14:56:44 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id A2900BB5A0F for ; Wed, 10 Aug 2016 14:56:44 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 8B2F015A9 for ; Wed, 10 Aug 2016 14:56:44 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.15.2/8.15.2) with ESMTP id u7AEuhiL094320 for ; Wed, 10 Aug 2016 14:56:44 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 211491] System hangs after "Uptime" on reboot with ZFS Date: Wed, 10 Aug 2016 14:56:44
+0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 11.0-BETA3 X-Bugzilla-Keywords: needs-qa X-Bugzilla-Severity: Affects Many People X-Bugzilla-Who: vangyzen@freebsd.org X-Bugzilla-Status: Open X-Bugzilla-Resolution: X-Bugzilla-Priority: --- X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Flags: mfc-stable10? mfc-stable11? X-Bugzilla-Changed-Fields: Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: quoted-printable X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.22 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 10 Aug 2016 14:56:44 -0000

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=211491

--- Comment #15 from Eric van Gyzen ---
This bug is not limited to iSCSI. I have updated the summary accordingly.
--=20 You are receiving this mail because: You are the assignee for the bug.= From owner-freebsd-fs@freebsd.org Wed Aug 10 15:34:35 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 7D034BB5862 for ; Wed, 10 Aug 2016 15:34:35 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 6C9E9190B for ; Wed, 10 Aug 2016 15:34:35 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.15.2/8.15.2) with ESMTP id u7AFYYEo005142 for ; Wed, 10 Aug 2016 15:34:35 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 211491] System hangs after "Uptime" on reboot with ZFS Date: Wed, 10 Aug 2016 15:34:34 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 11.0-BETA3 X-Bugzilla-Keywords: needs-qa X-Bugzilla-Severity: Affects Many People X-Bugzilla-Who: vangyzen@freebsd.org X-Bugzilla-Status: Open X-Bugzilla-Resolution: X-Bugzilla-Priority: --- X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Flags: mfc-stable10? mfc-stable11? 
X-Bugzilla-Changed-Fields: Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: quoted-printable X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.22 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 10 Aug 2016 15:34:35 -0000

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=211491

--- Comment #16 from Eric van Gyzen ---
I could NOT reproduce this on 10.3-RELEASE, so this will be a new regression in 11.0-RELEASE. I can still reproduce it on head at r303895 (9 August). I can't spend any more time on this. I suggest that someone reproduce and bisect the commits between 10.3-RELEASE and 10-STABLE.

--
You are receiving this mail because:
You are the assignee for the bug.

From owner-freebsd-fs@freebsd.org Thu Aug 11 08:16:43 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id E5592BB5908 for ; Thu, 11 Aug 2016 08:16:43 +0000 (UTC) (envelope-from borjam@sarenet.es) Received: from cu01176b.smtpx.saremail.com (cu01176b.smtpx.saremail.com [195.16.151.151]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 84B02151C for ; Thu, 11 Aug 2016 08:16:43 +0000 (UTC) (envelope-from borjam@sarenet.es) Received: from [172.16.8.36] (izaro.sarenet.es [192.148.167.11]) by proxypop01.sare.net (Postfix) with ESMTPSA id CA1429DD2AB; Thu, 11 Aug 2016 10:11:15 +0200 (CEST) Content-Type: text/plain; charset=utf-8 Mime-Version: 1.0 (Mac OS X Mail 9.3 \(3124\)) Subject: Re: HAST + ZFS + NFS + CARP From: Borja Marcos In-Reply-To: <20160704193131.GJ41276@mordor.lan> Date: Thu, 11 Aug 2016
10:11:15 +0200 Cc: Jordan Hubbard , freebsd-fs@freebsd.org Content-Transfer-Encoding: quoted-printable Message-Id: References: <678321AB-A9F7-4890-A8C7-E20DFDC69137@gmail.com> <20160630185701.GD5695@mordor.lan> <6035AB85-8E62-4F0A-9FA8-125B31A7A387@gmail.com> <20160703192945.GE41276@mordor.lan> <20160703214723.GF41276@mordor.lan> <65906F84-CFFC-40E9-8236-56AFB6BE2DE1@ixsystems.com> <61283600-A41A-4A8A-92F9-7FAFF54DD175@ixsystems.com> <20160704183643.GI41276@mordor.lan> <20160704193131.GJ41276@mordor.lan> To: Julien Cigar X-Mailer: Apple Mail (2.3124) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.22 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 11 Aug 2016 08:16:44 -0000

> On 04 Jul 2016, at 21:31, Julien Cigar wrote:
>
>> To get specific again, I am not sure I would do what you are contemplating given your circumstances since it's not the cheapest / simplest solution. The cheapest / simplest solution would be to create 2 small ZFS servers and simply do zfs snapshot replication between them at periodic intervals, so you have a backup copy of the data for maximum safety as well as a physically separate server in case one goes down hard. Disk storage is the cheap part now, particularly if you have data redundancy and can therefore use inexpensive disks, and ZFS replication is certainly "good enough" for disaster recovery. As others have said, adding additional layers will only increase the overall fragility of the solution, and "fragile" is kind of the last thing you need when you're frantically trying to deal with a server that has gone down for what could be any number of reasons.
>>
>> I, for example, use a pair of FreeNAS Minis at home to store all my media and they work fine at minimal cost.
I use one as the primary server that talks to all of the VMWare / Plex / iTunes server applications (and serves as a backup device for all my iDevices) and it replicates the entire pool to another secondary server that can be pushed into service as the primary if the first one loses a power supply / catches fire / loses more than 1 drive at a time / etc. Since I have a backup, I can also just use RAIDZ1 for the 4x4Tb drive configuration on the primary and get a good storage / redundancy ratio (I can lose a single drive without data loss but am also not wasting a lot of storage on parity).
>
> You're right, I'll definitely reconsider the zfs send / zfs receive
> approach.

Sorry to be so late to the party.

Unless you have a *hard* requirement for synchronous replication, I would avoid it like the plague. Synchronous replication sounds sexy, but it has several disadvantages: complexity, and in case you wish to keep an off-site replica it will definitely impact performance, since distance increases delay.

Asynchronous replication with ZFS has several advantages, however.

First and foremost: the snapshot-replicate approach is a terrific short-term "backup" solution that will allow you to recover quickly from incidents that happen all too often, like your own software corrupting data. A ZFS snapshot is trivial to roll back, and it won't involve a costly "backup recovery" procedure. You can do both replication *and* keep some snapshot retention policy à la Apple's Time Machine.

Second: I mentioned distance when keeping off-site replicas, as distance necessarily increases delay. Asynchronous replication doesn't have that problem.

Third: With some care you can do one-to-N replication, even with different replication frequencies.

Several years ago, in 2009 I think, I set up a system that worked quite well. It was based on NFS and ZFS.
The requirements were a bit particular, which in this case greatly simplified things for me.

I had a farm of front-end web servers (running Apache) that took all of their content from an NFS server. The NFS server used ZFS as the file system. This might not be useful for everyone, but in this case the web servers were CPU bound due to plenty of PHP crap. As the front ends weren’t supposed to write to the file server (and indeed it was undesirable for security reasons) I could afford to export the NFS file systems in read-only mode.

The server was replicated to a sibling at 1 or 2 minute intervals, I don’t remember. And the interesting part was this: I used Heartbeat to decide which of the servers was the master. When Heartbeat decided which one was the master, a specific IP address was assigned to it, starting the NFS service. So, the front-ends would happily mount it.

What happened in case of a server failure?

Heartbeat would detect it in a minute, more or less. Assuming a master failure, the former slave would become master, assigning itself the NFS server IP address and starting up NFS. Meanwhile, the front-ends had a silly script running at 1 minute intervals that simply read a file from the NFS-mounted filesystem. In case of a read error it would force an unmount of the NFS share and enter a loop, trying to mount it again until it succeeded.

It looks kludgy, but it means that in case of a server loss (ZFS on FreeBSD wasn’t that stable at the time and we suffered a couple of them) the website was titsup for maybe two minutes, recovering automatically. It worked.

Both NFS servers were in the same datacenter, but I could have added geographical dispersion by using BGP to announce the NFS IP address to our routers.

There are better solutions, but this one involved no fancy software licenses, no expensive hardware, and it was quite reliable.
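For reference, the "silly script" on the front-ends could look something like this minimal sketch; the server address, mount point and probe file name are assumptions for illustration, not the original values:

```sh
#!/bin/sh
# Front-end NFS watchdog, run from cron every minute (a sketch).
# NFS_SRC, MOUNTPOINT and PROBE are assumed names, not the originals.
NFS_SRC="192.0.2.10:/content"
MOUNTPOINT="/content"
PROBE="${MOUNTPOINT}/.probe"

check_and_remount() {
    # A failed read means the NFS server is gone: force-unmount and
    # keep retrying the mount until the surviving server answers.
    if ! dd if="$PROBE" of=/dev/null bs=512 count=1 2>/dev/null; then
        umount -f "$MOUNTPOINT"
        until mount -t nfs -o ro "$NFS_SRC" "$MOUNTPOINT"; do
            sleep 5
        done
    fi
}
```

With Heartbeat moving the service IP between the two NFS servers, this is all the clients need to recover within a couple of minutes, as described.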
The only problem we had was, maybe I was just too daring, that we were bitten by a ZFS deadlock bug several times. But it worked anyway.

Borja.

From owner-freebsd-fs@freebsd.org Thu Aug 11 08:49:40 2016
From: Ben RUBSON
Subject: Re: HAST + ZFS + NFS + CARP
Date: Thu, 11 Aug 2016 10:49:35 +0200
In-Reply-To: <20160810131040.GH70364@mordor.lan>
Message-Id: <3851B157-0A0A-4185-B326-0EE5BEAA887A@gmail.com>
To: freebsd-fs@freebsd.org

> On 10 Aug 2016, at 15:10, Julien Cigar wrote:
>
> Hello,
>
> So, after testing ZFS replication with zrep (which works more or less
> perfectly) I'm busy experimenting with a ZFS + iSCSI solution with two small
> HP DL20 and 2 disks in each.
> (...)
> Comments ? :)

Use one iSCSI target per disk ?
(so that if a disk fails, you can easily switch the target down without impacting the other targets)

Use jumbo frames on your replication interfaces ?

Give nice GPT labels to your disks ?

Ben

From owner-freebsd-fs@freebsd.org Thu Aug 11 09:10:22 2016
Date: Thu, 11 Aug 2016 11:10:16 +0200
From: Julien Cigar
To: Borja Marcos
Cc: Jordan Hubbard , freebsd-fs@freebsd.org
Subject: Re: HAST + ZFS + NFS + CARP
Message-ID: <20160811091016.GI70364@mordor.lan>

On Thu, Aug 11, 2016 at 10:11:15AM +0200, Borja Marcos wrote:
>
> > On 04 Jul 2016, at 21:31, Julien Cigar wrote:
> >
> >> To get specific again, I am not sure I would do what you are contemplating given your circumstances since it’s not the cheapest / simplest solution. [...]
> >
> > You're right, I'll definitely reconsider the zfs send / zfs receive
> > approach.
>
> Sorry to be so late to the party.
>
> Unless you have a *hard* requirement for synchronous replication, I would avoid it like the plague. [...]
>
> There are better solutions, but this one involved no fancy software licenses, no expensive hardware and it was quite reliable. The only problem we had was, maybe I was just too daring, we were bitten by a ZFS deadlock bug several times. But it worked anyway.

As I said in a previous post I tested the zfs send/receive approach (with zrep) and it works (more or less) perfectly.. so I concur in all what you said, especially about off-site replication and synchronous replication.

Out of curiosity I'm also testing a ZFS + iSCSI + CARP setup at the moment.
I'm in the early tests, haven't done any heavy writes yet, but ATM it
works as expected, I haven't managed to corrupt the zpool.
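The core of what zrep automates in this setup is just an incremental snapshot send. A minimal hand-rolled sketch, in which the dataset, remote host and the "last snapshot" bookkeeping are all assumptions for illustration:

```sh
#!/bin/sh
# Incremental ZFS replication sketch (roughly what zrep wraps);
# SRC, DST, REMOTE and LAST are assumed names, not from the thread.
SRC="tank/data"
DST="backup/data"
REMOTE="replica.example.com"
LAST="previous"                  # last snapshot already on the replica
NOW="$(date +%Y%m%d%H%M)"

replicate() {
    zfs snapshot "${SRC}@${NOW}" &&
    # -I also sends any intermediate snapshots since @${LAST};
    # -u keeps the replica dataset unmounted on the receiving side.
    zfs send -I "@${LAST}" "${SRC}@${NOW}" |
        ssh "${REMOTE}" zfs receive -u "${DST}"
}
```

Run from cron at the desired interval; a real script would also record the new "last" snapshot and prune old ones according to the retention policy.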
I think that with the following assumptions the failover from MASTER (old master) -> BACKUP (new master) can be done quite safely (the opposite *MUST* always be done manually IMHO):

1) Don't mount the zpool at boot
2) Ensure that the failover script is not executed at boot
3) Once the failover script has been executed and the BACKUP is the new MASTER, assume that it will remain so unless changed manually

This is to avoid the case of a catastrophic power loss in the DC and a possible split-brain scenario when both go off / on simultaneously. 2) is especially important with a CARPed interface, where the state can sometimes flip from BACKUP -> MASTER -> BACKUP at boot.

For 3) you must adapt the advskew of the CARPed interface, so that even if the BACKUP (now master) has an unplanned shutdown/reboot the old MASTER (now backup) doesn't take over, unless done manually. So you should do something like:

sysrc ifconfig_bge0_alias0="vhid 54 advskew 10 pass xxx alias 192.168.10.15/32"
ifconfig bge0 vhid 54 advskew 10

in the failover script (where the "new" advskew (10) is smaller than the old master's (now backup) advskew).

The failover should only be done for unplanned events, so if you reboot the MASTER for some reason (freebsd-update, etc.) the failover script on the BACKUP should handle that.

(more soon...)

Julien

--
Julien Cigar
Belgian Biodiversity Platform (http://www.biodiversity.be)
PGP fingerprint: EEF9 F697 4B68 D275 7B11 6A25 B2BB 3710 A204 23C0

No trees were killed in the creation of this message.
However, many electrons were terribly inconvenienced.
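Taken together, Julien's points 1-3 and the advskew handling could end up in a failover script roughly like this. A sketch only: the interface, vhid, advskew, address and pool name are assumptions based on the values quoted in the thread:

```sh
#!/bin/sh
# Sketch of the BACKUP -> MASTER promotion described above; run manually
# or from a CARP state-change hook, never at boot (points 1 and 2).
IF="bge0"
VHID="54"
POOL="tank"

promote_to_master() {
    # Persist a lower advskew than the old master so that even if we
    # reboot unexpectedly the old master does not take over (point 3).
    sysrc ifconfig_${IF}_alias0="vhid ${VHID} advskew 10 pass xxx alias 192.168.10.15/32"
    ifconfig "${IF}" vhid "${VHID}" advskew 10
    # The pool is only ever imported here, never at boot (point 1).
    zpool import -f "${POOL}" &&
    service nfsd onestart
}
```

The reverse transition (new master back to backup) is deliberately left out: as stated above, that direction should always be done manually.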
From owner-freebsd-fs@freebsd.org Thu Aug 11 09:33:15 2016
From: Borja Marcos
Subject: Re: HAST + ZFS + NFS + CARP
Date: Thu, 11 Aug 2016 11:24:40 +0200
Cc: Jordan Hubbard , freebsd-fs@freebsd.org
Message-Id: <1AA52221-9B04-4CF6-97A3-D2C2B330B7F9@sarenet.es>
In-Reply-To: <20160811091016.GI70364@mordor.lan>
To: Julien Cigar

> On 11 Aug 2016, at 11:10, Julien Cigar wrote:
>
> As I said in a previous post I tested the zfs send/receive approach (with
> zrep) and it works (more or less) perfectly.. so I concur in all what you
> said, especially about off-site replication and synchronous replication.
>
> Out of curiosity I'm also testing a ZFS + iSCSI + CARP setup at the moment.
> I'm in the early tests, haven't done any heavy writes yet, but ATM it
> works as expected, I haven't managed to corrupt the zpool.

I must be too old school, but I don’t quite like the idea of using an essentially unreliable transport (Ethernet) for low-level filesystem operations.

In case something went wrong, that approach could risk corrupting a pool. Although, frankly, ZFS is extremely resilient. One of mine even survived a SAS HBA problem that caused some silent corruption.

The advantage of ZFS send/receive of datasets is, however, that you can consider it essentially atomic. A transport corruption should not cause trouble (apart from a failed "zfs receive") and with snapshot retention you can even roll back. You can’t roll back zpool replications :)

ZFS receive does a lot of sanity checks as well.
As long as your zfs receive doesn’t involve a rollback to the latest snapshot, it won’t destroy anything by mistake. Just make sure that your replica datasets aren’t mounted and zfs receive won’t complain.

Cheers,

Borja.

From owner-freebsd-fs@freebsd.org Thu Aug 11 09:39:13 2016
From: Ben RUBSON
Subject: Re: HAST + ZFS + NFS + CARP
Date: Thu, 11 Aug 2016 11:39:09 +0200
In-Reply-To: <1AA52221-9B04-4CF6-97A3-D2C2B330B7F9@sarenet.es>
Message-Id: <226B5D47-72AF-4325-9A7D-9D6356C4D463@gmail.com>
To: freebsd-fs@freebsd.org

> On 11 Aug 2016, at 11:24, Borja Marcos wrote:
>
> Although, frankly,
> ZFS is extremely resilient.
> One of mine even survived a SAS HBA problem that caused some
> silent corruption.

Any link to this issue Borja ?
Thank you !

Ben

From owner-freebsd-fs@freebsd.org Thu Aug 11 09:50:24 2016
From: Borja Marcos
Subject: Re: HAST + ZFS + NFS + CARP
Date: Thu, 11 Aug 2016 11:43:41 +0200
Cc: freebsd-fs@freebsd.org
Message-Id: <93B4257C-5EFC-4304-A7F9-5E8BFA7792FC@sarenet.es>
In-Reply-To: <226B5D47-72AF-4325-9A7D-9D6356C4D463@gmail.com>
To: Ben RUBSON

> On 11 Aug 2016, at 11:39, Ben RUBSON wrote:
>
>> On 11 Aug 2016, at 11:24, Borja Marcos wrote:
>>
>> Although, frankly,
>> ZFS is extremely resilient. One of mine even survived a SAS HBA problem that caused some
>> silent corruption.
>
> Any link to this issue Borja ?
> Thank you !

It wasn’t a FreeBSD or ZFS bug, but a defective part (an HBA). Once in a while we saw some errors in /var/log/messages, and zfs scrub revealed some corruption that ZFS fixed without issues. Determining the cause wasn’t easy (at first it looked like a defective backplane) and IBM, who are no longer welcome here thanks to their totally fabulous support and warranty policy, didn’t help much. So we took the system offline, using the replicated server instead, and it took some time doing tests (during which we caused more silent corruption, which ZFS fixed without problems) to determine that it was indeed the HBA.

Finally we replaced the HBA and the system is back at work. But not a single bit was lost.

Borja.
From owner-freebsd-fs@freebsd.org Thu Aug 11 10:15:45 2016
Date: Thu, 11 Aug 2016 12:15:39 +0200
From: Julien Cigar
To: Borja Marcos
Cc: Jordan Hubbard , freebsd-fs@freebsd.org
Subject: Re: HAST + ZFS + NFS + CARP
Message-ID: <20160811101539.GM70364@mordor.lan>
In-Reply-To: <1AA52221-9B04-4CF6-97A3-D2C2B330B7F9@sarenet.es>

On Thu, Aug 11, 2016 at 11:24:40AM +0200, Borja Marcos wrote:
>
> > On 11 Aug 2016, at 11:10, Julien Cigar wrote:
> >
> > As I said in a previous post I tested the zfs send/receive approach (with
> > zrep) and it works (more or less) perfectly.. so I concur in all what you
> > said, especially about off-site replication and synchronous replication.
> >
> > Out of curiosity I'm also testing a ZFS + iSCSI + CARP setup at the moment.
> > I'm in the early tests, haven't done any heavy writes yet, but ATM it
> > works as expected, I haven't managed to corrupt the zpool.
>
> I must be too old school, but I don’t quite like the idea of using an essentially unreliable transport
> (Ethernet) for low-level filesystem operations.
>
> In case something went wrong, that approach could risk corrupting a pool. Although, frankly,

Yeah..
although you could have silent data corruption with any broken hardware too. Some years ago I suffered a silent data corruption due to a broken RAID card, and had to restore from backups..

> ZFS is extremely resilient. One of mine even survived a SAS HBA problem that caused some
> silent corruption.

Yep, and I would certainly not use another FS to do that. Scrubbing the pool more regularly is also something to do.

>
> The advantage of ZFS send/receive of datasets is, however, that you can consider it
> essentially atomic. A transport corruption should not cause trouble (apart from a failed
> "zfs receive") and with snapshot retention you can even roll back. You can’t roll back
> zpool replications :)
>
> ZFS receive does a lot of sanity checks as well. As long as your zfs receive doesn’t involve a rollback
> to the latest snapshot, it won’t destroy anything by mistake. Just make sure that your replica datasets
> aren’t mounted and zfs receive won’t complain.
>
> Cheers,
>
> Borja.

--
Julien Cigar
Belgian Biodiversity Platform (http://www.biodiversity.be)
PGP fingerprint: EEF9 F697 4B68 D275 7B11 6A25 B2BB 3710 A204 23C0

No trees were killed in the creation of this message.
However, many electrons were terribly inconvenienced.
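For reference, on FreeBSD the regular scrubbing mentioned above can be driven by the stock periodic(8) machinery rather than a hand-written cron job; the 7-day threshold below is an arbitrary choice, not a recommendation from the thread:

```sh
# /etc/periodic.conf fragment: have the daily periodic run start a
# "zpool scrub" on pools whose last scrub is older than the threshold.
daily_scrub_zfs_enable="YES"
daily_scrub_zfs_default_threshold="7"   # days between scrubs (assumed value)
```
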
From: Julien Cigar
To: Borja Marcos
Cc: freebsd-fs@freebsd.org, Jordan Hubbard
Date: Thu, 11 Aug 2016 13:02:35 +0200
Subject: Re: HAST + ZFS + NFS + CARP
Message-ID: <20160811110235.GN70364@mordor.lan>

On Thu, Aug 11, 2016 at 12:15:39PM +0200, Julien Cigar wrote:
> On Thu, Aug 11, 2016 at 11:24:40AM +0200, Borja Marcos wrote:
> > I must be too old school, but I don't quite like the idea of using
> > an essentially unreliable transport (Ethernet) for low-level
> > filesystem operations.
> >
> > In case something went wrong, that approach could risk corrupting a
> > pool. Although, frankly,

Now I'm thinking of the following scenario:
- filer1 is the MASTER, filer2 the BACKUP
- on filer1 the zpool "data" is a mirror over loc1, loc2, rem1 and rem2
  (where rem1 and rem2 are iSCSI disks)
- the pool is mounted on the MASTER

Now imagine that the replication interface corrupts packets silently,
but data are still written to rem1 and rem2. Will ZFS detect
immediately that the blocks written to rem1 and rem2 are corrupted?

> Yeah.. although you could have silent data corruption with any broken
> hardware too. Some years ago I suffered silent data corruption due to
> a broken RAID card, and had to restore from backups..

--
Julien Cigar
Belgian Biodiversity Platform (http://www.biodiversity.be)
PGP fingerprint: EEF9 F697 4B68 D275 7B11 6A25 B2BB 3710 A204 23C0
No trees were killed in the creation of this message.
However, many electrons were terribly inconvenienced.
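For concreteness, the pool described in the scenario above is a single four-way mirror. An illustrative transcript (loc1/loc2/rem1/rem2 are placeholder device names for the local disks and the iSCSI LUNs as they appear on filer1 — a sketch, not meant to be run verbatim):

```
# on filer1 (MASTER); every block is written to all four devices
zpool create data mirror /dev/loc1 /dev/loc2 /dev/rem1 /dev/rem2

# a silently corrupted copy on rem1/rem2 is only caught when that copy
# is actually read: a normal read, a metadata access, or a scrub
zpool scrub data
zpool status -v data
```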
From: Borja Marcos
To: Julien Cigar
Cc: freebsd-fs@freebsd.org, Jordan Hubbard
Date: Thu, 11 Aug 2016 13:22:05 +0200
Subject: Re: HAST + ZFS + NFS + CARP

> On 11 Aug 2016, at 13:02, Julien Cigar wrote:
>
> > > In case something went wrong, that approach could risk corrupting
> > > a pool.
> > > Although, frankly,
>
> Now I'm thinking of the following scenario:
> - filer1 is the MASTER, filer2 the BACKUP
> - on filer1 the zpool "data" is a mirror over loc1, loc2, rem1 and
>   rem2 (where rem1 and rem2 are iSCSI disks)
> - the pool is mounted on the MASTER
>
> Now imagine that the replication interface corrupts packets silently,
> but data are still written to rem1 and rem2. Will ZFS detect
> immediately that the blocks written to rem1 and rem2 are corrupted?

As far as I know ZFS does not read after write. It can detect silent
corruption when reading a file or a metadata block, but that will
happen only when requested (a file), when needed (metadata) or in a
scrub. It doesn't do preemptive read-after-write, I think. Or I don't
recall having read that it does.

Silent corruption can be overcome by ZFS as long as it isn't too much.
In my case with the evil HBA it was something like one block operation
error in an hour of intensive I/O. In normal operation it could be a
block error in a week or so. With that error rate, the chances of a
random I/O error corrupting the same block in three different devices
(it's a raidz2 vdev) are really remote.

But, again, I won't push more at the risk of annoying you to death.
Just think that your I/O throughput will be bound by your network and
iSCSI performance anyway ;)

Borja.

P.D.: I forgot to reply to this before:

>> Yeah.. although you could have silent data corruption with any
>> broken hardware too. Some years ago I suffered silent data
>> corruption due to a broken RAID card, and had to restore from
>> backups..

Ethernet hardware is designed with the assumption that the loss of a
packet is not such a big deal. Shit happens on SAS and other
specialized storage networks too, of course, but you should expect it
to be at least a bit less.
;)

From: Julien Cigar
To: Borja Marcos
Cc: freebsd-fs@freebsd.org, Jordan Hubbard
Date: Thu, 11 Aug 2016 13:49:20 +0200
Subject: Re: HAST + ZFS + NFS + CARP

On Thu, Aug 11, 2016 at 01:22:05PM +0200, Borja Marcos wrote:
> > > I must be too old school, but I don't quite like the idea of
> > > using an essentially unreliable transport (Ethernet) for
> > > low-level filesystem operations.
> >
> > Now I'm thinking of the following scenario:
> > - filer1 is the MASTER, filer2 the BACKUP
> > - on filer1 the zpool "data" is a mirror over loc1, loc2, rem1 and
> >   rem2 (where rem1 and rem2 are iSCSI disks)
> > - the pool is mounted on the MASTER
> >
> > Now imagine that the replication interface corrupts packets
> > silently, but data are still written to rem1 and rem2. Will ZFS
> > detect immediately that the blocks written to rem1 and rem2 are
> > corrupted?
>
> As far as I know ZFS does not read after write. It can detect silent
> corruption when reading a file or a metadata block, but that will
> happen only when requested (a file), when needed (metadata) or in a
> scrub. It doesn't do preemptive read-after-write, I think.

Nope, ZFS doesn't read after write. So in theory your pool can become
corrupted in the following case:

T1: a zpool scrub is run; everything is OK
T2: the replication interface starts to silently corrupt packets
T3: corrupted data blocks are written to the two iSCSI disks while
    valid data blocks are written to the two local disks
T4: the corrupted blocks are never read back, so ZFS does not notice
    them
T5: the MASTER dies before another zpool scrub is run
T6: failover happens, BACKUP becomes the new MASTER and tries to
    import the pool -> corruption -> fail >:O

Although very, very unlikely, this scenario is in theory possible.

BTW, any idea whether some sort of checksum of the payload is done in
the iSCSI protocol?

> Silent corruption can be overcome by ZFS as long as it isn't too much.
> In my case with the evil HBA it was something like one block
> operation error in an hour of intensive I/O. In normal operation it
> could be a block error in a week or so. With that error rate, the
> chances of a random I/O error corrupting the same block in three
> different devices (it's a raidz2 vdev) are really remote.
>
> But, again, I won't push more at the risk of annoying you to death.
> Just think that your I/O throughput will be bound by your network and
> iSCSI performance anyway ;)
>
> Ethernet hardware is designed with the assumption that the loss of a
> packet is not such a big deal. Shit happens on SAS and other
> specialized storage networks too, of course, but you should expect it
> to be at least a bit less.

--
Julien Cigar
Belgian Biodiversity Platform (http://www.biodiversity.be)
PGP fingerprint: EEF9 F697 4B68 D275 7B11 6A25 B2BB 3710 A204 23C0
No trees were killed in the creation of this message.
However, many electrons were terribly inconvenienced.
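On the iSCSI checksum question above: the protocol itself (RFC 3720) defines optional CRC32C "digests" that initiator and target can negotiate per session, covering each PDU's header and data segment. On FreeBSD's initiator this should be configurable in iscsi.conf(5); a hedged sketch with a hypothetical target name and address — verify the HeaderDigest/DataDigest keyword spelling against the man page on your release:

```
# /etc/iscsi.conf (sketch; target name and address are placeholders)
rem1 {
        TargetAddress   = 192.0.2.10
        TargetName      = iqn.2016-08.city.perdition:filer2:rem1
        HeaderDigest    = CRC32C    # checksum each PDU header
        DataDigest      = CRC32C    # checksum each data segment (the payload)
}
```

Digests add CPU cost and only protect the iSCSI transport; they don't replace ZFS checksums, which remain the end-to-end check.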
From: bugzilla-noreply@freebsd.org
To: freebsd-fs@FreeBSD.org
Date: Thu, 11 Aug 2016 20:49:00 +0000
Subject: [Bug 209158] node / npm triggering zfs rename deadlock

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=209158

--- Comment #40 from commit-hook@freebsd.org ---
A commit references this bug:

Author: avg
Date: Thu Aug 11 20:48:04 UTC 2016
New revision: 303970
URL: https://svnweb.freebsd.org/changeset/base/303970

Log:
  MFC r303763,303791,303869: zfs: honour and make use of vfs vnode
  locking protocol

  The ZFS POSIX Layer was originally written for the Solaris VFS, which
  is very different from the FreeBSD VFS. Most importantly, many things
  that the FreeBSD VFS manages on behalf of all filesystems are
  implemented in the ZPL in a different way. Thus, the ZPL contains
  code that is redundant on FreeBSD, duplicates VFS functionality or,
  in the worst cases, interferes badly with the VFS.

  The most prominent problem is a deadlock caused by the lock order
  reversal of vnode locks that may happen with concurrent zfs_rename()
  and lookup(). The deadlock is a result of zfs_rename() not observing
  the vnode locking contract expected by the VFS.

  This commit removes all ZPL-internal locking that protects
  parent-child relationships of filesystem nodes.
  These relationships are protected by vnode locks, and the code is
  changed to take advantage of that fact and to interact properly with
  the VFS. Removal of the internal locking allowed all ZPL
  dmu_tx_assign calls to use TXG_WAIT mode.

  Another victim, disputable perhaps, is ZFS support for filesystems
  with mixed case sensitivity. That support is not provided by the OS
  anyway, so in ZFS it was a bunch of dead code.

  To do:
  - replace the ZFS_ENTER mechanism with a VFS-managed / visible
    mechanism
  - replace zfs_zget with zfs_vget[f] as much as possible
  - get rid of the no-longer-useful zfs_freebsd_* adapters
  - more cleanups of unneeded / unused code
  - fix / replace .zfs support

PR:             209158
Approved by:    re (gjb)

Changes:
  stable/11/sys/cddl/compat/opensolaris/sys/vnode.h
  stable/11/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/sys/zfs_dir.h
  stable/11/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/sys/zfs_vfsops.h
  stable/11/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/sys/zfs_znode.h
  stable/11/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_acl.c
  stable/11/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_dir.c
  stable/11/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_sa.c
  stable/11/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vfsops.c
  stable/11/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c
  stable/11/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_znode.c

From: Marc Goroff
To: Rick Macklem, freebsd-fs@freebsd.org
Date: Thu, 11 Aug 2016 17:34:45 -0700
Subject: Re: Hanging/stalling mountd on heavily loaded NFS server

Just to follow up on this issue: the patch referenced below seems to
have fixed the problem. Thanks!

Marc

On 7/27/16 6:41 PM, Rick Macklem wrote:
> Marc Goroff wrote:
> > We have a large and busy production NFS server running 10.2 that is
> > serving approximately 200 ZFS filesystems to production VMs. The
> > system had been very stable up until last night, when we attempted
> > to mount new ZFS filesystems on NFS clients. The mountd process
> > hung and client mount requests timed out. The NFS server continued
> > to serve traffic to existing clients during this time.
> > The mountd was hung in state nfsv4lck:
> >
> > [root@zfs-west1 ~]# ps -axgl | grep mount
> >   0 38043 1 0 20 0 63672 17644 nfsv4lck Ds - 0:00.30
> >       /usr/sbin/mountd -r -S /etc/exports /etc/zfs/exports
> >
> > It remains in this state for an indeterminate amount of time. I
> > once saw it continue on after several minutes, but most of the time
> > it seems to stay in this state for 15+ minutes. During this time it
> > does not respond to kill -9, but it will eventually exit after many
> > minutes. Restarting mountd will allow the existing NFS clients to
> > continue (they hang when mountd exits), but any attempt to perform
> > additional NFS mounts will push mountd back into the bad state.
> >
> > This problem seems to be related to the number of NFS mounts off
> > the server. If we unmount some of the clients, we can successfully
> > perform the NFS mounts of the new ZFS filesystems. However, when we
> > attempt to mount all of the production NFS mounts, mountd will hang
> > as above.
>
> Stuff snipped for brevity...
>
> > Any suggestion on how to resolve this issue? Since this is a
> > production server, my options for intrusive debugging are very
> > limited.
>
> I think you should try the patch that is r300254 in stable/10. It is
> a simple patch you can apply to your kernel without other changes.
>
> http://svnweb.freebsd.org/base/stable/10/sys/fs/nfsserver/nfs_nfsdkrpc.c?r1=291869&r2=300254
>
> It reverses the lock acquisition priority so that mountd doesn't wait
> until the nfsd threads are idle before updating exports.
>
> rick

> > Thanks.
> > Marc

From: Julien Cigar
To: freebsd-fs@freebsd.org
Date: Fri, 12 Aug 2016 16:47:59 +0200
Subject: NFS shares and jails
Hello,

Let's say I have 10 jails on a machine that need access to the same NFS
share. I wondered which is best: 10 NFS mounts on the HOST (one for
each jail), or 1 NFS mount on the HOST and 1 NULLFS mount per jail
(10 NULLFS mounts in total)?

Thanks,
Julien

--
Julien Cigar
Belgian Biodiversity Platform (http://www.biodiversity.be)
PGP fingerprint: EEF9 F697 4B68 D275 7B11 6A25 B2BB 3710 A204 23C0
No trees were killed in the creation of this message.
However, many electrons were terribly inconvenienced.
--UJEqbsikIZBgizPR Content-Type: application/pgp-signature; name="signature.asc" -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iQIcBAABCgAGBQJXreGcAAoJELK7NxCiBCPAmnUP/0uTwdPzG3T1cXmmBI9NH0tH 3b6PxB2ETje5H0NdcSU0LAYSqZPYa1qam+yC8LPRcdU5fW2xG6tjzMDJ1p4/rgO7 0lF8xklVrN7F2in5USOoU+mGQpHAJjeuQiLtgbSliZmnNbHU+kvDaYfkbj266lh3 IG0Il4//PQF/64lfPyjnkHDF2t6slhTnDVN1i3TM/GBi9qyxAIy8z7o0WCgxTmS5 IbNa7XriInqKBLbMSIFp2xJSd67vSmkNpMomyDQL9GWFAwD2jNVihzBMmdCbTPsd DAmllU51IzlRkKG8Mf0SApJ6bsuhvDlKoMutgTXTQgIHeb/k6h2+QCVo5L31ZKp5 B3KtBd65rUBn0mLOR4CwkJlRHdH/ZiJbN1VAuf4+SvBdsvS8CxS+RsAhbYL28glr 6WWcKibrOi9NLrzLk/KytMyBm/Ovs9AMiHregkg5kmy3RPm3cznNcumQhm/X6hXj mo1b3ApGZMjIIV2ElW8dGOTCkOm+RwaHz5++xAsKQLDPI905a2bqnR3rP3rVLfqZ +Vl6r/kw0BDfsUI+txivpcLXl+XkJMCwFym+QhBbnlxjpVQGiiU6GV9VvuPLKeuF 4sW+i/BTWwcLebQZ+PaYLffNi4MoRCRptV7EyWEMWnoU0GHFxsjiRTF/1hF/SMQ9 gYFVPszd19zjP8bUGZcc =5yLk -----END PGP SIGNATURE----- --UJEqbsikIZBgizPR-- From owner-freebsd-fs@freebsd.org Fri Aug 12 16:12:20 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id AE849BB6F25 for ; Fri, 12 Aug 2016 16:12:20 +0000 (UTC) (envelope-from julian@freebsd.org) Received: from vps1.elischer.org (vps1.elischer.org [204.109.63.16]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "vps1.elischer.org", Issuer "CA Cert Signing Authority" (not verified)) by mx1.freebsd.org (Postfix) with ESMTPS id 8E5781328 for ; Fri, 12 Aug 2016 16:12:20 +0000 (UTC) (envelope-from julian@freebsd.org) Received: from Julian-MBP3.local (ppp121-45-226-8.lns20.per1.internode.on.net [121.45.226.8]) (authenticated bits=0) by vps1.elischer.org (8.15.2/8.15.2) with ESMTPSA id u7CGCCr2043197 (version=TLSv1.2 cipher=DHE-RSA-AES128-SHA bits=128 verify=NO); Fri, 12 Aug 2016 09:12:17 -0700 (PDT) (envelope-from julian@freebsd.org) Subject: Re: NFS shares and jails To: Julien Cigar , 
freebsd-fs@freebsd.org References: <20160812144759.GQ70364@mordor.lan> From: Julian Elischer Message-ID: <4a15701a-d229-47fc-e9a3-4c5a8892a476@freebsd.org> Date: Sat, 13 Aug 2016 00:12:06 +0800 User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.11; rv:45.0) Gecko/20100101 Thunderbird/45.2.0 MIME-Version: 1.0 In-Reply-To: <20160812144759.GQ70364@mordor.lan> Content-Type: text/plain; charset=utf-8; format=flowed Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.22 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 12 Aug 2016 16:12:20 -0000 On 12/08/2016 10:47 PM, Julien Cigar wrote: > Hello, > > Let's say I have 10 jails on a machine that need access to the same NFS > share. I wondered what was best: 10 NFS mount on the HOST (one for each > jail), or 1 NFS mount on the HOST and 1 NULLFS mount per jail.. (10 > NULLFS in total)? I'd guess 10 nfs mounts.. it depends on how much the jails SHARE the data. 10 nfs mounts will never share anything but 1 nfs mount will share its data and metadata before it hits the wire so there may be some caching effects. 
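The second approach Julien describes (one NFS mount on the host, re-exported into each jail via nullfs) can be sketched in the host's /etc/fstab. The server name, export path, and jail paths below are hypothetical, not from the original thread:

```
# /etc/fstab on the jail host (hypothetical server and paths)
# One NFS mount, shared into every jail with a nullfs mount:
nfsserver:/export/data   /mnt/data                nfs     rw,late   0  0
/mnt/data                /usr/jails/j1/mnt/data   nullfs  rw,late   0  0
/mnt/data                /usr/jails/j2/mnt/data   nullfs  rw,late   0  0
# ...one nullfs line per jail, 10 in total.
```

The per-jail NFS-mount alternative would instead list ten nfs lines, one per jail path; with the layout above the NFS data and metadata are cached once on the host, which is the caching effect Julian mentions.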
> > Thanks, > Julien > From owner-freebsd-fs@freebsd.org Fri Aug 12 19:19:44 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id A89A8BB33B3 for ; Fri, 12 Aug 2016 19:19:44 +0000 (UTC) (envelope-from "") Received: from mo7.mail.sc.edu (mo7.mail.sc.edu [129.252.158.31]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 769F51C96 for ; Fri, 12 Aug 2016 19:19:44 +0000 (UTC) (envelope-from "") Received: from mo7.mail.sc.edu (127.0.0.1) id hlogl00171s2 for ; Fri, 12 Aug 2016 15:14:37 -0400 (envelope-from <>) Received: from CAE145HUBP03.ds.sc.edu ([172.27.7.170]) by mo7.mail.sc.edu (SonicWALL 8.2.1.4973) with ESMTPS (version=TLSv1/SSLv3 cipher=ECDHE-RSA-AES256-SHA bits=256) id 201608121914370618333; Fri, 12 Aug 2016 15:14:37 -0400 Received: from CAE145HUBP03.ds.sc.edu ([::1]) by CAE145HUBP03.ds.sc.edu ([::1]) with Microsoft SMTP Server id 14.03.0301.000; Fri, 12 Aug 2016 15:14:17 -0400 From: "EDWARDS, KATHRYN" To: "freebsd-fs@freebsd.org" Subject: Automatic reply: Thread-Index: AQHR9M24EPuHV5xAbUW1YIDy8oJ7ww== Date: Fri, 12 Aug 2016 19:14:17 +0000 Message-ID: References: <000101d1f4ff$fe5ba1dc$c0a80001@as2116.net> In-Reply-To: <000101d1f4ff$fe5ba1dc$c0a80001@as2116.net> X-MS-Has-Attach: X-Auto-Response-Suppress: All X-MS-Exchange-Inbox-Rules-Loop: edwardsk@mailbox.sc.edu X-MS-TNEF-Correlator: MIME-Version: 1.0 X-Mlf-Version: 8.2.1.4973 X-Mlf-UniqueId: o201608121914370618333 Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.22 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.22 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 12 Aug 2016 19:19:44 
-0000 I am out of the office for work from August 10-21 and will not have regular access to email. Please be assured that I will respond as soon as possible once I return. If it is an emergency, you may leave a message at the History Department, 803-777-5195. Sincerely, Kay Edwards Sincerely, Kathryn A. Edwards Professor of History From owner-freebsd-fs@freebsd.org Sat Aug 13 04:41:10 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id A6710BB5F7C for ; Sat, 13 Aug 2016 04:41:10 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) Received: from na01-bl2-obe.outbound.protection.outlook.com (mail-bl2on0073.outbound.protection.outlook.com [65.55.169.73]) (using TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits)) (Client CN "mail.protection.outlook.com", Issuer "Microsoft IT SSL SHA2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 3AE1E1726 for ; Sat, 13 Aug 2016 04:41:09 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) Received: from YQBPR01MB0401.CANPRD01.PROD.OUTLOOK.COM (10.169.142.147) by YQBPR01MB0404.CANPRD01.PROD.OUTLOOK.COM (10.169.142.150) with Microsoft SMTP Server (version=TLS1_0, cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA_P384) id 15.1.549.15; Sat, 13 Aug 2016 00:07:12 +0000 Received: from YQBPR01MB0401.CANPRD01.PROD.OUTLOOK.COM ([10.169.142.147]) by YQBPR01MB0401.CANPRD01.PROD.OUTLOOK.COM ([10.169.142.147]) with mapi id 15.01.0557.021; Sat, 13 Aug 2016 00:07:12 +0000 From: Rick Macklem To: Mahmoud Al-Qudsi , "freebsd-fs@freebsd.org" Subject: Re: PR-211674, fuse_vnode and fuse_msgbuf leak in fusefs-ntfs Thread-Topic: PR-211674, fuse_vnode and fuse_msgbuf leak in fusefs-ntfs Thread-Index: AdHxnNoX8eB2gk++Rnigc2FfbUZx+wDVXGlo Date: Sat, 13 Aug 2016 00:07:12 +0000 Message-ID: References: <012c01d1f19d$0aae9c70$200bd550$@neosmart.net> In-Reply-To: <012c01d1f19d$0aae9c70$200bd550$@neosmart.net>
Accept-Language: en-US Content-Language: en-US X-MS-Has-Attach: X-MS-TNEF-Correlator: authentication-results: spf=none (sender IP is ) smtp.mailfrom=rmacklem@uoguelph.ca; x-originating-ip: [24.57.164.61] x-ms-office365-filtering-correlation-id: bff18731-864d-453c-d501-08d3c30dc690 x-microsoft-exchange-diagnostics: 1; YQBPR01MB0404; 6:vGnFX/sGqAp+oXC4issoTgTH1qcBR7QUJ5WpbLpCoBGLv0Cbl0IE1aGA64GQqXySqzF8r0Va2cuQMIgSNhzb273GBXH2wVDMaIsbYBsQelZuAwCexuedWJ7PPI+xY8Jq1sShFdQl1n8i9mz+YqL9Ig6ZLy0iHs4+IMrEqK1SfWt+LHyfG43jKVkE1pMqSxjkh4Neifva7SgHkn420d3kzD6CmRJK5yRp6euTOzGF6NkiKDP+mUJFwt60bMI+wNHnRKEr5LJ3Tz/PHIfPhqCXCccHtzYK/h/UleD1AsDDvk7EALouDKlAjsADzNl5smxr; 5:PeHmyySERCC/QUVFOzVFDe474xDfubKRTQFUcf/2bY4K/aHAAG5TG1GWzPnmkAr04oKDgT/S8L04iRiK86tQedmX8efLnyiMWKczBgr4X5iqjtaf0wCaK5/ndGeVPH7jEAsWs9vh+Q88RFNVwoeoTQ==; 24:NsKBruuxUmJF8O6QGiijwLYipv2qlZNp7vQJEWttySzjKv9nkH4CuEe/pDh8rWW9VUx43mpWqJiq5gp79DJkDVOyjmPQ/ypftK7mUTqU1WI=; 7:jSDbqcc72wxw53CvXSezLeJQnZ/JwwY46bZjLNKcAzxgEQZ3e2x7tZH4WfpjjZZ/OESbalxqaPltlkdiOERHFUnobp60zg0QeuleOO90thdKrxSyGET8ldivWPg/46XzPQtuE3iqHvhjYIq/8/94z47t9B/Z76IadyUn5EdSdMmI8Vgdaanjrjo25GIWnckOL4tqJ+iSjwgs7k04/7J+2z3HnO35THUo2kVZcHU9lE2dPTtznXtmub124WENZRD4 x-microsoft-antispam: UriScan:;BCL:0;PCL:0;RULEID:;SRVR:YQBPR01MB0404; x-microsoft-antispam-prvs: x-exchange-antispam-report-test: UriScan:(75325880899374); x-exchange-antispam-report-cfa-test: BCL:0; PCL:0; RULEID:(6040176)(2401047)(8121501046)(5005006)(3002001)(10201501046)(6043046)(6042046); SRVR:YQBPR01MB0404; BCL:0; PCL:0; RULEID:; SRVR:YQBPR01MB0404; x-forefront-prvs: 0033AAD26D x-forefront-antispam-report: SFV:NSPM; 
SFS:(10009020)(7916002)(189002)(199003)(24454002)(16236675004)(74482002)(33656002)(10400500002)(7696003)(7736002)(66066001)(2906002)(2900100001)(2950100001)(50986999)(15975445007)(8676002)(81166006)(8936002)(76176999)(81156014)(54356999)(101416001)(74316002)(19625215002)(2501003)(5002640100001)(92566002)(19617315012)(586003)(9686002)(19580395003)(105586002)(3846002)(7906003)(87936001)(19580405001)(106356001)(11100500001)(189998001)(5001770100001)(6116002)(107886002)(19627405001)(3280700002)(68736007)(77096005)(7846002)(3660700001)(102836003)(122556002)(97736004)(86362001)(21314002); DIR:OUT; SFP:1101; SCL:1; SRVR:YQBPR01MB0404; H:YQBPR01MB0401.CANPRD01.PROD.OUTLOOK.COM; FPR:; SPF:None; PTR:InfoNoRecords; MX:1; A:1; LANG:en; received-spf: None (protection.outlook.com: uoguelph.ca does not designate permitted sender hosts) spamdiagnosticoutput: 1:99 spamdiagnosticmetadata: NSPM MIME-Version: 1.0 X-OriginatorOrg: uoguelph.ca X-MS-Exchange-CrossTenant-originalarrivaltime: 13 Aug 2016 00:07:12.0051 (UTC) X-MS-Exchange-CrossTenant-fromentityheader: Hosted X-MS-Exchange-CrossTenant-id: be62a12b-2cad-49a1-a5fa-85f4f3156a7d X-MS-Exchange-Transport-CrossTenantHeadersStamped: YQBPR01MB0404 Content-Type: text/plain; charset="iso-8859-1" Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.22 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.22 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 13 Aug 2016 04:41:10 -0000 Mahmoud Al-Qudsi wrote: >Hello, > >Please forgive me if it is not correct form to discuss fusefs-ntfs on the FreeBSD fs mailing list. > >SUMMARY > >Running on FreeBSD 10.3-RELEASE-p6/i386 with fuse compiled into kernel and with fusefs-ntfs >2016.2.22 installed, there is a fuse_vnode leak (though it seems it may be more of a complete failure >to reclaim vnodes) resulting in quick resource exhaustion.
> >REPRODUCTION > >This is easily reproduced with the following: > >ntfs-3g /dev/xxx /mnt/yyyy >cd /mnt/yyyy >find . -exec touch {} \; > >In another virtual terminal: > >vmstat | head -n1; vmstat -m | sed 1d | sort -hk 3,3 > >ACTUAL RESULTS > >fuse_vnode will continuously balloon, and will not be reclaimed until the filesystem is unmounted. > >(likewise, fuse_msgbuf also balloons but unlike fuse_vnode, it is never reclaimed. Separate PR?) > >EXPECTED RESULTS > >fuse_vnode entries should be reclaimed > >ADDITIONAL INFORMATION > >Here's a snapshot of the fuse-related vmstat entries after this process: > >fuse_vnode 36020 9005K - 502349 256 These should be free'd when the vnode is recycled. This should start happening when the system has reached kern.maxvnodes (I think?). Check the sysctl: vfs.fuse.node_count - if this value is much smaller than the # malloc'd, something is broken. If it is the same, it may just be that the system hasn't been recycling the vnodes yet. You could try setting kern.maxvnodes smaller to see if recycling starts happening, but be warned...setting this too small can break your system badly. Since unmounting will cause all vnodes to be recycled, seeing it go down when you unmount is normal. >fuse_msgbuf 58141 14895K - 311095 256,512,1024,2048,4096,8192 At a glance, these are allocated each time the fuse device is opened and are never free'd. I don't know why fuse does this. My guess is that the ntfs-3g file system is opening the device repeatedly. If that isn't happening, I have no idea why this is occurring.
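The comparison Rick suggests (malloc'd fuse_vnode count versus the vfs.fuse.node_count sysctl) is easy to script. A minimal sketch in Python; the embedded sample reproduces the vmstat -m column layout from the report above, and on a live system the two values would come from running `vmstat -m` and `sysctl -n vfs.fuse.node_count`:

```python
# Compare the fuse_vnode InUse count from "vmstat -m" against
# vfs.fuse.node_count, per Rick's diagnostic suggestion.
# Sample output is embedded here instead of shelling out, so the
# numbers below are the ones quoted in the report, not live data.

def parse_vmstat_m(output, mtype):
    """Return the InUse count for one malloc type from vmstat -m output."""
    for line in output.splitlines():
        fields = line.split()
        if fields and fields[0] == mtype:
            return int(fields[1])  # second column is InUse
    raise KeyError(mtype)

sample_vmstat = """Type        InUse MemUse HighUse Requests  Size(s)
fuse_vnode  36020  9005K       -   502349  256
fuse_msgbuf 58141 14895K       -   311095  256,512,1024,2048,4096,8192"""

malloced = parse_vmstat_m(sample_vmstat, "fuse_vnode")
node_count = 36020  # stand-in for: sysctl -n vfs.fuse.node_count

if node_count < malloced // 2:
    print("node_count much smaller than malloc'd -> something is broken")
else:
    print("counts roughly agree -> vnodes likely just not recycled yet")
```

The threshold ("much smaller") is an arbitrary illustration; Rick's advice only distinguishes "about equal" from "much smaller".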
rick >Thank you, >Mahmoud Al-Qudsi >NeoSmart Technologies > > >_______________________________________________ >freebsd-fs@freebsd.org mailing list >https://lists.freebsd.org/mailman/listinfo/freebsd-fs >To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@freebsd.org Sat Aug 13 05:22:10 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id A547ABB8B38 for ; Sat, 13 Aug 2016 05:22:10 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) Received: from na01-bl2-obe.outbound.protection.outlook.com (mail-bl2on0087.outbound.protection.outlook.com [65.55.169.87]) (using TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits)) (Client CN "mail.protection.outlook.com", Issuer "Microsoft IT SSL SHA2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 3529810E9 for ; Sat, 13 Aug 2016 05:22:09 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) Received: from YQBPR01MB0401.CANPRD01.PROD.OUTLOOK.COM (10.169.142.147) by YQBPR01MB0401.CANPRD01.PROD.OUTLOOK.COM (10.169.142.147) with Microsoft SMTP Server (version=TLS1_0, cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA_P384) id 15.1.557.21; Fri, 12 Aug 2016 21:49:45 +0000 Received: from YQBPR01MB0401.CANPRD01.PROD.OUTLOOK.COM ([10.169.142.147]) by YQBPR01MB0401.CANPRD01.PROD.OUTLOOK.COM ([10.169.142.147]) with mapi id 15.01.0557.021; Fri, 12 Aug 2016 21:49:45 +0000 From: Rick Macklem To: Marc Goroff , "freebsd-fs@freebsd.org" Subject: Re: Hanging/stalling mountd on heavily loaded NFS server Thread-Topic: Hanging/stalling mountd on heavily loaded NFS server Thread-Index: AQHR6FuOZIb3G4VVH0qiD8yOyqrjbqAtDjqZgBeDtICAAWMHuQ== Date: Fri, 12 Aug 2016 21:49:45 +0000 Message-ID: References: <98b4db11-8b41-608c-c714-f704a78914b7@quorum.net> , In-Reply-To: Accept-Language: en-US Content-Language: en-US X-MS-Has-Attach: X-MS-TNEF-Correlator: authentication-results: 
spf=none (sender IP is ) smtp.mailfrom=rmacklem@uoguelph.ca; x-originating-ip: [24.57.164.61] x-ms-office365-filtering-correlation-id: 90cdfe08-0d69-4a4c-5093-08d3c2fa9343 x-microsoft-exchange-diagnostics: 1; YQBPR01MB0401; 6:IRWFuy485KjyZ2D1E2dJRyzvkeoBJ2fe1nY2D9z5h3nkMdTBNkzcpugGmH5nWDf2JjG11DMzTYIpHJMnyKLSukuANUOzRcWHOZpj0fTeIe3EaLTHfC2Nk+e726r/kfunwzqqZcVUyTxDb7DB8R4MEWe8UwYsZJ2vhvIHT9vj7p1FyXDeQOgLe33FDEgCjvJifMRh/g40tvcZjwRQXLbtuCKQ9eQ2lzpeVRAISopCWTbzRbBOkdxNnRMZAkYj+MfUJsiKrt7sNbZQNoXSPn8QFuB1ZXWGJ6I8poCQ9sYuH3vIuohnK0X4yrDdnsDk0I3O; 5:vgB6t8xNNWGBZBNk9NU7Nymnvemo1bUx2VpV2vc0v2rdU+HxhlnC/kj9u3W9WeF6fERp8XBy9nYLLNwoCXkYEsoWnDSog7+qzzCiM5fC0zf1jK8DOikVgFCEdwWpWT3SuA/Es7l+AbTAlXqSqHFGNQ==; 24:7xj+N4JVvKVooUVWfx1pppdBod1s+XQjqT1Hk6bMDGqMkcoA9wJS/yAu612mw1LKbavTydx4+V/6yTxfof4uOdsQAPZiihKrO54sw+tpkPc=; 7:2UW43xN8dUzSE01+oPI3cXfkU6/5onSJjo4WYtpxXtEAZRUk9xMI0xgTEy1z+M0xMda3TuVRZV5VqzbT0u1rySrB0EvtL9Jkl6rfGGqGtHSiWCygsJ9lhpZX12jjlgk1AGukUcWo/jcIyAXMdP5wBasm5Wvun4trbKe9HUJkvS0g6kMr2QJaixzK6T+F9Tv7BMpWB/qnN9MUFGxQuj7IEu6YQVdKznuHwiWDzuhjM4vPbYiJ9zId43p8DS9VUfd0 x-microsoft-antispam: UriScan:;BCL:0;PCL:0;RULEID:;SRVR:YQBPR01MB0401; x-microsoft-antispam-prvs: x-exchange-antispam-report-test: UriScan:(56005881305849)(158342451672863)(192374486261705)(75325880899374); x-exchange-antispam-report-cfa-test: BCL:0; PCL:0; RULEID:(6040176)(2401047)(8121501046)(5005006)(3002001)(10201501046)(6043046)(6042046); SRVR:YQBPR01MB0401; BCL:0; PCL:0; RULEID:; SRVR:YQBPR01MB0401; x-forefront-prvs: 003245E729 x-forefront-antispam-report: SFV:NSPM; 
SFS:(10009020)(7916002)(24454002)(54094003)(189002)(199003)(377454003)(19627405001)(19617315012)(86362001)(9686002)(8936002)(2906002)(92566002)(74482002)(5002640100001)(2501003)(7846002)(68736007)(87936001)(19580395003)(19580405001)(106116001)(7736002)(7696003)(122556002)(105586002)(6116002)(3846002)(102836003)(586003)(16236675004)(50986999)(66066001)(76176999)(3280700002)(11100500001)(97736004)(54356999)(15975445007)(77096005)(2950100001)(2900100001)(7906003)(74316002)(8676002)(10400500002)(101416001)(19625215002)(189998001)(33656002)(81166006)(3660700001)(81156014)(5001770100001)(107886002)(106356001); DIR:OUT; SFP:1101; SCL:1; SRVR:YQBPR01MB0401; H:YQBPR01MB0401.CANPRD01.PROD.OUTLOOK.COM; FPR:; SPF:None; PTR:InfoNoRecords; MX:1; A:1; LANG:en; received-spf: None (protection.outlook.com: uoguelph.ca does not designate permitted sender hosts) spamdiagnosticoutput: 1:99 spamdiagnosticmetadata: NSPM MIME-Version: 1.0 X-OriginatorOrg: uoguelph.ca X-MS-Exchange-CrossTenant-originalarrivaltime: 12 Aug 2016 21:49:45.5765 (UTC) X-MS-Exchange-CrossTenant-fromentityheader: Hosted X-MS-Exchange-CrossTenant-id: be62a12b-2cad-49a1-a5fa-85f4f3156a7d X-MS-Exchange-Transport-CrossTenantHeadersStamped: YQBPR01MB0401 Content-Type: text/plain; charset="iso-8859-1" Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.22 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.22 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 13 Aug 2016 05:22:10 -0000 Marc Goroff wrote: >Just to followup on this issue, the patch referenced below seems to have fixed the >problem. > I wonder if this patch should be made a 10.3 update? (At one time, it was only fixes for security issues that became errata fixes, but that has changed. I'm not sure what it takes for a patch to qualify?) It may not affect a lot of people, but it is a simple self contained patch.
Is anyone reading this familiar with the current decision "rules" for errata? Thanks for testing it, rick Thanks! Marc On 7/27/16 6:41 PM, Rick Macklem wrote: Marc Goroff wrote: > From: owner-freebsd-fs@freebsd.org <owner-freebsd-fs@freebsd.org> on behalf of Marc Goroff > Sent: Wednesday, July 27, 2016 7:04 PM > To: freebsd-fs@freebsd.org > Subject: Hanging/stalling mountd on heavily loaded NFS server > > We have a large and busy production NFS server running 10.2 that is > serving approximately 200 ZFS file systems to production VMs. The system > has been very stable up until last night when we attempted to mount new > ZFS filesystems on NFS clients. The mountd process hung and client mount > requests timed out. The NFS server continued to serve traffic to > existing clients during this time. The mountd was hung in state nfsv4lck: > > [root@zfs-west1 ~]# ps -axgl|grep mount 0 38043 1 0 20 0 63672 17644 nfsv4lck Ds - 0:00.30 /usr/sbin/mountd -r -S /etc/exports /etc/zfs/exports > > It remains in this state for an indeterminate amount of time. I once saw > it continue on after several minutes, but most of the time it seems to > stay in this state for 15+ minutes. During this time, it does not > respond to kill -9 but it will eventually exit after many minutes. > Restarting mountd will allow the existing NFS clients to continue (they > hang when mountd exits), but any attempt to perform additional NFS > mounts will push mountd back into the bad state. > > This problem seems to be related to the number of NFS mounts off the > server. If we unmount some of the clients, we can successfully perform > the NFS mounts of the new ZFS filesystems. However, when we attempt to > mount all of the production NFS mounts, mountd will hang as above. > Stuff snipped for brevity... > > Any suggestion on how to resolve this issue? Since this is a production > server, my options for intrusive debugging are very limited.
> I think you should try the patch that is r300254 in stable/10. It is a simple patch you can apply to your kernel without other changes. http://svnweb.freebsd.org/base/stable/10/sys/fs/nfsserver/nfs_nfsdkrpc.c?r1=291869&r2=300254 It reverses the lock acquisition priority so that mountd doesn't wait until the nfsd threads are idle before updating exports. rick > Thanks. > > Marc > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > https://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@freebsd.org Sat Aug 13 10:52:06 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 3A275BB79E4 for ; Sat, 13 Aug 2016 10:52:06 +0000 (UTC) (envelope-from lev@FreeBSD.org) Received: from onlyone.friendlyhosting.spb.ru (onlyone.friendlyhosting.spb.ru [IPv6:2a01:4f8:201:6350::2]) by mx1.freebsd.org (Postfix) with ESMTP id 025EB1307 for ; Sat, 13 Aug 2016 10:52:06 +0000 (UTC) (envelope-from lev@FreeBSD.org) Received: from [IPv6:2001:470:923f:2:a976:d7ad:b132:8c3d] (unknown [IPv6:2001:470:923f:2:a976:d7ad:b132:8c3d]) (Authenticated sender: lev@serebryakov.spb.ru) by onlyone.friendlyhosting.spb.ru (Postfix) with ESMTPSA id 81645B85 for ; Sat, 13 Aug 2016 13:52:04 +0300 (MSK) Reply-To: lev@FreeBSD.org To: freebsd-fs@freebsd.org From: Lev Serebryakov Subject: zfs NFS share for several networks -- is here any plans to implement this?
Organization: FreeBSD Message-ID: <57AEFBC9.3000900@FreeBSD.org> Date: Sat, 13 Aug 2016 13:51:53 +0300 User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:38.0) Gecko/20100101 Thunderbird/38.1.0 MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha512; protocol="application/pgp-signature"; boundary="gjGNgFuDlFg8rI7hSwEqK5D8XgLonGvut" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.22 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 13 Aug 2016 10:52:06 -0000 This is an OpenPGP/MIME signed message (RFC 4880 and 3156) --gjGNgFuDlFg8rI7hSwEqK5D8XgLonGvut Content-Type: multipart/mixed; boundary="aMFa1Wx8SshauSoAmt4udpaUKKUtvSw9B" From: Lev Serebryakov Reply-To: lev@FreeBSD.org To: freebsd-fs@freebsd.org Message-ID: <57AEFBC9.3000900@FreeBSD.org> Subject: zfs NFS share for several networks -- is here any plans to implement this? --aMFa1Wx8SshauSoAmt4udpaUKKUtvSw9B Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: quoted-printable Here are two tickets, one with patch https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=147881 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=202820 This problem bites me again! Writing /etc/exports by hand for a deep ZFS filesystem tree is very tedious and error-prone. You need to copy and paste a bunch of lines for each filesystem in the tree, and when you need to change something you need to change every line in this file. If our "zfs share" supported multiple lines in the property, options would need to be changed in only one place: at the root of the filesystem tree being exported. Please?..
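Until sharenfs can carry per-network option lines, the usual workaround for the tedium Lev describes is to generate /etc/exports mechanically. A small sketch of that idea in Python; the dataset mountpoints, networks, and export options below are invented for illustration (on a real host the mountpoint list would come from something like `zfs list -H -o mountpoint -r pool`), and the exact exports(5) syntax should be checked against the man page:

```python
# Hypothetical workaround sketch: emit one /etc/exports line per
# (filesystem, client network) pair, so the export options live in one
# place in the script instead of being copy-and-pasted per line.

def exports_lines(mountpoints, networks, options="-maproot=root"):
    """Build exports(5)-style lines for every mountpoint/network pair."""
    lines = []
    for mp in mountpoints:
        for net in networks:
            lines.append("%s %s -network %s" % (mp, options, net))
    return lines

# Invented example tree and client networks:
mounts = ["/pool/srv", "/pool/srv/www", "/pool/srv/db"]
nets = ["192.168.1.0/24", "10.0.0.0/8"]

for line in exports_lines(mounts, nets):
    print(line)
```

Changing the options then means editing one string and regenerating the file, which is exactly the single-place-to-change property Lev is asking for in sharenfs itself.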
--=20 // Lev Serebryakov AKA Black Lion --aMFa1Wx8SshauSoAmt4udpaUKKUtvSw9B-- --gjGNgFuDlFg8rI7hSwEqK5D8XgLonGvut Content-Type: application/pgp-signature; name="signature.asc" Content-Description: OpenPGP digital signature Content-Disposition: attachment; filename="signature.asc" -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.22 (MingW32) iQJ8BAEBCgBmBQJXrvvVXxSAAAAAAC4AKGlzc3Vlci1mcHJAbm90YXRpb25zLm9w ZW5wZ3AuZmlmdGhob3JzZW1hbi5uZXRGOTZEMUNBMEI1RjQzMThCNjc0QjMzMEFF QUIwM0M1OEJGREM0NzhGAAoJEOqwPFi/3EePt2EQAK7B0T0nkEDzelDhv40UAzxV 2zUIxaJ14AkSREnIG/vELGcDBn/E4ZZZ2FnoBaBfjcNRU8EzdZ7OQxwYGBQOkf59 7K5o+3ICAH2+hYnCKyLmzbht9qq99enBYTpx6IzvPCaeC+4GDJNW6kbXEspgrJTh 5m+qi7V/UHcIQIG/ammaFWSMLFgfzx4JgHPxGAYFPYoLS12Tl2OzpVs80IB99Nvz onoKDw65TqzmGI2Q5lQsQAl5LVK0+OTsbVZGGrCGqMo6WE6zF6fBgz6SoFyoDQN3 fidCgqxKPJ17T3if6xJX7ht6zlV/7+XLMHalGKQIf8fO3YMLinCkDIFKjBrY/VUj PoK8J62ojOyX1c8np/V4KIqww+Z91oleElqAB0EtrMzIoFzQICITQ5uePn7AYwCE fQIp1pZzVmLOAV0knMqqVosF5e6pvwo5yVrtykn3EHEVEKBEqv1zUA6aGaM6FaK5 EFqO48r5aAWB9SKY7RaGKrRlY2cCA1/UyfZ/a89lEiqwE6BBe0aorl4YrdaslkCA lsZBfiaERh1fWcunn9NaPw4CXpfSjbPyLt1Qatuw5sT5GVt/7K16kMsLKwDSb0jR hT7Bus5Vs75nOiv/3CELj1nmZI3zEV8zivemtWD+SDYPmRVkTr0l43iY56JZ9AqZ rO+D5IXtDwMDXSPMCrob =XM63 -----END PGP SIGNATURE----- --gjGNgFuDlFg8rI7hSwEqK5D8XgLonGvut-- From owner-freebsd-fs@freebsd.org Sat Aug 13 11:36:41 2016 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id BE079BB8D85 for ; Sat, 13 Aug 2016 11:36:41 +0000 (UTC) (envelope-from lev@FreeBSD.org) Received: from onlyone.friendlyhosting.spb.ru (onlyone.friendlyhosting.spb.ru [IPv6:2a01:4f8:201:6350::2]) by mx1.freebsd.org (Postfix) with ESMTP id 8895418B5; Sat, 13 Aug 2016 11:36:41 +0000 (UTC) (envelope-from lev@FreeBSD.org) Received: from [IPv6:2001:470:923f:2:a976:d7ad:b132:8c3d] (unknown [IPv6:2001:470:923f:2:a976:d7ad:b132:8c3d]) (Authenticated sender: 
lev@serebryakov.spb.ru) by onlyone.friendlyhosting.spb.ru (Postfix) with ESMTPSA id 381A2B8E; Sat, 13 Aug 2016 14:36:34 +0300 (MSK) Reply-To: lev@FreeBSD.org To: freebsd-fs@freebsd.org, rmacklem@FreeBSD.org From: Lev Serebryakov Subject: Looks like r304026 breaks buildworld Organization: FreeBSD Message-ID: <57AF0643.4000705@FreeBSD.org> Date: Sat, 13 Aug 2016 14:36:35 +0300 User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:38.0) Gecko/20100101 Thunderbird/38.1.0 MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha512; protocol="application/pgp-signature"; boundary="9MQl24baC3CH0d11bWj8LfVT3GPEhGDuq" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.22 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 13 Aug 2016 11:36:41 -0000 This is an OpenPGP/MIME signed message (RFC 4880 and 3156) --9MQl24baC3CH0d11bWj8LfVT3GPEhGDuq Content-Type: multipart/mixed; boundary="aeh6Lp26JBqlVC0lMxswCEXIWldtc9L3a" From: Lev Serebryakov Reply-To: lev@FreeBSD.org To: freebsd-fs@freebsd.org, rmacklem@FreeBSD.org Message-ID: <57AF0643.4000705@FreeBSD.org> Subject: Looks like r304026 breaks buildworld --aeh6Lp26JBqlVC0lMxswCEXIWldtc9L3a Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: quoted-printable "Subsequent commits will update nfsstat(8) to use the new fields." 
says the commit log message, but now (r304040) I cannot build nfsstat at all (this is only one error out of many alike): /usr/local/poudriere/jails/12x64-gw/usr/src/usr.bin/nfsstat/nfsstat.c:569:7: error: array index 79 is past the end of the array (which contains 49 elements) [-Werror,-Warray-bounds] ext_nfsstats.srvrpccnt[NFSV4OP_PATHCONF], ^ ~~~~~~~~~~~~~~~~ /usr/obj/usr/local/poudriere/jails/12x64-gw/usr/src/tmp/usr/include/fs/nfs/nfsport.h:457:2: note: array 'srvrpccnt' declared here int srvrpccnt[NFSV4OP_NOPS + NFSV4OP_FAKENOPS]; -- // Lev Serebryakov AKA Black Lion --aeh6Lp26JBqlVC0lMxswCEXIWldtc9L3a-- --9MQl24baC3CH0d11bWj8LfVT3GPEhGDuq Content-Type: application/pgp-signature; name="signature.asc" Content-Description: OpenPGP digital signature Content-Disposition: attachment; filename="signature.asc" -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.22 (MingW32) iQJ8BAEBCgBmBQJXrwZDXxSAAAAAAC4AKGlzc3Vlci1mcHJAbm90YXRpb25zLm9w ZW5wZ3AuZmlmdGhob3JzZW1hbi5uZXRGOTZEMUNBMEI1RjQzMThCNjc0QjMzMEFF QUIwM0M1OEJGREM0NzhGAAoJEOqwPFi/3EePslsQAM40mBNLRLA6qSL11HKTsQ5x 0oqFGJqOFZxxtradIFCohyQ82N6dqQHVxgcN1SYDbkwFoFVQg/GWaH7ZOieiMaOJ 4pnaz257uIdHZsdGkRwqjCV1+oG88hbPaz/5Xj9b06I0mkLOz0xmtabUQqBwbmUp aKTae6Oa3VNP9pLdQQElCwVKiixnN9jZV4vaIJ/Rcn2LrdYv74i4f0idnboK4x3j aCWTIwrzn/hQwh5mX9w+LwRoFjRjDQ/2ACi9HIjdkrP0Y1zKwy0B73qtSuYAMBjN lAtJ3PdFiZ4/9xzXx6PLwH22lDb880vN7K8eKDDF5mTQnGUGrXY6MCkotm9VSfQn pw8Fi9t/GY1c0aEbg0VzXaw/ApM4dUF1gRsEyxLBfBEaGwibLUrzgHYTHcdaaeWt xlVQbO1Tz4FxxDRFjCr6jsg+4LbBoqXIyAD5H0AiLHPjaSBb8LRcWvDoi0XO1gvn 1WgWliBqd5cLDhKuZvXeINvERKS5xcWE/F+FnGjkehGy4zXZ24y6I8+9QybYRiF6 0b3I5DBBUaL5klAmQ0yzCne73V0nY3ISgv5feU27GQhxnlGc/3WE8nKcrYRs0DYx 7zFCP1iEAXTCJui2N/uxKt9xGvXKTm6ECK2ATDa4gx5TvtLwaPXnbxQeMzU1G0BT 6YD+CH3VKeUV8hKNXzAP =iSvO -----END PGP SIGNATURE----- --9MQl24baC3CH0d11bWj8LfVT3GPEhGDuq--