From owner-freebsd-fs@FreeBSD.ORG Sun Jan 6 17:10:27 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id B0B135B3 for ; Sun, 6 Jan 2013 17:10:27 +0000 (UTC) (envelope-from gosha-necr@yandex.ru) Received: from forward7.mail.yandex.net (forward7.mail.yandex.net [IPv6:2a02:6b8:0:202::2]) by mx1.freebsd.org (Postfix) with ESMTP id 6A5A19C9 for ; Sun, 6 Jan 2013 17:10:27 +0000 (UTC) Received: from web19e.yandex.ru (web19e.yandex.ru [77.88.60.23]) by forward7.mail.yandex.net (Yandex) with ESMTP id AD0291C11C5 for ; Sun, 6 Jan 2013 21:10:25 +0400 (MSK) Received: from 127.0.0.1 (localhost.localdomain [127.0.0.1]) by web19e.yandex.ru (Yandex) with ESMTP id 74BAA108800F; Sun, 6 Jan 2013 21:10:25 +0400 (MSK) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yandex.ru; s=mail; t=1357492225; bh=VOkSVDc9p6i0onO8Qm5LlZU0dPIAOntxKmGQPkXFA9Q=; h=From:To:Subject:Date; b=GOxQ4QNtT93RPYcZRs4rYG6kiaL8zIux4SRYXzQh9iU1XmZcgdjKEXX26ZwYA1Q9D 67G7Z8auHvYsTSCwHPCQEsCp6EPTs0wdnHXo00e65hz529Yo7sPZGYfjaCx578cvBP yl+usGcrvJEF7exbXolNwhh6JfyqN9yynQaHtKG0= Received: from [178.47.83.4] ([178.47.83.4]) by web19e.yandex.ru with HTTP; Sun, 06 Jan 2013 21:10:25 +0400 From: =?koi8-r?B?59XM0cXXIOfP28E=?= To: freebsd-fs@freebsd.org Subject: Is it planned to port davfs2 to FreeBSD? Message-Id: <568241357492225@web19e.yandex.ru> X-Mailer: Yamail [ http://yandex.ru ] 5.0 Date: Sun, 06 Jan 2013 23:10:25 +0600 X-Mailman-Approved-At: Sun, 06 Jan 2013 17:16:11 +0000 MIME-Version: 1.0 Content-Type: text/plain X-Content-Filtered-By: Mailman/MimeDel 2.1.14 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 06 Jan 2013 17:10:27 -0000 Good day to everyone! I want to ask, is it planned to port on FreeBSD davfs2 filesystem (Project URL: http://savannah.nongnu.org/projects/davfs2 )? That type of FS are actively used on cloud services like DropBox, Yandex.Disk, Google.Drive, etc. Thank you! From owner-freebsd-fs@FreeBSD.ORG Sun Jan 6 18:12:33 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 292F5804 for ; Sun, 6 Jan 2013 18:12:33 +0000 (UTC) (envelope-from yanegomi@gmail.com) Received: from mail-pa0-f51.google.com (mail-pa0-f51.google.com [209.85.220.51]) by mx1.freebsd.org (Postfix) with ESMTP id EFAF7116F for ; Sun, 6 Jan 2013 18:12:32 +0000 (UTC) Received: by mail-pa0-f51.google.com with SMTP id fb11so10174852pad.38 for ; Sun, 06 Jan 2013 10:12:26 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=x-received:subject:mime-version:content-type:from:in-reply-to:date :cc:content-transfer-encoding:message-id:references:to:x-mailer; bh=KRng/rO2nbO8dy4v7wFCjA+qKiTsR7o3pfFaS1pj49w=; b=XFZjyToEJBd3KXLcBmubHCVh/d0AU2n1kZyTma5+Gnqah3b6GXOdm51cvr2Z1q/LW9 7V07GuPwWsEogXKoe5X9RiyfnFVAr37tU0qx6+Kw8mChJcaS0lk2PEUxgaExaNwV223U isdedzmUA27uxQbqOdXqCoQahA+ZdydpjcwQUfyn7cVvGq0uG3BpLyXbFTgXoGH5ZuG3 9rfzkva/ng+hcuerfCS70442GaKF7YPAvSqwMjGIUrd2c/auRY9mHGPSNb3z3XGt8grC xUQfkiXaTGAirDk2iRKDPxnlRYE9SZzx9GWOVg7DBMApiU5HNCeykVu6J3tm46AHZC7h yV8Q== X-Received: by 10.68.245.37 with SMTP id xl5mr179211774pbc.120.1357495946835; Sun, 06 Jan 2013 10:12:26 -0800 (PST) Received: from [192.168.20.11] (c-24-19-191-56.hsd1.wa.comcast.net. [24.19.191.56]) by mx.google.com with ESMTPS id vx2sm36112988pbc.33.2013.01.06.10.12.25 (version=TLSv1/SSLv3 cipher=OTHER); Sun, 06 Jan 2013 10:12:26 -0800 (PST) Subject: Re: Is it planned to port davfs2 to FreeBSD? Mime-Version: 1.0 (Apple Message framework v1283) Content-Type: text/plain; charset=utf-8 From: Garrett Cooper In-Reply-To: <568241357492225@web19e.yandex.ru> Date: Sun, 6 Jan 2013 10:12:47 -0800 Content-Transfer-Encoding: quoted-printable Message-Id: <405BA252-E507-4BFB-9B5A-B7F1DB513533@gmail.com> References: <568241357492225@web19e.yandex.ru> To: =?utf-8?B?0JPRg9C70Y/QtdCyINCT0L7RiNCw?= X-Mailer: Apple Mail (2.1283) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 06 Jan 2013 18:12:33 -0000 On Jan 6, 2013, at 9:10 AM, =D0=93=D1=83=D0=BB=D1=8F=D0=B5=D0=B2 = =D0=93=D0=BE=D1=88=D0=B0 wrote: > Good day to everyone! >=20 > I want to ask, is it planned to port on FreeBSD davfs2 filesystem > (Project URL: http://savannah.nongnu.org/projects/davfs2 )? >=20 > That type of FS are actively used on cloud services like DropBox, > Yandex.Disk, Google.Drive, etc. I think Scott Long's reply still applies: = http://markmail.org/message/cl55ve7yerarvnta#query:+page:1+mid:2ni5pjsi5od= wjvcm+state:results . Cheers, -Garrett= From owner-freebsd-fs@FreeBSD.ORG Sun Jan 6 18:38:05 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 17286305 for ; Sun, 6 Jan 2013 18:38:05 +0000 (UTC) (envelope-from takeda@takeda.tk) Received: from chinatsu.takeda.tk (mail.takeda.tk [74.0.89.210]) by mx1.freebsd.org (Postfix) with ESMTP id D67611231 for ; Sun, 6 Jan 2013 18:38:04 +0000 (UTC) Received: from [10.185.122.74] (24.sub-70-197-129.myvzw.com [70.197.129.24]) (authenticated bits=0) by chinatsu.takeda.tk (8.14.5/8.14.5) with ESMTP id r06Ibuxn081334 (version=TLSv1/SSLv3 cipher=RC4-MD5 bits=128 verify=NO); Sun, 6 Jan 2013 10:37:57 -0800 (PST) (envelope-from takeda@takeda.tk) User-Agent: K-9 Mail for Android In-Reply-To: <405BA252-E507-4BFB-9B5A-B7F1DB513533@gmail.com> References: <568241357492225@web19e.yandex.ru> <405BA252-E507-4BFB-9B5A-B7F1DB513533@gmail.com> MIME-Version: 1.0 Subject: Re: Is it planned to port davfs2 to FreeBSD? From: Derek Kulinski Date: Sun, 06 Jan 2013 10:37:48 -0800 To: Garrett Cooper , =?UTF-8?B?0JPRg9C70Y/QtdCyINCT0L7RiNCw?= Message-ID: X-Virus-Scanned: clamav-milter 0.97.6 at chinatsu.takeda.tk X-Virus-Status: Clean Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit X-Content-Filtered-By: Mailman/MimeDel 2.1.14 Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 06 Jan 2013 18:38:05 -0000 Garrett Cooper wrote: >On Jan 6, 2013, at 9:10 AM, Гуляев Гоша wrote: > >> Good day to everyone! >> >> I want to ask, is it planned to port on FreeBSD davfs2 filesystem >> (Project URL: http://savannah.nongnu.org/projects/davfs2 )? >> >> That type of FS are actively used on cloud services like DropBox, >> Yandex.Disk, Google.Drive, etc. > > I think Scott Long's reply still applies: >http://markmail.org/message/cl55ve7yerarvnta#query:+page:1+mid:2ni5pjsi5odwjvcm+state:results >. >Cheers, >-Garrett >_______________________________________________ >freebsd-fs@freebsd.org mailing list >http://lists.freebsd.org/mailman/listinfo/freebsd-fs >To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" Doesn't FreeBSD support fuse? I never used it but I read it does. I would imagine that would take much less work to port it than in the past. -- Sent from my Android phone with K-9 Mail. Please excuse my brevity. From owner-freebsd-fs@FreeBSD.ORG Sun Jan 6 19:08:07 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id B58E9701 for ; Sun, 6 Jan 2013 19:08:07 +0000 (UTC) (envelope-from yanegomi@gmail.com) Received: from mail-pa0-f46.google.com (mail-pa0-f46.google.com [209.85.220.46]) by mx1.freebsd.org (Postfix) with ESMTP id 61EB31479 for ; Sun, 6 Jan 2013 19:08:07 +0000 (UTC) Received: by mail-pa0-f46.google.com with SMTP id bh2so10200112pad.5 for ; Sun, 06 Jan 2013 11:08:01 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=x-received:subject:mime-version:content-type:from:in-reply-to:date :cc:content-transfer-encoding:message-id:references:to:x-mailer; bh=ASXllCwmnFiyLmH0L3rJEBhiOC1onGBcgWpf5Y538Jo=; b=K5o1NfAL0vqXnVG3VLL26yLMFUwxMsn2qjlyxAhl6Y/3ru79pcaIE40em6ud7ilhqH 5viSjBYY1IA5HBEf6lAhu7CT7fWFckvtH+ATLzYrTpVVgehU0JquDPIg1bYRsDEl3ML/ cVgTjai53F1A01ImliPi+neGYufV9D3Q/c0XHn18m48eYwokbFz/9+ObPvWQveh1JiGM JXeNc++gAI1uE4HzLNTnSonsXk/QQi2WP9RlAGY73TWkxU4liE9lLXmROo0VmKG7bsdm gJUWkBw+6TBORyFOEkJouIHyaSumo4S9cK4pnhmmPxPBJXeWB15vi61pJvECbEnkkpgi FgHw== X-Received: by 10.66.82.170 with SMTP id j10mr172619466pay.9.1357499281484; Sun, 06 Jan 2013 11:08:01 -0800 (PST) Received: from [192.168.20.11] (c-24-19-191-56.hsd1.wa.comcast.net. [24.19.191.56]) by mx.google.com with ESMTPS id ol4sm36164754pbb.58.2013.01.06.11.07.59 (version=TLSv1/SSLv3 cipher=OTHER); Sun, 06 Jan 2013 11:08:00 -0800 (PST) Subject: Re: Is it planned to port davfs2 to FreeBSD? Mime-Version: 1.0 (Apple Message framework v1283) Content-Type: text/plain; charset=koi8-r From: Garrett Cooper In-Reply-To: <853751357498597@web9f.yandex.ru> Date: Sun, 6 Jan 2013 11:07:57 -0800 Content-Transfer-Encoding: quoted-printable Message-Id: References: <568241357492225@web19e.yandex.ru> <405BA252-E507-4BFB-9B5A-B7F1DB513533@gmail.com> <853751357498597@web9f.yandex.ru> To: =?utf-8?B?0JPRg9C70Y/QtdCyINCT0L7RiNCw?= X-Mailer: Apple Mail (2.1283) Cc: "freebsd-fs@freebsd.org" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 06 Jan 2013 19:08:07 -0000 On Jan 6, 2013, at 10:56 AM, =E7=D5=CC=D1=C5=D7 =E7=CF=DB=C1 wrote: > Yes, certainly it wasn't so correct for my part, to ask developers, to = make something that is probably necessary only for me to one therefore I = am sorry :) > I am not the programmer therefore I and write to this list of mailing, = in hope that someone at whom will arise interest in this invention will = undertake it, as a result it will make FreeBSD a little more convenient = system for some users. And it's a valid question/concern :) (especially when there = isn't clear documentation noting what's in progress and what's not).. = fuse does exist, and is somewhat functional the last time I heard (and = have seen firsthand with some 3rd party apps like open-vm-tools), but I = haven't seen anything about this filesystem on the lists yet and google = hasn't really turned up anything quickly either. So, my gut instinct = says no, but there might be someone doing a skunkworks project somewhere = that hasn't been publicly noted. Best of luck finding someone else working on it :). Cheers, -Garrett From owner-freebsd-fs@FreeBSD.ORG Sun Jan 6 22:18:01 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id CAFD695 for ; Sun, 6 Jan 2013 22:18:01 +0000 (UTC) (envelope-from lists@eitanadler.com) Received: from mail-la0-f43.google.com (mail-la0-f43.google.com [209.85.215.43]) by mx1.freebsd.org (Postfix) with ESMTP id 53E2E1AB9 for ; Sun, 6 Jan 2013 22:18:00 +0000 (UTC) Received: by mail-la0-f43.google.com with SMTP id eg20so13894185lab.16 for ; Sun, 06 Jan 2013 14:17:59 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=eitanadler.com; s=0xdeadbeef; h=mime-version:in-reply-to:references:from:date:message-id:subject:to :cc:content-type; bh=NTMg7IAz/JMDDYbWurJUrl1Erk4FLGnG30iiSAAXOzY=; b=AB3T5iC5yy5yFq/EmbaapvhDNoZ3H0v69GMDrZZeDU6WtCDl3UWOKMmAcJSDMh8B5r cO5fiXZjHv+P0VtK2mHJmhnIVcnFipyGeBTJy/gHzVxaMIAvhvQKe4Dzr7Ohke+JVuCy o3ldipKj8NWc+ZxxuKbZf4e+Zca4CJa5ht3lg= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20120113; h=mime-version:in-reply-to:references:from:date:message-id:subject:to :cc:content-type:x-gm-message-state; bh=NTMg7IAz/JMDDYbWurJUrl1Erk4FLGnG30iiSAAXOzY=; b=W7/McPYdkcBUtKZ4/m+FMo3HmqbpizlgN82k8Y5MV4H6iWaS2tKyrUMcIgqBvqlImb hlA7+WAe+iJyge0L960A7rcEUCjdIgF0P3dd6V4yENzuWz0vs48i4KavTyPCTQfII7Uj Y2MgIKNDoRq6QYSQP/pOE3Jqm1bjB3JAs3gkoYG4u8jH9uJ2mAKAzOLTX6wymu+8PXaf CR7XJR0FkcnLP9aOGlTSrT4xV38Nb/AxpK8Kf0AqT1wOfP85vSNSV5l5Uw8T54N8700M s97Tw4QvZBud7DbL/wOqRgXt99MSCzkxUo3NYx9UFhGY9h6DV9boLxJRiSqajJWJcNC7 DcFQ== Received: by 10.152.148.40 with SMTP id tp8mr56132018lab.30.1357510679791; Sun, 06 Jan 2013 14:17:59 -0800 (PST) MIME-Version: 1.0 Received: by 10.112.75.200 with HTTP; Sun, 6 Jan 2013 14:17:29 -0800 (PST) In-Reply-To: References: <568241357492225@web19e.yandex.ru> <405BA252-E507-4BFB-9B5A-B7F1DB513533@gmail.com> From: Eitan Adler Date: Sun, 6 Jan 2013 17:17:29 -0500 Message-ID: Subject: Re: Is it planned to port davfs2 to FreeBSD? To: Derek Kulinski Content-Type: text/plain; charset=UTF-8 X-Gm-Message-State: ALoCoQnWX4x1VW3PFrS3lVdto8XWrSU1DgqSofUe7lQt64C1E4oINl7yj2Wuum6fS87aZyJWDUkZ Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 06 Jan 2013 22:18:01 -0000 On 6 January 2013 13:37, Derek Kulinski wrote: > Doesn't FreeBSD support fuse? I never used it but I read it does. I would imagine that would take much less work to port it than in the past. FUSE needed work to run smoothly. It was recently imported into HEAD and a lot of work was done to de-bugify it. -- Eitan Adler From owner-freebsd-fs@FreeBSD.ORG Sun Jan 6 22:35:29 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 1E96C2FD for ; Sun, 6 Jan 2013 22:35:29 +0000 (UTC) (envelope-from universite@ukr.net) Received: from otrada.od.ua (universite-1-pt.tunnel.tserv24.sto1.ipv6.he.net [IPv6:2001:470:27:140::2]) by mx1.freebsd.org (Postfix) with ESMTP id 826C11B44 for ; Sun, 6 Jan 2013 22:35:27 +0000 (UTC) Received: from [10.0.0.10] ([10.0.0.10]) (authenticated bits=0) by otrada.od.ua (8.14.4/8.14.5) with ESMTP id r06MZMOn082158 for ; Mon, 7 Jan 2013 00:35:23 +0200 (EET) (envelope-from universite@ukr.net) Message-ID: <50E9FC25.2040405@ukr.net> Date: Mon, 07 Jan 2013 00:35:17 +0200 From: "Vladislav V. Prodan" User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:17.0) Gecko/17.0 Thunderbird/17.0 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: Re: ZFS UUID References: <20130104150040.9dd114f700b69adecb233d65@linguamatics.com> In-Reply-To: Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.2.7 (otrada.od.ua [89.209.81.54]); Mon, 07 Jan 2013 00:35:23 +0200 (EET) X-Spam-Status: No, score=-101.0 required=5.0 tests=ALL_TRUSTED, FREEMAIL_FROM, T_TO_NO_BRKTS_FREEMAIL, USER_IN_WHITELIST autolearn=unavailable version=3.3.1 X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on mary-teresa.otrada.od.ua X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 06 Jan 2013 22:35:29 -0000 04.01.2013 17:53, Steven Hartland пишет: > Is the following what your looking for:- > > zfs get guid tank So what's up with the format of guid? # zfs get guid tank NAME PROPERTY VALUE SOURCE tank guid 14,7E - However: # zpool get all tank | egrep "guid|NAME|version" NAME PROPERTY VALUE SOURCE tank guid 6115223751951339756 default tank version 28 default # uname -rsv FreeBSD 9.0-STABLE FreeBSD 9.0-STABLE #0: Tue Jul 10 14:42:34 EEST 2012 root@XXX:/usr/obj/usr/src/sys/YYY.20 > > Regards > Steve > ----- Original Message ----- From: "Attila Bogár" > > To: > Sent: Friday, January 04, 2013 3:00 PM > Subject: ZFS UUID > > > Hi List, > > Is it possible to get the UUID of a ZFS _dataset_ using the zfs command > which is unique across replication? > > -- Vladislav V. Prodan System & Network Administrator http://support.od.ua +380 67 4584408, +380 99 4060508 VVP88-RIPE From owner-freebsd-fs@FreeBSD.ORG Sun Jan 6 18:56:42 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 437345FF for ; Sun, 6 Jan 2013 18:56:42 +0000 (UTC) (envelope-from gosha-necr@yandex.ru) Received: from forward14.mail.yandex.net (forward14.mail.yandex.net [IPv6:2a02:6b8:0:801::4]) by mx1.freebsd.org (Postfix) with ESMTP id C8BE31434 for ; Sun, 6 Jan 2013 18:56:41 +0000 (UTC) Received: from web9f.yandex.ru (web9f.yandex.ru [95.108.131.225]) by forward14.mail.yandex.net (Yandex) with ESMTP id 28BFC1980E39; Sun, 6 Jan 2013 22:56:39 +0400 (MSK) Received: from 127.0.0.1 (localhost.localdomain [127.0.0.1]) by web9f.yandex.ru (Yandex) with ESMTP id D288451D8001; Sun, 6 Jan 2013 22:56:37 +0400 (MSK) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yandex.ru; s=mail; t=1357498598; bh=BM/Fq3yJlL466k8kJa90Hb6Znq1nW9aiij5lLuOzf4w=; h=From:To:Cc:In-Reply-To:References:Subject:Date; b=vujWPyK8VCazVWYnxQu4aaOusK8EHuNtrFPMHza9J+yuB3uRxxERI4v+s70GFZINA 2pYHUtXzEeViQ+JZW6LTEjFDjsQQ5HBO33bNDdQ2+AwsmEWlmRHY6MIjDj4JOiUfqW qTBEqDFFh5zVXuxfWg5+AL/RWzpDmVGTpRcsVcTw= Received: from [178.47.83.4] ([178.47.83.4]) by web9f.yandex.ru with HTTP; Sun, 06 Jan 2013 22:56:37 +0400 From: =?koi8-r?B?59XM0cXXIOfP28E=?= To: Garrett Cooper In-Reply-To: <405BA252-E507-4BFB-9B5A-B7F1DB513533@gmail.com> References: <568241357492225@web19e.yandex.ru> <405BA252-E507-4BFB-9B5A-B7F1DB513533@gmail.com> Subject: Re: Is it planned to port davfs2 to FreeBSD? Message-Id: <853751357498597@web9f.yandex.ru> X-Mailer: Yamail [ http://yandex.ru ] 5.0 Date: Mon, 07 Jan 2013 00:56:37 +0600 X-Mailman-Approved-At: Sun, 06 Jan 2013 23:14:38 +0000 MIME-Version: 1.0 Content-Type: text/plain; charset="koi8-r" X-Content-Filtered-By: Mailman/MimeDel 2.1.14 Cc: "freebsd-fs@freebsd.org" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 06 Jan 2013 18:56:42 -0000 Yes, certainly it wasn't so correct for my part, to ask developers, to make something that is probably necessary only for me to one therefore I am sorry :) I am not the programmer therefore I and write to this list of mailing, in hope that someone at whom will arise interest in this invention will undertake it, as a result it will make FreeBSD a little more convenient system for some users. 07.01.2013, 00:12, "Garrett Cooper" : On Jan 6, 2013, at 9:10 AM, wrote: Good day to everyone! I want to ask, is it planned to port on FreeBSD davfs2 filesystem (Project URL: [1]http://savannah.nongnu.org/projects/davfs2 )? That type of FS are actively used on cloud services like DropBox, Yandex.Disk, Google.Drive, etc. I think Scott Long's reply still applies: [2]http://markmail.org/message/cl55ve7yerarvnta#query:+page:1+mid:2n i5pjsi5odwjvcm+state:results . Cheers, -Garrett -------------------------------------------- , . References 1. http://savannah.nongnu.org/projects/davfs2 2. http://markmail.org/message/cl55ve7yerarvnta#query:+page:1+mid:2ni5pjsi5odwjvcm+state:results From owner-freebsd-fs@FreeBSD.ORG Sun Jan 6 19:03:57 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 0C63D6DF for ; Sun, 6 Jan 2013 19:03:57 +0000 (UTC) (envelope-from gosha-necr@yandex.ru) Received: from forward12.mail.yandex.net (forward12.mail.yandex.net [IPv6:2a02:6b8:0:801::2]) by mx1.freebsd.org (Postfix) with ESMTP id 7F1291464 for ; Sun, 6 Jan 2013 19:03:56 +0000 (UTC) Received: from web10f.yandex.ru (web10f.yandex.ru [95.108.131.154]) by forward12.mail.yandex.net (Yandex) with ESMTP id 30F57C21064 for ; Sun, 6 Jan 2013 23:03:55 +0400 (MSK) Received: from 127.0.0.1 (localhost.localdomain [127.0.0.1]) by web10f.yandex.ru (Yandex) with ESMTP id EB6B941F0061; Sun, 6 Jan 2013 23:03:54 +0400 (MSK) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yandex.ru; s=mail; t=1357499035; bh=ODrzGdt/WFXB1KICtM9x8TDCxyRiIvp6HQb6FtR/kdA=; h=From:To:In-Reply-To:References:Subject:Date; b=QcFSETSr6gZgyO0b5+fQHxf3+WZM/7B9cVSthKhPYOnniP3I0AIROnMHGDM+Sc16L 7+EwVCL/wDg58vTORLqiIxHP5ldlAY1hVrxf0wG/ksdoc+8p3nFqvppeX+xojW9mTT o3BzqVht0z5dkLmDuLXQiEVJQ575FlNlutYr60Q0= Received: from [178.47.83.4] ([178.47.83.4]) by web10f.yandex.ru with HTTP; Sun, 06 Jan 2013 23:03:54 +0400 From: =?koi8-r?B?59XM0cXXIOfP28E=?= To: freebsd-fs@freebsd.org In-Reply-To: References: <568241357492225@web19e.yandex.ru> <405BA252-E507-4BFB-9B5A-B7F1DB513533@gmail.com> Subject: Re: Is it planned to port davfs2 to FreeBSD? Message-Id: <107751357499034@web10f.yandex.ru> X-Mailer: Yamail [ http://yandex.ru ] 5.0 Date: Mon, 07 Jan 2013 01:03:54 +0600 X-Mailman-Approved-At: Sun, 06 Jan 2013 23:14:49 +0000 MIME-Version: 1.0 Content-Type: text/plain; charset="koi8-r" X-Content-Filtered-By: Mailman/MimeDel 2.1.14 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 06 Jan 2013 19:03:57 -0000 07.01.2013, 00:38, "Derek Kulinski" : Garrett Cooper <[1]yanegomi@gmail.com> wrote: On Jan 6, 2013, at 9:10 AM, wrote: Good day to everyone! I want to ask, is it planned to port on FreeBSD davfs2 filesystem (Project URL: [2]http://savannah.nongnu.org/projects/davfs2 )? That type of FS are actively used on cloud services like DropBox, Yandex.Disk, Google.Drive, etc. I think Scott Long's reply still applies: [3]http://markmail.org/message/cl55ve 7yerarvnta#query:+page:1+mid:2ni5pjsi5odwjvcm+state:results . Cheers, -Garrett _______________________________________________________________ [4]freebsd-fs@freebsd.org mailing list [5]http://lists.freebsd.org/mailman/listinfo/freebsd-fs To unsubscribe, send any mail to "[6]freebsd-fs-unsubscribe@freebsd.org" Doesn't FreeBSD support fuse? I never used it but I read it does. I would imagine that would take much less work to port it than in the past. -- Sent from my Android phone with K-9 Mail. Please excuse my brevity. Actually for FreeBSD there is fuse-version davfs sysutils/wdfs but it wasn't updated since 2007, and it is possible therefore it doesn't work with the yandex.disk and dropbox services. And I tried davfs2 under Linux, and it perfectly works, unfortunately I have no desire to work under Linux :) -------------------------------------------- , . References 1. mailto:yanegomi@gmail.com 2. http://savannah.nongnu.org/projects/davfs2 3. http://markmail.org/message/cl55ve7yerarvnta#query:+page:1+mid:2ni5pjsi5odwjvcm+state:results 4. mailto:freebsd-fs@freebsd.org 5. http://lists.freebsd.org/mailman/listinfo/freebsd-fs 6. mailto:freebsd-fs-unsubscribe@freebsd.org From owner-freebsd-fs@FreeBSD.ORG Mon Jan 7 00:33:48 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id D4B3D428 for ; Mon, 7 Jan 2013 00:33:48 +0000 (UTC) (envelope-from prvs=171950f360=killing@multiplay.co.uk) Received: from mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) by mx1.freebsd.org (Postfix) with ESMTP id 5959F1DFA for ; Mon, 7 Jan 2013 00:33:48 +0000 (UTC) Received: from r2d2 ([188.220.16.49]) by mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) (MDaemon PRO v10.0.4) with ESMTP id md50001597728.msg for ; Mon, 07 Jan 2013 00:33:39 +0000 X-Spam-Processed: mail1.multiplay.co.uk, Mon, 07 Jan 2013 00:33:39 +0000 (not processed: message from valid local sender) X-MDRemoteIP: 188.220.16.49 X-Return-Path: prvs=171950f360=killing@multiplay.co.uk X-Envelope-From: killing@multiplay.co.uk X-MDaemon-Deliver-To: freebsd-fs@freebsd.org Message-ID: <5C206711ED6343DEBA844DAAD9BA4E3F@multiplay.co.uk> From: "Steven Hartland" To: "Vladislav V. Prodan" , References: <20130104150040.9dd114f700b69adecb233d65@linguamatics.com> <50E9FC25.2040405@ukr.net> Subject: Re: ZFS UUID Date: Mon, 7 Jan 2013 00:33:52 -0000 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="utf-8"; reply-type=original Content-Transfer-Encoding: 8bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 07 Jan 2013 00:33:48 -0000 ----- Original Message ----- From: "Vladislav V. Prodan" To: Sent: Sunday, January 06, 2013 10:35 PM Subject: Re: ZFS UUID > 04.01.2013 17:53, Steven Hartland пишет: >> Is the following what your looking for:- >> >> zfs get guid tank > > So what's up with the format of guid? > > # zfs get guid tank > NAME PROPERTY VALUE SOURCE > tank guid 14,7E - > > However: > # zpool get all tank | egrep "guid|NAME|version" > NAME PROPERTY VALUE SOURCE > tank guid 6115223751951339756 default > tank version 28 default > > # uname -rsv > FreeBSD 9.0-STABLE FreeBSD 9.0-STABLE #0: Tue Jul 10 14:42:34 EEST 2012 > root@XXX:/usr/obj/usr/src/sys/YYY.20 guid is just a number so looks fine to me. what did you expect to see? Regards Steve ================================================ This e.mail is private and confidential between Multiplay (UK) Ltd. and the person or entity to whom it is addressed. In the event of misdirection, the recipient is prohibited from using, copying, printing or otherwise disseminating it or any information contained in it. In the event of misdirection, illegible or incomplete transmission please telephone +44 845 868 1337 or return the E.mail to postmaster@multiplay.co.uk. From owner-freebsd-fs@FreeBSD.ORG Mon Jan 7 00:35:43 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 389C5602 for ; Mon, 7 Jan 2013 00:35:43 +0000 (UTC) (envelope-from prvs=171950f360=killing@multiplay.co.uk) Received: from mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) by mx1.freebsd.org (Postfix) with ESMTP id D309F1E0E for ; Mon, 7 Jan 2013 00:35:42 +0000 (UTC) Received: from r2d2 ([188.220.16.49]) by mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) (MDaemon PRO v10.0.4) with ESMTP id md50001597754.msg for ; Mon, 07 Jan 2013 00:35:41 +0000 X-Spam-Processed: mail1.multiplay.co.uk, Mon, 07 Jan 2013 00:35:41 +0000 (not processed: message from valid local sender) X-MDRemoteIP: 188.220.16.49 X-Return-Path: prvs=171950f360=killing@multiplay.co.uk X-Envelope-From: killing@multiplay.co.uk X-MDaemon-Deliver-To: freebsd-fs@freebsd.org Message-ID: <06044CF17B6841558B0AA7B4108B9502@multiplay.co.uk> From: "Steven Hartland" To: "Vladislav V. Prodan" , References: <20130104150040.9dd114f700b69adecb233d65@linguamatics.com> <50E9FC25.2040405@ukr.net> Subject: Re: ZFS UUID Date: Mon, 7 Jan 2013 00:35:51 -0000 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="utf-8"; reply-type=original Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 07 Jan 2013 00:35:43 -0000 ----- Original Message ----- From: "Vladislav V. Prodan" > So what's up with the format of guid? > > # zfs get guid tank > NAME PROPERTY VALUE SOURCE > tank guid 14,7E - > > However: > # zpool get all tank | egrep "guid|NAME|version" > NAME PROPERTY VALUE SOURCE > tank guid 6115223751951339756 default > tank version 28 default > > # uname -rsv > FreeBSD 9.0-STABLE FreeBSD 9.0-STABLE #0: Tue Jul 10 14:42:34 EEST 2012 > root@XXX:/usr/obj/usr/src/sys/YYY.20 For reference this change came in here:- http://svnweb.freebsd.org/base?view=revision&revision=236705 Regards Steve ================================================ This e.mail is private and confidential between Multiplay (UK) Ltd. and the person or entity to whom it is addressed. In the event of misdirection, the recipient is prohibited from using, copying, printing or otherwise disseminating it or any information contained in it. In the event of misdirection, illegible or incomplete transmission please telephone +44 845 868 1337 or return the E.mail to postmaster@multiplay.co.uk. From owner-freebsd-fs@FreeBSD.ORG Mon Jan 7 01:12:37 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id D43E31C8 for ; Mon, 7 Jan 2013 01:12:37 +0000 (UTC) (envelope-from universite@ukr.net) Received: from otrada.od.ua (universite-1-pt.tunnel.tserv24.sto1.ipv6.he.net [IPv6:2001:470:27:140::2]) by mx1.freebsd.org (Postfix) with ESMTP id 4AA9812E for ; Mon, 7 Jan 2013 01:12:36 +0000 (UTC) Received: from [10.0.0.10] ([10.0.0.10]) (authenticated bits=0) by otrada.od.ua (8.14.4/8.14.5) with ESMTP id r071CWBQ049927 for ; Mon, 7 Jan 2013 03:12:32 +0200 (EET) (envelope-from universite@ukr.net) Message-ID: <50EA20F6.2050901@ukr.net> Date: Mon, 07 Jan 2013 03:12:22 +0200 From: "Vladislav V. Prodan" User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:17.0) Gecko/17.0 Thunderbird/17.0 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: Re: ZFS UUID References: <20130104150040.9dd114f700b69adecb233d65@linguamatics.com> <50E9FC25.2040405@ukr.net> <06044CF17B6841558B0AA7B4108B9502@multiplay.co.uk> In-Reply-To: <06044CF17B6841558B0AA7B4108B9502@multiplay.co.uk> Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 7bit X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.2.7 (otrada.od.ua [89.209.81.54]); Mon, 07 Jan 2013 03:12:32 +0200 (EET) X-Spam-Status: No, score=-101.0 required=5.0 tests=ALL_TRUSTED, FREEMAIL_FROM, T_TO_NO_BRKTS_FREEMAIL, USER_IN_WHITELIST autolearn=unavailable version=3.3.1 X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on mary-teresa.otrada.od.ua X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 07 Jan 2013 01:12:37 -0000 07.01.2013 2:35, Steven Hartland wrote: > ----- Original Message ----- From: "Vladislav V. Prodan" > >> So what's up with the format of guid? >> >> # zfs get guid tank >> NAME PROPERTY VALUE SOURCE >> tank guid 14,7E - >> >> However: >> # zpool get all tank | egrep "guid|NAME|version" >> NAME PROPERTY VALUE SOURCE >> tank guid 6115223751951339756 default >> tank version 28 default >> >> # uname -rsv >> FreeBSD 9.0-STABLE FreeBSD 9.0-STABLE #0: Tue Jul 10 14:42:34 EEST 2012 >> root@XXX:/usr/obj/usr/src/sys/YYY.20 > > For reference this change came in here:- > > http://svnweb.freebsd.org/base?view=revision&revision=236705 > Aaa. Hence, fixed a. I'll have to update your system. -- Vladislav V. Prodan System & Network Administrator http://support.od.ua +380 67 4584408, +380 99 4060508 VVP88-RIPE From owner-freebsd-fs@FreeBSD.ORG Mon Jan 7 03:51:54 2013 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id C435C42C; Mon, 7 Jan 2013 03:51:54 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) by mx1.freebsd.org (Postfix) with ESMTP id 9F58273F; Mon, 7 Jan 2013 03:51:54 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.6/8.14.6) with ESMTP id r073pscY001496; Mon, 7 Jan 2013 03:51:54 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.6/8.14.6/Submit) id r073psqk001492; Mon, 7 Jan 2013 03:51:54 GMT (envelope-from linimon) Date: Mon, 7 Jan 2013 03:51:54 GMT Message-Id: <201301070351.r073psqk001492@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Subject: Re: kern/175071: [ufs] [panic] softdep_deallocate_dependencies: unrecovered I/O error X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 07 Jan 2013 03:51:54 -0000 Old Synopsis: panic: softdep_deallocate_dependencies: unrecovered I/O error New Synopsis: [ufs] [panic] softdep_deallocate_dependencies: unrecovered I/O error Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Mon Jan 7 03:51:19 UTC 2013 Responsible-Changed-Why: Over to maintainer(s). http://www.freebsd.org/cgi/query-pr.cgi?pr=175071 From owner-freebsd-fs@FreeBSD.ORG Mon Jan 7 03:59:57 2013 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 0105E699; Mon, 7 Jan 2013 03:59:56 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) by mx1.freebsd.org (Postfix) with ESMTP id C69C979A; Mon, 7 Jan 2013 03:59:56 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.6/8.14.6) with ESMTP id r073xuIX001935; Mon, 7 Jan 2013 03:59:56 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.6/8.14.6/Submit) id r073xuSY001931; Mon, 7 Jan 2013 03:59:56 GMT (envelope-from linimon) Date: Mon, 7 Jan 2013 03:59:56 GMT Message-Id: <201301070359.r073xuSY001931@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Subject: Re: kern/174948: [zfs] owner@ always have ZFS ACL full permissions. Should not be the case. X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 07 Jan 2013 03:59:57 -0000 Old Synopsis: owner@ always have ZFS ACL full permissions. Should not be the case. New Synopsis: [zfs] owner@ always have ZFS ACL full permissions. Should not be the case. Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Mon Jan 7 03:59:39 UTC 2013 Responsible-Changed-Why: Over to maintainer(s). http://www.freebsd.org/cgi/query-pr.cgi?pr=174948 From owner-freebsd-fs@FreeBSD.ORG Mon Jan 7 04:00:12 2013 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 9B4B575B; Mon, 7 Jan 2013 04:00:12 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) by mx1.freebsd.org (Postfix) with ESMTP id 75F827A6; Mon, 7 Jan 2013 04:00:12 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.6/8.14.6) with ESMTP id r0740C9q003467; Mon, 7 Jan 2013 04:00:12 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.6/8.14.6/Submit) id r0740C5a003463; Mon, 7 Jan 2013 04:00:12 GMT (envelope-from linimon) Date: Mon, 7 Jan 2013 04:00:12 GMT Message-Id: <201301070400.r0740C5a003463@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Subject: Re: kern/174949: [zfs] ZFS ACL: rwxp required to mkdir. p should not be required. X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 07 Jan 2013 04:00:12 -0000 Old Synopsis: ZFS ACL: rwxp required to mkdir. p should not be required. New Synopsis: [zfs] ZFS ACL: rwxp required to mkdir. p should not be required. Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Mon Jan 7 03:59:39 UTC 2013 Responsible-Changed-Why: Over to maintainer(s). http://www.freebsd.org/cgi/query-pr.cgi?pr=174949 From owner-freebsd-fs@FreeBSD.ORG Mon Jan 7 04:00:27 2013 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id EA51481E; Mon, 7 Jan 2013 04:00:27 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) by mx1.freebsd.org (Postfix) with ESMTP id C562F7B6; Mon, 7 Jan 2013 04:00:27 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.6/8.14.6) with ESMTP id r0740R7r003505; Mon, 7 Jan 2013 04:00:27 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.6/8.14.6/Submit) id r0740R6q003501; Mon, 7 Jan 2013 04:00:27 GMT (envelope-from linimon) Date: Mon, 7 Jan 2013 04:00:27 GMT Message-Id: <201301070400.r0740R6q003501@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Subject: Re: kern/174950: [zfs] delete ZFS ACL have no effect X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 07 Jan 2013 04:00:28 -0000 Old Synopsis: delete ZFS ACL have no effect New Synopsis: [zfs] delete ZFS ACL have no effect Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Mon Jan 7 03:59:39 UTC 2013 Responsible-Changed-Why: Over to maintainer(s). http://www.freebsd.org/cgi/query-pr.cgi?pr=174950 From owner-freebsd-fs@FreeBSD.ORG Mon Jan 7 11:06:45 2013 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 7AC3F1000 for ; Mon, 7 Jan 2013 11:06:45 +0000 (UTC) (envelope-from owner-bugmaster@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) by mx1.freebsd.org (Postfix) with ESMTP id 6C940F8B for ; Mon, 7 Jan 2013 11:06:45 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.6/8.14.6) with ESMTP id r07B6j94087857 for ; Mon, 7 Jan 2013 11:06:45 GMT (envelope-from owner-bugmaster@FreeBSD.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.6/8.14.6/Submit) id r07B6jGw087855 for freebsd-fs@FreeBSD.org; Mon, 7 Jan 2013 11:06:45 GMT (envelope-from owner-bugmaster@FreeBSD.org) Date: Mon, 7 Jan 2013 11:06:45 GMT Message-Id: <201301071106.r07B6jGw087855@freefall.freebsd.org> X-Authentication-Warning: freefall.freebsd.org: gnats set sender to owner-bugmaster@FreeBSD.org using -f From: FreeBSD bugmaster To: freebsd-fs@FreeBSD.org Subject: Current problem reports assigned to freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 07 Jan 2013 11:06:45 -0000 Note: to view an individual PR, use: http://www.freebsd.org/cgi/query-pr.cgi?pr=(number). The following is a listing of current problems submitted by FreeBSD users. These represent problem reports covering all versions including experimental development code and obsolete releases. S Tracker Resp. Description -------------------------------------------------------------------------------- o kern/175071 fs [ufs] [panic] softdep_deallocate_dependencies: unrecov o kern/174950 fs [zfs] delete ZFS ACL have no effect o kern/174949 fs [zfs] ZFS ACL: rwxp required to mkdir. p should not be o kern/174948 fs [zfs] owner@ always have ZFS ACL full permissions. Sho o kern/174372 fs [zfs] Pagefault appears to be related to ZFS o kern/174315 fs [zfs] chflags uchg not supported o kern/174310 fs [zfs] root point mounting broken on CURRENT with multi o kern/174279 fs [ufs] UFS2-SU+J journal and filesystem corruption o kern/174060 fs [ext2fs] Ext2FS system crashes (buffer overflow?) o kern/173830 fs [zfs] Brain-dead simple change to ZFS error descriptio o kern/173718 fs [zfs] phantom directory in zraid2 pool f kern/173657 fs [nfs] strange UID map with nfsuserd o kern/173363 fs [zfs] [panic] Panic on 'zpool replace' on readonly poo o kern/173136 fs [unionfs] mounting above the NFS read-only share panic o kern/172348 fs [unionfs] umount -f of filesystem in use with readonly o kern/172334 fs [unionfs] unionfs permits recursive union mounts; caus o kern/171626 fs [tmpfs] tmpfs should be noisier when the requested siz o kern/171415 fs [zfs] zfs recv fails with "cannot receive incremental o kern/170945 fs [gpt] disk layout not portable between direct connect o bin/170778 fs [zfs] [panic] FreeBSD panics randomly o kern/170680 fs [nfs] Multiple NFS Client bug in the FreeBSD 7.4-RELEA o kern/170497 fs [xfs][panic] kernel will panic whenever I ls a mounted o kern/169945 fs [zfs] [panic] Kernel panic while importing zpool (afte o kern/169480 fs [zfs] ZFS stalls on heavy I/O o kern/169398 fs [zfs] Can't remove file with permanent error o kern/169339 fs panic while " : > /etc/123" o kern/169319 fs [zfs] zfs resilver can't complete o kern/168947 fs [nfs] [zfs] .zfs/snapshot directory is messed up when o kern/168942 fs [nfs] [hang] nfsd hangs after being restarted (not -HU o kern/168158 fs [zfs] incorrect parsing of sharenfs options in zfs (fs o kern/167979 fs [ufs] DIOCGDINFO ioctl does not work on 8.2 file syste o kern/167977 fs [smbfs] mount_smbfs results are differ when utf-8 or U o kern/167688 fs [fusefs] Incorrect signal handling with direct_io o kern/167685 fs [zfs] ZFS on USB drive prevents shutdown / reboot o kern/167612 fs [portalfs] The portal file system gets stuck inside po o kern/167272 fs [zfs] ZFS Disks reordering causes ZFS to pick the wron o kern/167260 fs [msdosfs] msdosfs disk was mounted the second time whe o kern/167109 fs [zfs] [panic] zfs diff kernel panic Fatal trap 9: gene o kern/167105 fs [nfs] mount_nfs can not handle source exports wiht mor o kern/167067 fs [zfs] [panic] ZFS panics the server o kern/167065 fs [zfs] boot fails when a spare is the boot disk o kern/167048 fs [nfs] [patch] RELEASE-9 crash when using ZFS+NULLFS+NF o kern/166912 fs [ufs] [panic] Panic after converting Softupdates to jo o kern/166851 fs [zfs] [hang] Copying directory from the mounted UFS di o kern/166477 fs [nfs] NFS data corruption. o kern/165950 fs [ffs] SU+J and fsck problem o kern/165923 fs [nfs] Writing to NFS-backed mmapped files fails if flu o kern/165521 fs [zfs] [hang] livelock on 1 Gig of RAM with zfs when 31 o kern/165392 fs Multiple mkdir/rmdir fails with errno 31 o kern/165087 fs [unionfs] lock violation in unionfs o kern/164472 fs [ufs] fsck -B panics on particular data inconsistency o kern/164370 fs [zfs] zfs destroy for snapshot fails on i386 and sparc o kern/164261 fs [nullfs] [patch] fix panic with NFS served from NULLFS o kern/164256 fs [zfs] device entry for volume is not created after zfs o kern/164184 fs [ufs] [panic] Kernel panic with ufs_makeinode o kern/163801 fs [md] [request] allow mfsBSD legacy installed in 'swap' o kern/163770 fs [zfs] [hang] LOR between zfs&syncer + vnlru leading to o kern/163501 fs [nfs] NFS exporting a dir and a subdir in that dir to o kern/162944 fs [coda] Coda file system module looks broken in 9.0 o kern/162860 fs [zfs] Cannot share ZFS filesystem to hosts with a hyph o kern/162751 fs [zfs] [panic] kernel panics during file operations o kern/162591 fs [nullfs] cross-filesystem nullfs does not work as expe o kern/162519 fs [zfs] "zpool import" relies on buggy realpath() behavi o kern/162362 fs [snapshots] [panic] ufs with snapshot(s) panics when g o kern/161968 fs [zfs] [hang] renaming snapshot with -r including a zvo o kern/161864 fs [ufs] removing journaling from UFS partition fails on o bin/161807 fs [patch] add option for explicitly specifying metadata o kern/161579 fs [smbfs] FreeBSD sometimes panics when an smb share is o kern/161533 fs [zfs] [panic] zfs receive panic: system ioctl returnin o kern/161438 fs [zfs] [panic] recursed on non-recursive spa_namespace_ o kern/161424 fs [nullfs] __getcwd() calls fail when used on nullfs mou o kern/161280 fs [zfs] Stack overflow in gptzfsboot o kern/161205 fs [nfs] [pfsync] [regression] [build] Bug report freebsd o kern/161169 fs [zfs] [panic] ZFS causes kernel panic in dbuf_dirty o kern/161112 fs [ufs] [lor] filesystem LOR in FreeBSD 9.0-BETA3 o kern/160893 fs [zfs] [panic] 9.0-BETA2 kernel panic o kern/160860 fs [ufs] Random UFS root filesystem corruption with SU+J o kern/160801 fs [zfs] zfsboot on 8.2-RELEASE fails to boot from root-o o kern/160790 fs [fusefs] [panic] VPUTX: negative ref count with FUSE o kern/160777 fs [zfs] [hang] RAID-Z3 causes fatal hang upon scrub/impo o kern/160706 fs [zfs] zfs bootloader fails when a non-root vdev exists o kern/160591 fs [zfs] Fail to boot on zfs root with degraded raidz2 [r o kern/160410 fs [smbfs] [hang] smbfs hangs when transferring large fil o kern/160283 fs [zfs] [patch] 'zfs list' does abort in make_dataset_ha o kern/159930 fs [ufs] [panic] kernel core o kern/159402 fs [zfs][loader] symlinks cause I/O errors o kern/159357 fs [zfs] ZFS MAXNAMELEN macro has confusing name (off-by- o kern/159356 fs [zfs] [patch] ZFS NAME_ERR_DISKLIKE check is Solaris-s o kern/159351 fs [nfs] [patch] - divide by zero in mountnfs() o kern/159251 fs [zfs] [request]: add FLETCHER4 as DEDUP hash option o kern/159077 fs [zfs] Can't cd .. with latest zfs version o kern/159048 fs [smbfs] smb mount corrupts large files o kern/159045 fs [zfs] [hang] ZFS scrub freezes system o kern/158839 fs [zfs] ZFS Bootloader Fails if there is a Dead Disk o kern/158802 fs amd(8) ICMP storm and unkillable process. o kern/158231 fs [nullfs] panic on unmounting nullfs mounted over ufs o f kern/157929 fs [nfs] NFS slow read o kern/157399 fs [zfs] trouble with: mdconfig force delete && zfs strip o kern/157179 fs [zfs] zfs/dbuf.c: panic: solaris assert: arc_buf_remov o kern/156797 fs [zfs] [panic] Double panic with FreeBSD 9-CURRENT and o kern/156781 fs [zfs] zfs is losing the snapshot directory, p kern/156545 fs [ufs] mv could break UFS on SMP systems o kern/156193 fs [ufs] [hang] UFS snapshot hangs && deadlocks processes o kern/156039 fs [nullfs] [unionfs] nullfs + unionfs do not compose, re o kern/155615 fs [zfs] zfs v28 broken on sparc64 -current o kern/155587 fs [zfs] [panic] kernel panic with zfs p kern/155411 fs [regression] [8.2-release] [tmpfs]: mount: tmpfs : No o kern/155199 fs [ext2fs] ext3fs mounted as ext2fs gives I/O errors o bin/155104 fs [zfs][patch] use /dev prefix by default when importing o kern/154930 fs [zfs] cannot delete/unlink file from full volume -> EN o kern/154828 fs [msdosfs] Unable to create directories on external USB o kern/154491 fs [smbfs] smb_co_lock: recursive lock for object 1 p kern/154228 fs [md] md getting stuck in wdrain state o kern/153996 fs [zfs] zfs root mount error while kernel is not located o kern/153753 fs [zfs] ZFS v15 - grammatical error when attempting to u o kern/153716 fs [zfs] zpool scrub time remaining is incorrect o kern/153695 fs [patch] [zfs] Booting from zpool created on 4k-sector o kern/153680 fs [xfs] 8.1 failing to mount XFS partitions o kern/153418 fs [zfs] [panic] Kernel Panic occurred writing to zfs vol o kern/153351 fs [zfs] locking directories/files in ZFS o bin/153258 fs [patch][zfs] creating ZVOLs requires `refreservation' s kern/153173 fs [zfs] booting from a gzip-compressed dataset doesn't w o bin/153142 fs [zfs] ls -l outputs `ls: ./.zfs: Operation not support o kern/153126 fs [zfs] vdev failure, zpool=peegel type=vdev.too_small o kern/152022 fs [nfs] nfs service hangs with linux client [regression] o kern/151942 fs [zfs] panic during ls(1) zfs snapshot directory o kern/151905 fs [zfs] page fault under load in /sbin/zfs o bin/151713 fs [patch] Bug in growfs(8) with respect to 32-bit overfl o kern/151648 fs [zfs] disk wait bug o kern/151629 fs [fs] [patch] Skip empty directory entries during name o kern/151330 fs [zfs] will unshare all zfs filesystem after execute a o kern/151326 fs [nfs] nfs exports fail if netgroups contain duplicate o kern/151251 fs [ufs] Can not create files on filesystem with heavy us o kern/151226 fs [zfs] can't delete zfs snapshot o kern/150503 fs [zfs] ZFS disks are UNAVAIL and corrupted after reboot o kern/150501 fs [zfs] ZFS vdev failure vdev.bad_label on amd64 o kern/150390 fs [zfs] zfs deadlock when arcmsr reports drive faulted o kern/150336 fs [nfs] mountd/nfsd became confused; refused to reload n o kern/149208 fs mksnap_ffs(8) hang/deadlock o kern/149173 fs [patch] [zfs] make OpenSolaris installa o kern/149015 fs [zfs] [patch] misc fixes for ZFS code to build on Glib o kern/149014 fs [zfs] [patch] declarations in ZFS libraries/utilities o kern/149013 fs [zfs] [patch] make ZFS makefiles use the libraries fro o kern/148504 fs [zfs] ZFS' zpool does not allow replacing drives to be o kern/148490 fs [zfs]: zpool attach - resilver bidirectionally, and re o kern/148368 fs [zfs] ZFS hanging forever on 8.1-PRERELEASE o kern/148138 fs [zfs] zfs raidz pool commands freeze o kern/147903 fs [zfs] [panic] Kernel panics on faulty zfs device o kern/147881 fs [zfs] [patch] ZFS "sharenfs" doesn't allow different " o kern/147420 fs [ufs] [panic] ufs_dirbad, nullfs, jail panic (corrupt o kern/146941 fs [zfs] [panic] Kernel Double Fault - Happens constantly o kern/146786 fs [zfs] zpool import hangs with checksum errors o kern/146708 fs [ufs] [panic] Kernel panic in softdep_disk_write_compl o kern/146528 fs [zfs] Severe memory leak in ZFS on i386 o kern/146502 fs [nfs] FreeBSD 8 NFS Client Connection to Server s kern/145712 fs [zfs] cannot offline two drives in a raidz2 configurat o kern/145411 fs [xfs] [panic] Kernel panics shortly after mounting an f bin/145309 fs bsdlabel: Editing disk label invalidates the whole dev o kern/145272 fs [zfs] [panic] Panic during boot when accessing zfs on o kern/145246 fs [ufs] dirhash in 7.3 gratuitously frees hashes when it o kern/145238 fs [zfs] [panic] kernel panic on zpool clear tank o kern/145229 fs [zfs] Vast differences in ZFS ARC behavior between 8.0 o kern/145189 fs [nfs] nfsd performs abysmally under load o kern/144929 fs [ufs] [lor] vfs_bio.c + ufs_dirhash.c p kern/144447 fs [zfs] sharenfs fsunshare() & fsshare_main() non functi o kern/144416 fs [panic] Kernel panic on online filesystem optimization s kern/144415 fs [zfs] [panic] kernel panics on boot after zfs crash o kern/144234 fs [zfs] Cannot boot machine with recent gptzfsboot code o kern/143825 fs [nfs] [panic] Kernel panic on NFS client o bin/143572 fs [zfs] zpool(1): [patch] The verbose output from iostat o kern/143212 fs [nfs] NFSv4 client strange work ... o kern/143184 fs [zfs] [lor] zfs/bufwait LOR o kern/142878 fs [zfs] [vfs] lock order reversal o kern/142597 fs [ext2fs] ext2fs does not work on filesystems with real o kern/142489 fs [zfs] [lor] allproc/zfs LOR o kern/142466 fs Update 7.2 -> 8.0 on Raid 1 ends with screwed raid [re o kern/142306 fs [zfs] [panic] ZFS drive (from OSX Leopard) causes two o kern/142068 fs [ufs] BSD labels are got deleted spontaneously o kern/141897 fs [msdosfs] [panic] Kernel panic. msdofs: file name leng o kern/141463 fs [nfs] [panic] Frequent kernel panics after upgrade fro o kern/141305 fs [zfs] FreeBSD ZFS+sendfile severe performance issues ( o kern/141091 fs [patch] [nullfs] fix panics with DIAGNOSTIC enabled o kern/141086 fs [nfs] [panic] panic("nfs: bioread, not dir") on FreeBS o kern/141010 fs [zfs] "zfs scrub" fails when backed by files in UFS2 o kern/140888 fs [zfs] boot fail from zfs root while the pool resilveri o kern/140661 fs [zfs] [patch] /boot/loader fails to work on a GPT/ZFS- o kern/140640 fs [zfs] snapshot crash o kern/140068 fs [smbfs] [patch] smbfs does not allow semicolon in file o kern/139725 fs [zfs] zdb(1) dumps core on i386 when examining zpool c o kern/139715 fs [zfs] vfs.numvnodes leak on busy zfs p bin/139651 fs [nfs] mount(8): read-only remount of NFS volume does n o kern/139407 fs [smbfs] [panic] smb mount causes system crash if remot o kern/138662 fs [panic] ffs_blkfree: freeing free block o kern/138421 fs [ufs] [patch] remove UFS label limitations o kern/138202 fs mount_msdosfs(1) see only 2Gb o kern/136968 fs [ufs] [lor] ufs/bufwait/ufs (open) o kern/136945 fs [ufs] [lor] filedesc structure/ufs (poll) o kern/136944 fs [ffs] [lor] bufwait/snaplk (fsync) o kern/136873 fs [ntfs] Missing directories/files on NTFS volume o kern/136865 fs [nfs] [patch] NFS exports atomic and on-the-fly atomic p kern/136470 fs [nfs] Cannot mount / in read-only, over NFS o kern/135546 fs [zfs] zfs.ko module doesn't ignore zpool.cache filenam o kern/135469 fs [ufs] [panic] kernel crash on md operation in ufs_dirb o kern/135050 fs [zfs] ZFS clears/hides disk errors on reboot o kern/134491 fs [zfs] Hot spares are rather cold... o kern/133676 fs [smbfs] [panic] umount -f'ing a vnode-based memory dis p kern/133174 fs [msdosfs] [patch] msdosfs must support multibyte inter o kern/132960 fs [ufs] [panic] panic:ffs_blkfree: freeing free frag o kern/132397 fs reboot causes filesystem corruption (failure to sync b o kern/132331 fs [ufs] [lor] LOR ufs and syncer o kern/132237 fs [msdosfs] msdosfs has problems to read MSDOS Floppy o kern/132145 fs [panic] File System Hard Crashes o kern/131441 fs [unionfs] [nullfs] unionfs and/or nullfs not combineab o kern/131360 fs [nfs] poor scaling behavior of the NFS server under lo o kern/131342 fs [nfs] mounting/unmounting of disks causes NFS to fail o bin/131341 fs makefs: error "Bad file descriptor" on the mount poin o kern/130920 fs [msdosfs] cp(1) takes 100% CPU time while copying file o kern/130210 fs [nullfs] Error by check nullfs o kern/129760 fs [nfs] after 'umount -f' of a stale NFS share FreeBSD l o kern/129488 fs [smbfs] Kernel "bug" when using smbfs in smbfs_smb.c: o kern/129231 fs [ufs] [patch] New UFS mount (norandom) option - mostly o kern/129152 fs [panic] non-userfriendly panic when trying to mount(8) o kern/127787 fs [lor] [ufs] Three LORs: vfslock/devfs/vfslock, ufs/vfs o bin/127270 fs fsck_msdosfs(8) may crash if BytesPerSec is zero o kern/127029 fs [panic] mount(8): trying to mount a write protected zi o kern/126287 fs [ufs] [panic] Kernel panics while mounting an UFS file o kern/125895 fs [ffs] [panic] kernel: panic: ffs_blkfree: freeing free s kern/125738 fs [zfs] [request] SHA256 acceleration in ZFS o kern/123939 fs [msdosfs] corrupts new files o kern/122380 fs [ffs] ffs_valloc:dup alloc (Soekris 4801/7.0/USB Flash o bin/122172 fs [fs]: amd(8) automount daemon dies on 6.3-STABLE i386, o bin/121898 fs [nullfs] pwd(1)/getcwd(2) fails with Permission denied o bin/121072 fs [smbfs] mount_smbfs(8) cannot normally convert the cha o kern/120483 fs [ntfs] [patch] NTFS filesystem locking changes o kern/120482 fs [ntfs] [patch] Sync style changes between NetBSD and F o kern/118912 fs [2tb] disk sizing/geometry problem with large array o kern/118713 fs [minidump] [patch] Display media size required for a k o kern/118318 fs [nfs] NFS server hangs under special circumstances o bin/118249 fs [ufs] mv(1): moving a directory changes its mtime o kern/118126 fs [nfs] [patch] Poor NFS server write performance o kern/118107 fs [ntfs] [panic] Kernel panic when accessing a file at N o kern/117954 fs [ufs] dirhash on very large directories blocks the mac o bin/117315 fs [smbfs] mount_smbfs(8) and related options can't mount o kern/117158 fs [zfs] zpool scrub causes panic if geli vdevs detach on o bin/116980 fs [msdosfs] [patch] mount_msdosfs(8) resets some flags f o conf/116931 fs lack of fsck_cd9660 prevents mounting iso images with o kern/116583 fs [ffs] [hang] System freezes for short time when using o bin/115361 fs [zfs] mount(8) gets into a state where it won't set/un o kern/114955 fs [cd9660] [patch] [request] support for mask,dirmask,ui o kern/114847 fs [ntfs] [patch] [request] dirmask support for NTFS ala o kern/114676 fs [ufs] snapshot creation panics: snapacct_ufs2: bad blo o bin/114468 fs [patch] [request] add -d option to umount(8) to detach o kern/113852 fs [smbfs] smbfs does not properly implement DFS referral o bin/113838 fs [patch] [request] mount(8): add support for relative p o bin/113049 fs [patch] [request] make quot(8) use getopt(3) and show o kern/112658 fs [smbfs] [patch] smbfs and caching problems (resolves b o kern/111843 fs [msdosfs] Long Names of files are incorrectly created o kern/111782 fs [ufs] dump(8) fails horribly for large filesystems s bin/111146 fs [2tb] fsck(8) fails on 6T filesystem o bin/107829 fs [2TB] fdisk(8): invalid boundary checking in fdisk / w o kern/106107 fs [ufs] left-over fsck_snapshot after unfinished backgro o kern/104406 fs [ufs] Processes get stuck in "ufs" state under persist o kern/104133 fs [ext2fs] EXT2FS module corrupts EXT2/3 filesystems o kern/103035 fs [ntfs] Directories in NTFS mounted disc images appear o kern/101324 fs [smbfs] smbfs sometimes not case sensitive when it's s o kern/99290 fs [ntfs] mount_ntfs ignorant of cluster sizes s bin/97498 fs [request] newfs(8) has no option to clear the first 12 o kern/97377 fs [ntfs] [patch] syntax cleanup for ntfs_ihash.c o kern/95222 fs [cd9660] File sections on ISO9660 level 3 CDs ignored o kern/94849 fs [ufs] rename on UFS filesystem is not atomic o bin/94810 fs fsck(8) incorrectly reports 'file system marked clean' o kern/94769 fs [ufs] Multiple file deletions on multi-snapshotted fil o kern/94733 fs [smbfs] smbfs may cause double unlock o kern/93942 fs [vfs] [patch] panic: ufs_dirbad: bad dir (patch from D o kern/92272 fs [ffs] [hang] Filling a filesystem while creating a sna o kern/91134 fs [smbfs] [patch] Preserve access and modification time a kern/90815 fs [smbfs] [patch] SMBFS with character conversions somet o kern/88657 fs [smbfs] windows client hang when browsing a samba shar o kern/88555 fs [panic] ffs_blkfree: freeing free frag on AMD 64 o bin/87966 fs [patch] newfs(8): introduce -A flag for newfs to enabl o kern/87859 fs [smbfs] System reboot while umount smbfs. o kern/86587 fs [msdosfs] rm -r /PATH fails with lots of small files o bin/85494 fs fsck_ffs: unchecked use of cg_inosused macro etc. o kern/80088 fs [smbfs] Incorrect file time setting on NTFS mounted vi o bin/74779 fs Background-fsck checks one filesystem twice and omits o kern/73484 fs [ntfs] Kernel panic when doing `ls` from the client si o bin/73019 fs [ufs] fsck_ufs(8) cannot alloc 607016868 bytes for ino o kern/71774 fs [ntfs] NTFS cannot "see" files on a WinXP filesystem o bin/70600 fs fsck(8) throws files away when it can't grow lost+foun o kern/68978 fs [panic] [ufs] crashes with failing hard disk, loose po o kern/65920 fs [nwfs] Mounted Netware filesystem behaves strange o kern/65901 fs [smbfs] [patch] smbfs fails fsx write/truncate-down/tr o kern/61503 fs [smbfs] mount_smbfs does not work as non-root o kern/55617 fs [smbfs] Accessing an nsmb-mounted drive via a smb expo o kern/51685 fs [hang] Unbounded inode allocation causes kernel to loc o kern/36566 fs [smbfs] System reboot with dead smb mount and umount o bin/27687 fs fsck(8) wrapper is not properly passing options to fsc o kern/18874 fs [2TB] 32bit NFS servers export wrong negative values t 298 problems total. From owner-freebsd-fs@FreeBSD.ORG Mon Jan 7 20:24:55 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 5D43A7DB for ; Mon, 7 Jan 2013 20:24:55 +0000 (UTC) (envelope-from marck@rinet.ru) Received: from woozle.rinet.ru (woozle.rinet.ru [195.54.192.68]) by mx1.freebsd.org (Postfix) with ESMTP id E1A45384 for ; Mon, 7 Jan 2013 20:24:54 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by woozle.rinet.ru (8.14.5/8.14.5) with ESMTP id r07KJFXv008880 for ; Tue, 8 Jan 2013 00:19:15 +0400 (MSK) (envelope-from marck@rinet.ru) Date: Tue, 8 Jan 2013 00:19:15 +0400 (MSK) From: Dmitry Morozovsky To: freebsd-fs@freebsd.org Subject: zfs -> ufs rsync: livelock in wdrain state Message-ID: User-Agent: Alpine 2.00 (BSF 1167 2008-08-23) X-NCC-RegID: ru.rinet X-OpenPGP-Key-ID: 6B691B03 MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.2.7 (woozle.rinet.ru [0.0.0.0]); Tue, 08 Jan 2013 00:19:15 +0400 (MSK) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 07 Jan 2013 20:24:55 -0000 Dear colleagues, I have archive server with pretty large ZFS (24*2T in single raidz2 raidgroup) Sometimes we moved really old archives to external SATA drives, which are formatted with UFS2/SU. Files are copied via rsync The system in question is stable/8; upgrade to stable/9 is planned, but not yet completed. Now, during last rsync, the process is stuck as dump.2012062219.bin.gz 3208015437 100% 102.42MB/s 0:00:29 (xfer#66, to-check=196/721) dump.2012062220.bin.gz load: 0.01 cmd: rsync 47543 [wdrain] 1904.69r 443.01u 241.12s 0% 1736k ^C rsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at rsync.c(645) [sender=3.0.9] As we can see, rsync writer stops in wdrain state. I terminated it by ^C in terminal session, as it was not autogenerated backup. Now, zfs and other system is working seemingly well, but trying to sync manually stucks console forever: root@moose:/ar# sync load: 0.00 cmd: sync 67229 [wdrain] 468.17r 0.00u 0.00s 0% 596k Any hints? Quick searching throug freebsd mailing lists and/or open PRs does not reveal much. Thanks! -- Sincerely, D.Marck [DM5020, MCK-RIPE, DM3-RIPN] [ FreeBSD committer: marck@FreeBSD.org ] ------------------------------------------------------------------------ *** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- marck@rinet.ru *** ------------------------------------------------------------------------ From owner-freebsd-fs@FreeBSD.ORG Tue Jan 8 00:12:41 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 339C7B62 for ; Tue, 8 Jan 2013 00:12:41 +0000 (UTC) (envelope-from kostikbel@gmail.com) Received: from kib.kiev.ua (kib.kiev.ua [IPv6:2001:470:d5e7:1::1]) by mx1.freebsd.org (Postfix) with ESMTP id C6944EEA for ; Tue, 8 Jan 2013 00:12:40 +0000 (UTC) Received: from tom.home (kostik@localhost [127.0.0.1]) by kib.kiev.ua (8.14.5/8.14.5) with ESMTP id r080CW3v011266; Tue, 8 Jan 2013 02:12:32 +0200 (EET) (envelope-from kostikbel@gmail.com) DKIM-Filter: OpenDKIM Filter v2.7.4 kib.kiev.ua r080CW3v011266 Received: (from kostik@localhost) by tom.home (8.14.5/8.14.5/Submit) id r080CVGg011265; Tue, 8 Jan 2013 02:12:31 +0200 (EET) (envelope-from kostikbel@gmail.com) X-Authentication-Warning: tom.home: kostik set sender to kostikbel@gmail.com using -f Date: Tue, 8 Jan 2013 02:12:31 +0200 From: Konstantin Belousov To: Dmitry Morozovsky Subject: Re: zfs -> ufs rsync: livelock in wdrain state Message-ID: <20130108001231.GB82219@kib.kiev.ua> References: MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="gPVs24VLDFKgHP1I" Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.5.21 (2010-09-15) X-Spam-Status: No, score=-2.0 required=5.0 tests=ALL_TRUSTED,BAYES_00, DKIM_ADSP_CUSTOM_MED,FREEMAIL_FROM,NML_ADSP_CUSTOM_MED autolearn=no version=3.3.2 X-Spam-Checker-Version: SpamAssassin 3.3.2 (2011-06-06) on tom.home Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 08 Jan 2013 00:12:41 -0000 --gPVs24VLDFKgHP1I Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On Tue, Jan 08, 2013 at 12:19:15AM +0400, Dmitry Morozovsky wrote: > Dear colleagues, >=20 > I have archive server with pretty large ZFS (24*2T in single raidz2 raidg= roup) >=20 > Sometimes we moved really old archives to external SATA drives, which are= =20 > formatted with UFS2/SU. Files are copied via rsync >=20 > The system in question is stable/8; upgrade to stable/9 is planned, but n= ot yet=20 > completed. >=20 > Now, during last rsync, the process is stuck as >=20 > dump.2012062219.bin.gz > 3208015437 100% 102.42MB/s 0:00:29 (xfer#66, to-check=3D196/721) > dump.2012062220.bin.gz > load: 0.01 cmd: rsync 47543 [wdrain] 1904.69r 443.01u 241.12s 0% 1736k > ^C > rsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at rsync.c(645= )=20 > [sender=3D3.0.9] >=20 > As we can see, rsync writer stops in wdrain state. >=20 > I terminated it by ^C in terminal session, as it was not autogenerated=20 > backup. >=20 > Now, zfs and other system is working seemingly well, but trying to sync= =20 > manually stucks console forever: >=20 > root@moose:/ar# sync > load: 0.00 cmd: sync 67229 [wdrain] 468.17r 0.00u 0.00s 0% 596k >=20 > Any hints? Quick searching throug freebsd mailing lists and/or open PRs d= oes=20 > not reveal much. >=20 Are there any kernel messages about the disk system ? The wdrain means that the amount of the dirty buffers accumulated exceeds the allowed maximum. The transient 'wdrain' state is normal on a machine doing lot of writes to a filesystem using buffer cache, say UFS. Failure to clean the dirty buffers is usually related to the disk i/o stalling. It cannot be denied that a bug could cause stuck 'wdrain' state, but in the last five or so years all the cases I investigated were due to disks. --gPVs24VLDFKgHP1I Content-Type: application/pgp-signature -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.19 (FreeBSD) iQIcBAEBAgAGBQJQ62RuAAoJEJDCuSvBvK1BQbAP/2bUyXPL/GfvgXG/GiaIWBZm 75vlOyeNlQ7+zAR+Z++BmQUCnNPCSAbzEDlmfJ4nxcCCFBG/2slDdcHUsMr6osu5 /20G9UaBRt+tvjhlXiIAU6JgIKyv3o/DDEVTd4RW1lJmVDlFPQVqD9EK4tq/HITf BefQVznBHZHCyBs93YapOtghpJak81/nIMBTwLHe2lTuMTRaP1R8lhqK8TeputHr FcC70CyBwPz1oJqyHVu1fOcqMUWXZOGn0rlYmtv236Ba8z7W5p8wiSw70o4JSrqJ KN4rTzwtC8NsG7c/TaeAqzrMeSnvjBMwIC9SuoK1xhxUZxzCrZklrQEgaVeO2g6V BH4+1yEZDUPdXBvS+7TKA2fHd8cGdGFnil4mkMY2xRt9zpOPg5rrNP0Ubc4/3C+d wDj0LKPE/Uiq2LFlJQxg8cD8yyzoIb7T+4AuFqelGnwkvpgbbq7AQtXedY8afwBq qdeW2Zb3l3qMsF/IUoa1UFtQNPK4hLfcOuATVTPGufyCOwLwNIq13EQwsTQaxJc5 v9l9cU4m3pUybqAGFfMYkM7/W2jd/v9dfMhN9P2pz8HP5UzyoWNfMNYaNaYmd5eZ OeeHyOmPYpkMWlAK/ok+AIDV+qOxynqM532BzK85uk4BWM7Hi8yncT2wxer9N+NZ t5O43VdHbtTQIut0ZWPs =Urrw -----END PGP SIGNATURE----- --gPVs24VLDFKgHP1I-- From owner-freebsd-fs@FreeBSD.ORG Tue Jan 8 04:36:40 2013 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 6522EA8E; Tue, 8 Jan 2013 04:36:40 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) by mx1.freebsd.org (Postfix) with ESMTP id 329459A3; Tue, 8 Jan 2013 04:36:40 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.6/8.14.6) with ESMTP id r084aeQk077519; Tue, 8 Jan 2013 04:36:40 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.6/8.14.6/Submit) id r084ae37077515; Tue, 8 Jan 2013 04:36:40 GMT (envelope-from linimon) Date: Tue, 8 Jan 2013 04:36:40 GMT Message-Id: <201301080436.r084ae37077515@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Subject: Re: kern/175101: [zfs] [nfs] ZFS NFSv4 ACL's allows user without perm to delete and update timestamp X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 08 Jan 2013 04:36:40 -0000 Old Synopsis: ZFS NFSv4 ACL's allows user without perm to delete and update timestamp New Synopsis: [zfs] [nfs] ZFS NFSv4 ACL's allows user without perm to delete and update timestamp Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Tue Jan 8 04:36:23 UTC 2013 Responsible-Changed-Why: Over to maintainer(s). http://www.freebsd.org/cgi/query-pr.cgi?pr=175101 From owner-freebsd-fs@FreeBSD.ORG Tue Jan 8 05:11:46 2013 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 0279F508 for ; Tue, 8 Jan 2013 05:11:46 +0000 (UTC) (envelope-from kevlo@FreeBSD.org) Received: from ns.kevlo.org (kevlo.org [220.128.136.52]) by mx1.freebsd.org (Postfix) with ESMTP id 74B23A97 for ; Tue, 8 Jan 2013 05:11:44 +0000 (UTC) Received: from srg.kevlo.org (git.kevlo.org [220.128.136.52]) by ns.kevlo.org (8.14.5/8.14.5) with ESMTP id r085AQQK077795; Tue, 8 Jan 2013 13:10:27 +0800 (CST) (envelope-from kevlo@FreeBSD.org) Message-ID: <50EBAA55.6090204@FreeBSD.org> Date: Tue, 08 Jan 2013 13:10:45 +0800 From: Kevin Lo User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:17.0) Gecko/17.0 Thunderbird/17.0 MIME-Version: 1.0 To: Yuri Subject: Re: kern/133174: [msdosfs] [patch] msdosfs must support multibyte international characters in file names References: <201301050110.r051A0ai012162@freefall.freebsd.org> In-Reply-To: <201301050110.r051A0ai012162@freefall.freebsd.org> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Cc: freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 08 Jan 2013 05:11:46 -0000 On 2013/01/05 09:10, Yuri wrote: > The following reply was made to PR kern/133174; it has been noted by GNATS. > > From: Yuri > To: bug-followup@FreeBSD.org > Cc: > Subject: Re: kern/133174: [msdosfs] [patch] msdosfs must support multibyte > international characters in file names > Date: Fri, 04 Jan 2013 16:51:14 -0800 > > So what does it take to MFC this? It has already been MFC'd back to the stable/9 branch. See r230196. > > Yuri Kevin From owner-freebsd-fs@FreeBSD.ORG Tue Jan 8 07:29:33 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 4A14E9B6 for ; Tue, 8 Jan 2013 07:29:33 +0000 (UTC) (envelope-from marck@rinet.ru) Received: from woozle.rinet.ru (woozle.rinet.ru [195.54.192.68]) by mx1.freebsd.org (Postfix) with ESMTP id C1729F3E for ; Tue, 8 Jan 2013 07:29:32 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by woozle.rinet.ru (8.14.5/8.14.5) with ESMTP id r087TU7L034949; Tue, 8 Jan 2013 11:29:30 +0400 (MSK) (envelope-from marck@rinet.ru) Date: Tue, 8 Jan 2013 11:29:30 +0400 (MSK) From: Dmitry Morozovsky To: Konstantin Belousov Subject: Re: zfs -> ufs rsync: livelock in wdrain state In-Reply-To: <20130108001231.GB82219@kib.kiev.ua> Message-ID: References: <20130108001231.GB82219@kib.kiev.ua> User-Agent: Alpine 2.00 (BSF 1167 2008-08-23) X-NCC-RegID: ru.rinet X-OpenPGP-Key-ID: 6B691B03 MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.2.7 (woozle.rinet.ru [0.0.0.0]); Tue, 08 Jan 2013 11:29:30 +0400 (MSK) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 08 Jan 2013 07:29:33 -0000 On Tue, 8 Jan 2013, Konstantin Belousov wrote: > > Now, during last rsync, the process is stuck as [snip] > > root@moose:/ar# sync > > load: 0.00 cmd: sync 67229 [wdrain] 468.17r 0.00u 0.00s 0% 596k > > > > Any hints? Quick searching throug freebsd mailing lists and/or open PRs does > > not reveal much. > > > > Are there any kernel messages about the disk system ? > > The wdrain means that the amount of the dirty buffers accumulated exceeds > the allowed maximum. The transient 'wdrain' state is normal on a machine > doing lot of writes to a filesystem using buffer cache, say UFS. Failure > to clean the dirty buffers is usually related to the disk i/o stalling. > > It cannot be denied that a bug could cause stuck 'wdrain' state, but > in the last five or so years all the cases I investigated were due to > disks. Yes, it seems so: root@moose:~# camcontrol devlist load: 0.03 cmd: camcontrol 49735 [devfs] 2.68r 0.00u 0.00s 0% 820k and then machine is in well known "hardly alive" state: TCP connects established, process switching does not go. Will investigate the hardware, thank you. -- Sincerely, D.Marck [DM5020, MCK-RIPE, DM3-RIPN] [ FreeBSD committer: marck@FreeBSD.org ] ------------------------------------------------------------------------ *** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- marck@rinet.ru *** ------------------------------------------------------------------------ From owner-freebsd-fs@FreeBSD.ORG Tue Jan 8 15:05:11 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id AA3155BB for ; Tue, 8 Jan 2013 15:05:11 +0000 (UTC) (envelope-from simon@comsys.ntu-kpi.kiev.ua) Received: from comsys.kpi.ua (comsys.kpi.ua [77.47.192.42]) by mx1.freebsd.org (Postfix) with ESMTP id 65F617C7 for ; Tue, 8 Jan 2013 15:05:11 +0000 (UTC) Received: from pm513-1.comsys.kpi.ua ([10.18.52.101] helo=pm513-1.comsys.ntu-kpi.kiev.ua) by comsys.kpi.ua with esmtpsa (TLSv1:AES256-SHA:256) (Exim 4.63) (envelope-from ) id 1Tsajl-0006NS-1L; Tue, 08 Jan 2013 17:05:09 +0200 Received: by pm513-1.comsys.ntu-kpi.kiev.ua (Postfix, from userid 1001) id 8600C1E08A; Tue, 8 Jan 2013 17:05:08 +0200 (EET) Date: Tue, 8 Jan 2013 17:05:08 +0200 From: Andrey Simonenko To: Tim Gustafson Subject: Re: Problems Re-Starting mountd Message-ID: <20130108150508.GA2248@pm513-1.comsys.ntu-kpi.kiev.ua> References: <20130103123730.GA19137@pm513-1.comsys.ntu-kpi.kiev.ua> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.5.21 (2010-09-15) X-Authenticated-User: simon@comsys.ntu-kpi.kiev.ua X-Authenticator: plain X-Sender-Verify: SUCCEEDED (sender exists & accepts mail) X-Exim-Version: 4.63 (build at 28-Apr-2011 07:11:12) X-Date: 2013-01-08 17:05:09 X-Connected-IP: 10.18.52.101:38576 X-Message-Linecount: 88 X-Body-Linecount: 71 X-Message-Size: 3324 X-Body-Size: 2496 Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 08 Jan 2013 15:05:11 -0000 On Fri, Jan 04, 2013 at 08:43:32AM -0800, Tim Gustafson wrote: > > Can you give example of two lines for two users (four lines in total). > > /export/home/abc -network=1.2.3.4/22 > /export/home/abc -network=5.6.7.8/23 > > /export/home/def -network=1.2.3.4/22 > /export/home/def -network=5.6.7.8/23 > > > How many file systems are mounted on your system? > > Around 1,400. > > > What are types of these file systems? > > All ZFS. As I understood each /export/home/* pathname from /etc/exports is a mount point for ZFS file system. > > > If NFS export settings on your system have -mapall or -maproot, > > then tell which type of database is used for users and groups names. > > They do not. > > > Give the content of /etc/nsswitch.conf. > > group: files ldap > passwd: files ldap > hosts: files dns > networks: files > shells: files > services: compat > services_compat: nis > protocols: files > rpc: files > I created 2000 file systems on ZFS file system backed by vnode md(4) device. The /etc/exports file contains 4000 entries like your example. On 9.1-STABLE mountd spends ~70 seconds in flushing current NFS exports in the NFS server, parsing data from /etc/exports and loading parsed data into the NFS server. ~70 seconds is not several minutes. Most of time mountd spends in nmount() system call in "zio->io_cv" lock. Can you show the output of "truss -fc -o /tmp/output.txt mountd" (wait wchan "select" state of mountd and terminate it by a signal). If everything is correct you should see N statfs() calls, N+M nmount() calls and something*N lstat() calls, where N is the number of /etc/exports lines, M is the number of mounted file systems. Number of lstat() calls depends on number of components in pathnames. Since truss does not support all needed system calls, I modified it (src/usr.bin/truss/): --- syscalls.c.orig 2012-12-10 13:54:44.000000000 +0200 +++ syscalls.c 2013-01-08 16:19:40.000000000 +0200 @@ -194,6 +194,10 @@ struct syscall syscalls[] = { .args = { { Int, 0 } } }, { .name = "nanosleep", .ret_type = 0, .nargs = 1, .args = { { Timespec, 0 } } }, + { .name = "nmount", .ret_type = 0, .nargs = 3, + .args = { { Ptr, 0 }, { Int, 1 }, { Int, 2 } } }, + { .name = "statfs", .ret_type = 0, .nargs = 2, + .args = { { Name | IN, 0 }, { Ptr, 1 } } }, { .name = "select", .ret_type = 1, .nargs = 5, .args = { { Int, 0 }, { Fd_set, 1 }, { Fd_set, 2 }, { Fd_set, 3 }, { Timeval, 4 } } }, { .name = "poll", .ret_type = 1, .nargs = 3, From owner-freebsd-fs@FreeBSD.ORG Tue Jan 8 15:43:24 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 0EDD11C6 for ; Tue, 8 Jan 2013 15:43:24 +0000 (UTC) (envelope-from jas@cse.yorku.ca) Received: from bronze.cs.yorku.ca (bronze.cs.yorku.ca [130.63.95.34]) by mx1.freebsd.org (Postfix) with ESMTP id C9A74958 for ; Tue, 8 Jan 2013 15:43:23 +0000 (UTC) Received: from [130.63.97.125] (ident=jas) by bronze.cs.yorku.ca with esmtpsa (TLSv1:CAMELLIA256-SHA:256) (Exim 4.76) (envelope-from ) id 1Tsb0K-0005IW-N9; Tue, 08 Jan 2013 10:22:16 -0500 Message-ID: <50EC39A8.3070108@cse.yorku.ca> Date: Tue, 08 Jan 2013 10:22:16 -0500 From: Jason Keltz User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64; rv:17.0) Gecko/17.0 Thunderbird/17.0 MIME-Version: 1.0 To: Andrey Simonenko Subject: Re: Problems Re-Starting mountd References: <20130103123730.GA19137@pm513-1.comsys.ntu-kpi.kiev.ua> <20130108150508.GA2248@pm513-1.comsys.ntu-kpi.kiev.ua> In-Reply-To: <20130108150508.GA2248@pm513-1.comsys.ntu-kpi.kiev.ua> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Spam-Score: -1.0 X-Spam-Level: - X-Spam-Report: Content preview: On 01/08/2013 10:05 AM, Andrey Simonenko wrote: > I created 2000 file systems on ZFS file system backed by vnode md(4) > device. The /etc/exports file contains 4000 entries like your example. > > On 9.1-STABLE mountd spends ~70 seconds in flushing current NFS exports > in the NFS server, parsing data from /etc/exports and loading parsed > data into the NFS server. ~70 seconds is not several minutes. Most of > time mountd spends in nmount() system call in "zio->io_cv" lock. > > Can you show the output of "truss -fc -o /tmp/output.txt mountd" > (wait wchan "select" state of mountd and terminate it by a signal). > If everything is correct you should see N statfs() calls, N+M nmount() > calls and something*N lstat() calls, where N is the number of /etc/exports > lines, M is the number of mounted file systems. Number of lstat() calls > depends on number of components in pathnames. [...] Content analysis details: (-1.0 points, 5.0 required) pts rule name description ---- ---------------------- -------------------------------------------------- -0.0 SHORTCIRCUIT Not all rules were run, due to a shortcircuited rule -1.0 ALL_TRUSTED Passed through trusted hosts only via SMTP Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 08 Jan 2013 15:43:24 -0000 On 01/08/2013 10:05 AM, Andrey Simonenko wrote: > I created 2000 file systems on ZFS file system backed by vnode md(4) > device. The /etc/exports file contains 4000 entries like your example. > > On 9.1-STABLE mountd spends ~70 seconds in flushing current NFS exports > in the NFS server, parsing data from /etc/exports and loading parsed > data into the NFS server. ~70 seconds is not several minutes. Most of > time mountd spends in nmount() system call in "zio->io_cv" lock. > > Can you show the output of "truss -fc -o /tmp/output.txt mountd" > (wait wchan "select" state of mountd and terminate it by a signal). > If everything is correct you should see N statfs() calls, N+M nmount() > calls and something*N lstat() calls, where N is the number of /etc/exports > lines, M is the number of mounted file systems. Number of lstat() calls > depends on number of components in pathnames. Andrey, Would that still be an ~70 second period in which new mounts would not be allowed? In the system I'm preparing, I'll have at least 4000 entries in /etc/exports, probably even more, so I know I'll be dealing with the same issue that Tim is dealing with when I get there. However, I don't see how to avoid the issue ... If I want new users to be able to login shortly after their account is created, and each user has a ZFS filesystem as a home directory, then at least at some interval, after adding a user to the system, I need to update the exports file on the file server, and re-export everything. Yet, even a >1 minute delay where users who are logging in won't get their home directory mounted on the system they are logging into - well, that's not so good... accounts can be added all the time and this would create random chaos. Isn't there some way to make it so that when you re-export everything, the existing exports are still served until the new exports are ready? Would this be the same for NFSv3 versus NFSv4? I suspect yes. Jason. From owner-freebsd-fs@FreeBSD.ORG Tue Jan 8 16:42:51 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 2FE2C769 for ; Tue, 8 Jan 2013 16:42:51 +0000 (UTC) (envelope-from tjg@soe.ucsc.edu) Received: from mail-vb0-f50.google.com (mail-vb0-f50.google.com [209.85.212.50]) by mx1.freebsd.org (Postfix) with ESMTP id D4DD4C34 for ; Tue, 8 Jan 2013 16:42:50 +0000 (UTC) Received: by mail-vb0-f50.google.com with SMTP id ft2so597268vbb.9 for ; Tue, 08 Jan 2013 08:42:44 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ucsc.edu; s=ucsc-google; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=EEkZavG9v0O/6HER0TvtwSv9W7VeL//9OD6fU3C1Yns=; b=D005NwWibwzb6/tJFcZydHrWFa6lH5LbFnY31mJmo373j1v0fY+gqOs+bMKEQ5YK/y Mlegy9hXrNK65a4GLJaf7Fo+hgGJyC/Lw5cEuPbTx/bMIF/QrmhslzSTWMXhvxCJ5f/6 N4Ral7QWEceCOwBsLzMTCLRY4luG9Av63uzs8= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type:x-gm-message-state; bh=EEkZavG9v0O/6HER0TvtwSv9W7VeL//9OD6fU3C1Yns=; b=O6vfqkwf8cJ8d9MXb1tTBSjTECievc/vdQHSuN+agj6d6XXgBZGGVMkrfGeVn2ncNA tvXQF1g53tvgx4kFSWV0aqgT7RAnrMQU4w/k15LnRY1eatS7f1GkyEafxnm7abJEf/5P jCYy9vUPYuU9x08EpXnolscqrkvQExQUsIziSO9NLMl2fHJ+Fzoj+jhRHOimRCVo9Hj6 1SIBf5BHseEQe1SIm0N6GqB+zMSCVmZvAvI7xokorZr6bLuX8o2Ylip9/qKkSvMBITky s9fp2GaGNdCqzKuwTz9r2plgDcDotL5RMcslEyW8HTNLFQPskrBDm3aDMuCm29iTAp2i OGqA== MIME-Version: 1.0 Received: by 10.52.97.7 with SMTP id dw7mr76364857vdb.38.1357663364201; Tue, 08 Jan 2013 08:42:44 -0800 (PST) Received: by 10.59.12.231 with HTTP; Tue, 8 Jan 2013 08:42:44 -0800 (PST) In-Reply-To: <20130108150508.GA2248@pm513-1.comsys.ntu-kpi.kiev.ua> References: <20130103123730.GA19137@pm513-1.comsys.ntu-kpi.kiev.ua> <20130108150508.GA2248@pm513-1.comsys.ntu-kpi.kiev.ua> Date: Tue, 8 Jan 2013 08:42:44 -0800 Message-ID: Subject: Re: Problems Re-Starting mountd From: Tim Gustafson To: Andrey Simonenko Content-Type: text/plain; charset=UTF-8 X-Gm-Message-State: ALoCoQmMChp59uh9U6aZF2mb+eNaI+jv68OCZgRtJQC+vgcDcjW1eRykzdiECVCN05aRQOWzKPOo Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 08 Jan 2013 16:42:51 -0000 > As I understood each /export/home/* pathname from /etc/exports is > a mount point for ZFS file system. Correct. > I created 2000 file systems on ZFS file system backed by vnode md(4) > device. The /etc/exports file contains 4000 entries like your example. > > On 9.1-STABLE mountd spends ~70 seconds in flushing current NFS exports > in the NFS server, parsing data from /etc/exports and loading parsed > data into the NFS server. ~70 seconds is not several minutes. Most of > time mountd spends in nmount() system call in "zio->io_cv" lock. I suspect that statfs(), nmount() and lstat() will return much more quickly for md-based file systems, since they have zero latency. > Can you show the output of "truss -fc -o /tmp/output.txt mountd" > (wait wchan "select" state of mountd and terminate it by a signal). > If everything is correct you should see N statfs() calls, N+M nmount() > calls and something*N lstat() calls, where N is the number of /etc/exports > lines, M is the number of mounted file systems. Number of lstat() calls > depends on number of components in pathnames. I will try to run this tonight; I can't do it now as people are already starting to work today. -- Tim Gustafson tjg@soe.ucsc.edu 831-459-5354 Baskin Engineering, Room 313A From owner-freebsd-fs@FreeBSD.ORG Tue Jan 8 17:51:59 2013 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 24298781 for ; Tue, 8 Jan 2013 17:51:59 +0000 (UTC) (envelope-from nicolas@i.0x5.de) Received: from n.0x5.de (n.0x5.de [217.197.85.144]) by mx1.freebsd.org (Postfix) with ESMTP id DB510FF1 for ; Tue, 8 Jan 2013 17:51:58 +0000 (UTC) Received: by pc5.i.0x5.de (Postfix, from userid 1003) id 3Ygglj6YxDz7ySH; Tue, 8 Jan 2013 18:42:25 +0100 (CET) Date: Tue, 8 Jan 2013 18:42:25 +0100 From: Nicolas Rachinsky To: freebsd-fs@FreeBSD.org Subject: slowdown of zfs (tx->tx) Message-ID: <20130108174225.GA17260@mid.pc5.i.0x5.de> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline X-Powered-by: FreeBSD X-Homepage: http://www.rachinsky.de X-PGP-Keyid: 887BAE72 X-PGP-Fingerprint: 039E 9433 115F BC5F F88D 4524 5092 45C4 887B AE72 X-PGP-Keys: http://www.rachinsky.de/nicolas/gpg/nicolas_rachinsky.asc User-Agent: Mutt/1.5.21 (2010-09-15) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 08 Jan 2013 17:51:59 -0000 Hallo, we have a problem that started to begin on one of our backup servers recently. We noticed that backups took an absurd amount of time (we aborted them after several hours when the same backup usually takes minutes). We first considered a disk broken and kicked it. But that didn't change anything. About one third of the rsync invocations end in a state where top shows mostly tx->tx as the state. It seems that other rsync instances that run at the same time or are started while one rsync is in this state do also get into this state. These rsyncs can be killed, but it takes a while (several seconds or tens of seconds). Repeating the same rsync invocation afterwards works (sometimes). There is almost no disk activity during this time. What can I do to debug or avoid this? Some information: The backups are taken with rsync (and --fake-super, but a patched version that does not read extended attributes, since it seems writing them all the time is faster than reading them). sync is disabled for the whole pool. root uses UFS and is on another set of disks (together with swap). zpool status pool: pool1 state: DEGRADED status: One or more devices could not be opened. Sufficient replicas exist for the pool to continue functioning in a degraded state. action: Attach the missing device and online it using 'zpool online'. see: http://www.sun.com/msg/ZFS-8000-2Q scan: scrub canceled on Fri Jan 4 10:31:35 2013 config: NAME STATE READ WRITE CKSUM pool1 DEGRADED 0 0 0 raidz2-0 DEGRADED 0 0 0 ada5 ONLINE 0 0 0 ada8 ONLINE 0 0 0 ada2 ONLINE 0 0 0 ada3 ONLINE 0 0 0 11846390416703086268 UNAVAIL 0 0 0 was /dev/dsk/ada1 ada6 ONLINE 0 0 0 ada0 ONLINE 0 0 1 ada7 ONLINE 0 0 0 ada4 ONLINE 0 0 3 errors: No known data errors 8.3-RELEASE-p5 with http://svnweb.freebsd.org/base?view=revision&revision=240345 and http://svnweb.freebsd.org/base?view=revision&revision=240632 applied amd64 with 8G of RAM Thanks in advance Nicolas From owner-freebsd-fs@FreeBSD.ORG Tue Jan 8 20:47:40 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 16ACC527 for ; Tue, 8 Jan 2013 20:47:40 +0000 (UTC) (envelope-from artemb@gmail.com) Received: from mail-vb0-f52.google.com (mail-vb0-f52.google.com [209.85.212.52]) by mx1.freebsd.org (Postfix) with ESMTP id A127EA37 for ; Tue, 8 Jan 2013 20:47:39 +0000 (UTC) Received: by mail-vb0-f52.google.com with SMTP id ez10so872122vbb.11 for ; Tue, 08 Jan 2013 12:47:38 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:sender:in-reply-to:references:date :x-google-sender-auth:message-id:subject:from:to:cc:content-type; bh=0wKGk8NiR/gzbHt2wwZCLsU4tiSUL57mDpUY9wLUw4s=; b=FpQzB5BIQgFK7BrIdNULqINLMQQvZcQ9YawjUPAqSe8EZPVrxKACoL4qBanTGqtj+l D1MYBH2YeCedcqvjLEjGDzzSf+f8PeC4xHd8T1iSNYpyu4eKXQJxkCyRlUtOo56pkh+m 0ICAJOexTpq2Bzt0RJv+MJpIL2+thEsBVZJMNtOeVHepZfzKYj+10CMvpWaKJRXtz8Gy x3/wf4g3PNLSJfQ9xFa0MhIRHIP75hjGe/4B3DxA7cJuyXSsrwdxok8g8OCEwI+LKKeZ uF3/xysJwVWsQOSoKzRafu8cysArzzl8b63l3qZOuHMHD3TRm88b1IjGZaMz4e5ul/vb iAkA== MIME-Version: 1.0 Received: by 10.220.151.83 with SMTP id b19mr86687753vcw.25.1357678058734; Tue, 08 Jan 2013 12:47:38 -0800 (PST) Sender: artemb@gmail.com Received: by 10.220.122.196 with HTTP; Tue, 8 Jan 2013 12:47:38 -0800 (PST) In-Reply-To: <20130108174225.GA17260@mid.pc5.i.0x5.de> References: <20130108174225.GA17260@mid.pc5.i.0x5.de> Date: Tue, 8 Jan 2013 12:47:38 -0800 X-Google-Sender-Auth: Y5qqrkapwiEmXHcuD5d-LMC04q8 Message-ID: Subject: Re: slowdown of zfs (tx->tx) From: Artem Belevich To: Nicolas Rachinsky Content-Type: text/plain; charset=ISO-8859-1 Cc: freebsd-fs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 08 Jan 2013 20:47:40 -0000 On Tue, Jan 8, 2013 at 9:42 AM, Nicolas Rachinsky wrote: > NAME STATE READ WRITE CKSUM > pool1 DEGRADED 0 0 0 > raidz2-0 DEGRADED 0 0 0 > ada5 ONLINE 0 0 0 > ada8 ONLINE 0 0 0 > ada2 ONLINE 0 0 0 > ada3 ONLINE 0 0 0 > 11846390416703086268 UNAVAIL 0 0 0 was /dev/dsk/ada1 > ada6 ONLINE 0 0 0 > ada0 ONLINE 0 0 1 > ada7 ONLINE 0 0 0 > ada4 ONLINE 0 0 3 You seem to have some checksum errors which does suggest hardware troubles. For starters, check smart info for all drives and see if they have any relocated sectors. Use gstat during your workload to see if any of the drives takes much longer than others to handle its job. > There is almost no disk activity during this time. What kind of disk activity *is* there? Sleeping on 'tx->tx...' usually means that ZFS is trying to commit data to disk. Normally it happens once every few seconds (10 is default if I remember correctly). It may happen more often if you do a lot of synchronous writes. I believe there was an iostat-like dtrace script that would show synchronous write rate, but I can't seem to find it. > sync is disabled for the whole pool. If that's the case (assyming you're talking about sync=disabled zfs property), then synchronous writes are probably not the cause of slowdown. My guess would be either failing HDD or something funky with cabling or sata controller. --Artem From owner-freebsd-fs@FreeBSD.ORG Wed Jan 9 00:18:48 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 2F4AF60C for ; Wed, 9 Jan 2013 00:18:48 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) Received: from esa-jnhn.mail.uoguelph.ca (esa-jnhn.mail.uoguelph.ca [131.104.91.44]) by mx1.freebsd.org (Postfix) with ESMTP id ED69B737 for ; Wed, 9 Jan 2013 00:18:47 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: AqAEAKO27FCDaFvO/2dsb2JhbABEhjm3PHOCHgEBAQMBAQEBIAQnIAsFFg4KAgINBAEBEwIpAQkmBggHBAEcBIdwBgynMYJAjSSBIotLcwgBghmBEwOIYYp9gi6BHI8tgxKBTAcXHg X-IronPort-AV: E=Sophos;i="4.84,433,1355115600"; d="scan'208";a="11023281" Received: from erie.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.206]) by esa-jnhn.mail.uoguelph.ca with ESMTP; 08 Jan 2013 19:18:40 -0500 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id 0C4B5B3EEA; Tue, 8 Jan 2013 19:18:41 -0500 (EST) Date: Tue, 8 Jan 2013 19:18:41 -0500 (EST) From: Rick Macklem To: Jason Keltz Message-ID: <972459831.1800222.1357690721032.JavaMail.root@erie.cs.uoguelph.ca> In-Reply-To: <50EC39A8.3070108@cse.yorku.ca> Subject: Re: Problems Re-Starting mountd MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [172.17.91.203] X-Mailer: Zimbra 6.0.10_GA_2692 (ZimbraWebClient - FF3.0 (Win)/6.0.10_GA_2692) Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 09 Jan 2013 00:18:48 -0000 Jason Keltz wrote: > On 01/08/2013 10:05 AM, Andrey Simonenko wrote: > > I created 2000 file systems on ZFS file system backed by vnode md(4) > > device. The /etc/exports file contains 4000 entries like your > > example. > > > > On 9.1-STABLE mountd spends ~70 seconds in flushing current NFS > > exports > > in the NFS server, parsing data from /etc/exports and loading parsed > > data into the NFS server. ~70 seconds is not several minutes. Most > > of > > time mountd spends in nmount() system call in "zio->io_cv" lock. > > > > Can you show the output of "truss -fc -o /tmp/output.txt mountd" > > (wait wchan "select" state of mountd and terminate it by a signal). > > If everything is correct you should see N statfs() calls, N+M > > nmount() > > calls and something*N lstat() calls, where N is the number of > > /etc/exports > > lines, M is the number of mounted file systems. Number of lstat() > > calls > > depends on number of components in pathnames. > > Andrey, > > Would that still be an ~70 second period in which new mounts would not > be allowed? In the system I'm preparing, I'll have at least 4000 > entries > in /etc/exports, probably even more, so I know I'll be dealing with > the > same issue that Tim is dealing with when I get there. However, I don't > see how to avoid the issue ... If I want new users to be able to login > shortly after their account is created, and each user has a ZFS > filesystem as a home directory, then at least at some interval, after > adding a user to the system, I need to update the exports file on the > file server, and re-export everything. Yet, even a >1 minute delay > where users who are logging in won't get their home directory mounted > on > the system they are logging into - well, that's not so good... > accounts > can be added all the time and this would create random chaos. Isn't > there some way to make it so that when you re-export everything, the > existing exports are still served until the new exports are ready? I can't think of how you'd do everything without deleting the old stuff, but it would be possible to "add new entries". It has to be done by modifying mountd, since it keeps a tree in its address space that it uses for mount requests and the tree must be grown. I don't know about nfse, but you'd have to add this capability to mountd and, trust me, it's an ugly old piece of C code, so coming up with a patch might not be that easy. However, it might not be that bad, since the only difference from doing the full reload as it stands now would be to "not delete the tree that already exists in the utility and don't do the DELEXPORTS syscall" I think, so the old ones don't go away. There could be a file called something like /etc/exports.new for the new entries and a different signal (SIGUSR1??) to load these. (Then you'd add the new entries to /etc/exports as well for the next time mountd restarts, but wouldn't send it a SIGHUP.) I haven't tried to code this, so I don't know how hard it would be. If you did this, it would only be useful to add exports for file systems not already exported. > Would this be the same for NFSv3 versus NFSv4? I suspect yes. > Yep, the file system exports are done the same way. rick > Jason. > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Wed Jan 9 01:19:58 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 765244D5 for ; Wed, 9 Jan 2013 01:19:58 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) Received: from esa-annu.net.uoguelph.ca (esa-annu.mail.uoguelph.ca [131.104.91.36]) by mx1.freebsd.org (Postfix) with ESMTP id 3A2EA91C for ; Wed, 9 Jan 2013 01:19:57 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: Ap8EAEHF7FCDaFvO/2dsb2JhbAA9B4Y5s0qDcnOCJSMEUhsOGxkCBFUGiCqnQ4JAjSSMdIMOgRMDiGGGJ4cEkEmDEoFKIxs X-IronPort-AV: E=Sophos;i="4.84,433,1355115600"; d="scan'208";a="8115331" Received: from erie.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.206]) by esa-annu.net.uoguelph.ca with ESMTP; 08 Jan 2013 20:19:28 -0500 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id D1458B3EB3; Tue, 8 Jan 2013 20:19:28 -0500 (EST) Date: Tue, 8 Jan 2013 20:19:28 -0500 (EST) From: Rick Macklem To: Jason Keltz Message-ID: <2094136156.1801692.1357694368838.JavaMail.root@erie.cs.uoguelph.ca> In-Reply-To: <1855706034.1801685.1357694364311.JavaMail.root@erie.cs.uoguelph.ca> Subject: Re: Problems Re-Starting mountd MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="----=_Part_1801691_315852099.1357694368836" X-Originating-IP: [172.17.91.202] X-Mailer: Zimbra 6.0.10_GA_2692 (ZimbraWebClient - FF3.0 (Win)/6.0.10_GA_2692) Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 09 Jan 2013 01:19:58 -0000 ------=_Part_1801691_315852099.1357694368836 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit You could test the attached patch, which I think makes mountd load new export entries from a file called /etc/exports.new without deleting the exports already in place, when sent a USR1 signal. After applying the patch to mountd.c, rebuilding and replacing it, you would: - put new entries for file systems not yet exported in both /etc/exports and /etc/exports.new # kill -USR1 - delete /etc/exports.new Don't send HUP to mountd for this case. Very lightly tested, rick ps: Sometimes it's faster to just code this stuff instead of discussing if/how it can be done;-) pss: This patch isn't ready for head. If it is useful, it might make sense to add a new mountd option that specifies the name of the file (/etc/exports.new or ...), so that this capability isn't enabled by default. ------=_Part_1801691_315852099.1357694368836 Content-Type: text/x-patch; name=newexports.patch Content-Transfer-Encoding: base64 Content-Disposition: attachment; filename=newexports.patch LS0tIHVzci5zYmluL21vdW50ZC9tb3VudGQuYy5zYXZuZXcJMjAxMy0wMS0wOCAxOTozMjo0Ni4w MDAwMDAwMDAgLTA1MDAKKysrIHVzci5zYmluL21vdW50ZC9tb3VudGQuYwkyMDEzLTAxLTA4IDE5 OjU0OjUxLjAwMDAwMDAwMCAtMDUwMApAQCAtMTkwLDYgKzE5MCw3IEBAIHZvaWQJZnJlZV9leHAo c3RydWN0IGV4cG9ydGxpc3QgKik7CiB2b2lkCWZyZWVfZ3JwKHN0cnVjdCBncm91cGxpc3QgKik7 CiB2b2lkCWZyZWVfaG9zdChzdHJ1Y3QgaG9zdGxpc3QgKik7CiB2b2lkCWdldF9leHBvcnRsaXN0 KHZvaWQpOwordm9pZAlnZXRfbmV3X2V4cG9ydGxpc3Qodm9pZCk7CiBpbnQJZ2V0X2hvc3QoY2hh ciAqLCBzdHJ1Y3QgZ3JvdXBsaXN0ICosIHN0cnVjdCBncm91cGxpc3QgKik7CiBzdHJ1Y3QgaG9z dGxpc3QgKmdldF9odCh2b2lkKTsKIGludAlnZXRfbGluZSh2b2lkKTsKQEAgLTIwMCw2ICsyMDEs NyBAQCBzdHJ1Y3QgZ3JvdXBsaXN0ICpnZXRfZ3JwKHZvaWQpOwogdm9pZAloYW5nX2RpcnAoc3Ry dWN0IGRpcmxpc3QgKiwgc3RydWN0IGdyb3VwbGlzdCAqLAogCQkJCXN0cnVjdCBleHBvcnRsaXN0 ICosIGludCk7CiB2b2lkCWh1cGhhbmRsZXIoaW50IHNpZyk7Cit2b2lkCXVzcjFoYW5kbGVyKGlu dCBzaWcpOwogaW50CW1ha2VtYXNrKHN0cnVjdCBzb2NrYWRkcl9zdG9yYWdlICpzc3AsIGludCBi aXRsZW4pOwogdm9pZAltbnRzcnYoc3RydWN0IHN2Y19yZXEgKiwgU1ZDWFBSVCAqKTsKIHZvaWQJ bmV4dGZpZWxkKGNoYXIgKiosIGNoYXIgKiopOwpAQCAtMjI1LDYgKzIyNyw3IEBAIHN0cnVjdCBt b3VudGxpc3QgKm1saGVhZDsKIHN0cnVjdCBncm91cGxpc3QgKmdycGhlYWQ7CiBjaGFyICpleG5h bWVzX2RlZmF1bHRbMl0gPSB7IF9QQVRIX0VYUE9SVFMsIE5VTEwgfTsKIGNoYXIgKipleG5hbWVz OworY2hhciAqbmV3X2V4bmFtZSA9ICIvZXRjL2V4cG9ydHMubmV3IjsKIGNoYXIgKipob3N0cyA9 IE5VTEw7CiBzdHJ1Y3QgeHVjcmVkIGRlZl9hbm9uID0gewogCVhVQ1JFRF9WRVJTSU9OLApAQCAt MjM5LDYgKzI0Miw3IEBAIGludCBuaG9zdHMgPSAwOwogaW50IGRpcl9vbmx5ID0gMTsKIGludCBk b2xvZyA9IDA7CiBpbnQgZ290X3NpZ2h1cCA9IDA7CitpbnQgZ290X3NpZ3VzcjEgPSAwOwogaW50 IHhjcmVhdGVkID0gMDsKIAogY2hhciAqc3ZjcG9ydF9zdHIgPSBOVUxMOwpAQCAtNDExLDYgKzQx NSw3IEBAIG1haW4oaW50IGFyZ2MsIGNoYXIgKiphcmd2KQogCQlzaWduYWwoU0lHUVVJVCwgU0lH X0lHTik7CiAJfQogCXNpZ25hbChTSUdIVVAsIGh1cGhhbmRsZXIpOworCXNpZ25hbChTSUdVU1Ix LCB1c3IxaGFuZGxlcik7CiAJc2lnbmFsKFNJR1RFUk0sIHRlcm1pbmF0ZSk7CiAJc2lnbmFsKFNJ R1BJUEUsIFNJR19JR04pOwogCkBAIC01NzMsNiArNTc4LDEwIEBAIG1haW4oaW50IGFyZ2MsIGNo YXIgKiphcmd2KQogCQkJZ2V0X2V4cG9ydGxpc3QoKTsKIAkJCWdvdF9zaWdodXAgPSAwOwogCQl9 CisJCWlmIChnb3Rfc2lndXNyMSkgeworCQkJZ2V0X25ld19leHBvcnRsaXN0KCk7CisJCQlnb3Rf c2lndXNyMSA9IDA7CisJCX0KIAkJcmVhZGZkcyA9IHN2Y19mZHNldDsKIAkJc3dpdGNoIChzZWxl Y3Qoc3ZjX21heGZkICsgMSwgJnJlYWRmZHMsIE5VTEwsIE5VTEwsIE5VTEwpKSB7CiAJCWNhc2Ug LTE6CkBAIC05NTEsNiArOTYwLDcgQEAgbW50c3J2KHN0cnVjdCBzdmNfcmVxICpycXN0cCwgU1ZD WFBSVCAqdAogCiAJc2lnZW1wdHlzZXQoJnNpZ2h1cF9tYXNrKTsKIAlzaWdhZGRzZXQoJnNpZ2h1 cF9tYXNrLCBTSUdIVVApOworCXNpZ2FkZHNldCgmc2lnaHVwX21hc2ssIFNJR1VTUjEpOwogCXNh ZGRyID0gc3ZjX2dldHJwY2NhbGxlcih0cmFuc3ApLT5idWY7CiAJc3dpdGNoIChzYWRkci0+c2Ff ZmFtaWx5KSB7CiAJY2FzZSBBRl9JTkVUNjoKQEAgLTEyMjcsNiArMTIzNyw3IEBAIHhkcl9leHBs aXN0X2NvbW1vbihYRFIgKnhkcnNwLCBjYWRkcl90IGMKIAogCXNpZ2VtcHR5c2V0KCZzaWdodXBf bWFzayk7CiAJc2lnYWRkc2V0KCZzaWdodXBfbWFzaywgU0lHSFVQKTsKKwlzaWdhZGRzZXQoJnNp Z2h1cF9tYXNrLCBTSUdVU1IxKTsKIAlzaWdwcm9jbWFzayhTSUdfQkxPQ0ssICZzaWdodXBfbWFz aywgTlVMTCk7CiAJZXAgPSBleHBoZWFkOwogCXdoaWxlIChlcCkgewpAQCAtMTc5OSw2ICsxODEw LDMyIEBAIGdldF9leHBvcnRsaXN0KHZvaWQpCiB9CiAKIC8qCisgKiBHZXQgdGhlIGV4cG9ydCBs aXN0IGZvciBhbGwgbmV3IGVudHJpZXMuCisgKi8KK3ZvaWQKK2dldF9uZXdfZXhwb3J0bGlzdCh2 b2lkKQoreworCisJaWYgKHN1c3BlbmRfbmZzZCAhPSAwKQorCQkodm9pZCluZnNzdmMoTkZTU1ZD X1NVU1BFTkRORlNELCBOVUxMKTsKKworCS8qCisJICogUmVhZCBpbiB0aGUgbmV3IGV4cG9ydHMg ZmlsZSBhbmQgYWRkIHRvIHRoZSBsaXN0LCBjYWxsaW5nCisJICogbm1vdW50KCkgYXMgd2UgZ28g YWxvbmcgdG8gcHVzaCB0aGUgZXhwb3J0IHJ1bGVzIGludG8gdGhlIGtlcm5lbC4KKwkgKi8KKwlp ZiAoZGVidWcpCisJCXdhcm54KCJyZWFkaW5nIG5ldyBleHBvcnRzIGZyb20gJXMiLCBuZXdfZXhu YW1lKTsKKwlpZiAoKGV4cF9maWxlID0gZm9wZW4obmV3X2V4bmFtZSwgInIiKSkgIT0gTlVMTCkg eworCQlnZXRfZXhwb3J0bGlzdF9vbmUoKTsKKwkJZmNsb3NlKGV4cF9maWxlKTsKKwl9IGVsc2UK KwkJc3lzbG9nKExPR19XQVJOSU5HLCAiY2FuJ3Qgb3BlbiAlcyIsIG5ld19leG5hbWUpOworCisJ LyogUmVzdW1lIHRoZSBuZnNkLiBJZiB0aGV5IHdlcmVuJ3Qgc3VzcGVuZGVkLCB0aGlzIGlzIGhh cm1sZXNzLiAqLworCSh2b2lkKW5mc3N2YyhORlNTVkNfUkVTVU1FTkZTRCwgTlVMTCk7Cit9CisK Ky8qCiAgKiBBbGxvY2F0ZSBhbiBleHBvcnQgbGlzdCBlbGVtZW50CiAgKi8KIHN0cnVjdCBleHBv cnRsaXN0ICoKQEAgLTMyMTIsNiArMzI0OSwxMiBAQCBodXBoYW5kbGVyKGludCBzaWcgX191bnVz ZWQpCiAJZ290X3NpZ2h1cCA9IDE7CiB9CiAKK3ZvaWQKK3VzcjFoYW5kbGVyKGludCBzaWcgX191 bnVzZWQpCit7CisJZ290X3NpZ3VzcjEgPSAxOworfQorCiB2b2lkIHRlcm1pbmF0ZShpbnQgc2ln IF9fdW51c2VkKQogewogCXBpZGZpbGVfcmVtb3ZlKHBmaCk7Cg== ------=_Part_1801691_315852099.1357694368836-- From owner-freebsd-fs@FreeBSD.ORG Wed Jan 9 02:33:28 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: by hub.freebsd.org (Postfix, from userid 821) id 01D89226; Wed, 9 Jan 2013 02:33:28 +0000 (UTC) Date: Wed, 9 Jan 2013 02:33:27 +0000 From: John To: FreeBSD Filesystems Subject: rc.d script for memory based zfs intent log Message-ID: <20130109023327.GA1888@FreeBSD.org> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.5.21 (2010-09-15) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 09 Jan 2013 02:33:28 -0000 Hi Folks, Here's an rc.d script that provides a nice performance boost on ZFS/NFS based file servers. It also helps in other areas not specific to NFS. It attaches the log device at system startup and removes it at system shutdown time. Example; memzil_pools="tank" memzil_bootfs="YES" service memzil onestart zpool status tank service memzil onestop This configuration provides a nice performance boost especially to NFS, but also helps in other areas not specific to NFS. Please DO NOT USE this script if your system is not UPS backed, preferably with dual power supplies on separate circuits. If your system crashes you may lose data. The script contains information on recovery. http://people.freebsd.org/~jwd/memzil.txt Comments/Improvements appreciated. Thanks, John From owner-freebsd-fs@FreeBSD.ORG Wed Jan 9 03:05:48 2013 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 8FFF99D4; Wed, 9 Jan 2013 03:05:48 +0000 (UTC) (envelope-from hrs@FreeBSD.org) Received: from mail.allbsd.org (gatekeeper.allbsd.org [IPv6:2001:2f0:104:e001::32]) by mx1.freebsd.org (Postfix) with ESMTP id A1AC6D5C; Wed, 9 Jan 2013 03:05:47 +0000 (UTC) Received: from alph.allbsd.org (p1137-ipbf1505funabasi.chiba.ocn.ne.jp [118.7.212.137]) (authenticated bits=128) by mail.allbsd.org (8.14.5/8.14.5) with ESMTP id r0935O68084086 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Wed, 9 Jan 2013 12:05:37 +0900 (JST) (envelope-from hrs@FreeBSD.org) Received: from localhost (localhost [127.0.0.1]) (authenticated bits=0) by alph.allbsd.org (8.14.5/8.14.5) with ESMTP id r0935Nkj009005; Wed, 9 Jan 2013 12:05:24 +0900 (JST) (envelope-from hrs@FreeBSD.org) Date: Wed, 09 Jan 2013 11:52:40 +0900 (JST) Message-Id: <20130109.115240.1198411557684741197.hrs@allbsd.org> To: jwd@FreeBSD.org Subject: Re: rc.d script for memory based zfs intent log From: Hiroki Sato In-Reply-To: <20130109023327.GA1888@FreeBSD.org> References: <20130109023327.GA1888@FreeBSD.org> X-PGPkey-fingerprint: BDB3 443F A5DD B3D0 A530 FFD7 4F2C D3D8 2793 CF2D X-Mailer: Mew version 6.5 on Emacs 23.4 / Mule 6.0 (HANACHIRUSATO) Mime-Version: 1.0 Content-Type: Multipart/Signed; protocol="application/pgp-signature"; micalg=pgp-sha1; boundary="--Security_Multipart(Wed_Jan__9_11_52_40_2013_377)--" Content-Transfer-Encoding: 7bit X-Virus-Scanned: clamav-milter 0.97.4 at gatekeeper.allbsd.org X-Virus-Status: Clean X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.2.7 (mail.allbsd.org [133.31.130.32]); Wed, 09 Jan 2013 12:05:38 +0900 (JST) X-Spam-Status: No, score=-98.1 required=13.0 tests=CONTENT_TYPE_PRESENT, ONLY1HOPDIRECT,SAMEHELOBY2HOP,USER_IN_WHITELIST autolearn=no version=3.3.2 X-Spam-Checker-Version: SpamAssassin 3.3.2 (2011-06-06) on gatekeeper.allbsd.org Cc: freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 09 Jan 2013 03:05:48 -0000 ----Security_Multipart(Wed_Jan__9_11_52_40_2013_377)-- Content-Type: Text/Plain; charset=us-ascii Content-Transfer-Encoding: 7bit John wrote in <20130109023327.GA1888@FreeBSD.org>: jw> Hi Folks, jw> jw> Here's an rc.d script that provides a nice performance boost on jw> ZFS/NFS based file servers. It also helps in other areas not specific jw> to NFS. jw> jw> It attaches the log device at system startup and removes it at jw> system shutdown time. Example; jw> jw> memzil_pools="tank" jw> memzil_bootfs="YES" jw> service memzil onestart jw> zpool status tank jw> service memzil onestop jw> jw> This configuration provides a nice performance boost especially to jw> NFS, but also helps in other areas not specific to NFS. jw> jw> Please DO NOT USE this script if your system is not UPS backed, preferably jw> with dual power supplies on separate circuits. If your system crashes you jw> may lose data. The script contains information on recovery. jw> jw> http://people.freebsd.org/~jwd/memzil.txt jw> jw> Comments/Improvements appreciated. Why is simply setting sync=disabled to the ZFS dataset not enough? -- Hiroki ----Security_Multipart(Wed_Jan__9_11_52_40_2013_377)-- Content-Type: application/pgp-signature Content-Transfer-Encoding: 7bit -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.11 (FreeBSD) iEYEABECAAYFAlDs23gACgkQTyzT2CeTzy0eIgCeOK4RI451JJEqEu/WS4ssfDpI AE4Anjhn3s0Y9lxCxxjC8Gb8SKCzQVY2 =Wpoa -----END PGP SIGNATURE----- ----Security_Multipart(Wed_Jan__9_11_52_40_2013_377)---- From owner-freebsd-fs@FreeBSD.ORG Wed Jan 9 03:18:34 2013 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: by hub.freebsd.org (Postfix, from userid 821) id CBE44C91; Wed, 9 Jan 2013 03:18:34 +0000 (UTC) Date: Wed, 9 Jan 2013 03:18:34 +0000 From: John To: Hiroki Sato Subject: Re: rc.d script for memory based zfs intent log Message-ID: <20130109031834.GA14386@FreeBSD.org> References: <20130109023327.GA1888@FreeBSD.org> <20130109.115240.1198411557684741197.hrs@allbsd.org> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20130109.115240.1198411557684741197.hrs@allbsd.org> User-Agent: Mutt/1.5.21 (2010-09-15) Cc: freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 09 Jan 2013 03:18:34 -0000 ----- Hiroki Sato's Original Message ----- > John wrote > in <20130109023327.GA1888@FreeBSD.org>: > > jw> Hi Folks, > jw> > jw> Here's an rc.d script that provides a nice performance boost on > jw> ZFS/NFS based file servers. It also helps in other areas not specific > jw> to NFS. > jw> > jw> It attaches the log device at system startup and removes it at > jw> system shutdown time. Example; > jw> > jw> memzil_pools="tank" > jw> memzil_bootfs="YES" > jw> service memzil onestart > jw> zpool status tank > jw> service memzil onestop > jw> > jw> This configuration provides a nice performance boost especially to > jw> NFS, but also helps in other areas not specific to NFS. > jw> > jw> Please DO NOT USE this script if your system is not UPS backed, preferably > jw> with dual power supplies on separate circuits. If your system crashes you > jw> may lose data. The script contains information on recovery. > jw> > jw> http://people.freebsd.org/~jwd/memzil.txt > jw> > jw> Comments/Improvements appreciated. > > Why is simply setting sync=disabled to the ZFS dataset not enough? As you refer to, my understanding is that sync=disabled is at the dataset layer. The zil approach is at the zpool layer - sync=disabled would be nice at the zpool layer. -John > -- Hiroki From owner-freebsd-fs@FreeBSD.ORG Wed Jan 9 03:58:53 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 15BBE571; Wed, 9 Jan 2013 03:58:53 +0000 (UTC) (envelope-from fjwcash@gmail.com) Received: from mail-la0-f50.google.com (mail-la0-f50.google.com [209.85.215.50]) by mx1.freebsd.org (Postfix) with ESMTP id 2E4FAF00; Wed, 9 Jan 2013 03:58:51 +0000 (UTC) Received: by mail-la0-f50.google.com with SMTP id fs13so1322109lab.37 for ; Tue, 08 Jan 2013 19:58:50 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=3kCk7NbkyFZLdl9y/ydUHv9P4u1An5JOpw2yk3CGChQ=; b=p1QzZsp2SFyvV27fwghGjkJOI9g4K4WQuVuTh2DeeEx1Ms3fiCteYerj3f7K0leMfX /ZCX0OeCuAnIcPzWnfgp+gFd2bB2tDi29vvMGEs6iWKPSilOYuk5Hv7AJMmp4NjvJ/jq R9bFgcIAvFSzUSwMJhrmqhad9jHi9RDFC9DF08qkBDKxA9wXcsTOLN+Q7R9kK4Es5iAa s+FIFKtJLYnWR+E3S5GNWkLCCBGVvyLQeaSuWBQKB3SXprD2yGFyujMY4ZgkKYkQAtIA mk/QrPK0LSBNUUfD8LjzgeIcLT8icYpA4n330vRKsAsoTXnUxZ2bt0c0D0BrpEnPkSDK Cbxg== MIME-Version: 1.0 Received: by 10.112.39.129 with SMTP id p1mr27375481lbk.26.1357703930563; Tue, 08 Jan 2013 19:58:50 -0800 (PST) Received: by 10.114.81.40 with HTTP; Tue, 8 Jan 2013 19:58:50 -0800 (PST) Received: by 10.114.81.40 with HTTP; Tue, 8 Jan 2013 19:58:50 -0800 (PST) In-Reply-To: <20130109031834.GA14386@FreeBSD.org> References: <20130109023327.GA1888@FreeBSD.org> <20130109.115240.1198411557684741197.hrs@allbsd.org> <20130109031834.GA14386@FreeBSD.org> Date: Tue, 8 Jan 2013 19:58:50 -0800 Message-ID: Subject: Re: rc.d script for memory based zfs intent log From: Freddie Cash To: John Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.14 Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 09 Jan 2013 03:58:53 -0000 On Jan 8, 2013 7:18 PM, "John" wrote: > > ----- Hiroki Sato's Original Message ----- > > John wrote > > in <20130109023327.GA1888@FreeBSD.org>: > > > > jw> Hi Folks, > > jw> > > jw> Here's an rc.d script that provides a nice performance boost on > > jw> ZFS/NFS based file servers. It also helps in other areas not specific > > jw> to NFS. > > jw> > > jw> It attaches the log device at system startup and removes it at > > jw> system shutdown time. Example; > > jw> > > jw> memzil_pools="tank" > > jw> memzil_bootfs="YES" > > jw> service memzil onestart > > jw> zpool status tank > > jw> service memzil onestop > > jw> > > jw> This configuration provides a nice performance boost especially to > > jw> NFS, but also helps in other areas not specific to NFS. > > jw> > > jw> Please DO NOT USE this script if your system is not UPS backed, preferably > > jw> with dual power supplies on separate circuits. If your system crashes you > > jw> may lose data. The script contains information on recovery. > > jw> > > jw> http://people.freebsd.org/~jwd/memzil.txt > > jw> > > jw> Comments/Improvements appreciated. > > > > Why is simply setting sync=disabled to the ZFS dataset not enough? > > As you refer to, my understanding is that sync=disabled is at the dataset > layer. The zil approach is at the zpool layer - sync=disabled would be > nice at the zpool layer. Set it on the root dataset (meaning the dataset with the same name as the pool) and every child dataset will pick it up via inheritance. From owner-freebsd-fs@FreeBSD.ORG Wed Jan 9 05:51:28 2013 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id B239FA13; Wed, 9 Jan 2013 05:51:28 +0000 (UTC) (envelope-from hrs@FreeBSD.org) Received: from mail.allbsd.org (gatekeeper.allbsd.org [IPv6:2001:2f0:104:e001::32]) by mx1.freebsd.org (Postfix) with ESMTP id 85570283; Wed, 9 Jan 2013 05:51:24 +0000 (UTC) Received: from alph.allbsd.org (p1137-ipbf1505funabasi.chiba.ocn.ne.jp [118.7.212.137]) (authenticated bits=128) by mail.allbsd.org (8.14.5/8.14.5) with ESMTP id r095oqTS004318 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Wed, 9 Jan 2013 14:51:09 +0900 (JST) (envelope-from hrs@FreeBSD.org) Received: from localhost (localhost [127.0.0.1]) (authenticated bits=0) by alph.allbsd.org (8.14.5/8.14.5) with ESMTP id r095ooJx010464; Wed, 9 Jan 2013 14:50:51 +0900 (JST) (envelope-from hrs@FreeBSD.org) Date: Wed, 09 Jan 2013 14:09:46 +0900 (JST) Message-Id: <20130109.140946.1818513049591807609.hrs@allbsd.org> To: jwd@FreeBSD.org Subject: Re: rc.d script for memory based zfs intent log From: Hiroki Sato In-Reply-To: <20130109031834.GA14386@FreeBSD.org> References: <20130109023327.GA1888@FreeBSD.org> <20130109.115240.1198411557684741197.hrs@allbsd.org> <20130109031834.GA14386@FreeBSD.org> X-PGPkey-fingerprint: BDB3 443F A5DD B3D0 A530 FFD7 4F2C D3D8 2793 CF2D X-Mailer: Mew version 6.5 on Emacs 23.4 / Mule 6.0 (HANACHIRUSATO) Mime-Version: 1.0 Content-Type: Multipart/Signed; protocol="application/pgp-signature"; micalg=pgp-sha1; boundary="--Security_Multipart(Wed_Jan__9_14_09_46_2013_178)--" Content-Transfer-Encoding: 7bit X-Virus-Scanned: clamav-milter 0.97.4 at gatekeeper.allbsd.org X-Virus-Status: Clean X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.2.7 (mail.allbsd.org [133.31.130.32]); Wed, 09 Jan 2013 14:51:10 +0900 (JST) X-Spam-Status: No, score=-98.1 required=13.0 tests=CONTENT_TYPE_PRESENT, ONLY1HOPDIRECT,SAMEHELOBY2HOP,USER_IN_WHITELIST autolearn=no version=3.3.2 X-Spam-Checker-Version: SpamAssassin 3.3.2 (2011-06-06) on gatekeeper.allbsd.org Cc: freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 09 Jan 2013 05:51:28 -0000 ----Security_Multipart(Wed_Jan__9_14_09_46_2013_178)-- Content-Type: Text/Plain; charset=us-ascii Content-Transfer-Encoding: 7bit John wrote in <20130109031834.GA14386@FreeBSD.org>: jw> ----- Hiroki Sato's Original Message ----- jw> > John wrote jw> > in <20130109023327.GA1888@FreeBSD.org>: jw> > jw> > jw> Hi Folks, jw> > jw> jw> > jw> Here's an rc.d script that provides a nice performance boost on jw> > jw> ZFS/NFS based file servers. It also helps in other areas not specific jw> > jw> to NFS. jw> > jw> jw> > jw> It attaches the log device at system startup and removes it at jw> > jw> system shutdown time. Example; jw> > jw> jw> > jw> memzil_pools="tank" jw> > jw> memzil_bootfs="YES" jw> > jw> service memzil onestart jw> > jw> zpool status tank jw> > jw> service memzil onestop jw> > jw> jw> > jw> This configuration provides a nice performance boost especially to jw> > jw> NFS, but also helps in other areas not specific to NFS. jw> > jw> jw> > jw> Please DO NOT USE this script if your system is not UPS backed, preferably jw> > jw> with dual power supplies on separate circuits. If your system crashes you jw> > jw> may lose data. The script contains information on recovery. jw> > jw> jw> > jw> http://people.freebsd.org/~jwd/memzil.txt jw> > jw> jw> > jw> Comments/Improvements appreciated. jw> > jw> > Why is simply setting sync=disabled to the ZFS dataset not enough? jw> jw> As you refer to, my understanding is that sync=disabled is at the dataset jw> layer. The zil approach is at the zpool layer - sync=disabled would be jw> nice at the zpool layer. Can you elaborate why it matters? If one wants to disable ZIL, just setting sync=disabled should be enough. Using memory disk to effectively disable ZIL on physical HDDs looks a roundabout way to do the same thing to me. -- Hiroki ----Security_Multipart(Wed_Jan__9_14_09_46_2013_178)-- Content-Type: application/pgp-signature Content-Transfer-Encoding: 7bit -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.11 (FreeBSD) iEYEABECAAYFAlDs+5oACgkQTyzT2CeTzy2KIgCdFLIlCEFI9Bfz0CegX4BCEst7 FwEAnjgrEkumnADWyLUIa1xr29vN3C8g =1Ic7 -----END PGP SIGNATURE----- ----Security_Multipart(Wed_Jan__9_14_09_46_2013_178)---- From owner-freebsd-fs@FreeBSD.ORG Wed Jan 9 09:45:55 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 2E4F7CD1 for ; Wed, 9 Jan 2013 09:45:55 +0000 (UTC) (envelope-from daniel@digsys.bg) Received: from smtp-sofia.digsys.bg (smtp-sofia.digsys.bg [193.68.3.230]) by mx1.freebsd.org (Postfix) with ESMTP id 9FF2E117 for ; Wed, 9 Jan 2013 09:45:53 +0000 (UTC) Received: from dcave.digsys.bg (dcave.digsys.bg [192.92.129.5]) (authenticated bits=0) by smtp-sofia.digsys.bg (8.14.5/8.14.5) with ESMTP id r099jgiZ090854 (version=TLSv1/SSLv3 cipher=DHE-RSA-CAMELLIA256-SHA bits=256 verify=NO) for ; Wed, 9 Jan 2013 11:45:42 +0200 (EET) (envelope-from daniel@digsys.bg) Message-ID: <50ED3C46.4040200@digsys.bg> Date: Wed, 09 Jan 2013 11:45:42 +0200 From: Daniel Kalchev User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:10.0.11) Gecko/20121212 Thunderbird/10.0.11 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: Re: rc.d script for memory based zfs intent log References: <20130109023327.GA1888@FreeBSD.org> <20130109.115240.1198411557684741197.hrs@allbsd.org> <20130109031834.GA14386@FreeBSD.org> In-Reply-To: <20130109031834.GA14386@FreeBSD.org> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 09 Jan 2013 09:45:55 -0000 On 09.01.13 05:18, John wrote: > ----- Hiroki Sato's Original Message ----- >> Why is simply setting sync=disabled to the ZFS dataset not enough? > As you refer to, my understanding is that sync=disabled is at the dataset > layer. The zil approach is at the zpool layer - sync=disabled would be > nice at the zpool layer. A per-filesytem approach is better, because it lets you have huge zpool (the whole idea of ZFS) with only part of it being exported via NFS. You will also not risk data loss for the rest of the zpool. Daniel From owner-freebsd-fs@FreeBSD.ORG Wed Jan 9 12:55:58 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 6FD15A77 for ; Wed, 9 Jan 2013 12:55:58 +0000 (UTC) (envelope-from simon@comsys.ntu-kpi.kiev.ua) Received: from comsys.kpi.ua (comsys.kpi.ua [77.47.192.42]) by mx1.freebsd.org (Postfix) with ESMTP id 18B5CD92 for ; Wed, 9 Jan 2013 12:55:57 +0000 (UTC) Received: from pm513-1.comsys.kpi.ua ([10.18.52.101] helo=pm513-1.comsys.ntu-kpi.kiev.ua) by comsys.kpi.ua with esmtpsa (TLSv1:AES256-SHA:256) (Exim 4.63) (envelope-from ) id 1TsvCF-0006iN-P0; Wed, 09 Jan 2013 14:55:55 +0200 Received: by pm513-1.comsys.ntu-kpi.kiev.ua (Postfix, from userid 1001) id 45D621CC1E; Wed, 9 Jan 2013 14:55:55 +0200 (EET) Date: Wed, 9 Jan 2013 14:55:54 +0200 From: Andrey Simonenko To: Jason Keltz Subject: Re: Problems Re-Starting mountd Message-ID: <20130109125554.GA1574@pm513-1.comsys.ntu-kpi.kiev.ua> References: <20130103123730.GA19137@pm513-1.comsys.ntu-kpi.kiev.ua> <20130108150508.GA2248@pm513-1.comsys.ntu-kpi.kiev.ua> <50EC39A8.3070108@cse.yorku.ca> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <50EC39A8.3070108@cse.yorku.ca> User-Agent: Mutt/1.5.21 (2010-09-15) X-Authenticated-User: simon@comsys.ntu-kpi.kiev.ua X-Authenticator: plain X-Sender-Verify: SUCCEEDED (sender exists & accepts mail) X-Exim-Version: 4.63 (build at 28-Apr-2011 07:11:12) X-Date: 2013-01-09 14:55:55 X-Connected-IP: 10.18.52.101:14244 X-Message-Linecount: 79 X-Body-Linecount: 60 X-Message-Size: 4234 X-Body-Size: 3356 Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 09 Jan 2013 12:55:58 -0000 On Tue, Jan 08, 2013 at 10:22:16AM -0500, Jason Keltz wrote: > On 01/08/2013 10:05 AM, Andrey Simonenko wrote: > > I created 2000 file systems on ZFS file system backed by vnode md(4) > > device. The /etc/exports file contains 4000 entries like your example. > > > > On 9.1-STABLE mountd spends ~70 seconds in flushing current NFS exports > > in the NFS server, parsing data from /etc/exports and loading parsed > > data into the NFS server. ~70 seconds is not several minutes. Most of > > time mountd spends in nmount() system call in "zio->io_cv" lock. > > > > Can you show the output of "truss -fc -o /tmp/output.txt mountd" > > (wait wchan "select" state of mountd and terminate it by a signal). > > If everything is correct you should see N statfs() calls, N+M nmount() > > calls and something*N lstat() calls, where N is the number of /etc/exports > > lines, M is the number of mounted file systems. Number of lstat() calls > > depends on number of components in pathnames. > > Andrey, > > Would that still be an ~70 second period in which new mounts would not > be allowed? In the system I'm preparing, I'll have at least 4000 entries > in /etc/exports, probably even more, so I know I'll be dealing with the > same issue that Tim is dealing with when I get there. However, I don't > see how to avoid the issue ... If I want new users to be able to login > shortly after their account is created, and each user has a ZFS > filesystem as a home directory, then at least at some interval, after > adding a user to the system, I need to update the exports file on the > file server, and re-export everything. Yet, even a >1 minute delay > where users who are logging in won't get their home directory mounted on > the system they are logging into - well, that's not so good... accounts > can be added all the time and this would create random chaos. Isn't > there some way to make it so that when you re-export everything, the > existing exports are still served until the new exports are ready? When mountd starts it flushes NFS export settings for all file systems, for each mount point it calls nmount(), even if /etc/exports is empty it will call nmount() for all currently mounted file systems. When mountd loads export settings into NFS server it calls statfs() and lstat() for each pathname from /etc/exports (number of lstat() calls depends on number of '/' in each pathname), then it calls nmount() for each address specification for each pathname from /etc/exports. It uses nmount() interface for communication with kern/vfs_export.c that is responsible for NFS export settings for file systems. For the NFSv4 root directory mountd uses nfssvc() to update its settings, that calls kern/vfs_export.c:vfs_export(). When mountd receives SIGHUP it flushes everything and loads /etc/exports. This signal is sent by mount(8) when it mounts any file system. This delay in above described example came from ZFS kernel code, since the same configuration for 2000 nullfs(5) file systems takes ~0.20 second (less than second) by mountd in nmount() system calls. At least on 9.1-STABLE I do not see that this delay came from mountd code, it came from nmount() used by mountd. > Would this be the same for NFSv3 versus NFSv4? I suspect yes. NFS export settings are loaded into NFS server in the same for all NFS versions. From owner-freebsd-fs@FreeBSD.ORG Wed Jan 9 13:57:06 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 6E068A17 for ; Wed, 9 Jan 2013 13:57:06 +0000 (UTC) (envelope-from simon@comsys.ntu-kpi.kiev.ua) Received: from comsys.kpi.ua (comsys.kpi.ua [77.47.192.42]) by mx1.freebsd.org (Postfix) with ESMTP id 22D84AC for ; Wed, 9 Jan 2013 13:57:05 +0000 (UTC) Received: from pm513-1.comsys.kpi.ua ([10.18.52.101] helo=pm513-1.comsys.ntu-kpi.kiev.ua) by comsys.kpi.ua with esmtpsa (TLSv1:AES256-SHA:256) (Exim 4.63) (envelope-from ) id 1Tsw9P-0007pw-PV; Wed, 09 Jan 2013 15:57:03 +0200 Received: by pm513-1.comsys.ntu-kpi.kiev.ua (Postfix, from userid 1001) id C779D1CC1E; Wed, 9 Jan 2013 15:57:03 +0200 (EET) Date: Wed, 9 Jan 2013 15:57:03 +0200 From: Andrey Simonenko To: Rick Macklem Subject: Re: Problems Re-Starting mountd Message-ID: <20130109135703.GB1574@pm513-1.comsys.ntu-kpi.kiev.ua> References: <50EC39A8.3070108@cse.yorku.ca> <972459831.1800222.1357690721032.JavaMail.root@erie.cs.uoguelph.ca> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <972459831.1800222.1357690721032.JavaMail.root@erie.cs.uoguelph.ca> User-Agent: Mutt/1.5.21 (2010-09-15) X-Authenticated-User: simon@comsys.ntu-kpi.kiev.ua X-Authenticator: plain X-Sender-Verify: SUCCEEDED (sender exists & accepts mail) X-Exim-Version: 4.63 (build at 28-Apr-2011 07:11:12) X-Date: 2013-01-09 15:57:03 X-Connected-IP: 10.18.52.101:36080 X-Message-Linecount: 153 X-Body-Linecount: 136 X-Message-Size: 7125 X-Body-Size: 6357 Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 09 Jan 2013 13:57:06 -0000 On Tue, Jan 08, 2013 at 07:18:41PM -0500, Rick Macklem wrote: > Jason Keltz wrote: > > On 01/08/2013 10:05 AM, Andrey Simonenko wrote: > > > I created 2000 file systems on ZFS file system backed by vnode md(4) > > > device. The /etc/exports file contains 4000 entries like your > > > example. > > > > > > On 9.1-STABLE mountd spends ~70 seconds in flushing current NFS > > > exports > > > in the NFS server, parsing data from /etc/exports and loading parsed > > > data into the NFS server. ~70 seconds is not several minutes. Most > > > of > > > time mountd spends in nmount() system call in "zio->io_cv" lock. > > > > > > Can you show the output of "truss -fc -o /tmp/output.txt mountd" > > > (wait wchan "select" state of mountd and terminate it by a signal). > > > If everything is correct you should see N statfs() calls, N+M > > > nmount() > > > calls and something*N lstat() calls, where N is the number of > > > /etc/exports > > > lines, M is the number of mounted file systems. Number of lstat() > > > calls > > > depends on number of components in pathnames. > > > > Andrey, > > > > Would that still be an ~70 second period in which new mounts would not > > be allowed? In the system I'm preparing, I'll have at least 4000 > > entries > > in /etc/exports, probably even more, so I know I'll be dealing with > > the > > same issue that Tim is dealing with when I get there. However, I don't > > see how to avoid the issue ... If I want new users to be able to login > > shortly after their account is created, and each user has a ZFS > > filesystem as a home directory, then at least at some interval, after > > adding a user to the system, I need to update the exports file on the > > file server, and re-export everything. Yet, even a >1 minute delay > > where users who are logging in won't get their home directory mounted > > on > > the system they are logging into - well, that's not so good... > > accounts > > can be added all the time and this would create random chaos. Isn't > > there some way to make it so that when you re-export everything, the > > existing exports are still served until the new exports are ready? > I can't think of how you'd do everything without deleting the old stuff, > but it would be possible to "add new entries". It has to be done by > modifying mountd, since it keeps a tree in its address space that it uses > for mount requests and the tree must be grown. > > I don't know about nfse, but you'd have to add this capability to mountd > and, trust me, it's an ugly old piece of C code, so coming up with a patch > might not be that easy. However, it might not be that bad, since the only difference > from doing the full reload as it stands now would be to "not delete > the tree that already exists in the utility and don't do the DELEXPORTS > syscall" I think, so the old ones don't go away. There could be a file called > something like /etc/exports.new for the new entries and a different > signal (SIGUSR1??) to load these. (Then you'd add the new entries to > /etc/exports as well for the next time mountd restarts, but wouldn't > send it a SIGHUP.) This delay in above described example came from ZFS kernel code, since the same configuration for 2000 nullfs(5) file systems takes ~0.20 second (less than second) by mountd in nmount() system calls. At least on 9.1-STABLE I do not see that this delay came from mountd code, it came from nmount() used by mountd. Since nfse was mentioned in this thread, I can explain how this is implemented in nfse. The nfse utility and its NFSE API support dynamic commands, in fact all settings are updated using the same API. This API allows to flush all configuration, flush/clear file system configuration, add/update/delete configuration for address specification. All commands can be grouped, so one nfssvc() call can be called with several commands. Not all commands have to be grouped together, instead API uses transaction model and while some transaction is open it is possible to use it for passing commands into NFS server. When all commands are ready, transaction is committed. Each transaction has timeout and it is possible to have several transaction in one or in several processes. The nfse utility has the -c option that allows to specify commands in a command line. For example a user can add several lines in configuration file: /fs/user1 -network 10.1/16 -network 10.2/16 /fs/user2 -network 10.3/16 1.1.1.1 Then, instead of reloading all configuration one can add these settings: # nfse -c 'add /fs/user1 -network 10.1/16 -network 10.2/16' \ -c 'add /fs/user2 -network 10.3/16 1.1.1.1' Or, it is possible to keep settings for each user in separate file, eg. /etc/nfs-export/user1, /etc/nfs-export/user2 files and then: # nfse -c 'add -f /etc/nfs-export/user1' -c 'add -f /etc/nfs-export/user2' Since number of users can be change nfse should be started like this: # nfse /etc/nfs-export/ and it will take all regular files from the given directory(ies). When it is necessary to remove NFS exports for user then: # nfse -c 'delete /fs/user1 -network 10.1/16 -network 10.2/16' \ -c 'delete /fs/user2 -network 10.3/16 1.1.1.1' or # nfse -c 'delete -f /etc/nfs-export/user1' \ -c 'delete -f /etc/nfs-export/user2' or # nfse -c 'flush /fs/user1 /fs/user2' Updating work like this: # nfse -c 'update /fs/user1 -ro -network 10.1/16' I checked nfse on 9.1-STABLE with above given example. It takes ~0.10 second by nfse to configure 2000 ZFS file systems, this time mostly is spent in nfssvc() calls (number of calls depends on how many commands are grouped for one nfssvc() call). I did not check delay in NFSE code for NFS clients during updating of NFS export settings, but it will be less than time used by nfse, since NFSE code in the NFS server uses deferred data releasing and it require to acquire small number of locks. Two locks are acquire while all NFS export settings are updated, one lock is acquire for transaction and one lock is acquire for each passed security flavor list and credentials specification. Each security flavor list and credential specification or any specification is passed in own command, so if there are ~2000 file systems exported to the same address specification, then corresponding security flavor list and credential specification are passed only one time. From owner-freebsd-fs@FreeBSD.ORG Wed Jan 9 14:34:36 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id E90C854A for ; Wed, 9 Jan 2013 14:34:36 +0000 (UTC) (envelope-from patrick_dkt@yahoo.com.hk) Received: from nm39-vm1.bullet.mail.sg3.yahoo.com (nm39-vm1.bullet.mail.sg3.yahoo.com [106.10.151.172]) by mx1.freebsd.org (Postfix) with ESMTP id 17315283 for ; Wed, 9 Jan 2013 14:34:35 +0000 (UTC) Received: from [106.10.166.63] by nm39.bullet.mail.sg3.yahoo.com with NNFMP; 09 Jan 2013 14:31:20 -0000 Received: from [106.10.151.138] by tm20.bullet.mail.sg3.yahoo.com with NNFMP; 09 Jan 2013 14:31:19 -0000 Received: from [127.0.0.1] by omp1006.mail.sg3.yahoo.com with NNFMP; 09 Jan 2013 14:31:19 -0000 X-Yahoo-Newman-Property: ymail-3 X-Yahoo-Newman-Id: 832519.60523.bm@omp1006.mail.sg3.yahoo.com Received: (qmail 57155 invoked by uid 60001); 9 Jan 2013 14:31:19 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com.hk; s=s1024; t=1357741879; bh=rZskGUbkQfDsYCBN8MutOuiqfPkbEgpBFV1+6fs76Qk=; h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:Message-ID:Date:From:Subject:To:MIME-Version:Content-Type; b=w5QgAKXEyq3zR0Sk+pLfFM1mt6hVSMkS+ZfoBfWkRivc1vlH3pULu9tYGC9kndFM6RA/W942DNFSeiHEsn0OnQC71HWPEBY4KhRFdGi0VEXB+XcnicdzV+XgYOHk4azhwogcisntC2m3JK8G+hUBEDJMuCcGzDH5qQ6uyO1R+HE= DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com.hk; h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:Message-ID:Date:From:Subject:To:MIME-Version:Content-Type; b=FKBrSKP9cYOO60h9KObQwdX0YgWdH4f4T7/97wm/oVSlnz8jmP28PvMaOHngi4VWFAEMS52Uc5dBWmKlVQqt2UuwVFM1PbV3WcYh8yxhWDPWic0r7uAlXD87arVDTtHhlwrZiLb86tyAFjQEYb9Iyqryv0yFNFqOkrvuNvPhfSU=; X-YMail-OSG: 9taNbi4VM1kpJatzmZMVhi424NJaBke7CzuURWgST8E0ETt Z6lrdUZ.Nc22LmX4_0UIfC9hB3rBvzExMqhAy8aOghOup6eq7UHjEIp4bcyi 4PTOjGHyzu7KaDE5aT47HwpXtNHfD4hYhZgK4LEPDGxzEP5flumZ4rhhYi0t .fdAWj.hjqNjH6hkw6M308PL6rwiiSSu_.zX9cdgL9fRvCPop2f7bzuDJpOy 8n2GJN9G2r4PusoqV1KWUl8Ff0MUZqdGuYre0AacDiixcAD8WGvPyhQUMZv6 5r6oB_bqlD0ZnP6MoTWqgBVPFfQy1IHDdJY7HQ66iN61tTEoNepwdvOdtKek J2nF4QSDqkAO5tfpvp8M7VJTA_pAZscUkCynPiptXSX6LMH3sJ_H.OQ3AIhx ClcCoutL7ZEvfQ5BDNBi.Vln3tGPI_hWRzxTiLPGJy0bEJf8eJr3TlIL2mj0 F0xeI93AMzGebPuPAKsqr61T_Mi5hBAdaVMyun4RUlgeWOCcmgbUgWFSuCjI 4PGR4NR1iR1sKLxEnujzldyKZ9TG5Q8grZsrsDA52u632hWNrBN5zivJ60B0 i6cOSzxsLZZFAcmT0iJJ2uit5MB_SQn5i6mPvzbDJbDQ- Received: from [61.15.240.116] by web190806.mail.sg3.yahoo.com via HTTP; Wed, 09 Jan 2013 22:31:19 SGT X-Rocket-MIMEInfo: 001.001, SGkgZnJlZWJzZC1mcyENCg0KSSBoYXZlIG15IHRoZSBvcmlnaW5hbCBxdWVzdGlvbiBpbjoNCmh0dHA6Ly9hcmNoaXZlcy5wb3N0Z3Jlc3FsLm9yZy9wZ3NxbC1wZXJmb3JtYW5jZS8yMDEzLTAxL21zZzAwMDQ0LnBocA0KQnV0IGxhdGVyIGl0IHdhcyBmb3VuZCBvdXQgdGhlIGJvdHRsZW5lY2sgc2VlbXMgdG8gYmUgdGhlIFpGUyB3aXRoIG91dCBhIGZhc3QgWklMLg0KUGxlYXNlIGdpdmUgc29tZSBhZHZpc2UsIHRoYW5rcy4NCg0KRGV0YWlsczoNCg0KUG9zdGdyZXNxbCA5LjIuMiAoY29tcGxpZWQgYnkgZ2MBMAEBAQE- X-Mailer: YahooMailClassic/15.1.2 YahooMailWebService/0.8.130.494 Message-ID: <1357741879.56011.YahooMailClassic@web190806.mail.sg3.yahoo.com> Date: Wed, 9 Jan 2013 22:31:19 +0800 (SGT) From: Patrick Dung Subject: ZFS sub-optimal performance with default setting To: freebsd-fs@freebsd.org MIME-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.14 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 09 Jan 2013 14:34:37 -0000 Hi freebsd-fs! I have my the original question in: http://archives.postgresql.org/pgsql-performance/2013-01/msg00044.php But later it was found out the bottleneck seems to be the ZFS with out a fa= st ZIL. Please give some advise, thanks. Details: Postgresql 9.2.2 (complied by gcc) is installed in FreeBSD 9.1 i386. The pgsql base directory is in a ZFS dataset. I have noticed the performance is sub-optimal, but I know the default=20 setting should be the most safest one to be use (concern about data integri= ty). a) I use OTRS ticketing system version 3.1, the backend is PostgreSQL. The user interactive response is not slow (switching web pages or create a = change). b) There is a benchmark in the support module of OTRS. It tested insert,update,select and delete performance. The response time is slow (>10 sec), except select. I have done some research on web, with below settings (just one change, not= both), the performance returned to normal: 1) Disabled sync in the pgsql dataset in ZFS zfs set sync=3Ddisabled mydata/pgsql or=20 2) In =A0postgresql.conf, set synchronous_commit from on to off I know the above setting would lead to data loss (e.g.power goes off), any = comments? PS: 1) I have tried to use primarycache/secondarycache=3Dmetadata/none, it do n= ot seem to help. 2) I have tried the default setting on Linux too: RHEL 6.3, ext4, stock postgresql 8.x, OTRS 3.1. The web site is responsive and the benchmark result is more or less the sam= e as FreeBSD with the 'sync' turned off. 3) For FreeBSD, same setting with Postgresql on UFS: The performance is between ZFS (default, sync enabled) and ZFS (sync disabl= ed). Thanks, Patrick From owner-freebsd-fs@FreeBSD.ORG Wed Jan 9 14:49:24 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 45DCEA0F for ; Wed, 9 Jan 2013 14:49:24 +0000 (UTC) (envelope-from freebsd-fs@m.gmane.org) Received: from plane.gmane.org (plane.gmane.org [80.91.229.3]) by mx1.freebsd.org (Postfix) with ESMTP id F095F34A for ; Wed, 9 Jan 2013 14:49:23 +0000 (UTC) Received: from list by plane.gmane.org with local (Exim 4.69) (envelope-from ) id 1TswyF-0001xg-HC for freebsd-fs@freebsd.org; Wed, 09 Jan 2013 15:49:35 +0100 Received: from lara.cc.fer.hr ([161.53.72.113]) by main.gmane.org with esmtp (Gmexim 0.1 (Debian)) id 1AlnuQ-0007hv-00 for ; Wed, 09 Jan 2013 15:49:35 +0100 Received: from ivoras by lara.cc.fer.hr with local (Gmexim 0.1 (Debian)) id 1AlnuQ-0007hv-00 for ; Wed, 09 Jan 2013 15:49:35 +0100 X-Injected-Via-Gmane: http://gmane.org/ To: freebsd-fs@freebsd.org From: Ivan Voras Subject: Re: ZFS sub-optimal performance with default setting Date: Wed, 09 Jan 2013 15:49:10 +0100 Lines: 69 Message-ID: References: <1357741879.56011.YahooMailClassic@web190806.mail.sg3.yahoo.com> Mime-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="------------enig2C0B8ED6291D76C5CFC40320" X-Complaints-To: usenet@ger.gmane.org X-Gmane-NNTP-Posting-Host: lara.cc.fer.hr User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:14.0) Gecko/20120812 Thunderbird/14.0 In-Reply-To: <1357741879.56011.YahooMailClassic@web190806.mail.sg3.yahoo.com> X-Enigmail-Version: 1.4.3 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 09 Jan 2013 14:49:24 -0000 This is an OpenPGP/MIME signed message (RFC 2440 and 3156) --------------enig2C0B8ED6291D76C5CFC40320 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: quoted-printable I don't know if I understand your questions correctly; you should provide more details: what is the performance you get from the system (numbers) and what do you want it to be (again, numbers). On 09/01/2013 15:31, Patrick Dung wrote: > a) I use OTRS ticketing system version 3.1, the backend is PostgreSQL. > The user interactive response is not slow (switching web pages or creat= e a change). How did you conclude the database and ZFS are the problem? What measurements from iostat and similar tools do you get which support that conclusion? > I have done some research on web, with below settings (just one change,= not both), the performance returned to normal: >=20 > 1) Disabled sync in the pgsql dataset in ZFS > zfs set sync=3Ddisabled mydata/pgsql > or=20 > 2) In > postgresql.conf, set synchronous_commit from on to off >=20 > I know the above setting would lead to data loss (e.g.power goes off), = any comments? Those two settings have almost the same idea behind them - delaying disk data sync in such a way that it doesn't impact the data already committed, and preserve metadata structures, but they do it on different levels. In theory, with ZFS "sync=3Ddisabled", you will survive a crash with the file system structures intact, but with some file data lost (e.g. the last 30 seconds before the crash). This may include any random file data.= With PostgreSQL's "synchronous_commit=3Doff", you will survive a crash with PostgreSQL's data structures intact (if the file system synces properly), but you could lose data within the database since the last time fsync was called. The difference is that in the first case (ZFS), you may end up with a sane file system but with corrupted PostgreSQL data, and in the second case the PostgreSQL files will be sane and only some database data can be lost. The second case is always better. --------------enig2C0B8ED6291D76C5CFC40320 Content-Type: application/pgp-signature; name="signature.asc" Content-Description: OpenPGP digital signature Content-Disposition: attachment; filename="signature.asc" -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.19 (FreeBSD) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iEYEARECAAYFAlDtg2YACgkQ/QjVBj3/HSwD4ACdF9ugkBb87gaIROfLqAWEn9ut WrkAoIen8oD9f8o5hSWarDivwmDxFzvB =z9+c -----END PGP SIGNATURE----- --------------enig2C0B8ED6291D76C5CFC40320-- From owner-freebsd-fs@FreeBSD.ORG Wed Jan 9 15:00:05 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 1F136ED for ; Wed, 9 Jan 2013 15:00:05 +0000 (UTC) (envelope-from ronald-freebsd8@klop.yi.org) Received: from smarthost1.greenhost.nl (smarthost1.greenhost.nl [195.190.28.78]) by mx1.freebsd.org (Postfix) with ESMTP id ADA3B69F for ; Wed, 9 Jan 2013 15:00:04 +0000 (UTC) Received: from smtp.greenhost.nl ([213.108.104.138]) by smarthost1.greenhost.nl with esmtps (TLS1.0:RSA_AES_256_CBC_SHA1:32) (Exim 4.69) (envelope-from ) id 1Tsx8F-0000jP-69 for freebsd-fs@freebsd.org; Wed, 09 Jan 2013 15:59:56 +0100 Received: from [81.21.138.17] (helo=ronaldradial.versatec.local) by smtp.greenhost.nl with esmtpsa (TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.72) (envelope-from ) id 1Tsx8F-00038g-By for freebsd-fs@freebsd.org; Wed, 09 Jan 2013 15:59:55 +0100 Content-Type: text/plain; charset=utf-8; format=flowed; delsp=yes To: freebsd-fs@freebsd.org Subject: Re: ZFS sub-optimal performance with default setting References: <1357741879.56011.YahooMailClassic@web190806.mail.sg3.yahoo.com> Date: Wed, 09 Jan 2013 15:59:56 +0100 MIME-Version: 1.0 Content-Transfer-Encoding: 8bit From: "Ronald Klop" Message-ID: In-Reply-To: <1357741879.56011.YahooMailClassic@web190806.mail.sg3.yahoo.com> User-Agent: Opera Mail/12.12 (Win32) X-Virus-Scanned: by clamav at smarthost1.samage.net X-Spam-Level: / X-Spam-Score: 0.8 X-Spam-Status: No, score=0.8 required=5.0 tests=BAYES_50 autolearn=disabled version=3.3.1 X-Scan-Signature: d58e29c6f4e42f76447094ef3ccb23d2 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 09 Jan 2013 15:00:05 -0000 On Wed, 09 Jan 2013 15:31:19 +0100, Patrick Dung wrote: > Hi freebsd-fs! > > I have my the original question in: > http://archives.postgresql.org/pgsql-performance/2013-01/msg00044.php > But later it was found out the bottleneck seems to be the ZFS with out a > fast ZIL. > Please give some advise, thanks. > > Details: > > Postgresql 9.2.2 (complied by gcc) is installed in FreeBSD 9.1 i386. > The pgsql base directory is in a ZFS dataset. > > I have noticed the performance is sub-optimal, but I know the default > setting should be the most safest one to be use (concern about data > integrity). > > a) I use OTRS ticketing system version 3.1, the backend is PostgreSQL. > The user interactive response is not slow (switching web pages or create > a change). > > b) There is a benchmark in the support module of OTRS. > It tested insert,update,select and delete performance. > The response time is slow (>10 sec), except select. > > I have done some research on web, with below settings (just one change, > not both), the performance returned to normal: > > 1) Disabled sync in the pgsql dataset in ZFS > zfs set sync=disabled mydata/pgsql > or > 2) In > postgresql.conf, set synchronous_commit from on to off > > I know the above setting would lead to data loss (e.g.power goes off), > any comments? > > PS: > 1) I have tried to use primarycache/secondarycache=metadata/none, it do > not seem to help. > > 2) > I have tried the default setting on Linux too: > RHEL 6.3, ext4, stock postgresql 8.x, OTRS 3.1. > The web site is responsive and the benchmark result is more or less the > same as FreeBSD with the 'sync' turned off. > > 3) > For FreeBSD, same setting with Postgresql on UFS: > The performance is between ZFS (default, sync enabled) and ZFS (sync > disabled). > > Thanks, > Patrick As you might have read on the internet ZFS does some kind of journaling and postgresql also. Postgresql expects the FS to not do the journaling (because it also works on ext2, ffs, etc.) so you can disable it for ZFS and postgresql will make sure the data is consistent. Ronald. From owner-freebsd-fs@FreeBSD.ORG Wed Jan 9 15:22:24 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id EFE14A23; Wed, 9 Jan 2013 15:22:24 +0000 (UTC) (envelope-from borjam@sarenet.es) Received: from proxypop03b.sare.net (proxypop03b.sare.net [194.30.0.251]) by mx1.freebsd.org (Postfix) with ESMTP id B0F0F802; Wed, 9 Jan 2013 15:22:24 +0000 (UTC) Received: from [172.16.2.2] (izaro.sarenet.es [192.148.167.11]) by proxypop03.sare.net (Postfix) with ESMTPSA id 560C79DD64F; Wed, 9 Jan 2013 16:15:33 +0100 (CET) Subject: Re: rc.d script for memory based zfs intent log Mime-Version: 1.0 (Apple Message framework v1085) Content-Type: text/plain; charset=us-ascii From: Borja Marcos In-Reply-To: <20130109023327.GA1888@FreeBSD.org> Date: Wed, 9 Jan 2013 16:15:38 +0100 Content-Transfer-Encoding: quoted-printable Message-Id: References: <20130109023327.GA1888@FreeBSD.org> To: John X-Mailer: Apple Mail (2.1085) Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 09 Jan 2013 15:22:25 -0000 On Jan 9, 2013, at 3:33 AM, John wrote: > Hi Folks, >=20 > Here's an rc.d script that provides a nice performance boost on > ZFS/NFS based file servers. It also helps in other areas not specific > to NFS. >=20 > It attaches the log device at system startup and removes it at > system shutdown time. Example; In case of a crash, seems to be riskier than using sync=3Ddisabled on = the datasets you need. What is the impact on the data integrity of a = suddenly disappearing ZIL? Borja. From owner-freebsd-fs@FreeBSD.ORG Wed Jan 9 16:26:16 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 1438AFDD; Wed, 9 Jan 2013 16:26:16 +0000 (UTC) (envelope-from nicolas@i.0x5.de) Received: from n.0x5.de (n.0x5.de [217.197.85.144]) by mx1.freebsd.org (Postfix) with ESMTP id 45A17B4E; Wed, 9 Jan 2013 16:26:15 +0000 (UTC) Received: by pc5.i.0x5.de (Postfix, from userid 1003) id 3YhG1K4jHHz7ySD; Wed, 9 Jan 2013 17:26:13 +0100 (CET) Date: Wed, 9 Jan 2013 17:26:13 +0100 From: Nicolas Rachinsky To: Artem Belevich Subject: Re: slowdown of zfs (tx->tx) Message-ID: <20130109162613.GA34276@mid.pc5.i.0x5.de> References: <20130108174225.GA17260@mid.pc5.i.0x5.de> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: X-Powered-by: FreeBSD X-Homepage: http://www.rachinsky.de X-PGP-Keyid: 887BAE72 X-PGP-Fingerprint: 039E 9433 115F BC5F F88D 4524 5092 45C4 887B AE72 X-PGP-Keys: http://www.rachinsky.de/nicolas/gpg/nicolas_rachinsky.asc User-Agent: Mutt/1.5.21 (2010-09-15) Cc: freebsd-fs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 09 Jan 2013 16:26:16 -0000 * Artem Belevich [2013-01-08 12:47 -0800]: > On Tue, Jan 8, 2013 at 9:42 AM, Nicolas Rachinsky > wrote: > > NAME STATE READ WRITE CKSUM > > pool1 DEGRADED 0 0 0 > > raidz2-0 DEGRADED 0 0 0 > > ada5 ONLINE 0 0 0 > > ada8 ONLINE 0 0 0 > > ada2 ONLINE 0 0 0 > > ada3 ONLINE 0 0 0 > > 11846390416703086268 UNAVAIL 0 0 0 was /dev/dsk/ada1 > > ada6 ONLINE 0 0 0 > > ada0 ONLINE 0 0 1 > > ada7 ONLINE 0 0 0 > > ada4 ONLINE 0 0 3 > > You seem to have some checksum errors which does suggest hardware troubles. I somehow missed these. Is there any way to learn when these checksum errors happen? > For starters, check smart info for all drives and see if they have any > relocated sectors. There are some disks with relocated sectors, but for both ada0 and ada4 Reallocated_Sector_Ct is 0. > Use gstat during your workload to see if any of the drives takes much > longer than others to handle its job. There is one disk sticking out a bit. > > There is almost no disk activity during this time. > > What kind of disk activity *is* there? What would be interesting? > > sync is disabled for the whole pool. > > If that's the case (assyming you're talking about sync=disabled zfs > property), then synchronous writes are probably not the cause of > slowdown. My guess would be either failing HDD or something funky with > cabling or sata controller. Yes, sync=disabled for pool1. Ok, I will start swapping hardware (sadly the machine is quite a drive away). Thank you very much for your help. Nicolas -- http://www.rachinsky.de/nicolas From owner-freebsd-fs@FreeBSD.ORG Wed Jan 9 18:36:14 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 20D1D7C1 for ; Wed, 9 Jan 2013 18:36:14 +0000 (UTC) (envelope-from ronald-freebsd8@klop.yi.org) Received: from cpsmtpb-ews06.kpnxchange.com (cpsmtpb-ews06.kpnxchange.com [213.75.39.9]) by mx1.freebsd.org (Postfix) with ESMTP id 79FA93F5 for ; Wed, 9 Jan 2013 18:36:13 +0000 (UTC) Received: from cpsps-ews27.kpnxchange.com ([10.94.84.193]) by cpsmtpb-ews06.kpnxchange.com with Microsoft SMTPSVC(7.5.7601.17514); Wed, 9 Jan 2013 19:33:59 +0100 Received: from CPSMTPM-TLF102.kpnxchange.com ([195.121.3.5]) by cpsps-ews27.kpnxchange.com with Microsoft SMTPSVC(7.5.7601.17514); Wed, 9 Jan 2013 19:33:59 +0100 Received: from sjakie.klop.ws ([212.182.167.131]) by CPSMTPM-TLF102.kpnxchange.com with Microsoft SMTPSVC(7.5.7601.17514); Wed, 9 Jan 2013 19:35:04 +0100 Received: from 212-182-167-131.ip.telfort.nl (localhost [127.0.0.1]) by sjakie.klop.ws (Postfix) with ESMTP id 7FD5C7A60 for ; Wed, 9 Jan 2013 19:35:04 +0100 (CET) Content-Type: text/plain; charset=us-ascii; format=flowed; delsp=yes To: freebsd-fs@freebsd.org Subject: Re: slowdown of zfs (tx->tx) References: <20130108174225.GA17260@mid.pc5.i.0x5.de> <20130109162613.GA34276@mid.pc5.i.0x5.de> Date: Wed, 09 Jan 2013 19:35:04 +0100 MIME-Version: 1.0 Content-Transfer-Encoding: 8bit From: "Ronald Klop" Message-ID: In-Reply-To: <20130109162613.GA34276@mid.pc5.i.0x5.de> User-Agent: Opera Mail/12.12 (FreeBSD) X-OriginalArrivalTime: 09 Jan 2013 18:35:04.0522 (UTC) FILETIME=[0AADE2A0:01CDEE98] X-RcptDomain: freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 09 Jan 2013 18:36:14 -0000 On Wed, 09 Jan 2013 17:26:13 +0100, Nicolas Rachinsky wrote: > * Artem Belevich [2013-01-08 12:47 -0800]: >> On Tue, Jan 8, 2013 at 9:42 AM, Nicolas Rachinsky >> wrote: >> > NAME STATE READ WRITE CKSUM >> > pool1 DEGRADED 0 0 0 >> > raidz2-0 DEGRADED 0 0 0 >> > ada5 ONLINE 0 0 0 >> > ada8 ONLINE 0 0 0 >> > ada2 ONLINE 0 0 0 >> > ada3 ONLINE 0 0 0 >> > 11846390416703086268 UNAVAIL 0 0 0 was >> /dev/dsk/ada1 >> > ada6 ONLINE 0 0 0 >> > ada0 ONLINE 0 0 1 >> > ada7 ONLINE 0 0 0 >> > ada4 ONLINE 0 0 3 >> >> You seem to have some checksum errors which does suggest hardware >> troubles. > > I somehow missed these. Is there any way to learn when these checksum > errors happen? > >> For starters, check smart info for all drives and see if they have any >> relocated sectors. > > There are some disks with relocated sectors, but for both ada0 and > ada4 Reallocated_Sector_Ct is 0. > >> Use gstat during your workload to see if any of the drives takes much >> longer than others to handle its job. > > There is one disk sticking out a bit. > >> > There is almost no disk activity during this time. >> >> What kind of disk activity *is* there? > > What would be interesting? > > >> > sync is disabled for the whole pool. >> >> If that's the case (assyming you're talking about sync=disabled zfs >> property), then synchronous writes are probably not the cause of >> slowdown. My guess would be either failing HDD or something funky with >> cabling or sata controller. > > Yes, sync=disabled for pool1. > > > Ok, I will start swapping hardware (sadly the machine is quite a drive > away). > > Thank you very much for your help. > > Nicolas If you are driving anyway replace this one: >> > 11846390416703086268 UNAVAIL 0 0 0 was >> /dev/dsk/ada1 If the pool is healthy checksum errors will be noticed earlier by the sysadmin. Ronald. From owner-freebsd-fs@FreeBSD.ORG Wed Jan 9 18:36:44 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 03CF5838 for ; Wed, 9 Jan 2013 18:36:44 +0000 (UTC) (envelope-from jas@cse.yorku.ca) Received: from bronze.cs.yorku.ca (bronze.cs.yorku.ca [130.63.95.34]) by mx1.freebsd.org (Postfix) with ESMTP id C96553FE for ; Wed, 9 Jan 2013 18:36:43 +0000 (UTC) Received: from [130.63.97.125] (ident=jas) by bronze.cs.yorku.ca with esmtpsa (TLSv1:CAMELLIA256-SHA:256) (Exim 4.76) (envelope-from ) id 1Tt0Vw-0008VX-0O; Wed, 09 Jan 2013 13:36:36 -0500 Message-ID: <50EDB8B3.4030903@cse.yorku.ca> Date: Wed, 09 Jan 2013 13:36:35 -0500 From: Jason Keltz User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64; rv:17.0) Gecko/17.0 Thunderbird/17.0 MIME-Version: 1.0 To: Rick Macklem Subject: Re: Problems Re-Starting mountd References: <2094136156.1801692.1357694368838.JavaMail.root@erie.cs.uoguelph.ca> In-Reply-To: <2094136156.1801692.1357694368838.JavaMail.root@erie.cs.uoguelph.ca> Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 7bit X-Spam-Score: -1.0 X-Spam-Level: - X-Spam-Report: Content preview: On 01/08/2013 08:19 PM, Rick Macklem wrote: > You could test the attached patch, which I think makes mountd > load new export entries from a file called /etc/exports.new > without deleting the exports already in place, when sent a > USR1 signal. > > After applying the patch to mountd.c, rebuilding and replacing > it, you would: > - put new entries for file systems not yet exported in both > /etc/exports and /etc/exports.new > # kill -USR1 > - delete /etc/exports.new > Don't send HUP to mountd for this case. > > Very lightly tested, rick > ps: Sometimes it's faster to just code this stuff instead of > discussing if/how it can be done;-) > pss: This patch isn't ready for head. If it is useful, it might > make sense to add a new mountd option that specifies the > name of the file (/etc/exports.new or ...), so that this > capability isn't enabled by default. Hi Rick, [...] Content analysis details: (-1.0 points, 5.0 required) pts rule name description ---- ---------------------- -------------------------------------------------- -0.0 SHORTCIRCUIT Not all rules were run, due to a shortcircuited rule -1.0 ALL_TRUSTED Passed through trusted hosts only via SMTP Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 09 Jan 2013 18:36:44 -0000 On 01/08/2013 08:19 PM, Rick Macklem wrote: > You could test the attached patch, which I think makes mountd > load new export entries from a file called /etc/exports.new > without deleting the exports already in place, when sent a > USR1 signal. > > After applying the patch to mountd.c, rebuilding and replacing > it, you would: > - put new entries for file systems not yet exported in both > /etc/exports and /etc/exports.new > # kill -USR1 > - delete /etc/exports.new > Don't send HUP to mountd for this case. > > Very lightly tested, rick > ps: Sometimes it's faster to just code this stuff instead of > discussing if/how it can be done;-) > pss: This patch isn't ready for head. If it is useful, it might > make sense to add a new mountd option that specifies the > name of the file (/etc/exports.new or ...), so that this > capability isn't enabled by default. Hi Rick, Thanks very much for looking into this. It's a pity (at least with current mountd) that there isn't maybe a more generic option for adding or removing an export on the fly. This way, a basic shell script could look at the original exports and the new exports, and then come up with the options to mountd to tell it what to do. The only "problem" I see with the patch per se is that while this would enable adding new exports without the additional delay, what happens when I delete a user and now I need to "unexport" the filesystem. Of course I have to revert to processing the whole exports file again. Right now, I don't have to think about when deletes happen becuase on my existing file server, where user home directories are stored on one of a few filesystems, if I delete a user, I only have to remove a directory. I don't have to do anything with a filesystem. However, when using one ZFS filesystem per user, if I delete a user, I have to delete a filesystem. Now, imagine the user has been logged into various systems, and their home directory is automounted everywhere. Now, I delete it on the fileserver, need to re-export, introduce the delay, and in addition, leave a bunch of machines with stale NFS mounts. yay. :) Jas. From owner-freebsd-fs@FreeBSD.ORG Wed Jan 9 18:36:46 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 3F84983A for ; Wed, 9 Jan 2013 18:36:46 +0000 (UTC) (envelope-from jas@cse.yorku.ca) Received: from bronze.cs.yorku.ca (bronze.cs.yorku.ca [130.63.95.34]) by mx1.freebsd.org (Postfix) with ESMTP id 0D17B3FF for ; Wed, 9 Jan 2013 18:36:46 +0000 (UTC) Received: from [130.63.97.125] (ident=jas) by bronze.cs.yorku.ca with esmtpsa (TLSv1:CAMELLIA256-SHA:256) (Exim 4.76) (envelope-from ) id 1Tt0Vz-0008Vl-Bb; Wed, 09 Jan 2013 13:36:39 -0500 Message-ID: <50EDB8B7.4040708@cse.yorku.ca> Date: Wed, 09 Jan 2013 13:36:39 -0500 From: Jason Keltz User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64; rv:17.0) Gecko/17.0 Thunderbird/17.0 MIME-Version: 1.0 To: Andrey Simonenko Subject: Re: Problems Re-Starting mountd References: <20130103123730.GA19137@pm513-1.comsys.ntu-kpi.kiev.ua> <20130108150508.GA2248@pm513-1.comsys.ntu-kpi.kiev.ua> <50EC39A8.3070108@cse.yorku.ca> <20130109125554.GA1574@pm513-1.comsys.ntu-kpi.kiev.ua> In-Reply-To: <20130109125554.GA1574@pm513-1.comsys.ntu-kpi.kiev.ua> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Spam-Score: -1.0 X-Spam-Level: - X-Spam-Report: Content preview: On 01/09/2013 07:55 AM, Andrey Simonenko wrote: > > When mountd starts it flushes NFS export settings for all file systems, > for each mount point it calls nmount(), even if /etc/exports is empty > it will call nmount() for all currently mounted file systems. > > When mountd loads export settings into NFS server it calls statfs() and > lstat() for each pathname from /etc/exports (number of lstat() calls depends > on number of '/' in each pathname), then it calls nmount() for each > address specification for each pathname from /etc/exports. It uses > nmount() interface for communication with kern/vfs_export.c that is > responsible for NFS export settings for file systems. For the NFSv4 > root directory mountd uses nfssvc() to update its settings, that > calls kern/vfs_export.c:vfs_export(). > > When mountd receives SIGHUP it flushes everything and loads /etc/exports. > This signal is sent by mount(8) when it mounts any file system. > > This delay in above described example came from ZFS kernel code, since > the same configuration for 2000 nullfs(5) file systems takes ~0.20 second > (less than second) by mountd in nmount() system calls. At least on > 9.1-STABLE I do not see that this delay came from mountd code, it came > from nmount() used by mountd. > [...] Content analysis details: (-1.0 points, 5.0 required) pts rule name description ---- ---------------------- -------------------------------------------------- -0.0 SHORTCIRCUIT Not all rules were run, due to a shortcircuited rule -1.0 ALL_TRUSTED Passed through trusted hosts only via SMTP Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 09 Jan 2013 18:36:46 -0000 On 01/09/2013 07:55 AM, Andrey Simonenko wrote: > > When mountd starts it flushes NFS export settings for all file systems, > for each mount point it calls nmount(), even if /etc/exports is empty > it will call nmount() for all currently mounted file systems. > > When mountd loads export settings into NFS server it calls statfs() and > lstat() for each pathname from /etc/exports (number of lstat() calls depends > on number of '/' in each pathname), then it calls nmount() for each > address specification for each pathname from /etc/exports. It uses > nmount() interface for communication with kern/vfs_export.c that is > responsible for NFS export settings for file systems. For the NFSv4 > root directory mountd uses nfssvc() to update its settings, that > calls kern/vfs_export.c:vfs_export(). > > When mountd receives SIGHUP it flushes everything and loads /etc/exports. > This signal is sent by mount(8) when it mounts any file system. > > This delay in above described example came from ZFS kernel code, since > the same configuration for 2000 nullfs(5) file systems takes ~0.20 second > (less than second) by mountd in nmount() system calls. At least on > 9.1-STABLE I do not see that this delay came from mountd code, it came > from nmount() used by mountd. > Hi Andrey, Thanks for your message. If this is the case, I wonder if there's really any change that needed at all. Maybe one of the ZFS filesystem maintainers might have some idea why 2000 nullfs filesystems take ~0.20 second to process with nmount(), yet the same number of ZFS filesystems take so much longer to process. Maybe there's a bug somewhere? Jason. From owner-freebsd-fs@FreeBSD.ORG Wed Jan 9 18:36:48 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 739BE83E for ; Wed, 9 Jan 2013 18:36:48 +0000 (UTC) (envelope-from jas@cse.yorku.ca) Received: from bronze.cs.yorku.ca (bronze.cs.yorku.ca [130.63.95.34]) by mx1.freebsd.org (Postfix) with ESMTP id 38FC7402 for ; Wed, 9 Jan 2013 18:36:48 +0000 (UTC) Received: from [130.63.97.125] (ident=jas) by bronze.cs.yorku.ca with esmtpsa (TLSv1:CAMELLIA256-SHA:256) (Exim 4.76) (envelope-from ) id 1Tt0W1-0008W6-R1; Wed, 09 Jan 2013 13:36:42 -0500 Message-ID: <50EDB8B9.4030507@cse.yorku.ca> Date: Wed, 09 Jan 2013 13:36:41 -0500 From: Jason Keltz User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64; rv:17.0) Gecko/17.0 Thunderbird/17.0 MIME-Version: 1.0 To: Andrey Simonenko Subject: Re: Problems Re-Starting mountd References: <50EC39A8.3070108@cse.yorku.ca> <972459831.1800222.1357690721032.JavaMail.root@erie.cs.uoguelph.ca> <20130109135703.GB1574@pm513-1.comsys.ntu-kpi.kiev.ua> In-Reply-To: <20130109135703.GB1574@pm513-1.comsys.ntu-kpi.kiev.ua> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Spam-Score: -1.0 X-Spam-Level: - X-Spam-Report: Content preview: On 01/09/2013 08:57 AM, Andrey Simonenko wrote: > On Tue, Jan 08, 2013 at 07:18:41PM -0500, Rick Macklem wrote: >> Jason Keltz wrote: >>> On 01/08/2013 10:05 AM, Andrey Simonenko wrote: >>>> I created 2000 file systems on ZFS file system backed by vnode md(4) >>>> device. The /etc/exports file contains 4000 entries like your >>>> example. >>>> >>>> On 9.1-STABLE mountd spends ~70 seconds in flushing current NFS >>>> exports >>>> in the NFS server, parsing data from /etc/exports and loading parsed >>>> data into the NFS server. ~70 seconds is not several minutes. Most >>>> of >>>> time mountd spends in nmount() system call in "zio->io_cv" lock. >>>> >>>> Can you show the output of "truss -fc -o /tmp/output.txt mountd" >>>> (wait wchan "select" state of mountd and terminate it by a signal). >>>> If everything is correct you should see N statfs() calls, N+M >>>> nmount() >>>> calls and something*N lstat() calls, where N is the number of >>>> /etc/exports >>>> lines, M is the number of mounted file systems. Number of lstat() >>>> calls >>>> depends on number of components in pathnames. >>> Andrey, >>> >>> Would that still be an ~70 second period in which new mounts would not >>> be allowed? In the system I'm preparing, I'll have at least 4000 >>> entries >>> in /etc/exports, probably even more, so I know I'll be dealing with >>> the >>> same issue that Tim is dealing with when I get there. However, I don't >>> see how to avoid the issue ... If I want new users to be able to login >>> shortly after their account is created, and each user has a ZFS >>> filesystem as a home directory, then at least at some interval, after >>> adding a user to the system, I need to update the exports file on the >>> file server, and re-export everything. Yet, even a >1 minute delay >>> where users who are logging in won't get their home directory mounted >>> on >>> the system they are logging into - well, that's not so good... >>> accounts >>> can be added all the time and this would create random chaos. Isn't >>> there some way to make [...] Content analysis details: (-1.0 points, 5.0 required) pts rule name description ---- ---------------------- -------------------------------------------------- -0.0 SHORTCIRCUIT Not all rules were run, due to a shortcircuited rule -1.0 ALL_TRUSTED Passed through trusted hosts only via SMTP Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 09 Jan 2013 18:36:48 -0000 On 01/09/2013 08:57 AM, Andrey Simonenko wrote: > On Tue, Jan 08, 2013 at 07:18:41PM -0500, Rick Macklem wrote: >> Jason Keltz wrote: >>> On 01/08/2013 10:05 AM, Andrey Simonenko wrote: >>>> I created 2000 file systems on ZFS file system backed by vnode md(4) >>>> device. The /etc/exports file contains 4000 entries like your >>>> example. >>>> >>>> On 9.1-STABLE mountd spends ~70 seconds in flushing current NFS >>>> exports >>>> in the NFS server, parsing data from /etc/exports and loading parsed >>>> data into the NFS server. ~70 seconds is not several minutes. Most >>>> of >>>> time mountd spends in nmount() system call in "zio->io_cv" lock. >>>> >>>> Can you show the output of "truss -fc -o /tmp/output.txt mountd" >>>> (wait wchan "select" state of mountd and terminate it by a signal). >>>> If everything is correct you should see N statfs() calls, N+M >>>> nmount() >>>> calls and something*N lstat() calls, where N is the number of >>>> /etc/exports >>>> lines, M is the number of mounted file systems. Number of lstat() >>>> calls >>>> depends on number of components in pathnames. >>> Andrey, >>> >>> Would that still be an ~70 second period in which new mounts would not >>> be allowed? In the system I'm preparing, I'll have at least 4000 >>> entries >>> in /etc/exports, probably even more, so I know I'll be dealing with >>> the >>> same issue that Tim is dealing with when I get there. However, I don't >>> see how to avoid the issue ... If I want new users to be able to login >>> shortly after their account is created, and each user has a ZFS >>> filesystem as a home directory, then at least at some interval, after >>> adding a user to the system, I need to update the exports file on the >>> file server, and re-export everything. Yet, even a >1 minute delay >>> where users who are logging in won't get their home directory mounted >>> on >>> the system they are logging into - well, that's not so good... >>> accounts >>> can be added all the time and this would create random chaos. Isn't >>> there some way to make it so that when you re-export everything, the >>> existing exports are still served until the new exports are ready? >> I can't think of how you'd do everything without deleting the old stuff, >> but it would be possible to "add new entries". It has to be done by >> modifying mountd, since it keeps a tree in its address space that it uses >> for mount requests and the tree must be grown. >> >> I don't know about nfse, but you'd have to add this capability to mountd >> and, trust me, it's an ugly old piece of C code, so coming up with a patch >> might not be that easy. However, it might not be that bad, since the only difference >> from doing the full reload as it stands now would be to "not delete >> the tree that already exists in the utility and don't do the DELEXPORTS >> syscall" I think, so the old ones don't go away. There could be a file called >> something like /etc/exports.new for the new entries and a different >> signal (SIGUSR1??) to load these. (Then you'd add the new entries to >> /etc/exports as well for the next time mountd restarts, but wouldn't >> send it a SIGHUP.) > This delay in above described example came from ZFS kernel code, since > the same configuration for 2000 nullfs(5) file systems takes ~0.20 second > (less than second) by mountd in nmount() system calls. At least on > 9.1-STABLE I do not see that this delay came from mountd code, it came > from nmount() used by mountd. > > Since nfse was mentioned in this thread, I can explain how this is > implemented in nfse. > > The nfse utility and its NFSE API support dynamic commands, in fact all > settings are updated using the same API. This API allows to flush all > configuration, flush/clear file system configuration, add/update/delete > configuration for address specification. All commands can be grouped, > so one nfssvc() call can be called with several commands. Not all commands > have to be grouped together, instead API uses transaction model and while > some transaction is open it is possible to use it for passing commands > into NFS server. When all commands are ready, transaction is committed. > Each transaction has timeout and it is possible to have several transaction > in one or in several processes. > > ... > I checked nfse on 9.1-STABLE with above given example. It takes ~0.10 > second by nfse to configure 2000 ZFS file systems, this time mostly > is spent in nfssvc() calls (number of calls depends on how many commands > are grouped for one nfssvc() call). > > I did not check delay in NFSE code for NFS clients during updating of > NFS export settings, but it will be less than time used by nfse, since > NFSE code in the NFS server uses deferred data releasing and it require > to acquire small number of locks. Two locks are acquire while all NFS > export settings are updated, one lock is acquire for transaction and one > lock is acquire for each passed security flavor list and credentials > specification. Each security flavor list and credential specification or > any specification is passed in own command, so if there are ~2000 file > systems exported to the same address specification, then corresponding > security flavor list and credential specification are passed only one time. Thanks for all of the helpful information on nfse. In all fairness, I didn't know what nfse was initially, so I read about it here: http://sourceforge.net/projects/nfse/ (Given the maintainer, Andrey, I can see why you're such an expert in nfse!) :) Will it work under 9.1? or is it still development? Since nfse doesn't use nmount() call (is this correct?), I get the impression that whether it processes the entire export configuration (I realize custom to nfse) or not, I assume that we wouldn't see any delay when using ZFS? Would the solution then be to use nfse or do I still need to wait for it to be stable when maybe 10.0 is released? Maybe without thinking this through too much, since nfse is newer, it would be interesting if there was an option for a "living" "exports" file. With a standard exports file, if you update it live, the changes aren't reprocessed until issuing a command, and then everything is unexported and reexported. What if you could change the exports file on the fly, nfse sees that the file has changed, compares what is in the file to the current exported state, and then acts accordingly to sync the two states by adding or deleting exports... of course, if processing the whole file takes such a short time, maybe this doesn't make any sense.... (and of course if you run out of space and truncate the file accidentally..whoops..) Jason. From owner-freebsd-fs@FreeBSD.ORG Wed Jan 9 21:47:37 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 4ABE3411 for ; Wed, 9 Jan 2013 21:47:37 +0000 (UTC) (envelope-from marck@rinet.ru) Received: from woozle.rinet.ru (woozle.rinet.ru [195.54.192.68]) by mx1.freebsd.org (Postfix) with ESMTP id CCFE6FF0 for ; Wed, 9 Jan 2013 21:47:36 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by woozle.rinet.ru (8.14.5/8.14.5) with ESMTP id r09LlTHL000789; Thu, 10 Jan 2013 01:47:29 +0400 (MSK) (envelope-from marck@rinet.ru) Date: Thu, 10 Jan 2013 01:47:29 +0400 (MSK) From: Dmitry Morozovsky To: Konstantin Belousov Subject: Re: zfs -> ufs rsync: livelock in wdrain state In-Reply-To: Message-ID: References: <20130108001231.GB82219@kib.kiev.ua> User-Agent: Alpine 2.00 (BSF 1167 2008-08-23) X-NCC-RegID: ru.rinet X-OpenPGP-Key-ID: 6B691B03 MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.2.7 (woozle.rinet.ru [0.0.0.0]); Thu, 10 Jan 2013 01:47:29 +0400 (MSK) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 09 Jan 2013 21:47:37 -0000 On Tue, 8 Jan 2013, Dmitry Morozovsky wrote: > > Are there any kernel messages about the disk system ? > > > > The wdrain means that the amount of the dirty buffers accumulated exceeds > > the allowed maximum. The transient 'wdrain' state is normal on a machine > > doing lot of writes to a filesystem using buffer cache, say UFS. Failure > > to clean the dirty buffers is usually related to the disk i/o stalling. > > > > It cannot be denied that a bug could cause stuck 'wdrain' state, but > > in the last five or so years all the cases I investigated were due to > > disks. > > Yes, it seems so: > > root@moose:~# camcontrol devlist > load: 0.03 cmd: camcontrol 49735 [devfs] 2.68r 0.00u 0.00s 0% 820k > > and then machine is in well known "hardly alive" state: TCP connects > established, process switching does not go. > > Will investigate the hardware, thank you. It seems flaky eSATA cable was the source of drive sometimes get lost. Sorry for the noise. -- Sincerely, D.Marck [DM5020, MCK-RIPE, DM3-RIPN] [ FreeBSD committer: marck@FreeBSD.org ] ------------------------------------------------------------------------ *** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- marck@rinet.ru *** ------------------------------------------------------------------------ From owner-freebsd-fs@FreeBSD.ORG Thu Jan 10 01:15:10 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 19A46D1E for ; Thu, 10 Jan 2013 01:15:10 +0000 (UTC) (envelope-from artemb@gmail.com) Received: from mail-vc0-f175.google.com (mail-vc0-f175.google.com [209.85.220.175]) by mx1.freebsd.org (Postfix) with ESMTP id BC7C1BEC for ; Thu, 10 Jan 2013 01:15:09 +0000 (UTC) Received: by mail-vc0-f175.google.com with SMTP id fy7so11226vcb.34 for ; Wed, 09 Jan 2013 17:15:03 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:sender:in-reply-to:references:date :x-google-sender-auth:message-id:subject:from:to:cc:content-type; bh=70AnbqTwsr2q9Ii5UgMCwEpb+lVF3L/hwszMhJ7rhEo=; b=FjCacVSvwSDK/meqWm9UAr9OWWc0sr2Bc1IfToQOsOQnZh91nCfIcxbJA/uk4mTd1j Y5CCgvZci5ozxvIh9CASh+TwhzS7F3JIc0lPOVxnrRWS6E2MLxJyFwODe141n5X+Yn7N hco3Eb68WucwP2UYYqNxm4XK3+FaPOSNKZjTrS5EZQfHlOdJZs+VpG2njxYOJoB1rTpr fyIz3w8TgF8aPKTcsxE6orvYLALntrdHjQKlXK7/2UStqEe1Es9c4E8dA9CXEm030N5I yEUfNe9nS2nwYo18mUH1mkkHKf5QgwTFe7av2idPWfakKdzeYnYvSoRMhRJI4Mz/Rr5H jwYA== MIME-Version: 1.0 Received: by 10.58.181.42 with SMTP id dt10mr8114476vec.34.1357780503255; Wed, 09 Jan 2013 17:15:03 -0800 (PST) Sender: artemb@gmail.com Received: by 10.220.122.196 with HTTP; Wed, 9 Jan 2013 17:15:03 -0800 (PST) In-Reply-To: <20130109162613.GA34276@mid.pc5.i.0x5.de> References: <20130108174225.GA17260@mid.pc5.i.0x5.de> <20130109162613.GA34276@mid.pc5.i.0x5.de> Date: Wed, 9 Jan 2013 17:15:03 -0800 X-Google-Sender-Auth: nJi9-6qAoP3R4o4DWalhMlF9Doo Message-ID: Subject: Re: slowdown of zfs (tx->tx) From: Artem Belevich To: Nicolas Rachinsky Content-Type: text/plain; charset=ISO-8859-1 Cc: freebsd-fs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 10 Jan 2013 01:15:10 -0000 On Wed, Jan 9, 2013 at 8:26 AM, Nicolas Rachinsky wrote: > * Artem Belevich [2013-01-08 12:47 -0800]: >> On Tue, Jan 8, 2013 at 9:42 AM, Nicolas Rachinsky >> wrote: >> > NAME STATE READ WRITE CKSUM >> > pool1 DEGRADED 0 0 0 >> > raidz2-0 DEGRADED 0 0 0 >> > ada5 ONLINE 0 0 0 >> > ada8 ONLINE 0 0 0 >> > ada2 ONLINE 0 0 0 >> > ada3 ONLINE 0 0 0 >> > 11846390416703086268 UNAVAIL 0 0 0 was /dev/dsk/ada1 >> > ada6 ONLINE 0 0 0 >> > ada0 ONLINE 0 0 1 >> > ada7 ONLINE 0 0 0 >> > ada4 ONLINE 0 0 3 >> >> You seem to have some checksum errors which does suggest hardware troubles. > > I somehow missed these. Is there any way to learn when these checksum > errors happen? Not on FreeBSD (yet) as far as I can tell. Not explicitly, anyways. Check /var/log/messages for any indications of SATA errors. There's a good chance that there was a timeout at some point. >> For starters, check smart info for all drives and see if they have any >> relocated sectors. > > There are some disks with relocated sectors, but for both ada0 and > ada4 Reallocated_Sector_Ct is 0. Are there any UDMA errors? Those would suggest trouble with cabling. >> Use gstat during your workload to see if any of the drives takes much >> longer than others to handle its job. > > There is one disk sticking out a bit. In a raid-z pool number of transactions/second is determined by the slowest disk. Check ms/w column. Look for numbers substantially higher than typical seek rate (10..20ms is OK, 100 is not). > >> > There is almost no disk activity during this time. >> >> What kind of disk activity *is* there? > > What would be interesting? Drives 'sticking out' being busy longer than their peers in the pool. Excessive ms/r or ms/w in gstat. Unexpected reads or writes. --Artem From owner-freebsd-fs@FreeBSD.ORG Thu Jan 10 01:17:09 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 0B252D9F for ; Thu, 10 Jan 2013 01:17:09 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) Received: from esa-jnhn.mail.uoguelph.ca (esa-jnhn.mail.uoguelph.ca [131.104.91.44]) by mx1.freebsd.org (Postfix) with ESMTP id C5BA9BFF for ; Thu, 10 Jan 2013 01:17:08 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: AqAEACcW7lCDaFvO/2dsb2JhbABEhjmzSoN2c4IeAQEEASNWBRYOCgICDRkCWQaIJgamYo83gSKLPoMngRMDiGKNKpBJgxKBSD4 X-IronPort-AV: E=Sophos;i="4.84,440,1355115600"; d="scan'208";a="11214108" Received: from erie.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.206]) by esa-jnhn.mail.uoguelph.ca with ESMTP; 09 Jan 2013 20:17:01 -0500 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id 99A1FB4032; Wed, 9 Jan 2013 20:17:01 -0500 (EST) Date: Wed, 9 Jan 2013 20:17:01 -0500 (EST) From: Rick Macklem To: Jason Keltz Message-ID: <894889105.1842818.1357780621511.JavaMail.root@erie.cs.uoguelph.ca> In-Reply-To: <50EDB8B3.4030903@cse.yorku.ca> Subject: Re: Problems Re-Starting mountd MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [172.17.91.203] X-Mailer: Zimbra 6.0.10_GA_2692 (ZimbraWebClient - FF3.0 (Win)/6.0.10_GA_2692) Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 10 Jan 2013 01:17:09 -0000 Jason Keltz wrote: > On 01/08/2013 08:19 PM, Rick Macklem wrote: > > You could test the attached patch, which I think makes mountd > > load new export entries from a file called /etc/exports.new > > without deleting the exports already in place, when sent a > > USR1 signal. > > > > After applying the patch to mountd.c, rebuilding and replacing > > it, you would: > > - put new entries for file systems not yet exported in both > > /etc/exports and /etc/exports.new > > # kill -USR1 > > - delete /etc/exports.new > > Don't send HUP to mountd for this case. > > > > Very lightly tested, rick > > ps: Sometimes it's faster to just code this stuff instead of > > discussing if/how it can be done;-) > > pss: This patch isn't ready for head. If it is useful, it might > > make sense to add a new mountd option that specifies the > > name of the file (/etc/exports.new or ...), so that this > > capability isn't enabled by default. > Hi Rick, > > Thanks very much for looking into this. > > It's a pity (at least with current mountd) that there isn't maybe a > more > generic option for adding or removing an export on the fly. This way, > a > basic shell script could look at the original exports and the new > exports, and then come up with the options to mountd to tell it what > to do. > mountd.c was written in the late 1980s and has been hacked on by various people over the years. Imho, the code is now very difficult to modify for many cases (it happened this patch was easy;-). I, for one, am not volunteering to make major changes to it. It sounds like you should look seriously at using nfse instead. > The only "problem" I see with the patch per se is that while this > would > enable adding new exports without the additional delay, what happens > when I delete a user and now I need to "unexport" the filesystem. Of > course I have to revert to processing the whole exports file again. > Right now, I don't have to think about when deletes happen becuase on > my > existing file server, where user home directories are stored on one of > a > few filesystems, if I delete a user, I only have to remove a > directory. > I don't have to do anything with a filesystem. However, when using one > ZFS filesystem per user, if I delete a user, I have to delete a > filesystem. Now, imagine the user has been logged into various > systems, > and their home directory is automounted everywhere. Now, I delete it > on > the fileserver, need to re-export, introduce the delay, and in > addition, > leave a bunch of machines with stale NFS mounts. yay. :) > What can I say. mountd was designed for what your current server does (export a few fixed file systems) and not what you are now trying to do (one fs/user with users being added/deleted all the time). My only suggestion would be to leave a deleted user's file system around until the end of semester or some other convenient deletion time. Good luck with it, rick > Jas. From owner-freebsd-fs@FreeBSD.ORG Thu Jan 10 01:37:10 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 44482D2 for ; Thu, 10 Jan 2013 01:37:10 +0000 (UTC) (envelope-from artemb@gmail.com) Received: from mail-vc0-f174.google.com (mail-vc0-f174.google.com [209.85.220.174]) by mx1.freebsd.org (Postfix) with ESMTP id 023F6CE9 for ; Thu, 10 Jan 2013 01:37:09 +0000 (UTC) Received: by mail-vc0-f174.google.com with SMTP id d16so27829vcd.5 for ; Wed, 09 Jan 2013 17:37:09 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:sender:in-reply-to:references:date :x-google-sender-auth:message-id:subject:from:to:cc:content-type; bh=TqoV+GK8uBvrWacvhOMR9xqTSDvlcZyHwBehRWIuzH4=; b=YAxXPg2s6+7zPXTUY4Mhs2U7uBCtXe1ww3SmrD7SdKerykk3b0/ew4psUcNYrcf337 muJ1jBdlOEWRRwYWitDhjUtdTSrRRau3MRp9XF0HCSq/0RDsSIWjHp5OzPY8+wQaCjhw HAsdLwScQRg/qEPB/XM1u03naqxKXisNK8QJBaahDdB/ndqx3+VGgrQJOq5srW4ZYEvP Jlq9rYQSR01aJ+xDpZw3SXM2ZB9Yy3u8o7srZPhqgckB2g6Cr8rU5dnGVshq1x55Cs8K EjoKn0lL2mvZ+xZl6aiHzwVigSK23tXuonljF5k8s/wPphvFD78sE59VecUyMBwDkQlE vZrg== MIME-Version: 1.0 Received: by 10.52.74.38 with SMTP id q6mr15982156vdv.17.1357781829108; Wed, 09 Jan 2013 17:37:09 -0800 (PST) Sender: artemb@gmail.com Received: by 10.220.122.196 with HTTP; Wed, 9 Jan 2013 17:37:09 -0800 (PST) In-Reply-To: <1357741879.56011.YahooMailClassic@web190806.mail.sg3.yahoo.com> References: <1357741879.56011.YahooMailClassic@web190806.mail.sg3.yahoo.com> Date: Wed, 9 Jan 2013 17:37:09 -0800 X-Google-Sender-Auth: XV-GsbMYsralrKu-S6O2T86_TJY Message-ID: Subject: Re: ZFS sub-optimal performance with default setting From: Artem Belevich To: Patrick Dung Content-Type: text/plain; charset=ISO-8859-1 Cc: freebsd-fs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 10 Jan 2013 01:37:10 -0000 On Wed, Jan 9, 2013 at 6:31 AM, Patrick Dung wrote: > Hi freebsd-fs! > > I have my the original question in: > http://archives.postgresql.org/pgsql-performance/2013-01/msg00044.php > But later it was found out the bottleneck seems to be the ZFS with out a fast ZIL. > Please give some advise, thanks. For database storage on ZFS it may be necessary to change ZFS record size to match database page size. At least that's what one of the things Oracle recommends for oracle database: http://www.oracle.com/technetwork/server-storage/solaris10/config-solaris-zfs-wp-167894.pdf You may also check if disabling prefetching (via vfs.zfs.prefetch_disable=1 tunable in loader.conf) helps your workload. --Artem From owner-freebsd-fs@FreeBSD.ORG Thu Jan 10 01:47:40 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 1674539C; Thu, 10 Jan 2013 01:47:40 +0000 (UTC) (envelope-from artemb@gmail.com) Received: from mail-vb0-f45.google.com (mail-vb0-f45.google.com [209.85.212.45]) by mx1.freebsd.org (Postfix) with ESMTP id ABE6AD49; Thu, 10 Jan 2013 01:47:39 +0000 (UTC) Received: by mail-vb0-f45.google.com with SMTP id p1so28314vbi.18 for ; Wed, 09 Jan 2013 17:47:32 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:sender:in-reply-to:references:date :x-google-sender-auth:message-id:subject:from:to:cc:content-type; bh=ATPvUjqCJ1cDQE0E4XfePRmZ5MmMSkDFPHjnywAyhNc=; b=DT0v/Y48MdCrJwE17JQ2DwrLaVNRpBonepSTSEBNJUOjwEMsFLAwgQhI9eTzxpl6/a cZ2/O2LifWMLMnIEtBwY6RlZWs+PQ4NyOVUl5gXYrj80RpyacpSfaTelyaHfQn+lJ3Nq Ef1xMqbJ1xsyGywK2dGbc59hr35KXYd972SHj4Fa9XDQYvWLX0OcgclwQ6ROWf31En2E xRBH7hLQ5AWSRLdFPawu3isycTxLZKd/u0epUEhJGNXuceDC/g85V9z5kq6u0erzRVbc qRB1jYzLKsUD/s4/Dy5+3NytqIhITQtdeXc5B2gU2/Y04zrJF6ww09qS2nLa4k2GigFs 3Ekg== MIME-Version: 1.0 Received: by 10.52.72.66 with SMTP id b2mr78758672vdv.31.1357782452819; Wed, 09 Jan 2013 17:47:32 -0800 (PST) Sender: artemb@gmail.com Received: by 10.220.122.196 with HTTP; Wed, 9 Jan 2013 17:47:32 -0800 (PST) In-Reply-To: References: <20130109023327.GA1888@FreeBSD.org> Date: Wed, 9 Jan 2013 17:47:32 -0800 X-Google-Sender-Auth: 8DF1KQCSj-QVK3a3S7disaFel_Q Message-ID: Subject: Re: rc.d script for memory based zfs intent log From: Artem Belevich To: Borja Marcos Content-Type: text/plain; charset=ISO-8859-1 Cc: FreeBSD Filesystems , John X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 10 Jan 2013 01:47:40 -0000 On Wed, Jan 9, 2013 at 7:15 AM, Borja Marcos wrote: > > In case of a crash, seems to be riskier than using sync=disabled on the datasets you need. What is the impact on the data integrity of a suddenly disappearing ZIL? Losing ZIL used to be fatal for the pool. I think in recent ZFS versions (v28?) you will only lose transactions that were not committed to the pool yet, but don't quote me on that. From owner-freebsd-fs@FreeBSD.ORG Thu Jan 10 02:11:57 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 271E3572; Thu, 10 Jan 2013 02:11:57 +0000 (UTC) (envelope-from spork@bway.net) Received: from smtp1.bway.net (smtp1.bway.net [216.220.96.27]) by mx1.freebsd.org (Postfix) with ESMTP id A9409DB3; Thu, 10 Jan 2013 02:11:56 +0000 (UTC) Received: from frankentosh.sporklab.com (foon.sporktines.com [96.57.144.66]) (using TLSv1 with cipher AES128-SHA (128/128 bits)) (No client certificate requested) (Authenticated sender: spork@bway.net) by smtp1.bway.net (Postfix) with ESMTPSA id 1FD8295868; Wed, 9 Jan 2013 21:03:09 -0500 (EST) Subject: Re: ZFS sub-optimal performance with default setting Mime-Version: 1.0 (Apple Message framework v1085) Content-Type: text/plain; charset=us-ascii From: Charles Sprickman In-Reply-To: Date: Wed, 9 Jan 2013 21:03:08 -0500 Content-Transfer-Encoding: quoted-printable Message-Id: <76DE1383-BAAA-4EFD-ABA6-A9328D79D5B3@bway.net> References: <1357741879.56011.YahooMailClassic@web190806.mail.sg3.yahoo.com> To: Artem Belevich X-Mailer: Apple Mail (2.1085) Cc: Patrick Dung , freebsd-fs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 10 Jan 2013 02:11:57 -0000 On Jan 9, 2013, at 8:37 PM, Artem Belevich wrote: > On Wed, Jan 9, 2013 at 6:31 AM, Patrick Dung = wrote: >> Hi freebsd-fs! >>=20 >> I have my the original question in: >> http://archives.postgresql.org/pgsql-performance/2013-01/msg00044.php >> But later it was found out the bottleneck seems to be the ZFS with = out a fast ZIL. >> Please give some advise, thanks. >=20 > For database storage on ZFS it may be necessary to change ZFS record > size to match database page size. At least that's what one of the > things Oracle recommends for oracle database: > = http://www.oracle.com/technetwork/server-storage/solaris10/config-solaris-= zfs-wp-167894.pdf >=20 > You may also check if disabling prefetching (via > vfs.zfs.prefetch_disable=3D1 tunable in loader.conf) helps your > workload. PostgreSQL has a ton of tunables, and the performance list is a good = place to start. The archives there are full of information on zfs. A few things off the top of my head: -set recordsize to 8k on the pg dataset (this has to be done before you = write data) -set "full_page_writes =3D off" (safe on zfs, not necessarily so on ufs) -leave RAM available for PG by limiting the max ARC size in loader.conf, = tell PG how much it has left after ARC plus some slop with = "effective_cache_size" (for example, if you have 64GB of RAM, maybe = limit ARC to 32GB, then set effective_cache_size to 30GB or so) -turn off atime updates As far as general PG setup, pgtune will put you in a better place than = the default config: https://github.com/gregs1104/pgtune Actually the stock config file is really terrible, if you haven't = touched it, you're almost guaranteed to have lousy performance. Lastly, this book is amazing - there's lots of general information = that's quite useful outside of PG and general db tuning: http://www.2ndquadrant.com/en/postgresql-90-high-performance/ If you have $200 or so laying around, slap in two Intel 320 SSDs (these = survive power-loss without corruption), and make a mirrored ZIL for zfs. = That will up your TPS on write-heavy loads into at least the 20K realm. Charles >=20 > --Artem > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Thu Jan 10 10:19:09 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 6DA2DF80; Thu, 10 Jan 2013 10:19:09 +0000 (UTC) (envelope-from joh.hendriks@gmail.com) Received: from mail-la0-f42.google.com (mail-la0-f42.google.com [209.85.215.42]) by mx1.freebsd.org (Postfix) with ESMTP id AAE8F1BA; Thu, 10 Jan 2013 10:19:08 +0000 (UTC) Received: by mail-la0-f42.google.com with SMTP id fe20so366183lab.29 for ; Thu, 10 Jan 2013 02:19:02 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=x-received:message-id:date:from:user-agent:mime-version:to:cc :subject:references:in-reply-to:content-type :content-transfer-encoding; bh=gmz+r/uBENz93P46MFoCZCaVj3lowUw2GwX6/B4x2xo=; b=P4nmLRShcGywudxHu/5ECOOS/7FIckeqH+7i6wmt1tmKrFbKIh6SpAZofnGdNiZO1a HQmnaICLpsVxXPwt7I8TnULXpw81H6HTG1Cj/yGaxOx59niJhNIp2I+I2LFGtyQDYzhs /mVl79C6rZhkugDxcouvSRsXbCN9tfYGH38uUvBZAwRE1uBz0p11nwMijP9DBS1ObiY2 NdVsWjoQnX75EtEjbg2gHl2JmSt4EkRuzsVzGvQrtunpHDtYrFW5VxQ1gLG2VrQ0aCX3 x+ZOv45jgSp2xV8ey3Ap+6250sw80oT7KrabIR9Faqwt7Vvt5Pzq0+CabrAW5OFWopIU yhog== X-Received: by 10.152.145.37 with SMTP id sr5mr11252559lab.33.1357813141927; Thu, 10 Jan 2013 02:19:01 -0800 (PST) Received: from [192.168.1.129] (schavemaker.nl. [213.84.84.186]) by mx.google.com with ESMTPS id ml1sm443992lab.15.2013.01.10.02.18.59 (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Thu, 10 Jan 2013 02:19:00 -0800 (PST) Message-ID: <50EE9592.8050903@gmail.com> Date: Thu, 10 Jan 2013 11:18:58 +0100 From: Johan Hendriks User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:17.0) Gecko/20130107 Thunderbird/17.0.2 MIME-Version: 1.0 To: Artem Belevich Subject: Re: rc.d script for memory based zfs intent log References: <20130109023327.GA1888@FreeBSD.org> In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 10 Jan 2013 10:19:09 -0000 Artem Belevich schreef: > On Wed, Jan 9, 2013 at 7:15 AM, Borja Marcos wrote: >> In case of a crash, seems to be riskier than using sync=disabled on the datasets you need. What is the impact on the data integrity of a suddenly disappearing ZIL? > Losing ZIL used to be fatal for the pool. > I think in recent ZFS versions (v28?) you will only lose transactions > that were not committed to the pool yet, but don't quote me on that. > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" Loosing the ZIL will loose data written to the ZIL not jet flushed to the pool itself. So most likely you will have corrupted data. So for a ZIL use a mirror pair! You can add and remove a ZIL from the pool without problem from version 28 and above. Same thing goes if you have sync=disabled. Data acknowledge by the server but not flushed to the pool will end up in corrupt data. Disabeling sync is something you do not want to do. If you need, use the ZIL. regards Johan Hendriks Neuteboom Automatisering From owner-freebsd-fs@FreeBSD.ORG Thu Jan 10 11:09:23 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id A2E448E8 for ; Thu, 10 Jan 2013 11:09:23 +0000 (UTC) (envelope-from simon@comsys.ntu-kpi.kiev.ua) Received: from comsys.kpi.ua (comsys.kpi.ua [77.47.192.42]) by mx1.freebsd.org (Postfix) with ESMTP id 350103FC for ; Thu, 10 Jan 2013 11:09:22 +0000 (UTC) Received: from pm513-1.comsys.kpi.ua ([10.18.52.101] helo=pm513-1.comsys.ntu-kpi.kiev.ua) by comsys.kpi.ua with esmtpsa (TLSv1:AES256-SHA:256) (Exim 4.63) (envelope-from ) id 1TtG0Y-0006hG-C8; Thu, 10 Jan 2013 13:09:14 +0200 Received: by pm513-1.comsys.ntu-kpi.kiev.ua (Postfix, from userid 1001) id 8678A1E08A; Thu, 10 Jan 2013 13:09:03 +0200 (EET) Date: Thu, 10 Jan 2013 13:09:00 +0200 From: Andrey Simonenko To: Jason Keltz Subject: Re: Problems Re-Starting mountd Message-ID: <20130110110900.GA1419@pm513-1.comsys.ntu-kpi.kiev.ua> References: <50EC39A8.3070108@cse.yorku.ca> <972459831.1800222.1357690721032.JavaMail.root@erie.cs.uoguelph.ca> <20130109135703.GB1574@pm513-1.comsys.ntu-kpi.kiev.ua> <50EDB8B9.4030507@cse.yorku.ca> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <50EDB8B9.4030507@cse.yorku.ca> User-Agent: Mutt/1.5.21 (2010-09-15) X-Authenticated-User: simon@comsys.ntu-kpi.kiev.ua X-Authenticator: plain X-Sender-Verify: SUCCEEDED (sender exists & accepts mail) X-Exim-Version: 4.63 (build at 28-Apr-2011 07:11:12) X-Date: 2013-01-10 13:09:14 X-Connected-IP: 10.18.52.101:39199 X-Message-Linecount: 137 X-Body-Linecount: 118 X-Message-Size: 6909 X-Body-Size: 6087 Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 10 Jan 2013 11:09:23 -0000 On Wed, Jan 09, 2013 at 01:36:41PM -0500, Jason Keltz wrote: > On 01/09/2013 08:57 AM, Andrey Simonenko wrote: > > > > I did not check delay in NFSE code for NFS clients during updating of > > NFS export settings, but it will be less than time used by nfse, since > > NFSE code in the NFS server uses deferred data releasing and it require > > to acquire small number of locks. Two locks are acquire while all NFS > > export settings are updated, one lock is acquire for transaction and one > > lock is acquire for each passed security flavor list and credentials > > specification. Each security flavor list and credential specification or > > any specification is passed in own command, so if there are ~2000 file > > systems exported to the same address specification, then corresponding > > security flavor list and credential specification are passed only one time. > Thanks for all of the helpful information on nfse. > In all fairness, I didn't know what nfse was initially, so I read about > it here: > http://sourceforge.net/projects/nfse/ > (Given the maintainer, Andrey, I can see why you're such an expert in > nfse!) :) > > Will it work under 9.1? or is it still development? I try to keep changes for etc/, cddl/ and sys/ in sync with CURRENT and check everything on CURRENT. Backports of changes for FreeBSD 8.2 (RELEASE) and 9.1 (recent STABLE) source code are here: http://nfse.sourceforge.net/8.2/ http://nfse.sourceforge.net/9.1/ Most recent backport of NFSE for 8.2 and 9.1 should work with most recent nfse utility for CURRENT. NFSE code can be considered as development since NFSE API can be changed, its configuration file format nfse.conf(5) can be changed, I do not have any information whether somebody verified correctness of sys/fs/nfsserver/nfs_export.c code and verified the idea of NFS exports support in NFS server (not in VFS as it is implemented now, as a side effect ZFS snapshots are not automatically exported by NFSE). > Since nfse doesn't use nmount() call (is this correct?), I get the > impression that whether it processes the entire export configuration (I > realize custom to nfse) or not, I assume that we wouldn't see any delay > when using ZFS? If some implementation build NFS export settings, then change a single pointer to this data and then use some kind of garbage collector to free previous settings, then there will be no delay. Both mountd and nfse introduce some delay, since NFS export settings are updated and exclusive lock is required to protect these settings. Amount of this delay is different because different approaches are used, amount of delay from nmount() came from ZFS code as I understand. Also NFS export settings should be updated atomically, under atomic update I understand atomic update of all NFS export settings for all pathnames. The nfse utility work with own nfse.conf(5) configuration and also it understands exports(5) format. More information about the compatibility mode can be read on http://nfse.sourceforge.net/ site. Dynamic updates work only if nfse.conf(5) format is used. By the way there are changes for zfs in cddl.diff that allow to use dynamic updates for 'zfs set sharenfs' commands. > Would the solution then be to use nfse or do I still > need to wait for it to be stable when maybe 10.0 is released? I cannot answer this question. > > Maybe without thinking this through too much, since nfse is newer, it > would be interesting if there was an option for a "living" "exports" > file. With a standard exports file, if you update it live, the changes > aren't reprocessed until issuing a command, and then everything is > unexported and reexported. What if you could change the exports file on > the fly, nfse sees that the file has changed, compares what is in the > file to the current exported state, and then acts accordingly to sync > the two states by adding or deleting exports... of course, if processing > the whole file takes such a short time, maybe this doesn't make any > sense.... (and of course if you run out of space and truncate the file > accidentally..whoops..) I considered such approach and it will not work because of logic of dynamic changes. Dynamic changes like "nfse -c 'add /fs ...'" change NFS export settings on-the-fly, they are not change settings in any file and are not saved on nfse exit (something like changing firewall settings from the command line). When old and new configuration diff is found, it is unclear how it can be applied to the current configuration, since this diff can be denied because of previous applied dynamic updates. Also if a user keeps configuration in several files, then it is impossible to get diffs from all changed files at one time (these diffs will be sequential) and this can introduce security problem, since partial update can allow export to not allowed hosts. I think that automatic support of directories with configuration files will simplify configuration management and will allow controlable and fast updates of NFS export settings. More advanced update logic is possible with nfse commands as well. If somebody has interest in nfse, then it can be run on 8.x, 9.x and 10.x without any modification to existent FreeBSD sources and on systems with working mountd. Just apply below given patch to force it to not call nfssvc(NFSSVC_EXPORT), use TESTING-BUILD from NFSE archive to build it and use the following switches to not register a server for the MOUNT protocol: # nfse -dl -m no It will report different errors, because nfssvc(NFSSVC_EXPORT) was not really called, but it will allow to test it and its logic (I suggest to read nfse(8) and nfse.conf(5) if a new configuration is used and do not forget to run "nfse -et" or "nfse -t" to check configurations or commands). --- nfse.c.orig 2013-01-09 12:26:25.000000000 +0200 +++ nfse.c 2013-01-10 12:45:49.000000000 +0200 @@ -269,6 +269,7 @@ port_resv(in_port_t port) static int nfsserver_call(struct nfse_cmds_hdr *hdr) { +return (0); if (nfssvc(NFSSVC_EXPORT, hdr) == 0) return (EXPCMD_RET_OK); From owner-freebsd-fs@FreeBSD.ORG Thu Jan 10 16:39:10 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 02FC36B5 for ; Thu, 10 Jan 2013 16:39:10 +0000 (UTC) (envelope-from patrick_dkt@yahoo.com.hk) Received: from nm31.bullet.mail.sg3.yahoo.com (nm31.bullet.mail.sg3.yahoo.com [106.10.151.26]) by mx1.freebsd.org (Postfix) with ESMTP id 4B56E884 for ; Thu, 10 Jan 2013 16:39:08 +0000 (UTC) Received: from [106.10.166.61] by nm31.bullet.mail.sg3.yahoo.com with NNFMP; 10 Jan 2013 16:39:07 -0000 Received: from [106.10.151.234] by tm18.bullet.mail.sg3.yahoo.com with NNFMP; 10 Jan 2013 16:39:07 -0000 Received: from [127.0.0.1] by omp1018.mail.sg3.yahoo.com with NNFMP; 10 Jan 2013 16:39:07 -0000 X-Yahoo-Newman-Property: ymail-3 X-Yahoo-Newman-Id: 360600.65498.bm@omp1018.mail.sg3.yahoo.com Received: (qmail 90375 invoked by uid 60001); 10 Jan 2013 16:39:07 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com.hk; s=s1024; t=1357835947; bh=E0ialZj6ymbqngvJSaPH1ZS8SL3zYxT1Y8JrCXX1yG0=; h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:Message-ID:Date:From:Subject:To:Cc:In-Reply-To:MIME-Version:Content-Type; b=CAKWb9TRa4oWkqYx9TNWBMI/H/flfj0esh5c6HaXI1Z+Kk2ylAnsO9HUgAQY8tJw/EVLLNbYsvA5J4XOlNti+wEvlz1HeS0VOF5st+NpoYf0WCETYgVovxwEmU7HwuNykLM/LlyuZHbthVgMA4pP5mJn07EiUUJuog4ZuYUMUR4= DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com.hk; h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:Message-ID:Date:From:Subject:To:Cc:In-Reply-To:MIME-Version:Content-Type; b=PS6sDnUS8mxcysE5az9uKqu/qCHVYtYO09HhSW/3tzX3vVqFM0c5R/+28PVGVpv+bWDcivHlHtWYJDaozWK01WXsnBg9EdeqCoGUzP1cavXCx0xfjNv1AtRkSBSVqQij9d0Cjb82q8xlizaZVBa+LXYdimeZE/Om5dZUkrFQpVo=; X-YMail-OSG: 5OxI6q0VM1m_nDuxP5Me3r1fr6eefs.XXVXNOhQQNLZnGNF UJYhHnxSlsYb2On9TZF.ynPshyvSrd6Hd9jVp2vsDQ3apBhLbgG_b08GDdkJ AKFlB8Ia5HKopA2yoSApqqZHMwyYE1dumKBAv_5BKSvbCD2zCJPY20NMGYvq IGR_L0AA6fbLD_aFI6qB63JiYTzXh9jDtl5CM3rpqKz3YKZ0mGwNgj0krYNi RMXry1t43D74Iut_g4habDSb6inbAU6bQebsrEt4Wx4AcuKull4A3TQP5m44 .oSeH.73gUqXbekvcNi.baamCwCsH0b8rxzCcgwfdymGacpkLG68uiuXrTyp rRgWNcfOhvp7j3ox1hm25H73IhOdkqplFC0vnFR2f871N4xlYDbsZn1Kzgzl FnhpjgnU9CiA2sIt2MJljB2ntve5HWtHXNB1hdztStExJRS9jbZYuLySxLYP Eqk3mSYjFFz1cxYH.LZOwLnGSPVD6.bYruTpcNm88ug655_eZHfpswhM0Rc1 539Vqn00KYscia2j07gxb6CbSFvjJS_9q8kyAfxlg2YTG0IuyPrRz9l4Sl9I dEOJsuXhYkup9DovAwPDcN8.zjNIz6WOXuqDKsMm31NTeSb5Ni1GZAqijNCz 5wnOj8lSsIICcB.ey1Iek7HRumBxzKoKJV_TSUE_pgULLyrvsySQ.daf_Xf1 G2Go3pNBFO0lsKLvlDDSEg8VMiJbxr3O5a9cL0ETVwet8G69vjkE4DHXkLWV O5fm5cuNklTkNbmvEZ08- Received: from [61.15.240.116] by web190806.mail.sg3.yahoo.com via HTTP; Fri, 11 Jan 2013 00:39:07 SGT X-Rocket-MIMEInfo: 001.001, SGkgQXJ0ZW0sDQoNClRoYW5rcyBmb3IgcmVwbHkuDQoNCkkgaGF2ZSB0cmllZCBzb21lIHRlc3RzLCBnb29kIGFuZCBiYWQgcmVzdWx0IGlzIGluIGJlbG93Li4uLi4uLi4uLg0KDQpsb2diaWFzPXRocm91Z2hwdXQsIHByaW1hcnljYWNoZT1hbGwsIHN5bmM9c3RhbmRhcmQNClJlc3VsdDogU1FMDQpLZXkgwqDCoCDCoFZhbHVlIMKgwqAgwqBUaW1lIMKgwqAgwqBDb21tZW50DQpJbnNlcnQgVGltZTogwqDCoCDCoDEwMDAwIMKgwqAgwqAyOCBzIDotKCDCoMKgIMKgU2hvdWxkIG5vdCB0YWtlIG1vcmUgdGhhbiABMAEBAQE- X-Mailer: YahooMailClassic/15.1.2 YahooMailWebService/0.8.130.494 Message-ID: <1357835947.77658.YahooMailClassic@web190806.mail.sg3.yahoo.com> Date: Fri, 11 Jan 2013 00:39:07 +0800 (SGT) From: Patrick Dung Subject: Re: ZFS sub-optimal performance with default setting To: Artem Belevich In-Reply-To: MIME-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.14 Cc: freebsd-fs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 10 Jan 2013 16:39:10 -0000 Hi Artem, Thanks for reply. I have tried some tests, good and bad result is in below.......... logbias=3Dthroughput, primarycache=3Dall, sync=3Dstandard Result: SQL Key =A0=A0 =A0Value =A0=A0 =A0Time =A0=A0 =A0Comment Insert Time: =A0=A0 =A010000 =A0=A0 =A028 s :-( =A0=A0 =A0Should not take m= ore than 5's on an average system. Update Time: =A0=A0 =A010000 =A0=A0 =A029 s :-( =A0=A0 =A0Should not take m= ore than 9's on an average system. Select Time: =A0=A0 =A010000 =A0=A0 =A08 s :-( =A0=A0 =A0Should not take mo= re than 6's on an average system. Delete Time: =A0=A0 =A010000 =A0=A0 =A024 s :-( =A0=A0 =A0Should not take m= ore than 5's on an average system.=20 logbias=3Dlatency, primarycache=3Dall, sync=3Dstandard Result: SQL Key =A0=A0 =A0Value =A0=A0 =A0Time =A0=A0 =A0Comment Insert Time: =A0=A0 =A010000 =A0=A0 =A09 s :-( =A0=A0 =A0Should not take mo= re than 5's on an average system. Update Time: =A0=A0 =A010000 =A0=A0 =A010 s :-( =A0=A0 =A0Should not take m= ore than 9's on an average system. Select Time: =A0=A0 =A010000 =A0=A0 =A04 s :-) =A0=A0 =A0Looks fine! Delete Time: =A0=A0 =A010000 =A0=A0 =A08 s :-( =A0=A0 =A0Should not take mo= re than 5's on an average system.=20 logbias=3Dlatency, primarycache=3Dall, sync=3Ddisabled Result: SQL Key =A0=A0 =A0Value =A0=A0 =A0Time =A0=A0 =A0Comment Insert Time: =A0=A0 =A010000 =A0=A0 =A03 s :-) =A0=A0 =A0Looks fine! Update Time: =A0=A0 =A010000 =A0=A0 =A03 s :-) =A0=A0 =A0Looks fine! Select Time: =A0=A0 =A010000 =A0=A0 =A03 s :-) =A0=A0 =A0Looks fine! Delete Time: =A0=A0 =A010000 =A0=A0 =A03 s :-) =A0=A0 =A0Looks fine!=20 Regards, Patrick --- On Thu, 1/10/13, Artem Belevich wrote: From: Artem Belevich Subject: Re: ZFS sub-optimal performance with default setting To: "Patrick Dung" Cc: "freebsd-fs" Date: Thursday, January 10, 2013, 9:37 AM On Wed, Jan 9, 2013 at 6:31 AM, Patrick Dung wro= te: > Hi freebsd-fs! > > I have my the original question in: > http://archives.postgresql.org/pgsql-performance/2013-01/msg00044.php > But later it was found out the bottleneck seems to be the ZFS with out a = fast ZIL. > Please give some advise, thanks. For database storage on ZFS it may be necessary to change ZFS record size to match database page size. At least that's what one of the things Oracle recommends for oracle database: http://www.oracle.com/technetwork/server-storage/solaris10/config-solaris-z= fs-wp-167894.pdf You may also check if disabling prefetching (via vfs.zfs.prefetch_disable=3D1 tunable in loader.conf) helps your workload. --Artem From owner-freebsd-fs@FreeBSD.ORG Thu Jan 10 16:54:25 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id D1AEED33 for ; Thu, 10 Jan 2013 16:54:25 +0000 (UTC) (envelope-from patrick_dkt@yahoo.com.hk) Received: from nm1-vm3.bullet.mail.sg3.yahoo.com (nm1-vm3.bullet.mail.sg3.yahoo.com [106.10.148.74]) by mx1.freebsd.org (Postfix) with ESMTP id AF98594F for ; Thu, 10 Jan 2013 16:54:24 +0000 (UTC) Received: from [106.10.166.119] by nm1.bullet.mail.sg3.yahoo.com with NNFMP; 10 Jan 2013 16:54:23 -0000 Received: from [106.10.151.123] by tm8.bullet.mail.sg3.yahoo.com with NNFMP; 10 Jan 2013 16:54:22 -0000 Received: from [127.0.0.1] by omp1005.mail.sg3.yahoo.com with NNFMP; 10 Jan 2013 16:54:22 -0000 X-Yahoo-Newman-Property: ymail-3 X-Yahoo-Newman-Id: 962291.15377.bm@omp1005.mail.sg3.yahoo.com Received: (qmail 99395 invoked by uid 60001); 10 Jan 2013 16:54:22 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com.hk; s=s1024; t=1357836862; bh=6EnG/c91LVw13UBtAltjVOY/cOP5AZww9kbeL/uFRCE=; h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:Message-ID:Date:From:Subject:To:Cc:In-Reply-To:MIME-Version:Content-Type; b=tKoFWpLvGzA/uB/KmdYIYUZu2lfsXCuA7fTaKKll93P8G27MGABs3wy1VcCNBpx+4YMruvlNFBBcubtlGEhq34KI3iXSTbZAVI6Tg9n64EgwXGkzdFqP3Td1r1/NSVMXrmvxXKdDIX1a1m93kyxTWXXiXZ+E3SYmUPZX0vpXMl0= DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com.hk; h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:Message-ID:Date:From:Subject:To:Cc:In-Reply-To:MIME-Version:Content-Type; b=wt6eZKjjQ3NeueS5fafXfvTCVIjD1rlYuEzW3ugSI+Z29m5tRpFwRgUX5ITfXtua1bjRK6MgQuHacq4qTopON4GvJrsHhQ9eFqqSjkJYAE4Dr5pX5SPjGEkd/Aa+JXt0BsnlmGuEnWNd9v1TN8upwkO+gxxNujH2X3Iv5ax3DAE=; X-YMail-OSG: je2bYXAVM1kFX3DJXyGAOMxnXGvs9bkO6o7nAjuaIT0_p2e kYcuWb9iIXSl37LVw8pIiGyx94P2qdLYLRcC7H.s_5Ek1cLCjJaxT0dhH6eb 1p657ml1nwO2IXkzg4ThfcfI0QYVHlhHbB7tLI4zAXYPCSVwd3QvagRwXjvp IDvoBEP28wVkeHPYMFAWYtB1W.YgByvzox2OARcWELV7Ez9Ou_ZjK3wfd6X7 43fLeeJyd8P3p8JnQFr4RCE9sZl64VaD1oagVOksgdk9NDjayuadShw2o3du rRsVsbC6Cn05p04S3dMRriiX8Ox1X2L_jAPF5fjSuXm3Dol4UcF0tSTzMlgp 42WctK6JU3CC30QjCGEkcoB.HIT5EVCiW9pi4c2gqv94v83ytNGRdLLZ2psl 47L_q4hnhFB_79hB8aL1NVKYcfnH02E6gLZvBIzZG4DFkDr1pKqVtr5svWon D.tUCoTE4Eac92yB.D2hzeEj4DeCJX_w2CISwCytVPkTG91kE8eCBizdJRxr uLI0FRkWLj0xOwoqxhlQYPw8y9Xyp9cjUeN57ATomeujhbMpKeOWcaFukKSR 7qJ1X3.pfvi1oW5BjrWwGNUdV_W3_DdoK.EKU3XVZxQRWH0p.LURxMrDdibE xkD6K3BLrA_sgwubO4Se9ymwgv1BdM_QSBluM0hlHli281dVqsFZTKQmCveu 0s1YshchR_.SrdwN_ATK8lukE0vm6nbglvXVCxv_oHaMCxoQuLJ9B_YA4P_J I.aWV4YJqQBJAwTHZzMC84bLvMKxRLbVW.viz3_yn64DOEgySo6GSlUUHzno 4oQyms13VyysYqZIScsivALbqtFEPe5HxmYjUkpLRQUCEHsQAW5Ssf34r5X6 9X_lQKsl3g74gNOCxzejELtaM15whNfjhSDvgLUPHrizSjIL9BGXzm9XZGG5 yIg-- Received: from [61.15.240.116] by web190806.mail.sg3.yahoo.com via HTTP; Fri, 11 Jan 2013 00:54:22 SGT X-Rocket-MIMEInfo: 001.001, SSBoYXZlIHRyaWVkIHNvbWUgdGVzdHMsIGdvb2QgYW5kIGJhZCByZXN1bHQgaXMgaW4gYmVsb3cuLi4uLi4uLi4uDQpJIGFtIHN1cmUgdGhlcmUgaXMgc29tZSBib3R0bGVuZWNrLCBhbmQgdGhlIHJvb3QgY2F1c2UgaXMgc3RpbGwgdW5rbm93bi4NCg0KbG9nYmlhcz10aHJvdWdocHV0LCBwcmltYXJ5Y2FjaGU9YWxsLCBzeW5jPXN0YW5kYXJkDQpSZXN1bHQ6IFNRTA0KS2V5IMKgwqAgwqBWYWx1ZSDCoMKgIMKgVGltZSDCoMKgIMKgQ29tbWVudA0KSW5zZXJ0IFRpbWU6IMKgwqAgwqAxMDAwMCDCoMKgIMKgMjgBMAEBAQE- X-Mailer: YahooMailClassic/15.1.2 YahooMailWebService/0.8.130.494 Message-ID: <1357836862.81267.YahooMailClassic@web190806.mail.sg3.yahoo.com> Date: Fri, 11 Jan 2013 00:54:22 +0800 (SGT) From: Patrick Dung Subject: Re: ZFS sub-optimal performance with default setting To: Artem Belevich , Charles Sprickman In-Reply-To: <76DE1383-BAAA-4EFD-ABA6-A9328D79D5B3@bway.net> MIME-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.14 Cc: freebsd-fs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 10 Jan 2013 16:54:26 -0000 I have tried some tests, good and bad result is in below.......... I am sure there is some bottleneck, and the root cause is still unknown. logbias=3Dthroughput, primarycache=3Dall, sync=3Dstandard Result: SQL Key =A0=A0 =A0Value =A0=A0 =A0Time =A0=A0 =A0Comment Insert Time: =A0=A0 =A010000 =A0=A0 =A028 s :-( =A0=A0 =A0Should not take m= ore than 5's on an average system. Update Time: =A0=A0 =A010000 =A0=A0 =A029 s :-( =A0=A0 =A0Should not take m= ore than 9's on an average system. Select Time: =A0=A0 =A010000 =A0=A0 =A08 s :-( =A0=A0 =A0Should not take mo= re than 6's on an average system. Delete Time: =A0=A0 =A010000 =A0=A0 =A024 s :-( =A0=A0 =A0Should not take m= ore than 5's on an average system.=20 logbias=3Dlatency, primarycache=3Dall,=0A sync=3Dstandard Result: SQL Key =A0=A0 =A0Value =A0=A0 =A0Time =A0=A0 =A0Comment Insert Time: =A0=A0 =A010000 =A0=A0 =A09 s :-( =A0=A0 =A0Should not take mo= re than 5's on an average system. Update Time: =A0=A0 =A010000 =A0=A0 =A010 s :-( =A0=A0 =A0Should not take m= ore than 9's on an average system. Select Time: =A0=A0 =A010000 =A0=A0 =A04 s :-) =A0=A0 =A0Looks fine! Delete Time: =A0=A0 =A010000 =A0=A0 =A08 s :-( =A0=A0 =A0Should not take mo= re than 5's on an average system.=20 logbias=3Dlatency, primarycache=3Dall, sync=3Ddisabled Result: SQL Key =A0=A0 =A0Value =A0=A0 =A0Time =A0=A0 =A0Comment Insert Time: =A0=A0 =A010000 =A0=A0 =A03 s :-) =A0=A0 =A0Looks fine! Update Time: =A0=A0 =A010000 =A0=A0 =A03 s :-) =A0=A0=0A =A0Looks fine! Select Time: =A0=A0 =A010000 =A0=A0 =A03 s :-) =A0=A0 =A0Looks fine! Delete Time: =A0=A0 =A010000 =A0=A0 =A03 s :-) =A0=A0 =A0Looks fine!=20 Thanks, Patrick --- On Thu, 1/10/13, Charles Sprickman wrote: From: Charles Sprickman Subject: Re: ZFS sub-optimal performance with default setting To: "Artem Belevich" Cc: "Patrick Dung" , "freebsd-fs" Date: Thursday, January 10, 2013, 10:03 AM On Jan 9, 2013, at 8:37 PM, Artem Belevich wrote: > On Wed, Jan 9, 2013 at 6:31 AM, Patrick Dung w= rote: >> Hi freebsd-fs! >>=20 >> I have my the original question in: >> http://archives.postgresql.org/pgsql-performance/2013-01/msg00044.php >> But later it was found out the bottleneck seems to be the ZFS with out a= fast ZIL. >> Please give some advise, thanks. >=20 > For database storage on ZFS it may be necessary to change ZFS record > size to match database page size. At least that's what one of the > things Oracle recommends for oracle database: > http://www.oracle.com/technetwork/server-storage/solaris10/config-solaris= -zfs-wp-167894.pdf >=20 > You may also check if disabling prefetching (via > vfs.zfs.prefetch_disable=3D1 tunable in loader.conf) helps your > workload. PostgreSQL has a ton of tunables, and the performance list is a good place = to start.=A0 The archives there are full of information on zfs. A few things off the top of my head: -set recordsize to 8k on the pg dataset (this has to be done before you wri= te data) -set "full_page_writes =3D off" (safe on zfs, not necessarily so on ufs) -leave RAM available for PG by limiting the max ARC size in loader.conf, te= ll PG how much it has left after ARC plus some slop with "effective_cache_s= ize" (for example, if you have 64GB of RAM, maybe limit ARC to 32GB, then s= et effective_cache_size to 30GB or so) -turn off atime updates As far as general PG setup, pgtune will put you in a better place than the = default config: https://github.com/gregs1104/pgtune Actually the stock config file is really terrible, if you haven't touched i= t, you're almost guaranteed to have lousy performance. Lastly, this book is amazing - there's lots of general information that's q= uite useful outside of PG and general db tuning: http://www.2ndquadrant.com/en/postgresql-90-high-performance/ If you have $200 or so laying around, slap in two Intel 320 SSDs (these sur= vive power-loss without corruption), and make a mirrored ZIL for zfs.=A0 Th= at will up your TPS on write-heavy loads into at least the 20K realm. Charles >=20 > --Artem > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Thu Jan 10 17:08:52 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 17D8D362 for ; Thu, 10 Jan 2013 17:08:52 +0000 (UTC) (envelope-from tevans.uk@googlemail.com) Received: from mail-vc0-f181.google.com (mail-vc0-f181.google.com [209.85.220.181]) by mx1.freebsd.org (Postfix) with ESMTP id CA645A08 for ; Thu, 10 Jan 2013 17:08:51 +0000 (UTC) Received: by mail-vc0-f181.google.com with SMTP id gb30so549506vcb.26 for ; Thu, 10 Jan 2013 09:08:45 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=OZoFrLIRFYH86Hdga6YTOzCV33cLcWDmQTUuLoDxNro=; b=Xx4+Tkd+X2tce+P+wXBDt3u1QuNMF5yvBGVF2F5U/1L/cJmO0/guip0eDoi5WkeZ1W F8cmZwx+3gFW/fqHEpbUd1/Fe7bTi0+5A9E6Gz/4okpeCNRKoX+iRi/FbX255GCcu6F0 lESsq0zfInlUBRlWc2arXGfp+SY+4SbBdKL72kLyXiw8mrY7ieii/G7BqlsfMq0MTz7z 7rB14LfVPQByIOctxsArpD0b2MaMQZFMQYdlPTl3FVMsbHwyO+/OS5tLarxtsjSvLB/y 4YvW6QxayPCjpI/i4QOKjGDP41nY6BXgBqbgrD0CPnPHTpY5kb7EKMhSpCvQHk6hImP1 JIxA== MIME-Version: 1.0 Received: by 10.52.66.144 with SMTP id f16mr78828115vdt.60.1357837725416; Thu, 10 Jan 2013 09:08:45 -0800 (PST) Received: by 10.59.13.73 with HTTP; Thu, 10 Jan 2013 09:08:45 -0800 (PST) In-Reply-To: <1357836862.81267.YahooMailClassic@web190806.mail.sg3.yahoo.com> References: <76DE1383-BAAA-4EFD-ABA6-A9328D79D5B3@bway.net> <1357836862.81267.YahooMailClassic@web190806.mail.sg3.yahoo.com> Date: Thu, 10 Jan 2013 17:08:45 +0000 Message-ID: Subject: Re: ZFS sub-optimal performance with default setting From: Tom Evans To: Patrick Dung Content-Type: text/plain; charset=UTF-8 Cc: freebsd-fs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 10 Jan 2013 17:08:52 -0000 On Thu, Jan 10, 2013 at 4:54 PM, Patrick Dung wrote: > I have tried some tests, good and bad result is in below.......... > I am sure there is some bottleneck, and the root cause is still unknown. > Hi Patrick Correct me if I've made a mistake, but have you shown how you have configured your ZFS setup? Number and type of disks, etc, mirrored, raidz or raidz2? The output of zpool status and zfs-stats (from ports) would be useful. Cheers Tom From owner-freebsd-fs@FreeBSD.ORG Thu Jan 10 19:39:58 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 9EACEDF1; Thu, 10 Jan 2013 19:39:58 +0000 (UTC) (envelope-from nicolas@i.0x5.de) Received: from n.0x5.de (n.0x5.de [217.197.85.144]) by mx1.freebsd.org (Postfix) with ESMTP id 025163D4; Thu, 10 Jan 2013 19:39:58 +0000 (UTC) Received: by pc5.i.0x5.de (Postfix, from userid 1003) id 3YhyGF6vqPz7ySH; Thu, 10 Jan 2013 20:39:49 +0100 (CET) Date: Thu, 10 Jan 2013 20:39:49 +0100 From: Nicolas Rachinsky To: Artem Belevich Subject: Re: slowdown of zfs (tx->tx) Message-ID: <20130110193949.GA10023@mid.pc5.i.0x5.de> References: <20130108174225.GA17260@mid.pc5.i.0x5.de> <20130109162613.GA34276@mid.pc5.i.0x5.de> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: X-Powered-by: FreeBSD X-Homepage: http://www.rachinsky.de X-PGP-Keyid: 887BAE72 X-PGP-Fingerprint: 039E 9433 115F BC5F F88D 4524 5092 45C4 887B AE72 X-PGP-Keys: http://www.rachinsky.de/nicolas/gpg/nicolas_rachinsky.asc User-Agent: Mutt/1.5.21 (2010-09-15) Cc: freebsd-fs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 10 Jan 2013 19:39:58 -0000 Hallo, after replacing one of the controllers, all problems seem to have disappeared. Thank you very much for your advice! > >> On Tue, Jan 8, 2013 at 9:42 AM, Nicolas Rachinsky > >> wrote: * Artem Belevich [2013-01-09 17:15 -0800]: > On Wed, Jan 9, 2013 at 8:26 AM, Nicolas Rachinsky > wrote: > > * Artem Belevich [2013-01-08 12:47 -0800]: > >> You seem to have some checksum errors which does suggest hardware troubles. > > > > I somehow missed these. Is there any way to learn when these checksum > > errors happen? > > Not on FreeBSD (yet) as far as I can tell. Not explicitly, anyways. > Check /var/log/messages for any indications of SATA errors. There's a > good chance that there was a timeout at some point. There is an UDMA_CRC_Error_Count of 17 and 20 for the two disks with checksum errors. The other disks have values between 0 and 5. And yes, there have been timeouts some time ago. Since the problem did occur without the timeout occuring again, I considered the timeouts to be unrelated. And then I forgot them. :( But shouldn't timeouts either produce correct data after a retry or a read/write error otherwise? Nicolas -- http://www.rachinsky.de/nicolas From owner-freebsd-fs@FreeBSD.ORG Thu Jan 10 21:12:56 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 41EEE995 for ; Thu, 10 Jan 2013 21:12:56 +0000 (UTC) (envelope-from artemb@gmail.com) Received: from mail-vc0-f171.google.com (mail-vc0-f171.google.com [209.85.220.171]) by mx1.freebsd.org (Postfix) with ESMTP id 0845C960 for ; Thu, 10 Jan 2013 21:12:55 +0000 (UTC) Received: by mail-vc0-f171.google.com with SMTP id n11so798244vch.2 for ; Thu, 10 Jan 2013 13:12:55 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:sender:in-reply-to:references:date :x-google-sender-auth:message-id:subject:from:to:cc:content-type; bh=vY/bUgyjEB5vGxiiWk/qQ3welR+mhgTvuAAjBxoN5uk=; b=MiEPC7oPk5ybmZkn768e3Ojx6g0s2yRLVKmozen+iWZVNwGUmuVjFM6tuupfYKnTrM QB/979fzqtMfwAquM6TFACpodXreBNphZfYicjCBqVjRCz3Zx+xcCHt2Xeh9MSVAlVsd 8zez/do0qrPS2fuQj8KpJCTtvBaOZwB9AV5nyMvyfZio73r3rj9/CUg1BRDU2QHV6X09 arp5KKBQvNlP6mnXRbBja3/QZyZBpFvvJmTSXyXhr7vP6kSqyQzux3kAUqCX2zKt0s/1 eEJIvwkErmiac/OekZb7Ov798J9iByzASh95CRnQ4SOgxR9/FjahSesukgX6+9UH6t6W Kx0g== MIME-Version: 1.0 Received: by 10.59.11.67 with SMTP id eg3mr95130746ved.31.1357852374927; Thu, 10 Jan 2013 13:12:54 -0800 (PST) Sender: artemb@gmail.com Received: by 10.220.122.196 with HTTP; Thu, 10 Jan 2013 13:12:54 -0800 (PST) In-Reply-To: <20130110193949.GA10023@mid.pc5.i.0x5.de> References: <20130108174225.GA17260@mid.pc5.i.0x5.de> <20130109162613.GA34276@mid.pc5.i.0x5.de> <20130110193949.GA10023@mid.pc5.i.0x5.de> Date: Thu, 10 Jan 2013 13:12:54 -0800 X-Google-Sender-Auth: vPvY8Vqh3wKKq1MaQuc5Sp8eyKE Message-ID: Subject: Re: slowdown of zfs (tx->tx) From: Artem Belevich To: Nicolas Rachinsky Content-Type: text/plain; charset=ISO-8859-1 Cc: freebsd-fs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 10 Jan 2013 21:12:56 -0000 On Thu, Jan 10, 2013 at 11:39 AM, Nicolas Rachinsky wrote: > There is an UDMA_CRC_Error_Count of 17 and 20 for the two disks with > checksum errors. The other disks have values between 0 and 5. > > And yes, there have been timeouts some time ago. Since the problem did > occur without the timeout occuring again, I considered the timeouts to > be unrelated. And then I forgot them. :( > > > But shouldn't timeouts either produce correct data after a retry or > a read/write error otherwise? if I see CRC counter incrementing often enough that's a good indication that something is wrong. It does not mean that those transactions were the ones that corrupted data, but rather as an indication that things are not right with particular device. It may be a false alarm as CRC errors may happen under normal conditions, but non-trivial number of them is a good sign of trouble. --Artem From owner-freebsd-fs@FreeBSD.ORG Fri Jan 11 11:11:49 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id A2F47327 for ; Fri, 11 Jan 2013 11:11:49 +0000 (UTC) (envelope-from nicolas@i.0x5.de) Received: from n.0x5.de (n.0x5.de [217.197.85.144]) by mx1.freebsd.org (Postfix) with ESMTP id 387AE3D8 for ; Fri, 11 Jan 2013 11:11:48 +0000 (UTC) Received: by pc5.i.0x5.de (Postfix, from userid 1003) id 3YjLxb09blz7ySH; Fri, 11 Jan 2013 12:11:47 +0100 (CET) Date: Fri, 11 Jan 2013 12:11:47 +0100 From: Nicolas Rachinsky To: freebsd-fs Subject: Re: slowdown of zfs (tx->tx) Message-ID: <20130111111147.GA34160@mid.pc5.i.0x5.de> References: <20130108174225.GA17260@mid.pc5.i.0x5.de> <20130109162613.GA34276@mid.pc5.i.0x5.de> <20130110193949.GA10023@mid.pc5.i.0x5.de> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20130110193949.GA10023@mid.pc5.i.0x5.de> X-Powered-by: FreeBSD X-Homepage: http://www.rachinsky.de X-PGP-Keyid: 887BAE72 X-PGP-Fingerprint: 039E 9433 115F BC5F F88D 4524 5092 45C4 887B AE72 X-PGP-Keys: http://www.rachinsky.de/nicolas/gpg/nicolas_rachinsky.asc User-Agent: Mutt/1.5.21 (2010-09-15) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 11 Jan 2013 11:11:49 -0000 * Nicolas Rachinsky [2013-01-10 20:39 +0100]: > after replacing one of the controllers, all problems seem to have > disappeared. Thank you very much for your advice! Now the problem is back. After changing the controller, there were no more timeouts logged. No UDMA_CRC_Error_Count changed. While the problem exists, top almost all the time shows: last pid: 46322; load averages: 0.90, 1.03, 0.98 up 0+11:07:55 08:28:41 39 processes: 1 running, 38 sleeping CPU: 0.0% user, 0.0% nice, 50.1% system, 0.0% interrupt, 49.9% idle Mem: 10M Active, 33M Inact, 7612M Wired, 23M Cache, 827M Buf, 234M Free Swap: 16G Total, 13M Used, 16G Free PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND 926 root 1 44 0 28020K 2404K select 1 2:23 0.29% snmpd 41642 user1 1 44 0 5828K 204K tx->tx 0 20:53 0.00% rsync 41641 user1 1 44 0 29952K 3976K select 1 13:39 0.00% ssh 41640 user1 1 44 0 5828K 140K select 1 0:20 0.00% rsync 90399 user2 1 44 0 14020K 872K tx->tx 0 0:16 0.00% rsync 956 root 1 44 0 11808K 708K select 1 0:02 0.00% ntpd 1051 root 1 44 0 8356K 640K kqread 0 0:00 0.00% master 25713 root 1 44 0 38108K 3596K select 1 0:00 0.00% sshd 875 root 1 44 0 6920K 572K select 1 0:00 0.00% syslogd 1066 root 1 44 0 7976K 564K nanslp 1 0:00 0.00% cron 1058 postfix 1 44 0 8356K 792K kqread 1 0:00 0.00% qmgr 705 root 1 44 0 5248K 120K select 1 0:00 0.00% devd 25715 root 1 44 0 10248K 2828K pause 1 0:00 0.00% csh 1062 root 1 44 0 26176K 952K select 1 0:00 0.00% sshd 90401 user2 1 44 0 14020K 768K select 1 0:00 0.00% rsync 90400 user2 1 44 0 23808K 892K select 1 0:00 0.00% ssh 90372 user2 1 59 0 8344K 124K wait 0 0:00 0.00% sh 41619 user1 1 76 0 8344K 40K wait 1 0:00 0.00% sh 46322 root 1 44 0 9372K 1800K CPU1 1 0:00 0.00% top 89384 root 1 44 0 8344K 712K wait 0 0:00 0.00% sh 37854 root 1 45 0 8360K 472K piperd 1 0:00 0.00% sendmail 45382 postfix 1 44 0 8360K 1324K kqread 1 0:00 0.00% pickup 41608 root 1 76 0 8344K 440K wait 0 0:00 0.00% sh 25768 root 1 52 0 13440K 1716K nanslp 0 0:00 0.00% smartd 33599 root 1 50 0 8344K 452K wait 1 0:00 0.00% sh 33597 root 1 52 0 8344K 440K wait 1 0:00 0.00% sh 37855 root 1 44 0 8360K 468K piperd 0 0:00 0.00% postdrop 33591 root 1 44 0 7976K 524K piperd 1 0:00 0.00% cron 33595 root 1 46 0 8344K 436K wait 1 0:00 0.00% sh 33594 root 1 44 0 8344K 436K wait 1 0:00 0.00% sh 33592 root 1 45 0 7976K 524K piperd 1 0:00 0.00% cron 1106 root 1 76 0 6916K 352K ttyin 1 0:00 0.00% getty 1111 root 1 76 0 6916K 352K ttyin 1 0:00 0.00% getty 1107 root 1 76 0 6916K 352K ttyin 0 0:00 0.00% getty 1108 root 1 76 0 6916K 352K ttyin 0 0:00 0.00% getty 1112 root 1 76 0 6916K 352K ttyin 0 0:00 0.00% getty 1109 root 1 76 0 6916K 352K ttyin 1 0:00 0.00% getty 1113 root 1 76 0 6916K 352K ttyin 0 0:00 0.00% getty 1110 root 1 76 0 6916K 352K ttyin 0 0:00 0.00% getty The result of sh -c "while :;do gstat -I 5s -b ;done" > gstat.txt & iostat -d -x -w 5 > iostat.txt & zpool iostat -v 5 > zpool.txt & is available via http://flummi.dauerreden.de/20130111/zpool.txt http://flummi.dauerreden.de/20130111/gstat.txt http://flummi.dauerreden.de/20130111/iostat.txt Thanks in advance! Nicolas -- http://www.rachinsky.de/nicolas From owner-freebsd-fs@FreeBSD.ORG Fri Jan 11 13:58:14 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 33B7959A for ; Fri, 11 Jan 2013 13:58:14 +0000 (UTC) (envelope-from prvs=17232837bf=killing@multiplay.co.uk) Received: from mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) by mx1.freebsd.org (Postfix) with ESMTP id CD32FC2C for ; Fri, 11 Jan 2013 13:58:13 +0000 (UTC) Received: from r2d2 ([188.220.16.49]) by mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) (MDaemon PRO v10.0.4) with ESMTP id md50001671706.msg for ; Fri, 11 Jan 2013 13:58:11 +0000 X-Spam-Processed: mail1.multiplay.co.uk, Fri, 11 Jan 2013 13:58:11 +0000 (not processed: message from valid local sender) X-MDRemoteIP: 188.220.16.49 X-Return-Path: prvs=17232837bf=killing@multiplay.co.uk X-Envelope-From: killing@multiplay.co.uk X-MDaemon-Deliver-To: freebsd-fs@freebsd.org Message-ID: From: "Steven Hartland" To: "Nicolas Rachinsky" , "freebsd-fs" References: <20130108174225.GA17260@mid.pc5.i.0x5.de> <20130109162613.GA34276@mid.pc5.i.0x5.de> <20130110193949.GA10023@mid.pc5.i.0x5.de> <20130111111147.GA34160@mid.pc5.i.0x5.de> Subject: Re: slowdown of zfs (tx->tx) Date: Fri, 11 Jan 2013 13:58:26 -0000 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="iso-8859-1"; reply-type=original Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 11 Jan 2013 13:58:14 -0000 ----- Original Message ----- From: "Nicolas Rachinsky" To: "freebsd-fs" Sent: Friday, January 11, 2013 11:11 AM Subject: Re: slowdown of zfs (tx->tx) >* Nicolas Rachinsky [2013-01-10 20:39 +0100]: >> after replacing one of the controllers, all problems seem to have >> disappeared. Thank you very much for your advice! > > Now the problem is back. > > After changing the controller, there were no more timeouts logged. > > No UDMA_CRC_Error_Count changed. > > While the problem exists, top almost all the time shows: > > last pid: 46322; load averages: 0.90, 1.03, 0.98 up 0+11:07:55 08:28:41 > 39 processes: 1 running, 38 sleeping > CPU: 0.0% user, 0.0% nice, 50.1% system, 0.0% interrupt, 49.9% idle > Mem: 10M Active, 33M Inact, 7612M Wired, 23M Cache, 827M Buf, 234M Free > Swap: 16G Total, 13M Used, 16G Free > > PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND > 926 root 1 44 0 28020K 2404K select 1 2:23 0.29% snmpd > 41642 user1 1 44 0 5828K 204K tx->tx 0 20:53 0.00% rsync > 41641 user1 1 44 0 29952K 3976K select 1 13:39 0.00% ssh > 41640 user1 1 44 0 5828K 140K select 1 0:20 0.00% rsync > 90399 user2 1 44 0 14020K 872K tx->tx 0 0:16 0.00% rsync > 956 root 1 44 0 11808K 708K select 1 0:02 0.00% ntpd > 1051 root 1 44 0 8356K 640K kqread 0 0:00 0.00% master > 25713 root 1 44 0 38108K 3596K select 1 0:00 0.00% sshd > 875 root 1 44 0 6920K 572K select 1 0:00 0.00% syslogd > 1066 root 1 44 0 7976K 564K nanslp 1 0:00 0.00% cron > 1058 postfix 1 44 0 8356K 792K kqread 1 0:00 0.00% qmgr > 705 root 1 44 0 5248K 120K select 1 0:00 0.00% devd > 25715 root 1 44 0 10248K 2828K pause 1 0:00 0.00% csh > 1062 root 1 44 0 26176K 952K select 1 0:00 0.00% sshd > 90401 user2 1 44 0 14020K 768K select 1 0:00 0.00% rsync > 90400 user2 1 44 0 23808K 892K select 1 0:00 0.00% ssh > 90372 user2 1 59 0 8344K 124K wait 0 0:00 0.00% sh > 41619 user1 1 76 0 8344K 40K wait 1 0:00 0.00% sh > 46322 root 1 44 0 9372K 1800K CPU1 1 0:00 0.00% top > 89384 root 1 44 0 8344K 712K wait 0 0:00 0.00% sh > 37854 root 1 45 0 8360K 472K piperd 1 0:00 0.00% sendmail > 45382 postfix 1 44 0 8360K 1324K kqread 1 0:00 0.00% pickup > 41608 root 1 76 0 8344K 440K wait 0 0:00 0.00% sh > 25768 root 1 52 0 13440K 1716K nanslp 0 0:00 0.00% smartd > 33599 root 1 50 0 8344K 452K wait 1 0:00 0.00% sh > 33597 root 1 52 0 8344K 440K wait 1 0:00 0.00% sh > 37855 root 1 44 0 8360K 468K piperd 0 0:00 0.00% postdrop > 33591 root 1 44 0 7976K 524K piperd 1 0:00 0.00% cron > 33595 root 1 46 0 8344K 436K wait 1 0:00 0.00% sh > 33594 root 1 44 0 8344K 436K wait 1 0:00 0.00% sh > 33592 root 1 45 0 7976K 524K piperd 1 0:00 0.00% cron > 1106 root 1 76 0 6916K 352K ttyin 1 0:00 0.00% getty > 1111 root 1 76 0 6916K 352K ttyin 1 0:00 0.00% getty > 1107 root 1 76 0 6916K 352K ttyin 0 0:00 0.00% getty > 1108 root 1 76 0 6916K 352K ttyin 0 0:00 0.00% getty > 1112 root 1 76 0 6916K 352K ttyin 0 0:00 0.00% getty > 1109 root 1 76 0 6916K 352K ttyin 1 0:00 0.00% getty > 1113 root 1 76 0 6916K 352K ttyin 0 0:00 0.00% getty > 1110 root 1 76 0 6916K 352K ttyin 0 0:00 0.00% getty > > The result of > sh -c "while :;do gstat -I 5s -b ;done" > gstat.txt & iostat -d -x -w 5 > iostat.txt & zpool iostat -v 5 > zpool.txt & > is available via > http://flummi.dauerreden.de/20130111/zpool.txt > http://flummi.dauerreden.de/20130111/gstat.txt > http://flummi.dauerreden.de/20130111/iostat.txt > TBH looks like your just saturating your disks with the number of IOP's your doing. Regards Steve ================================================ This e.mail is private and confidential between Multiplay (UK) Ltd. and the person or entity to whom it is addressed. In the event of misdirection, the recipient is prohibited from using, copying, printing or otherwise disseminating it or any information contained in it. In the event of misdirection, illegible or incomplete transmission please telephone +44 845 868 1337 or return the E.mail to postmaster@multiplay.co.uk. From owner-freebsd-fs@FreeBSD.ORG Fri Jan 11 14:43:56 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 2AF1428F for ; Fri, 11 Jan 2013 14:43:56 +0000 (UTC) (envelope-from patrick_dkt@yahoo.com.hk) Received: from nm1-vm3.bullet.mail.sg3.yahoo.com (nm1-vm3.bullet.mail.sg3.yahoo.com [106.10.148.74]) by mx1.freebsd.org (Postfix) with ESMTP id F1904E70 for ; Fri, 11 Jan 2013 14:43:54 +0000 (UTC) Received: from [106.10.166.118] by nm1.bullet.mail.sg3.yahoo.com with NNFMP; 11 Jan 2013 14:43:47 -0000 Received: from [106.10.151.139] by tm7.bullet.mail.sg3.yahoo.com with NNFMP; 11 Jan 2013 14:43:47 -0000 Received: from [127.0.0.1] by omp1007.mail.sg3.yahoo.com with NNFMP; 11 Jan 2013 14:43:47 -0000 X-Yahoo-Newman-Property: ymail-3 X-Yahoo-Newman-Id: 247003.66141.bm@omp1007.mail.sg3.yahoo.com Received: (qmail 17048 invoked by uid 60001); 11 Jan 2013 14:43:47 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com.hk; s=s1024; t=1357915427; bh=hpNIVZqVIgLWYpRDaXayfolSzPSMS/Jdus+aikOk2NE=; h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:Message-ID:Date:From:Subject:To:Cc:MIME-Version:Content-Type; b=bCsRJ9mnKlKrefdPzFXPWGYyCMmSmspQYH0jLFOPaXTM6loqGGDgsR1qJSIfv6kE1jDyFFQXE12jnMj9DB0/aKfbsdPkz0ciPCQKgZGG0cu5YmL6Sdk+s+JDka4oTkz6GtSxHvG3oWUdK4fCTwNnlnM7WDlML/AXAWi3imA6ltg= DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com.hk; h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:Message-ID:Date:From:Subject:To:Cc:MIME-Version:Content-Type; b=M97GfbWdSvyVSsREuiKNyDjGiTertPqQ/oGJYu3y3RvTzk5Q3oaXoXmZclAAaFUkluGdFjlthg7ExAv7t7pz17LLLYvyq8I+qQMoGoUGs6XilCIAPRCgtiGT5qqEQK4XXurg8OkUdoDFXknPuq9FelMsKHADF7OBVXwRQ8sDlyI=; X-YMail-OSG: cOx57z8VM1l9IIqYWsMey2sbKZeayIELfx6Darkx7U6BEjq 3jsSOWWXgGmfswy_jK.a8s.eHesQzevZEE7pFRM1nO9GUVeTMGGRtdhid6OF E8gvt71MxrKXTQ3HcmjFPAtRhpfttFhyRgy4UzXu5vLcWhQ7Ce5iK3Vf24Jk KdPMgDJF7fTc27MV3yNTD2EKl9tMKbpIAYIkimoKZdGcrx8_PkprvGXXW47v iGoKzywyH5tB77xjGDYd_wfNbcJiH_YE0kxrxvbeFN3ijqLYRK9h_fg4YrIt JayvNKsWyqSu1aflDKJTPx2XwID3PHrf6fmcjddGEsdj7JRMat5rA0lHcDtM hEMR1gO7n_hamcj3bQqiQ4QI0YR5fdskOg_DEtnmuMdqLzyzrNPKlm3TbZVV etTf0VsLc0yLM5EmRWTgCxYTWVVgJyTSCOf725CMg4p6q.bhLCn7tD9u0ZYO OkO2k80BWcJUvA1g1pbU26ZmdMJxqjhJb5qg3Khs4g7DfRaBqoH6iHRj3R2s - Received: from [61.15.240.116] by web190801.mail.sg3.yahoo.com via HTTP; Fri, 11 Jan 2013 22:43:46 SGT X-Rocket-MIMEInfo: 001.001, SGkgVG9tLA0KDQpUbyBtYWtlIGl0IHNpbXBsZSwgSSBoYXZlIHNldHVwIGluIHRoaXMgc2V0dGluZzoNCg0KSG9zdDogSW50ZWwgZHVhbCBjb3JlIDNHaHogQ1BVLCBSSEVMIDYuMyB4NjQsIFJBTSA4R0INCg0KRnJlZWJzZCA5LjEgLWkzODYgVk0gd2l0aCB0aGVzZSBzZXR0aW5nOg0KQ1BVOiBPbmUNCk1lbW9yeTogMkdCDQo1R0IgZm9yIE9TIChkYTEpDQo1R0IgZm9yIFpGUyAoZGEyKSwgbm8gc2VwYXJhdGUgWklMDQoNCkluc3RhbGxlZCBzb2Z0d2FyZToNClBvc3RncmVzcWwgOS4yLjIgKGNvbXBpbGUgZnIBMAEBAQE- X-Mailer: YahooMailClassic/15.1.2 YahooMailWebService/0.8.130.494 Message-ID: <1357915426.16602.YahooMailClassic@web190801.mail.sg3.yahoo.com> Date: Fri, 11 Jan 2013 22:43:46 +0800 (SGT) From: Patrick Dung Subject: Re: ZFS sub-optimal performance with default setting To: Tom Evans MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="-1664858152-1374130778-1357915426=:16602" X-Content-Filtered-By: Mailman/MimeDel 2.1.14 Cc: freebsd-fs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 11 Jan 2013 14:43:56 -0000 ---1664858152-1374130778-1357915426=:16602 Content-Type: text/plain; charset=iso-8859-1 Content-Transfer-Encoding: quoted-printable Hi Tom, To make it simple, I have setup in this setting: Host: Intel dual core 3Ghz CPU, RHEL 6.3 x64, RAM 8GB Freebsd 9.1 -i386 VM with these setting: CPU: One Memory: 2GB 5GB for OS (da1) 5GB for ZFS (da2), no separate ZIL Installed software: Postgresql 9.2.2 (compile from ports) /usr/local/pgsql is a ZFS dataset OTRS 3.1.6 (compile from ports) Apache 2 install from packages zfs/postgresql/otrs/apache is in default setting, except I have turned off = atime in ZFS. I have run OTRS benchmark twice, below is the result: Insert Time: =A0=A0=A0 10000 =A0=A0=A0 12 s :-( =A0=A0=A0 Should not take m= ore than 5's on an average system. Update Time: =A0=A0=A0 10000 =A0=A0=A0 7 s =A0=A0=A0 Ok Select Time: =A0=A0=A0 10000 =A0=A0=A0 3 s :-) =A0=A0=A0 Looks fine! Delete Time: =A0=A0=A0 10000 =A0=A0=A0 2 s :-) =A0=A0=A0 Looks fine!=20 Thanks, Patrick --- On Fri, 1/11/13, Tom Evans wrote: From: Tom Evans Subject: Re: ZFS sub-optimal performance with default setting To: "Patrick Dung" Cc: "freebsd-fs" Date: Friday, January 11, 2013, 1:08 AM On Thu, Jan 10, 2013 at 4:54 PM, Patrick Dung wr= ote: > I have tried some tests, good and bad result is in below.......... > I am sure there is some bottleneck, and the root cause is still unknown. > Hi Patrick Correct me if I've made a mistake, but have you shown how you have configured your ZFS setup? Number and type of disks, etc, mirrored, raidz or raidz2? The output of zpool status and zfs-stats (from ports) would be useful. Cheers Tom ---1664858152-1374130778-1357915426=:16602 Content-Type: text/plain; name=zfs Content-Transfer-Encoding: base64 Content-Disposition: attachment; filename="zfs get all.txt" IyB6ZnMgZ2V0IGFsbA0KTkFNRSAgICAgICAgUFJPUEVSVFkgICAgICAgICAg ICAgIFZBTFVFICAgICAgICAgICAgICAgICAgU09VUkNFDQpkYXRhICAgICAg ICB0eXBlICAgICAgICAgICAgICAgICAgZmlsZXN5c3RlbSAgICAgICAgICAg ICAtDQpkYXRhICAgICAgICBjcmVhdGlvbiAgICAgICAgICAgICAgRnJpIEph biAxMSAyMToyNSAyMDEzICAtDQpkYXRhICAgICAgICB1c2VkICAgICAgICAg ICAgICAgICAgMTc1TSAgICAgICAgICAgICAgICAgICAtDQpkYXRhICAgICAg ICBhdmFpbGFibGUgICAgICAgICAgICAgMi43N0cgICAgICAgICAgICAgICAg ICAtDQpkYXRhICAgICAgICByZWZlcmVuY2VkICAgICAgICAgICAgMTA4TSAg ICAgICAgICAgICAgICAgICAtDQpkYXRhICAgICAgICBjb21wcmVzc3JhdGlv ICAgICAgICAgMS4wMHggICAgICAgICAgICAgICAgICAtDQpkYXRhICAgICAg ICBtb3VudGVkICAgICAgICAgICAgICAgeWVzICAgICAgICAgICAgICAgICAg ICAtDQpkYXRhICAgICAgICBxdW90YSAgICAgICAgICAgICAgICAgbm9uZSAg ICAgICAgICAgICAgICAgICBkZWZhdWx0DQpkYXRhICAgICAgICByZXNlcnZh dGlvbiAgICAgICAgICAgbm9uZSAgICAgICAgICAgICAgICAgICBkZWZhdWx0 DQpkYXRhICAgICAgICByZWNvcmRzaXplICAgICAgICAgICAgMTI4SyAgICAg ICAgICAgICAgICAgICBkZWZhdWx0DQpkYXRhICAgICAgICBtb3VudHBvaW50 ICAgICAgICAgICAgL2RhdGEgICAgICAgICAgICAgICAgICBkZWZhdWx0DQpk YXRhICAgICAgICBzaGFyZW5mcyAgICAgICAgICAgICAgb2ZmICAgICAgICAg ICAgICAgICAgICBkZWZhdWx0DQpkYXRhICAgICAgICBjaGVja3N1bSAgICAg ICAgICAgICAgb24gICAgICAgICAgICAgICAgICAgICBkZWZhdWx0DQpkYXRh ICAgICAgICBjb21wcmVzc2lvbiAgICAgICAgICAgb2ZmICAgICAgICAgICAg ICAgICAgICBkZWZhdWx0DQpkYXRhICAgICAgICBhdGltZSAgICAgICAgICAg ICAgICAgb2ZmICAgICAgICAgICAgICAgICAgICBsb2NhbA0KZGF0YSAgICAg ICAgZGV2aWNlcyAgICAgICAgICAgICAgIG9uICAgICAgICAgICAgICAgICAg ICAgZGVmYXVsdA0KZGF0YSAgICAgICAgZXhlYyAgICAgICAgICAgICAgICAg IG9uICAgICAgICAgICAgICAgICAgICAgZGVmYXVsdA0KZGF0YSAgICAgICAg c2V0dWlkICAgICAgICAgICAgICAgIG9uICAgICAgICAgICAgICAgICAgICAg ZGVmYXVsdA0KZGF0YSAgICAgICAgcmVhZG9ubHkgICAgICAgICAgICAgIG9m ZiAgICAgICAgICAgICAgICAgICAgZGVmYXVsdA0KZGF0YSAgICAgICAgamFp bGVkICAgICAgICAgICAgICAgIG9mZiAgICAgICAgICAgICAgICAgICAgZGVm YXVsdA0KZGF0YSAgICAgICAgc25hcGRpciAgICAgICAgICAgICAgIGhpZGRl biAgICAgICAgICAgICAgICAgZGVmYXVsdA0KZGF0YSAgICAgICAgYWNsbW9k ZSAgICAgICAgICAgICAgIGRpc2NhcmQgICAgICAgICAgICAgICAgZGVmYXVs dA0KZGF0YSAgICAgICAgYWNsaW5oZXJpdCAgICAgICAgICAgIHJlc3RyaWN0 ZWQgICAgICAgICAgICAgZGVmYXVsdA0KZGF0YSAgICAgICAgY2FubW91bnQg ICAgICAgICAgICAgIG9uICAgICAgICAgICAgICAgICAgICAgZGVmYXVsdA0K ZGF0YSAgICAgICAgeGF0dHIgICAgICAgICAgICAgICAgIG9mZiAgICAgICAg ICAgICAgICAgICAgdGVtcG9yYXJ5DQpkYXRhICAgICAgICBjb3BpZXMgICAg ICAgICAgICAgICAgMSAgICAgICAgICAgICAgICAgICAgICBkZWZhdWx0DQpk YXRhICAgICAgICB2ZXJzaW9uICAgICAgICAgICAgICAgNSAgICAgICAgICAg ICAgICAgICAgICAtDQpkYXRhICAgICAgICB1dGY4b25seSAgICAgICAgICAg ICAgb2ZmICAgICAgICAgICAgICAgICAgICAtDQpkYXRhICAgICAgICBub3Jt YWxpemF0aW9uICAgICAgICAgbm9uZSAgICAgICAgICAgICAgICAgICAtDQpk YXRhICAgICAgICBjYXNlc2Vuc2l0aXZpdHkgICAgICAgc2Vuc2l0aXZlICAg ICAgICAgICAgICAtDQpkYXRhICAgICAgICB2c2NhbiAgICAgICAgICAgICAg ICAgb2ZmICAgICAgICAgICAgICAgICAgICBkZWZhdWx0DQpkYXRhICAgICAg ICBuYm1hbmQgICAgICAgICAgICAgICAgb2ZmICAgICAgICAgICAgICAgICAg ICBkZWZhdWx0DQpkYXRhICAgICAgICBzaGFyZXNtYiAgICAgICAgICAgICAg b2ZmICAgICAgICAgICAgICAgICAgICBkZWZhdWx0DQpkYXRhICAgICAgICBy ZWZxdW90YSAgICAgICAgICAgICAgbm9uZSAgICAgICAgICAgICAgICAgICBk ZWZhdWx0DQpkYXRhICAgICAgICByZWZyZXNlcnZhdGlvbiAgICAgICAgbm9u ZSAgICAgICAgICAgICAgICAgICBkZWZhdWx0DQpkYXRhICAgICAgICBwcmlt YXJ5Y2FjaGUgICAgICAgICAgYWxsICAgICAgICAgICAgICAgICAgICBkZWZh dWx0DQpkYXRhICAgICAgICBzZWNvbmRhcnljYWNoZSAgICAgICAgYWxsICAg ICAgICAgICAgICAgICAgICBkZWZhdWx0DQpkYXRhICAgICAgICB1c2VkYnlz bmFwc2hvdHMgICAgICAgMCAgICAgICAgICAgICAgICAgICAgICAtDQpkYXRh ICAgICAgICB1c2VkYnlkYXRhc2V0ICAgICAgICAgMTA4TSAgICAgICAgICAg ICAgICAgICAtDQpkYXRhICAgICAgICB1c2VkYnljaGlsZHJlbiAgICAgICAg NjYuN00gICAgICAgICAgICAgICAgICAtDQpkYXRhICAgICAgICB1c2VkYnly ZWZyZXNlcnZhdGlvbiAgMCAgICAgICAgICAgICAgICAgICAgICAtDQpkYXRh ICAgICAgICBsb2diaWFzICAgICAgICAgICAgICAgbGF0ZW5jeSAgICAgICAg ICAgICAgICBkZWZhdWx0DQpkYXRhICAgICAgICBkZWR1cCAgICAgICAgICAg ICAgICAgb2ZmICAgICAgICAgICAgICAgICAgICBkZWZhdWx0DQpkYXRhICAg ICAgICBtbHNsYWJlbCAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg ICAgICAtDQpkYXRhICAgICAgICBzeW5jICAgICAgICAgICAgICAgICAgc3Rh bmRhcmQgICAgICAgICAgICAgICBkZWZhdWx0DQpkYXRhICAgICAgICByZWZj b21wcmVzc3JhdGlvICAgICAgMS4wMHggICAgICAgICAgICAgICAgICAtDQpk YXRhICAgICAgICB3cml0dGVuICAgICAgICAgICAgICAgMTA4TSAgICAgICAg ICAgICAgICAgICAtDQpkYXRhL3Bnc3FsICB0eXBlICAgICAgICAgICAgICAg ICAgZmlsZXN5c3RlbSAgICAgICAgICAgICAtDQpkYXRhL3Bnc3FsICBjcmVh dGlvbiAgICAgICAgICAgICAgRnJpIEphbiAxMSAyMToyNyAyMDEzICAtDQpk YXRhL3Bnc3FsICB1c2VkICAgICAgICAgICAgICAgICAgNjYuNU0gICAgICAg ICAgICAgICAgICAtDQpkYXRhL3Bnc3FsICBhdmFpbGFibGUgICAgICAgICAg ICAgMi43N0cgICAgICAgICAgICAgICAgICAtDQpkYXRhL3Bnc3FsICByZWZl cmVuY2VkICAgICAgICAgICAgNjYuNU0gICAgICAgICAgICAgICAgICAtDQpk YXRhL3Bnc3FsICBjb21wcmVzc3JhdGlvICAgICAgICAgMS4wMHggICAgICAg ICAgICAgICAgICAtDQpkYXRhL3Bnc3FsICBtb3VudGVkICAgICAgICAgICAg ICAgeWVzICAgICAgICAgICAgICAgICAgICAtDQpkYXRhL3Bnc3FsICBxdW90 YSAgICAgICAgICAgICAgICAgbm9uZSAgICAgICAgICAgICAgICAgICBkZWZh dWx0DQpkYXRhL3Bnc3FsICByZXNlcnZhdGlvbiAgICAgICAgICAgbm9uZSAg ICAgICAgICAgICAgICAgICBkZWZhdWx0DQpkYXRhL3Bnc3FsICByZWNvcmRz aXplICAgICAgICAgICAgMTI4SyAgICAgICAgICAgICAgICAgICBkZWZhdWx0 DQpkYXRhL3Bnc3FsICBtb3VudHBvaW50ICAgICAgICAgICAgL3Vzci9sb2Nh bC9wZ3NxbCAgICAgICBsb2NhbA0KZGF0YS9wZ3NxbCAgc2hhcmVuZnMgICAg ICAgICAgICAgIG9mZiAgICAgICAgICAgICAgICAgICAgZGVmYXVsdA0KZGF0 YS9wZ3NxbCAgY2hlY2tzdW0gICAgICAgICAgICAgIG9uICAgICAgICAgICAg ICAgICAgICAgZGVmYXVsdA0KZGF0YS9wZ3NxbCAgY29tcHJlc3Npb24gICAg ICAgICAgIG9mZiAgICAgICAgICAgICAgICAgICAgZGVmYXVsdA0KZGF0YS9w Z3NxbCAgYXRpbWUgICAgICAgICAgICAgICAgIG9mZiAgICAgICAgICAgICAg ICAgICAgaW5oZXJpdGVkIGZyb20gZGF0YQ0KZGF0YS9wZ3NxbCAgZGV2aWNl cyAgICAgICAgICAgICAgIG9uICAgICAgICAgICAgICAgICAgICAgZGVmYXVs dA0KZGF0YS9wZ3NxbCAgZXhlYyAgICAgICAgICAgICAgICAgIG9uICAgICAg ICAgICAgICAgICAgICAgZGVmYXVsdA0KZGF0YS9wZ3NxbCAgc2V0dWlkICAg ICAgICAgICAgICAgIG9uICAgICAgICAgICAgICAgICAgICAgZGVmYXVsdA0K ZGF0YS9wZ3NxbCAgcmVhZG9ubHkgICAgICAgICAgICAgIG9mZiAgICAgICAg ICAgICAgICAgICAgZGVmYXVsdA0KZGF0YS9wZ3NxbCAgamFpbGVkICAgICAg ICAgICAgICAgIG9mZiAgICAgICAgICAgICAgICAgICAgZGVmYXVsdA0KZGF0 YS9wZ3NxbCAgc25hcGRpciAgICAgICAgICAgICAgIGhpZGRlbiAgICAgICAg ICAgICAgICAgZGVmYXVsdA0KZGF0YS9wZ3NxbCAgYWNsbW9kZSAgICAgICAg ICAgICAgIGRpc2NhcmQgICAgICAgICAgICAgICAgZGVmYXVsdA0KZGF0YS9w Z3NxbCAgYWNsaW5oZXJpdCAgICAgICAgICAgIHJlc3RyaWN0ZWQgICAgICAg ICAgICAgZGVmYXVsdA0KZGF0YS9wZ3NxbCAgY2FubW91bnQgICAgICAgICAg ICAgIG9uICAgICAgICAgICAgICAgICAgICAgZGVmYXVsdA0KZGF0YS9wZ3Nx bCAgeGF0dHIgICAgICAgICAgICAgICAgIG9mZiAgICAgICAgICAgICAgICAg ICAgdGVtcG9yYXJ5DQpkYXRhL3Bnc3FsICBjb3BpZXMgICAgICAgICAgICAg ICAgMSAgICAgICAgICAgICAgICAgICAgICBkZWZhdWx0DQpkYXRhL3Bnc3Fs ICB2ZXJzaW9uICAgICAgICAgICAgICAgNSAgICAgICAgICAgICAgICAgICAg ICAtDQpkYXRhL3Bnc3FsICB1dGY4b25seSAgICAgICAgICAgICAgb2ZmICAg ICAgICAgICAgICAgICAgICAtDQpkYXRhL3Bnc3FsICBub3JtYWxpemF0aW9u ICAgICAgICAgbm9uZSAgICAgICAgICAgICAgICAgICAtDQpkYXRhL3Bnc3Fs ICBjYXNlc2Vuc2l0aXZpdHkgICAgICAgc2Vuc2l0aXZlICAgICAgICAgICAg ICAtDQpkYXRhL3Bnc3FsICB2c2NhbiAgICAgICAgICAgICAgICAgb2ZmICAg ICAgICAgICAgICAgICAgICBkZWZhdWx0DQpkYXRhL3Bnc3FsICBuYm1hbmQg ICAgICAgICAgICAgICAgb2ZmICAgICAgICAgICAgICAgICAgICBkZWZhdWx0 DQpkYXRhL3Bnc3FsICBzaGFyZXNtYiAgICAgICAgICAgICAgb2ZmICAgICAg ICAgICAgICAgICAgICBkZWZhdWx0DQpkYXRhL3Bnc3FsICByZWZxdW90YSAg ICAgICAgICAgICAgbm9uZSAgICAgICAgICAgICAgICAgICBkZWZhdWx0DQpk YXRhL3Bnc3FsICByZWZyZXNlcnZhdGlvbiAgICAgICAgbm9uZSAgICAgICAg ICAgICAgICAgICBkZWZhdWx0DQpkYXRhL3Bnc3FsICBwcmltYXJ5Y2FjaGUg ICAgICAgICAgYWxsICAgICAgICAgICAgICAgICAgICBkZWZhdWx0DQpkYXRh L3Bnc3FsICBzZWNvbmRhcnljYWNoZSAgICAgICAgYWxsICAgICAgICAgICAg ICAgICAgICBkZWZhdWx0DQpkYXRhL3Bnc3FsICB1c2VkYnlzbmFwc2hvdHMg ICAgICAgMCAgICAgICAgICAgICAgICAgICAgICAtDQpkYXRhL3Bnc3FsICB1 c2VkYnlkYXRhc2V0ICAgICAgICAgNjYuNU0gICAgICAgICAgICAgICAgICAt DQpkYXRhL3Bnc3FsICB1c2VkYnljaGlsZHJlbiAgICAgICAgMCAgICAgICAg ICAgICAgICAgICAgICAtDQpkYXRhL3Bnc3FsICB1c2VkYnlyZWZyZXNlcnZh dGlvbiAgMCAgICAgICAgICAgICAgICAgICAgICAtDQpkYXRhL3Bnc3FsICBs b2diaWFzICAgICAgICAgICAgICAgbGF0ZW5jeSAgICAgICAgICAgICAgICBk ZWZhdWx0DQpkYXRhL3Bnc3FsICBkZWR1cCAgICAgICAgICAgICAgICAgb2Zm ICAgICAgICAgICAgICAgICAgICBkZWZhdWx0DQpkYXRhL3Bnc3FsICBtbHNs YWJlbCAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAtDQpk YXRhL3Bnc3FsICBzeW5jICAgICAgICAgICAgICAgICAgc3RhbmRhcmQgICAg ICAgICAgICAgICBkZWZhdWx0DQpkYXRhL3Bnc3FsICByZWZjb21wcmVzc3Jh dGlvICAgICAgMS4wMHggICAgICAgICAgICAgICAgICAtDQpkYXRhL3Bnc3Fs ICB3cml0dGVuICAgICAgICAgICAgICAgNjYuNU0gICAgICAgICAgICAgICAg ICAtDQo= ---1664858152-1374130778-1357915426=:16602 Content-Type: text/plain; name=zfs Content-Transfer-Encoding: base64 Content-Disposition: attachment; filename="zfs status.txt" IyB6cG9vbCBzdGF0dXMNCiAgcG9vbDogZGF0YQ0KIHN0YXRlOiBPTkxJTkUN CiAgc2Nhbjogbm9uZSByZXF1ZXN0ZWQNCmNvbmZpZzoNCg0KICAgICAgICBO QU1FICAgICAgICBTVEFURSAgICAgUkVBRCBXUklURSBDS1NVTQ0KICAgICAg ICBkYXRhICAgICAgICBPTkxJTkUgICAgICAgMCAgICAgMCAgICAgMA0KICAg ICAgICAgIGRhMSAgICAgICBPTkxJTkUgICAgICAgMCAgICAgMCAgICAgMA0K DQplcnJvcnM6IE5vIGtub3duIGRhdGEgZXJyb3Jz ---1664858152-1374130778-1357915426=:16602 Content-Type: text/plain; name="zfs-stats.txt" Content-Transfer-Encoding: base64 Content-Disposition: attachment; filename="zfs-stats.txt" IyB6ZnMtc3RhdHMgLWENCg0KLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t DQpaRlMgU3Vic3lzdGVtIFJlcG9ydCAgICAgICAgICAgICAgICAgICAgICAg ICAgICBGcmkgSmFuIDExIDIyOjM4OjQ3IDIwMTMNCi0tLS0tLS0tLS0tLS0t LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t LS0tLS0tLS0tLS0tLQ0KDQpTeXN0ZW0gSW5mb3JtYXRpb246DQoNCiAgICAg ICAgS2VybmVsIFZlcnNpb246ICAgICAgICAgICAgICAgICAgICAgICAgIDkw MTAwMCAob3NyZWxkYXRlKQ0KICAgICAgICBIYXJkd2FyZSBQbGF0Zm9ybTog ICAgICAgICAgICAgICAgICAgICAgaTM4Ng0KICAgICAgICBQcm9jZXNzb3Ig QXJjaGl0ZWN0dXJlOiAgICAgICAgICAgICAgICAgaTM4Ng0KDQogICAgICAg IFpGUyBTdG9yYWdlIHBvb2wgVmVyc2lvbjogICAgICAgICAgICAgICAyOA0K ICAgICAgICBaRlMgRmlsZXN5c3RlbSBWZXJzaW9uOiAgICAgICAgICAgICAg ICAgNQ0KDQpGcmVlQlNEIDkuMS1SRUxFQVNFICMwIHIyNDM4MjY6IFR1ZSBE ZWMgNCAwNjo1NTozOSBVVEMgMjAxMiByb290DQoxMDozOFBNICB1cCA5IG1p bnMsIDEgdXNlciwgbG9hZCBhdmVyYWdlczogMC4wMCwgMC4xMCwgMC4xMA0K DQotLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0NCg0KU3lzdGVtIE1lbW9y eToNCg0KICAgICAgICAxLjM3JSAgIDI3LjM5ICAgTWlCIEFjdGl2ZSwgICAg IDEuMDYlICAgMjEuMTggICBNaUIgSW5hY3QNCiAgICAgICAgNS4xOSUgICAx MDMuNTEgIE1pQiBXaXJlZCwgICAgICAwLjAwJSAgIDAgQ2FjaGUNCiAgICAg ICAgOTIuMzYlICAxLjgwICAgIEdpQiBGcmVlLCAgICAgICAwLjAyJSAgIDM3 Mi4wMCAgS2lCIEdhcA0KDQogICAgICAgIFJlYWwgSW5zdGFsbGVkOiAgICAg ICAgICAgICAgICAgICAgICAgICAyLjAwICAgIEdpQg0KICAgICAgICBSZWFs IEF2YWlsYWJsZTogICAgICAgICAgICAgICAgIDk5LjE5JSAgMS45OCAgICBH aUINCiAgICAgICAgUmVhbCBNYW5hZ2VkOiAgICAgICAgICAgICAgICAgICA5 OC4yMSUgIDEuOTUgICAgR2lCDQoNCiAgICAgICAgTG9naWNhbCBUb3RhbDog ICAgICAgICAgICAgICAgICAgICAgICAgIDIuMDAgICAgR2lCDQogICAgICAg IExvZ2ljYWwgVXNlZDogICAgICAgICAgICAgICAgICAgOS4wMCUgICAxODQu MjUgIE1pQg0KICAgICAgICBMb2dpY2FsIEZyZWU6ICAgICAgICAgICAgICAg ICAgIDkxLjAwJSAgMS44MiAgICBHaUINCg0KS2VybmVsIE1lbW9yeTogICAg ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgODIuODQgICBNaUINCiAg ICAgICAgRGF0YTogICAgICAgICAgICAgICAgICAgICAgICAgICA3OS4xMiUg IDY1LjU0ICAgTWlCDQogICAgICAgIFRleHQ6ICAgICAgICAgICAgICAgICAg ICAgICAgICAgMjAuODglICAxNy4yOSAgIE1pQg0KDQpLZXJuZWwgTWVtb3J5 IE1hcDogICAgICAgICAgICAgICAgICAgICAgICAgICAgICA0MTEuNzYgIE1p Qg0KICAgICAgICBTaXplOiAgICAgICAgICAgICAgICAgICAgICAgICAgIDE4 LjUyJSAgNzYuMjUgICBNaUINCiAgICAgICAgRnJlZTogICAgICAgICAgICAg ICAgICAgICAgICAgICA4MS40OCUgIDMzNS41MCAgTWlCDQoNCi0tLS0tLS0t LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t LS0tLS0tLS0tLS0tLS0tLS0tLQ0KDQpBUkMgU3VtbWFyeTogKEhFQUxUSFkp DQogICAgICAgIE1lbW9yeSBUaHJvdHRsZSBDb3VudDogICAgICAgICAgICAg ICAgICAwDQoNCkFSQyBNaXNjOg0KICAgICAgICBEZWxldGVkOiAgICAgICAg ICAgICAgICAgICAgICAgICAgICAgICAgNg0KICAgICAgICBSZWN5Y2xlIE1p c3NlczogICAgICAgICAgICAgICAgICAgICAgICAgMA0KICAgICAgICBNdXRl eCBNaXNzZXM6ICAgICAgICAgICAgICAgICAgICAgICAgICAgMA0KICAgICAg ICBFdmljdCBTa2lwczogICAgICAgICAgICAgICAgICAgICAgICAgICAgMA0K DQpBUkMgU2l6ZTogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgOC4x OCUgICAyMS4wNyAgIE1pQg0KICAgICAgICBUYXJnZXQgU2l6ZTogKEFkYXB0 aXZlKSAgICAgICAgIDEwMC4wMCUgMjU3LjUwICBNaUINCiAgICAgICAgTWlu IFNpemUgKEhhcmQgTGltaXQpOiAgICAgICAgICAxMi41MCUgIDMyLjE5ICAg TWlCDQogICAgICAgIE1heCBTaXplIChIaWdoIFdhdGVyKTogICAgICAgICAg ODoxICAgICAyNTcuNTAgIE1pQg0KDQpBUkMgU2l6ZSBCcmVha2Rvd246DQog ICAgICAgIFJlY2VudGx5IFVzZWQgQ2FjaGUgU2l6ZTogICAgICAgNTAuMDAl ICAxMjguNzUgIE1pQg0KICAgICAgICBGcmVxdWVudGx5IFVzZWQgQ2FjaGUg U2l6ZTogICAgIDUwLjAwJSAgMTI4Ljc1ICBNaUINCg0KQVJDIEhhc2ggQnJl YWtkb3duOg0KICAgICAgICBFbGVtZW50cyBNYXg6ICAgICAgICAgICAgICAg ICAgICAgICAgICAgNTgwDQogICAgICAgIEVsZW1lbnRzIEN1cnJlbnQ6ICAg ICAgICAgICAgICAgMTAwLjAwJSA1ODANCiAgICAgICAgQ29sbGlzaW9uczog ICAgICAgICAgICAgICAgICAgICAgICAgICAgIDI0DQogICAgICAgIENoYWlu IE1heDogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAxDQogICAgICAg IENoYWluczogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAzDQoN Ci0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQ0KDQpBUkMgRWZmaWNpZW5j eTogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICA1LjQ2aw0KICAg ICAgICBDYWNoZSBIaXQgUmF0aW86ICAgICAgICAgICAgICAgIDkyLjk0JSAg NS4wN2sNCiAgICAgICAgQ2FjaGUgTWlzcyBSYXRpbzogICAgICAgICAgICAg ICA3LjA2JSAgIDM4NQ0KICAgICAgICBBY3R1YWwgSGl0IFJhdGlvOiAgICAg ICAgICAgICAgIDkyLjg3JSAgNS4wN2sNCg0KICAgICAgICBEYXRhIERlbWFu ZCBFZmZpY2llbmN5OiAgICAgICAgIDg1Ljg2JSAgMS42MWsNCiAgICAgICAg RGF0YSBQcmVmZXRjaCBFZmZpY2llbmN5OiAgICAgICAwLjAwJSAgIDMNCg0K ICAgICAgICBDQUNIRSBISVRTIEJZIENBQ0hFIExJU1Q6DQogICAgICAgICAg QW5vbnltb3VzbHkgVXNlZDogICAgICAgICAgICAgMC4wOCUgICA0DQogICAg ICAgICAgTW9zdCBSZWNlbnRseSBVc2VkOiAgICAgICAgICAgNDIuNzklICAy LjE3aw0KICAgICAgICAgIE1vc3QgRnJlcXVlbnRseSBVc2VkOiAgICAgICAg IDU3LjEzJSAgMi45MGsNCiAgICAgICAgICBNb3N0IFJlY2VudGx5IFVzZWQg R2hvc3Q6ICAgICAwLjAwJSAgIDANCiAgICAgICAgICBNb3N0IEZyZXF1ZW50 bHkgVXNlZCBHaG9zdDogICAwLjAwJSAgIDANCg0KICAgICAgICBDQUNIRSBI SVRTIEJZIERBVEEgVFlQRToNCiAgICAgICAgICBEZW1hbmQgRGF0YTogICAg ICAgICAgICAgICAgICAyNy4yOSUgIDEuMzhrDQogICAgICAgICAgUHJlZmV0 Y2ggRGF0YTogICAgICAgICAgICAgICAgMC4wMCUgICAwDQogICAgICAgICAg RGVtYW5kIE1ldGFkYXRhOiAgICAgICAgICAgICAgNzIuNjMlICAzLjY4aw0K ICAgICAgICAgIFByZWZldGNoIE1ldGFkYXRhOiAgICAgICAgICAgIDAuMDgl ICAgNA0KDQogICAgICAgIENBQ0hFIE1JU1NFUyBCWSBEQVRBIFRZUEU6DQog ICAgICAgICAgRGVtYW5kIERhdGE6ICAgICAgICAgICAgICAgICAgNTkuMjIl ICAyMjgNCiAgICAgICAgICBQcmVmZXRjaCBEYXRhOiAgICAgICAgICAgICAg ICAwLjc4JSAgIDMNCiAgICAgICAgICBEZW1hbmQgTWV0YWRhdGE6ICAgICAg ICAgICAgICAzMS45NSUgIDEyMw0KICAgICAgICAgIFByZWZldGNoIE1ldGFk YXRhOiAgICAgICAgICAgIDguMDUlICAgMzENCg0KLS0tLS0tLS0tLS0tLS0t LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t LS0tLS0tLS0tLS0tDQoNCkwyQVJDIGlzIGRpc2FibGVkDQoNCi0tLS0tLS0t LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t LS0tLS0tLS0tLS0tLS0tLS0tLQ0KDQoNCi0tLS0tLS0tLS0tLS0tLS0tLS0t LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t LS0tLS0tLQ0KDQpWREVWIGNhY2hlIGlzIGRpc2FibGVkDQoNCi0tLS0tLS0t LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t LS0tLS0tLS0tLS0tLS0tLS0tLQ0KDQpaRlMgVHVuYWJsZXMgKHN5c2N0bCk6 DQogICAgICAgIGtlcm4ubWF4dXNlcnMgICAgICAgICAgICAgICAgICAgICAg ICAgICAzODQNCiAgICAgICAgdm0ua21lbV9zaXplICAgICAgICAgICAgICAg ICAgICAgICAgICAgIDQzMjAxMzMxMg0KICAgICAgICB2bS5rbWVtX3NpemVf c2NhbGUgICAgICAgICAgICAgICAgICAgICAgMw0KICAgICAgICB2bS5rbWVt X3NpemVfbWluICAgICAgICAgICAgICAgICAgICAgICAgMA0KICAgICAgICB2 bS5rbWVtX3NpemVfbWF4ICAgICAgICAgICAgICAgICAgICAgICAgNDMyMDEz MzEyDQogICAgICAgIHZmcy56ZnMubDJjX29ubHlfc2l6ZSAgICAgICAgICAg ICAgICAgICAwDQogICAgICAgIHZmcy56ZnMubWZ1X2dob3N0X2RhdGFfbHNp emUgICAgICAgICAgICAxMTAwMjg4DQogICAgICAgIHZmcy56ZnMubWZ1X2do b3N0X21ldGFkYXRhX2xzaXplICAgICAgICAxNTE1NTINCiAgICAgICAgdmZz Lnpmcy5tZnVfZ2hvc3Rfc2l6ZSAgICAgICAgICAgICAgICAgIDEyNTE4NDAN CiAgICAgICAgdmZzLnpmcy5tZnVfZGF0YV9sc2l6ZSAgICAgICAgICAgICAg ICAgIDI0ODMyMDANCiAgICAgICAgdmZzLnpmcy5tZnVfbWV0YWRhdGFfbHNp emUgICAgICAgICAgICAgIDExMzY2NA0KICAgICAgICB2ZnMuemZzLm1mdV9z aXplICAgICAgICAgICAgICAgICAgICAgICAgMjU5Njg2NA0KICAgICAgICB2 ZnMuemZzLm1ydV9naG9zdF9kYXRhX2xzaXplICAgICAgICAgICAgMTMxMDcy DQogICAgICAgIHZmcy56ZnMubXJ1X2dob3N0X21ldGFkYXRhX2xzaXplICAg ICAgICAzNzI3MzYNCiAgICAgICAgdmZzLnpmcy5tcnVfZ2hvc3Rfc2l6ZSAg ICAgICAgICAgICAgICAgIDUwMzgwOA0KICAgICAgICB2ZnMuemZzLm1ydV9k YXRhX2xzaXplICAgICAgICAgICAgICAgICAgMTY2NTY4OTYNCiAgICAgICAg dmZzLnpmcy5tcnVfbWV0YWRhdGFfbHNpemUgICAgICAgICAgICAgIDc3MzYz Mg0KICAgICAgICB2ZnMuemZzLm1ydV9zaXplICAgICAgICAgICAgICAgICAg ICAgICAgMTg1MTgwMTYNCiAgICAgICAgdmZzLnpmcy5hbm9uX2RhdGFfbHNp emUgICAgICAgICAgICAgICAgIDANCiAgICAgICAgdmZzLnpmcy5hbm9uX21l dGFkYXRhX2xzaXplICAgICAgICAgICAgIDANCiAgICAgICAgdmZzLnpmcy5h bm9uX3NpemUgICAgICAgICAgICAgICAgICAgICAgIDE2Mzg0DQogICAgICAg IHZmcy56ZnMubDJhcmNfbm9ydyAgICAgICAgICAgICAgICAgICAgICAxDQog ICAgICAgIHZmcy56ZnMubDJhcmNfZmVlZF9hZ2FpbiAgICAgICAgICAgICAg ICAxDQogICAgICAgIHZmcy56ZnMubDJhcmNfbm9wcmVmZXRjaCAgICAgICAg ICAgICAgICAxDQogICAgICAgIHZmcy56ZnMubDJhcmNfZmVlZF9taW5fbXMg ICAgICAgICAgICAgICAyMDANCiAgICAgICAgdmZzLnpmcy5sMmFyY19mZWVk X3NlY3MgICAgICAgICAgICAgICAgIDENCiAgICAgICAgdmZzLnpmcy5sMmFy Y19oZWFkcm9vbSAgICAgICAgICAgICAgICAgIDINCiAgICAgICAgdmZzLnpm cy5sMmFyY193cml0ZV9ib29zdCAgICAgICAgICAgICAgIDgzODg2MDgNCiAg ICAgICAgdmZzLnpmcy5sMmFyY193cml0ZV9tYXggICAgICAgICAgICAgICAg IDgzODg2MDgNCiAgICAgICAgdmZzLnpmcy5hcmNfbWV0YV9saW1pdCAgICAg ICAgICAgICAgICAgIDY3NTAyMDgwDQogICAgICAgIHZmcy56ZnMuYXJjX21l dGFfdXNlZCAgICAgICAgICAgICAgICAgICAyOTU2ODkyDQogICAgICAgIHZm cy56ZnMuYXJjX21pbiAgICAgICAgICAgICAgICAgICAgICAgICAzMzc1MTA0 MA0KICAgICAgICB2ZnMuemZzLmFyY19tYXggICAgICAgICAgICAgICAgICAg ICAgICAgMjcwMDA4MzIwDQogICAgICAgIHZmcy56ZnMuZGVkdXAucHJlZmV0 Y2ggICAgICAgICAgICAgICAgICAxDQogICAgICAgIHZmcy56ZnMubWRjb21w X2Rpc2FibGUgICAgICAgICAgICAgICAgICAwDQogICAgICAgIHZmcy56ZnMu d3JpdGVfbGltaXRfb3ZlcnJpZGUgICAgICAgICAgICAwDQogICAgICAgIHZm cy56ZnMud3JpdGVfbGltaXRfaW5mbGF0ZWQgICAgICAgICAgICA2MzkwMjUx NTIwDQogICAgICAgIHZmcy56ZnMud3JpdGVfbGltaXRfbWF4ICAgICAgICAg ICAgICAgICAyNjYyNjA0ODANCiAgICAgICAgdmZzLnpmcy53cml0ZV9saW1p dF9taW4gICAgICAgICAgICAgICAgIDMzNTU0NDMyDQogICAgICAgIHZmcy56 ZnMud3JpdGVfbGltaXRfc2hpZnQgICAgICAgICAgICAgICAzDQogICAgICAg IHZmcy56ZnMubm9fd3JpdGVfdGhyb3R0bGUgICAgICAgICAgICAgICAwDQog ICAgICAgIHZmcy56ZnMuemZldGNoLmFycmF5X3JkX3N6ICAgICAgICAgICAg ICAxMDQ4NTc2DQogICAgICAgIHZmcy56ZnMuemZldGNoLmJsb2NrX2NhcCAg ICAgICAgICAgICAgICAyNTYNCiAgICAgICAgdmZzLnpmcy56ZmV0Y2gubWlu X3NlY19yZWFwICAgICAgICAgICAgIDINCiAgICAgICAgdmZzLnpmcy56ZmV0 Y2gubWF4X3N0cmVhbXMgICAgICAgICAgICAgIDgNCiAgICAgICAgdmZzLnpm cy5wcmVmZXRjaF9kaXNhYmxlICAgICAgICAgICAgICAgIDENCiAgICAgICAg dmZzLnpmcy5tZ19hbGxvY19mYWlsdXJlcyAgICAgICAgICAgICAgIDgNCiAg ICAgICAgdmZzLnpmcy5jaGVja19ob3N0aWQgICAgICAgICAgICAgICAgICAg IDENCiAgICAgICAgdmZzLnpmcy5yZWNvdmVyICAgICAgICAgICAgICAgICAg ICAgICAgIDANCiAgICAgICAgdmZzLnpmcy50eGcuc3luY3RpbWVfbXMgICAg ICAgICAgICAgICAgIDEwMDANCiAgICAgICAgdmZzLnpmcy50eGcudGltZW91 dCAgICAgICAgICAgICAgICAgICAgIDUNCiAgICAgICAgdmZzLnpmcy52ZGV2 LmNhY2hlLmJzaGlmdCAgICAgICAgICAgICAgIDE2DQogICAgICAgIHZmcy56 ZnMudmRldi5jYWNoZS5zaXplICAgICAgICAgICAgICAgICAwDQogICAgICAg IHZmcy56ZnMudmRldi5jYWNoZS5tYXggICAgICAgICAgICAgICAgICAxNjM4 NA0KICAgICAgICB2ZnMuemZzLnZkZXYud3JpdGVfZ2FwX2xpbWl0ICAgICAg ICAgICAgNDA5Ng0KICAgICAgICB2ZnMuemZzLnZkZXYucmVhZF9nYXBfbGlt aXQgICAgICAgICAgICAgMzI3NjgNCiAgICAgICAgdmZzLnpmcy52ZGV2LmFn Z3JlZ2F0aW9uX2xpbWl0ICAgICAgICAgIDEzMTA3Mg0KICAgICAgICB2ZnMu emZzLnZkZXYucmFtcF9yYXRlICAgICAgICAgICAgICAgICAgMg0KICAgICAg ICB2ZnMuemZzLnZkZXYudGltZV9zaGlmdCAgICAgICAgICAgICAgICAgNg0K ICAgICAgICB2ZnMuemZzLnZkZXYubWluX3BlbmRpbmcgICAgICAgICAgICAg ICAgNA0KICAgICAgICB2ZnMuemZzLnZkZXYubWF4X3BlbmRpbmcgICAgICAg ICAgICAgICAgMTANCiAgICAgICAgdmZzLnpmcy52ZGV2LmJpb19mbHVzaF9k aXNhYmxlICAgICAgICAgIDANCiAgICAgICAgdmZzLnpmcy5jYWNoZV9mbHVz aF9kaXNhYmxlICAgICAgICAgICAgIDANCiAgICAgICAgdmZzLnpmcy56aWxf cmVwbGF5X2Rpc2FibGUgICAgICAgICAgICAgIDANCiAgICAgICAgdmZzLnpm cy56aW8udXNlX3VtYSAgICAgICAgICAgICAgICAgICAgIDANCiAgICAgICAg dmZzLnpmcy5zbmFwc2hvdF9saXN0X3ByZWZldGNoICAgICAgICAgIDANCiAg ICAgICAgdmZzLnpmcy52ZXJzaW9uLnpwbCAgICAgICAgICAgICAgICAgICAg IDUNCiAgICAgICAgdmZzLnpmcy52ZXJzaW9uLnNwYSAgICAgICAgICAgICAg ICAgICAgIDI4DQogICAgICAgIHZmcy56ZnMudmVyc2lvbi5hY2wgICAgICAg ICAgICAgICAgICAgICAxDQogICAgICAgIHZmcy56ZnMuZGVidWcgICAgICAg ICAgICAgICAgICAgICAgICAgICAwDQogICAgICAgIHZmcy56ZnMuc3VwZXJf b3duZXIgICAgICAgICAgICAgICAgICAgICAwDQoNCi0tLS0tLS0tLS0tLS0t LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t LS0tLS0tLS0tLS0tLQ0K ---1664858152-1374130778-1357915426=:16602-- From owner-freebsd-fs@FreeBSD.ORG Fri Jan 11 15:01:55 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 8077B7E9 for ; Fri, 11 Jan 2013 15:01:55 +0000 (UTC) (envelope-from amvandemore@gmail.com) Received: from mail-qa0-f45.google.com (mail-qa0-f45.google.com [209.85.216.45]) by mx1.freebsd.org (Postfix) with ESMTP id 49A4BF3F for ; Fri, 11 Jan 2013 15:01:55 +0000 (UTC) Received: by mail-qa0-f45.google.com with SMTP id j15so2625006qaq.18 for ; Fri, 11 Jan 2013 07:01:48 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=geMdmdeDJz1BVdTbo36ZePM+ywUfpcv2FwAonzPcPJA=; b=dA8mDG+pvyF41GSopqB+7I/aVBKHQ8hk6Ecfl3HzGbmSiem8cPecQ5X/9t81RaiG4u xzmj4x7yrTAe7ilwTZBNRTfYx4Nfufp2/kyW/RHRBdkhLniPQNoj9aY2AKC3gvQ1imHL ECW+jGkLEnpkQ2KgAt04tI6d/8shYNsHO//Yzx25vHaIyx33+XKF1HnVO2EQi18zEY4C WNQ5LncBxXSaOlQhKN+hhoEJPLIF5tmYu1oEHkvze4lNu+BKonBDERZVTvOuhy3JtOXu 4rMe1fGS9zANyB+ISmfeCwstaYbD211GsnpZJQ48xLI85rO7CgGazN6UOr+lF1I80LTt FK8Q== MIME-Version: 1.0 Received: by 10.49.72.136 with SMTP id d8mr72108441qev.62.1357916138771; Fri, 11 Jan 2013 06:55:38 -0800 (PST) Received: by 10.49.128.168 with HTTP; Fri, 11 Jan 2013 06:55:38 -0800 (PST) In-Reply-To: <1357915426.16602.YahooMailClassic@web190801.mail.sg3.yahoo.com> References: <1357915426.16602.YahooMailClassic@web190801.mail.sg3.yahoo.com> Date: Fri, 11 Jan 2013 08:55:38 -0600 Message-ID: Subject: Re: ZFS sub-optimal performance with default setting From: Adam Vande More To: Patrick Dung Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.14 Cc: Tom Evans , freebsd-fs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 11 Jan 2013 15:01:55 -0000 On Fri, Jan 11, 2013 at 8:43 AM, Patrick Dung wrote: > Hi Tom, > > To make it simple, I have setup in this setting: > > Host: Intel dual core 3Ghz CPU, RHEL 6.3 x64, RAM 8GB > > Freebsd 9.1 -i386 VM with these setting: > CPU: One > Memory: 2GB > 5GB for OS (da1) > 5GB for ZFS (da2), no separate ZIL > > Installed software: > Postgresql 9.2.2 (compile from ports) /usr/local/pgsql is a ZFS dataset > OTRS 3.1.6 (compile from ports) > Apache 2 install from packages > > zfs/postgresql/otrs/apache is in default setting, except I have turned off > atime in ZFS. I might make the argument ZFS isn't necessary or all that useful in this setup while adding quite a bit of overhead. About all you get over a UFS version is cheap snapshotting and even UFS can snapshot 50 GB with relatively low overhead. Single disk ZFS systems do also offer things like integrity checking, but UFS would still work fine for your use case. Additionally, I would still be worried about stability under load on a low mem 32 bit install. -- Adam Vande More From owner-freebsd-fs@FreeBSD.ORG Fri Jan 11 16:45:55 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id D05614DD for ; Fri, 11 Jan 2013 16:45:55 +0000 (UTC) (envelope-from tevans.uk@googlemail.com) Received: from mail-qa0-f50.google.com (mail-qa0-f50.google.com [209.85.216.50]) by mx1.freebsd.org (Postfix) with ESMTP id 7D24167C for ; Fri, 11 Jan 2013 16:45:55 +0000 (UTC) Received: by mail-qa0-f50.google.com with SMTP id cr7so1635642qab.2 for ; Fri, 11 Jan 2013 08:45:49 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=0zbNhYGmEjIOMAgD/W0O9f2BsCyy2QOMeZzmK8Sv+JI=; b=SeFxlhfo5V1zs+kU36ZfKCGPRJCqlr42Mk7nSDiZBDAryvvSPbyUV+VrE8uGdx5g6Z 4LlTi6zN9yZJN/ret1IDq+QYLFJaf1fIAN8gaqfX/dCwKZE+f+LZ+GTcC0McoL6nJU1P PGrPQgHClpQA0SmNbbrQiTs/YG9FyITL1Wpo6PAREhmeGjaKBWg29lydmwfic45F99Qh JU+lXn56wyhTs6oyhdGAaz0ikBMwUs/yrfjadcd3svKqgX2CU0wAzrC98Fs9u1VZLS1v w67SS6Dk10+9kBiHVQuakFgY0rV46wOcaYjYBLUJJ7eqd44j0ID64EnmXcEgv+zMJYr+ YOZA== MIME-Version: 1.0 Received: by 10.229.78.97 with SMTP id j33mr14872048qck.107.1357922749600; Fri, 11 Jan 2013 08:45:49 -0800 (PST) Received: by 10.49.48.168 with HTTP; Fri, 11 Jan 2013 08:45:49 -0800 (PST) In-Reply-To: <1357915426.16602.YahooMailClassic@web190801.mail.sg3.yahoo.com> References: <1357915426.16602.YahooMailClassic@web190801.mail.sg3.yahoo.com> Date: Fri, 11 Jan 2013 16:45:49 +0000 Message-ID: Subject: Re: ZFS sub-optimal performance with default setting From: Tom Evans To: Patrick Dung Content-Type: text/plain; charset=UTF-8 Cc: freebsd-fs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 11 Jan 2013 16:45:55 -0000 On Fri, Jan 11, 2013 at 2:43 PM, Patrick Dung wrote: > > Hi Tom, > > To make it simple, I have setup in this setting: > > Host: Intel dual core 3Ghz CPU, RHEL 6.3 x64, RAM 8GB > > Freebsd 9.1 -i386 VM with these setting: > CPU: One > Memory: 2GB > 5GB for OS (da1) > 5GB for ZFS (da2), no separate ZIL > This is ... tight! I concur with Adam, ZFS may be of little use in this scenario. It is not so much lack of ZIL, but your pool is made up of part of a virtualized shared resource. If you do not have a ZIL, then synch writes are constrained by the speed of your slowest/only disk, and I think you are at its limit. > Installed software: > Postgresql 9.2.2 (compile from ports) /usr/local/pgsql is a ZFS dataset > OTRS 3.1.6 (compile from ports) > Apache 2 install from packages > > zfs/postgresql/otrs/apache is in default setting, except I have turned off atime in ZFS. > > I have run OTRS benchmark twice, below is the result: > Insert Time: 10000 12 s :-( Should not take more than 5's on an average system. > Update Time: 10000 7 s Ok > > Select Time: 10000 3 s :-) Looks fine! > Delete Time: 10000 2 s :-) Looks fine! > > Thanks, > Patrick > Does performance significantly increase if you use UFS instead? Cheers Tom From owner-freebsd-fs@FreeBSD.ORG Fri Jan 11 20:39:50 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 145878BA for ; Fri, 11 Jan 2013 20:39:50 +0000 (UTC) (envelope-from artemb@gmail.com) Received: from mail-vb0-f50.google.com (mail-vb0-f50.google.com [209.85.212.50]) by mx1.freebsd.org (Postfix) with ESMTP id B0CC3F5D for ; Fri, 11 Jan 2013 20:39:49 +0000 (UTC) Received: by mail-vb0-f50.google.com with SMTP id ft2so1828565vbb.23 for ; Fri, 11 Jan 2013 12:39:43 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:sender:in-reply-to:references:date :x-google-sender-auth:message-id:subject:from:to:cc:content-type; bh=YTp8OpFe66CMNyTOIh1OhoVCn4e+9sdZQQUnBNOX2JM=; b=U7s/M7ffnmiBTFmeK5B8jK2MFgOvvAjIXtAVRDKhvw4/tmiope8TWrb/lkc3C6gJaU Pl6s5WEtpeXBuW36dwC/ITJN6AhDvE08k8jgEdHry5pcpsKyR3G2xtflC/kv/yGt9DcN Uff+dzzrGeUVErNpPbONP5KEJhCHXvp+gpo7mGvrxH11hQUiKmWKJdB9aKbXwjzlmeY2 z1+mKvWy0hbyf1BzruzKE4vtGlAM1zeeQeNd03GZJLLFWJDjLn/cW2nZAIpTwttebAQc bHQStJKYhk6JbNQvqn1X+OpQcTpNU8VEWXRRbJE6RePY3m97ZqpiVPq7Xe3h4Kj+kHhw q9UA== MIME-Version: 1.0 Received: by 10.220.153.201 with SMTP id l9mr94404963vcw.33.1357936783342; Fri, 11 Jan 2013 12:39:43 -0800 (PST) Sender: artemb@gmail.com Received: by 10.220.122.196 with HTTP; Fri, 11 Jan 2013 12:39:43 -0800 (PST) In-Reply-To: <20130111073417.GA95100@mid.pc5.i.0x5.de> References: <20130108174225.GA17260@mid.pc5.i.0x5.de> <20130109162613.GA34276@mid.pc5.i.0x5.de> <20130110193949.GA10023@mid.pc5.i.0x5.de> <20130111073417.GA95100@mid.pc5.i.0x5.de> Date: Fri, 11 Jan 2013 12:39:43 -0800 X-Google-Sender-Auth: Os9MM8m1yqNVQHWmrImVZq9nbp8 Message-ID: Subject: Re: slowdown of zfs (tx->tx) From: Artem Belevich To: Nicolas Rachinsky Content-Type: text/plain; charset=ISO-8859-1 Cc: freebsd-fs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 11 Jan 2013 20:39:50 -0000 On Thu, Jan 10, 2013 at 11:34 PM, Nicolas Rachinsky wrote: > * Nicolas Rachinsky [2013-01-10 20:39 +0100]: >> after replacing one of the controllers, all problems seem to have >> disappeared. Thank you very much for your advice! > > Now the problem is back. > > After changing the controller, there were no more timeouts logged. > > No UDMA_CRC_Error_Count changed. > Is there anything special about ada8? It does seem to have noticeably higher service time compared to other disks. Cound you do gstat with 1-second interval. Some of the 5-second samples show that ada8 is the bottleneck -- it has its request queue full (L(q)=10) when all other drives were done with their jobs. And that's a 5-sec average. Its write service time also seems to be a lot higher than for other drives. Does the drive have its write cache disabled by any chance? That could explain why it takes so much longer to service writes. Can you remove ada8 and see if your performance go back to normal? --Artem From owner-freebsd-fs@FreeBSD.ORG Sat Jan 12 05:13:35 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id B284D868 for ; Sat, 12 Jan 2013 05:13:35 +0000 (UTC) (envelope-from tjg@soe.ucsc.edu) Received: from mail-vb0-f48.google.com (mail-vb0-f48.google.com [209.85.212.48]) by mx1.freebsd.org (Postfix) with ESMTP id 4A66B3D4 for ; Sat, 12 Jan 2013 05:13:34 +0000 (UTC) Received: by mail-vb0-f48.google.com with SMTP id fc21so2067696vbb.7 for ; Fri, 11 Jan 2013 21:13:34 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ucsc.edu; s=ucsc-google; h=mime-version:date:message-id:subject:from:to:content-type; bh=AVZMHrt6NmACZqyPCHAoOGjM3pCgJBpCY9u4katEFAE=; b=YRCIgOGAVAC4Ezt195RDPCVF00OMhWIJBAr198g77QDhCjHeVOshAGwOlvTwaBUibY MsDvCXJ2dJdEquKSePmtu+tktB/wmbwTsk2IQBntz+IHQrG/XW6KJ83a1Qngo3/J0wS0 DhSyzxWdozFzjlBYuXrJTAaeRyGT8gnGCXW3g= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20120113; h=mime-version:date:message-id:subject:from:to:content-type :x-gm-message-state; bh=AVZMHrt6NmACZqyPCHAoOGjM3pCgJBpCY9u4katEFAE=; b=pIoW0GUEO2DOXvU00LMwpm6mdSgGOkw+qMf7gGjKHDD1uBc4MDuc0TGA3HAZqMFOeE 4pnd0nkLY8//yYEDvaSOIcIGE1RrNk73/g6St3k6tQkUklY7bQ6C7tOLB/NIv2em2CNt XavP37EnBKdAN6g6FC2ACRqg+XKaMZTACytg7WROyIeDQ0dfQ/gFlO3wIV5WnDUl4fo6 65qkrBM44NbMmciGcd+oUOdUEh/5V5IlfYh5e0t2r1IW1fmUjMzCwLeoW4GnczNfNJJ4 HFHn123ijG6wyoFwlNj6JtVXAYFBtcXNqWzmyKQqEF9CKimwo1b2Fy99QR8WIX5jukKe 6gAA== MIME-Version: 1.0 Received: by 10.220.153.80 with SMTP id j16mr94613395vcw.21.1357967614103; Fri, 11 Jan 2013 21:13:34 -0800 (PST) Received: by 10.59.12.231 with HTTP; Fri, 11 Jan 2013 21:13:33 -0800 (PST) Date: Fri, 11 Jan 2013 21:13:33 -0800 Message-ID: Subject: Using glabel From: Tim Gustafson To: FreeBSD Filesystems Content-Type: text/plain; charset=UTF-8 X-Gm-Message-State: ALoCoQkDUkHaBSWuWvMg4mjATwswigII1RhxVEfNV0+xfCoQXE7ff+yMNTWM8g9qa2wEkL1SNYGH X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 12 Jan 2013 05:13:35 -0000 Hi, We have a few servers with 45 disks each. It gets a bit cumbersome at the moment to map a failed drive (reported via "zpool status") to a physical device. The physical devices are labeled with serial numbers, and ZFS reports device nodes. I was wondering if I could use "glabel" to label each of the disks we have with their serial number to make identification easier, and then reconfigure the zpool to import the drives by gptid, rather than device node. So, my thinking was along the lines of: - obtain the device serial numbers, probably using smartctl - zpool export tank - glabel -v SERIAL-NUMBER-0 /dev/ada0 - glabel -v SERIAL-NUMBER-1 /dev/ada1 - glabel -v SERIAL-NUMBER-2 /dev/ada2 - snip 43 more glabel lines - zpool import tank -d /dev/gptid Is there any reason that this is a bad idea? Do I have the command sequence correct? -- Tim Gustafson tjg@soe.ucsc.edu 831-459-5354 Baskin Engineering, Room 313A From owner-freebsd-fs@FreeBSD.ORG Sat Jan 12 08:13:05 2013 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 3448B178; Sat, 12 Jan 2013 08:13:05 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) by mx1.freebsd.org (Postfix) with ESMTP id E80E5A5D; Sat, 12 Jan 2013 08:13:04 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.6/8.14.6) with ESMTP id r0C8D48h005787; Sat, 12 Jan 2013 08:13:04 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.6/8.14.6/Submit) id r0C8D4Rq005783; Sat, 12 Jan 2013 08:13:04 GMT (envelope-from linimon) Date: Sat, 12 Jan 2013 08:13:04 GMT Message-Id: <201301120813.r0C8D4Rq005783@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Subject: Re: kern/175179: [zfs] ZFS may attach wrong device on move X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 12 Jan 2013 08:13:05 -0000 Old Synopsis: ZFS may attach wrong device on move New Synopsis: [zfs] ZFS may attach wrong device on move Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Sat Jan 12 08:12:51 UTC 2013 Responsible-Changed-Why: Over to maintainer(s). http://www.freebsd.org/cgi/query-pr.cgi?pr=175179 From owner-freebsd-fs@FreeBSD.ORG Sat Jan 12 10:37:12 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id E98D3A91 for ; Sat, 12 Jan 2013 10:37:12 +0000 (UTC) (envelope-from freebsd-listen@fabiankeil.de) Received: from smtprelay05.ispgateway.de (smtprelay05.ispgateway.de [80.67.31.97]) by mx1.freebsd.org (Postfix) with ESMTP id 74F1DF63 for ; Sat, 12 Jan 2013 10:37:12 +0000 (UTC) Received: from [84.44.211.82] (helo=fabiankeil.de) by smtprelay05.ispgateway.de with esmtpsa (SSLv3:AES128-SHA:128) (Exim 4.68) (envelope-from ) id 1TtyS1-0002ZF-O1; Sat, 12 Jan 2013 11:36:33 +0100 Date: Sat, 12 Jan 2013 11:36:22 +0100 From: Fabian Keil To: Tim Gustafson Subject: Re: Using glabel Message-ID: <20130112113622.0dbd6bc2@fabiankeil.de> In-Reply-To: References: Mime-Version: 1.0 Content-Type: multipart/signed; micalg=PGP-SHA1; boundary="Sig_//mYt88XJ07moyZVyLAUmetn"; protocol="application/pgp-signature" X-Df-Sender: Nzc1MDY3 Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 12 Jan 2013 10:37:13 -0000 --Sig_//mYt88XJ07moyZVyLAUmetn Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: quoted-printable Tim Gustafson wrote: > We have a few servers with 45 disks each. It gets a bit cumbersome at > the moment to map a failed drive (reported via "zpool status") to a > physical device. The physical devices are labeled with serial > numbers, and ZFS reports device nodes. I was wondering if I could use > "glabel" to label each of the disks we have with their serial number > to make identification easier, and then reconfigure the zpool to > import the drives by gptid, rather than device node. >=20 > So, my thinking was along the lines of: >=20 > - obtain the device serial numbers, probably using smartctl > - zpool export tank > - glabel -v SERIAL-NUMBER-0 /dev/ada0 > - glabel -v SERIAL-NUMBER-1 /dev/ada1 > - glabel -v SERIAL-NUMBER-2 /dev/ada2 The "label" action seems to be missing. > - snip 43 more glabel lines > - zpool import tank -d /dev/gptid For labels created with glabel this should be "-d /dev/label tank". /dev/gptid is for stuff created with gpart. > Is there any reason that this is a bad idea? Do I have the command > sequence correct? I'm using glabel for geli-encrypted backup pools to automate the import: http://www.fabiankeil.de/gehacktes/zogftw/ As it works for me, I don't think it's a bad idea in general, but note that glabel stores the label at the end of the device, slightly decreasing the space that is available for ZFS. In my tests ZFS never used the last sectors on a device (as far as I could tell) but I'm not sure if that's actually guaranteed. If the last sector on your disks is used by ZFS, creating a label with glabel on it would overwrite it and importing the pool using the label would additionally prevent ZFS from even accessing the sector. I believe you can test this by comparing the asize shown with zdb -l and comparing it with the size shown by diskinfo, but this relies on the asize count starting at the first sector and due to padding that might not be guaranteed either. Another problem that has been frequently reported is that importing the pool without specifying the /dev/label directory may let ZFS mix labeled devices and labels which renders the labeling somewhat pointless. Using geli prevents that of course, but using geli just to work around this is probably not something you want to do and also isn't an option for a "live migration". Fabian --Sig_//mYt88XJ07moyZVyLAUmetn Content-Type: application/pgp-signature; name=signature.asc Content-Disposition: attachment; filename=signature.asc -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.19 (FreeBSD) iEYEARECAAYFAlDxPLIACgkQBYqIVf93VJ3VXwCfVSeb1kikeualj2b+s7WAWnyG 8lwAoKzV5PRKAQ9hPkizuWrLxguvZ8z+ =BRyX -----END PGP SIGNATURE----- --Sig_//mYt88XJ07moyZVyLAUmetn-- From owner-freebsd-fs@FreeBSD.ORG Sat Jan 12 20:00:49 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 3F1B5130 for ; Sat, 12 Jan 2013 20:00:49 +0000 (UTC) (envelope-from freebsd@psconsult.nl) Received: from mx1.psconsult.nl (unknown [IPv6:2001:7b8:30f:e0::5059:ee8a]) by mx1.freebsd.org (Postfix) with ESMTP id DE7EA7AF for ; Sat, 12 Jan 2013 20:00:48 +0000 (UTC) Received: from mx1.psconsult.nl (mx1.hvnu.psconsult.nl [46.44.189.154]) by mx1.psconsult.nl (8.14.5/8.14.4) with ESMTP id r0CK0ffe078184 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO) for ; Sat, 12 Jan 2013 21:00:46 +0100 (CET) (envelope-from freebsd@psconsult.nl) Received: (from paul@localhost) by mx1.psconsult.nl (8.14.5/8.14.4/Submit) id r0CK0faH078183 for freebsd-fs@freebsd.org; Sat, 12 Jan 2013 21:00:41 +0100 (CET) (envelope-from freebsd@psconsult.nl) X-Authentication-Warning: mx1.psconsult.nl: paul set sender to freebsd@psconsult.nl using -f Date: Sat, 12 Jan 2013 21:00:41 +0100 From: Paul Schenkeveld To: freebsd-fs@freebsd.org Subject: Re: Using glabel Message-ID: <20130112200041.GA77338@psconsult.nl> References: MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.5.21 (2010-09-15) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 12 Jan 2013 20:00:49 -0000 On Fri, Jan 11, 2013 at 09:13:33PM -0800, Tim Gustafson wrote: > Hi, > > We have a few servers with 45 disks each. It gets a bit cumbersome at > the moment to map a failed drive (reported via "zpool status") to a > physical device. The physical devices are labeled with serial > numbers, and ZFS reports device nodes. I was wondering if I could use > "glabel" to label each of the disks we have with their serial number > to make identification easier, and then reconfigure the zpool to > import the drives by gptid, rather than device node. > > So, my thinking was along the lines of: > > - obtain the device serial numbers, probably using smartctl > - zpool export tank > - glabel -v SERIAL-NUMBER-0 /dev/ada0 > - glabel -v SERIAL-NUMBER-1 /dev/ada1 > - glabel -v SERIAL-NUMBER-2 /dev/ada2 > - snip 43 more glabel lines > - zpool import tank -d /dev/gptid > > Is there any reason that this is a bad idea? Do I have the command > sequence correct? Using labels instead of auto-enumerated names (ada0, ada1 ...) is generally a good idea I think and makes sysadmin life a bit easier. You can use glabel to label your disks or partition the disks with gpart (using the GPT scheme) and let gpt put a label on each (-l flag). In the past I always used glabel for that but since I had disks fail on me and found out that a replacement disk of the same capacity was actually several sectors smaller than the original, I changed to using gpart and allocate all but the few last MB of every disk so that if I have to replace a broken disk by one which is a bit smaller it won't give a problem. Labels created using the -l option of gpart appear in /dev/gpt instead of /dev/label but that should be no problem. ZFS finds the labelled partitions first, even without using the -d flag. HTH Paul Schenkeveld