From owner-freebsd-fs@FreeBSD.ORG Sun Sep 12 08:10:10 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id E29C31065672 for ; Sun, 12 Sep 2010 08:10:09 +0000 (UTC) (envelope-from TERRY@tmk.com) Received: from server.tmk.com (server.tmk.com [204.141.35.63]) by mx1.freebsd.org (Postfix) with ESMTP id ACCFC8FC18 for ; Sun, 12 Sep 2010 08:10:09 +0000 (UTC) Received: from tmk.com by tmk.com (PMDF V6.4 #37010) id <01NRSD8H77W00022AD@tmk.com> for freebsd-fs@freebsd.org; Sun, 12 Sep 2010 04:10:04 -0400 (EDT) Date: Sun, 12 Sep 2010 04:09:44 -0400 (EDT) From: Terry Kennedy To: freebsd-fs@freebsd.org Message-id: <01NRSE7GZJEC0022AD@tmk.com> MIME-version: 1.0 Content-type: TEXT/PLAIN; CHARSET=us-ascii Subject: Re: Weird Linux - FreeBSD/ZFS NFSv4 interoperability problem X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 12 Sep 2010 08:10:10 -0000 > A couple of people have reported very slow read rates for the NFSv4 > client (actually the experimental client, since they see it for > NFSv3 too). If you could easily do the following, using a FreeBSD8.1 > or newer client: > # mount -t nfs -o nfsv4 :/path > - cd to anywhere in the mount that has a 100Mbyte+ file > # dd if=<100Mbyte+ file> of=/dev/null bs=1m > > and then report what read rate you see along with the client's > machine-arch/# of cores/ram size/network driver used by the mount > > rick > ps: Btw, anyone else who can do this test, it would be appreciated. > If you aren't set up for NFSv4, you can do an NFSv3 mount using > the exp. client instead. > # mount -t newnfs -o nfsv3 :/path On 8-STABLE (both client and server). First test is NFSv3 on the standard client: (0:842) new-gate:~terry# mount -t nfs -o nfsv4 new-rz1:/data /foo [tcp6] new-rz1:/data: NFSPROC_NULL: RPC: Program/version mismatch; low version = 2, high version = 3 [tcp] new-rz1:/data: NFSPROC_NULL: RPC: Program/version mismatch; low version = 2, high version = 3 ^C (1:843) new-gate:~terry# mount -t nfs -o nfsv3 new-rz1:/data /foo [...] (0:869) new-gate:/foo/Backups/Suzanne VAIO# dd if=0cff3d7b_VOL.spf of=/dev/null bs=1m 6010+1 records in 6010+1 records out 6301945344 bytes transferred in 69.730064 secs (90376302 bytes/sec) Now, let's try the newnfs client (cache should have been primed by the first run, so we'd expect this to be faster): (0:879) new-gate:/tmp# umount /foo (0:880) new-gate:/tmp# mount -t newnfs -o nfsv3 new-rz1:/data /foo (0:881) new-gate:/tmp# cd /foo/Backups/Suzanne\ VAIO/ (0:882) new-gate:/foo/Backups/Suzanne VAIO# dd if=0cff3d7b_VOL.spf of=/dev/null bs=1m 6010+1 records in 6010+1 records out 6301945344 bytes transferred in 135.927222 secs (46362644 bytes/sec) Hmmm. Half the performance. 
The problem isn't the disk speed on the server: (0:19) new-rz1:/data/Backups/Suzanne VAIO# dd if=0cff3d7b_VOL.spf of=/dev/null bs=1m 6010+1 records in 6010+1 records out 6301945344 bytes transferred in 1.307266 secs (4820706236 bytes/sec) Client system (new-gate) specs: CPU: Intel(R) Xeon(R) CPU X5470 @ 3.33GHz (3333.35-MHz K8-class CPU) Origin = "GenuineIntel" Id = 0x1067a Family = 6 Model = 17 Stepping = 10 real memory = 8589934592 (8192 MB) avail memory = 8256380928 (7873 MB) FreeBSD/SMP: Multiprocessor System Detected: 4 CPUs FreeBSD/SMP: 1 package(s) x 4 core(s) bce0: mem 0xdc000000-0xddffffff irq 16 at device 0.0 on pci8 miibus0: on bce0 brgphy0: PHY 1 on miibus0 brgphy0: 10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, 1000baseT, 1000baseT-FDX, auto (0:878) new-gate:/tmp# ifconfig bce0 bce0: flags=8843 metric 0 mtu 9000 options=c01bb Server system (new-rz1) specs: CPU: Intel(R) Xeon(R) CPU E5520 @ 2.27GHz (2275.83-MHz K8-class CPU) Origin = "GenuineIntel" Id = 0x106a5 Family = 6 Model = 1a Stepping = 5 real memory = 51543801856 (49156 MB) avail memory = 49691684864 (47389 MB) FreeBSD/SMP: Multiprocessor System Detected: 16 CPUs FreeBSD/SMP: 2 package(s) x 4 core(s) x 2 SMT threads igb0: port 0xcf80-0xcf9f mem 0xface0000-0xfacfffff,0xfacc0000-0xfacdffff,0xfac9c000-0xfac9ffff irq 28 at device 0.0 on pci1 igb0: flags=8843 metric 0 mtu 9000 options=1bb Let me know if there's any other testing you'd like me to do. Terry Kennedy http://www.tmk.com terry@tmk.com New York, NY USA From owner-freebsd-fs@FreeBSD.ORG Sun Sep 12 10:35:57 2010 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id B5B721065695; Sun, 12 Sep 2010 10:35:57 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 7D4908FC17; Sun, 12 Sep 2010 10:35:57 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id o8CAZvGS041024; Sun, 12 Sep 2010 10:35:57 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id o8CAZvVC041019; Sun, 12 Sep 2010 10:35:57 GMT (envelope-from linimon) Date: Sun, 12 Sep 2010 10:35:57 GMT Message-Id: <201009121035.o8CAZvVC041019@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Cc: Subject: Re: kern/150336: [nfs] mountd/nfsd became confused; refused to reload nfs maps X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 12 Sep 2010 10:35:57 -0000 Old Synopsis: mountd/nfsd became confused; refused to reload nfs maps New Synopsis: [nfs] mountd/nfsd became confused; refused to reload nfs maps Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Sun Sep 12 10:35:30 UTC 2010 Responsible-Changed-Why: Over to maintainer(s). 
http://www.freebsd.org/cgi/query-pr.cgi?pr=150336 From owner-freebsd-fs@FreeBSD.ORG Sun Sep 12 15:28:10 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id B2326106566C for ; Sun, 12 Sep 2010 15:28:10 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) Received: from esa-jnhn.mail.uoguelph.ca (esa-jnhn.mail.uoguelph.ca [131.104.91.44]) by mx1.freebsd.org (Postfix) with ESMTP id 6875F8FC1F for ; Sun, 12 Sep 2010 15:28:10 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: ApwEACKOjEyDaFvO/2dsb2JhbACDGZ8frB2QZIEigyp0BIon X-IronPort-AV: E=Sophos;i="4.56,355,1280721600"; d="scan'208";a="93596562" Received: from erie.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.206]) by esa-jnhn-pri.mail.uoguelph.ca with ESMTP; 12 Sep 2010 11:28:08 -0400 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id AEA23B3F37; Sun, 12 Sep 2010 11:28:08 -0400 (EDT) Date: Sun, 12 Sep 2010 11:28:08 -0400 (EDT) From: Rick Macklem To: Terry Kennedy Message-ID: <954605288.782335.1284305288639.JavaMail.root@erie.cs.uoguelph.ca> In-Reply-To: <01NRSE7GZJEC0022AD@tmk.com> MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [24.65.230.102] X-Mailer: Zimbra 6.0.7_GA_2476.RHEL4 (ZimbraWebClient - SAF3 (Mac)/6.0.7_GA_2473.RHEL4_64) Cc: freebsd-fs@freebsd.org Subject: Re: Weird Linux - FreeBSD/ZFS NFSv4 interoperability problem X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 12 Sep 2010 15:28:10 -0000 > > A couple of people have reported very slow read rates for the NFSv4 > > client (actually the experimental client, since they see it for > > NFSv3 too). If you could easily do the following, using a FreeBSD8.1 > > or newer client: > > # mount -t nfs -o nfsv4 :/path > > - cd to anywhere in the mount that has a 100Mbyte+ file > > # dd if=<100Mbyte+ file> of=/dev/null bs=1m > > > > and then report what read rate you see along with the client's > > machine-arch/# of cores/ram size/network driver used by the mount > > > > rick > > ps: Btw, anyone else who can do this test, it would be appreciated. > > If you aren't set up for NFSv4, you can do an NFSv3 mount using > > the exp. client instead. > > # mount -t newnfs -o nfsv3 :/path > > On 8-STABLE (both client and server). First test is NFSv3 on the > standard > client: > > (0:842) new-gate:~terry# mount -t nfs -o nfsv4 new-rz1:/data /foo > [tcp6] new-rz1:/data: NFSPROC_NULL: RPC: Program/version mismatch; low > version = 2, high version = 3 > [tcp] new-rz1:/data: NFSPROC_NULL: RPC: Program/version mismatch; low > version = 2, high version = 3 > > ^C > (1:843) new-gate:~terry# mount -t nfs -o nfsv3 new-rz1:/data /foo > [...] > (0:869) new-gate:/foo/Backups/Suzanne VAIO# dd if=0cff3d7b_VOL.spf > of=/dev/null bs=1m > 6010+1 records in > 6010+1 records out > 6301945344 bytes transferred in 69.730064 secs (90376302 bytes/sec) > > Now, let's try the newnfs client (cache should have been primed by the > first run, so we'd expect this to be faster): > Just thought I'd mention that, since it is a different mount, the caches won't be primed, which is good, because that would mask differences. 
> (0:879) new-gate:/tmp# umount /foo > (0:880) new-gate:/tmp# mount -t newnfs -o nfsv3 new-rz1:/data /foo > (0:881) new-gate:/tmp# cd /foo/Backups/Suzanne\ VAIO/ > (0:882) new-gate:/foo/Backups/Suzanne VAIO# dd if=0cff3d7b_VOL.spf > of=/dev/null bs=1m > 6010+1 records in > 6010+1 records out > 6301945344 bytes transferred in 135.927222 secs (46362644 bytes/sec) > > Hmmm. Half the performance. The problem isn't the disk speed on the > server: > Ok, good. You aren't seeing what the two guys reported (they were really slow, at less than 2Mbytes/sec). If you would like to, you could try the following, since the two clients use different default r/w sizes. # mount -t newnfs -o nfsv3,rsize=32768,wsize=32768 new-rz1:/data /foo and see how it changes the read rate. I don't know why there is a factor of 2 difference (if it isn't the different r/w size), but it will probably get resolved as I bring the experimental client up to date. Thanks a lot for doing the test and giving me a data point, rick From owner-freebsd-fs@FreeBSD.ORG Sun Sep 12 16:20:31 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 3686C1065696 for ; Sun, 12 Sep 2010 16:20:31 +0000 (UTC) (envelope-from Gabor@Zahemszky.HU) Received: from relay03.digicable.hu (relay03.digicable.hu [92.249.128.185]) by mx1.freebsd.org (Postfix) with ESMTP id EEDA28FC1E for ; Sun, 12 Sep 2010 16:20:30 +0000 (UTC) Received: from [94.21.9.100] (helo=Picasso.Zahemszky.HU) by relay03.digicable.hu with esmtpa id 1OuotI-000469-Ds for ; Sun, 12 Sep 2010 17:54:52 +0200 Date: Sun, 12 Sep 2010 17:54:52 +0200 From: Zahemszky =?ISO-8859-2?Q?G=E1bor?= To: freebsd-fs@freebsd.org Message-ID: <20100912175452.1c488655@Picasso.Zahemszky.HU> Organization: Zahemszky Bt. X-Mailer: Claws Mail 3.7.6 (GTK+ 2.20.1; amd64-portbld-freebsd8.1) Mime-Version: 1.0 Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: 7bit X-Original: 94.21.9.100 Subject: problem with amd automounter X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 12 Sep 2010 16:20:31 -0000 Hi! I have a small NAS-box with tho Samba shares. I try to automount the two SMB-shares with amd and mount_smbfs. Here is my config: === $ smbclient -N -L XXX Anonymous login successful Domain=[MyNetWork] OS=[Unix] Server=[Samba 3.0.11] Sharename Type Comment --------- ---- ------- HDD_2_1_1 Disk For everyone ADMIN 1 Disk DISK 1 Disk For everyone IPC$ IPC IPC Service (ABC) ADMIN$ IPC IPC Service (ABC) Anonymous login successful Domain=[MyNetWork] OS=[Unix] Server=[Samba 3.0.11] Server Comment --------- ------- XXX ABC Workgroup Master --------- ------- MyNetWork XXX $ cat /etc/amd.net /defaults \ rhost:=XXX;\ fs:=${autodir}/${rhost}/${key}; disk1 \ type:=program;\ rfs:="DISK 1";\ mount:="/sbin/mount mount -t smbfs -o-N '\\\/\\\/guest@${rhost}/${rfs}' ${fs}";\ umount:="/sbin/umount umount ${fs}" disk2 \ type:=program;\ rfs:=HDD_2_1_1;\ mount:="/sbin/mount mount -r -t smbfs -o-N \\\/\\\/guest@${rhost}/${rfs} ${fs}";\ umount:="/sbin/umount umount ${fs}" === As you can see, the first share's name has a space in its name. I found nothing about it in FreeBSD's documentation, but from the web, I've found that I should use single quotes around the mount command's argument if it contains space character. 
I tried it with and without quotes, but it doesn't matter. I can reach the disk2 share, but I cannot reach disk1, I get error: $ amq / root "root" /net toplvl /etc/amd.net /net $ ls /net $ ls /net/disk2 bla foo bar baz $ ls /net/disk1 ls: /net/disk1: Unknown error: 2147483647 $ Some other info: FreeBSD 8.1-RELEASE amd64, GENERIC kernel Can anybody help me to write a correct amd config section for it? (No, I don't like to reconfigure Samba on the other end, I'd like to understand AMD.) Thanks, Zahy < Gabor at Zahemszky dot HU > PS: Why the old AMD reference manual amdref.* is missing from a full FreeBSD system? I had to download it from freebsd.org website. -- #!/bin/ksh # # See my GPG key at http://www.Zahemszky.HU # Z='21N16I25C25E30, 40M30E33E25T15U!'; IFS=' ABCDEFGHIJKLMNOPQRSTUVWXYZ '; set -- $Z;for i;{ [[ $i = ? ]]&&print $i&&break; [[ $i = ??? ]]&&j=$i&&i=${i%?}; typeset -i40 i=8#$i;print -n ${i#???}; [[ "$j" = ??? ]]&&print -n "${j#??} "&&j=;typeset +i i;}; IFS=' 0123456789 ';set -- $Z;for i;{ [[ $i = , ]]&&i=2; [[ $i = ?? ]]||typeset -l i;j="$j $i";typeset +l i;};print "$j" From owner-freebsd-fs@FreeBSD.ORG Sun Sep 12 17:02:42 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 2C186106566C; Sun, 12 Sep 2010 17:02:42 +0000 (UTC) (envelope-from prvs=18715e5890=killing@multiplay.co.uk) Received: from mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) by mx1.freebsd.org (Postfix) with ESMTP id 8F6098FC15; Sun, 12 Sep 2010 17:02:41 +0000 (UTC) X-MDAV-Processed: mail1.multiplay.co.uk, Sun, 12 Sep 2010 17:51:55 +0100 X-Spam-Processed: mail1.multiplay.co.uk, Sun, 12 Sep 2010 17:51:55 +0100 X-Spam-Checker-Version: SpamAssassin 3.2.5 (2008-06-10) on mail1.multiplay.co.uk X-Spam-Level: X-Spam-Status: No, score=-5.0 required=6.0 tests=USER_IN_WHITELIST shortcircuit=ham autolearn=disabled version=3.2.5 Received: from r2d2 by mail1.multiplay.co.uk (MDaemon PRO v10.0.4) with ESMTP id md50011222930.msg; Sun, 12 Sep 2010 17:51:54 +0100 X-Authenticated-Sender: Killing@multiplay.co.uk X-MDRemoteIP: 188.220.16.49 X-Return-Path: prvs=18715e5890=killing@multiplay.co.uk X-Envelope-From: killing@multiplay.co.uk Message-ID: From: "Steven Hartland" To: "Andriy Gapon" , "Kostik Belousov" References: <5DB6E7C798E44D33A05673F4B773405E@multiplay.co.uk><4C85E91E.1010602@icyb.net.ua> <4C873914.40404@freebsd.org><20100908084855.GF2465@deviant.kiev.zoral.com.ua> <4C874F00.3050605@freebsd.org> Date: Sun, 12 Sep 2010 17:51:52 +0100 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="UTF-8"; reply-type=original Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.5931 Cc: freebsd-fs@freebsd.org, Pawel Jakub Dawidek Subject: Re: zfs very poor performance compared to ufs due to lack of cache? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 12 Sep 2010 17:02:42 -0000 ----- Original Message ----- From: "Andriy Gapon" >>> --- a/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c >>> +++ b/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c >>> @@ -500,6 +500,7 @@ again: >>> sched_unpin(); >>> } >>> VM_OBJECT_LOCK(obj); >>> + vm_page_set_validclean(m, off, bytes); >> Only if error == 0, perhaps ? 
Ok tried this and still no joy, the value of the cache always falls to that of the min value and all memory used by sendfile still seems to get lost into inactive memory :( Regards Steve ================================================ This e.mail is private and confidential between Multiplay (UK) Ltd. and the person or entity to whom it is addressed. In the event of misdirection, the recipient is prohibited from using, copying, printing or otherwise disseminating it or any information contained in it. In the event of misdirection, illegible or incomplete transmission please telephone +44 845 868 1337 or return the E.mail to postmaster@multiplay.co.uk. From owner-freebsd-fs@FreeBSD.ORG Sun Sep 12 17:06:09 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 72147106566C; Sun, 12 Sep 2010 17:06:08 +0000 (UTC) (envelope-from avg@freebsd.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id 446D18FC20; Sun, 12 Sep 2010 17:06:07 +0000 (UTC) Received: from porto.topspin.kiev.ua (porto-e.starpoint.kiev.ua [212.40.38.100]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id UAA29383; Sun, 12 Sep 2010 20:06:04 +0300 (EEST) (envelope-from avg@freebsd.org) Received: from localhost.topspin.kiev.ua ([127.0.0.1]) by porto.topspin.kiev.ua with esmtp (Exim 4.34 (FreeBSD)) id 1Ouq0B-000G4F-OY; Sun, 12 Sep 2010 20:06:03 +0300 Message-ID: <4C8D087B.5040404@freebsd.org> Date: Sun, 12 Sep 2010 20:06:03 +0300 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; U; FreeBSD amd64; en-US; rv:1.9.2.9) Gecko/20100912 Lightning/1.0b2 Thunderbird/3.1.3 MIME-Version: 1.0 To: Steven Hartland References: <5DB6E7C798E44D33A05673F4B773405E@multiplay.co.uk><4C85E91E.1010602@icyb.net.ua> <4C873914.40404@freebsd.org><20100908084855.GF2465@deviant.kiev.zoral.com.ua> <4C874F00.3050605@freebsd.org> In-Reply-To: X-Enigmail-Version: 1.1.2 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org, Pawel Jakub Dawidek Subject: Re: zfs very poor performance compared to ufs due to lack of cache? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 12 Sep 2010 17:06:09 -0000 on 12/09/2010 19:51 Steven Hartland said the following: > ----- Original Message ----- From: "Andriy Gapon" > >>>> --- a/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c >>>> +++ b/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c >>>> @@ -500,6 +500,7 @@ again: >>>> sched_unpin(); >>>> } >>>> VM_OBJECT_LOCK(obj); >>>> + vm_page_set_validclean(m, off, bytes); >>> Only if error == 0, perhaps ? > > Ok tried this and still no joy, the value of the cache always falls to that of > the min > value and all memory used by sendfile still seems to get lost into inactive > memory :( Well, I do not see enough technical details in this report to see what's going on. As we know, there is also another issue (not sendfile specific) leading to ARC shrinking. 
-- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Sun Sep 12 17:29:45 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 19072106566B; Sun, 12 Sep 2010 17:29:45 +0000 (UTC) (envelope-from prvs=18715e5890=killing@multiplay.co.uk) Received: from mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) by mx1.freebsd.org (Postfix) with ESMTP id 598028FC13; Sun, 12 Sep 2010 17:29:43 +0000 (UTC) X-MDAV-Processed: mail1.multiplay.co.uk, Sun, 12 Sep 2010 18:29:40 +0100 X-Spam-Processed: mail1.multiplay.co.uk, Sun, 12 Sep 2010 18:29:40 +0100 X-Spam-Checker-Version: SpamAssassin 3.2.5 (2008-06-10) on mail1.multiplay.co.uk X-Spam-Level: X-Spam-Status: No, score=-5.0 required=6.0 tests=USER_IN_WHITELIST shortcircuit=ham autolearn=disabled version=3.2.5 Received: from r2d2 by mail1.multiplay.co.uk (MDaemon PRO v10.0.4) with ESMTP id md50011222990.msg; Sun, 12 Sep 2010 18:29:38 +0100 X-Authenticated-Sender: Killing@multiplay.co.uk X-MDRemoteIP: 188.220.16.49 X-Return-Path: prvs=18715e5890=killing@multiplay.co.uk X-Envelope-From: killing@multiplay.co.uk Message-ID: <03537796FAB54E02959E2D64FC83004F@multiplay.co.uk> From: "Steven Hartland" To: "Andriy Gapon" References: <5DB6E7C798E44D33A05673F4B773405E@multiplay.co.uk><4C85E91E.1010602@icyb.net.ua> <4C873914.40404@freebsd.org><20100908084855.GF2465@deviant.kiev.zoral.com.ua> <4C874F00.3050605@freebsd.org> <4C8D087B.5040404@freebsd.org> Date: Sun, 12 Sep 2010 18:29:35 +0100 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="UTF-8"; reply-type=original Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.5931 Cc: freebsd-fs@freebsd.org, Pawel Jakub Dawidek Subject: Re: zfs very poor performance compared to ufs due to lack of cache? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 12 Sep 2010 17:29:45 -0000 ----- Original Message ----- From: "Andriy Gapon" > > Well, I do not see enough technical details in this report to see what's going > on. As we know, there is also another issue (not sendfile specific) leading to > ARC shrinking. What details would you like? ================================================ This e.mail is private and confidential between Multiplay (UK) Ltd. and the person or entity to whom it is addressed. In the event of misdirection, the recipient is prohibited from using, copying, printing or otherwise disseminating it or any information contained in it. In the event of misdirection, illegible or incomplete transmission please telephone +44 845 868 1337 or return the E.mail to postmaster@multiplay.co.uk. 
From owner-freebsd-fs@FreeBSD.ORG Sun Sep 12 18:32:43 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id AC93A106566B; Sun, 12 Sep 2010 18:32:43 +0000 (UTC) (envelope-from a.smith@ukgrid.net) Received: from mx0.ukgrid.net (mx0.ukgrid.net [89.21.28.37]) by mx1.freebsd.org (Postfix) with ESMTP id 13EE58FC0C; Sun, 12 Sep 2010 18:32:43 +0000 (UTC) Received: from [89.21.28.38] (port=12139 helo=omicron.ukgrid.net) by mx0.ukgrid.net with esmtp (Exim 4.72; FreeBSD) envelope-from a.smith@ukgrid.net id 1OurM0-0008US-C7; Sun, 12 Sep 2010 19:32:40 +0100 Received: from voip.ukgrid.net (voip.ukgrid.net [89.107.16.9]) by webmail2.ukgrid.net (Horde Framework) with HTTP; Sun, 12 Sep 2010 19:32:40 +0100 Message-ID: <20100912193240.16694ph09065z484@webmail2.ukgrid.net> Date: Sun, 12 Sep 2010 19:32:40 +0100 From: a.smith@ukgrid.net To: Alexander Motin References: <20100909140000.5744370gkyqv4eo0@webmail2.ukgrid.net> <20100909182318.11133lqu4q4u1mw4@webmail2.ukgrid.net> <4C89D6A8.1080107@icyb.net.ua> <20100910143900.20382xl5bl6oo9as@webmail2.ukgrid.net> <20100910141127.GA13056@icarus.home.lan> <20100910155510.11831w104qjpyc4g@webmail2.ukgrid.net> <20100910152544.GA14636@icarus.home.lan> <20100910173912.205969tzhjiovf8c@webmail2.ukgrid.net> <4C8A6B26.8050305@icyb.net.ua> <20100910184921.16956kbaskhrsmg4@webmail2.ukgrid.net> <4C8A7B20.7090408@FreeBSD.org> In-Reply-To: <4C8A7B20.7090408@FreeBSD.org> MIME-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1; DelSp="Yes"; format="flowed" Content-Disposition: inline Content-Transfer-Encoding: 7bit User-Agent: Internet Messaging Program (IMP) H3 (4.3.7) / FreeBSD-8.0 Cc: freebsd-fs@freebsd.org, Andriy Gapon Subject: Re: ZFS related kernel panic X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 12 Sep 2010 18:32:43 -0000 Quoting Alexander Motin : > > It looks like during timeout handling (it is quite complicated process > when port multiplier is used) some request was completed twice. So > original problem is probably in hardware (try to check/replace cables, > multiplier, ...), that caused timeout, but the fact that drive was > unable to handle it is probably a siis(4) driver bug. > Hi, about checking for faulty hardware, to recap... So far I have replaced the server, moved from i386 to amd64, and now I have also replaced the eSATA PCI card. With all of these changes I am still getting the panics. On Monday I will swap out the drive enclosure and the cable which should cover off the things you mentioned. Once Ive tested that Ill let you know if Im still seeing the issue, thanks Andy. PS the disk enclosure is this: http://uk.startech.com/product/S352U2RER-35in-eSATA-USB-Dual-SATA-Hot-Swap-External-RAID-Hard-Drive-Enclosure Which according to the specs is using a Sil5744, I assume this is providing the port multiplier support. Just in case thats of any help, ie any know issues... 
From owner-freebsd-fs@FreeBSD.ORG Sun Sep 12 19:20:30 2010 Return-Path: Delivered-To: freebsd-fs@FreeBSD.ORG Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 1CF021065673 for ; Sun, 12 Sep 2010 19:20:30 +0000 (UTC) (envelope-from 000.fbsd@quip.cz) Received: from elsa.codelab.cz (elsa.codelab.cz [94.124.105.4]) by mx1.freebsd.org (Postfix) with ESMTP id D5AFB8FC1A for ; Sun, 12 Sep 2010 19:20:29 +0000 (UTC) Received: from elsa.codelab.cz (localhost.codelab.cz [127.0.0.1]) by elsa.codelab.cz (Postfix) with ESMTP id EF24F19E02D for ; Sun, 12 Sep 2010 21:00:35 +0200 (CEST) Received: from [192.168.1.2] (ip-86-49-61-235.net.upcbroadband.cz [86.49.61.235]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by elsa.codelab.cz (Postfix) with ESMTPSA id CE3D319E027 for ; Sun, 12 Sep 2010 21:00:31 +0200 (CEST) Message-ID: <4C8D234F.40204@quip.cz> Date: Sun, 12 Sep 2010 21:00:31 +0200 From: Miroslav Lachman <000.fbsd@quip.cz> User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:1.9.1.11) Gecko/20100701 SeaMonkey/2.0.6 MIME-Version: 1.0 To: freebsd-fs@FreeBSD.ORG Content-Type: text/plain; charset=ISO-8859-2; format=flowed Content-Transfer-Encoding: 7bit Cc: Subject: FreeNAS vs OpenSolaris vs Nexenta ZFS Benchmarks X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 12 Sep 2010 19:20:30 -0000 I am sending this link to those interested in poor performance results of FreeNAS (FreeBSD) and ZFS. http://www.zfsbuild.com/2010/09/10/freenas-vs-opensolaris-zfs-benchmarks/ Miroslav Lachman From owner-freebsd-fs@FreeBSD.ORG Sun Sep 12 19:20:53 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 34E6D106566B; Sun, 12 Sep 2010 19:20:53 +0000 (UTC) (envelope-from avg@freebsd.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id 159CF8FC08; Sun, 12 Sep 2010 19:20:51 +0000 (UTC) Received: from porto.topspin.kiev.ua (porto-e.starpoint.kiev.ua [212.40.38.100]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id WAA01157; Sun, 12 Sep 2010 22:20:48 +0300 (EEST) (envelope-from avg@freebsd.org) Received: from localhost.topspin.kiev.ua ([127.0.0.1]) by porto.topspin.kiev.ua with esmtp (Exim 4.34 (FreeBSD)) id 1Ous6a-000GAM-3r; Sun, 12 Sep 2010 22:20:48 +0300 Message-ID: <4C8D280F.3040803@freebsd.org> Date: Sun, 12 Sep 2010 22:20:47 +0300 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; U; FreeBSD amd64; en-US; rv:1.9.2.9) Gecko/20100912 Lightning/1.0b2 Thunderbird/3.1.3 MIME-Version: 1.0 To: Steven Hartland References: <5DB6E7C798E44D33A05673F4B773405E@multiplay.co.uk><4C85E91E.1010602@icyb.net.ua> <4C873914.40404@freebsd.org><20100908084855.GF2465@deviant.kiev.zoral.com.ua> <4C874F00.3050605@freebsd.org> <4C8D087B.5040404@freebsd.org> <03537796FAB54E02959E2D64FC83004F@multiplay.co.uk> In-Reply-To: <03537796FAB54E02959E2D64FC83004F@multiplay.co.uk> X-Enigmail-Version: 1.1.2 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org, Pawel Jakub Dawidek Subject: Re: zfs very poor performance compared to ufs due to lack of cache? 
X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 12 Sep 2010 19:20:53 -0000 on 12/09/2010 20:29 Steven Hartland said the following: > > ----- Original Message ----- From: "Andriy Gapon" >> >> Well, I do not see enough technical details in this report to see what's going >> on. As we know, there is also another issue (not sendfile specific) leading to >> ARC shrinking. > > What details would you like? > All :-) Revision of your code, all the extra patches, workload, graphs of ARC and memory dynamics and that's just for the start. Then, analysis similar to that of Wiktor. E.g. trying to test with a single file and then removing it, or better yet, examining with DTrace actual code paths taken from sendfile(2). -- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Sun Sep 12 19:28:54 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 61E5D106564A for ; Sun, 12 Sep 2010 19:28:54 +0000 (UTC) (envelope-from jdc@koitsu.dyndns.org) Received: from qmta15.emeryville.ca.mail.comcast.net (qmta15.emeryville.ca.mail.comcast.net [76.96.27.228]) by mx1.freebsd.org (Postfix) with ESMTP id 4ACDC8FC17 for ; Sun, 12 Sep 2010 19:28:53 +0000 (UTC) Received: from omta18.emeryville.ca.mail.comcast.net ([76.96.30.74]) by qmta15.emeryville.ca.mail.comcast.net with comcast id 5v1P1f0051bwxycAFvUtuF; Sun, 12 Sep 2010 19:28:53 +0000 Received: from koitsu.dyndns.org ([98.248.41.155]) by omta18.emeryville.ca.mail.comcast.net with comcast id 5vUs1f00C3LrwQ28evUsK4; Sun, 12 Sep 2010 19:28:53 +0000 Received: by icarus.home.lan (Postfix, from userid 1000) id 7A40B9B423; Sun, 12 Sep 2010 12:28:52 -0700 (PDT) Date: Sun, 12 Sep 2010 12:28:52 -0700 From: Jeremy Chadwick To: Miroslav Lachman <000.fbsd@quip.cz> Message-ID: <20100912192852.GA28746@icarus.home.lan> References: <4C8D234F.40204@quip.cz> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <4C8D234F.40204@quip.cz> User-Agent: Mutt/1.5.20 (2009-06-14) Cc: freebsd-fs@FreeBSD.ORG Subject: Re: FreeNAS vs OpenSolaris vs Nexenta ZFS Benchmarks X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 12 Sep 2010 19:28:54 -0000 On Sun, Sep 12, 2010 at 09:00:31PM +0200, Miroslav Lachman wrote: > I am sending this link to those interested in poor performance > results of FreeNAS (FreeBSD) and ZFS. > > http://www.zfsbuild.com/2010/09/10/freenas-vs-opensolaris-zfs-benchmarks/ Thank you for posting this -- this is a fantastic, simple review of the current situation. -- | Jeremy Chadwick jdc@parodius.com | | Parodius Networking http://www.parodius.com/ | | UNIX Systems Administrator Mountain View, CA, USA | | Making life hard for others since 1977. 
PGP: 4BD6C0CB | From owner-freebsd-fs@FreeBSD.ORG Sun Sep 12 20:35:47 2010 Return-Path: Delivered-To: freebsd-fs@FreeBSD.ORG Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id C4159106566B for ; Sun, 12 Sep 2010 20:35:47 +0000 (UTC) (envelope-from rwatson@FreeBSD.org) Received: from cyrus.watson.org (cyrus.watson.org [65.122.17.42]) by mx1.freebsd.org (Postfix) with ESMTP id 9D2548FC14 for ; Sun, 12 Sep 2010 20:35:47 +0000 (UTC) Received: from fledge.watson.org (fledge.watson.org [65.122.17.41]) by cyrus.watson.org (Postfix) with ESMTPS id 2E2F046B23; Sun, 12 Sep 2010 16:35:47 -0400 (EDT) Date: Sun, 12 Sep 2010 21:35:47 +0100 (BST) From: Robert Watson X-X-Sender: robert@fledge.watson.org To: Miroslav Lachman <000.fbsd@quip.cz> In-Reply-To: <4C8D234F.40204@quip.cz> Message-ID: References: <4C8D234F.40204@quip.cz> User-Agent: Alpine 2.00 (BSF 1167 2008-08-23) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed Cc: freebsd-fs@FreeBSD.ORG Subject: Re: FreeNAS vs OpenSolaris vs Nexenta ZFS Benchmarks X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 12 Sep 2010 20:35:47 -0000 On Sun, 12 Sep 2010, Miroslav Lachman wrote: > I am sending this link to those interested in poor performance results of > FreeNAS (FreeBSD) and ZFS. > > http://www.zfsbuild.com/2010/09/10/freenas-vs-opensolaris-zfs-benchmarks/ Ouch :-). In general, not my area, so I'll leave it to the ZFS folks, but I will comment that there's an InfiniBand project in flight -- Jeff Roberson's work on this can be found in Subversion. I'm not sure what his schedule is, and presumably, there will be ZFS integration work to do even once the base InfiniBand stack is done. 
Robert From owner-freebsd-fs@FreeBSD.ORG Sun Sep 12 21:01:49 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 3623E1065672; Sun, 12 Sep 2010 21:01:49 +0000 (UTC) (envelope-from prvs=18715e5890=killing@multiplay.co.uk) Received: from mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) by mx1.freebsd.org (Postfix) with ESMTP id 7170E8FC17; Sun, 12 Sep 2010 21:01:48 +0000 (UTC) X-MDAV-Processed: mail1.multiplay.co.uk, Sun, 12 Sep 2010 22:01:43 +0100 X-Spam-Processed: mail1.multiplay.co.uk, Sun, 12 Sep 2010 22:01:43 +0100 X-Spam-Checker-Version: SpamAssassin 3.2.5 (2008-06-10) on mail1.multiplay.co.uk X-Spam-Level: X-Spam-Status: No, score=-5.0 required=6.0 tests=USER_IN_WHITELIST shortcircuit=ham autolearn=disabled version=3.2.5 Received: from r2d2 by mail1.multiplay.co.uk (MDaemon PRO v10.0.4) with ESMTP id md50011223331.msg; Sun, 12 Sep 2010 22:01:43 +0100 X-Authenticated-Sender: Killing@multiplay.co.uk X-MDRemoteIP: 188.220.16.49 X-Return-Path: prvs=18715e5890=killing@multiplay.co.uk X-Envelope-From: killing@multiplay.co.uk Message-ID: <3FBF66BF11AA4CBBA6124CA435A4A31B@multiplay.co.uk> From: "Steven Hartland" To: "Andriy Gapon" References: <5DB6E7C798E44D33A05673F4B773405E@multiplay.co.uk><4C85E91E.1010602@icyb.net.ua> <4C873914.40404@freebsd.org><20100908084855.GF2465@deviant.kiev.zoral.com.ua> <4C874F00.3050605@freebsd.org> <4C8D087B.5040404@freebsd.org> <03537796FAB54E02959E2D64FC83004F@multiplay.co.uk> <4C8D280F.3040803@freebsd.org> Date: Sun, 12 Sep 2010 22:01:42 +0100 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="UTF-8"; reply-type=original Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.5931 Cc: freebsd-fs@freebsd.org, Pawel Jakub Dawidek , jhell Subject: Re: zfs very poor performance compared to ufs due to lack of cache? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 12 Sep 2010 21:01:49 -0000 ----- Original Message ----- From: "Andriy Gapon" > > All :-) > Revision of your code, all the extra patches, workload, graphs of ARC and memory > dynamics and that's just for the start. > Then, analysis similar to that of Wiktor. E.g. trying to test with a single > file and then removing it, or better yet, examining with DTrace actual code > paths taken from sendfile(2). All those have been given in past posts on this thread, but that's quite fragmented, sorry about that, so here's the current summary for reference:- The machine is a stream server with its job being to serve mp4 http streams via nginx. It also exports the fs via nfs to an encoding box which does all the grunt work of creating the streams, but that doesn't seem relevant here as this was not in use during these tests. We currently have two such machines one which has been updated to zfs and one which is still on ufs. After upgrading to 8.1-RELEASE and zfs all seemed ok until we had a bit of a traffic hike at which point we noticed the machine in question really struggling even though it was serving less than 100 clients at under 3mbps for a few popular streams which should have all easily fitted in cache. 
Upon investigation it seems that zfs wasn't caching anything so all streams where being read direct from disk overloading the areca controller backed with a 7 disk RAID6 volume. After my original post we've done a number of upgrades and we are now currently running 8-STABLE as of the 06/09 plus the following http://people.freebsd.org/~mm/patches/zfs/v15/stable-8-v15.patch http://people.freebsd.org/~mm/patches/zfs/zfs_metaslab_v2.patch http://people.freebsd.org/~mm/patches/zfs/zfs_abe_stat_rrwlock.patch needfree.patch and vm_paging_needed.patch posted by jhell > --- a/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c > +++ b/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c > @@ -500,6 +500,7 @@ again: > sched_unpin(); > } > VM_OBJECT_LOCK(obj); > + if (error == 0) > + vm_page_set_validclean(m, off, bytes); > vm_page_wakeup(m); > if (error == 0) > uio->uio_resid -= bytes; When nginx is active and using sendfile we see a large amount of memory, equivalent to the size of the files being accessed it seems, slip into inactive according to top and the size of arc drop to the at most the minimum configured and some times even less. The machine now has 7GB or ram and these are the load.conf settings currently in use:- # As we have battery backed cache we can do this vfs.zfs.cache_flush_disable=1 vfs.zfs.prefetch_disable=0 # Physical Memory * 1.5 vm.kmem_size="11G" vfs.zfs.arc_min="5G" vfs.zfs.arc_max="6656M" vfs.zfs.vdev.cache.size="20M" Currently arc_summary reports the following after been idle for several hours:- ARC Size: Current Size: 76.92% 5119.85M (arcsize) Target Size: (Adaptive) 76.92% 5120.00M (c) Min Size (Hard Limit): 76.92% 5120.00M (c_min) Max Size (High Water): ~1:1 6656.00M (c_max) Column details as requested previously:- cnt, time, kstat.zfs.misc.arcstats.size, vm.stats.vm.v_pdwakeups, vm.stats.vm.v_cache_count, vm.stats.vm.v_inactive_count, vm.stats.vm.v_active_count, vm.stats.vm.v_wire_count, vm.stats.vm.v_free_count 1,1284323760,5368902272,72,49002,156676,27241,1505466,32523 2,1284323797,5368675288,73,51593,156193,27612,1504846,30682 3,1284323820,5368675288,73,51478,156248,27649,1504874,30671 4,1284323851,5368670688,74,22994,184834,27609,1504794,30698 5,1284323868,5368670688,74,22990,184838,27605,1504792,30698 6,1284324024,5368679992,74,22246,184624,27663,1505177,31171 7,1284324057,5368679992,74,22245,184985,27663,1504844,31170 Point notes: 1. Initial values 2. single file request size: 692M 3. repeat request #2 4. request for second file 205M 5. repeat request #4 6. multi request #2 7. complete top details after tests:- Mem: 106M Active, 723M Inact, 5878M Wired, 87M Cache, 726M Buf, 124M Free Swap: 4096M Total, 836K Used, 4095M Free arc_summary snip after test ARC Size: Current Size: 76.92% 5119.97M (arcsize) Target Size: (Adaptive) 76.92% 5120.09M (c) Min Size (Hard Limit): 76.92% 5120.00M (c_min) Max Size (High Water): ~1:1 6656.00M (c_max) If I turn the box on so it gets a real range of requests, after about an hour I see something like:- Mem: 104M Active, 2778M Inact, 3065M Wired, 20M Cache, 726M Buf, 951M Free Swap: 4096M Total, 4096M Free ARC Size: Current Size: 34.37% 2287.36M (arcsize) Target Size: (Adaptive) 100.00% 6656.00M (c) Min Size (Hard Limit): 76.92% 5120.00M (c_min) Max Size (High Water): ~1:1 6656.00M (c_max) As you can see the size of ARC has even dropped below c_min. The results of the live test where gathered directly after a reboot, in case that's relevant. 
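For anyone who wants to collect the same columns, they come straight out of sysctl; a minimal polling loop along these lines is enough (this is a sketch, not the exact script used here, and the interval is arbitrary):

#!/bin/sh
# Sample the ZFS ARC size and the VM page-queue counters once per interval,
# emitting the comma-separated columns shown above (counter, timestamp, oids).
oids="kstat.zfs.misc.arcstats.size vm.stats.vm.v_pdwakeups \
      vm.stats.vm.v_cache_count vm.stats.vm.v_inactive_count \
      vm.stats.vm.v_active_count vm.stats.vm.v_wire_count \
      vm.stats.vm.v_free_count"
cnt=0
while :; do
    cnt=$((cnt + 1))
    line="$cnt,$(date +%s)"
    for oid in $oids; do
        line="$line,$(sysctl -n $oid)"
    done
    echo "$line"
    sleep 30
done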
If someone could suggest a set of tests that would help, I'll be happy to run them, but from what's been said thus far it seems that the use of sendfile is forcing memory use other than that coming from the ARC; is that what's expected? Would running the same test with sendfile disabled in nginx help? Regards Steve ================================================ This e.mail is private and confidential between Multiplay (UK) Ltd. and the person or entity to whom it is addressed. In the event of misdirection, the recipient is prohibited from using, copying, printing or otherwise disseminating it or any information contained in it. In the event of misdirection, illegible or incomplete transmission please telephone +44 845 868 1337 or return the E.mail to postmaster@multiplay.co.uk. From owner-freebsd-fs@FreeBSD.ORG Sun Sep 12 21:14:16 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 04CE51065672; Sun, 12 Sep 2010 21:14:16 +0000 (UTC) (envelope-from appdebgr@gmail.com) Received: from mail-qw0-f54.google.com (mail-qw0-f54.google.com [209.85.216.54]) by mx1.freebsd.org (Postfix) with ESMTP id 9B9618FC08; Sun, 12 Sep 2010 21:14:15 +0000 (UTC) Received: by qwg5 with SMTP id 5so3261211qwg.13 for ; Sun, 12 Sep 2010 14:14:14 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:received:in-reply-to :references:date:message-id:subject:from:to:cc:content-type; bh=AQtfkqmLBIW5K4Sujr2NIytqQO5Yvugc84aTcDj8q2s=; b=xGOGG8+F0vDJ0/Z99itwRtTky5/hcIDEm/JNL6DLzb/3Frn9A1wjbdqptpv1IURU21 5Dy9xfdZkAMjDCCjK1vC3JXFN5vVzRVhkItl83D0b6KX/MWceRdJdDn2+DyTUkkJ20Ly fM5PKf42GUoe76tDWmc/QGbjay/tAKZSOqEF0= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; b=DA8bLbIYcr/m1FJiAjKTpNF1pySnWOIbR/c12hGRtCeuA+SBkbYCp80D55L1bG4PsY JyXVZAdw5XwxJ0lwNx2K548GsfWJUz9p75XdWaM3/W81kZi4x2rHbGuleTBMX0zgH1r+ 5XMnblEteGgSF6KmJcmI5tnggYdLMYMHKaEN4= MIME-Version: 1.0 Received: by 10.224.66.163 with SMTP id n35mr809494qai.8.1284326054640; Sun, 12 Sep 2010 14:14:14 -0700 (PDT) Received: by 10.229.20.10 with HTTP; Sun, 12 Sep 2010 14:14:14 -0700 (PDT) In-Reply-To: <4C8B2A56.8080809@DataIX.net> References: <20100910073912.GC2007@garage.freebsd.pl> <4C8B2A56.8080809@DataIX.net> Date: Mon, 13 Sep 2010 00:14:14 +0300 Message-ID: From: App Deb To: jhell Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Cc: freebsd-fs@freebsd.org, Pawel Jakub Dawidek Subject: Re: Swap on ZFS Volume still panics? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 12 Sep 2010 21:14:16 -0000 I just tried your settings (setting primary and secondary cache to none), and also tried it with checksum=off and volblocksize=4k, all together. It somewhat helped. In the beginning I thought it would deadlock. The system was inaccessible for ~10 minutes, but after that it managed to allocate the required swap, and the system resumed. I measured the speed at which it allocated swap and it was only around ~0.4MB/s max. With a native freebsd-swap partition, my self-made memory-filling program almost reaches native disk speeds (~80MB/s). I still see a problem here. Thanks.
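For reference, the configuration under test here boils down to something like the following (the pool and volume names are placeholders; only the property values come from this thread):

# Create a swap ZVOL and keep it out of the ARC/L2ARC; volblocksize can
# only be set at creation time ("tank" and "swap" are made-up names).
zfs create -V 2G -o volblocksize=4k tank/swap
zfs set org.freebsd:swap=on tank/swap
zfs set checksum=off tank/swap
zfs set primarycache=none tank/swap
zfs set secondarycache=none tank/swap
# org.freebsd:swap=on lets rc activate it at boot; to enable it right away:
swapon /dev/zvol/tank/swap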
On Sat, Sep 11, 2010 at 10:05 AM, jhell wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > On 09/10/2010 03:39, Pawel Jakub Dawidek wrote: > > On Wed, Sep 08, 2010 at 07:19:40PM +0300, App Deb wrote: > >> Reading the wiki guides they mention the official way to add swap > volumes on > >> ZFS (set org.freebsd:swap=on etc..), > >> > >> So I thought that it should work now, But I just got a panic in > 8.1-RELEASE > >> after a heavy memory situation that touched swap for the first time. > > > > Swap on ZVOL is still not recommended. Where did you find information > > that it now recommended? I can't find anything about swap on > > http://wiki.freebsd.org/ZFS and on > > http://wiki.freebsd.org/ZFSQuickStartGuide there is a note that it is > > not recommended. Let me know where it is advised and I'll remove it or > > add a note (if the documentation is mine). > > > >> It is a hassle to add gmirror volumes for swap on full zfs systems, is > there > >> any workaround for this, or any news when a fix is coming or if it is > >> coming? > > > > I've no plans to fix it, maybe with ZFSv28 it will be easier to fix, but > > this is really low priority. If you use full ZFS system the recommended > > layout is described here: > > > > > http://blogs.freebsdish.org/pjd/2010/08/06/from-sysinstall-to-zfs-only-configuration/ > > > >> If the current code procudes guaranteed panics with zfs swap, I think > that > >> every mention of swap on zfs should be removed from the semi-official > wiki > >> guides. > > > > BTW. If this issue will be worked on in the future, it will be useful to > > actually see your panic, backtrace and other debug info. To be honest, I > > didn't expect it to panic, rather deadlock. Maybe panic is from deadlock > > resolver? Hard to say without any debug info. > > > > For reference I have been using swap on ZFS for a while. I recieve no > panics when doing so and have the following properties set. > > exports/swap refreservation 2G local > exports/swap primarycache none local > exports/swap secondarycache none local > exports/swap org.freebsd:swap on local > > As someone already mentioned, it might help to also change volblocksize > property to 4k but I have never had to do that as changing the primary > and secondary caches were enough to keep it from panic here. > > I would be interested to hear if this helped anyone else. 
> > > Regards, > > - -- > > jhell,v > From owner-freebsd-fs@FreeBSD.ORG Sun Sep 12 21:40:43 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id E52DF106564A for ; Sun, 12 Sep 2010 21:40:42 +0000 (UTC) (envelope-from josh@tcbug.org) Received: from out2.smtp.messagingengine.com (out2.smtp.messagingengine.com [66.111.4.26]) by mx1.freebsd.org (Postfix) with ESMTP id A8EC78FC16 for ; Sun, 12 Sep 2010 21:40:42 +0000 (UTC) Received: from compute3.internal (compute3.nyi.mail.srv.osa [10.202.2.43]) by gateway1.messagingengine.com (Postfix) with ESMTP id B8D0C179; Sun, 12 Sep 2010 17:40:41 -0400 (EDT) Received: from frontend1.messagingengine.com ([10.202.2.160]) by compute3.internal (MEProxy); Sun, 12 Sep 2010 17:40:41 -0400 DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; d=messagingengine.com; h=from:to:subject:date:cc:references:in-reply-to:mime-version:content-type:content-transfer-encoding:message-id; s=smtpout; bh=ab+ZPQL7L7sxUcMZY+RIjBAvPdg=; b=RI/Kp/pJO5RdiydotlaQg0dLlGjPYtDm59l1kdh8RCSeX9AdFsbSitmCC7CqILIdCpchsSbFaOGDw4r7IHpL7DtVmwUqo4ilLXPHxi7yyyu9sWNH6gNbEaYU5OqUE0fnXlUd6xNmi+IfspofJFcCvy5cNmeatBjPKOMfMFouim0= X-Sasl-enc: FAST0lppIvzWE5q6hebGhZpCAQ7q8MO+Ck/F6ufgvoCe 1284327641 Received: from tcbug.ixsystems.com (173-123-10-0.pools.spcsdns.net [173.123.10.0]) by mail.messagingengine.com (Postfix) with ESMTPSA id 46F6340EFF9; Sun, 12 Sep 2010 17:40:41 -0400 (EDT) From: Josh Paetzel To: freebsd-fs@freebsd.org Date: Sun, 12 Sep 2010 16:40:32 -0500 User-Agent: KMail/1.13.5 (FreeBSD/9.0-CURRENT; KDE/4.5.1; amd64; ; ) References: <4C8D234F.40204@quip.cz> In-Reply-To: MIME-Version: 1.0 Content-Type: multipart/signed; boundary="nextPart1463941.dqUD0DWsBv"; protocol="application/pgp-signature"; micalg=pgp-sha1 Content-Transfer-Encoding: 7bit Message-Id: <201009121640.39157.josh@tcbug.org> Cc: Robert Watson Subject: Re: FreeNAS vs OpenSolaris vs Nexenta ZFS Benchmarks X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 12 Sep 2010 21:40:43 -0000 On Sunday 12 September 2010 15:35:47 Robert Watson wrote: > On Sun, 12 Sep 2010, Miroslav Lachman wrote: > > I am sending this link to those interested in poor performance results of > > FreeNAS (FreeBSD) and ZFS. > > > > http://www.zfsbuild.com/2010/09/10/freenas-vs-opensolaris-zfs-benchmarks/ > > Ouch :-). > > In general, not my area, so I'll leave it to the ZFS folks, but I will > comment that there's an InfiniBand project in flight -- Jeff Roberson's > work on this can be found in Subversion.
I'm not sure what his schedule > is, and presumably, there will be ZFS integration work to do even once the > base InfiniBand stack is done. > > Robert > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" I'll respond and say that the current FreeNAS is based on FreeBSD 7, where ZFS was an experimental filesystem. I think a system based on FreeBSD 8 will provide a better comparison. I'm a tad confused about the whole "sharing a ZFS filesystem over iSCSI". I thought iSCSI was used to export LUNs that you then put a filesystem on with a client. iSCSI on FreeBSD is fairly slow compared to other solutions; I think there is some very preliminary work to fix that going on. -- Thanks, Josh Paetzel From owner-freebsd-fs@FreeBSD.ORG Sun Sep 12 22:16:09 2010 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id AC659106564A for ; Sun, 12 Sep 2010 22:16:09 +0000 (UTC) (envelope-from imp@bsdimp.com) Received: from harmony.bsdimp.com (bsdimp.com [199.45.160.85]) by mx1.freebsd.org (Postfix) with ESMTP id 581488FC16 for ; Sun, 12 Sep 2010 22:16:09 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by harmony.bsdimp.com (8.14.3/8.14.1) with ESMTP id o8CMDikN052663; Sun, 12 Sep 2010 16:13:45 -0600 (MDT) (envelope-from imp@bsdimp.com) Date: Sun, 12 Sep 2010 16:13:50 -0600 (MDT) Message-Id: <20100912.161350.93202495727462070.imp@bsdimp.com> To: 000.fbsd@quip.cz From: "M. Warner Losh" In-Reply-To: <4C8D234F.40204@quip.cz> References: <4C8D234F.40204@quip.cz> X-Mailer: Mew version 6.3 on Emacs 22.3 / Mule 5.0 (SAKAKI) Mime-Version: 1.0 Content-Type: Text/Plain; charset=us-ascii Content-Transfer-Encoding: 7bit Cc: freebsd-fs@FreeBSD.org Subject: Re: FreeNAS vs OpenSolaris vs Nexenta ZFS Benchmarks X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 12 Sep 2010 22:16:09 -0000 In message: <4C8D234F.40204@quip.cz> Miroslav Lachman <000.fbsd@quip.cz> writes: : I am sending this link to those interested in poor performance results : of FreeNAS (FreeBSD) and ZFS. : : http://www.zfsbuild.com/2010/09/10/freenas-vs-opensolaris-zfs-benchmarks/ Thanks. They are testing an old version of ZFS (the one in FreeBSD 7) against the latest version (the one in OpenSolaris). I'm guessing that's the main source of the difference.
Warner From owner-freebsd-fs@FreeBSD.ORG Sun Sep 12 22:16:21 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 9F8E0106566C for ; Sun, 12 Sep 2010 22:16:21 +0000 (UTC) (envelope-from josh@tcbug.org) Received: from out2.smtp.messagingengine.com (out2.smtp.messagingengine.com [66.111.4.26]) by mx1.freebsd.org (Postfix) with ESMTP id 672FF8FC12 for ; Sun, 12 Sep 2010 22:16:21 +0000 (UTC) Received: from compute1.internal (compute1.nyi.mail.srv.osa [10.202.2.41]) by gateway1.messagingengine.com (Postfix) with ESMTP id DDD212C1; Sun, 12 Sep 2010 18:16:20 -0400 (EDT) Received: from frontend2.messagingengine.com ([10.202.2.161]) by compute1.internal (MEProxy); Sun, 12 Sep 2010 18:16:20 -0400 DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; d=messagingengine.com; h=from:to:subject:date:cc:references:in-reply-to:mime-version:content-type:content-transfer-encoding:message-id; s=smtpout; bh=kzMF+HTMttRM6TLvY2TsI3WoWSw=; b=HnWCxXBTpREEASvKcuQzDnzZ+AnyMBhXCNomFYKYuR3QDV14wlwfnPHtgAXuL0BEvbzdytEyRCuy/cSjsUplATFjfDmGZffy1WSBFvUxKhPEUuyNr6D5o/7yNZ0ZcIC4zQU2YnczUY2jfb7qPQdnCj6E1/JzMWJsieh0kDV7spo= X-Sasl-enc: B//KrRj8QOlAg5LLLTTARIWJrCuCJh/Z6QargQ0Nt0Pm 1284329780 Received: from tcbug.ixsystems.com (173-123-10-0.pools.spcsdns.net [173.123.10.0]) by mail.messagingengine.com (Postfix) with ESMTPSA id 11CCB5E6CE8; Sun, 12 Sep 2010 18:16:20 -0400 (EDT) From: Josh Paetzel To: freebsd-fs@freebsd.org Date: Sun, 12 Sep 2010 17:15:51 -0500 User-Agent: KMail/1.13.5 (FreeBSD/9.0-CURRENT; KDE/4.5.1; amd64; ; ) References: <954605288.782335.1284305288639.JavaMail.root@erie.cs.uoguelph.ca> In-Reply-To: <954605288.782335.1284305288639.JavaMail.root@erie.cs.uoguelph.ca> MIME-Version: 1.0 Content-Type: multipart/signed; boundary="nextPart3409528.XkxCAgStvQ"; protocol="application/pgp-signature"; micalg=pgp-sha1 Content-Transfer-Encoding: 7bit Message-Id: <201009121716.17813.josh@tcbug.org> Cc: Terry Kennedy Subject: Re: Weird Linux - FreeBSD/ZFS NFSv4 interoperability problem X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 12 Sep 2010 22:16:21 -0000 On Sunday 12 September 2010 10:28:08 Rick Macklem wrote: > > > A couple of people have reported very slow read rates for the NFSv4 > > > client (actually the experimental client, since they see it for > > > NFSv3 too). If you could easily do the following, using a FreeBSD8.1 > > > or newer client: > > > # mount -t nfs -o nfsv4 :/path > > > - cd to anywhere in the mount that has a 100Mbyte+ file > > > # dd if=<100Mbyte+ file> of=/dev/null bs=1m > > > > > > and then report what read rate you see along with the client's > > > machine-arch/# of cores/ram size/network driver used by the mount > > > > > > rick > > > ps: Btw, anyone else who can do this test, it would be appreciated. > > > > > > If you aren't set up for NFSv4, you can do an NFSv3 mount using > > > the exp. client instead. > > > # mount -t newnfs -o nfsv3 :/path > > > > On 8-STABLE (both client and server).
First test is NFSv3 on the > > standard > > client: > > > > (0:842) new-gate:~terry# mount -t nfs -o nfsv4 new-rz1:/data /foo > > [tcp6] new-rz1:/data: NFSPROC_NULL: RPC: Program/version mismatch; low > > version = 2, high version = 3 > > [tcp] new-rz1:/data: NFSPROC_NULL: RPC: Program/version mismatch; low > > version = 2, high version = 3 > > > > ^C > > (1:843) new-gate:~terry# mount -t nfs -o nfsv3 new-rz1:/data /foo > > [...] > > (0:869) new-gate:/foo/Backups/Suzanne VAIO# dd if=0cff3d7b_VOL.spf > > of=/dev/null bs=1m > > 6010+1 records in > > 6010+1 records out > > 6301945344 bytes transferred in 69.730064 secs (90376302 bytes/sec) > > > > Now, let's try the newnfs client (cache should have been primed by the > > > first run, so we'd expect this to be faster): > Just thought I'd mention that, since it is a different mount, the caches > won't be primed, which is good, because that would mask differences. > > > (0:879) new-gate:/tmp# umount /foo > > (0:880) new-gate:/tmp# mount -t newnfs -o nfsv3 new-rz1:/data /foo > > (0:881) new-gate:/tmp# cd /foo/Backups/Suzanne\ VAIO/ > > (0:882) new-gate:/foo/Backups/Suzanne VAIO# dd if=0cff3d7b_VOL.spf > > of=/dev/null bs=1m > > 6010+1 records in > > 6010+1 records out > > 6301945344 bytes transferred in 135.927222 secs (46362644 bytes/sec) > > > > Hmmm. Half the performance. The problem isn't the disk speed on the > > > server: > Ok, good. You aren't seeing what the two guys reported (they were really > slow, at less than 2Mbytes/sec). If you would like to, you could try the > following, since the two clients use different default r/w sizes. > > # mount -t newnfs -o nfsv3,rsize=32768,wsize=32768 new-rz1:/data /foo > > and see how it changes the read rate. I don't know why there is a > factor of 2 difference (if it isn't the different r/w size), but it > will probably get resolved as I bring the experimental client up to date. > > Thanks a lot for doing the test and giving me a data point, rick root@jester1d / ->mount -t nfs -o wsize=65536,rsize=65536 servant.ixsystems.com:/a/isos /mnt root@jester1d / ->cd /mnt root@jester1d /mnt ->dd if=PCBSD8-STABLE-20100420-x64-DVD.iso of=/dev/null bs=1m 3344+1 records in 3344+1 records out 3507386368 bytes transferred in 34.562502 secs (101479528 bytes/sec) root@jester1d /mnt ->cd .. root@jester1d / ->umount /mnt root@jester1d / ->mount -t newnfs -o nfsv3,rsize=65536,wsize=65536 servant.ixsystems.com:/a/isos /mnt root@jester1d / ->cd /mnt root@jester1d /mnt ->dd if=PCBSD8-STABLE-20100420-x64-DVD.iso of=/dev/null bs=1m 345+0 records in 345+0 records out 361758720 bytes transferred in 46.191718 secs (7831679 bytes/sec) The first run hits network limits. Both machines are nehalems, intel NICs, I can give details if needed. -- Thanks, Josh Paetzel --nextPart3409528.XkxCAgStvQ Content-Type: application/pgp-signature; name=signature.asc Content-Description: This is a digitally signed message part.
-----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.16 (FreeBSD) iQEcBAABAgAGBQJMjVExAAoJEKFq1/n1feG2IxkIALQxDI0wfgMcj72oKxqEI4IQ 01vae+HcO8jO1alBtUR9bJs1e2EepVYfGw+IhBzYV0tZDo5GMw1csoOqHJtmH6hP EeyV3bOO4wTjNxwbahNLv6UHC+OVgjNcDcDZIbUeOqTGEf/cLZmEa4bBYcyx0wIu WIzsjVr0Etjek8GUpkmm0bVmok7huP5LY/I8rfoRjSGNK9PGQM3GL+6RCYcXpdGm Gh7XsIuUa0dSNsCS2egnR3qFLVr+bKFTIe/njTjsCrZ9byqdlbgS7kRsb4FpWDaS vGgJHzurPX7D5dSmIxVIvZygVogwrHJupLUSJjAXJ+CRSMgS6g4NyXfzmanS1Zg= =G8FU -----END PGP SIGNATURE----- --nextPart3409528.XkxCAgStvQ-- From owner-freebsd-fs@FreeBSD.ORG Sun Sep 12 23:00:20 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 63B0C1065670 for ; Sun, 12 Sep 2010 23:00:20 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) Received: from esa-jnhn.mail.uoguelph.ca (esa-jnhn.mail.uoguelph.ca [131.104.91.44]) by mx1.freebsd.org (Postfix) with ESMTP id 1ECE88FC18 for ; Sun, 12 Sep 2010 23:00:19 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: ApwEAMb4jEyDaFvO/2dsb2JhbACDGZ8irnWQPYEigyp0BIon X-IronPort-AV: E=Sophos;i="4.56,356,1280721600"; d="scan'208";a="93620693" Received: from erie.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.206]) by esa-jnhn-pri.mail.uoguelph.ca with ESMTP; 12 Sep 2010 19:00:18 -0400 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id EA3E9B3EA8; Sun, 12 Sep 2010 19:00:18 -0400 (EDT) Date: Sun, 12 Sep 2010 19:00:18 -0400 (EDT) From: Rick Macklem To: Josh Paetzel Message-ID: <737011362.791810.1284332418932.JavaMail.root@erie.cs.uoguelph.ca> In-Reply-To: <201009121716.17813.josh@tcbug.org> MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [24.65.230.102] X-Mailer: Zimbra 6.0.7_GA_2476.RHEL4 (ZimbraWebClient - SAF3 (Mac)/6.0.7_GA_2473.RHEL4_64) Cc: freebsd-fs@freebsd.org, Terry Kennedy Subject: Re: Weird Linux - FreeBSD/ZFS NFSv4 interoperability problem X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 12 Sep 2010 23:00:20 -0000 > > root@jester1d / ->mount -t nfs -o wsize=65536,rsize=65536 > servant.ixsystems.com:/a/isos /mnt > root@jester1d / ->cd /mnt > root@jester1d /mnt ->dd if=PCBSD8-STABLE-20100420-x64-DVD.iso > of=/dev/null > bs=1m > 3344+1 records in > 3344+1 records out > 3507386368 bytes transferred in 34.562502 secs (101479528 bytes/sec) > > root@jester1d /mnt ->cd .. > root@jester1d / ->umount /mnt > root@jester1d / ->mount -t newnfs -o nfsv3,rsize=65536,wsize=65536 > servant.ixsystems.com:/a/isos /mnt > root@jester1d / ->cd /mnt > root@jester1d /mnt ->dd if=PCBSD8-STABLE-20100420-x64-DVD.iso > of=/dev/null > bs=1m > 345+0 records in > 345+0 records out > 361758720 bytes transferred in 46.191718 secs (7831679 bytes/sec) > > The first run hits network limits. > Hmm, the newnfs case seems to have terminated prematurely. That's a different problem than the others seemed to report, but you definitely have a slow read rate. Could you by any chance run the newnfs test again and capture a "ps axHl" on the client (I'm hoping that will hint at where the threads are sleeping). 
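In case it helps anyone reproducing this, one way to grab that snapshot is to start the slow read in the background and run ps while the transfer is still going; the mount and file name below simply follow the earlier example and are only illustrative:

  # mount -t newnfs -o nfsv3 servant.ixsystems.com:/a/isos /mnt
  # dd if=/mnt/PCBSD8-STABLE-20100420-x64-DVD.iso of=/dev/null bs=1m &
  # ps axHl > /tmp/ps-axHl.txt    # capture while the dd is still running

The MWCHAN column in that output shows the wait channel each kernel thread is sleeping on, which is the hint being asked for here.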
Thanks for doing the test, rick From owner-freebsd-fs@FreeBSD.ORG Sun Sep 12 23:29:37 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id C8E66106564A for ; Sun, 12 Sep 2010 23:29:37 +0000 (UTC) (envelope-from josh@tcbug.org) Received: from out2.smtp.messagingengine.com (out2.smtp.messagingengine.com [66.111.4.26]) by mx1.freebsd.org (Postfix) with ESMTP id 974BE8FC15 for ; Sun, 12 Sep 2010 23:29:37 +0000 (UTC) Received: from compute1.internal (compute1.nyi.mail.srv.osa [10.202.2.41]) by gateway1.messagingengine.com (Postfix) with ESMTP id DF105171; Sun, 12 Sep 2010 19:29:36 -0400 (EDT) Received: from frontend1.messagingengine.com ([10.202.2.160]) by compute1.internal (MEProxy); Sun, 12 Sep 2010 19:29:36 -0400 DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; d=messagingengine.com; h=references:in-reply-to:mime-version:content-transfer-encoding:content-type:message-id:cc:from:subject:date:to; s=smtpout; bh=1vc7sh+UuExrxIPAODlU/XaNNBo=; b=pr6KdlOAs2ZCI/pY2RVKzBhSp6gLoMr/5ZeNRfviSqI488UEE4NiL82RrKzGPbVZq0CGq+ws42EDc/tEZhsHub0Xn4twSQCVI+zvZmj2soHfFRCvl6ubziJgnD0VUwVneIOJ2nAZ8oHOK6qbCGCm/ZnwQPeBbRb0o4A1de/zbFI= X-Sasl-enc: aeZEc2wNZuHO5zInBoWFTCMaByQ4i41H7fbdI/+idNX7 1284334175 Received: from [10.56.81.22] (mobile-166-137-137-177.mycingular.net [166.137.137.177]) by mail.messagingengine.com (Postfix) with ESMTPSA id B62E4404DE6; Sun, 12 Sep 2010 19:29:35 -0400 (EDT) References: <737011362.791810.1284332418932.JavaMail.root@erie.cs.uoguelph.ca> In-Reply-To: <737011362.791810.1284332418932.JavaMail.root@erie.cs.uoguelph.ca> Mime-Version: 1.0 (iPhone Mail 8A293) Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset=us-ascii Message-Id: X-Mailer: iPhone Mail (8A293) From: Josh Paetzel Date: Sun, 12 Sep 2010 19:30:01 -0400 To: Rick Macklem Cc: "freebsd-fs@freebsd.org" , Terry Kennedy Subject: Re: Weird Linux - FreeBSD/ZFS NFSv4 interoperability problem X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 12 Sep 2010 23:29:37 -0000 On Sep 12, 2010, at 7:00 PM, Rick Macklem wrote: >>=20 >> root@jester1d / ->mount -t nfs -o wsize=3D65536,rsize=3D65536 >> servant.ixsystems.com:/a/isos /mnt >> root@jester1d / ->cd /mnt >> root@jester1d /mnt ->dd if=3DPCBSD8-STABLE-20100420-x64-DVD.iso >> of=3D/dev/null >> bs=3D1m >> 3344+1 records in >> 3344+1 records out >> 3507386368 bytes transferred in 34.562502 secs (101479528 bytes/sec) >>=20 >> root@jester1d /mnt ->cd .. >> root@jester1d / ->umount /mnt >> root@jester1d / ->mount -t newnfs -o nfsv3,rsize=3D65536,wsize=3D65536 >> servant.ixsystems.com:/a/isos /mnt >> root@jester1d / ->cd /mnt >> root@jester1d /mnt ->dd if=3DPCBSD8-STABLE-20100420-x64-DVD.iso >> of=3D/dev/null >> bs=3D1m >> 345+0 records in >> 345+0 records out >> 361758720 bytes transferred in 46.191718 secs (7831679 bytes/sec) >>=20 >> The first run hits network limits. >>=20 > Hmm, the newnfs case seems to have terminated prematurely. That's a > different problem than the others seemed to report, but you definitely > have a slow read rate. >=20 > Could you by any chance run the newnfs test again and capture a "ps axHl" > on the client (I'm hoping that will hint at where the threads are sleeping= ). >=20 > Thanks for doing the test, rick >=20 I'll do that soon. The premature end was me doing ctrl-c. 
I ran it a couple t= imes to ensure it was repeatable.=20 Thanks,=20 Josh Paetzel= From owner-freebsd-fs@FreeBSD.ORG Sun Sep 12 23:52:18 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 85C51106566C; Sun, 12 Sep 2010 23:52:18 +0000 (UTC) (envelope-from jhellenthal@gmail.com) Received: from mail-gx0-f182.google.com (mail-gx0-f182.google.com [209.85.161.182]) by mx1.freebsd.org (Postfix) with ESMTP id E3CC68FC0A; Sun, 12 Sep 2010 23:52:17 +0000 (UTC) Received: by gxk8 with SMTP id 8so816725gxk.13 for ; Sun, 12 Sep 2010 16:52:17 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:received:received:sender:message-id:date:from :user-agent:mime-version:to:cc:subject:references:in-reply-to :x-enigmail-version:content-type; bh=hsvc39DseAqS3CJoJyfAkfLZk8dInk2KdT4q6kntUiY=; b=Q9dlyp3vVPlo6a0jEbKE6c1c3A+1IO/EgdIg2IcxSFxF08Enc1B4FudRYQcct+tzYF TTWWjvfP21y9cXbe1H3dZv/EqXQInjFWAw14rtZVpD7iKVkOWJ309G5AvkZslbjt6ZB/ EE/S5MIk3lSXt1ERNHo63ZI2va9gITeRRfmaM= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=sender:message-id:date:from:user-agent:mime-version:to:cc:subject :references:in-reply-to:x-enigmail-version:content-type; b=TdsQCvhGLua7dlSwqLiU/LS1aWuOgyQX9fw1mg2V47MRojAatPBZtEuxUvceEwedq5 TaXmGp8CYYpBrUOP67tFF406sdE7DQdil9kEVciVgn93FFNARw3UImxqKcEOJtkesy55 nYBASjVEdcPjDHoMCDMNc0qIQGtN3RIY4ExYY= Received: by 10.100.124.1 with SMTP id w1mr3420662anc.265.1284335537353; Sun, 12 Sep 2010 16:52:17 -0700 (PDT) Received: from centel.dataix.local (adsl-99-181-137-20.dsl.klmzmi.sbcglobal.net [99.181.137.20]) by mx.google.com with ESMTPS id d4sm8915445and.19.2010.09.12.16.52.15 (version=SSLv3 cipher=RC4-MD5); Sun, 12 Sep 2010 16:52:16 -0700 (PDT) Sender: "J. Hellenthal" Message-ID: <4C8D67AE.4030802@DataIX.net> Date: Sun, 12 Sep 2010 19:52:14 -0400 From: jhell User-Agent: Mozilla/5.0 (X11; U; FreeBSD i386; en-US; rv:1.9.2.9) Gecko/20100908 Lightning/1.0b1 Thunderbird MIME-Version: 1.0 To: Pawel Jakub Dawidek References: <20100910073912.GC2007@garage.freebsd.pl> <4C8B2A56.8080809@DataIX.net> In-Reply-To: X-Enigmail-Version: 1.1.2 Content-Type: multipart/mixed; boundary="------------060908070400070800040406" Cc: freebsd-fs@freebsd.org Subject: Re: Swap on ZFS Volume still panics? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 12 Sep 2010 23:52:18 -0000 This is a multi-part message in MIME format. --------------060908070400070800040406 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 7bit -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 09/12/2010 17:14, App Deb wrote: > I just tried your settings (setting primary and secondary cache to none). > > (also tried it with checksum=off and volblocksize=4k, all together) > > It somewhat helped. In the beginning I though it would deadlock. The system > was inaccessible for ~10 minutes, but after that it managed to allocate the > required swap, and the system resumed. > > I measured the speed that it allocated swap and it was only around ~0.4MB/s > max. With the native freebsd-swap partition the speed using my "selfmade > memory filling program" it almost reaches native disk speeds (~80MB/s). > > I still see a problem here. > I agree, something is definitely amiss here. 
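For reference, the kind of setup being tested in this exchange amounts to roughly the following (pool and volume names are placeholders; the property values are the ones quoted above):

  # zfs create -s -V 2g -b 4k tank/swapvol    # sparse 2 GB zvol with a 4k volblocksize
  # zfs set checksum=off tank/swapvol
  # zfs set primarycache=none tank/swapvol
  # zfs set secondarycache=none tank/swapvol
  # swapon /dev/zvol/tank/swapvol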
I created a sparse '-s' or no refreservation zvol of 2g with '-b' of 4k and saw a lot of pressure when it came to heavy swapping that was equal to the same amount of RAM available. While this was happening I was able to get a pretty peculiar crash dump that someone may or may not find useful. I can provide further information on this dump via, core.txt.NN or any other methods needed. backtrace attached. *BEWARE* Pawel, I have uploaded the core.txt.36 file encrypted to your public key at here for further review if you find the backtrace curious: http://bit.ly/cuyH3L Regards, - -- Dump header from device /dev/label/dumpdev Architecture: i386 Architecture Version: 2 Dump Length: 583663616B (556 MB) Blocksize: 512 Dumptime: Sun Sep 12 13:39:25 2010 Hostname: centel.dataix.local Magic: FreeBSD Kernel Dump Version String: FreeBSD 8.1-STABLE #0 r212427M 208:41f4fcc6ce0a Fri Sep 10 17:47:45 EDT 2010 "REST EXCLUDED" Panic String: vm_fault: fault on nofault entry, addr: 828a0000 Dump Parity: 2118336362 Bounds: 36 Dump Status: good jhell,v -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.16 (FreeBSD) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iQEcBAEBAgAGBQJMjWetAAoJEJBXh4mJ2FR+WqoH/08xRrORHKQiaxKea2D4G6jy iiG2r+Y3Xz632sa0VrNYRSDRtspXTUhR8ZJBPyItgQKcYsoT5WA+GcwN+yj19QlX sDXeNtVTiem0tFIavdiM2/gTIlQJj7F/sny08nQNQcgvyb2SSR/DHKd49o4gyCQG KtzP5Ea04R9Pc/OsPOVV9XNtWL06wxcACOTNRcHOrk7hFOuSku31jv2i+ohquKLV 2EpYYIXoO44hqHzYNDFCOqO4v+Iw72Ys00bitL71pg6+yWVFEYcAl04gNoNzJYud 7hAamvM5zE0Ah63tToEcFbYE0ufFAp4vRBXvEj44TJoHNl5MmCOgAUHQA4K4xjs= =E1EU -----END PGP SIGNATURE----- --------------060908070400070800040406 Content-Type: text/plain; name="bt.txt" Content-Transfer-Encoding: base64 Content-Disposition: attachment; filename="bt.txt" IzEgIDB4ODA2NzRjOTEgaW4gYm9vdCAoaG93dG89MjYwKSBhdCAvdXNyL3NyYy9zeXMva2Vy bi9rZXJuX3NodXRkb3duLmM6NDE2CiMyICAweDgwNjc0ZjI1IGluIHBhbmljIChmbXQ9VmFy aWFibGUgImZtdCIgaXMgbm90IGF2YWlsYWJsZS4pIGF0IC91c3Ivc3JjL3N5cy9rZXJuL2tl cm5fc2h1dGRvd24uYzo1OTAKIzMgIDB4ODA4YzFhNGUgaW4gdm1fZmF1bHQgKG1hcD0weDgx NjkwMDAwLCB2YWRkcj0yMTkwMDgyMDQ4LCBmYXVsdF90eXBlPTEgJ1wwMDEnLCBmYXVsdF9m bGFncz0wKSBhdCAvdXNyL3NyYy9zeXMvdm0vdm1fZmF1bHQuYzoyODMKIzQgIDB4ODA5MWU2 NWIgaW4gdHJhcF9wZmF1bHQgKGZyYW1lPTB4YjQ1ZTFhNDQsIHVzZXJtb2RlPTAsIGV2YT0y MTkwMDgzOTc2KSBhdCAvdXNyL3NyYy9zeXMvaTM4Ni9pMzg2L3RyYXAuYzo4NDAKIzUgIDB4 ODA5MWYxNWMgaW4gdHJhcCAoZnJhbWU9MHhiNDVlMWE0NCkgYXQgL3Vzci9zcmMvc3lzL2kz ODYvaTM4Ni90cmFwLmM6NTMzCiM2ICAweDgwOTAyM2JjIGluIGNhbGx0cmFwICgpIGF0IC91 c3Ivc3JjL3N5cy9pMzg2L2kzODYvZXhjZXB0aW9uLnM6MTY2CiM3ICAweDgwOGQ4ZjVhIGlu IHZtX3Jlc2Vydl9sZXZlbF9pZmZ1bGxwb3AgKG09MHg4OTA5OTg3MCkgYXQgL3Vzci9zcmMv c3lzL3ZtL3ZtX3Jlc2Vydi5jOjUxMgojOCAgMHg4MDkxYjQ4NyBpbiBwbWFwX2VudGVyIChw bWFwPTB4ODRkNTg2MjAsIHZhPTg2MDE2NDA5NiwgYWNjZXNzPTEgJ1wwMDEnLCBtPTB4ODkw OTk4NzAsIHByb3Q9MyAnXDAwMycsIHdpcmVkPTApIGF0IC91c3Ivc3JjL3N5cy9pMzg2L2kz ODYvcG1hcC5jOjM0MzYKIzkgIDB4ODA4YzM0MWYgaW4gdm1fZmF1bHQgKG1hcD0weDg0ZDU4 NTcwLCB2YWRkcj04NjAxNjQwOTYsIGZhdWx0X3R5cGU9MSAnXDAwMScsIGZhdWx0X2ZsYWdz PVZhcmlhYmxlICJmYXVsdF9mbGFncyIgaXMgbm90IGF2YWlsYWJsZS4pIGF0IC91c3Ivc3Jj L3N5cy92bS92bV9mYXVsdC5jOjk0NQojMTAgMHg4MDkxZTU2OSBpbiB0cmFwX3BmYXVsdCAo ZnJhbWU9MHhiNDVlMWQzOCwgdXNlcm1vZGU9MSwgZXZhPTg2MDE2NDA5NikgYXQgL3Vzci9z cmMvc3lzL2kzODYvaTM4Ni90cmFwLmM6ODI4CiMxMSAweDgwOTFlZmNlIGluIHRyYXAgKGZy YW1lPTB4YjQ1ZTFkMzgpIGF0IC91c3Ivc3JjL3N5cy9pMzg2L2kzODYvdHJhcC5jOjQwMQoj MTIgMHg4MDkwMjNiYyBpbiBjYWxsdHJhcCAoKSBhdCAvdXNyL3NyYy9zeXMvaTM4Ni9pMzg2 L2V4Y2VwdGlvbi5zOjE2NgojMTMgMHgzMDYxNmZkZSBpbiA/PyAoKQo= --------------060908070400070800040406 Content-Type: 
application/octet-stream; name="bt.txt.sig" Content-Transfer-Encoding: base64 Content-Disposition: attachment; filename="bt.txt.sig" iQEcBAABAgAGBQJMjWetAAoJEJBXh4mJ2FR+NoMH/1KUKsI44QYvf3ABcN8976Byem3JBMWn VjK1R3CnLtY8RKdaGgVSVcQPwnikTbosjr24wEP9v4iyzD+C8ix1S+0+wRct14fk6N4sB+5a pvnDFgk5uidxYbWLjLt3puw6wyKNbyttwen83HL9GB+ovMkHc4Rq3u6C9lDuiwzn4lnqXT9x z5VZgROKz/rjmcwLWb0nUquxQ6Qh8ZedwoQNS5luIOle+LJHLGm8b9gII24gDz0GIB/ovYSC PaOL6g4LJvgA1XLBEhw+iU1SA1ALy9i5Jf5c/GeuDepM5sYD+5XUpLNXQlqAvtFfCruJP5Y0 TUWavsff9tB6PmARB/pRIhA= --------------060908070400070800040406-- From owner-freebsd-fs@FreeBSD.ORG Mon Sep 13 00:15:15 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 69CAD106566C for ; Mon, 13 Sep 2010 00:15:15 +0000 (UTC) (envelope-from fjwcash@gmail.com) Received: from mail-fx0-f54.google.com (mail-fx0-f54.google.com [209.85.161.54]) by mx1.freebsd.org (Postfix) with ESMTP id F04A38FC0A for ; Mon, 13 Sep 2010 00:15:14 +0000 (UTC) Received: by fxm4 with SMTP id 4so3324345fxm.13 for ; Sun, 12 Sep 2010 17:15:13 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:received:in-reply-to :references:date:message-id:subject:from:to:content-type :content-transfer-encoding; bh=RKrlW6fSU+7v7d7Bd9UvvIRxS7K9Oks2rEzPPh45Xb4=; b=Bi05XORmqRp6VqexQx379mU/qfl1+ykIbhpukLYOmbtwLn3SXjdlwWeTlJb6sYTUqu ifYNP+7IG7o9yp2BUf61/16MYMkMzHo5wdYL3jWyaClZgJaYYlYlJUoiT5QbBCBAVJAc BPxGHghCBWbG5BPd9zjHPf1m1LO6Li0aw4GyQ= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :content-type:content-transfer-encoding; b=fvJs7SofzAucH7/K3cIN6pbr2H8qiXWxPebevI4bgRuLzq+WTGmSMcHnPZzYU8cQuy mGkrYKLBAwFKa0yMuRGQt/vTdqetV/DxO/zRDhKO0EpWvQxdJ4SASIy8poaR0CkUi29c JBcfvljb8C9EqWZtHalYYXzX48B9851ehjoKc= MIME-Version: 1.0 Received: by 10.223.126.15 with SMTP id a15mr2719774fas.67.1284336913805; Sun, 12 Sep 2010 17:15:13 -0700 (PDT) Received: by 10.223.110.197 with HTTP; Sun, 12 Sep 2010 17:15:13 -0700 (PDT) In-Reply-To: <201009121640.39157.josh@tcbug.org> References: <4C8D234F.40204@quip.cz> <201009121640.39157.josh@tcbug.org> Date: Sun, 12 Sep 2010 17:15:13 -0700 Message-ID: From: Freddie Cash To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Subject: Re: FreeNAS vs OpenSolaris vs Nexenta ZFS Benchmarks X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 13 Sep 2010 00:15:15 -0000 On Sun, Sep 12, 2010 at 2:40 PM, Josh Paetzel wrote: > I'm a tad confused about the whole "sharing a ZFS filesystem over iSCSI".= =C2=A0I > thought iSCSI was used to eport LUNs that you then put a filesystem on wi= th a > client. (Everyone here probably already knows this, but thought I'd pass it along anyway.) Correct. You create ZFS volumes (zfs create -V), which appear as block devices (/dev/zvol/poolname/volumename), which are then exported via iSCSI to remote clients. The remote clients then format the iSCSI LUN any way they please, with any filesystem they want (treating it just like a normal, local harddrive). No ZFS filesystem is involved. > iSCSI on FreeBSD is fairly slow compared to other solutions, I think ther= e is > some very preliminary work to fix that going on. 
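To make the zvol point concrete, the setup being described is roughly the following (pool name, volume name, and size are placeholders):

  # zfs create -V 100g tank/iscsivol    # -V creates a zvol (block device), not a ZFS filesystem
  # ls -l /dev/zvol/tank/iscsivol       # this device node is what the iSCSI target exports as a LUN

The target daemon (iscsi-target or istgt) is then pointed at /dev/zvol/tank/iscsivol, and the initiator sees an ordinary disk that it can newfs, format as NTFS, or whatever else it likes; ZFS only provides the backing store.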
Using which target (iscsi-target or istgt)? Or are they both slow? --=20 Freddie Cash fjwcash@gmail.com From owner-freebsd-fs@FreeBSD.ORG Mon Sep 13 00:22:20 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 7E6C9106566C for ; Mon, 13 Sep 2010 00:22:20 +0000 (UTC) (envelope-from universite@ukr.net) Received: from otrada.od.ua (universite-1-pt.tunnel.tserv24.sto1.ipv6.he.net [IPv6:2001:470:27:140::2]) by mx1.freebsd.org (Postfix) with ESMTP id E3B258FC21 for ; Mon, 13 Sep 2010 00:22:19 +0000 (UTC) Received: from [10.0.0.10] (phenom.otrada.od.ua [10.0.0.10]) (authenticated bits=0) by otrada.od.ua (8.14.3/8.14.3) with ESMTP id o8D0MFXm074778 for ; Mon, 13 Sep 2010 03:22:15 +0300 (EEST) (envelope-from universite@ukr.net) X-Authentication-Warning: otrada.od.ua: Host phenom.otrada.od.ua [10.0.0.10] claimed to be [10.0.0.10] Message-ID: <4C8D6EB7.6060205@ukr.net> Date: Mon, 13 Sep 2010 03:22:15 +0300 From: "Vladislav V. Prodan" User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; ru; rv:1.9.2.9) Gecko/20100825 Thunderbird/3.1.3 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 7bit X-Spam-Status: No, score=-0.8 required=5.0 tests=ALL_TRUSTED,AWL autolearn=failed version=3.2.5 X-Spam-Checker-Version: SpamAssassin 3.2.5 (2008-06-10) on mary-teresa.otrada.od.ua X-Virus-Scanned: clamav-milter 0.95.3 at mary-teresa.otrada.od.ua X-Virus-Status: Clean Subject: I would like to compare snapshots ZFS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 13 Sep 2010 00:22:20 -0000 Something like: zfs diff [snapshot] [snapshot|filesystem] Inspired by posts: http://netmgt.blogspot.com/2010/03/zfs-snapshot-differences.html and http://thnetos.wordpress.com/2007/06/05/not-as-simple-perl-script-for-zfs-snapshot-auditing/ From owner-freebsd-fs@FreeBSD.ORG Mon Sep 13 02:28:06 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 9F3BF106579B for ; Mon, 13 Sep 2010 02:28:06 +0000 (UTC) (envelope-from TERRY@tmk.com) Received: from server.tmk.com (server.tmk.com [204.141.35.63]) by mx1.freebsd.org (Postfix) with ESMTP id 785418FC12 for ; Mon, 13 Sep 2010 02:28:06 +0000 (UTC) Received: from tmk.com by tmk.com (PMDF V6.4 #37010) id <01NRTG0OUH9C0022AD@tmk.com> for freebsd-fs@freebsd.org; Sun, 12 Sep 2010 22:28:01 -0400 (EDT) Date: Sun, 12 Sep 2010 22:21:16 -0400 (EDT) From: Terry Kennedy In-reply-to: "Your message dated Sun, 12 Sep 2010 11:28:08 -0400 (EDT)" <954605288.782335.1284305288639.JavaMail.root@erie.cs.uoguelph.ca> To: Rick Macklem Message-id: <01NRTGJQPS9S0022AD@tmk.com> MIME-version: 1.0 Content-type: TEXT/PLAIN; charset=utf-8 References: <01NRSE7GZJEC0022AD@tmk.com> Cc: freebsd-fs@freebsd.org Subject: Re: Weird Linux - FreeBSD/ZFS NFSv4 interoperability problem X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 13 Sep 2010 02:28:06 -0000 > > Now, let's try the newnfs client (cache should have been primed by the > > first run, so we'd expect this to be faster): > > Just thought I'd mention that, since it is a different 
mount, the caches > won't be primed, which is good, because that would mask differences. I was referring to the cache on the server side. While that disk subsystem is fast (about 600MB/sec sustained), the test locally on that server reported about 4GB/sec. > Ok, good. You aren't seeing what the two guys reported (they were really > slow, at less than 2Mbytes/sec). If you would like to, you could try the > following, since the two clients use different default r/w sizes. > > # mount -t newnfs -o nfsv3,rsize=32768,wsize=32768 new-rz1:/data /foo > > and see how it changes the read rate. I don't know why there is a > factor of 2 difference (if it isn't the different r/w size), but it > will probably get resolved as I bring the experimental client up to date. Not so good: (0:18) new-gate:/foo/Backups/Suzanne VAIO# dd if=0cff3d7b_VOL.spf of=/dev/null bs=1m 6010+1 records in 6010+1 records out 6301945344 bytes transferred in 159.656789 secs (39471828 bytes/sec) Caching seems to help: (0:19) new-gate:/foo/Backups/Suzanne VAIO# dd if=0cff3d7b_VOL.spf of=/dev/null bs=1m 6010+1 records in 6010+1 records out 6301945344 bytes transferred in 4.456822 secs (1413999810 bytes/sec) Terry Kennedy http://www.tmk.com terry@tmk.com New York, NY USA
From owner-freebsd-fs@FreeBSD.ORG Mon Sep 13 02:30:20 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 09D32106564A for ; Mon, 13 Sep 2010 02:30:20 +0000 (UTC) (envelope-from rafaelhfaria@cenadigital.com.br) Received: from mail-wy0-f182.google.com (mail-wy0-f182.google.com [74.125.82.182]) by mx1.freebsd.org (Postfix) with ESMTP id 9BE958FC17 for ; Mon, 13 Sep 2010 02:30:19 +0000 (UTC) Received: by wyb33 with SMTP id 33so6578532wyb.13 for ; Sun, 12 Sep 2010 19:30:18 -0700 (PDT) Received: by 10.216.46.15 with SMTP id q15mr3593871web.103.1284343269443; Sun, 12 Sep 2010 19:01:09 -0700 (PDT) MIME-Version: 1.0 Received: by 10.216.160.75 with HTTP; Sun, 12 Sep 2010 19:00:39 -0700 (PDT) In-Reply-To: References: <4C8D234F.40204@quip.cz> <201009121640.39157.josh@tcbug.org> From: Rafael Henrique Faria Date: Sun, 12 Sep 2010 23:00:39 -0300 Message-ID: To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Subject: Re: FreeNAS vs OpenSolaris vs Nexenta ZFS Benchmarks X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 13 Sep 2010 02:30:20 -0000 On Sun, Sep 12, 2010 at 21:15, Freddie Cash wrote: > > Using which target (iscsi-target or istgt)? Or are they both slow? > > This is a good question. I know that iscsi-target (ported from NetBSD some time ago) is quite old and doesn't respect the current iSCSI protocol. But the new implementation from Aoyama (istgt) is a lot better, and I don't know if the tested FreeNAS was already using istgt. I have been using istgt for a while and have some problems with it, but I don't know about the performance... if someone has any news about it, that would be very good.
-- Rafael Henrique da Silva Faria From owner-freebsd-fs@FreeBSD.ORG Mon Sep 13 03:06:52 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 9D9CD106564A for ; Mon, 13 Sep 2010 03:06:52 +0000 (UTC) (envelope-from fjwcash@gmail.com) Received: from mail-fx0-f54.google.com (mail-fx0-f54.google.com [209.85.161.54]) by mx1.freebsd.org (Postfix) with ESMTP id 32CCB8FC08 for ; Mon, 13 Sep 2010 03:06:51 +0000 (UTC) Received: by fxm4 with SMTP id 4so3377743fxm.13 for ; Sun, 12 Sep 2010 20:06:51 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:received:in-reply-to :references:date:message-id:subject:from:to:content-type :content-transfer-encoding; bh=9A90UofWdeE+aCJUld3P7uFU40wNWpX6dT8teD66wkI=; b=IzLRs8j+F97Ie3sEVMipdocZ5sG2cba6Z/2Bfd1v9SFMlqw7mst9rzc5aKyO0ofLnx 6ckhTQfePof0srobx8TMygow1j9B0bvPvdSidbB+39wLBKi+Bna6xY7hGUpLMAxR6SR9 TeceZByjbxqTBQdpnEdJB4n/vzh5Err29KBCI= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :content-type:content-transfer-encoding; b=n6y7I6PoivRBlZRJGs+8c/zgSnlfbWu+/VL91rdDvegRcMhORFm5lQIIOTjhyq+EkO cf3X0AxDTf+44au6TOrGGmtH8l2UecAVbmuvRoa+F3amrRjLAW4wzxCcsKQvN7gW3aUu ni2T03/0KBm8wQti1xU/nTmDjZNqIdQVLbvkE= MIME-Version: 1.0 Received: by 10.223.116.196 with SMTP id n4mr2797586faq.75.1284347210812; Sun, 12 Sep 2010 20:06:50 -0700 (PDT) Received: by 10.223.110.197 with HTTP; Sun, 12 Sep 2010 20:06:50 -0700 (PDT) In-Reply-To: <4C8D6EB7.6060205@ukr.net> References: <4C8D6EB7.6060205@ukr.net> Date: Sun, 12 Sep 2010 20:06:50 -0700 Message-ID: From: Freddie Cash To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Subject: Re: I would like to compare snapshots ZFS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 13 Sep 2010 03:06:52 -0000 On Sun, Sep 12, 2010 at 5:22 PM, Vladislav V. Prodan w= rote: > Something like: > =C2=A0 =C2=A0 =C2=A0 =C2=A0zfs diff [snapshot] [snapshot|filesystem] > > Inspired by posts: > =C2=A0 =C2=A0 =C2=A0 =C2=A0http://netmgt.blogspot.com/2010/03/zfs-snapsho= t-differences.html > and > http://thnetos.wordpress.com/2007/06/05/not-as-simple-perl-script-for-zfs= -snapshot-auditing/ Search the archives for this very list (or possibly -current), and you'll find a notice for testing of a ZFSv28 patch, which includes this feature. 
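For what it's worth, the usage that feature provides looks like this (the dataset and snapshot names are made up for the example):

  # zfs diff tank/home@monday tank/home@tuesday

Each output line names one changed path, prefixed with M (modified), + (created), - (removed), or R (renamed), which is what the scripts linked above are approximating.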
:) --=20 Freddie Cash fjwcash@gmail.com From owner-freebsd-fs@FreeBSD.ORG Mon Sep 13 03:15:53 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id C7492106564A for ; Mon, 13 Sep 2010 03:15:53 +0000 (UTC) (envelope-from jhellenthal@gmail.com) Received: from mail-yw0-f54.google.com (mail-yw0-f54.google.com [209.85.213.54]) by mx1.freebsd.org (Postfix) with ESMTP id 7BA998FC14 for ; Mon, 13 Sep 2010 03:15:53 +0000 (UTC) Received: by ywt2 with SMTP id 2so2166449ywt.13 for ; Sun, 12 Sep 2010 20:15:52 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:received:received:sender:message-id:date:from :user-agent:mime-version:to:cc:subject:references:in-reply-to :x-enigmail-version:content-type:content-transfer-encoding; bh=1kXpaDuiif5uvF22o1pw+5wHU/racuq3UN8NFzKX6qc=; b=aJKlBteLWSphahohLo8pKsLxN7zjtYLDbXUmpNCwKU8m9LUB1051vvLfWKgL+JWDrd mxfIYuKFkbs4ycN9P/Jo8/xF9cJlaqMoaSfvXGG843pQmqbjQt1lUq0ULh5Gz3dkuAMy IxRo+GUrmfGAiy86FXpPV45qQPI6V8jjeaZXo= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=sender:message-id:date:from:user-agent:mime-version:to:cc:subject :references:in-reply-to:x-enigmail-version:content-type :content-transfer-encoding; b=vsvz3jdt9y93JEI/nXx9fd6EtDkXKlID0pVpV5GJl1KsWU5UgNFNLjg51jrkSrljLN u08TeFBY7Tp4XCIkdXm3ajZR7lG1knXV9GE0DQFA1Ym4bupuHq+nah4nonWu6ujQOwcg mlKDLRpZBpypR5mthqq65eZaaCPmo01arlRMg= Received: by 10.150.50.10 with SMTP id x10mr1044088ybx.364.1284347749718; Sun, 12 Sep 2010 20:15:49 -0700 (PDT) Received: from centel.dataix.local (adsl-99-181-137-20.dsl.klmzmi.sbcglobal.net [99.181.137.20]) by mx.google.com with ESMTPS id w3sm1927423ybi.19.2010.09.12.20.15.48 (version=SSLv3 cipher=RC4-MD5); Sun, 12 Sep 2010 20:15:48 -0700 (PDT) Sender: "J. Hellenthal" Message-ID: <4C8D9762.6030607@DataIX.net> Date: Sun, 12 Sep 2010 23:15:46 -0400 From: jhell User-Agent: Mozilla/5.0 (X11; U; FreeBSD i386; en-US; rv:1.9.2.9) Gecko/20100908 Lightning/1.0b1 Thunderbird MIME-Version: 1.0 To: "Vladislav V. Prodan" References: <4C8D6EB7.6060205@ukr.net> In-Reply-To: <4C8D6EB7.6060205@ukr.net> X-Enigmail-Version: 1.1.2 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org Subject: Re: I would like to compare snapshots ZFS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 13 Sep 2010 03:15:54 -0000 -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 09/12/2010 20:22, Vladislav V. Prodan wrote: > Something like: > zfs diff [snapshot] [snapshot|filesystem] > > > Inspired by posts: > http://netmgt.blogspot.com/2010/03/zfs-snapshot-differences.html > and > http://thnetos.wordpress.com/2007/06/05/not-as-simple-perl-script-for-zfs-snapshot-auditing/ Good good another tester. Scan the mailing lists for freebsd.org with google for ZFS v28. I believe it has diff support. This does not come without a hitch though, you will essentially be a tester so I hope you are prepared for what you are embarking on. To save you a little time searching I made this for you: http://tinyurl.com/2c6lq83 <<< Don't know why, but I love that thing. Specifically the ones you want to read are the messages from Pawel and Andriy aka's pjd@ & avg@. 
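Very roughly, testing it means building a patched source tree; the outline below is only a sketch, and the patch file name is a placeholder for whichever patch set Pawel posted:

  # svn co svn://svn.freebsd.org/base/head /usr/src
  # cd /usr/src && patch < /path/to/zfs_v28.patch    # use -p0/-p1 as appropriate for the posted diff
  # make buildworld buildkernel
  # make installkernel installworld    # or follow the usual handbook sequence with a reboot in between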
There are patches specifically for HEAD only and a VirtualBox appliance posted from Pawel, and I believe Martin or Andriy have put together a full list of patches so you don't have to guess at what to apply. If you are not comfortable patching and/or checking out src for HEAD then I certainly recommend giving the VirtualBox appliance posted by Pawel a try before you shun it off. Keep in mind that you may not see general in tree support for this for a unspecified amount of time until it is actually slated to be committed. So with a final few words, Good luck, read those messages thoroughly & have fun!. Regards, - -- jhell,v -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.16 (FreeBSD) iQEcBAEBAgAGBQJMjZdiAAoJEJBXh4mJ2FR+7LQH/0qKUpYaZ1ZpFoIiW+9NHd9f jCJAesk/h/4N9c70ZbFvef2bPbqklCVHqL3mQ7S8eBBY0in2l2tiRhQAdHKFWM/U EFFt1P247iIDMIKlLEKilJyoE6p5HoPlnaoKjWw7loVEB5alGotI3SjwNwV4sX8z O9Y+T66ykXIKeFyEd+2WOBAIzm0OkbI8+9hLjM63Jhesoz/n4T8NMQJMSTvTuQcf iYjJPlFE1+3JNR9mTRi6R/woan7A5vVo9XTwTJEh8kz9utM+O7429vVY+ZzfXTmV mBZ3IGCNsMeIRie34s8xItZwe9/vqr48qS2MugA+Qp8s9FSY3vhhUZYKZMT7B7s= =vDQT -----END PGP SIGNATURE----- From owner-freebsd-fs@FreeBSD.ORG Mon Sep 13 05:01:55 2010 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id B8349106566B for ; Mon, 13 Sep 2010 05:01:55 +0000 (UTC) (envelope-from james@jrv.org) Received: from mail.jrv.org (adsl-70-243-84-13.dsl.austtx.swbell.net [70.243.84.13]) by mx1.freebsd.org (Postfix) with ESMTP id 279CD8FC08 for ; Mon, 13 Sep 2010 05:01:54 +0000 (UTC) Received: from kremvax.housenet.jrv (kremvax.housenet.jrv [192.168.3.124]) by mail.jrv.org (8.14.3/8.14.3) with ESMTP id o8D4Ej33038971; Sun, 12 Sep 2010 23:14:45 -0500 (CDT) (envelope-from james@jrv.org) Authentication-Results: mail.jrv.org; domainkeys=pass (testing) header.from=james@jrv.org DomainKey-Signature: a=rsa-sha1; s=enigma; d=jrv.org; c=nofws; q=dns; h=message-id:date:from:user-agent:mime-version:to:cc:subject: references:in-reply-to:x-enigmail-version:content-type:content-transfer-encoding; b=gDbZM3m4xmSAxwG0hjmC1mWpQdD9VXX4qKZGx1OrDb4xF1IqetElR/EXPYVky8ZZg /rTxdMIMWBjztOFBQkHEun5FUyuAW3KhE52cJV4BObfqQAJUY8988bf+qajq1xROQSO KjjEx1HxFhdTCG910Ss1KYd0vK5cucZ+VRU1OfQ= Message-ID: <4C8DA535.7050007@jrv.org> Date: Sun, 12 Sep 2010 23:14:45 -0500 From: "James R. Van Artsdalen" User-Agent: Thunderbird 2.0.0.24 (Macintosh/20100228) MIME-Version: 1.0 To: Pawel Jakub Dawidek References: <20100831215915.GE1932@garage.freebsd.pl> In-Reply-To: <20100831215915.GE1932@garage.freebsd.pl> X-Enigmail-Version: 0.96.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@FreeBSD.org Subject: ZFS v28: ZFS recv abort X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 13 Sep 2010 05:01:55 -0000 amd64, SVN 212080 with pjd's original v28 patch /sbin/zfs aborts receiving an incrementing stream. bigback:/root# zfs send -R -I @then bigtex@now | ssh kraken /sbin/zfs recv -dvF bigz Assertion failed: (!clp->cl_alldependents), file /usr/src/cddl/lib/libzfs/../../../cddl/contrib/opensolaris/lib/libzfs/common/libzfs_changelist.c, line 470. 
(the error message is from receiving system kraken) At the failing ASSERT(!clp->cl_alldependents) clp points to (gdb) p * (prop_changelist_t *) data $2 = {cl_prop = ZFS_PROP_MOUNTPOINT, cl_realprop = ZFS_PROP_NAME, cl_shareprop = ZFS_PROP_TYPE, cl_pool = 0x8018a8400, cl_list = 0x802243c50, cl_waslegacy = B_FALSE, cl_allchildren = B_FALSE, \ cl_alldependents = B_TRUE, cl_mflags = 524288, cl_gflags = 0, cl_haszonedchild = B_FALSE, cl_sorted = B_FALSE} One level up in parent function zfs_iter_dependents() zhp points to (gdb) p *zhp $3 = {zfs_hdl = 0x801810800, zpool_hdl = 0x801892140, zfs_name = "bigz/recv-2818-1", '\0' , zfs_type = ZFS_TYPE_FILESYSTEM, zfs_head_type = ZFS_TYPE_FILESYSTEM, zfs_dmustats\ = {dds_num_clones = 0, dds_creation_txg = 111554, dds_guid = 10368215686395422194, dds_type = DMU_OST_ZFS, dds_is_snapshot = 0 '\0', dds_inconsistent = 0 '\0', dds_origin = '\0' }, zfs_props = 0x801831040, zfs_user_props = 0x8018310c0, zfs_recvd_props = 0x0, zfs_mntcheck = B_FALSE, zfs_mntopts = 0x0, zfs_props_table = 0x0} I'm guessing that the filesystem was renamed, hence the "bigz/recv-2818-1" pool/dataset designation. Or perhaps a prior recv didn't properly change a temporary pool/recv-* dataset name back to the right name after that prior recv, causing the next recv to fail. From owner-freebsd-fs@FreeBSD.ORG Mon Sep 13 06:33:10 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 5BF271065789 for ; Mon, 13 Sep 2010 06:33:10 +0000 (UTC) (envelope-from rincebrain@gmail.com) Received: from mail-fx0-f54.google.com (mail-fx0-f54.google.com [209.85.161.54]) by mx1.freebsd.org (Postfix) with ESMTP id 121808FC0A for ; Mon, 13 Sep 2010 06:33:08 +0000 (UTC) Received: by fxm4 with SMTP id 4so3427206fxm.13 for ; Sun, 12 Sep 2010 23:33:08 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:received:date:message-id :subject:from:to:content-type; bh=2gAXjljJ67tR/Fe8fdubbWOl8mPC9ByBy8RC1V2ROr4=; b=rYp/dfDQO8fX3QRRDOBKcUBB7tZBtQ8TepQ4qCehHPWCvGnKIAN3GipBSUMI4hwpNF yyF/sC2+Da5QXKTF8udFahRJzHE6F/wANdSYalAQoixb2wysgmm5paKcBP5+Wj9dwE8c RlNRl1DAd5eWPwIUGvHXY6/d51quQpcYQQuCI= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:date:message-id:subject:from:to:content-type; b=DYGmcym0Zq7/ijj7vZjPirhiOUqrNt8r9wh16LgBj7+GdaB+9L3jToh0WUkOAnukm7 SKI7poB5R5cN0JGeEauhYeGb+lyzFNghR/VB+McDQGEXqVDBkJoUB+nDUWEsWFJMw3Ap KEeM2k1DpDVlHYHLVATob6UdP9BXUFXJdbZk4= MIME-Version: 1.0 Received: by 10.239.190.78 with SMTP id w14mr197783hbh.197.1284358039431; Sun, 12 Sep 2010 23:07:19 -0700 (PDT) Received: by 10.239.152.76 with HTTP; Sun, 12 Sep 2010 23:07:19 -0700 (PDT) Date: Mon, 13 Sep 2010 02:07:19 -0400 Message-ID: From: Rich To: freebsd-fs Content-Type: multipart/mixed; boundary=001485f7cbcab469c904901de88d Subject: ZFS v28: panic: zfs accessing past end of object X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 13 Sep 2010 06:33:10 -0000 --001485f7cbcab469c904901de88d Content-Type: text/plain; charset=ISO-8859-1 My experimental v28 server panicked today when I was testing it by pushing a backup of my laptop to it over NFS with rsync. Dedup was enabled on the pool, NFS sharing done via sharenfs property, nothing else of interest that I'm aware of was on. 
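For context, the two settings mentioned here are ordinary dataset properties; with placeholder names, they would have been enabled with something like:

  # zfs set dedup=on tank               # dedup needs the newer pool format (v21+), which the v28 patch provides
  # zfs set sharenfs=on tank/backups    # on FreeBSD this feeds the export to mountd via /etc/zfs/exports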
Running on r212074 with the original v28 patch, on an amd64 system. I've attached core.txt.0, please let me know what else I can provide. - Rich [core.txt.0 attachment: base64-encoded crash-dump summary omitted]
YSAgICAgNCAgICAgMUsgICAgICAgLSAgICAgICAgNCAgNjQKCi0tLS0tLS0tLS0tLS0tLS0tLS0t LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQp2bXN0 YXQgLXoKCklURU0gICAgICAgICAgICAgICAgICAgU0laRSAgTElNSVQgICAgIFVTRUQgICAgIEZS RUUgICAgICBSRVEgRkFJTCBTTEVFUAoKVU1BIEtlZ3M6ICAgICAgICAgICAgICAgMjA4LCAgICAg IDAsICAgICAyNTEsICAgICAgIDQsICAgICAyNTEsICAgMCwgICAwClVNQSBab25lczogICAgICAg ICAgICAgIDY0MCwgICAgICAwLCAgICAgMjUxLCAgICAgICAxLCAgICAgMjUxLCAgIDAsICAgMApV TUEgU2xhYnM6ICAgICAgICAgICAgICA1NjgsICAgICAgMCwgICAgMjY2MSwgICAgIDY3OCwgICAg OTQ3MywgICAwLCAgIDAKVU1BIFJDbnRTbGFiczogICAgICAgICAgNTY4LCAgICAgIDAsICAgICA5 NTMsICAgICAgIDYsICAgICA5NTMsICAgMCwgICAwClVNQSBIYXNoOiAgICAgICAgICAgICAgIDI1 NiwgICAgICAwLCAgICAgIDgxLCAgICAgICA5LCAgICAgIDgxLCAgIDAsICAgMAoxNiBCdWNrZXQ6 ICAgICAgICAgICAgICAxNTIsICAgICAgMCwgICAgIDE5MCwgICAgICAxMCwgICAgIDE5MCwgICAw LCAgIDAKMzIgQnVja2V0OiAgICAgICAgICAgICAgMjgwLCAgICAgIDAsICAgICAxMDgsICAgICAg IDQsICAgICAxMDgsICAgMSwgICAwCjY0IEJ1Y2tldDogICAgICAgICAgICAgIDUzNiwgICAgICAw LCAgICAgMTI0LCAgICAgICAyLCAgICAgMTI0LCAgNjYsICAgMAoxMjggQnVja2V0OiAgICAgICAg ICAgIDEwNDgsICAgICAgMCwgICAgIDE5MywgICAgICAgMiwgICAgIDE5MywgICAwLCAgIDAKVk0g T0JKRUNUOiAgICAgICAgICAgICAgMjE2LCAgICAgIDAsICAgIDU5ODcsICAgICAgNzksICAgMjI0 ODgsICAgMCwgICAwCk1BUDogICAgICAgICAgICAgICAgICAgIDIzMiwgICAgICAwLCAgICAgICA3 LCAgICAgIDI1LCAgICAgICA3LCAgIDAsICAgMApLTUFQIEVOVFJZOiAgICAgICAgICAgICAxMjAs IDEyNzQ0MSwgICAgIDE5MSwgICAgIDQyOSwgICAxNzMxNSwgICAwLCAgIDAKTUFQIEVOVFJZOiAg ICAgICAgICAgICAgMTIwLCAgICAgIDAsICAgICA0MzcsICAgICAxNTIsICAgMzI1MDksICAgMCwg ICAwCkRQIGZha2VwZzogICAgICAgICAgICAgIDEyMCwgICAgICAwLCAgICAgICAwLCAgICAgICAw LCAgICAgICAwLCAgIDAsICAgMApTRyBmYWtlcGc6ICAgICAgICAgICAgICAxMjAsICAgICAgMCwg ICAgICAgMCwgICAgICAgMCwgICAgICAgMCwgICAwLCAgIDAKbXRfem9uZTogICAgICAgICAgICAg ICAyMDY0LCAgICAgIDAsICAgICAyODIsICAgICAgMjMsICAgICAyODIsICAgMCwgICAwCjE2OiAg ICAgICAgICAgICAgICAgICAgICAxNiwgICAgICAwLCAgICAgIDIyLCAgICAgMzE0LCAgICAgMTg5 LCAgIDAsICAgMAoxNjogICAgICAgICAgICAgICAgICAgICAgMTYsICAgICAgMCwgICAgMTYxOSwg ICAgIDIyOSwgICAgMjIzMCwgICAwLCAgIDAKMTY6ICAgICAgICAgICAgICAgICAgICAgIDE2LCAg ICAgIDAsICAgICAgIDMsICAgICAxNjUsICAgICAgIDMsICAgMCwgICAwCjE2OiAgICAgICAgICAg ICAgICAgICAgICAxNiwgICAgICAwLCAgICAgIDQxLCAgICAgMjk1LCAgIDMxNjQwLCAgIDAsICAg MAoxNjogICAgICAgICAgICAgICAgICAgICAgMTYsICAgICAgMCwgICAgICAyOCwgICAgIDMwOCwg ICAgIDM5OCwgICAwLCAgIDAKMTY6ICAgICAgICAgICAgICAgICAgICAgIDE2LCAgICAgIDAsICAg ICAzMTAsICAgICAzNjIsICAgIDI5MjAsICAgMCwgICAwCjE2OiAgICAgICAgICAgICAgICAgICAg ICAxNiwgICAgICAwLCAgICAgICA3LCAgICAgMzI5LCAgICAgMTY2LCAgIDAsICAgMAoxNjogICAg ICAgICAgICAgICAgICAgICAgMTYsICAgICAgMCwgICAgIDY0MiwgICAgIDUzNCwgICAxMDI2NSwg ICAwLCAgIDAKMzI6ICAgICAgICAgICAgICAgICAgICAgIDMyLCAgICAgIDAsICAgICAgMTgsICAg ICAyODUsICAgICAgNTEsICAgMCwgICAwCjMyOiAgICAgICAgICAgICAgICAgICAgICAzMiwgICAg ICAwLCAgICAxNzczLCAgICAgMjQ3LCAgICAyMDAzLCAgIDAsICAgMAozMjogICAgICAgICAgICAg ICAgICAgICAgMzIsICAgICAgMCwgICAgICAxMCwgICAgIDI5MywgICAgICA5MiwgICAwLCAgIDAK MzI6ICAgICAgICAgICAgICAgICAgICAgIDMyLCAgICAgIDAsICAgICAgNDcsICAgICAyNTYsICAg IDczMDMsICAgMCwgICAwCjMyOiAgICAgICAgICAgICAgICAgICAgICAzMiwgICAgICAwLCAgICAg IDMxLCAgICAgMjcyLCAgICAgMTg2LCAgIDAsICAgMAozMjogICAgICAgICAgICAgICAgICAgICAg MzIsICAgICAgMCwgICAgICA5NSwgICAgIDQxMCwgICAgMTgxMiwgICAwLCAgIDAKMzI6ICAgICAg ICAgICAgICAgICAgICAgIDMyLCAgICAgIDAsICAgICAgODcsICAgICAyMTYsICAgICA1NjEsICAg MCwgICAwCjMyOiAgICAgICAgICAgICAgICAgICAgICAzMiwgICAgICAwLCAgICAxODk4LCAgICAg NjI3LCAgIDEzMzcyLCAgIDAsICAgMAo2NDogICAgICAgICAgICAgICAgICAgICAgNjQsICAgICAg MCwgICAgICAxMywgICAgIDE1NSwgICAgIDM2MCwgICAwLCAgIDAKNjQ6ICAgICAgICAgICAgICAg 
ICAgICAgIDY0LCAgICAgIDAsICAgICAyNzksICAgICAxMTMsICAgICAzMjEsICAgMCwgICAwCjY0 OiAgICAgICAgICAgICAgICAgICAgICA2NCwgICAgICAwLCAgICAgIDE1LCAgICAgMTUzLCAgICAg MjU2LCAgIDAsICAgMAo2NDogICAgICAgICAgICAgICAgICAgICAgNjQsICAgICAgMCwgICAgIDcz MywgICAgIDEwNywgICAzMDU4NSwgICAwLCAgIDAKNjQ6ICAgICAgICAgICAgICAgICAgICAgIDY0 LCAgICAgIDAsICAgICAgNzksICAgICAxNDUsICAgMTYwNjcsICAgMCwgICAwCjY0OiAgICAgICAg ICAgICAgICAgICAgICA2NCwgICAgICAwLCAgICAgNjk2LCAgICAgMTQ0LCAgICAgOTg5LCAgIDAs ICAgMAo2NDogICAgICAgICAgICAgICAgICAgICAgNjQsICAgICAgMCwgICAgMTExOCwgICAgIDEx NCwgICAgOTc3NCwgICAwLCAgIDAKNjQ6ICAgICAgICAgICAgICAgICAgICAgIDY0LCAgICAgIDAs ICAgIDI5MzYsICAgICAzMTIsICAgMjU0MDgsICAgMCwgICAwCjEyODogICAgICAgICAgICAgICAg ICAgIDEyOCwgICAgICAwLCAgICAgNTYxLCAgICAgIDQ4LCAgICAgNjc4LCAgIDAsICAgMAoxMjg6 ICAgICAgICAgICAgICAgICAgICAxMjgsICAgICAgMCwgICAgMTA5MiwgICAgICA2OCwgICAgMTEy MywgICAwLCAgIDAKMTI4OiAgICAgICAgICAgICAgICAgICAgMTI4LCAgICAgIDAsICAgICAgIDEs ICAgICAgODYsICAgICAgMjAsICAgMCwgICAwCjEyODogICAgICAgICAgICAgICAgICAgIDEyOCwg ICAgICAwLCAgICAxMDIwLCAgICAgIDgyLCAgICAzMDk1LCAgIDAsICAgMAoxMjg6ICAgICAgICAg ICAgICAgICAgICAxMjgsICAgICAgMCwgICAgIDE0MSwgICAgICA2MiwgICAgIDE3MSwgICAwLCAg IDAKMTI4OiAgICAgICAgICAgICAgICAgICAgMTI4LCAgICAgIDAsICAgICA0MzAsICAgICAgNjMs ICAgICA2NTgsICAgMCwgICAwCjEyODogICAgICAgICAgICAgICAgICAgIDEyOCwgICAgICAwLCAg ICAgMjE3LCAgICAgIDQ0LCAgICAgNzI0LCAgIDAsICAgMAoxMjg6ICAgICAgICAgICAgICAgICAg ICAxMjgsICAgICAgMCwgICAgIDg5NywgICAgICA2MCwgICAxNDQ5MSwgICAwLCAgIDAKMjU2OiAg ICAgICAgICAgICAgICAgICAgMjU2LCAgICAgIDAsICAgICAgIDYsICAgICAgMzksICAgICAgMzcs ICAgMCwgICAwCjI1NjogICAgICAgICAgICAgICAgICAgIDI1NiwgICAgICAwLCAgICAgICA1LCAg ICAgIDQwLCAgICAgIDIwLCAgIDAsICAgMAoyNTY6ICAgICAgICAgICAgICAgICAgICAyNTYsICAg ICAgMCwgICAgICAgMCwgICAgICAgMCwgICAgICAgMCwgICAwLCAgIDAKMjU2OiAgICAgICAgICAg ICAgICAgICAgMjU2LCAgICAgIDAsICAgICAgIDQsICAgICAgNDEsICAgICAgMTUsICAgMCwgICAw CjI1NjogICAgICAgICAgICAgICAgICAgIDI1NiwgICAgICAwLCAgICAgMzE4LCAgICAgIDcyLCAg IDE1MTEwLCAgIDAsICAgMAoyNTY6ICAgICAgICAgICAgICAgICAgICAyNTYsICAgICAgMCwgICAg IDY3MywgICAgICA2MiwgICAgMTA4MCwgICAwLCAgIDAKMjU2OiAgICAgICAgICAgICAgICAgICAg MjU2LCAgICAgIDAsICAgICAgNDUsICAgICAgMzAsICAgICAgNTUsICAgMCwgICAwCjI1NjogICAg ICAgICAgICAgICAgICAgIDI1NiwgICAgICAwLCAgICAzMTMyLCAgICAxNjM4LCAgICA3NDU3LCAg IDAsICAgMAo1MTI6ICAgICAgICAgICAgICAgICAgICA1MTIsICAgICAgMCwgICAgMTAzOCwgICAg ICAzMywgICAxNzM2MywgICAwLCAgIDAKNTEyOiAgICAgICAgICAgICAgICAgICAgNTEyLCAgICAg IDAsICAgICAgIDYsICAgICAgMTUsICAgICAgIDYsICAgMCwgICAwCjUxMjogICAgICAgICAgICAg ICAgICAgIDUxMiwgICAgICAwLCAgICAgICAxLCAgICAgIDEzLCAgICAgIDE2LCAgIDAsICAgMAo1 MTI6ICAgICAgICAgICAgICAgICAgICA1MTIsICAgICAgMCwgICAgICAgMiwgICAgICAxMiwgICAg ICAgOSwgICAwLCAgIDAKNTEyOiAgICAgICAgICAgICAgICAgICAgNTEyLCAgICAgIDAsICAgICAg MTYsICAgICAgMTIsICAgICAgNDcsICAgMCwgICAwCjUxMjogICAgICAgICAgICAgICAgICAgIDUx MiwgICAgICAwLCAgICAgMjUwLCAgICAgIDUxLCAgICAyNDU1LCAgIDAsICAgMAo1MTI6ICAgICAg ICAgICAgICAgICAgICA1MTIsICAgICAgMCwgICAgICAxNCwgICAgICAxNCwgICAgICAyMywgICAw LCAgIDAKNTEyOiAgICAgICAgICAgICAgICAgICAgNTEyLCAgICAgIDAsICAgIDc0NDIsICAgICAx MTEsICAgMjMzNTYsICAgMCwgICAwCjEwMjQ6ICAgICAgICAgICAgICAgICAgMTAyNCwgICAgICAw LCAgICAgICAwLCAgICAgIDEyLCAgICAgIDE3LCAgIDAsICAgMAoxMDI0OiAgICAgICAgICAgICAg ICAgIDEwMjQsICAgICAgMCwgICAgICAgMiwgICAgICAgNiwgICAgICAgMywgICAwLCAgIDAKMTAy NDogICAgICAgICAgICAgICAgICAxMDI0LCAgICAgIDAsICAgICAgIDIsICAgICAgMTAsICAgICAg MzMsICAgMCwgICAwCjEwMjQ6ICAgICAgICAgICAgICAgICAgMTAyNCwgICAgICAwLCAgICAgIDEw LCAgICAgIDEwLCAgICAxMzUzLCAgIDAsICAgMAoxMDI0OiAgICAgICAgICAgICAgICAgIDEwMjQs ICAgICAgMCwgICAgICAyMywgICAgICAxMywgICAgICAzMiwgICAwLCAgIDAKMTAyNDogICAgICAg 
ICAgICAgICAgICAxMDI0LCAgICAgIDAsICAgICAgMTMsICAgICAxNjcsICAgIDEzMDcsICAgMCwg ICAwCjEwMjQ6ICAgICAgICAgICAgICAgICAgMTAyNCwgICAgICAwLCAgICAgICAwLCAgICAgICA4 LCAgICAgICA0LCAgIDAsICAgMAoxMDI0OiAgICAgICAgICAgICAgICAgIDEwMjQsICAgICAgMCwg ICAgICA3MywgICAgMTI1NSwgICAgNTE1MywgICAwLCAgIDAKMjA0ODogICAgICAgICAgICAgICAg ICAyMDQ4LCAgICAgIDAsICAgICAgMTIsICAgICAgNDAsICAgNDAyNTMsICAgMCwgICAwCjIwNDg6 ICAgICAgICAgICAgICAgICAgMjA0OCwgICAgICAwLCAgICAgIDExLCAgICAgICA3LCAgICAgIDI3 LCAgIDAsICAgMAoyMDQ4OiAgICAgICAgICAgICAgICAgIDIwNDgsICAgICAgMCwgICAgICAgMiwg ICAgICAgMiwgICAgICAxMiwgICAwLCAgIDAKMjA0ODogICAgICAgICAgICAgICAgICAyMDQ4LCAg ICAgIDAsICAgICAgIDQsICAgICAgIDQsICAgICAgIDQsICAgMCwgICAwCjIwNDg6ICAgICAgICAg ICAgICAgICAgMjA0OCwgICAgICAwLCAgICAgICA0LCAgICAgICA4LCAgICAgICA2LCAgIDAsICAg MAoyMDQ4OiAgICAgICAgICAgICAgICAgIDIwNDgsICAgICAgMCwgICAgICAgNiwgICAgICAgMiwg ICAgICA1MiwgICAwLCAgIDAKMjA0ODogICAgICAgICAgICAgICAgICAyMDQ4LCAgICAgIDAsICAg ICAgIDIsICAgICAgIDIsICAgICAgIDYsICAgMCwgICAwCjIwNDg6ICAgICAgICAgICAgICAgICAg MjA0OCwgICAgICAwLCAgICAgMjE1LCAgICAgMTI5LCAgICAxNzgxLCAgIDAsICAgMAo0MDk2OiAg ICAgICAgICAgICAgICAgIDQwOTYsICAgICAgMCwgICAgICAgMywgICAgICAgMiwgICAgNDEyNCwg ICAwLCAgIDAKNDA5NjogICAgICAgICAgICAgICAgICA0MDk2LCAgICAgIDAsICAgICAgIDIsICAg ICAgIDMsICAgICAgIDQsICAgMCwgICAwCjQwOTY6ICAgICAgICAgICAgICAgICAgNDA5NiwgICAg ICAwLCAgICAgMjkxLCAgICAgICA0LCAgICAgMzUwLCAgIDAsICAgMAo0MDk2OiAgICAgICAgICAg ICAgICAgIDQwOTYsICAgICAgMCwgICAgICAgMywgICAgICAgMiwgICAgICAgMywgICAwLCAgIDAK NDA5NjogICAgICAgICAgICAgICAgICA0MDk2LCAgICAgIDAsICAgICAgIDAsICAgICAgIDAsICAg ICAgIDAsICAgMCwgICAwCjQwOTY6ICAgICAgICAgICAgICAgICAgNDA5NiwgICAgICAwLCAgICAg IDc2LCAgICAgIDI4LCAgICAxNTE3LCAgIDAsICAgMAo0MDk2OiAgICAgICAgICAgICAgICAgIDQw OTYsICAgICAgMCwgICAgICAgMSwgICAgICAgNCwgICAgICAgOSwgICAwLCAgIDAKNDA5NjogICAg ICAgICAgICAgICAgICA0MDk2LCAgICAgIDAsICAgICAyODksICAgICAgMzgsICAgIDExMDAsICAg MCwgICAwCkZpbGVzOiAgICAgICAgICAgICAgICAgICA4MCwgICAgICAwLCAgICAgIDc1LCAgICAg MTUwLCAgICA1MDI4LCAgIDAsICAgMApUVVJOU1RJTEU6ICAgICAgICAgICAgICAxMzYsICAgICAg MCwgICAgIDI1OSwgICAgICA0MSwgICAgIDI1OSwgICAwLCAgIDAKdW10eCBwaTogICAgICAgICAg ICAgICAgIDk2LCAgICAgIDAsICAgICAgIDAsICAgICAgIDAsICAgICAgIDAsICAgMCwgICAwCk1B QyBsYWJlbHM6ICAgICAgICAgICAgICA0MCwgICAgICAwLCAgICAgICAwLCAgICAgICAwLCAgICAg ICAwLCAgIDAsICAgMApQUk9DOiAgICAgICAgICAgICAgICAgIDExMjAsICAgICAgMCwgICAgICA0 MywgICAgICAyOSwgICAgMTI0MCwgICAwLCAgIDAKVEhSRUFEOiAgICAgICAgICAgICAgICAxMDgw LCAgICAgIDAsICAgICAyNDIsICAgICAgMTYsICAgICAzODAsICAgMCwgICAwClNMRUVQUVVFVUU6 ICAgICAgICAgICAgICA4OCwgICAgICAwLCAgICAgMjU5LCAgICAgIDMxLCAgICAgMjU5LCAgIDAs ICAgMApWTVNQQUNFOiAgICAgICAgICAgICAgICA0MDAsICAgICAgMCwgICAgICAyNCwgICAgICAz MCwgICAgMTIyMiwgICAwLCAgIDAKY3B1c2V0OiAgICAgICAgICAgICAgICAgIDcyLCAgICAgIDAs ICAgICAgIDIsICAgICAgOTgsICAgICAgIDIsICAgMCwgICAwCmF1ZGl0X3JlY29yZDogICAgICAg ICAgIDk1MiwgICAgICAwLCAgICAgICAwLCAgICAgICAwLCAgICAgICAwLCAgIDAsICAgMAptYnVm X3BhY2tldDogICAgICAgICAgICAyNTYsICAgICAgMCwgICAgIDgyOSwgICAgIDMyOCwgIDExNTUw NCwgICAwLCAgIDAKbWJ1ZjogICAgICAgICAgICAgICAgICAgMjU2LCAgICAgIDAsICAgIDEwMjQs ICAgICAzODksICAxMjY1NTAsICAgMCwgICAwCm1idWZfY2x1c3RlcjogICAgICAgICAgMjA0OCwg IDI1NjAwLCAgICAxMTU2LCAgICAgNzUwLCAgICAyNjA0LCAgIDAsICAgMAptYnVmX2p1bWJvX3Bh Z2U6ICAgICAgIDQwOTYsICAxMjgwMCwgICAgICAgMCwgICAgICAgMCwgICAgICAgMCwgICAwLCAg IDAKbWJ1Zl9qdW1ib185azogICAgICAgICA5MjE2LCAgMTkyMDAsICAgICAgIDAsICAgICAgIDAs ICAgICAgIDAsICAgMCwgICAwCm1idWZfanVtYm9fMTZrOiAgICAgICAxNjM4NCwgIDEyODAwLCAg ICAgICAwLCAgICAgICAwLCAgICAgICAwLCAgIDAsICAgMAptYnVmX2V4dF9yZWZjbnQ6ICAgICAg ICAgIDQsICAgICAgMCwgICAgICAgMCwgICAgICAgMCwgICAgICAgMCwgICAwLCAgIDAKZ19iaW86 
ICAgICAgICAgICAgICAgICAgMjMyLCAgICAgIDAsICAgICAgIDQsICAgICAyMjAsIDEzMTE5MzUs ICAgMCwgICAwCnR0eWlucTogICAgICAgICAgICAgICAgIDE2MCwgICAgICAwLCAgICAxNDQwLCAg ICAgIDQ4LCAgICAxNTc1LCAgIDAsICAgMAp0dHlvdXRxOiAgICAgICAgICAgICAgICAyNTYsICAg ICAgMCwgICAgIDc0NCwgICAgICAzNiwgICAgIDgxNiwgICAwLCAgIDAKYXRhX3JlcXVlc3Q6ICAg ICAgICAgICAgMzI4LCAgICAgIDAsICAgICAgIDMsICAgICAxMDUsICA1MzEyMTYsICAgMCwgICAw CmF0YV9jb21wb3NpdGU6ICAgICAgICAgIDMzNiwgICAgICAwLCAgICAgICAwLCAgICAgICAwLCAg ICAgICAwLCAgIDAsICAgMApWTk9ERTogICAgICAgICAgICAgICAgICA2MzIsICAgICAgMCwgICAg NzY3MSwgICAgICAzMywgICAgNzcwMiwgICAwLCAgIDAKVk5PREVQT0xMOiAgICAgICAgICAgICAg MTEyLCAgICAgIDAsICAgICAgIDAsICAgICAgIDAsICAgICAgIDAsICAgMCwgICAwCk5BTUVJOiAg ICAgICAgICAgICAgICAgMTAyNCwgICAgICAwLCAgICAgICAwLCAgICAgIDM2LCAgICA4ODg3LCAg IDAsICAgMApTIFZGUyBDYWNoZTogICAgICAgICAgICAxMDgsICAgICAgMCwgICAgIDQ5NywgICAg IDMyOCwgICAgMTQzMSwgICAwLCAgIDAKTCBWRlMgQ2FjaGU6ICAgICAgICAgICAgMzI4LCAgICAg IDAsICAgIDExNzAsICAgICAgNzgsICAgIDEyMzcsICAgMCwgICAwCk5GU01PVU5UOiAgICAgICAg ICAgICAgIDYxNiwgICAgICAwLCAgICAgICAwLCAgICAgICAwLCAgICAgICAwLCAgIDAsICAgMApO RlNOT0RFOiAgICAgICAgICAgICAgICA2NDAsICAgICAgMCwgICAgICAgMCwgICAgICAgMCwgICAg ICAgMCwgICAwLCAgIDAKRElSSEFTSDogICAgICAgICAgICAgICAxMDI0LCAgICAgIDAsICAgICAg NDAsICAgICAgIDgsICAgICAgNDAsICAgMCwgICAwCnBpcGU6ICAgICAgICAgICAgICAgICAgIDcy OCwgICAgICAwLCAgICAgICAyLCAgICAgIDM4LCAgICAgODI0LCAgIDAsICAgMAprc2lnaW5mbzog ICAgICAgICAgICAgICAxMTIsICAgICAgMCwgICAgICA1MiwgICAgMTAwNCwgICAgICA1OCwgICAw LCAgIDAKaXRpbWVyOiAgICAgICAgICAgICAgICAgMzQ0LCAgICAgIDAsICAgICAgIDAsICAgICAg IDAsICAgICAgIDAsICAgMCwgICAwCktOT1RFOiAgICAgICAgICAgICAgICAgIDEyOCwgICAgICAw LCAgICAgICAwLCAgICAgIDg3LCAgICAgIDI2LCAgIDAsICAgMApzb2NrZXQ6ICAgICAgICAgICAg ICAgICA2ODAsICAyNTYwMiwgICAgICAzMCwgICAgICAxOCwgICAgIDMyMCwgICAwLCAgIDAKaXBx OiAgICAgICAgICAgICAgICAgICAgIDU2LCAgICA4MTksICAgICAgIDAsICAgICAgIDAsICAgICAg IDAsICAgMCwgICAwCnVkcF9pbnBjYjogICAgICAgICAgICAgIDMzNiwgIDI1NjA4LCAgICAgIDEx LCAgICAgIDIyLCAgICAgMjM1LCAgIDAsICAgMAp1ZHBjYjogICAgICAgICAgICAgICAgICAgMTYs ICAyNTcwNCwgICAgICAxMSwgICAgIDMyNSwgICAgIDIzNSwgICAwLCAgIDAKdGNwX2lucGNiOiAg ICAgICAgICAgICAgMzM2LCAgMjU2MDgsICAgICAgMTAsICAgICAgMjMsICAgICAgMTUsICAgMCwg ICAwCnRjcGNiOiAgICAgICAgICAgICAgICAgIDg4MCwgIDI1NjAwLCAgICAgIDEwLCAgICAgIDEw LCAgICAgIDE1LCAgIDAsICAgMAp0Y3B0dzogICAgICAgICAgICAgICAgICAgNzIsICAgNTE1MCwg ICAgICAgMCwgICAgICAgMCwgICAgICAgMCwgICAwLCAgIDAKc3luY2FjaGU6ICAgICAgICAgICAg ICAgMTQ0LCAgMTUzNjYsICAgICAgIDAsICAgICAgNTIsICAgICAgIDQsICAgMCwgICAwCmhvc3Rj YWNoZTogICAgICAgICAgICAgIDEzNiwgIDE1MzcyLCAgICAgICAwLCAgICAgICAwLCAgICAgICAw LCAgIDAsICAgMAp0Y3ByZWFzczogICAgICAgICAgICAgICAgNDAsICAgMTY4MCwgICAgICAgMCwg ICAgICAgMCwgICAgICAgMCwgICAwLCAgIDAKc2Fja2hvbGU6ICAgICAgICAgICAgICAgIDMyLCAg ICAgIDAsICAgICAgIDAsICAgICAgIDAsICAgICAgIDAsICAgMCwgICAwCnNjdHBfZXA6ICAgICAg ICAgICAgICAgMTI3MiwgIDI1NjAyLCAgICAgICAwLCAgICAgICAwLCAgICAgICAwLCAgIDAsICAg MApzY3RwX2Fzb2M6ICAgICAgICAgICAgIDIyNDAsICA0MDAwMCwgICAgICAgMCwgICAgICAgMCwg ICAgICAgMCwgICAwLCAgIDAKc2N0cF9sYWRkcjogICAgICAgICAgICAgIDQ4LCAgODAwNjQsICAg ICAgIDAsICAgICAyMTYsICAgICAgIDQsICAgMCwgICAwCnNjdHBfcmFkZHI6ICAgICAgICAgICAg IDYxNiwgIDgwMDA0LCAgICAgICAwLCAgICAgICAwLCAgICAgICAwLCAgIDAsICAgMApzY3RwX2No dW5rOiAgICAgICAgICAgICAxMzYsIDQwMDAwOCwgICAgICAgMCwgICAgICAgMCwgICAgICAgMCwg ICAwLCAgIDAKc2N0cF9yZWFkcTogICAgICAgICAgICAgMTA0LCA0MDAwMzIsICAgICAgIDAsICAg ICAgIDAsICAgICAgIDAsICAgMCwgICAwCnNjdHBfc3RyZWFtX21zZ19vdXQ6ICAgICA5NiwgNDAw MDI2LCAgICAgICAwLCAgICAgICAwLCAgICAgICAwLCAgIDAsICAgMApzY3RwX2FzY29uZjogICAg ICAgICAgICAgNDAsIDQwMDAwOCwgICAgICAgMCwgICAgICAgMCwgICAgICAgMCwgICAwLCAgIDAK 
c2N0cF9hc2NvbmZfYWNrOiAgICAgICAgIDQ4LCA0MDAwMzIsICAgICAgIDAsICAgICAgIDAsICAg ICAgIDAsICAgMCwgICAwCnJpcGNiOiAgICAgICAgICAgICAgICAgIDMzNiwgIDI1NjA4LCAgICAg ICAxLCAgICAgIDIxLCAgICAgICAxLCAgIDAsICAgMAp1bnBjYjogICAgICAgICAgICAgICAgICAy NDAsICAyNTYwMCwgICAgICAgNywgICAgICA0MSwgICAgICA2NSwgICAwLCAgIDAKcnRlbnRyeTog ICAgICAgICAgICAgICAgMjAwLCAgICAgIDAsICAgICAgMTMsICAgICAgNDQsICAgICAgMTQsICAg MCwgICAwCnNlbGZkOiAgICAgICAgICAgICAgICAgICA1NiwgICAgICAwLCAgICAgIDQzLCAgICAg MTQ2LCAgICAgOTE2LCAgIDAsICAgMApTV0FQTUVUQTogICAgICAgICAgICAgICAyODgsIDExNjUx OSwgICAgICAgMCwgICAgICAgMCwgICAgICAgMCwgICAwLCAgIDAKaXA0ZmxvdzogICAgICAgICAg ICAgICAgIDU2LCAgNTAyMTEsICAgICAgIDQsICAgICAzMTEsICAgICAgIDgsICAgMCwgICAwCmlw NmZsb3c6ICAgICAgICAgICAgICAgICA4MCwgIDUwMjIwLCAgICAgICAwLCAgICAgMTM1LCAgICAg ICAxLCAgIDAsICAgMApNb3VudHBvaW50czogICAgICAgICAgICA5MTIsICAgICAgMCwgICAgICAg MywgICAgICAgOSwgICAgICAgMywgICAwLCAgIDAKRkZTIGlub2RlOiAgICAgICAgICAgICAgMTY4 LCAgICAgIDAsICAgICA0MTIsICAgICAgNzIsICAgICA0NDEsICAgMCwgICAwCkZGUzEgZGlub2Rl OiAgICAgICAgICAgIDEyOCwgICAgICAwLCAgICAgICAwLCAgICAgICAwLCAgICAgICAwLCAgIDAs ICAgMApGRlMyIGRpbm9kZTogICAgICAgICAgICAyNTYsICAgICAgMCwgICAgIDQxMiwgICAgICAz OCwgICAgIDQ0MSwgICAwLCAgIDAKdGFza3Ffem9uZTogICAgICAgICAgICAgIDY0LCAgICAgIDAs ICAgICAgIDAsICAgICAxNjgsICAgICAgIDgsICAgMCwgICAwCnJlZmVyZW5jZV9jYWNoZTogICAg ICAgICA0MCwgICAgICAwLCAgICAgICAwLCAgICAgICAwLCAgICAgICAwLCAgIDAsICAgMApyZWZl cmVuY2VfaGlzdG9yeV9jYWNoZTogICAgICA4LCAgICAgIDAsICAgICAgIDAsICAgICAgIDAsICAg ICAgIDAsICAgMCwgICAwCnppb19jYWNoZTogICAgICAgICAgICAgIDg5NiwgICAgICAwLCAgICAg MTk0LCAgICAzMzUwLCAgIDM2NTA2LCAgIDAsICAgMAp6aW9fbGlua19jYWNoZTogICAgICAgICAg NDgsICAgICAgMCwgICAgIDE5MSwgICAgMzQwOSwgICAyNTI4MiwgICAwLCAgIDAKemlvX2J1Zl81 MTI6ICAgICAgICAgICAgNTEyLCAgICAgIDAsICAgICAgIDAsICAgICAgIDAsICAgICAgIDAsICAg MCwgICAwCnppb19kYXRhX2J1Zl81MTI6ICAgICAgIDUxMiwgICAgICAwLCAgICAgICAwLCAgICAg ICAwLCAgICAgICAwLCAgIDAsICAgMAp6aW9fYnVmXzEwMjQ6ICAgICAgICAgIDEwMjQsICAgICAg MCwgICAgICAgMCwgICAgICAgMCwgICAgICAgMCwgICAwLCAgIDAKemlvX2RhdGFfYnVmXzEwMjQ6 ICAgICAxMDI0LCAgICAgIDAsICAgICAgIDAsICAgICAgIDAsICAgICAgIDAsICAgMCwgICAwCnpp b19idWZfMTUzNjogICAgICAgICAgMTUzNiwgICAgICAwLCAgICAgICAwLCAgICAgICAwLCAgICAg ICAwLCAgIDAsICAgMAp6aW9fZGF0YV9idWZfMTUzNjogICAgIDE1MzYsICAgICAgMCwgICAgICAg MCwgICAgICAgMCwgICAgICAgMCwgICAwLCAgIDAKemlvX2J1Zl8yMDQ4OiAgICAgICAgICAyMDQ4 LCAgICAgIDAsICAgICAgIDAsICAgICAgIDAsICAgICAgIDAsICAgMCwgICAwCnppb19kYXRhX2J1 Zl8yMDQ4OiAgICAgMjA0OCwgICAgICAwLCAgICAgICAwLCAgICAgICAwLCAgICAgICAwLCAgIDAs ICAgMAp6aW9fYnVmXzI1NjA6ICAgICAgICAgIDI1NjAsICAgICAgMCwgICAgICAgMCwgICAgICAg MCwgICAgICAgMCwgICAwLCAgIDAKemlvX2RhdGFfYnVmXzI1NjA6ICAgICAyNTYwLCAgICAgIDAs ICAgICAgIDAsICAgICAgIDAsICAgICAgIDAsICAgMCwgICAwCnppb19idWZfMzA3MjogICAgICAg ICAgMzA3MiwgICAgICAwLCAgICAgICAwLCAgICAgICAwLCAgICAgICAwLCAgIDAsICAgMAp6aW9f ZGF0YV9idWZfMzA3MjogICAgIDMwNzIsICAgICAgMCwgICAgICAgMCwgICAgICAgMCwgICAgICAg MCwgICAwLCAgIDAKemlvX2J1Zl8zNTg0OiAgICAgICAgICAzNTg0LCAgICAgIDAsICAgICAgIDAs ICAgICAgIDAsICAgICAgIDAsICAgMCwgICAwCnppb19kYXRhX2J1Zl8zNTg0OiAgICAgMzU4NCwg ICAgICAwLCAgICAgICAwLCAgICAgICAwLCAgICAgICAwLCAgIDAsICAgMAp6aW9fYnVmXzQwOTY6 ICAgICAgICAgIDQwOTYsICAgICAgMCwgICAgICAgMCwgICAgICAgMCwgICAgICAgMCwgICAwLCAg IDAKemlvX2RhdGFfYnVmXzQwOTY6ICAgICA0MDk2LCAgICAgIDAsICAgICAgIDAsICAgICAgIDAs ICAgICAgIDAsICAgMCwgICAwCnppb19idWZfNTEyMDogICAgICAgICAgNTEyMCwgICAgICAwLCAg ICAgICAwLCAgICAgICAwLCAgICAgICAwLCAgIDAsICAgMAp6aW9fZGF0YV9idWZfNTEyMDogICAg IDUxMjAsICAgICAgMCwgICAgICAgMCwgICAgICAgMCwgICAgICAgMCwgICAwLCAgIDAKemlvX2J1 Zl82MTQ0OiAgICAgICAgICA2MTQ0LCAgICAgIDAsICAgICAgIDAsICAgICAgIDAsICAgICAgIDAs 
ICAgMCwgICAwCnppb19kYXRhX2J1Zl82MTQ0OiAgICAgNjE0NCwgICAgICAwLCAgICAgICAwLCAg ICAgICAwLCAgICAgICAwLCAgIDAsICAgMAp6aW9fYnVmXzcxNjg6ICAgICAgICAgIDcxNjgsICAg ICAgMCwgICAgICAgMCwgICAgICAgMCwgICAgICAgMCwgICAwLCAgIDAKemlvX2RhdGFfYnVmXzcx Njg6ICAgICA3MTY4LCAgICAgIDAsICAgICAgIDAsICAgICAgIDAsICAgICAgIDAsICAgMCwgICAw Cnppb19idWZfODE5MjogICAgICAgICAgODE5MiwgICAgICAwLCAgICAgICAwLCAgICAgICAwLCAg ICAgICAwLCAgIDAsICAgMAp6aW9fZGF0YV9idWZfODE5MjogICAgIDgxOTIsICAgICAgMCwgICAg ICAgMCwgICAgICAgMCwgICAgICAgMCwgICAwLCAgIDAKemlvX2J1Zl8xMDI0MDogICAgICAgIDEw MjQwLCAgICAgIDAsICAgICAgIDAsICAgICAgIDAsICAgICAgIDAsICAgMCwgICAwCnppb19kYXRh X2J1Zl8xMDI0MDogICAxMDI0MCwgICAgICAwLCAgICAgICAwLCAgICAgICAwLCAgICAgICAwLCAg IDAsICAgMAp6aW9fYnVmXzEyMjg4OiAgICAgICAgMTIyODgsICAgICAgMCwgICAgICAgMCwgICAg ICAgMCwgICAgICAgMCwgICAwLCAgIDAKemlvX2RhdGFfYnVmXzEyMjg4OiAgIDEyMjg4LCAgICAg IDAsICAgICAgIDAsICAgICAgIDAsICAgICAgIDAsICAgMCwgICAwCnppb19idWZfMTQzMzY6ICAg ICAgICAxNDMzNiwgICAgICAwLCAgICAgICAwLCAgICAgICAwLCAgICAgICAwLCAgIDAsICAgMAp6 aW9fZGF0YV9idWZfMTQzMzY6ICAgMTQzMzYsICAgICAgMCwgICAgICAgMCwgICAgICAgMCwgICAg ICAgMCwgICAwLCAgIDAKemlvX2J1Zl8xNjM4NDogICAgICAgIDE2Mzg0LCAgICAgIDAsICAgICAg IDAsICAgICAgIDAsICAgICAgIDAsICAgMCwgICAwCnppb19kYXRhX2J1Zl8xNjM4NDogICAxNjM4 NCwgICAgICAwLCAgICAgICAwLCAgICAgICAwLCAgICAgICAwLCAgIDAsICAgMAp6aW9fYnVmXzIw NDgwOiAgICAgICAgMjA0ODAsICAgICAgMCwgICAgICAgMCwgICAgICAgMCwgICAgICAgMCwgICAw LCAgIDAKemlvX2RhdGFfYnVmXzIwNDgwOiAgIDIwNDgwLCAgICAgIDAsICAgICAgIDAsICAgICAg IDAsICAgICAgIDAsICAgMCwgICAwCnppb19idWZfMjQ1NzY6ICAgICAgICAyNDU3NiwgICAgICAw LCAgICAgICAwLCAgICAgICAwLCAgICAgICAwLCAgIDAsICAgMAp6aW9fZGF0YV9idWZfMjQ1NzY6 ICAgMjQ1NzYsICAgICAgMCwgICAgICAgMCwgICAgICAgMCwgICAgICAgMCwgICAwLCAgIDAKemlv X2J1Zl8yODY3MjogICAgICAgIDI4NjcyLCAgICAgIDAsICAgICAgIDAsICAgICAgIDAsICAgICAg IDAsICAgMCwgICAwCnppb19kYXRhX2J1Zl8yODY3MjogICAyODY3MiwgICAgICAwLCAgICAgICAw LCAgICAgICAwLCAgICAgICAwLCAgIDAsICAgMAp6aW9fYnVmXzMyNzY4OiAgICAgICAgMzI3Njgs ICAgICAgMCwgICAgICAgMCwgICAgICAgMCwgICAgICAgMCwgICAwLCAgIDAKemlvX2RhdGFfYnVm XzMyNzY4OiAgIDMyNzY4LCAgICAgIDAsICAgICAgIDAsICAgICAgIDAsICAgICAgIDAsICAgMCwg ICAwCnppb19idWZfMzY4NjQ6ICAgICAgICAzNjg2NCwgICAgICAwLCAgICAgICAwLCAgICAgICAw LCAgICAgICAwLCAgIDAsICAgMAp6aW9fZGF0YV9idWZfMzY4NjQ6ICAgMzY4NjQsICAgICAgMCwg ICAgICAgMCwgICAgICAgMCwgICAgICAgMCwgICAwLCAgIDAKemlvX2J1Zl80MDk2MDogICAgICAg IDQwOTYwLCAgICAgIDAsICAgICAgIDAsICAgICAgIDAsICAgICAgIDAsICAgMCwgICAwCnppb19k YXRhX2J1Zl80MDk2MDogICA0MDk2MCwgICAgICAwLCAgICAgICAwLCAgICAgICAwLCAgICAgICAw LCAgIDAsICAgMAp6aW9fYnVmXzQ1MDU2OiAgICAgICAgNDUwNTYsICAgICAgMCwgICAgICAgMCwg ICAgICAgMCwgICAgICAgMCwgICAwLCAgIDAKemlvX2RhdGFfYnVmXzQ1MDU2OiAgIDQ1MDU2LCAg ICAgIDAsICAgICAgIDAsICAgICAgIDAsICAgICAgIDAsICAgMCwgICAwCnppb19idWZfNDkxNTI6 ICAgICAgICA0OTE1MiwgICAgICAwLCAgICAgICAwLCAgICAgICAwLCAgICAgICAwLCAgIDAsICAg MAp6aW9fZGF0YV9idWZfNDkxNTI6ICAgNDkxNTIsICAgICAgMCwgICAgICAgMCwgICAgICAgMCwg ICAgICAgMCwgICAwLCAgIDAKemlvX2J1Zl81MzI0ODogICAgICAgIDUzMjQ4LCAgICAgIDAsICAg ICAgIDAsICAgICAgIDAsICAgICAgIDAsICAgMCwgICAwCnppb19kYXRhX2J1Zl81MzI0ODogICA1 MzI0OCwgICAgICAwLCAgICAgICAwLCAgICAgICAwLCAgICAgICAwLCAgIDAsICAgMAp6aW9fYnVm XzU3MzQ0OiAgICAgICAgNTczNDQsICAgICAgMCwgICAgICAgMCwgICAgICAgMCwgICAgICAgMCwg ICAwLCAgIDAKemlvX2RhdGFfYnVmXzU3MzQ0OiAgIDU3MzQ0LCAgICAgIDAsICAgICAgIDAsICAg ICAgIDAsICAgICAgIDAsICAgMCwgICAwCnppb19idWZfNjE0NDA6ICAgICAgICA2MTQ0MCwgICAg ICAwLCAgICAgICAwLCAgICAgICAwLCAgICAgICAwLCAgIDAsICAgMAp6aW9fZGF0YV9idWZfNjE0 NDA6ICAgNjE0NDAsICAgICAgMCwgICAgICAgMCwgICAgICAgMCwgICAgICAgMCwgICAwLCAgIDAK emlvX2J1Zl82NTUzNjogICAgICAgIDY1NTM2LCAgICAgIDAsICAgICAgIDAsICAgICAgIDAsICAg 
ICAgIDAsICAgMCwgICAwCnppb19kYXRhX2J1Zl82NTUzNjogICA2NTUzNiwgICAgICAwLCAgICAg ICAwLCAgICAgICAwLCAgICAgICAwLCAgIDAsICAgMAp6aW9fYnVmXzY5NjMyOiAgICAgICAgNjk2 MzIsICAgICAgMCwgICAgICAgMCwgICAgICAgMCwgICAgICAgMCwgICAwLCAgIDAKemlvX2RhdGFf YnVmXzY5NjMyOiAgIDY5NjMyLCAgICAgIDAsICAgICAgIDAsICAgICAgIDAsICAgICAgIDAsICAg MCwgICAwCnppb19idWZfNzM3Mjg6ICAgICAgICA3MzcyOCwgICAgICAwLCAgICAgICAwLCAgICAg ICAwLCAgICAgICAwLCAgIDAsICAgMAp6aW9fZGF0YV9idWZfNzM3Mjg6ICAgNzM3MjgsICAgICAg MCwgICAgICAgMCwgICAgICAgMCwgICAgICAgMCwgICAwLCAgIDAKemlvX2J1Zl83NzgyNDogICAg ICAgIDc3ODI0LCAgICAgIDAsICAgICAgIDAsICAgICAgIDAsICAgICAgIDAsICAgMCwgICAwCnpp b19kYXRhX2J1Zl83NzgyNDogICA3NzgyNCwgICAgICAwLCAgICAgICAwLCAgICAgICAwLCAgICAg ICAwLCAgIDAsICAgMAp6aW9fYnVmXzgxOTIwOiAgICAgICAgODE5MjAsICAgICAgMCwgICAgICAg MCwgICAgICAgMCwgICAgICAgMCwgICAwLCAgIDAKemlvX2RhdGFfYnVmXzgxOTIwOiAgIDgxOTIw LCAgICAgIDAsICAgICAgIDAsICAgICAgIDAsICAgICAgIDAsICAgMCwgICAwCnppb19idWZfODYw MTY6ICAgICAgICA4NjAxNiwgICAgICAwLCAgICAgICAwLCAgICAgICAwLCAgICAgICAwLCAgIDAs ICAgMAp6aW9fZGF0YV9idWZfODYwMTY6ICAgODYwMTYsICAgICAgMCwgICAgICAgMCwgICAgICAg MCwgICAgICAgMCwgICAwLCAgIDAKemlvX2J1Zl85MDExMjogICAgICAgIDkwMTEyLCAgICAgIDAs ICAgICAgIDAsICAgICAgIDAsICAgICAgIDAsICAgMCwgICAwCnppb19kYXRhX2J1Zl85MDExMjog ICA5MDExMiwgICAgICAwLCAgICAgICAwLCAgICAgICAwLCAgICAgICAwLCAgIDAsICAgMAp6aW9f YnVmXzk0MjA4OiAgICAgICAgOTQyMDgsICAgICAgMCwgICAgICAgMCwgICAgICAgMCwgICAgICAg MCwgICAwLCAgIDAKemlvX2RhdGFfYnVmXzk0MjA4OiAgIDk0MjA4LCAgICAgIDAsICAgICAgIDAs ICAgICAgIDAsICAgICAgIDAsICAgMCwgICAwCnppb19idWZfOTgzMDQ6ICAgICAgICA5ODMwNCwg ICAgICAwLCAgICAgICAwLCAgICAgICAwLCAgICAgICAwLCAgIDAsICAgMAp6aW9fZGF0YV9idWZf OTgzMDQ6ICAgOTgzMDQsICAgICAgMCwgICAgICAgMCwgICAgICAgMCwgICAgICAgMCwgICAwLCAg IDAKemlvX2J1Zl8xMDI0MDA6ICAgICAgMTAyNDAwLCAgICAgIDAsICAgICAgIDAsICAgICAgIDAs ICAgICAgIDAsICAgMCwgICAwCnppb19kYXRhX2J1Zl8xMDI0MDA6IDEwMjQwMCwgICAgICAwLCAg ICAgICAwLCAgICAgICAwLCAgICAgICAwLCAgIDAsICAgMAp6aW9fYnVmXzEwNjQ5NjogICAgICAx MDY0OTYsICAgICAgMCwgICAgICAgMCwgICAgICAgMCwgICAgICAgMCwgICAwLCAgIDAKemlvX2Rh dGFfYnVmXzEwNjQ5NjogMTA2NDk2LCAgICAgIDAsICAgICAgIDAsICAgICAgIDAsICAgICAgIDAs ICAgMCwgICAwCnppb19idWZfMTEwNTkyOiAgICAgIDExMDU5MiwgICAgICAwLCAgICAgICAwLCAg ICAgICAwLCAgICAgICAwLCAgIDAsICAgMAp6aW9fZGF0YV9idWZfMTEwNTkyOiAxMTA1OTIsICAg ICAgMCwgICAgICAgMCwgICAgICAgMCwgICAgICAgMCwgICAwLCAgIDAKemlvX2J1Zl8xMTQ2ODg6 ICAgICAgMTE0Njg4LCAgICAgIDAsICAgICAgIDAsICAgICAgIDAsICAgICAgIDAsICAgMCwgICAw Cnppb19kYXRhX2J1Zl8xMTQ2ODg6IDExNDY4OCwgICAgICAwLCAgICAgICAwLCAgICAgICAwLCAg ICAgICAwLCAgIDAsICAgMAp6aW9fYnVmXzExODc4NDogICAgICAxMTg3ODQsICAgICAgMCwgICAg ICAgMCwgICAgICAgMCwgICAgICAgMCwgICAwLCAgIDAKemlvX2RhdGFfYnVmXzExODc4NDogMTE4 Nzg0LCAgICAgIDAsICAgICAgIDAsICAgICAgIDAsICAgICAgIDAsICAgMCwgICAwCnppb19idWZf MTIyODgwOiAgICAgIDEyMjg4MCwgICAgICAwLCAgICAgICAwLCAgICAgICAwLCAgICAgICAwLCAg IDAsICAgMAp6aW9fZGF0YV9idWZfMTIyODgwOiAxMjI4ODAsICAgICAgMCwgICAgICAgMCwgICAg ICAgMCwgICAgICAgMCwgICAwLCAgIDAKemlvX2J1Zl8xMjY5NzY6ICAgICAgMTI2OTc2LCAgICAg IDAsICAgICAgIDAsICAgICAgIDAsICAgICAgIDAsICAgMCwgICAwCnppb19kYXRhX2J1Zl8xMjY5 NzY6IDEyNjk3NiwgICAgICAwLCAgICAgICAwLCAgICAgICAwLCAgICAgICAwLCAgIDAsICAgMAp6 aW9fYnVmXzEzMTA3MjogICAgICAxMzEwNzIsICAgICAgMCwgICAgICAgMCwgICAgICAgMCwgICAg ICAgMCwgICAwLCAgIDAKemlvX2RhdGFfYnVmXzEzMTA3MjogMTMxMDcyLCAgICAgIDAsICAgICAg IDAsICAgICAgIDAsICAgICAgIDAsICAgMCwgICAwCnNhX2NhY2hlOiAgICAgICAgICAgICAgICA4 MCwgICAgICAwLCAgICA3MjMwLCAgICAgMTA1LCAgICA3MjMxLCAgIDAsICAgMApkbm9kZV90OiAg ICAgICAgICAgICAgIDEwNjQsICAgICAgMCwgICAgNzI4MCwgICAgICAxOSwgICAgNzMzMywgICAw LCAgIDAKZG11X2J1Zl9pbXBsX3Q6ICAgICAgICAgMzI4LCAgICAgIDAsICAgIDgwNjMsICAgICAz 
OTcsICAgIDk4NjIsICAgMCwgICAwCmFyY19idWZfaGRyX3Q6ICAgICAgICAgIDMyMCwgICAgICAw LCAgICAxMTA4LCAgICAgMjYwLCAgICAyOTA0LCAgIDAsICAgMAphcmNfYnVmX3Q6ICAgICAgICAg ICAgICAxMDQsICAgICAgMCwgICAgMTEwMiwgICAgIDMzOCwgICAgMjkzNiwgICAwLCAgIDAKemls X2x3Yl9jYWNoZTogICAgICAgICAgMTkyLCAgICAgIDAsICAgICAgIDAsICAgICAgIDAsICAgICAg IDAsICAgMCwgICAwCnpmc196bm9kZV9jYWNoZTogICAgICAgIDQwOCwgICAgICAwLCAgICA3MjMw LCAgICAgIDE1LCAgICA3MjMxLCAgIDAsICAgMAoKCi0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQp2bXN0YXQgLWkK CmludGVycnVwdCAgICAgICAgICAgICAgICAgICAgICAgICAgdG90YWwgICAgICAgcmF0ZQppcnEx OiBhdGtiZDAgICAgICAgICAgICAgICAgICAgICAgICAgICAxICAgICAgICAgIDAKaXJxODogYXRy dGMwICAgICAgICAgICAgICAgICAgICAgICA0NDkwOSAgICAgICAgMTY1CmlycTE0OiBhdGEwICAg ICAgICAgICAgICAgICAgICAgICAyNjEwOTMgICAgICAgIDk2MwppcnEyMjogYXRhcGNpMSAgICAg ICAgICAgICAgICAgICAgICAxOTcxICAgICAgICAgIDcKaXJxMjM6IGF0YXBjaTIgICAgICAgICAg ICAgICAgICAgICAgMTk1MCAgICAgICAgICA3CmlycTI5OiBiZ2UxICAgICAgICAgICAgICAgICAg ICAgICAgMjk5MjggICAgICAgIDExMApjcHUwOnRpbWVyICAgICAgICAgICAgICAgICAgICAgICAg MzUwODczICAgICAgIDEyOTQKY3B1MTp0aW1lciAgICAgICAgICAgICAgICAgICAgICAgIDM0OTk3 NSAgICAgICAxMjkxClRvdGFsICAgICAgICAgICAgICAgICAgICAgICAgICAgIDEwNDA3MDAgICAg ICAgMzg0MAoKLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tCnBzdGF0IC1UCgogNzUvMTIzMjggZmlsZXMKME0vNTEx OU0gc3dhcCBzcGFjZQoKLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tCnBzdGF0IC1zCgpEZXZpY2UgICAgICAgICAg NTEyLWJsb2NrcyAgICAgVXNlZCAgICBBdmFpbCBDYXBhY2l0eQovZGV2L2FkMHMxYiAgICAgICAx MDQ4NTUwNCAgICAgICAgMCAxMDQ4NTUwNCAgICAgMCUKCi0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQppb3N0YXQK Cmlvc3RhdDoga3ZtX3JlYWQoX3RrX25pbik6IGludmFsaWQgYWRkcmVzcyAoMHgwKQppb3N0YXQ6 IGRpc2FibGluZyBUVFkgc3RhdGlzdGljcwppb3N0YXQ6IGt2bV9nZXRjcHRpbWU6IGludmFsaWQg YWRkcmVzcyAoMHgwKQppb3N0YXQ6IGRpc2FibGluZyBDUFUgdGltZSBzdGF0aXN0aWNzCiAgICAg ICAgICAgICBhZDAgICAgICAgICAgICAgIGFkNCAgICAgICAgICAgICAgYWQ2IAogIEtCL3QgdHBz ICBNQi9zICAgS0IvdCB0cHMgIE1CL3MgICBLQi90IHRwcyAgTUIvcyAKICA0LjE1IDkyMiAgMy43 MyAgMzkuNTcgICAzICAwLjEzICAzOS41NCAgIDMgIDAuMTMgCgotLS0tLS0tLS0tLS0tLS0tLS0t LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0KaXBj cyAtYQoKTWVzc2FnZSBRdWV1ZXM6ClQgICAgICAgICAgIElEICAgICAgICAgIEtFWSBNT0RFICAg ICAgICBPV05FUiAgICBHUk9VUCAgICBDUkVBVE9SICBDR1JPVVAgICAgICAgICAgICAgICAgIENC WVRFUyAgICAgICAgICAgICAgICAgUU5VTSAgICAgICAgICAgICAgIFFCWVRFUyAgICAgICAgTFNQ SUQgICAgICAgIExSUElEIFNUSU1FICAgIFJUSU1FICAgIENUSU1FICAgCgpTaGFyZWQgTWVtb3J5 OgpUICAgICAgICAgICBJRCAgICAgICAgICBLRVkgTU9ERSAgICAgICAgT1dORVIgICAgR1JPVVAg ICAgQ1JFQVRPUiAgQ0dST1VQICAgICAgICAgTkFUVENIICAgICAgICBTRUdTWiAgICAgICAgIENQ SUQgICAgICAgICBMUElEIEFUSU1FICAgIERUSU1FICAgIENUSU1FICAgCgpTZW1hcGhvcmVzOgpU ICAgICAgICAgICBJRCAgICAgICAgICBLRVkgTU9ERSAgICAgICAgT1dORVIgICAgR1JPVVAgICAg Q1JFQVRPUiAgQ0dST1VQICAgICAgICAgIE5TRU1TIE9USU1FICAgIENUSU1FICAgCgoKLS0tLS0t LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t LS0tLS0tLS0tCmlwY3MgLVQKCm1zZ2luZm86Cgltc2dtYXg6ICAgICAgICAxNjM4NAkobWF4IGNo YXJhY3RlcnMgaW4gYSBtZXNzYWdlKQoJbXNnbW5pOiAgICAgICAgICAgNDAJKCMgb2YgbWVzc2Fn ZSBxdWV1ZXMpCgltc2dtbmI6ICAgICAgICAgMjA0OAkobWF4IGNoYXJhY3RlcnMgaW4gYSBtZXNz YWdlIHF1ZXVlKQoJbXNndHFsOiAgICAgICAgICAgNDAJKG1heCAjIG9mIG1lc3NhZ2VzIGluIHN5 c3RlbSkKCW1zZ3NzejogICAgICAgICAgICA4CShzaXplIG9mIGEgbWVzc2FnZSBzZWdtZW50KQoJ bXNnc2VnOiAgICAgICAgIDIwNDgJKCMgb2YgbWVzc2FnZSBzZWdtZW50cyBpbiBzeXN0ZW0pCgpz 
aG1pbmZvOgoJc2htbWF4OiAgICA1MzY4NzA5MTIJKG1heCBzaGFyZWQgbWVtb3J5IHNlZ21lbnQg c2l6ZSkKCXNobW1pbjogICAgICAgICAgICAxCShtaW4gc2hhcmVkIG1lbW9yeSBzZWdtZW50IHNp emUpCglzaG1tbmk6ICAgICAgICAgIDE5MgkobWF4IG51bWJlciBvZiBzaGFyZWQgbWVtb3J5IGlk ZW50aWZpZXJzKQoJc2htc2VnOiAgICAgICAgICAxMjgJKG1heCBzaGFyZWQgbWVtb3J5IHNlZ21l bnRzIHBlciBwcm9jZXNzKQoJc2htYWxsOiAgICAgICAxMzEwNzIJKG1heCBhbW91bnQgb2Ygc2hh cmVkIG1lbW9yeSBpbiBwYWdlcykKCnNlbWluZm86CglzZW1tYXA6ICAgICAgICAgICAzMAkoIyBv ZiBlbnRyaWVzIGluIHNlbWFwaG9yZSBtYXApCglzZW1tbmk6ICAgICAgICAgICA1MAkoIyBvZiBz ZW1hcGhvcmUgaWRlbnRpZmllcnMpCglzZW1tbnM6ICAgICAgICAgIDM0MAkoIyBvZiBzZW1hcGhv cmVzIGluIHN5c3RlbSkKCXNlbW1udTogICAgICAgICAgMTUwCSgjIG9mIHVuZG8gc3RydWN0dXJl cyBpbiBzeXN0ZW0pCglzZW1tc2w6ICAgICAgICAgIDM0MAkobWF4ICMgb2Ygc2VtYXBob3JlcyBw ZXIgaWQpCglzZW1vcG06ICAgICAgICAgIDEwMAkobWF4ICMgb2Ygb3BlcmF0aW9ucyBwZXIgc2Vt b3AgY2FsbCkKCXNlbXVtZTogICAgICAgICAgIDUwCShtYXggIyBvZiB1bmRvIGVudHJpZXMgcGVy IHByb2Nlc3MpCglzZW11c3o6ICAgICAgICAgIDYzMgkoc2l6ZSBpbiBieXRlcyBvZiB1bmRvIHN0 cnVjdHVyZSkKCXNlbXZteDogICAgICAgIDMyNzY3CShzZW1hcGhvcmUgbWF4aW11bSB2YWx1ZSkK CXNlbWFlbTogICAgICAgIDE2Mzg0CShhZGp1c3Qgb24gZXhpdCBtYXggdmFsdWUpCgoKLS0tLS0t LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t LS0tLS0tLS0tCm5mc3N0YXQKCkNsaWVudCBJbmZvOgpScGMgQ291bnRzOgogIEdldGF0dHIgICBT ZXRhdHRyICAgIExvb2t1cCAgUmVhZGxpbmsgICAgICBSZWFkICAgICBXcml0ZSAgICBDcmVhdGUg ICAgUmVtb3ZlCiAgICAgICAgMCAgICAgICAgIDAgICAgICAgICAwICAgICAgICAgMCAgICAgICAg IDAgICAgICAgICAwICAgICAgICAgMCAgICAgICAgIDAKICAgUmVuYW1lICAgICAgTGluayAgIFN5 bWxpbmsgICAgIE1rZGlyICAgICBSbWRpciAgIFJlYWRkaXIgIFJkaXJQbHVzICAgIEFjY2Vzcwog ICAgICAgIDAgICAgICAgICAwICAgICAgICAgMCAgICAgICAgIDAgICAgICAgICAwICAgICAgICAg MCAgICAgICAgIDAgICAgICAgICAwCiAgICBNa25vZCAgICBGc3N0YXQgICAgRnNpbmZvICBQYXRo Q29uZiAgICBDb21taXQKICAgICAgICAwICAgICAgICAgMCAgICAgICAgIDAgICAgICAgICAwICAg ICAgICAgMApScGMgSW5mbzoKIFRpbWVkT3V0ICAgSW52YWxpZCBYIFJlcGxpZXMgICBSZXRyaWVz ICBSZXF1ZXN0cwogICAgICAgIDAgICAgICAgICAwICAgICAgICAgMCAgICAgICAgIDAgICAgICAg ICAwCkNhY2hlIEluZm86CkF0dHIgSGl0cyAgICBNaXNzZXMgTGt1cCBIaXRzICAgIE1pc3NlcyBC aW9SIEhpdHMgICAgTWlzc2VzIEJpb1cgSGl0cyAgICBNaXNzZXMKICAgICAgICAwICAgICAgICAg MCAgICAgICAgIDAgICAgICAgICAwICAgICAgICAgMCAgICAgICAgIDAgICAgICAgICAwICAgICAg ICAgMApCaW9STEhpdHMgICAgTWlzc2VzIEJpb0QgSGl0cyAgICBNaXNzZXMgRGlyRSBIaXRzICAg IE1pc3NlcwogICAgICAgIDAgICAgICAgICAwICAgICAgICAgMCAgICAgICAgIDAgICAgICAgICAw ICAgICAgICAgMAoKU2VydmVyIEluZm86CiAgR2V0YXR0ciAgIFNldGF0dHIgICAgTG9va3VwICBS ZWFkbGluayAgICAgIFJlYWQgICAgIFdyaXRlICAgIENyZWF0ZSAgICBSZW1vdmUKICAgICA1Nzgy ICAgICAgICAxMSAgICAgIDE2NTYgICAgICAgICAwICAgICAgICAgMCAgICAgIDQ0ODMgICAgICAg ICAzICAgICAgICAgMAogICBSZW5hbWUgICAgICBMaW5rICAgU3ltbGluayAgICAgTWtkaXIgICAg IFJtZGlyICAgUmVhZGRpciAgUmRpclBsdXMgICAgQWNjZXNzCiAgICAgICAgMiAgICAgICAgIDAg ICAgICAgICAwICAgICAgICAgMCAgICAgICAgIDAgICAgICAgICAwICAgICAgICAgMCAgICAgICA4 MTQKICAgIE1rbm9kICAgIEZzc3RhdCAgICBGc2luZm8gIFBhdGhDb25mICAgIENvbW1pdAogICAg ICAgIDAgICAgICAgICAwICAgICAgICAgMCAgICAgICAgIDAgICAgICAgICAzClNlcnZlciBSZXQt RmFpbGVkCiAgICAgICAgICAgICAgNDQ0ClNlcnZlciBGYXVsdHMKICAgICAgICAgICAgMApTZXJ2 ZXIgQ2FjaGUgU3RhdHM6CiAgIElucHJvZyAgICAgIElkZW0gIE5vbi1pZGVtICAgIE1pc3Nlcwog ICAgICAgIDAgICAgICAgICAwICAgICAgICAgMCAgICAgICAgIDAKU2VydmVyIFdyaXRlIEdhdGhl cmluZzoKIFdyaXRlT3BzICBXcml0ZVJQQyAgIE9wc2F2ZWQKICAgICA0MjQ5ICAgICAgNDQ4MyAg ICAgICAyMzQKCi0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQpuZXRzdGF0IC1zCgp0Y3A6Cgk3MzY0NCBwYWNrZXRz IHNlbnQKCQkxMjc1MyBkYXRhIHBhY2tldHMgKDE4NTM0MDQgYnl0ZXMpCgkJMCBkYXRhIHBhY2tl 
dHMgKDAgYnl0ZXMpIHJldHJhbnNtaXR0ZWQKCQkwIGRhdGEgcGFja2V0cyB1bm5lY2Vzc2FyaWx5 IHJldHJhbnNtaXR0ZWQKCQkwIHJlc2VuZHMgaW5pdGlhdGVkIGJ5IE1UVSBkaXNjb3ZlcnkKCQk0 OTI3MyBhY2stb25seSBwYWNrZXRzICg1IGRlbGF5ZWQpCgkJMCBVUkcgb25seSBwYWNrZXRzCgkJ MCB3aW5kb3cgcHJvYmUgcGFja2V0cwoJCTExNjE4IHdpbmRvdyB1cGRhdGUgcGFja2V0cwoJCTAg Y29udHJvbCBwYWNrZXRzCgkxMTI2MzYgcGFja2V0cyByZWNlaXZlZAoJCTEyMDgyIGFja3MgKGZv ciAxODUzMjQ0IGJ5dGVzKQoJCTAgZHVwbGljYXRlIGFja3MKCQkwIGFja3MgZm9yIHVuc2VudCBk YXRhCgkJMTExOTM3IHBhY2tldHMgKDE0ODU0MDM5MiBieXRlcykgcmVjZWl2ZWQgaW4tc2VxdWVu Y2UKCQkwIGNvbXBsZXRlbHkgZHVwbGljYXRlIHBhY2tldHMgKDAgYnl0ZXMpCgkJMCBvbGQgZHVw bGljYXRlIHBhY2tldHMKCQkwIHBhY2tldHMgd2l0aCBzb21lIGR1cC4gZGF0YSAoMCBieXRlcyBk dXBlZCkKCQkwIG91dC1vZi1vcmRlciBwYWNrZXRzICgwIGJ5dGVzKQoJCTAgcGFja2V0cyAoMCBi eXRlcykgb2YgZGF0YSBhZnRlciB3aW5kb3cKCQkwIHdpbmRvdyBwcm9iZXMKCQkwIHdpbmRvdyB1 cGRhdGUgcGFja2V0cwoJCTAgcGFja2V0cyByZWNlaXZlZCBhZnRlciBjbG9zZQoJCTAgZGlzY2Fy ZGVkIGZvciBiYWQgY2hlY2tzdW1zCgkJMCBkaXNjYXJkZWQgZm9yIGJhZCBoZWFkZXIgb2Zmc2V0 IGZpZWxkcwoJCTAgZGlzY2FyZGVkIGJlY2F1c2UgcGFja2V0IHRvbyBzaG9ydAoJCTAgZGlzY2Fy ZGVkIGR1ZSB0byBtZW1vcnkgcHJvYmxlbXMKCTAgY29ubmVjdGlvbiByZXF1ZXN0cwoJNCBjb25u ZWN0aW9uIGFjY2VwdHMKCTAgYmFkIGNvbm5lY3Rpb24gYXR0ZW1wdHMKCTAgbGlzdGVuIHF1ZXVl IG92ZXJmbG93cwoJMCBpZ25vcmVkIFJTVHMgaW4gdGhlIHdpbmRvd3MKCTQgY29ubmVjdGlvbnMg ZXN0YWJsaXNoZWQgKGluY2x1ZGluZyBhY2NlcHRzKQoJNSBjb25uZWN0aW9ucyBjbG9zZWQgKGlu Y2x1ZGluZyAzIGRyb3BzKQoJCTAgY29ubmVjdGlvbnMgdXBkYXRlZCBjYWNoZWQgUlRUIG9uIGNs b3NlCgkJMCBjb25uZWN0aW9ucyB1cGRhdGVkIGNhY2hlZCBSVFQgdmFyaWFuY2Ugb24gY2xvc2UK CQkwIGNvbm5lY3Rpb25zIHVwZGF0ZWQgY2FjaGVkIHNzdGhyZXNoIG9uIGNsb3NlCgkwIGVtYnJ5 b25pYyBjb25uZWN0aW9ucyBkcm9wcGVkCgkxMjA4MiBzZWdtZW50cyB1cGRhdGVkIHJ0dCAob2Yg MTE0MTkgYXR0ZW1wdHMpCgkwIHJldHJhbnNtaXQgdGltZW91dHMKCQkwIGNvbm5lY3Rpb25zIGRy b3BwZWQgYnkgcmV4bWl0IHRpbWVvdXQKCTAgcGVyc2lzdCB0aW1lb3V0cwoJCTAgY29ubmVjdGlv bnMgZHJvcHBlZCBieSBwZXJzaXN0IHRpbWVvdXQKCTAgQ29ubmVjdGlvbnMgKGZpbl93YWl0XzIp IGRyb3BwZWQgYmVjYXVzZSBvZiB0aW1lb3V0CgkwIGtlZXBhbGl2ZSB0aW1lb3V0cwoJCTAga2Vl cGFsaXZlIHByb2JlcyBzZW50CgkJMCBjb25uZWN0aW9ucyBkcm9wcGVkIGJ5IGtlZXBhbGl2ZQoJ MzM2IGNvcnJlY3QgQUNLIGhlYWRlciBwcmVkaWN0aW9ucwoJOTkwNDIgY29ycmVjdCBkYXRhIHBh Y2tldCBoZWFkZXIgcHJlZGljdGlvbnMKCTQgc3luY2FjaGUgZW50cmllcyBhZGRlZAoJCTAgcmV0 cmFuc21pdHRlZAoJCTAgZHVwc3luCgkJMCBkcm9wcGVkCgkJNCBjb21wbGV0ZWQKCQkwIGJ1Y2tl dCBvdmVyZmxvdwoJCTAgY2FjaGUgb3ZlcmZsb3cKCQkwIHJlc2V0CgkJMCBzdGFsZQoJCTAgYWJv cnRlZAoJCTAgYmFkYWNrCgkJMCB1bnJlYWNoCgkJMCB6b25lIGZhaWx1cmVzCgk0IGNvb2tpZXMg c2VudAoJMCBjb29raWVzIHJlY2VpdmVkCgkwIFNBQ0sgcmVjb3ZlcnkgZXBpc29kZXMKCTAgc2Vn bWVudCByZXhtaXRzIGluIFNBQ0sgcmVjb3ZlcnkgZXBpc29kZXMKCTAgYnl0ZSByZXhtaXRzIGlu IFNBQ0sgcmVjb3ZlcnkgZXBpc29kZXMKCTAgU0FDSyBvcHRpb25zIChTQUNLIGJsb2NrcykgcmVj ZWl2ZWQKCTAgU0FDSyBvcHRpb25zIChTQUNLIGJsb2Nrcykgc2VudAoJMCBTQUNLIHNjb3JlYm9h cmQgb3ZlcmZsb3cKCTAgcGFja2V0cyB3aXRoIEVDTiBDRSBiaXQgc2V0CgkwIHBhY2tldHMgd2l0 aCBFQ04gRUNUKDApIGJpdCBzZXQKCTAgcGFja2V0cyB3aXRoIEVDTiBFQ1QoMSkgYml0IHNldAoJ MCBzdWNjZXNzZnVsIEVDTiBoYW5kc2hha2VzCgkwIHRpbWVzIEVDTiByZWR1Y2VkIHRoZSBjb25n ZXN0aW9uIHdpbmRvdwp1ZHA6CgkxMyBkYXRhZ3JhbXMgcmVjZWl2ZWQKCTAgd2l0aCBpbmNvbXBs ZXRlIGhlYWRlcgoJMCB3aXRoIGJhZCBkYXRhIGxlbmd0aCBmaWVsZAoJMCB3aXRoIGJhZCBjaGVj a3N1bQoJMCB3aXRoIG5vIGNoZWNrc3VtCgkwIGRyb3BwZWQgZHVlIHRvIG5vIHNvY2tldAoJMCBi cm9hZGNhc3QvbXVsdGljYXN0IGRhdGFncmFtcyB1bmRlbGl2ZXJlZAoJMCBkcm9wcGVkIGR1ZSB0 byBmdWxsIHNvY2tldCBidWZmZXJzCgkwIG5vdCBmb3IgaGFzaGVkIHBjYgoJMTMgZGVsaXZlcmVk CgkxMyBkYXRhZ3JhbXMgb3V0cHV0CgkwIHRpbWVzIG11bHRpY2FzdCBzb3VyY2UgZmlsdGVyIG1h dGNoZWQKaXA6CgkxMTI2NTAgdG90YWwgcGFja2V0cyByZWNlaXZlZAoJMCBiYWQgaGVhZGVyIGNo 
ZWNrc3VtcwoJMCB3aXRoIHNpemUgc21hbGxlciB0aGFuIG1pbmltdW0KCTAgd2l0aCBkYXRhIHNp emUgPCBkYXRhIGxlbmd0aAoJMCB3aXRoIGlwIGxlbmd0aCA+IG1heCBpcCBwYWNrZXQgc2l6ZQoJ MCB3aXRoIGhlYWRlciBsZW5ndGggPCBkYXRhIHNpemUKCTAgd2l0aCBkYXRhIGxlbmd0aCA8IGhl YWRlciBsZW5ndGgKCTAgd2l0aCBiYWQgb3B0aW9ucwoJMCB3aXRoIGluY29ycmVjdCB2ZXJzaW9u IG51bWJlcgoJMCBmcmFnbWVudHMgcmVjZWl2ZWQKCTAgZnJhZ21lbnRzIGRyb3BwZWQgKGR1cCBv ciBvdXQgb2Ygc3BhY2UpCgkwIGZyYWdtZW50cyBkcm9wcGVkIGFmdGVyIHRpbWVvdXQKCTAgcGFj a2V0cyByZWFzc2VtYmxlZCBvawoJMTEyNjQ5IHBhY2tldHMgZm9yIHRoaXMgaG9zdAoJMCBwYWNr ZXRzIGZvciB1bmtub3duL3Vuc3VwcG9ydGVkIHByb3RvY29sCgkwIHBhY2tldHMgZm9yd2FyZGVk ICgwIHBhY2tldHMgZmFzdCBmb3J3YXJkZWQpCgkxIHBhY2tldCBub3QgZm9yd2FyZGFibGUKCTAg cGFja2V0cyByZWNlaXZlZCBmb3IgdW5rbm93biBtdWx0aWNhc3QgZ3JvdXAKCTAgcmVkaXJlY3Rz IHNlbnQKCTczNjY0IHBhY2tldHMgc2VudCBmcm9tIHRoaXMgaG9zdAoJMCBwYWNrZXRzIHNlbnQg d2l0aCBmYWJyaWNhdGVkIGlwIGhlYWRlcgoJMCBvdXRwdXQgcGFja2V0cyBkcm9wcGVkIGR1ZSB0 byBubyBidWZzLCBldGMuCgkwIG91dHB1dCBwYWNrZXRzIGRpc2NhcmRlZCBkdWUgdG8gbm8gcm91 dGUKCTAgb3V0cHV0IGRhdGFncmFtcyBmcmFnbWVudGVkCgkwIGZyYWdtZW50cyBjcmVhdGVkCgkw IGRhdGFncmFtcyB0aGF0IGNhbid0IGJlIGZyYWdtZW50ZWQKCTAgdHVubmVsaW5nIHBhY2tldHMg dGhhdCBjYW4ndCBmaW5kIGdpZgoJMCBkYXRhZ3JhbXMgd2l0aCBiYWQgYWRkcmVzcyBpbiBoZWFk ZXIKaWNtcDoKCTAgY2FsbHMgdG8gaWNtcF9lcnJvcgoJMCBlcnJvcnMgbm90IGdlbmVyYXRlZCBp biByZXNwb25zZSB0byBhbiBpY21wIG1lc3NhZ2UKCTAgbWVzc2FnZXMgd2l0aCBiYWQgY29kZSBm aWVsZHMKCTAgbWVzc2FnZXMgbGVzcyB0aGFuIHRoZSBtaW5pbXVtIGxlbmd0aAoJMCBtZXNzYWdl cyB3aXRoIGJhZCBjaGVja3N1bQoJMCBtZXNzYWdlcyB3aXRoIGJhZCBsZW5ndGgKCTAgbXVsdGlj YXN0IGVjaG8gcmVxdWVzdHMgaWdub3JlZAoJMCBtdWx0aWNhc3QgdGltZXN0YW1wIHJlcXVlc3Rz IGlnbm9yZWQKCTAgbWVzc2FnZSByZXNwb25zZXMgZ2VuZXJhdGVkCgkwIGludmFsaWQgcmV0dXJu IGFkZHJlc3NlcwoJMCBubyByZXR1cm4gcm91dGVzCmlnbXA6CgkwIG1lc3NhZ2VzIHJlY2VpdmVk CgkwIG1lc3NhZ2VzIHJlY2VpdmVkIHdpdGggdG9vIGZldyBieXRlcwoJMCBtZXNzYWdlcyByZWNl aXZlZCB3aXRoIHdyb25nIFRUTAoJMCBtZXNzYWdlcyByZWNlaXZlZCB3aXRoIGJhZCBjaGVja3N1 bQoJMCBWMS9WMiBtZW1iZXJzaGlwIHF1ZXJpZXMgcmVjZWl2ZWQKCTAgVjMgbWVtYmVyc2hpcCBx dWVyaWVzIHJlY2VpdmVkCgkwIG1lbWJlcnNoaXAgcXVlcmllcyByZWNlaXZlZCB3aXRoIGludmFs aWQgZmllbGQocykKCTAgZ2VuZXJhbCBxdWVyaWVzIHJlY2VpdmVkCgkwIGdyb3VwIHF1ZXJpZXMg cmVjZWl2ZWQKCTAgZ3JvdXAtc291cmNlIHF1ZXJpZXMgcmVjZWl2ZWQKCTAgZ3JvdXAtc291cmNl IHF1ZXJpZXMgZHJvcHBlZAoJMCBtZW1iZXJzaGlwIHJlcG9ydHMgcmVjZWl2ZWQKCTAgbWVtYmVy c2hpcCByZXBvcnRzIHJlY2VpdmVkIHdpdGggaW52YWxpZCBmaWVsZChzKQoJMCBtZW1iZXJzaGlw IHJlcG9ydHMgcmVjZWl2ZWQgZm9yIGdyb3VwcyB0byB3aGljaCB3ZSBiZWxvbmcKCTAgVjMgcmVw b3J0cyByZWNlaXZlZCB3aXRob3V0IFJvdXRlciBBbGVydAoJMCBtZW1iZXJzaGlwIHJlcG9ydHMg c2VudAphcnA6CgkyIEFSUCByZXF1ZXN0cyBzZW50CgkzIEFSUCByZXBsaWVzIHNlbnQKCTIzMzAg QVJQIHJlcXVlc3RzIHJlY2VpdmVkCgkxIEFSUCByZXBseSByZWNlaXZlZAoJMjMzMSBBUlAgcGFj a2V0cyByZWNlaXZlZAoJMCB0b3RhbCBwYWNrZXRzIGRyb3BwZWQgZHVlIHRvIG5vIEFSUCBlbnRy eQoJMCBBUlAgZW50cnlzIHRpbWVkIG91dAoJMCBEdXBsaWNhdGUgSVBzIHNlZW4KaXA2OgoJMCB0 b3RhbCBwYWNrZXRzIHJlY2VpdmVkCgkwIHdpdGggc2l6ZSBzbWFsbGVyIHRoYW4gbWluaW11bQoJ MCB3aXRoIGRhdGEgc2l6ZSA8IGRhdGEgbGVuZ3RoCgkwIHdpdGggYmFkIG9wdGlvbnMKCTAgd2l0 aCBpbmNvcnJlY3QgdmVyc2lvbiBudW1iZXIKCTAgZnJhZ21lbnRzIHJlY2VpdmVkCgkwIGZyYWdt ZW50cyBkcm9wcGVkIChkdXAgb3Igb3V0IG9mIHNwYWNlKQoJMCBmcmFnbWVudHMgZHJvcHBlZCBh ZnRlciB0aW1lb3V0CgkwIGZyYWdtZW50cyB0aGF0IGV4Y2VlZGVkIGxpbWl0CgkwIHBhY2tldHMg cmVhc3NlbWJsZWQgb2sKCTAgcGFja2V0cyBmb3IgdGhpcyBob3N0CgkwIHBhY2tldHMgZm9yd2Fy ZGVkCgkwIHBhY2tldHMgbm90IGZvcndhcmRhYmxlCgkwIHJlZGlyZWN0cyBzZW50Cgk1IHBhY2tl dHMgc2VudCBmcm9tIHRoaXMgaG9zdAoJMCBwYWNrZXRzIHNlbnQgd2l0aCBmYWJyaWNhdGVkIGlw IGhlYWRlcgoJMCBvdXRwdXQgcGFja2V0cyBkcm9wcGVkIGR1ZSB0byBubyBidWZzLCBldGMuCgkw 
IG91dHB1dCBwYWNrZXRzIGRpc2NhcmRlZCBkdWUgdG8gbm8gcm91dGUKCTAgb3V0cHV0IGRhdGFn cmFtcyBmcmFnbWVudGVkCgkwIGZyYWdtZW50cyBjcmVhdGVkCgkwIGRhdGFncmFtcyB0aGF0IGNh bid0IGJlIGZyYWdtZW50ZWQKCTAgcGFja2V0cyB0aGF0IHZpb2xhdGVkIHNjb3BlIHJ1bGVzCgkw IG11bHRpY2FzdCBwYWNrZXRzIHdoaWNoIHdlIGRvbid0IGpvaW4KCU1idWYgc3RhdGlzdGljczoK CQkwIG9uZSBtYnVmCgkJMCBvbmUgZXh0IG1idWYKCQkwIHR3byBvciBtb3JlIGV4dCBtYnVmCgkw IHBhY2tldHMgd2hvc2UgaGVhZGVycyBhcmUgbm90IGNvbnRpbnVvdXMKCTAgdHVubmVsaW5nIHBh Y2tldHMgdGhhdCBjYW4ndCBmaW5kIGdpZgoJMCBwYWNrZXRzIGRpc2NhcmRlZCBiZWNhdXNlIG9m IHRvbyBtYW55IGhlYWRlcnMKCTAgZmFpbHVyZXMgb2Ygc291cmNlIGFkZHJlc3Mgc2VsZWN0aW9u CglTb3VyY2UgYWRkcmVzc2VzIHNlbGVjdGlvbiBydWxlIGFwcGxpZWQ6CmljbXA2OgoJMCBjYWxs cyB0byBpY21wNl9lcnJvcgoJMCBlcnJvcnMgbm90IGdlbmVyYXRlZCBpbiByZXNwb25zZSB0byBh biBpY21wNiBtZXNzYWdlCgkwIGVycm9ycyBub3QgZ2VuZXJhdGVkIGJlY2F1c2Ugb2YgcmF0ZSBs aW1pdGF0aW9uCglPdXRwdXQgaGlzdG9ncmFtOgoJCW5laWdoYm9yIHNvbGljaXRhdGlvbjogMQoJ CU1MRHYyIGxpc3RlbmVyIHJlcG9ydDogNAoJMCBtZXNzYWdlcyB3aXRoIGJhZCBjb2RlIGZpZWxk cwoJMCBtZXNzYWdlcyA8IG1pbmltdW0gbGVuZ3RoCgkwIGJhZCBjaGVja3N1bXMKCTAgbWVzc2Fn ZXMgd2l0aCBiYWQgbGVuZ3RoCglIaXN0b2dyYW0gb2YgZXJyb3IgbWVzc2FnZXMgdG8gYmUgZ2Vu ZXJhdGVkOgoJCTAgbm8gcm91dGUKCQkwIGFkbWluaXN0cmF0aXZlbHkgcHJvaGliaXRlZAoJCTAg YmV5b25kIHNjb3BlCgkJMCBhZGRyZXNzIHVucmVhY2hhYmxlCgkJMCBwb3J0IHVucmVhY2hhYmxl CgkJMCBwYWNrZXQgdG9vIGJpZwoJCTAgdGltZSBleGNlZWQgdHJhbnNpdAoJCTAgdGltZSBleGNl ZWQgcmVhc3NlbWJseQoJCTAgZXJyb25lb3VzIGhlYWRlciBmaWVsZAoJCTAgdW5yZWNvZ25pemVk IG5leHQgaGVhZGVyCgkJMCB1bnJlY29nbml6ZWQgb3B0aW9uCgkJMCByZWRpcmVjdAoJCTAgdW5r bm93bgoJMCBtZXNzYWdlIHJlc3BvbnNlcyBnZW5lcmF0ZWQKCTAgbWVzc2FnZXMgd2l0aCB0b28g bWFueSBORCBvcHRpb25zCgkwIG1lc3NhZ2VzIHdpdGggYmFkIE5EIG9wdGlvbnMKCTAgYmFkIG5l aWdoYm9yIHNvbGljaXRhdGlvbiBtZXNzYWdlcwoJMCBiYWQgbmVpZ2hib3IgYWR2ZXJ0aXNlbWVu dCBtZXNzYWdlcwoJMCBiYWQgcm91dGVyIHNvbGljaXRhdGlvbiBtZXNzYWdlcwoJMCBiYWQgcm91 dGVyIGFkdmVydGlzZW1lbnQgbWVzc2FnZXMKCTAgYmFkIHJlZGlyZWN0IG1lc3NhZ2VzCgkwIHBh dGggTVRVIGNoYW5nZXMKcmlwNjoKCTAgbWVzc2FnZXMgcmVjZWl2ZWQKCTAgY2hlY2tzdW0gY2Fs Y3VsYXRpb25zIG9uIGluYm91bmQKCTAgbWVzc2FnZXMgd2l0aCBiYWQgY2hlY2tzdW0KCTAgbWVz c2FnZXMgZHJvcHBlZCBkdWUgdG8gbm8gc29ja2V0CgkwIG11bHRpY2FzdCBtZXNzYWdlcyBkcm9w cGVkIGR1ZSB0byBubyBzb2NrZXQKCTAgbWVzc2FnZXMgZHJvcHBlZCBkdWUgdG8gZnVsbCBzb2Nr ZXQgYnVmZmVycwoJMCBkZWxpdmVyZWQKCTAgZGF0YWdyYW1zIG91dHB1dAoKLS0tLS0tLS0tLS0t LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t LS0tCm5ldHN0YXQgLW0KCjE4NTMvNzE3LzI1NzAgbWJ1ZnMgaW4gdXNlIChjdXJyZW50L2NhY2hl L3RvdGFsKQo4MjgvMTA3OC8xOTA2LzI1NjAwIG1idWYgY2x1c3RlcnMgaW4gdXNlIChjdXJyZW50 L2NhY2hlL3RvdGFsL21heCkKODI5LzMyOCBtYnVmK2NsdXN0ZXJzIG91dCBvZiBwYWNrZXQgc2Vj b25kYXJ5IHpvbmUgaW4gdXNlIChjdXJyZW50L2NhY2hlKQowLzAvMC8xMjgwMCA0ayAocGFnZSBz aXplKSBqdW1ibyBjbHVzdGVycyBpbiB1c2UgKGN1cnJlbnQvY2FjaGUvdG90YWwvbWF4KQowLzAv MC8xOTIwMCA5ayBqdW1ibyBjbHVzdGVycyBpbiB1c2UgKGN1cnJlbnQvY2FjaGUvdG90YWwvbWF4 KQowLzAvMC8xMjgwMCAxNmsganVtYm8gY2x1c3RlcnMgaW4gdXNlIChjdXJyZW50L2NhY2hlL3Rv dGFsL21heCkKMjExOUsvMjMzNUsvNDQ1NEsgYnl0ZXMgYWxsb2NhdGVkIHRvIG5ldHdvcmsgKGN1 cnJlbnQvY2FjaGUvdG90YWwpCjAvMC8wIHJlcXVlc3RzIGZvciBtYnVmcyBkZW5pZWQgKG1idWZz L2NsdXN0ZXJzL21idWYrY2x1c3RlcnMpCjAvMC8wIHJlcXVlc3RzIGZvciBqdW1ibyBjbHVzdGVy cyBkZW5pZWQgKDRrLzlrLzE2aykKMCByZXF1ZXN0cyBmb3Igc2ZidWZzIGRlbmllZAowIHJlcXVl c3RzIGZvciBzZmJ1ZnMgZGVsYXllZAowIHJlcXVlc3RzIGZvciBJL08gaW5pdGlhdGVkIGJ5IHNl bmRmaWxlCjAgY2FsbHMgdG8gcHJvdG9jb2wgZHJhaW4gcm91dGluZXMKCi0tLS0tLS0tLS0tLS0t LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t LQpuZXRzdGF0IC1pZAoKTmFtZSAgICBNdHUgTmV0d29yayAgICAgICBBZGRyZXNzICAgICAgICAg 
ICAgICBJcGt0cyBJZXJycyBJZHJvcCAgICBPcGt0cyBPZXJycyAgQ29sbCBEcm9wCmJnZTAqICAx NTAwIDxMaW5rIzE+ICAgICAgMDA6ZTA6ODE6NDA6Mjk6ZDIgICAgICAgIDAgICAgIDAgICAgIDAg ICAgICAgIDAgICAgIDAgICAgIDAgICAgMCAKYmdlMSAgIDE1MDAgPExpbmsjMj4gICAgICAwMDpl MDo4MTo0MDoyOTpkMyAgIDExNDk4MSAgICAgMCAgICAgMCAgICA3MzY3NSAgICAgMCAgICAgMCAg ICAwIApiZ2UxICAgMTUwMCBmZTgwOjI6OjJlMDo4IGZlODA6Mjo6MmUwOjgxZmY6ICAgICAgICAw ICAgICAtICAgICAtICAgICAgICAzICAgICAtICAgICAtICAgIC0gCmJnZTEgICAxNTAwIDEwLjQy LjQzLjAgICAgeGFuYWR1ICAgICAgICAgICAgICAxMTI2NDkgICAgIC0gICAgIC0gICAgNzM2NjQg ICAgIC0gICAgIC0gICAgLSAKcGxpcDAgIDE1MDAgPExpbmsjMz4gICAgICAgICAgICAgICAgICAg ICAgICAgICAgICAgMCAgICAgMCAgICAgMCAgICAgICAgMCAgICAgMCAgICAgMCAgICAwIApsbzAg ICAxNjM4NCA8TGluayM0PiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAwICAgICAwICAg ICAwICAgICAgICAwICAgICAwICAgICAwICAgIDAgCmxvMCAgIDE2Mzg0IGZlODA6NDo6MSAgICAg ZmU4MDo0OjoxICAgICAgICAgICAgICAgIDAgICAgIC0gICAgIC0gICAgICAgIDAgICAgIC0gICAg IC0gICAgLSAKbG8wICAgMTYzODQgbG9jYWxob3N0ICAgICA6OjEgICAgICAgICAgICAgICAgICAg ICAgMCAgICAgLSAgICAgLSAgICAgICAgMCAgICAgLSAgICAgLSAgICAtIApsbzAgICAxNjM4NCB5 b3VyLW5ldCAgICAgIGxvY2FsaG9zdCAgICAgICAgICAgICAgICAwICAgICAtICAgICAtICAgICAg ICAwICAgICAtICAgICAtICAgIC0gCgotLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0KbmV0c3RhdCAtYW5yCgpSb3V0 aW5nIHRhYmxlcwoKSW50ZXJuZXQ6CkRlc3RpbmF0aW9uICAgICAgICBHYXRld2F5ICAgICAgICAg ICAgRmxhZ3MgICAgUmVmcyAgICAgIFVzZSAgTmV0aWYgRXhwaXJlCmRlZmF1bHQgICAgICAgICAg ICAxMC40Mi40My4xICAgICAgICAgVUdTICAgICAgICAgMCAgICAgICAgMCAgIGJnZTEKMTAuNDIu NDMuMC8yNCAgICAgIGxpbmsjMiAgICAgICAgICAgICBVICAgICAgICAgICA0ICAgIDczNjY0ICAg YmdlMQoxMC40Mi40My4xMCAgICAgICAgbGluayMyICAgICAgICAgICAgIFVIUyAgICAgICAgIDAg ICAgICAgIDAgICAgbG8wCjEyNy4wLjAuMSAgICAgICAgICBsaW5rIzQgICAgICAgICAgICAgVUgg ICAgICAgICAgMCAgICAgICAgMCAgICBsbzAKCkludGVybmV0NjoKRGVzdGluYXRpb24gICAgICAg ICAgICAgICAgICAgICAgIEdhdGV3YXkgICAgICAgICAgICAgICAgICAgICAgIEZsYWdzICAgICAg TmV0aWYgRXhwaXJlCjo6MSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICA6OjEgICAgICAg ICAgICAgICAgICAgICAgICAgICBVSCAgICAgICAgICBsbzAKZmU4MDo6JWJnZTEvNjQgICAgICAg ICAgICAgICAgICAgIGxpbmsjMiAgICAgICAgICAgICAgICAgICAgICAgIFUgICAgICAgICAgYmdl MQpmZTgwOjoyZTA6ODFmZjpmZTQwOjI5ZDMlYmdlMSAgICAgbGluayMyICAgICAgICAgICAgICAg ICAgICAgICAgVUhTICAgICAgICAgbG8wCmZlODA6OiVsbzAvNjQgICAgICAgICAgICAgICAgICAg ICBsaW5rIzQgICAgICAgICAgICAgICAgICAgICAgICBVICAgICAgICAgICBsbzAKZmU4MDo6MSVs bzAgICAgICAgICAgICAgICAgICAgICAgIGxpbmsjNCAgICAgICAgICAgICAgICAgICAgICAgIFVI UyAgICAgICAgIGxvMApmZjAxOjI6Oi8zMiAgICAgICAgICAgICAgICAgICAgICAgZmU4MDo6MmUw OjgxZmY6ZmU0MDoyOWQzJWJnZTEgVSAgICAgICAgICBiZ2UxCmZmMDE6NDo6LzMyICAgICAgICAg ICAgICAgICAgICAgICBmZTgwOjoxJWxvMCAgICAgICAgICAgICAgICAgICBVICAgICAgICAgICBs bzAKZmYwMjo6JWJnZTEvMzIgICAgICAgICAgICAgICAgICAgIGZlODA6OjJlMDo4MWZmOmZlNDA6 MjlkMyViZ2UxIFUgICAgICAgICAgYmdlMQpmZjAyOjolbG8wLzMyICAgICAgICAgICAgICAgICAg ICAgZmU4MDo6MSVsbzAgICAgICAgICAgICAgICAgICAgVSAgICAgICAgICAgbG8wCgotLS0tLS0t LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t LS0tLS0tLS0KbmV0c3RhdCAtYW5BCgpBY3RpdmUgSW50ZXJuZXQgY29ubmVjdGlvbnMgKGluY2x1 ZGluZyBzZXJ2ZXJzKQpUY3BjYiAgICBQcm90byBSZWN2LVEgU2VuZC1RICBMb2NhbCBBZGRyZXNz ICAgICAgRm9yZWlnbiBBZGRyZXNzICAgKHN0YXRlKQpmZmZmZmYwMDE3NWVmMDAwIHRjcDQgICA4 MjcwOCAgICAxNjQgMTAuNDIuNDMuMTAuMjA0OSAgIDEwLjQyLjQzLjEuNzYyICAgICBFU1RBQkxJ U0hFRApmZmZmZmYwMDE3NTQ4MDAwIHRjcDQgICAgICAgMCAgICAgIDAgMTI3LjAuMC4xLjI1ICAg ICAgICouKiAgICAgICAgICAgICAgICBMSVNURU4KZmZmZmZmMDAxNzU0OTZlMCB0Y3A0ICAgICAg IDAgICAgICAwICouMjIgICAgICAgICAgICAgICAqLiogICAgICAgICAgICAgICAgTElTVEVOCmZm 
ZmZmZjAwMTc1NDlhNTAgdGNwNiAgICAgICAwICAgICAgMCAqLjIyICAgICAgICAgICAgICAgKi4q ICAgICAgICAgICAgICAgIExJU1RFTgpmZmZmZmYwMDE3M2RmMDAwIHRjcDYgICAgICAgMCAgICAg IDAgKi4yMDQ5ICAgICAgICAgICAgICouKiAgICAgICAgICAgICAgICBMSVNURU4KZmZmZmZmMDAx NzNkZjM3MCB0Y3A0ICAgICAgIDAgICAgICAwICouMjA0OSAgICAgICAgICAgICAqLiogICAgICAg ICAgICAgICAgTElTVEVOCmZmZmZmZjAwMTc1NDg2ZTAgdGNwNCAgICAgICAwICAgICAgMCAqLjYy NyAgICAgICAgICAgICAgKi4qICAgICAgICAgICAgICAgIExJU1RFTgpmZmZmZmYwMDE3M2RmNmUw IHRjcDYgICAgICAgMCAgICAgIDAgKi42MjcgICAgICAgICAgICAgICouKiAgICAgICAgICAgICAg ICBMSVNURU4KZmZmZmZmMDAxNzU0OGE1MCB0Y3A0ICAgICAgIDAgICAgICAwICouMTExICAgICAg ICAgICAgICAqLiogICAgICAgICAgICAgICAgTElTVEVOCmZmZmZmZjAwMTc1NDkwMDAgdGNwNiAg ICAgICAwICAgICAgMCAqLjExMSAgICAgICAgICAgICAgKi4qICAgICAgICAgICAgICAgIExJU1RF TgpmZmZmZmYwMDE3MzliM2YwIHVkcDYgICAgICAgMCAgICAgIDAgKi4yMDQ5ICAgICAgICAgICAg ICouKiAgICAgICAgICAgICAgICAKZmZmZmZmMDAxNzM5YjJhMCB1ZHA0ICAgICAgIDAgICAgICAw ICouMjA0OSAgICAgICAgICAgICAqLiogICAgICAgICAgICAgICAgCmZmZmZmZjAwMTczOWI2OTAg dWRwNCAgICAgICAwICAgICAgMCAqLjYyNyAgICAgICAgICAgICAgKi4qICAgICAgICAgICAgICAg IApmZmZmZmYwMDE3MzliZDIwIHVkcDYgICAgICAgMCAgICAgIDAgKi42MjcgICAgICAgICAgICAg ICouKiAgICAgICAgICAgICAgICAKZmZmZmZmMDAxNzM5YWQyMCB1ZHA2ICAgICAgIDAgICAgICAw ICouKiAgICAgICAgICAgICAgICAqLiogICAgICAgICAgICAgICAgCmZmZmZmZjAwMTczOWFhODAg dWRwNCAgICAgICAwICAgICAgMCAqLjcyMCAgICAgICAgICAgICAgKi4qICAgICAgICAgICAgICAg IApmZmZmZmYwMDE3MzlhN2UwIHVkcDQgICAgICAgMCAgICAgIDAgKi4xMTEgICAgICAgICAgICAg ICouKiAgICAgICAgICAgICAgICAKZmZmZmZmMDAxNzM5YTU0MCB1ZHA2ICAgICAgIDAgICAgICAw ICouMTAxNSAgICAgICAgICAgICAqLiogICAgICAgICAgICAgICAgCmZmZmZmZjAwMDJhMzEyYTAg dWRwNiAgICAgICAwICAgICAgMCAqLjExMSAgICAgICAgICAgICAgKi4qICAgICAgICAgICAgICAg IApmZmZmZmYwMDE3MzlhMTUwIHVkcDQgICAgICAgMCAgICAgIDAgKi41MTQgICAgICAgICAgICAg ICouKiAgICAgICAgICAgICAgICAKZmZmZmZmMDAxNzM5YTNmMCB1ZHA2ICAgICAgIDAgICAgICAw ICouNTE0ICAgICAgICAgICAgICAqLiogICAgICAgICAgICAgICAgCkFjdGl2ZSBVTklYIGRvbWFp biBzb2NrZXRzCkFkZHJlc3MgIFR5cGUgICBSZWN2LVEgU2VuZC1RICAgIElub2RlICAgICBDb25u ICAgICBSZWZzICBOZXh0cmVmIEFkZHIKZmZmZmZmMDAxNzNjMGMzMCBzdHJlYW0gICAgICAwICAg ICAgMCBmZmZmZmYwMDE3NTI1YzU4ICAgICAgICAwICAgICAgICAwICAgICAgICAwIC92YXIvcnVu L3JwY2JpbmQuc29jawpmZmZmZmYwMDE3MzlmYzMwIHN0cmVhbSAgICAgIDAgICAgICAwIGZmZmZm ZjAwMTczOTY3NjggICAgICAgIDAgICAgICAgIDAgICAgICAgIDAgL3Zhci9ydW4vZGV2ZC5waXBl CmZmZmZmZjAwMTczYzA5NjAgZGdyYW0gICAgICAgMCAgICAgIDAgICAgICAgIDAgZmZmZmZmMDAx NzM5Zjg3MCAgICAgICAgMCBmZmZmZmYwMDE3MzlmMGYwCmZmZmZmZjAwMTczOWYwZjAgZGdyYW0g ICAgICAgMCAgICAgIDAgICAgICAgIDAgZmZmZmZmMDAxNzM5Zjg3MCAgICAgICAgMCBmZmZmZmYw MDE3M2MwZTEwCmZmZmZmZjAwMTczYzBlMTAgZGdyYW0gICAgICAgMCAgICAgIDAgICAgICAgIDAg ZmZmZmZmMDAxNzM5Zjg3MCAgICAgICAgMCAgICAgICAgMApmZmZmZmYwMDE3MzlmODcwIGRncmFt ICAgICAgIDAgICAgICAwIGZmZmZmZjAwMTc1N2YyNzggICAgICAgIDAgZmZmZmZmMDAxNzNjMDk2 MCAgICAgICAgMCAvdmFyL3J1bi9sb2dwcml2CmZmZmZmZjAwMTczYzBkMjAgZGdyYW0gICAgICAg MCAgICAgIDAgZmZmZmZmMDAxNzU3OTllMCAgICAgICAgMCAgICAgICAgMCAgICAgICAgMCAvdmFy L3J1bi9sb2cKCi0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQpuZXRzdGF0IC1hTAoKQ3VycmVudCBsaXN0ZW4gcXVl dWUgc2l6ZXMgKHFsZW4vaW5jcWxlbi9tYXhxbGVuKQpQcm90byBMaXN0ZW4gICAgICAgICBMb2Nh bCBBZGRyZXNzICAgICAgICAgCnRjcDQgIDAvMC8xMCAgICAgICAgIGxvY2FsaG9zdC5zbXRwICAg ICAgICAgCnRjcDQgIDAvMC8xMjggICAgICAgICouc3NoICAgICAgICAgICAgICAgICAgCnRjcDYg IDAvMC8xMjggICAgICAgICouc3NoICAgICAgICAgICAgICAgICAgCnRjcDYgIDAvMC81ICAgICAg ICAgICoubmZzZCAgICAgICAgICAgICAgICAgCnRjcDQgIDAvMC81ICAgICAgICAgICoubmZzZCAg ICAgICAgICAgICAgICAgCnRjcDQgIDAvMC8xMjggICAgICAgICoucGFzc2dvLXRpdm9saSAgICAg 
ICAgCnRjcDYgIDAvMC8xMjggICAgICAgICoucGFzc2dvLXRpdm9saSAgICAgICAgCnRjcDQgIDAv MC8xMjggICAgICAgICouc3VucnBjICAgICAgICAgICAgICAgCnRjcDYgIDAvMC8xMjggICAgICAg ICouc3VucnBjICAgICAgICAgICAgICAgCnVuaXggIDAvMC8xMjggICAgICAgIC92YXIvcnVuL3Jw Y2JpbmQuc29jawp1bml4ICAwLzAvNCAgICAgICAgICAvdmFyL3J1bi9kZXZkLnBpcGUKCi0tLS0t LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t LS0tLS0tLS0tLQpmc3RhdAoKVVNFUiAgICAgQ01EICAgICAgICAgIFBJRCAgIEZEIE1PVU5UICAg ICAgSU5VTSBNT0RFICAgICAgICAgU1p8RFYgUi9XCnJvb3QgICAgIGdldHR5ICAgICAgIDEyMzkg cm9vdCAvICAgICAgICAgICAgIDIgZHJ3eHIteHIteCAgICAxMDI0ICByCnJvb3QgICAgIGdldHR5 ICAgICAgIDEyMzkgICB3ZCAvICAgICAgICAgICAgIDIgZHJ3eHIteHIteCAgICAxMDI0ICByCnJv b3QgICAgIGdldHR5ICAgICAgIDEyMzkgdGV4dCAvICAgICAgICAyODg1NjMzNCAtci14ci14ci14 ICAgMjc2OTYgIHIKcm9vdCAgICAgZ2V0dHkgICAgICAgMTIzOSAgICAwIC9kZXYgICAgICAgICA1 OSBjcnctLS0tLS0tICAgdHR5djcgcncKcm9vdCAgICAgZ2V0dHkgICAgICAgMTIzOSAgICAxIC9k ZXYgICAgICAgICA1OSBjcnctLS0tLS0tICAgdHR5djcgcncKcm9vdCAgICAgZ2V0dHkgICAgICAg MTIzOSAgICAyIC9kZXYgICAgICAgICA1OSBjcnctLS0tLS0tICAgdHR5djcgcncKcm9vdCAgICAg Z2V0dHkgICAgICAgMTIzOCByb290IC8gICAgICAgICAgICAgMiBkcnd4ci14ci14ICAgIDEwMjQg IHIKcm9vdCAgICAgZ2V0dHkgICAgICAgMTIzOCAgIHdkIC8gICAgICAgICAgICAgMiBkcnd4ci14 ci14ICAgIDEwMjQgIHIKcm9vdCAgICAgZ2V0dHkgICAgICAgMTIzOCB0ZXh0IC8gICAgICAgIDI4 ODU2MzM0IC1yLXhyLXhyLXggICAyNzY5NiAgcgpyb290ICAgICBnZXR0eSAgICAgICAxMjM4ICAg IDAgL2RldiAgICAgICAgIDU4IGNydy0tLS0tLS0gICB0dHl2NiBydwpyb290ICAgICBnZXR0eSAg ICAgICAxMjM4ICAgIDEgL2RldiAgICAgICAgIDU4IGNydy0tLS0tLS0gICB0dHl2NiBydwpyb290 ICAgICBnZXR0eSAgICAgICAxMjM4ICAgIDIgL2RldiAgICAgICAgIDU4IGNydy0tLS0tLS0gICB0 dHl2NiBydwpyb290ICAgICBnZXR0eSAgICAgICAxMjM3IHJvb3QgLyAgICAgICAgICAgICAyIGRy d3hyLXhyLXggICAgMTAyNCAgcgpyb290ICAgICBnZXR0eSAgICAgICAxMjM3ICAgd2QgLyAgICAg ICAgICAgICAyIGRyd3hyLXhyLXggICAgMTAyNCAgcgpyb290ICAgICBnZXR0eSAgICAgICAxMjM3 IHRleHQgLyAgICAgICAgMjg4NTYzMzQgLXIteHIteHIteCAgIDI3Njk2ICByCnJvb3QgICAgIGdl dHR5ICAgICAgIDEyMzcgICAgMCAvZGV2ICAgICAgICAgNTcgY3J3LS0tLS0tLSAgIHR0eXY1IHJ3 CnJvb3QgICAgIGdldHR5ICAgICAgIDEyMzcgICAgMSAvZGV2ICAgICAgICAgNTcgY3J3LS0tLS0t LSAgIHR0eXY1IHJ3CnJvb3QgICAgIGdldHR5ICAgICAgIDEyMzcgICAgMiAvZGV2ICAgICAgICAg NTcgY3J3LS0tLS0tLSAgIHR0eXY1IHJ3CnJvb3QgICAgIGdldHR5ICAgICAgIDEyMzYgcm9vdCAv ICAgICAgICAgICAgIDIgZHJ3eHIteHIteCAgICAxMDI0ICByCnJvb3QgICAgIGdldHR5ICAgICAg IDEyMzYgICB3ZCAvICAgICAgICAgICAgIDIgZHJ3eHIteHIteCAgICAxMDI0ICByCnJvb3QgICAg IGdldHR5ICAgICAgIDEyMzYgdGV4dCAvICAgICAgICAyODg1NjMzNCAtci14ci14ci14ICAgMjc2 OTYgIHIKcm9vdCAgICAgZ2V0dHkgICAgICAgMTIzNiAgICAwIC9kZXYgICAgICAgICA1NiBjcnct LS0tLS0tICAgdHR5djQgcncKcm9vdCAgICAgZ2V0dHkgICAgICAgMTIzNiAgICAxIC9kZXYgICAg ICAgICA1NiBjcnctLS0tLS0tICAgdHR5djQgcncKcm9vdCAgICAgZ2V0dHkgICAgICAgMTIzNiAg ICAyIC9kZXYgICAgICAgICA1NiBjcnctLS0tLS0tICAgdHR5djQgcncKcm9vdCAgICAgZ2V0dHkg ICAgICAgMTIzNSByb290IC8gICAgICAgICAgICAgMiBkcnd4ci14ci14ICAgIDEwMjQgIHIKcm9v dCAgICAgZ2V0dHkgICAgICAgMTIzNSAgIHdkIC8gICAgICAgICAgICAgMiBkcnd4ci14ci14ICAg IDEwMjQgIHIKcm9vdCAgICAgZ2V0dHkgICAgICAgMTIzNSB0ZXh0IC8gICAgICAgIDI4ODU2MzM0 IC1yLXhyLXhyLXggICAyNzY5NiAgcgpyb290ICAgICBnZXR0eSAgICAgICAxMjM1ICAgIDAgL2Rl diAgICAgICAgIDU1IGNydy0tLS0tLS0gICB0dHl2MyBydwpyb290ICAgICBnZXR0eSAgICAgICAx MjM1ICAgIDEgL2RldiAgICAgICAgIDU1IGNydy0tLS0tLS0gICB0dHl2MyBydwpyb290ICAgICBn ZXR0eSAgICAgICAxMjM1ICAgIDIgL2RldiAgICAgICAgIDU1IGNydy0tLS0tLS0gICB0dHl2MyBy dwpyb290ICAgICBnZXR0eSAgICAgICAxMjM0IHJvb3QgLyAgICAgICAgICAgICAyIGRyd3hyLXhy LXggICAgMTAyNCAgcgpyb290ICAgICBnZXR0eSAgICAgICAxMjM0ICAgd2QgLyAgICAgICAgICAg ICAyIGRyd3hyLXhyLXggICAgMTAyNCAgcgpyb290ICAgICBnZXR0eSAgICAgICAxMjM0IHRleHQg 
LyAgICAgICAgMjg4NTYzMzQgLXIteHIteHIteCAgIDI3Njk2ICByCnJvb3QgICAgIGdldHR5ICAg ICAgIDEyMzQgICAgMCAvZGV2ICAgICAgICAgNTQgY3J3LS0tLS0tLSAgIHR0eXYyIHJ3CnJvb3Qg ICAgIGdldHR5ICAgICAgIDEyMzQgICAgMSAvZGV2ICAgICAgICAgNTQgY3J3LS0tLS0tLSAgIHR0 eXYyIHJ3CnJvb3QgICAgIGdldHR5ICAgICAgIDEyMzQgICAgMiAvZGV2ICAgICAgICAgNTQgY3J3 LS0tLS0tLSAgIHR0eXYyIHJ3CnJvb3QgICAgIGdldHR5ICAgICAgIDEyMzMgcm9vdCAvICAgICAg ICAgICAgIDIgZHJ3eHIteHIteCAgICAxMDI0ICByCnJvb3QgICAgIGdldHR5ICAgICAgIDEyMzMg ICB3ZCAvICAgICAgICAgICAgIDIgZHJ3eHIteHIteCAgICAxMDI0ICByCnJvb3QgICAgIGdldHR5 ICAgICAgIDEyMzMgdGV4dCAvICAgICAgICAyODg1NjMzNCAtci14ci14ci14ICAgMjc2OTYgIHIK cm9vdCAgICAgZ2V0dHkgICAgICAgMTIzMyAgICAwIC9kZXYgICAgICAgICA1MyBjcnctLS0tLS0t ICAgdHR5djEgcncKcm9vdCAgICAgZ2V0dHkgICAgICAgMTIzMyAgICAxIC9kZXYgICAgICAgICA1 MyBjcnctLS0tLS0tICAgdHR5djEgcncKcm9vdCAgICAgZ2V0dHkgICAgICAgMTIzMyAgICAyIC9k ZXYgICAgICAgICA1MyBjcnctLS0tLS0tICAgdHR5djEgcncKcm9vdCAgICAgZ2V0dHkgICAgICAg MTIzMiByb290IC8gICAgICAgICAgICAgMiBkcnd4ci14ci14ICAgIDEwMjQgIHIKcm9vdCAgICAg Z2V0dHkgICAgICAgMTIzMiAgIHdkIC8gICAgICAgICAgICAgMiBkcnd4ci14ci14ICAgIDEwMjQg IHIKcm9vdCAgICAgZ2V0dHkgICAgICAgMTIzMiB0ZXh0IC8gICAgICAgIDI4ODU2MzM0IC1yLXhy LXhyLXggICAyNzY5NiAgcgpyb290ICAgICBnZXR0eSAgICAgICAxMjMyICAgIDAgL2RldiAgICAg ICAgIDUyIGNydy0tLS0tLS0gICB0dHl2MCBydwpyb290ICAgICBnZXR0eSAgICAgICAxMjMyICAg IDEgL2RldiAgICAgICAgIDUyIGNydy0tLS0tLS0gICB0dHl2MCBydwpyb290ICAgICBnZXR0eSAg ICAgICAxMjMyICAgIDIgL2RldiAgICAgICAgIDUyIGNydy0tLS0tLS0gICB0dHl2MCBydwpyb290 ICAgICBzbGVlcCAgICAgICAxMjMwIHJvb3QgLyAgICAgICAgICAgICAyIGRyd3hyLXhyLXggICAg MTAyNCAgcgpyb290ICAgICBzbGVlcCAgICAgICAxMjMwICAgd2QgLyAgICAgICAgICAgICAyIGRy d3hyLXhyLXggICAgMTAyNCAgcgpyb290ICAgICBzbGVlcCAgICAgICAxMjMwIHRleHQgLyAgICAg ICAgMTA0NTcxMjIgLXIteHIteHIteCAgICA1MjI0ICByCnJvb3QgICAgIHNsZWVwICAgICAgIDEy MzAgICAgMCAvZGV2ICAgICAgICAgMjAgY3J3LXJ3LXJ3LSAgICBudWxsICByCnJvb3QgICAgIHNs ZWVwICAgICAgIDEyMzAgICAgMSogcGlwZSBmZmZmZmYwMDE3MzRiNDMwIDwtPiBmZmZmZmYwMDE3 MzRiMmQ4ICAgICAgMCBydwpyb290ICAgICBzbGVlcCAgICAgICAxMjMwICAgIDIqIHBpcGUgZmZm ZmZmMDAxNzM0YjQzMCA8LT4gZmZmZmZmMDAxNzM0YjJkOCAgICAgIDAgcncKcm9vdCAgICAgbG9n Z2VyICAgICAgMTIyOSByb290IC8gICAgICAgICAgICAgMiBkcnd4ci14ci14ICAgIDEwMjQgIHIK cm9vdCAgICAgbG9nZ2VyICAgICAgMTIyOSAgIHdkIC8gICAgICAgICAgICAgMiBkcnd4ci14ci14 ICAgIDEwMjQgIHIKcm9vdCAgICAgbG9nZ2VyICAgICAgMTIyOSB0ZXh0IC8gICAgICAgIDI4ODU1 MDE5IC1yLXhyLXhyLXggICAxMjAyNCAgcgpyb290ICAgICBsb2dnZXIgICAgICAxMjI5ICAgIDAq IHBpcGUgZmZmZmZmMDAxNzM0YjJkOCA8LT4gZmZmZmZmMDAxNzM0YjQzMCAgICAgIDAgcncKcm9v dCAgICAgbG9nZ2VyICAgICAgMTIyOSAgICAyIC0gICAgICAgICAtICAgICAgICAgYmFkICAgIC0K cm9vdCAgICAgc2ggICAgICAgICAgMTIyOCByb290IC8gICAgICAgICAgICAgMiBkcnd4ci14ci14 ICAgIDEwMjQgIHIKcm9vdCAgICAgc2ggICAgICAgICAgMTIyOCAgIHdkIC8gICAgICAgICAgICAg MiBkcnd4ci14ci14ICAgIDEwMjQgIHIKcm9vdCAgICAgc2ggICAgICAgICAgMTIyOCB0ZXh0IC8g ICAgICAgIDEwNDU3MTE5IC1yLXhyLXhyLXggIDEzOTAyNCAgcgpyb290ICAgICBzaCAgICAgICAg ICAxMjI4ICAgIDAgL2RldiAgICAgICAgIDIwIGNydy1ydy1ydy0gICAgbnVsbCAgcgpyb290ICAg ICBzaCAgICAgICAgICAxMjI4ICAgIDEqIHBpcGUgZmZmZmZmMDAxNzM0YjQzMCA8LT4gZmZmZmZm MDAxNzM0YjJkOCAgICAgIDAgcncKcm9vdCAgICAgc2ggICAgICAgICAgMTIyOCAgICAyKiBwaXBl IGZmZmZmZjAwMTczNGI0MzAgPC0+IGZmZmZmZjAwMTczNGIyZDggICAgICAwIHJ3CnJvb3QgICAg IGNyb24gICAgICAgIDExNjggcm9vdCAvICAgICAgICAgICAgIDIgZHJ3eHIteHIteCAgICAxMDI0 ICByCnJvb3QgICAgIGNyb24gICAgICAgIDExNjggICB3ZCAvICAgICAgICA3OTg0MTM0IGRyd3hy LXgtLS0gICAgIDUxMiAgcgpyb290ICAgICBjcm9uICAgICAgICAxMTY4IHRleHQgLyAgICAgICAg Mjg4NTYxNjMgLXIteHIteHIteCAgIDM5ODU2ICByCnJvb3QgICAgIGNyb24gICAgICAgIDExNjgg ICAgMCAvZGV2ICAgICAgICAgMjAgY3J3LXJ3LXJ3LSAgICBudWxsIHJ3CnJvb3QgICAgIGNyb24g 
ICAgICAgIDExNjggICAgMSAvZGV2ICAgICAgICAgMjAgY3J3LXJ3LXJ3LSAgICBudWxsIHJ3CnJv b3QgICAgIGNyb24gICAgICAgIDExNjggICAgMiAvZGV2ICAgICAgICAgMjAgY3J3LXJ3LXJ3LSAg ICBudWxsIHJ3CnJvb3QgICAgIGNyb24gICAgICAgIDExNjggICAgMyAvICAgICAgICA3OTg0MzY5 IC1ydy0tLS0tLS0gICAgICAgNCAgdwpzbW1zcCAgICBzZW5kbWFpbCAgICAxMTYxIHJvb3QgLyAg ICAgICAgICAgICAyIGRyd3hyLXhyLXggICAgMTAyNCAgcgpzbW1zcCAgICBzZW5kbWFpbCAgICAx MTYxICAgd2QgLyAgICAgICAgNzk4NDE1NiBkcnd4cnd4LS0tICAgICA1MTIgIHIKc21tc3AgICAg c2VuZG1haWwgICAgMTE2MSB0ZXh0IC8gICAgICAgIDI4ODU1MzM4IC1yLXhyLXNyLXggIDY5ODE0 NCAgcgpzbW1zcCAgICBzZW5kbWFpbCAgICAxMTYxICAgIDAgL2RldiAgICAgICAgIDIwIGNydy1y dy1ydy0gICAgbnVsbCAgcgpzbW1zcCAgICBzZW5kbWFpbCAgICAxMTYxICAgIDEgL2RldiAgICAg ICAgIDIwIGNydy1ydy1ydy0gICAgbnVsbCAgdwpzbW1zcCAgICBzZW5kbWFpbCAgICAxMTYxICAg IDIgL2RldiAgICAgICAgIDIwIGNydy1ydy1ydy0gICAgbnVsbCAgdwpzbW1zcCAgICBzZW5kbWFp bCAgICAxMTYxICAgIDMqIGxvY2FsIGRncmFtIGZmZmZmZjAwMTczYzA5NjAgPC0+IGZmZmZmZjAw MTczOWY4NzAKc21tc3AgICAgc2VuZG1haWwgICAgMTE2MSAgICA0IC8gICAgICAgIDc5ODQyODcg LXJ3LS0tLS0tLSAgICAgIDUwICB3CnJvb3QgICAgIHNlbmRtYWlsICAgIDExNTQgcm9vdCAvICAg ICAgICAgICAgIDIgZHJ3eHIteHIteCAgICAxMDI0ICByCnJvb3QgICAgIHNlbmRtYWlsICAgIDEx NTQgICB3ZCAvICAgICAgICA3OTg0MTUzIGRyd3hyLXhyLXggICAgIDUxMiAgcgpyb290ICAgICBz ZW5kbWFpbCAgICAxMTU0IHRleHQgLyAgICAgICAgMjg4NTUzMzggLXIteHItc3IteCAgNjk4MTQ0 ICByCnJvb3QgICAgIHNlbmRtYWlsICAgIDExNTQgICAgMCAvZGV2ICAgICAgICAgMjAgY3J3LXJ3 LXJ3LSAgICBudWxsICByCnJvb3QgICAgIHNlbmRtYWlsICAgIDExNTQgICAgMSAvZGV2ICAgICAg ICAgMjAgY3J3LXJ3LXJ3LSAgICBudWxsICB3CnJvb3QgICAgIHNlbmRtYWlsICAgIDExNTQgICAg MiAvZGV2ICAgICAgICAgMjAgY3J3LXJ3LXJ3LSAgICBudWxsICB3CnJvb3QgICAgIHNlbmRtYWls ICAgIDExNTQgICAgMyogbG9jYWwgZGdyYW0gZmZmZmZmMDAxNzM5ZjBmMCA8LT4gZmZmZmZmMDAx NzM5Zjg3MApyb290ICAgICBzZW5kbWFpbCAgICAxMTU0ICAgIDQqIGludGVybmV0IHN0cmVhbSB0 Y3AgZmZmZmZmMDAxNzU0ODAwMApyb290ICAgICBzZW5kbWFpbCAgICAxMTU0ICAgIDUgLyAgICAg ICAgNzk4NDM1NSAtcnctLS0tLS0tICAgICAgNzkgIHcKcm9vdCAgICAgc3NoZCAgICAgICAgMTE0 NSByb290IC8gICAgICAgICAgICAgMiBkcnd4ci14ci14ICAgIDEwMjQgIHIKcm9vdCAgICAgc3No ZCAgICAgICAgMTE0NSAgIHdkIC8gICAgICAgICAgICAgMiBkcnd4ci14ci14ICAgIDEwMjQgIHIK cm9vdCAgICAgc3NoZCAgICAgICAgMTE0NSB0ZXh0IC8gICAgICAgIDI4ODUyMzQ2IC1yLXhyLXhy LXggIDI1OTUxMiAgcgpyb290ICAgICBzc2hkICAgICAgICAxMTQ1ICAgIDAgL2RldiAgICAgICAg IDIwIGNydy1ydy1ydy0gICAgbnVsbCBydwpyb290ICAgICBzc2hkICAgICAgICAxMTQ1ICAgIDEg L2RldiAgICAgICAgIDIwIGNydy1ydy1ydy0gICAgbnVsbCBydwpyb290ICAgICBzc2hkICAgICAg ICAxMTQ1ICAgIDIgL2RldiAgICAgICAgIDIwIGNydy1ydy1ydy0gICAgbnVsbCBydwpyb290ICAg ICBzc2hkICAgICAgICAxMTQ1ICAgIDMqIGludGVybmV0NiBzdHJlYW0gdGNwIGZmZmZmZjAwMTc1 NDlhNTAKcm9vdCAgICAgc3NoZCAgICAgICAgMTE0NSAgICA0KiBpbnRlcm5ldCBzdHJlYW0gdGNw IGZmZmZmZjAwMTc1NDk2ZTAKX2RoY3AgICAgZGhjbGllbnQgICAgMTE0NCByb290IC8gICAgICAg IDc5ODQxMzYgZHIteHIteHIteCAgICAgNTEyICByCl9kaGNwICAgIGRoY2xpZW50ICAgIDExNDQg ICB3ZCAvICAgICAgICA3OTg0MTM2IGRyLXhyLXhyLXggICAgIDUxMiAgcgpfZGhjcCAgICBkaGNs aWVudCAgICAxMTQ0IGphaWwgLyAgICAgICAgNzk4NDEzNiBkci14ci14ci14ICAgICA1MTIgIHIK X2RoY3AgICAgZGhjbGllbnQgICAgMTE0NCB0ZXh0IC8gICAgICAgIDI0NDk1MDcgLXIteHIteHIt eCAgIDkxMjMyICByCl9kaGNwICAgIGRoY2xpZW50ICAgIDExNDQgICAgMCAvZGV2ICAgICAgICAg MjAgY3J3LXJ3LXJ3LSAgICBudWxsIHJ3Cl9kaGNwICAgIGRoY2xpZW50ICAgIDExNDQgICAgMSAv ZGV2ICAgICAgICAgMjAgY3J3LXJ3LXJ3LSAgICBudWxsIHJ3Cl9kaGNwICAgIGRoY2xpZW50ICAg IDExNDQgICAgMiAvZGV2ICAgICAgICAgMjAgY3J3LXJ3LXJ3LSAgICBudWxsIHJ3Cl9kaGNwICAg IGRoY2xpZW50ICAgIDExNDQgICAgMyogbG9jYWwgZGdyYW0gZmZmZmZmMDAxNzNjMGUxMCA8LT4g ZmZmZmZmMDAxNzM5Zjg3MApfZGhjcCAgICBkaGNsaWVudCAgICAxMTQ0ICAgIDUqIHJvdXRlIHJh dyAwIGZmZmZmZjAwMTczOTk1NTAKX2RoY3AgICAgZGhjbGllbnQgICAgMTE0NCAgICA2KiBwaXBl 
IGZmZmZmZjAwMTc1NTQ5ZTAgPC0+IGZmZmZmZjAwMTc1NTQ4ODggICAgICAwIHJ3Cl9kaGNwICAg IGRoY2xpZW50ICAgIDExNDQgICAgNyAvICAgICAgICA3OTg0MjkwIC0tLS0tLS0tLS0gICAgMTAw MCAgdwpfZGhjcCAgICBkaGNsaWVudCAgICAxMTQ0ICAgIDggL2RldiAgICAgICAgIDE4IGNydy0t LS0tLS0gICAgIGJwZiBydwpfZGhjcCAgICBkaGNsaWVudCAgICAxMTQ0ICAgIDkqIGludGVybmV0 IHJhdyBpcCBmZmZmZmYwMDE3NjBkMDAwCnJvb3QgICAgIGRoY2xpZW50ICAgIDEwOTUgcm9vdCAv ICAgICAgICAgICAgIDIgZHJ3eHIteHIteCAgICAxMDI0ICByCnJvb3QgICAgIGRoY2xpZW50ICAg IDEwOTUgICB3ZCAvICAgICAgICAgICAgIDIgZHJ3eHIteHIteCAgICAxMDI0ICByCnJvb3QgICAg IGRoY2xpZW50ICAgIDEwOTUgdGV4dCAvICAgICAgICAyNDQ5NTA3IC1yLXhyLXhyLXggICA5MTIz MiAgcgpyb290ICAgICBkaGNsaWVudCAgICAxMDk1ICAgIDAgL2RldiAgICAgICAgIDIwIGNydy1y dy1ydy0gICAgbnVsbCBydwpyb290ICAgICBkaGNsaWVudCAgICAxMDk1ICAgIDEgL2RldiAgICAg ICAgIDIwIGNydy1ydy1ydy0gICAgbnVsbCBydwpyb290ICAgICBkaGNsaWVudCAgICAxMDk1ICAg IDIgL2RldiAgICAgICAgIDIwIGNydy1ydy1ydy0gICAgbnVsbCBydwpyb290ICAgICBkaGNsaWVu dCAgICAxMDk1ICAgIDMqIGxvY2FsIGRncmFtIGZmZmZmZjAwMTczYzBlMTAgPC0+IGZmZmZmZjAw MTczOWY4NzAKcm9vdCAgICAgZGhjbGllbnQgICAgMTA5NSAgICA1KiBwaXBlIGZmZmZmZjAwMTc1 NTQ4ODggPC0+IGZmZmZmZjAwMTc1NTQ5ZTAgICAgICAwIHJ3CnJvb3QgICAgIG5mc2QgICAgICAg ICA5MzUgcm9vdCAvICAgICAgICAgICAgIDIgZHJ3eHIteHIteCAgICAxMDI0ICByCnJvb3QgICAg IG5mc2QgICAgICAgICA5MzUgICB3ZCAvICAgICAgICAgICAgIDIgZHJ3eHIteHIteCAgICAxMDI0 ICByCnJvb3QgICAgIG5mc2QgICAgICAgICA5MzUgdGV4dCAvICAgICAgICAyODg1NTE3MyAtci14 ci14ci14ICAgMTk4MDggIHIKcm9vdCAgICAgbmZzZCAgICAgICAgIDkzNSAgICAwIC9kZXYgICAg ICAgICAyMCBjcnctcnctcnctICAgIG51bGwgcncKcm9vdCAgICAgbmZzZCAgICAgICAgIDkzNSAg ICAxIC9kZXYgICAgICAgICAyMCBjcnctcnctcnctICAgIG51bGwgcncKcm9vdCAgICAgbmZzZCAg ICAgICAgIDkzNSAgICAyIC9kZXYgICAgICAgICAyMCBjcnctcnctcnctICAgIG51bGwgcncKcm9v dCAgICAgbmZzZCAgICAgICAgIDkzNCByb290IC8gICAgICAgICAgICAgMiBkcnd4ci14ci14ICAg IDEwMjQgIHIKcm9vdCAgICAgbmZzZCAgICAgICAgIDkzNCAgIHdkIC8gICAgICAgICAgICAgMiBk cnd4ci14ci14ICAgIDEwMjQgIHIKcm9vdCAgICAgbmZzZCAgICAgICAgIDkzNCB0ZXh0IC8gICAg ICAgIDI4ODU1MTczIC1yLXhyLXhyLXggICAxOTgwOCAgcgpyb290ICAgICBuZnNkICAgICAgICAg OTM0ICAgIDAgL2RldiAgICAgICAgIDIwIGNydy1ydy1ydy0gICAgbnVsbCBydwpyb290ICAgICBu ZnNkICAgICAgICAgOTM0ICAgIDEgL2RldiAgICAgICAgIDIwIGNydy1ydy1ydy0gICAgbnVsbCBy dwpyb290ICAgICBuZnNkICAgICAgICAgOTM0ICAgIDIgL2RldiAgICAgICAgIDIwIGNydy1ydy1y dy0gICAgbnVsbCBydwpyb290ICAgICBuZnNkICAgICAgICAgOTM0ICAgIDMqIGludGVybmV0IHN0 cmVhbSB0Y3AgZmZmZmZmMDAxNzNkZjM3MApyb290ICAgICBuZnNkICAgICAgICAgOTM0ICAgIDQq IGludGVybmV0NiBzdHJlYW0gdGNwIGZmZmZmZjAwMTczZGYwMDAKcm9vdCAgICAgbW91bnRkICAg ICAgIDkxNyByb290IC8gICAgICAgICAgICAgMiBkcnd4ci14ci14ICAgIDEwMjQgIHIKcm9vdCAg ICAgbW91bnRkICAgICAgIDkxNyAgIHdkIC8gICAgICAgICAgICAgMiBkcnd4ci14ci14ICAgIDEw MjQgIHIKcm9vdCAgICAgbW91bnRkICAgICAgIDkxNyB0ZXh0IC8gICAgICAgIDI4ODU1MTU3IC1y LXhyLXhyLXggICA0MjU1MiAgcgpyb290ICAgICBtb3VudGQgICAgICAgOTE3ICAgIDAgL2RldiAg ICAgICAgIDIwIGNydy1ydy1ydy0gICAgbnVsbCBydwpyb290ICAgICBtb3VudGQgICAgICAgOTE3 ICAgIDEgL2RldiAgICAgICAgIDIwIGNydy1ydy1ydy0gICAgbnVsbCBydwpyb290ICAgICBtb3Vu dGQgICAgICAgOTE3ICAgIDIgL2RldiAgICAgICAgIDIwIGNydy1ydy1ydy0gICAgbnVsbCBydwpy b290ICAgICBtb3VudGQgICAgICAgOTE3ICAgIDMgLyAgICAgICAgNzk4NDMzNSAtcnctLS0tLS0t ICAgICAgIDMgIHcKcm9vdCAgICAgbW91bnRkICAgICAgIDkxNyAgICA1KiBpbnRlcm5ldDYgZGdy YW0gdWRwIGZmZmZmZjAwMTczOWJkMjAKcm9vdCAgICAgbW91bnRkICAgICAgIDkxNyAgICA2KiBp bnRlcm5ldDYgc3RyZWFtIHRjcCBmZmZmZmYwMDE3M2RmNmUwCnJvb3QgICAgIG1vdW50ZCAgICAg ICA5MTcgICAgNyogaW50ZXJuZXQgZGdyYW0gdWRwIGZmZmZmZjAwMTczOWI2OTAKcm9vdCAgICAg bW91bnRkICAgICAgIDkxNyAgICA4KiBpbnRlcm5ldCBzdHJlYW0gdGNwIGZmZmZmZjAwMTc1NDg2 ZTAKcm9vdCAgICAgcnBjYmluZCAgICAgIDkxMyByb290IC8gICAgICAgICAgICAgMiBkcnd4ci14 
ci14ICAgIDEwMjQgIHIKcm9vdCAgICAgcnBjYmluZCAgICAgIDkxMyAgIHdkIC8gICAgICAgICAg ICAgMiBkcnd4ci14ci14ICAgIDEwMjQgIHIKcm9vdCAgICAgcnBjYmluZCAgICAgIDkxMyB0ZXh0 IC8gICAgICAgIDI4ODU1MjE4IC1yLXhyLXhyLXggICA0NzE2MCAgcgpyb290ICAgICBycGNiaW5k ICAgICAgOTEzICAgIDAgL2RldiAgICAgICAgIDIwIGNydy1ydy1ydy0gICAgbnVsbCBydwpyb290 ICAgICBycGNiaW5kICAgICAgOTEzICAgIDEgL2RldiAgICAgICAgIDIwIGNydy1ydy1ydy0gICAg bnVsbCBydwpyb290ICAgICBycGNiaW5kICAgICAgOTEzICAgIDIgL2RldiAgICAgICAgIDIwIGNy dy1ydy1ydy0gICAgbnVsbCBydwpyb290ICAgICBycGNiaW5kICAgICAgOTEzICAgIDMgLyAgICAg ICAgNzk4NDI4NSAtci0tci0tci0tICAgICAgIDAgIHIKcm9vdCAgICAgcnBjYmluZCAgICAgIDkx MyAgICA0KiBpbnRlcm5ldDYgZGdyYW0gdWRwIGZmZmZmZjAwMTczOWFkMjAKcm9vdCAgICAgcnBj YmluZCAgICAgIDkxMyAgICA1KiBsb2NhbCBzdHJlYW0gZmZmZmZmMDAxNzNjMGMzMApyb290ICAg ICBycGNiaW5kICAgICAgOTEzICAgIDYqIGludGVybmV0NiBkZ3JhbSB1ZHAgZmZmZmZmMDAwMmEz MTJhMApyb290ICAgICBycGNiaW5kICAgICAgOTEzICAgIDcqIGludGVybmV0NiBkZ3JhbSB1ZHAg ZmZmZmZmMDAxNzM5YTU0MApyb290ICAgICBycGNiaW5kICAgICAgOTEzICAgIDgqIGludGVybmV0 NiBzdHJlYW0gdGNwIGZmZmZmZjAwMTc1NDkwMDAKcm9vdCAgICAgcnBjYmluZCAgICAgIDkxMyAg ICA5KiBpbnRlcm5ldCBkZ3JhbSB1ZHAgZmZmZmZmMDAxNzM5YTdlMApyb290ICAgICBycGNiaW5k ICAgICAgOTEzICAgMTAqIGludGVybmV0IGRncmFtIHVkcCBmZmZmZmYwMDE3MzlhYTgwCnJvb3Qg ICAgIHJwY2JpbmQgICAgICA5MTMgICAxMSogaW50ZXJuZXQgc3RyZWFtIHRjcCBmZmZmZmYwMDE3 NTQ4YTUwCnJvb3QgICAgIHN5c2xvZ2QgICAgICA3ODMgcm9vdCAvICAgICAgICAgICAgIDIgZHJ3 eHIteHIteCAgICAxMDI0ICByCnJvb3QgICAgIHN5c2xvZ2QgICAgICA3ODMgICB3ZCAvICAgICAg ICAgICAgIDIgZHJ3eHIteHIteCAgICAxMDI0ICByCnJvb3QgICAgIHN5c2xvZ2QgICAgICA3ODMg dGV4dCAvICAgICAgICAyODg1NTMzOSAtci14ci14ci14ICAgMzk1NDQgIHIKcm9vdCAgICAgc3lz bG9nZCAgICAgIDc4MyAgICAwIC9kZXYgICAgICAgICAyMCBjcnctcnctcnctICAgIG51bGwgcncK cm9vdCAgICAgc3lzbG9nZCAgICAgIDc4MyAgICAxIC9kZXYgICAgICAgICAyMCBjcnctcnctcnct ICAgIG51bGwgcncKcm9vdCAgICAgc3lzbG9nZCAgICAgIDc4MyAgICAyIC9kZXYgICAgICAgICAy MCBjcnctcnctcnctICAgIG51bGwgcncKcm9vdCAgICAgc3lzbG9nZCAgICAgIDc4MyAgICAzIC8g ICAgICAgIDc5ODQyNzQgLXJ3LS0tLS0tLSAgICAgICAzICB3CnJvb3QgICAgIHN5c2xvZ2QgICAg ICA3ODMgICAgNCogbG9jYWwgZGdyYW0gZmZmZmZmMDAxNzNjMGQyMApyb290ICAgICBzeXNsb2dk ICAgICAgNzgzICAgIDUqIGxvY2FsIGRncmFtIGZmZmZmZjAwMTczOWY4NzAKcm9vdCAgICAgc3lz bG9nZCAgICAgIDc4MyAgICA2KiBpbnRlcm5ldDYgZGdyYW0gdWRwIGZmZmZmZjAwMTczOWEzZjAK cm9vdCAgICAgc3lzbG9nZCAgICAgIDc4MyAgICA3KiBpbnRlcm5ldCBkZ3JhbSB1ZHAgZmZmZmZm MDAxNzM5YTE1MApyb290ICAgICBzeXNsb2dkICAgICAgNzgzICAgIDggL2RldiAgICAgICAgIDMx IGNydy0tLS0tLS0gICAga2xvZyAgcgpyb290ICAgICBzeXNsb2dkICAgICAgNzgzICAgMTAgLSAg ICAgICAgIC0gICAgICAgICBiYWQgICAgLQpyb290ICAgICBzeXNsb2dkICAgICAgNzgzICAgMTEg LyAgICAgICAgNzk4NDI5NSAtcnctci0tci0tICAyMDgyNTAgIHcKcm9vdCAgICAgc3lzbG9nZCAg ICAgIDc4MyAgIDEyIC8gICAgICAgIDc5ODQyNzAgLXJ3LS0tLS0tLSAgICAgIDU1ICB3CnJvb3Qg ICAgIHN5c2xvZ2QgICAgICA3ODMgICAxMyAvICAgICAgICA3OTg0MjU4IC1ydy0tLS0tLS0gICAx MTc5NCAgdwpyb290ICAgICBzeXNsb2dkICAgICAgNzgzICAgMTQgLyAgICAgICAgNzk4NDI2NSAt cnctci0tLS0tICAgIDE5MjYgIHcKcm9vdCAgICAgc3lzbG9nZCAgICAgIDc4MyAgIDE1IC8gICAg ICAgIDc5ODQyNjMgLXJ3LXItLXItLSAgICAgIDU1ICB3CnJvb3QgICAgIHN5c2xvZ2QgICAgICA3 ODMgICAxNiAvICAgICAgICA3OTg0MjcxIC1ydy0tLS0tLS0gICAgICA1NSAgdwpyb290ICAgICBz eXNsb2dkICAgICAgNzgzICAgMTcgLyAgICAgICAgNzk4NDI1OSAtcnctLS0tLS0tICAgOTM2NTYg IHcKcm9vdCAgICAgc3lzbG9nZCAgICAgIDc4MyAgIDE4IC8gICAgICAgIDc5ODQyNjIgLXJ3LS0t LS0tLSAgICAgMTE4ICB3CnJvb3QgICAgIHN5c2xvZ2QgICAgICA3ODMgICAxOSAvICAgICAgICA3 OTg0MjY5IC1ydy1yLS0tLS0gICAgICA1NSAgdwpyb290ICAgICBkZXZkICAgICAgICAgNjM2IHJv b3QgLyAgICAgICAgICAgICAyIGRyd3hyLXhyLXggICAgMTAyNCAgcgpyb290ICAgICBkZXZkICAg ICAgICAgNjM2ICAgd2QgLyAgICAgICAgICAgICAyIGRyd3hyLXhyLXggICAgMTAyNCAgcgpyb290 
ICAgICBkZXZkICAgICAgICAgNjM2IHRleHQgLyAgICAgICAgMjQ0OTUwNSAtci14ci14ci14ICA0 NTMwNDAgIHIKcm9vdCAgICAgZGV2ZCAgICAgICAgIDYzNiAgICAwIC9kZXYgICAgICAgICAyMCBj cnctcnctcnctICAgIG51bGwgcncKcm9vdCAgICAgZGV2ZCAgICAgICAgIDYzNiAgICAxIC9kZXYg ICAgICAgICAyMCBjcnctcnctcnctICAgIG51bGwgcncKcm9vdCAgICAgZGV2ZCAgICAgICAgIDYz NiAgICAyIC9kZXYgICAgICAgICAyMCBjcnctcnctcnctICAgIG51bGwgcncKcm9vdCAgICAgZGV2 ZCAgICAgICAgIDYzNiAgICAzIC8gICAgICAgIDc3NzIyNyBkcnd4ci14ci14ICAgICA1MTIgIHIK cm9vdCAgICAgZGV2ZCAgICAgICAgIDYzNiAgICA0IC9kZXYgICAgICAgICAgNCBjcnctLS0tLS0t ICBkZXZjdGwgIHIKcm9vdCAgICAgZGV2ZCAgICAgICAgIDYzNiAgICA1KiBsb2NhbCBzdHJlYW0g ZmZmZmZmMDAxNzM5ZmMzMApyb290ICAgICBkZXZkICAgICAgICAgNjM2ICAgIDYgLyAgICAgICAg Nzk4NDI3MiAtcnctLS0tLS0tICAgICAgIDMgIHcKcm9vdCAgICAgemZza2VybiAgICAgICAzOSBy b290IC8gICAgICAgICAgICAgMiBkcnd4ci14ci14ICAgIDEwMjQgIHIKcm9vdCAgICAgemZza2Vy biAgICAgICAzOSAgIHdkIC8gICAgICAgICAgICAgMiBkcnd4ci14ci14ICAgIDEwMjQgIHIKcm9v dCAgICAgaW5pdCAgICAgICAgICAgMSByb290IC8gICAgICAgICAgICAgMiBkcnd4ci14ci14ICAg IDEwMjQgIHIKcm9vdCAgICAgaW5pdCAgICAgICAgICAgMSAgIHdkIC8gICAgICAgICAgICAgMiBk cnd4ci14ci14ICAgIDEwMjQgIHIKcm9vdCAgICAgaW5pdCAgICAgICAgICAgMSB0ZXh0IC8gICAg ICAgIDI0NDk0NDUgLXIteHIteHIteCAgNzc5ODg4ICByCnJvb3QgICAgIGtlcm5lbCAgICAgICAg IDAgcm9vdCAvICAgICAgICAgICAgIDIgZHJ3eHIteHIteCAgICAxMDI0ICByCnJvb3QgICAgIGtl cm5lbCAgICAgICAgIDAgICB3ZCAvICAgICAgICAgICAgIDIgZHJ3eHIteHIteCAgICAxMDI0ICBy CgotLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t LS0tLS0tLS0tLS0tLS0tLS0KZG1lc2cKCkNvcHlyaWdodCAoYykgMTk5Mi0yMDEwIFRoZSBGcmVl QlNEIFByb2plY3QuCkNvcHlyaWdodCAoYykgMTk3OSwgMTk4MCwgMTk4MywgMTk4NiwgMTk4OCwg MTk4OSwgMTk5MSwgMTk5MiwgMTk5MywgMTk5NAoJVGhlIFJlZ2VudHMgb2YgdGhlIFVuaXZlcnNp dHkgb2YgQ2FsaWZvcm5pYS4gQWxsIHJpZ2h0cyByZXNlcnZlZC4KRnJlZUJTRCBpcyBhIHJlZ2lz dGVyZWQgdHJhZGVtYXJrIG9mIFRoZSBGcmVlQlNEIEZvdW5kYXRpb24uCkZyZWVCU0QgOS4wLUNV UlJFTlQgIzAgcjIxMjA3NE06IFN1biBTZXAgMTIgMTg6NDg6MzYgVVRDIDIwMTAKICAgIHJvb3RA eGFuYWR1Oi91c3Ivb2JqL2hlYWRfb2xkL3N5cy9EVFJBQ0UyIGFtZDY0CldBUk5JTkc6IFdJVE5F U1Mgb3B0aW9uIGVuYWJsZWQsIGV4cGVjdCByZWR1Y2VkIHBlcmZvcm1hbmNlLgpDUFU6IEFNRCBP cHRlcm9uKHRtKSBQcm9jZXNzb3IgMjUwICgyNDExLjE2LU1IeiBLOC1jbGFzcyBDUFUpCiAgT3Jp Z2luID0gIkF1dGhlbnRpY0FNRCIgIElkID0gMHgyMGY1MSAgRmFtaWx5ID0gZiAgTW9kZWwgPSAy NSAgU3RlcHBpbmcgPSAxCiAgRmVhdHVyZXM9MHg3OGJmYmZmPEZQVSxWTUUsREUsUFNFLFRTQyxN U1IsUEFFLE1DRSxDWDgsQVBJQyxTRVAsTVRSUixQR0UsTUNBLENNT1YsUEFULFBTRTM2LENMRkxV U0gsTU1YLEZYU1IsU1NFLFNTRTI+CiAgRmVhdHVyZXMyPTB4MTxTU0UzPgogIEFNRCBGZWF0dXJl cz0weGUyNTAwODAwPFNZU0NBTEwsTlgsTU1YKyxGRlhTUixMTSwzRE5vdyErLDNETm93IT4KICBB TUQgRmVhdHVyZXMyPTB4MTxMQUhGPgpyZWFsIG1lbW9yeSAgPSAzNDM1OTczODM2OCAoMzI3Njgg TUIpCmF2YWlsIG1lbW9yeSA9IDMzNDQ2MDkyODAgKDMxODkgTUIpCkV2ZW50IHRpbWVyICJMQVBJ QyIgZnJlcXVlbmN5IDAgSHogcXVhbGl0eSA1MDAKQUNQSSBBUElDIFRhYmxlOiA8UFRMVEQgIAkg QVBJQyAgPgpGcmVlQlNEL1NNUDogTXVsdGlwcm9jZXNzb3IgU3lzdGVtIERldGVjdGVkOiAyIENQ VXMKRnJlZUJTRC9TTVA6IDIgcGFja2FnZShzKSB4IDEgY29yZShzKQogY3B1MCAoQlNQKTogQVBJ QyBJRDogIDAKIGNwdTEgKEFQKTogQVBJQyBJRDogIDEKaW9hcGljMCA8VmVyc2lvbiAxLjE+IGly cXMgMC0yMyBvbiBtb3RoZXJib2FyZAppb2FwaWMxIDxWZXJzaW9uIDEuMT4gaXJxcyAyNC0yNyBv biBtb3RoZXJib2FyZAppb2FwaWMyIDxWZXJzaW9uIDEuMT4gaXJxcyAyOC0zMSBvbiBtb3RoZXJi b2FyZAprYmQxIGF0IGtiZG11eDAKYWNwaTA6IDxQVExURCAgIFJTRFQ+IG9uIG1vdGhlcmJvYXJk CmFjcGkwOiBbSVRIUkVBRF0KYWNwaTA6IFBvd2VyIEJ1dHRvbiAoZml4ZWQpClRpbWVjb3VudGVy ICJBQ1BJLWZhc3QiIGZyZXF1ZW5jeSAzNTc5NTQ1IEh6IHF1YWxpdHkgMTAwMAphY3BpX3RpbWVy MDogPDI0LWJpdCB0aW1lciBhdCAzLjU3OTU0NU1Iej4gcG9ydCAweDgwMDgtMHg4MDBiIG9uIGFj cGkwCmNwdTA6IDxBQ1BJIENQVT4gb24gYWNwaTAKY3B1MTogPEFDUEkgQ1BVPiBvbiBhY3BpMAph 
Y3BpX2J1dHRvbjA6IDxQb3dlciBCdXR0b24+IG9uIGFjcGkwCnBjaWIwOiA8QUNQSSBIb3N0LVBD SSBicmlkZ2U+IHBvcnQgMHhjZjgtMHhjZmYgb24gYWNwaTAKcGNpMDogPEFDUEkgUENJIGJ1cz4g b24gcGNpYjAKcGNpMDogPG1lbW9yeT4gYXQgZGV2aWNlIDAuMCAobm8gZHJpdmVyIGF0dGFjaGVk KQppc2FiMDogPFBDSS1JU0EgYnJpZGdlPiBhdCBkZXZpY2UgMS4wIG9uIHBjaTAKaXNhMDogPElT QSBidXM+IG9uIGlzYWIwCnBjaTA6IDxzZXJpYWwgYnVzLCBTTUJ1cz4gYXQgZGV2aWNlIDEuMSAo bm8gZHJpdmVyIGF0dGFjaGVkKQpvaGNpMDogPE9IQ0kgKGdlbmVyaWMpIFVTQiBjb250cm9sbGVy PiBtZW0gMHhkZDAwMDAwMC0weGRkMDAwZmZmIGlycSAyMCBhdCBkZXZpY2UgMi4wIG9uIHBjaTAK b2hjaTA6IFtJVEhSRUFEXQp1c2J1czA6IDxPSENJIChnZW5lcmljKSBVU0IgY29udHJvbGxlcj4g b24gb2hjaTAKZWhjaTA6IDxOVklESUEgbkZvcmNlNCBVU0IgMi4wIGNvbnRyb2xsZXI+IG1lbSAw eGRkMDAxMDAwLTB4ZGQwMDEwZmYgaXJxIDIxIGF0IGRldmljZSAyLjEgb24gcGNpMAplaGNpMDog W0lUSFJFQURdCnVzYnVzMTogRUhDSSB2ZXJzaW9uIDEuMAp1c2J1czE6IDxOVklESUEgbkZvcmNl NCBVU0IgMi4wIGNvbnRyb2xsZXI+IG9uIGVoY2kwCmF0YXBjaTA6IDxuVmlkaWEgbkZvcmNlIENL ODA0IFVETUExMzMgY29udHJvbGxlcj4gcG9ydCAweDFmMC0weDFmNywweDNmNiwweDE3MC0weDE3 NywweDM3NiwweDE0MDAtMHgxNDBmIGF0IGRldmljZSA2LjAgb24gcGNpMAphdGEwOiA8QVRBIGNo YW5uZWwgMD4gb24gYXRhcGNpMAphdGEwOiBbSVRIUkVBRF0KYXRhMTogPEFUQSBjaGFubmVsIDE+ IG9uIGF0YXBjaTAKYXRhMTogW0lUSFJFQURdCmF0YXBjaTE6IDxuVmlkaWEgbkZvcmNlIENLODA0 IFNBVEEzMDAgY29udHJvbGxlcj4gcG9ydCAweDE0NDAtMHgxNDQ3LDB4MTQzNC0weDE0MzcsMHgx NDM4LTB4MTQzZiwweDE0MzAtMHgxNDMzLDB4MTQxMC0weDE0MWYgbWVtIDB4ZGQwMDIwMDAtMHhk ZDAwMmZmZiBpcnEgMjIgYXQgZGV2aWNlIDcuMCBvbiBwY2kwCmF0YXBjaTE6IFtJVEhSRUFEXQph dGEyOiA8QVRBIGNoYW5uZWwgMD4gb24gYXRhcGNpMQphdGEyOiBbSVRIUkVBRF0KYXRhMzogPEFU QSBjaGFubmVsIDE+IG9uIGF0YXBjaTEKYXRhMzogW0lUSFJFQURdCmF0YXBjaTI6IDxuVmlkaWEg bkZvcmNlIENLODA0IFNBVEEzMDAgY29udHJvbGxlcj4gcG9ydCAweDE0NTgtMHgxNDVmLDB4MTQ0 Yy0weDE0NGYsMHgxNDUwLTB4MTQ1NywweDE0NDgtMHgxNDRiLDB4MTQyMC0weDE0MmYgbWVtIDB4 ZGQwMDMwMDAtMHhkZDAwM2ZmZiBpcnEgMjMgYXQgZGV2aWNlIDguMCBvbiBwY2kwCmF0YXBjaTI6 IFtJVEhSRUFEXQphdGE0OiA8QVRBIGNoYW5uZWwgMD4gb24gYXRhcGNpMgphdGE0OiBbSVRIUkVB RF0KYXRhNTogPEFUQSBjaGFubmVsIDE+IG9uIGF0YXBjaTIKYXRhNTogW0lUSFJFQURdCnBjaWIx OiA8QUNQSSBQQ0ktUENJIGJyaWRnZT4gYXQgZGV2aWNlIDkuMCBvbiBwY2kwCnBjaTE6IDxBQ1BJ IFBDSSBidXM+IG9uIHBjaWIxCnZnYXBjaTA6IDxWR0EtY29tcGF0aWJsZSBkaXNwbGF5PiBwb3J0 IDB4MjAwMC0weDIwZmYgbWVtIDB4ZGUwMDAwMDAtMHhkZWZmZmZmZiwweGRkMTAwMDAwLTB4ZGQx MDBmZmYgaXJxIDE2IGF0IGRldmljZSA3LjAgb24gcGNpMQpwY2liMjogPEFDUEkgUENJLVBDSSBi cmlkZ2U+IGF0IGRldmljZSAxNC4wIG9uIHBjaTAKcGNpMjogPEFDUEkgUENJIGJ1cz4gb24gcGNp YjIKcGNpYjM6IDxBQ1BJIEhvc3QtUENJIGJyaWRnZT4gcG9ydCAweGNmOC0weGNmZiBvbiBhY3Bp MApwY2k4OiA8QUNQSSBQQ0kgYnVzPiBvbiBwY2liMwpwY2liNDogPEFDUEkgUENJLVBDSSBicmlk Z2U+IGF0IGRldmljZSAxMC4wIG9uIHBjaTgKcGNpOTogPEFDUEkgUENJIGJ1cz4gb24gcGNpYjQK cGNpYjU6IDxBQ1BJIFBDSS1QQ0kgYnJpZGdlPiBhdCBkZXZpY2UgMTEuMCBvbiBwY2k4CnBjaTEw OiA8QUNQSSBQQ0kgYnVzPiBvbiBwY2liNQpiZ2UwOiA8QnJvYWRjb20gR2lnYWJpdCBFdGhlcm5l dCBDb250cm9sbGVyLCBBU0lDIHJldi4gMHgwMDIwMDM+IG1lbSAweGRmMzEwMDAwLTB4ZGYzMWZm ZmYsMHhkZjMwMDAwMC0weGRmMzBmZmZmIGlycSAyOCBhdCBkZXZpY2UgOS4wIG9uIHBjaTEwCm1p aWJ1czA6IDxNSUkgYnVzPiBvbiBiZ2UwCmJyZ3BoeTA6IDxCQ001NzA0IDEwLzEwMC8xMDAwYmFz ZVRYIFBIWT4gUEhZIDEgb24gbWlpYnVzMApicmdwaHkwOiAgMTBiYXNlVCwgMTBiYXNlVC1GRFgs IDEwMGJhc2VUWCwgMTAwYmFzZVRYLUZEWCwgMTAwMGJhc2VULCAxMDAwYmFzZVQtRkRYLCBhdXRv CmJnZTA6IEV0aGVybmV0IGFkZHJlc3M6IDAwOmUwOjgxOjQwOjI5OmQyCmJnZTA6IFtJVEhSRUFE XQpiZ2UxOiA8QnJvYWRjb20gR2lnYWJpdCBFdGhlcm5ldCBDb250cm9sbGVyLCBBU0lDIHJldi4g MHgwMDIwMDM+IG1lbSAweGRmMzMwMDAwLTB4ZGYzM2ZmZmYsMHhkZjMyMDAwMC0weGRmMzJmZmZm IGlycSAyOSBhdCBkZXZpY2UgOS4xIG9uIHBjaTEwCm1paWJ1czE6IDxNSUkgYnVzPiBvbiBiZ2Ux CmJyZ3BoeTE6IDxCQ001NzA0IDEwLzEwMC8xMDAwYmFzZVRYIFBIWT4gUEhZIDEgb24gbWlpYnVz 
MQpicmdwaHkxOiAgMTBiYXNlVCwgMTBiYXNlVC1GRFgsIDEwMGJhc2VUWCwgMTAwYmFzZVRYLUZE WCwgMTAwMGJhc2VULCAxMDAwYmFzZVQtRkRYLCBhdXRvCmJnZTE6IEV0aGVybmV0IGFkZHJlc3M6 IDAwOmUwOjgxOjQwOjI5OmQzCmJnZTE6IFtJVEhSRUFEXQphdHRpbWVyMDogPEFUIHRpbWVyPiBw b3J0IDB4NDAtMHg0MyBpcnEgMCBvbiBhY3BpMApUaW1lY291bnRlciAiaTgyNTQiIGZyZXF1ZW5j eSAxMTkzMTgyIEh6IHF1YWxpdHkgMAphdHRpbWVyMDogQ2FuJ3QgbWFwIGludGVycnVwdC4KYXRy dGMwOiA8QVQgcmVhbHRpbWUgY2xvY2s+IHBvcnQgMHg3MC0weDcxIGlycSA4IG9uIGFjcGkwCmF0 cnRjMDogW0ZJTFRFUl0KRXZlbnQgdGltZXIgIlJUQyIgZnJlcXVlbmN5IDMyNzY4IEh6IHF1YWxp dHkgMAphdGtiZGMwOiA8S2V5Ym9hcmQgY29udHJvbGxlciAoaTgwNDIpPiBwb3J0IDB4NjAsMHg2 NCBpcnEgMSBvbiBhY3BpMAphdGtiZDA6IDxBVCBLZXlib2FyZD4gaXJxIDEgb24gYXRrYmRjMApr YmQwIGF0IGF0a2JkMAphdGtiZDA6IFtHSUFOVC1MT0NLRURdCmF0a2JkMDogW0lUSFJFQURdCnVh cnQwOiA8MTY1NTAgb3IgY29tcGF0aWJsZT4gcG9ydCAweDNmOC0weDNmZiBpcnEgNCBmbGFncyAw eDEwIG9uIGFjcGkwCnVhcnQwOiBbRklMVEVSXQp1YXJ0MTogPDE2NTUwIG9yIGNvbXBhdGlibGU+ IHBvcnQgMHgyZjgtMHgyZmYgaXJxIDMgb24gYWNwaTAKdWFydDE6IFtGSUxURVJdCnBwYzE6IDxQ YXJhbGxlbCBwb3J0PiBwb3J0IDB4Mjc4LTB4MjdmIGlycSA1IG9uIGFjcGkwCnBwYzE6IEdlbmVy aWMgY2hpcHNldCAoTklCQkxFLW9ubHkpIGluIENPTVBBVElCTEUgbW9kZQpwcGMxOiBbSVRIUkVB RF0KcHBidXMwOiA8UGFyYWxsZWwgcG9ydCBidXM+IG9uIHBwYzEKcGxpcDA6IDxQTElQIG5ldHdv cmsgaW50ZXJmYWNlPiBvbiBwcGJ1czAKcGxpcDA6IFtJVEhSRUFEXQpscHQwOiA8UHJpbnRlcj4g b24gcHBidXMwCmxwdDA6IFtJVEhSRUFEXQpscHQwOiBJbnRlcnJ1cHQtZHJpdmVuIHBvcnQKcHBp MDogPFBhcmFsbGVsIEkvTz4gb24gcHBidXMwCm9ybTA6IDxJU0EgT3B0aW9uIFJPTXM+IGF0IGlv bWVtIDB4YzAwMDAtMHhjN2ZmZiwweGM4MDAwLTB4Yzk3ZmYsMHhjOTgwMC0weGNhZmZmIG9uIGlz YTAKc2MwOiA8U3lzdGVtIGNvbnNvbGU+IGF0IGZsYWdzIDB4MTAwIG9uIGlzYTAKc2MwOiBWR0Eg PDE2IHZpcnR1YWwgY29uc29sZXMsIGZsYWdzPTB4MzAwPgp2Z2EwOiA8R2VuZXJpYyBJU0EgVkdB PiBhdCBwb3J0IDB4M2MwLTB4M2RmIGlvbWVtIDB4YTAwMDAtMHhiZmZmZiBvbiBpc2EwCnBwYzA6 IGNhbm5vdCByZXNlcnZlIEkvTyBwb3J0IHJhbmdlCnBvd2Vybm93MDogPENvb2xgbidRdWlldCBL OD4gb24gY3B1MApwb3dlcm5vdzE6IDxDb29sYG4nUXVpZXQgSzg+IG9uIGNwdTEKU3RhcnRpbmcg a2VybmVsIGV2ZW50IHRpbWVyczogTEFQSUMgQCAxMDAwSHosIFJUQyBAIDEyOEh6ClRpbWVjb3Vu dGVycyB0aWNrIGV2ZXJ5IDEuMDAwIG1zZWMKdXNidXMwOiAxMk1icHMgRnVsbCBTcGVlZCBVU0Ig djEuMAp1c2J1czE6IDQ4ME1icHMgSGlnaCBTcGVlZCBVU0IgdjIuMAphZDA6IDIzODQ3NU1CIDxT ZWFnYXRlIFNUMzI1MDgyM0EgMy4wMz4gYXQgYXRhMC1tYXN0ZXIgVURNQTEwMCAKdWdlbjAuMTog PG5WaWRpYT4gYXQgdXNidXMwCnVodWIwOiA8blZpZGlhIE9IQ0kgcm9vdCBIVUIsIGNsYXNzIDkv MCwgcmV2IDEuMDAvMS4wMCwgYWRkciAxPiBvbiB1c2J1czAKdWdlbjEuMTogPG5WaWRpYT4gYXQg dXNidXMxCnVodWIxOiA8blZpZGlhIEVIQ0kgcm9vdCBIVUIsIGNsYXNzIDkvMCwgcmV2IDIuMDAv MS4wMCwgYWRkciAxPiBvbiB1c2J1czEKYWQ0OiAxNDMwNzk5TUIgPFdEQyBXRDE1RUFSUy0wMFo1 QjEgODAuMDBBODA+IGF0IGF0YTItbWFzdGVyIFVETUExMDAgU0FUQSAzR2Ivcwp1aHViMDogMTAg cG9ydHMgd2l0aCAxMCByZW1vdmFibGUsIHNlbGYgcG93ZXJlZAphZDY6IDE0MzA3OTlNQiA8U2Vh Z2F0ZSBTVDMxNTAwMzQxQVMgQ0MxSD4gYXQgYXRhMy1tYXN0ZXIgVURNQTEwMCBTQVRBIDNHYi9z CmFkODogMTkwNzcyOU1CIDxIaXRhY2hpIEhEUzcyMjAyMEFMQTMzMCBKS0FPQTNFQT4gYXQgYXRh NC1tYXN0ZXIgVURNQTEwMCBTQVRBIDNHYi9zCmFkMTA6IDE5MDc3MjlNQiA8SGl0YWNoaSBIRFM3 MjIwMjBBTEEzMzAgSktBT0EyME4+IGF0IGF0YTUtbWFzdGVyIFVETUExMDAgU0FUQSAzR2IvcwpT TVA6IEFQIENQVSAjMSBMYXVuY2hlZCEKV0FSTklORzogV0lUTkVTUyBvcHRpb24gZW5hYmxlZCwg ZXhwZWN0IHJlZHVjZWQgcGVyZm9ybWFuY2UuClJvb3QgbW91bnQgd2FpdGluZyBmb3I6IHVzYnVz MQpSb290IG1vdW50IHdhaXRpbmcgZm9yOiB1c2J1czEKdWh1YjE6IDEwIHBvcnRzIHdpdGggMTAg cmVtb3ZhYmxlLCBzZWxmIHBvd2VyZWQKVHJ5aW5nIHRvIG1vdW50IHJvb3QgZnJvbSB1ZnM6L2Rl di9hZDBzMWEKU2V0dGluZyBob3N0dXVpZDogNWMwNjA0NjMtYmFmMS0xMWRmLTg1MzYtMDBlMDgx NDAyOWQyLgpTZXR0aW5nIGhvc3RpZDogMHgzOGRlOWQ0ZS4KWkZTIE5PVElDRTogUHJlZmV0Y2gg aXMgZGlzYWJsZWQgYnkgZGVmYXVsdCBpZiBsZXNzIHRoYW4gNEdCIG9mIFJBTSBpcyBwcmVzZW50 
OwogICAgICAgICAgICB0byBlbmFibGUsIGFkZCAidmZzLnpmcy5wcmVmZXRjaF9kaXNhYmxlPTAi IHRvIC9ib290L2xvYWRlci5jb25mLgpaRlMgZmlsZXN5c3RlbSB2ZXJzaW9uIDUKWkZTIHN0b3Jh Z2UgcG9vbCB2ZXJzaW9uIDI4CnVucmVjb2duaXplZCBjb21tYW5kICd2b2xpbml0Jwp1c2FnZTog emZzIGNvbW1hbmQgYXJncyAuLi4Kd2hlcmUgJ2NvbW1hbmQnIGlzIG9uZSBvZiB0aGUgZm9sbG93 aW5nOgoKCWNyZWF0ZSBbLXBdIFstbyBwcm9wZXJ0eT12YWx1ZV0gLi4uIDxmaWxlc3lzdGVtPgoJ Y3JlYXRlIFstcHNdIFstYiBibG9ja3NpemVdIFstbyBwcm9wZXJ0eT12YWx1ZV0gLi4uIC1WIDxz aXplPiA8dm9sdW1lPgoJZGVzdHJveSBbLXJSZl0gPGZpbGVzeXN0ZW18dm9sdW1lPgoJZGVzdHJv eSBbLXJSZF0gPHNuYXBzaG90PgoKCXNuYXBzaG90IFstcl0gWy1vIHByb3BlcnR5PXZhbHVlXSAu Li4gPGZpbGVzeXN0ZW1Ac25hcG5hbWV8dm9sdW1lQHNuYXBuYW1lPgoJcm9sbGJhY2sgWy1yUmZd IDxzbmFwc2hvdD4KCWNsb25lIFstcF0gWy1vIHByb3BlcnR5PXZhbHVlXSAuLi4gPHNuYXBzaG90 PiA8ZmlsZXN5c3RlbXx2b2x1bWU+Cglwcm9tb3RlIDxjbG9uZS1maWxlc3lzdGVtPgoJcmVuYW1l IDxmaWxlc3lzdGVtfHZvbHVtZXxzbmFwc2hvdD4gPGZpbGVzeXN0ZW18dm9sdW1lfHNuYXBzaG90 PgoJcmVuYW1lIC1wIDxmaWxlc3lzdGVtfHZvbHVtZT4gPGZpbGVzeXN0ZW18dm9sdW1lPgoJcmVu YW1lIC1yIDxzbmFwc2hvdD4gPHNuYXBzaG90PgoKCWxpc3QgWy1ySF1bLWQgbWF4XSBbLW8gcHJv cGVydHlbLC4uLl1dIFstdCB0eXBlWywuLi5dXSBbLXMgcHJvcGVydHldIC4uLgoJICAgIFstUyBw cm9wZXJ0eV0gLi4uIFtmaWxlc3lzdGVtfHZvbHVtZXxzbmFwc2hvdF0gLi4uCgoJc2V0IDxwcm9w ZXJ0eT12YWx1ZT4gPGZpbGVzeXN0ZW18dm9sdW1lfHNuYXBzaG90PiAuLi4KCWdldCBbLXJIcF0g Wy1kIG1heF0gWy1vICJhbGwiIHwgZmllbGRbLC4uLl1dIFstcyBzb3VyY2VbLC4uLl1dCgkgICAg PCJhbGwiIHwgcHJvcGVydHlbLC4uLl0+IFtmaWxlc3lzdGVtfHZvbHVtZXxzbmFwc2hvdF0gLi4u Cglpbmhlcml0IFstclNdIDxwcm9wZXJ0eT4gPGZpbGVzeXN0ZW18dm9sdW1lfHNuYXBzaG90PiAu Li4KCXVwZ3JhZGUgWy12XQoJdXBncmFkZSBbLXJdIFstViB2ZXJzaW9uXSA8LWEgfCBmaWxlc3lz dGVtIC4uLj4KCXVzZXJzcGFjZSBbLWhuaUhwXSBbLW8gZmllbGRbLC4uLl1dIFstc1MgZmllbGRd IC4uLiBbLXQgdHlwZVssLi4uXV0KCSAgICA8ZmlsZXN5c3RlbXxzbmFwc2hvdD4KCWdyb3Vwc3Bh Y2UgWy1obmlIcFVdIFstbyBmaWVsZFssLi4uXV0gWy1zUyBmaWVsZF0gLi4uIFstdCB0eXBlWywu Li5dXQoJICAgIDxmaWxlc3lzdGVtfHNuYXBzaG90PgoKCW1vdW50Cgltb3VudCBbLXZPXSBbLW8g b3B0c10gPC1hIHwgZmlsZXN5c3RlbT4KCXVubW91bnQgWy1mXSA8LWEgfCBmaWxlc3lzdGVtfG1v dW50cG9pbnQ+CglzaGFyZSA8LWEgfCBmaWxlc3lzdGVtPgoJdW5zaGFyZSA8LWEgfCBmaWxlc3lz dGVtfG1vdW50cG9pbnQ+CgoJc2VuZCBbLVJEcF0gWy1baUldIHNuYXBzaG90XSA8c25hcHNob3Q+ CglyZWNlaXZlIFstdm5GdV0gPGZpbGVzeXN0ZW18dm9sdW1lfHNuYXBzaG90PgoJcmVjZWl2ZSBb LXZuRnVdIFstZCB8IC1lXSA8ZmlsZXN5c3RlbT4KCglhbGxvdyA8ZmlsZXN5c3RlbXx2b2x1bWU+ CglhbGxvdyBbLWxkdWddIDwiZXZlcnlvbmUifHVzZXJ8Z3JvdXA+WywuLi5dIDxwZXJtfEBzZXRu YW1lPlssLi4uXQoJICAgIDxmaWxlc3lzdGVtfHZvbHVtZT4KCWFsbG93IFstbGRdIC1lIDxwZXJt fEBzZXRuYW1lPlssLi4uXSA8ZmlsZXN5c3RlbXx2b2x1bWU+CglhbGxvdyAtYyA8cGVybXxAc2V0 bmFtZT5bLC4uLl0gPGZpbGVzeXN0ZW18dm9sdW1lPgoJYWxsb3cgLXMgQHNldG5hbWUgPHBlcm18 QHNldG5hbWU+WywuLi5dIDxmaWxlc3lzdGVtfHZvbHVtZT4KCgl1bmFsbG93IFstcmxkdWddIDwi ZXZlcnlvbmUifHVzZXJ8Z3JvdXA+WywuLi5dCgkgICAgWzxwZXJtfEBzZXRuYW1lPlssLi4uXV0g PGZpbGVzeXN0ZW18dm9sdW1lPgoJdW5hbGxvdyBbLXJsZF0gLWUgWzxwZXJtfEBzZXRuYW1lPlss Li4uXV0gPGZpbGVzeXN0ZW18dm9sdW1lPgoJdW5hbGxvdyBbLXJdIC1jIFs8cGVybXxAc2V0bmFt ZT5bLC4uLl1dIDxmaWxlc3lzdGVtfHZvbHVtZT4KCXVuYWxsb3cgWy1yXSAtcyBAc2V0bmFtZSBb PHBlcm18QHNldG5hbWU+WywuLi5dXSA8ZmlsZXN5c3RlbXx2b2x1bWU+CgoJaG9sZCBbLXJdIDx0 YWc+IDxzbmFwc2hvdD4gLi4uCglob2xkcyBbLXJdIDxzbmFwc2hvdD4gLi4uCglyZWxlYXNlIFst cl0gPHRhZz4gPHNuYXBzaG90PiAuLi4KCWRpZmYgWy1GSHRdIDxzbmFwc2hvdD4gW3NuYXBzaG90 fGZpbGVzeXN0ZW1dCgoJamFpbCA8amFpbGlkPiA8ZmlsZXN5c3RlbT4KCXVuamFpbCA8amFpbGlk PiA8ZmlsZXN5c3RlbT4KCkVhY2ggZGF0YXNldCBpcyBvZiB0aGUgZm9ybTogcG9vbC9bZGF0YXNl dC9dKmRhdGFzZXRbQG5hbWVdCgpGb3IgdGhlIHByb3BlcnR5IGxpc3QsIHJ1bjogemZzIHNldHxn ZXQKCkZvciB0aGUgZGVsZWdhdGVkIHBlcm1pc3Npb24gbGlzdCwgcnVuOiB6ZnMgYWxsb3d8dW5h 
bGxvdwpsb2NrIG9yZGVyIHJldmVyc2FsOgogMXN0IDB4ZmZmZmZmMDAwMmNhOWE5OCBkYi0+ZGJf bXR4IChkYi0+ZGJfbXR4KSBAIC9oZWFkX29sZC9zeXMvbW9kdWxlcy96ZnMvLi4vLi4vY2RkbC9j b250cmliL29wZW5zb2xhcmlzL3V0cy9jb21tb24vZnMvemZzL2RidWYuYzoyMDA5CiAybmQgMHhm ZmZmZmYwMDAyY2I0MGU4IGRuLT5kbl9tdHggKGRuLT5kbl9tdHgpIEAgL2hlYWRfb2xkL3N5cy9t b2R1bGVzL3pmcy8uLi8uLi9jZGRsL2NvbnRyaWIvb3BlbnNvbGFyaXMvdXRzL2NvbW1vbi9mcy96 ZnMvZG5vZGUuYzoxMTc0CktEQjogc3RhY2sgYmFja3RyYWNlOgpkYl90cmFjZV9zZWxmX3dyYXBw ZXIoKSBhdCBkYl90cmFjZV9zZWxmX3dyYXBwZXIrMHgyYQpfd2l0bmVzc19kZWJ1Z2dlcigpIGF0 IF93aXRuZXNzX2RlYnVnZ2VyKzB4MmUKd2l0bmVzc19jaGVja29yZGVyKCkgYXQgd2l0bmVzc19j aGVja29yZGVyKzB4ODA3Cl9zeF94bG9jaygpIGF0IF9zeF94bG9jaysweDU1CmRub2RlX3JlbGUo KSBhdCBkbm9kZV9yZWxlKzB4NGUKZHNsX2RlYWRsaXN0X2Nsb3NlKCkgYXQgZHNsX2RlYWRsaXN0 X2Nsb3NlKzB4NDcKZHNsX2RhdGFzZXRfZXZpY3QoKSBhdCBkc2xfZGF0YXNldF9ldmljdCsweDdk CmRidWZfZXZpY3RfdXNlcigpIGF0IGRidWZfZXZpY3RfdXNlcisweDU1CmRidWZfcmVsZV9hbmRf dW5sb2NrKCkgYXQgZGJ1Zl9yZWxlX2FuZF91bmxvY2srMHgxNTQKZHNsX3Bvb2xfb3BlbigpIGF0 IGRzbF9wb29sX29wZW4rMHgxYjUKc3BhX2xvYWQoKSBhdCBzcGFfbG9hZCsweDU0ZgpzcGFfbG9h ZF9iZXN0KCkgYXQgc3BhX2xvYWRfYmVzdCsweDUyCnNwYV9vcGVuX2NvbW1vbigpIGF0IHNwYV9v cGVuX2NvbW1vbisweDE0YQpwb29sX3N0YXR1c19jaGVjaygpIGF0IHBvb2xfc3RhdHVzX2NoZWNr KzB4MjEKemZzZGV2X2lvY3RsKCkgYXQgemZzZGV2X2lvY3RsKzB4MTIxCmRldmZzX2lvY3RsX2Yo KSBhdCBkZXZmc19pb2N0bF9mKzB4N2EKa2Vybl9pb2N0bCgpIGF0IGtlcm5faW9jdGwrMHhiZQpp b2N0bCgpIGF0IGlvY3RsKzB4ZmQKc3lzY2FsbGVudGVyKCkgYXQgc3lzY2FsbGVudGVyKzB4MWNi CnN5c2NhbGwoKSBhdCBzeXNjYWxsKzB4NGMKWGZhc3Rfc3lzY2FsbCgpIGF0IFhmYXN0X3N5c2Nh bGwrMHhlMgotLS0gc3lzY2FsbCAoNTQsIEZyZWVCU0QgRUxGNjQsIGlvY3RsKSwgcmlwID0gMHg4 MDExMDI2OGMsIHJzcCA9IDB4N2ZmZmZmZmZjY2Q4LCByYnAgPSAweDQwMDAgLS0tCmxvY2sgb3Jk ZXIgcmV2ZXJzYWw6CiAxc3QgMHhmZmZmZmYwMDAyY2E5MDU4IGRiLT5kYl9tdHggKGRiLT5kYl9t dHgpIEAgL2hlYWRfb2xkL3N5cy9tb2R1bGVzL3pmcy8uLi8uLi9jZGRsL2NvbnRyaWIvb3BlbnNv bGFyaXMvdXRzL2NvbW1vbi9mcy96ZnMvZG5vZGVfc3luYy5jOjM5NgogMm5kIDB4ZmZmZmZmMDAw MmI0NzY2MCBvcy0+b3NfbG9jayAob3MtPm9zX2xvY2spIEAgL2hlYWRfb2xkL3N5cy9tb2R1bGVz L3pmcy8uLi8uLi9jZGRsL2NvbnRyaWIvb3BlbnNvbGFyaXMvdXRzL2NvbW1vbi9mcy96ZnMvZG5v ZGUuYzo0MzkKS0RCOiBzdGFjayBiYWNrdHJhY2U6CmRiX3RyYWNlX3NlbGZfd3JhcHBlcigpIGF0 IGRiX3RyYWNlX3NlbGZfd3JhcHBlcisweDJhCl93aXRuZXNzX2RlYnVnZ2VyKCkgYXQgX3dpdG5l c3NfZGVidWdnZXIrMHgyZQp3aXRuZXNzX2NoZWNrb3JkZXIoKSBhdCB3aXRuZXNzX2NoZWNrb3Jk ZXIrMHg4MDcKX3N4X3hsb2NrKCkgYXQgX3N4X3hsb2NrKzB4NTUKZG5vZGVfZGVzdHJveSgpIGF0 IGRub2RlX2Rlc3Ryb3krMHgzZQpkbm9kZV9idWZfcGFnZW91dCgpIGF0IGRub2RlX2J1Zl9wYWdl b3V0KzB4OWQKZGJ1Zl9ldmljdF91c2VyKCkgYXQgZGJ1Zl9ldmljdF91c2VyKzB4NTUKZGJ1Zl9j bGVhcigpIGF0IGRidWZfY2xlYXIrMHg1OApkbm9kZV9ldmljdF9kYnVmcygpIGF0IGRub2RlX2V2 aWN0X2RidWZzKzB4OTgKZG11X29ianNldF9ldmljdF9kYnVmcygpIGF0IGRtdV9vYmpzZXRfZXZp Y3RfZGJ1ZnMrMHgxMWMKZG11X29ianNldF9ldmljdCgpIGF0IGRtdV9vYmpzZXRfZXZpY3QrMHhh Mwpkc2xfcG9vbF9jbG9zZSgpIGF0IGRzbF9wb29sX2Nsb3NlKzB4NmEKc3BhX3VubG9hZCgpIGF0 IHNwYV91bmxvYWQrMHg3OQpzcGFfbG9hZCgpIGF0IHNwYV9sb2FkKzB4NjJkCnNwYV9sb2FkX2Jl c3QoKSBhdCBzcGFfbG9hZF9iZXN0KzB4NTIKc3BhX29wZW5fY29tbW9uKCkgYXQgc3BhX29wZW5f Y29tbW9uKzB4MTRhCnBvb2xfc3RhdHVzX2NoZWNrKCkgYXQgcG9vbF9zdGF0dXNfY2hlY2srMHgy MQp6ZnNkZXZfaW9jdGwoKSBhdCB6ZnNkZXZfaW9jdGwrMHgxMjEKZGV2ZnNfaW9jdGxfZigpIGF0 IGRldmZzX2lvY3RsX2YrMHg3YQprZXJuX2lvY3RsKCkgYXQga2Vybl9pb2N0bCsweGJlCmlvY3Rs KCkgYXQgaW9jdGwrMHhmZApzeXNjYWxsZW50ZXIoKSBhdCBzeXNjYWxsZW50ZXIrMHgxY2IKc3lz Y2FsbCgpIGF0IHN5c2NhbGwrMHg0YwpYZmFzdF9zeXNjYWxsKCkgYXQgWGZhc3Rfc3lzY2FsbCsw eGUyCi0tLSBzeXNjYWxsICg1NCwgRnJlZUJTRCBFTEY2NCwgaW9jdGwpLCByaXAgPSAweDgwMTEw MjY4YywgcnNwID0gMHg3ZmZmZmZmZmNjZDgsIHJicCA9IDB4NDAwMCAtLS0KRW50cm9weSBoYXJ2 
ZXN0aW5nOgogaW50ZXJydXB0cwogZXRoZXJuZXQKIHBvaW50X3RvX3BvaW50CiBraWNrc3RhcnQK LgpTdGFydGluZyBmaWxlIHN5c3RlbSBjaGVja3M6Ci9kZXYvYWQwczFhOiBGSUxFIFNZU1RFTSBD TEVBTjsgU0tJUFBJTkcgQ0hFQ0tTCi9kZXYvYWQwczFhOiBjbGVhbiwgMTA3Njk4MDI3IGZyZWUg KDcwNDkxIGZyYWdzLCAxMzQ1MzQ0MiBibG9ja3MsIDAuMSUgZnJhZ21lbnRhdGlvbikKTW91bnRp bmcgbG9jYWwgZmlsZSBzeXN0ZW1zOgouCmxvY2sgb3JkZXIgcmV2ZXJzYWw6CiAxc3QgMHhmZmZm ZmYwMDAyZTJlMTAwIHNhLT5zYV9sb2NrIChzYS0+c2FfbG9jaykgQCAvaGVhZF9vbGQvc3lzL21v ZHVsZXMvemZzLy4uLy4uL2NkZGwvY29udHJpYi9vcGVuc29sYXJpcy91dHMvY29tbW9uL2ZzL3pm cy9zYS5jOjk5NQogMm5kIDB4ZmZmZmZmMDAwMmRmNzQyOCBkbi0+ZG5fc3RydWN0X3J3bG9jayAo ZG4tPmRuX3N0cnVjdF9yd2xvY2spIEAgL2hlYWRfb2xkL3N5cy9tb2R1bGVzL3pmcy8uLi8uLi9j ZGRsL2NvbnRyaWIvb3BlbnNvbGFyaXMvdXRzL2NvbW1vbi9mcy96ZnMvZG5vZGUuYzoyMDUKS0RC OiBzdGFjayBiYWNrdHJhY2U6CmRiX3RyYWNlX3NlbGZfd3JhcHBlcigpIGF0IGRiX3RyYWNlX3Nl bGZfd3JhcHBlcisweDJhCl93aXRuZXNzX2RlYnVnZ2VyKCkgYXQgX3dpdG5lc3NfZGVidWdnZXIr MHgyZQp3aXRuZXNzX2NoZWNrb3JkZXIoKSBhdCB3aXRuZXNzX2NoZWNrb3JkZXIrMHg4MDcKX3N4 X3Nsb2NrKCkgYXQgX3N4X3Nsb2NrKzB4NTQKZG5vZGVfdmVyaWZ5KCkgYXQgZG5vZGVfdmVyaWZ5 KzB4N2MKZG5vZGVfaG9sZF9pbXBsKCkgYXQgZG5vZGVfaG9sZF9pbXBsKzB4OTQKZG11X2J1Zl9o b2xkKCkgYXQgZG11X2J1Zl9ob2xkKzB4NDgKemFwX2xvY2tkaXIoKSBhdCB6YXBfbG9ja2Rpcisw eDc0CnphcF9sb29rdXBfbm9ybSgpIGF0IHphcF9sb29rdXBfbm9ybSsweDQ1CnphcF9sb29rdXAo KSBhdCB6YXBfbG9va3VwKzB4MmUKc2Ffc2V0dXAoKSBhdCBzYV9zZXR1cCsweDIxNAp6ZnN2ZnNf Y3JlYXRlKCkgYXQgemZzdmZzX2NyZWF0ZSsweDIxOAp6ZnNfbW91bnQoKSBhdCB6ZnNfbW91bnQr MHhlNgp2ZnNfZG9ubW91bnQoKSBhdCB2ZnNfZG9ubW91bnQrMHhjZGUKbm1vdW50KCkgYXQgbm1v dW50KzB4NjMKc3lzY2FsbGVudGVyKCkgYXQgc3lzY2FsbGVudGVyKzB4MWNiCnN5c2NhbGwoKSBh dCBzeXNjYWxsKzB4NGMKWGZhc3Rfc3lzY2FsbCgpIGF0IFhmYXN0X3N5c2NhbGwrMHhlMgotLS0g c3lzY2FsbCAoMzc4LCBGcmVlQlNEIEVMRjY0LCBubW91bnQpLCByaXAgPSAweDgwMTA2MWQ0Yywg cnNwID0gMHg3ZmZmZmZmZmNjZTgsIHJicCA9IDB4ODAxODVmMTBjIC0tLQpsb2NrIG9yZGVyIHJl dmVyc2FsOgogMXN0IDB4ZmZmZmZmMDAwMmUyZTEwMCBzYS0+c2FfbG9jayAoc2EtPnNhX2xvY2sp IEAgL2hlYWRfb2xkL3N5cy9tb2R1bGVzL3pmcy8uLi8uLi9jZGRsL2NvbnRyaWIvb3BlbnNvbGFy aXMvdXRzL2NvbW1vbi9mcy96ZnMvc2EuYzo5OTUKIDJuZCAweGZmZmZmZjAwMDJjYzRhNjAgb3Mt Pm9zX2xvY2sgKG9zLT5vc19sb2NrKSBAIC9oZWFkX29sZC9zeXMvbW9kdWxlcy96ZnMvLi4vLi4v Y2RkbC9jb250cmliL29wZW5zb2xhcmlzL3V0cy9jb21tb24vZnMvemZzL2Rub2RlLmM6NDE1CktE Qjogc3RhY2sgYmFja3RyYWNlOgpkYl90cmFjZV9zZWxmX3dyYXBwZXIoKSBhdCBkYl90cmFjZV9z ZWxmX3dyYXBwZXIrMHgyYQpfd2l0bmVzc19kZWJ1Z2dlcigpIGF0IF93aXRuZXNzX2RlYnVnZ2Vy KzB4MmUKd2l0bmVzc19jaGVja29yZGVyKCkgYXQgd2l0bmVzc19jaGVja29yZGVyKzB4ODA3Cl9z eF94bG9jaygpIGF0IF9zeF94bG9jaysweDU1CmRub2RlX2NyZWF0ZSgpIGF0IGRub2RlX2NyZWF0 ZSsweDEzMApkbm9kZV9ob2xkX2ltcGwoKSBhdCBkbm9kZV9ob2xkX2ltcGwrMHg1NmEKZG11X2J1 Zl9ob2xkKCkgYXQgZG11X2J1Zl9ob2xkKzB4NDgKemFwX2xvY2tkaXIoKSBhdCB6YXBfbG9ja2Rp cisweDc0CnphcF9sb29rdXBfbm9ybSgpIGF0IHphcF9sb29rdXBfbm9ybSsweDQ1CnphcF9sb29r dXAoKSBhdCB6YXBfbG9va3VwKzB4MmUKc2Ffc2V0dXAoKSBhdCBzYV9zZXR1cCsweDIxNAp6ZnN2 ZnNfY3JlYXRlKCkgYXQgemZzdmZzX2NyZWF0ZSsweDIxOAp6ZnNfbW91bnQoKSBhdCB6ZnNfbW91 bnQrMHhlNgp2ZnNfZG9ubW91bnQoKSBhdCB2ZnNfZG9ubW91bnQrMHhjZGUKbm1vdW50KCkgYXQg bm1vdW50KzB4NjMKc3lzY2FsbGVudGVyKCkgYXQgc3lzY2FsbGVudGVyKzB4MWNiCnN5c2NhbGwo KSBhdCBzeXNjYWxsKzB4NGMKWGZhc3Rfc3lzY2FsbCgpIGF0IFhmYXN0X3N5c2NhbGwrMHhlMgot LS0gc3lzY2FsbCAoMzc4LCBGcmVlQlNEIEVMRjY0LCBubW91bnQpLCByaXAgPSAweDgwMTA2MWQ0 YywgcnNwID0gMHg3ZmZmZmZmZmNjZTgsIHJicCA9IDB4ODAxODVmMTBjIC0tLQpsb2NrIG9yZGVy IHJldmVyc2FsOgogMXN0IDB4ZmZmZmZmMDAwMmUyZTEwMCBzYS0+c2FfbG9jayAoc2EtPnNhX2xv Y2spIEAgL2hlYWRfb2xkL3N5cy9tb2R1bGVzL3pmcy8uLi8uLi9jZGRsL2NvbnRyaWIvb3BlbnNv bGFyaXMvdXRzL2NvbW1vbi9mcy96ZnMvc2EuYzo5OTUKIDJuZCAweGZmZmZmZjAwMDJjYTU5OTgg 
emFwLT56YXBfcndsb2NrICh6YXAtPnphcF9yd2xvY2spIEAgL2hlYWRfb2xkL3N5cy9tb2R1bGVz L3pmcy8uLi8uLi9jZGRsL2NvbnRyaWIvb3BlbnNvbGFyaXMvdXRzL2NvbW1vbi9mcy96ZnMvemFw X21pY3JvLmM6Mzc1CktEQjogc3RhY2sgYmFja3RyYWNlOgpkYl90cmFjZV9zZWxmX3dyYXBwZXIo KSBhdCBkYl90cmFjZV9zZWxmX3dyYXBwZXIrMHgyYQpfd2l0bmVzc19kZWJ1Z2dlcigpIGF0IF93 aXRuZXNzX2RlYnVnZ2VyKzB4MmUKd2l0bmVzc19jaGVja29yZGVyKCkgYXQgd2l0bmVzc19jaGVj a29yZGVyKzB4ODA3Cl9zeF94bG9jaygpIGF0IF9zeF94bG9jaysweDU1CnphcF9sb2NrZGlyKCkg YXQgemFwX2xvY2tkaXIrMHgzNGMKemFwX2xvb2t1cF9ub3JtKCkgYXQgemFwX2xvb2t1cF9ub3Jt KzB4NDUKemFwX2xvb2t1cCgpIGF0IHphcF9sb29rdXArMHgyZQpzYV9zZXR1cCgpIGF0IHNhX3Nl dHVwKzB4MjE0Cnpmc3Zmc19jcmVhdGUoKSBhdCB6ZnN2ZnNfY3JlYXRlKzB4MjE4Cnpmc19tb3Vu dCgpIGF0IHpmc19tb3VudCsweGU2CnZmc19kb25tb3VudCgpIGF0IHZmc19kb25tb3VudCsweGNk ZQpubW91bnQoKSBhdCBubW91bnQrMHg2MwpzeXNjYWxsZW50ZXIoKSBhdCBzeXNjYWxsZW50ZXIr MHgxY2IKc3lzY2FsbCgpIGF0IHN5c2NhbGwrMHg0YwpYZmFzdF9zeXNjYWxsKCkgYXQgWGZhc3Rf c3lzY2FsbCsweGUyCi0tLSBzeXNjYWxsICgzNzgsIEZyZWVCU0QgRUxGNjQsIG5tb3VudCksIHJp cCA9IDB4ODAxMDYxZDRjLCByc3AgPSAweDdmZmZmZmZmY2NlOCwgcmJwID0gMHg4MDE4NWYxMGMg LS0tCmxvY2sgb3JkZXIgcmV2ZXJzYWw6CiAxc3QgMHhmZmZmZmYwMDAyZTI0MDk4IHpmcyAoemZz KSBAIC9oZWFkX29sZC9zeXMvbW9kdWxlcy96ZnMvLi4vLi4vY2RkbC9jb250cmliL29wZW5zb2xh cmlzL3V0cy9jb21tb24vZnMvZ2ZzLmM6NDg4CiAybmQgMHhmZmZmZmYwMDAyZGM5MzUwIHpmc3Zm cy0+el9ob2xkX210eFtpXSAoemZzdmZzLT56X2hvbGRfbXR4W2ldKSBAIC9oZWFkX29sZC9zeXMv bW9kdWxlcy96ZnMvLi4vLi4vY2RkbC9jb250cmliL29wZW5zb2xhcmlzL3V0cy9jb21tb24vZnMv emZzL3pmc196bm9kZS5jOjExMTYKS0RCOiBzdGFjayBiYWNrdHJhY2U6CmRiX3RyYWNlX3NlbGZf d3JhcHBlcigpIGF0IGRiX3RyYWNlX3NlbGZfd3JhcHBlcisweDJhCl93aXRuZXNzX2RlYnVnZ2Vy KCkgYXQgX3dpdG5lc3NfZGVidWdnZXIrMHgyZQp3aXRuZXNzX2NoZWNrb3JkZXIoKSBhdCB3aXRu ZXNzX2NoZWNrb3JkZXIrMHg4MDcKX3N4X3hsb2NrKCkgYXQgX3N4X3hsb2NrKzB4NTUKemZzX3pn ZXQoKSBhdCB6ZnNfemdldCsweDI0MQp6ZnNfcm9vdCgpIGF0IHpmc19yb290KzB4NTAKemZzY3Rs X2NyZWF0ZSgpIGF0IHpmc2N0bF9jcmVhdGUrMHg4MQp6ZnNfbW91bnQoKSBhdCB6ZnNfbW91bnQr MHg1ZjQKdmZzX2Rvbm1vdW50KCkgYXQgdmZzX2Rvbm1vdW50KzB4Y2RlCm5tb3VudCgpIGF0IG5t b3VudCsweDYzCnN5c2NhbGxlbnRlcigpIGF0IHN5c2NhbGxlbnRlcisweDFjYgpzeXNjYWxsKCkg YXQgc3lzY2FsbCsweDRjClhmYXN0X3N5c2NhbGwoKSBhdCBYZmFzdF9zeXNjYWxsKzB4ZTIKLS0t IHN5c2NhbGwgKDM3OCwgRnJlZUJTRCBFTEY2NCwgbm1vdW50KSwgcmlwID0gMHg4MDEwNjFkNGMs IHJzcCA9IDB4N2ZmZmZmZmZjY2U4LCByYnAgPSAweDgwMTg1ZjEwYyAtLS0KU2V0dGluZyBob3N0 bmFtZTogeGFuYWR1Ci4KU3RhcnRpbmcgTmV0d29yazogbG8wIGJnZTEuCmxvMDogZmxhZ3M9ODA0 OTxVUCxMT09QQkFDSyxSVU5OSU5HLE1VTFRJQ0FTVD4gbWV0cmljIDAgbXR1IDE2Mzg0CglvcHRp b25zPTM8UlhDU1VNLFRYQ1NVTT4KCWluZXQ2IGZlODA6OjElbG8wIHByZWZpeGxlbiA2NCBzY29w ZWlkIDB4NCAKCWluZXQ2IDo6MSBwcmVmaXhsZW4gMTI4IAoJaW5ldCAxMjcuMC4wLjEgbmV0bWFz ayAweGZmMDAwMDAwIAoJbmQ2IG9wdGlvbnM9MjE8UEVSRk9STU5VRCxBVVRPX0xJTktMT0NBTD4K YmdlMTogZmxhZ3M9ODg0MzxVUCxCUk9BRENBU1QsUlVOTklORyxTSU1QTEVYLE1VTFRJQ0FTVD4g bWV0cmljIDAgbXR1IDE1MDAKCW9wdGlvbnM9ODAwOWI8UlhDU1VNLFRYQ1NVTSxWTEFOX01UVSxW TEFOX0hXVEFHR0lORyxWTEFOX0hXQ1NVTSxMSU5LU1RBVEU+CglldGhlciAwMDplMDo4MTo0MDoy OTpkMwoJaW5ldDYgZmU4MDo6MmUwOjgxZmY6ZmU0MDoyOWQzJWJnZTEgcHJlZml4bGVuIDY0IHRl bnRhdGl2ZSBzY29wZWlkIDB4MiAKCW5kNiBvcHRpb25zPTIxPFBFUkZPUk1OVUQsQVVUT19MSU5L TE9DQUw+CgltZWRpYTogRXRoZXJuZXQgYXV0b3NlbGVjdCAobm9uZSkKCXN0YXR1czogbm8gY2Fy cmllcgpTdGFydGluZyBkZXZkLgpFTEYgbGRjb25maWcgcGF0aDogL2xpYiAvdXNyL2xpYiAvdXNy L2xpYi9jb21wYXQgL3Vzci9sb2NhbC9saWIKMzItYml0IGNvbXBhdGliaWxpdHkgbGRjb25maWcg cGF0aDogL3Vzci9saWIzMgpDcmVhdGluZyBhbmQvb3IgdHJpbW1pbmcgbG9nIGZpbGVzCi4KU3Rh cnRpbmcgc3lzbG9nZC4KTm8gY29yZSBkdW1wcyBmb3VuZC4KQ2xlYXJpbmcgL3RtcCAoWCByZWxh dGVkKS4KU3RhcnRpbmcgcnBjYmluZC4KU3RhcnRpbmcgbW91bnRkLgpTdGFydGluZyBuZnNkLgpV 
cGRhdGluZyBtb3RkOgpiZ2UxOiBsaW5rIHN0YXRlIGNoYW5nZWQgdG8gVVAKLgpDb25maWd1cmlu ZyBzeXNjb25zOgogYmxhbmt0aW1lCi4KU3RhcnRpbmcgc3NoZC4KU2VwIDEzIDAxOjM4OjA0IHhh bmFkdSBzbS1tdGFbMTE0OV06IE15IHVucXVhbGlmaWVkIGhvc3QgbmFtZSAoeGFuYWR1KSB1bmtu b3duOyBzbGVlcGluZyBmb3IgcmV0cnkKU2VwIDEzIDAxOjM5OjA0IHhhbmFkdSBzbS1tdGFbMTE0 OV06IHVuYWJsZSB0byBxdWFsaWZ5IG15IG93biBkb21haW4gbmFtZSAoeGFuYWR1KSAtLSB1c2lu ZyBzaG9ydCBuYW1lClNlcCAxMyAwMTozOTowNCB4YW5hZHUgc20tbXNwLXF1ZXVlWzExNTZdOiBN eSB1bnF1YWxpZmllZCBob3N0IG5hbWUgKHhhbmFkdSkgdW5rbm93bjsgc2xlZXBpbmcgZm9yIHJl dHJ5ClNlcCAxMyAwMTo0MDowNCB4YW5hZHUgc20tbXNwLXF1ZXVlWzExNTZdOiB1bmFibGUgdG8g cXVhbGlmeSBteSBvd24gZG9tYWluIG5hbWUgKHhhbmFkdSkgLS0gdXNpbmcgc2hvcnQgbmFtZQpT dGFydGluZyBjcm9uLgpTdGFydGluZyBiYWNrZ3JvdW5kIGZpbGUgc3lzdGVtIGNoZWNrcyBpbiA2 MCBzZWNvbmRzLgoKTW9uIFNlcCAxMyAwMTo0MDowNCBVVEMgMjAxMApTZXAgMTMgMDE6NDM6MjQg eGFuYWR1IHNzaGRbMTE0Ml06IGVycm9yOiBhY2NlcHQ6IFNvZnR3YXJlIGNhdXNlZCBjb25uZWN0 aW9uIGFib3J0CmxvY2sgb3JkZXIgcmV2ZXJzYWw6CiAxc3QgMHhmZmZmZmYwMDAyZGM5MjAwIHpm c3Zmcy0+el90ZWFyZG93bl9pbmFjdGl2ZV9sb2NrICh6ZnN2ZnMtPnpfdGVhcmRvd25faW5hY3Rp dmVfbG9jaykgQCAvaGVhZF9vbGQvc3lzL21vZHVsZXMvemZzLy4uLy4uL2NkZGwvY29udHJpYi9v cGVuc29sYXJpcy91dHMvY29tbW9uL2ZzL3pmcy96ZnNfdm5vcHMuYzo0NTAzCiAybmQgMHhmZmZm ZmYwMDAyZGM5M2YwIHpmc3Zmcy0+el9ob2xkX210eFtpXSAoemZzdmZzLT56X2hvbGRfbXR4W2ld KSBAIC9oZWFkX29sZC9zeXMvbW9kdWxlcy96ZnMvLi4vLi4vY2RkbC9jb250cmliL29wZW5zb2xh cmlzL3V0cy9jb21tb24vZnMvemZzL3pmc196bm9kZS5jOjEzNDUKS0RCOiBzdGFjayBiYWNrdHJh Y2U6CmRiX3RyYWNlX3NlbGZfd3JhcHBlcigpIGF0IGRiX3RyYWNlX3NlbGZfd3JhcHBlcisweDJh Cl93aXRuZXNzX2RlYnVnZ2VyKCkgYXQgX3dpdG5lc3NfZGVidWdnZXIrMHgyZQp3aXRuZXNzX2No ZWNrb3JkZXIoKSBhdCB3aXRuZXNzX2NoZWNrb3JkZXIrMHg4MDcKX3N4X3hsb2NrKCkgYXQgX3N4 X3hsb2NrKzB4NTUKemZzX3ppbmFjdGl2ZSgpIGF0IHpmc196aW5hY3RpdmUrMHg4Ygp6ZnNfaW5h Y3RpdmUoKSBhdCB6ZnNfaW5hY3RpdmUrMHg3ZQp6ZnNfZnJlZWJzZF9pbmFjdGl2ZSgpIGF0IHpm c19mcmVlYnNkX2luYWN0aXZlKzB4MWEKVk9QX0lOQUNUSVZFX0FQVigpIGF0IFZPUF9JTkFDVElW RV9BUFYrMHhkOQp2aW5hY3RpdmUoKSBhdCB2aW5hY3RpdmUrMHg5MAp2cHV0eCgpIGF0IHZwdXR4 KzB4MmRjCm5mc3J2M19hY2Nlc3MoKSBhdCBuZnNydjNfYWNjZXNzKzB4MmNhCm5mc3N2Y19wcm9n cmFtKCkgYXQgbmZzc3ZjX3Byb2dyYW0rMHgxYTYKc3ZjX3J1bl9pbnRlcm5hbCgpIGF0IHN2Y19y dW5faW50ZXJuYWwrMHg1ZmIKc3ZjX3RocmVhZF9zdGFydCgpIGF0IHN2Y190aHJlYWRfc3RhcnQr MHhiCmZvcmtfZXhpdCgpIGF0IGZvcmtfZXhpdCsweDEyYQpmb3JrX3RyYW1wb2xpbmUoKSBhdCBm b3JrX3RyYW1wb2xpbmUrMHhlCi0tLSB0cmFwIDB4YywgcmlwID0gMHg4MDA2YTJiZWMsIHJzcCA9 IDB4N2ZmZmZmZmZlNjk4LCByYnAgPSAweDUgLS0tCmxvY2sgb3JkZXIgcmV2ZXJzYWw6CiAxc3Qg MHhmZmZmZmYwMDYyNjhjYjgwIHpwLT56X25hbWVfbG9jayAoenAtPnpfbmFtZV9sb2NrKSBAIC9o ZWFkX29sZC9zeXMvbW9kdWxlcy96ZnMvLi4vLi4vY2RkbC9jb250cmliL29wZW5zb2xhcmlzL3V0 cy9jb21tb24vZnMvemZzL3pmc19kaXIuYzoyMjAKIDJuZCAweGZmZmZmZjAwMDJkYzk2OTAgemZz dmZzLT56X2hvbGRfbXR4W2ldICh6ZnN2ZnMtPnpfaG9sZF9tdHhbaV0pIEAgL2hlYWRfb2xkL3N5 cy9tb2R1bGVzL3pmcy8uLi8uLi9jZGRsL2NvbnRyaWIvb3BlbnNvbGFyaXMvdXRzL2NvbW1vbi9m cy96ZnMvemZzX3pub2RlLmM6MTExNgpLREI6IHN0YWNrIGJhY2t0cmFjZToKZGJfdHJhY2Vfc2Vs Zl93cmFwcGVyKCkgYXQgZGJfdHJhY2Vfc2VsZl93cmFwcGVyKzB4MmEKX3dpdG5lc3NfZGVidWdn ZXIoKSBhdCBfd2l0bmVzc19kZWJ1Z2dlcisweDJlCndpdG5lc3NfY2hlY2tvcmRlcigpIGF0IHdp dG5lc3NfY2hlY2tvcmRlcisweDgwNwpfc3hfeGxvY2soKSBhdCBfc3hfeGxvY2srMHg1NQp6ZnNf emdldCgpIGF0IHpmc196Z2V0KzB4MjQxCnpmc19kaXJlbnRfbG9jaygpIGF0IHpmc19kaXJlbnRf bG9jaysweDRjMQp6ZnNfZGlybG9vaygpIGF0IHpmc19kaXJsb29rKzB4OTAKemZzX2xvb2t1cCgp IGF0IHpmc19sb29rdXArMHgyZDQKemZzX2ZyZWVic2RfbG9va3VwKCkgYXQgemZzX2ZyZWVic2Rf bG9va3VwKzB4OGQKVk9QX0NBQ0hFRExPT0tVUF9BUFYoKSBhdCBWT1BfQ0FDSEVETE9PS1VQX0FQ VisweGQ3CnZmc19jYWNoZV9sb29rdXAoKSBhdCB2ZnNfY2FjaGVfbG9va3VwKzB4ZjAKVk9QX0xP 
T0tVUF9BUFYoKSBhdCBWT1BfTE9PS1VQX0FQVisweGRmCmxvb2t1cCgpIGF0IGxvb2t1cCsweDNk MwpuZnNfbmFtZWkoKSBhdCBuZnNfbmFtZWkrMHgzZTMKbmZzcnZfbG9va3VwKCkgYXQgbmZzcnZf bG9va3VwKzB4MjE2Cm5mc3N2Y19wcm9ncmFtKCkgYXQgbmZzc3ZjX3Byb2dyYW0rMHgxYTYKc3Zj X3J1bl9pbnRlcm5hbCgpIGF0IHN2Y19ydW5faW50ZXJuYWwrMHg1ZmIKc3ZjX3J1bigpIGF0IHN2 Y19ydW4rMHg4ZgpuZnNzdmNfbmZzZCgpIGF0IG5mc3N2Y19uZnNkKzB4YTIKbmZzc3ZjX25mc3Nl cnZlcigpIGF0IG5mc3N2Y19uZnNzZXJ2ZXIrMHg1YgpuZnNzdmMoKSBhdCBuZnNzdmMrMHg3Mwpz eXNjYWxsZW50ZXIoKSBhdCBzeXNjYWxsZW50ZXIrMHgxY2IKc3lzY2FsbCgpIGF0IHN5c2NhbGwr MHg0YwpYZmFzdF9zeXNjYWxsKCkgYXQgWGZhc3Rfc3lzY2FsbCsweGUyCi0tLSBzeXNjYWxsICgx NTUsIEZyZWVCU0QgRUxGNjQsIG5mc3N2YyksIHJpcCA9IDB4ODAwNmEyYmVjLCByc3AgPSAweDdm ZmZmZmZmZTY5OCwgcmJwID0gMHg1IC0tLQpsb2NrIG9yZGVyIHJldmVyc2FsOgogMXN0IDB4ZmZm ZmZmMDAwMmNiMTJlOCBkYi0+ZGJfbXR4IChkYi0+ZGJfbXR4KSBAIC9oZWFkX29sZC9zeXMvbW9k dWxlcy96ZnMvLi4vLi4vY2RkbC9jb250cmliL29wZW5zb2xhcmlzL3V0cy9jb21tb24vZnMvemZz L2RidWYuYzoxMjQyCiAybmQgMHhmZmZmZmYwMGM2MTI3YjM4IGRyLT5kdC5kaS5kcl9tdHggKGRy LT5kdC5kaS5kcl9tdHgpIEAgL2hlYWRfb2xkL3N5cy9tb2R1bGVzL3pmcy8uLi8uLi9jZGRsL2Nv bnRyaWIvb3BlbnNvbGFyaXMvdXRzL2NvbW1vbi9mcy96ZnMvZGJ1Zi5jOjEyNDYKS0RCOiBzdGFj ayBiYWNrdHJhY2U6CmRiX3RyYWNlX3NlbGZfd3JhcHBlcigpIGF0IGRiX3RyYWNlX3NlbGZfd3Jh cHBlcisweDJhCl93aXRuZXNzX2RlYnVnZ2VyKCkgYXQgX3dpdG5lc3NfZGVidWdnZXIrMHgyZQp3 aXRuZXNzX2NoZWNrb3JkZXIoKSBhdCB3aXRuZXNzX2NoZWNrb3JkZXIrMHg4MDcKX3N4X3hsb2Nr KCkgYXQgX3N4X3hsb2NrKzB4NTUKZGJ1Zl9kaXJ0eSgpIGF0IGRidWZfZGlydHkrMHg2Y2UKZGJ1 Zl9kaXJ0eSgpIGF0IGRidWZfZGlydHkrMHg1MmIKZGJ1Zl9kaXJ0eSgpIGF0IGRidWZfZGlydHkr MHg1MmIKZGJ1Zl9kaXJ0eSgpIGF0IGRidWZfZGlydHkrMHg1MmIKZGJ1Zl9kaXJ0eSgpIGF0IGRi dWZfZGlydHkrMHg1MmIKZGJ1Zl9kaXJ0eSgpIGF0IGRidWZfZGlydHkrMHg1MmIKZG5vZGVfc2V0 ZGlydHkoKSBhdCBkbm9kZV9zZXRkaXJ0eSsweDFhNQpkYnVmX2RpcnR5KCkgYXQgZGJ1Zl9kaXJ0 eSsweDU5MwpzYV9hdHRyX29wKCkgYXQgc2FfYXR0cl9vcCsweDMyYwpzYV9idWxrX3VwZGF0ZV9p bXBsKCkgYXQgc2FfYnVsa191cGRhdGVfaW1wbCsweDdjCnNhX2J1bGtfdXBkYXRlKCkgYXQgc2Ff YnVsa191cGRhdGUrMHg1MAp6ZnNfZnJlZWJzZF9zZXRhdHRyKCkgYXQgemZzX2ZyZWVic2Rfc2V0 YXR0cisweDFiNjEKVk9QX1NFVEFUVFJfQVBWKCkgYXQgVk9QX1NFVEFUVFJfQVBWKzB4ZDMKbmZz cnZfc2V0YXR0cigpIGF0IG5mc3J2X3NldGF0dHIrMHg4MGQKbmZzc3ZjX3Byb2dyYW0oKSBhdCBu ZnNzdmNfcHJvZ3JhbSsweDFhNgpzdmNfcnVuX2ludGVybmFsKCkgYXQgc3ZjX3J1bl9pbnRlcm5h bCsweDVmYgpzdmNfdGhyZWFkX3N0YXJ0KCkgYXQgc3ZjX3RocmVhZF9zdGFydCsweGIKZm9ya19l eGl0KCkgYXQgZm9ya19leGl0KzB4MTJhCmZvcmtfdHJhbXBvbGluZSgpIGF0IGZvcmtfdHJhbXBv bGluZSsweGUKLS0tIHRyYXAgMHhjLCByaXAgPSAweDgwMDZhMmJlYywgcnNwID0gMHg3ZmZmZmZm ZmU2OTgsIHJicCA9IDB4NSAtLS0KbG9jayBvcmRlciByZXZlcnNhbDoKIDFzdCAweGZmZmZmZjAw YzYxMDYwYzAgbC0+bF9yd2xvY2sgKGwtPmxfcndsb2NrKSBAIC9oZWFkX29sZC9zeXMvbW9kdWxl cy96ZnMvLi4vLi4vY2RkbC9jb250cmliL29wZW5zb2xhcmlzL3V0cy9jb21tb24vZnMvemZzL3ph cC5jOjUyMgogMm5kIDB4ZmZmZmZmMDBjNjE1NTAwMCBkbi0+ZG5fc3RydWN0X3J3bG9jayAoZG4t PmRuX3N0cnVjdF9yd2xvY2spIEAgL2hlYWRfb2xkL3N5cy9tb2R1bGVzL3pmcy8uLi8uLi9jZGRs L2NvbnRyaWIvb3BlbnNvbGFyaXMvdXRzL2NvbW1vbi9mcy96ZnMvZGJ1Zi5jOjYxMQpLREI6IHN0 YWNrIGJhY2t0cmFjZToKZGJfdHJhY2Vfc2VsZl93cmFwcGVyKCkgYXQgZGJfdHJhY2Vfc2VsZl93 cmFwcGVyKzB4MmEKX3dpdG5lc3NfZGVidWdnZXIoKSBhdCBfd2l0bmVzc19kZWJ1Z2dlcisweDJl CndpdG5lc3NfY2hlY2tvcmRlcigpIGF0IHdpdG5lc3NfY2hlY2tvcmRlcisweDgwNwpfc3hfc2xv Y2soKSBhdCBfc3hfc2xvY2srMHg1NApkYnVmX3JlYWQoKSBhdCBkYnVmX3JlYWQrMHgzMTkKZGJ1 Zl93aWxsX2RpcnR5KCkgYXQgZGJ1Zl93aWxsX2RpcnR5KzB4ODAKemFwX2dldF9sZWFmX2J5Ymxr KCkgYXQgemFwX2dldF9sZWFmX2J5YmxrKzB4MWRkCnphcF9kZXJlZl9sZWFmKCkgYXQgemFwX2Rl cmVmX2xlYWYrMHhiNgpmemFwX2FkZF9jZCgpIGF0IGZ6YXBfYWRkX2NkKzB4NzUKemFwX2FkZCgp IGF0IHphcF9hZGQrMHgxMDUKemZzX2xpbmtfY3JlYXRlKCkgYXQgemZzX2xpbmtfY3JlYXRlKzB4 
MzMyCnpmc19mcmVlYnNkX2NyZWF0ZSgpIGF0IHpmc19mcmVlYnNkX2NyZWF0ZSsweDc2MgpWT1Bf Q1JFQVRFX0FQVigpIGF0IFZPUF9DUkVBVEVfQVBWKzB4ZDcKbmZzcnZfY3JlYXRlKCkgYXQgbmZz cnZfY3JlYXRlKzB4OTA2Cm5mc3N2Y19wcm9ncmFtKCkgYXQgbmZzc3ZjX3Byb2dyYW0rMHgxYTYK c3ZjX3J1bl9pbnRlcm5hbCgpIGF0IHN2Y19ydW5faW50ZXJuYWwrMHg1ZmIKc3ZjX3RocmVhZF9z dGFydCgpIGF0IHN2Y190aHJlYWRfc3RhcnQrMHhiCmZvcmtfZXhpdCgpIGF0IGZvcmtfZXhpdCsw eDEyYQpmb3JrX3RyYW1wb2xpbmUoKSBhdCBmb3JrX3RyYW1wb2xpbmUrMHhlCi0tLSB0cmFwIDB4 YywgcmlwID0gMHg4MDA2YTJiZWMsIHJzcCA9IDB4N2ZmZmZmZmZlNjk4LCByYnAgPSAweDUgLS0t CmxvY2sgb3JkZXIgcmV2ZXJzYWw6CiAxc3QgMHhmZmZmZmYwMGM2MTA2MGMwIGwtPmxfcndsb2Nr IChsLT5sX3J3bG9jaykgQCAvaGVhZF9vbGQvc3lzL21vZHVsZXMvemZzLy4uLy4uL2NkZGwvY29u dHJpYi9vcGVuc29sYXJpcy91dHMvY29tbW9uL2ZzL3pmcy96YXAuYzo1MjIKIDJuZCAweGZmZmZm ZjAwMDJjYzRhNjAgb3MtPm9zX2xvY2sgKG9zLT5vc19sb2NrKSBAIC9oZWFkX29sZC9zeXMvbW9k dWxlcy96ZnMvLi4vLi4vY2RkbC9jb250cmliL29wZW5zb2xhcmlzL3V0cy9jb21tb24vZnMvemZz L2Rub2RlLmM6MTIyOApLREI6IHN0YWNrIGJhY2t0cmFjZToKZGJfdHJhY2Vfc2VsZl93cmFwcGVy KCkgYXQgZGJfdHJhY2Vfc2VsZl93cmFwcGVyKzB4MmEKX3dpdG5lc3NfZGVidWdnZXIoKSBhdCBf d2l0bmVzc19kZWJ1Z2dlcisweDJlCndpdG5lc3NfY2hlY2tvcmRlcigpIGF0IHdpdG5lc3NfY2hl Y2tvcmRlcisweDgwNwpfc3hfeGxvY2soKSBhdCBfc3hfeGxvY2srMHg1NQpkbm9kZV9zZXRkaXJ0 eSgpIGF0IGRub2RlX3NldGRpcnR5KzB4YzkKZGJ1Zl9kaXJ0eSgpIGF0IGRidWZfZGlydHkrMHg1 OTMKZGJ1Zl9kaXJ0eSgpIGF0IGRidWZfZGlydHkrMHg1MmIKemFwX2dldF9sZWFmX2J5YmxrKCkg YXQgemFwX2dldF9sZWFmX2J5YmxrKzB4MWRkCnphcF9kZXJlZl9sZWFmKCkgYXQgemFwX2RlcmVm X2xlYWYrMHhiNgpmemFwX2FkZF9jZCgpIGF0IGZ6YXBfYWRkX2NkKzB4NzUKemFwX2FkZCgpIGF0 IHphcF9hZGQrMHgxMDUKemZzX2xpbmtfY3JlYXRlKCkgYXQgemZzX2xpbmtfY3JlYXRlKzB4MzMy Cnpmc19mcmVlYnNkX2NyZWF0ZSgpIGF0IHpmc19mcmVlYnNkX2NyZWF0ZSsweDc2MgpWT1BfQ1JF QVRFX0FQVigpIGF0IFZPUF9DUkVBVEVfQVBWKzB4ZDcKbmZzcnZfY3JlYXRlKCkgYXQgbmZzcnZf Y3JlYXRlKzB4OTA2Cm5mc3N2Y19wcm9ncmFtKCkgYXQgbmZzc3ZjX3Byb2dyYW0rMHgxYTYKc3Zj X3J1bl9pbnRlcm5hbCgpIGF0IHN2Y19ydW5faW50ZXJuYWwrMHg1ZmIKc3ZjX3RocmVhZF9zdGFy dCgpIGF0IHN2Y190aHJlYWRfc3RhcnQrMHhiCmZvcmtfZXhpdCgpIGF0IGZvcmtfZXhpdCsweDEy YQpmb3JrX3RyYW1wb2xpbmUoKSBhdCBmb3JrX3RyYW1wb2xpbmUrMHhlCi0tLSB0cmFwIDB4Yywg cmlwID0gMHg4MDA2YTJiZWMsIHJzcCA9IDB4N2ZmZmZmZmZlNjk4LCByYnAgPSAweDUgLS0tCmxv Y2sgb3JkZXIgcmV2ZXJzYWw6CiAxc3QgMHhmZmZmZmYwMGM2MjI3YTM4IGRyLT5kdC5kaS5kcl9t dHggKGRyLT5kdC5kaS5kcl9tdHgpIEAgL2hlYWRfb2xkL3N5cy9tb2R1bGVzL3pmcy8uLi8uLi9j ZGRsL2NvbnRyaWIvb3BlbnNvbGFyaXMvdXRzL2NvbW1vbi9mcy96ZnMvZGJ1Zi5jOjIyNDMKIDJu ZCAweGZmZmZmZjAwYzYyMWM4NTAgZG4tPmRuX3N0cnVjdF9yd2xvY2sgKGRuLT5kbl9zdHJ1Y3Rf cndsb2NrKSBAIC9oZWFkX29sZC9zeXMvbW9kdWxlcy96ZnMvLi4vLi4vY2RkbC9jb250cmliL29w ZW5zb2xhcmlzL3V0cy9jb21tb24vZnMvemZzL2RidWYuYzo2MTEKS0RCOiBzdGFjayBiYWNrdHJh Y2U6CmRiX3RyYWNlX3NlbGZfd3JhcHBlcigpIGF0IGRiX3RyYWNlX3NlbGZfd3JhcHBlcisweDJh Cl93aXRuZXNzX2RlYnVnZ2VyKCkgYXQgX3dpdG5lc3NfZGVidWdnZXIrMHgyZQp3aXRuZXNzX2No ZWNrb3JkZXIoKSBhdCB3aXRuZXNzX2NoZWNrb3JkZXIrMHg4MDcKX3N4X3Nsb2NrKCkgYXQgX3N4 X3Nsb2NrKzB4NTQKZGJ1Zl9yZWFkKCkgYXQgZGJ1Zl9yZWFkKzB4MzE5CmRidWZfc3luY19saXN0 KCkgYXQgZGJ1Zl9zeW5jX2xpc3QrMHg2NDIKZGJ1Zl9zeW5jX2xpc3QoKSBhdCBkYnVmX3N5bmNf bGlzdCsweDE3Zgpkbm9kZV9zeW5jKCkgYXQgZG5vZGVfc3luYysweGU5YwpkbXVfb2Jqc2V0X3N5 bmNfZG5vZGVzKCkgYXQgZG11X29ianNldF9zeW5jX2Rub2RlcysweDkyCmRtdV9vYmpzZXRfc3lu YygpIGF0IGRtdV9vYmpzZXRfc3luYysweDE5ZApkc2xfcG9vbF9zeW5jKCkgYXQgZHNsX3Bvb2xf c3luYysweGU1CnNwYV9zeW5jKCkgYXQgc3BhX3N5bmMrMHgzM2YKdHhnX3N5bmNfdGhyZWFkKCkg YXQgdHhnX3N5bmNfdGhyZWFkKzB4MTQ3CmZvcmtfZXhpdCgpIGF0IGZvcmtfZXhpdCsweDEyYQpm b3JrX3RyYW1wb2xpbmUoKSBhdCBmb3JrX3RyYW1wb2xpbmUrMHhlCi0tLSB0cmFwIDAsIHJpcCA9 IDAsIHJzcCA9IDB4ZmZmZmZmODA2MjczZmNmMCwgcmJwID0gMCAtLS0KbG9jayBvcmRlciByZXZl 
cnNhbDoKIDFzdCAweGZmZmZmZjAwYzYxNzQwMDAgZG4tPmRuX3N0cnVjdF9yd2xvY2sgKGRuLT5k bl9zdHJ1Y3Rfcndsb2NrKSBAIC9oZWFkX29sZC9zeXMvbW9kdWxlcy96ZnMvLi4vLi4vY2RkbC9j b250cmliL29wZW5zb2xhcmlzL3V0cy9jb21tb24vZnMvemZzL2RtdV90eC5jOjQ0MAogMm5kIDB4 ZmZmZmZmMDAwMmNhNDQxOCB6YXAtPnphcF9yd2xvY2sgKHphcC0+emFwX3J3bG9jaykgQCAvaGVh ZF9vbGQvc3lzL21vZHVsZXMvemZzLy4uLy4uL2NkZGwvY29udHJpYi9vcGVuc29sYXJpcy91dHMv Y29tbW9uL2ZzL3pmcy96YXBfbWljcm8uYzo0OTAKS0RCOiBzdGFjayBiYWNrdHJhY2U6CmRiX3Ry YWNlX3NlbGZfd3JhcHBlcigpIGF0IGRiX3RyYWNlX3NlbGZfd3JhcHBlcisweDJhCl93aXRuZXNz X2RlYnVnZ2VyKCkgYXQgX3dpdG5lc3NfZGVidWdnZXIrMHgyZQp3aXRuZXNzX2NoZWNrb3JkZXIo KSBhdCB3aXRuZXNzX2NoZWNrb3JkZXIrMHg4MDcKX3N4X3Nsb2NrKCkgYXQgX3N4X3Nsb2NrKzB4 NTQKemFwX2xvY2tkaXIoKSBhdCB6YXBfbG9ja2RpcisweDExMQp6YXBfcHJlZmV0Y2hfdWludDY0 KCkgYXQgemFwX3ByZWZldGNoX3VpbnQ2NCsweDM2CmRkdF9wcmVmZXRjaCgpIGF0IGRkdF9wcmVm ZXRjaCsweGJhCmRzbF9kYXRhc2V0X2Jsb2NrX2ZyZWVhYmxlKCkgYXQgZHNsX2RhdGFzZXRfYmxv Y2tfZnJlZWFibGUrMHgzYwpkbXVfdHhfaG9sZF9mcmVlKCkgYXQgZG11X3R4X2hvbGRfZnJlZSsw eDU2ZgpkbXVfZnJlZV9sb25nX3JhbmdlX2ltcGwoKSBhdCBkbXVfZnJlZV9sb25nX3JhbmdlX2lt cGwrMHgxMTQKZG11X2ZyZWVfbG9uZ19yYW5nZSgpIGF0IGRtdV9mcmVlX2xvbmdfcmFuZ2UrMHg0 Ywp6ZnNfcm1ub2RlKCkgYXQgemZzX3Jtbm9kZSsweDg5Cnpmc19pbmFjdGl2ZSgpIGF0IHpmc19p bmFjdGl2ZSsweDdlCnpmc19mcmVlYnNkX2luYWN0aXZlKCkgYXQgemZzX2ZyZWVic2RfaW5hY3Rp dmUrMHgxYQpWT1BfSU5BQ1RJVkVfQVBWKCkgYXQgVk9QX0lOQUNUSVZFX0FQVisweGQ5CnZpbmFj dGl2ZSgpIGF0IHZpbmFjdGl2ZSsweDkwCnZwdXR4KCkgYXQgdnB1dHgrMHgyZGMKemZzX2ZyZWVi c2RfcmVuYW1lKCkgYXQgemZzX2ZyZWVic2RfcmVuYW1lKzB4MTFiClZPUF9SRU5BTUVfQVBWKCkg YXQgVk9QX1JFTkFNRV9BUFYrMHhiZgpuZnNydl9yZW5hbWUoKSBhdCBuZnNydl9yZW5hbWUrMHhi NTIKbmZzc3ZjX3Byb2dyYW0oKSBhdCBuZnNzdmNfcHJvZ3JhbSsweDFhNgpzdmNfcnVuX2ludGVy bmFsKCkgYXQgc3ZjX3J1bl9pbnRlcm5hbCsweDVmYgpzdmNfcnVuKCkgYXQgc3ZjX3J1bisweDhm Cm5mc3N2Y19uZnNkKCkgYXQgbmZzc3ZjX25mc2QrMHhhMgpuZnNzdmNfbmZzc2VydmVyKCkgYXQg bmZzc3ZjX25mc3NlcnZlcisweDViCm5mc3N2YygpIGF0IG5mc3N2YysweDczCnN5c2NhbGxlbnRl cigpIGF0IHN5c2NhbGxlbnRlcisweDFjYgpzeXNjYWxsKCkgYXQgc3lzY2FsbCsweDRjClhmYXN0 X3N5c2NhbGwoKSBhdCBYZmFzdF9zeXNjYWxsKzB4ZTIKLS0tIHN5c2NhbGwgKDE1NSwgRnJlZUJT RCBFTEY2NCwgbmZzc3ZjKSwgcmlwID0gMHg4MDA2YTJiZWMsIHJzcCA9IDB4N2ZmZmZmZmZlNjk4 LCByYnAgPSAweDUgLS0tCnBhbmljOiBTb2xhcmlzKHBhbmljKTogemZzOiBhY2Nlc3NpbmcgcGFz dCBlbmQgb2Ygb2JqZWN0IGZmZmZmZjgwNDQyM2JkNTAvNTcgKHNpemU9MTE0MzE5MzI1NSBhY2Nl c3M9MCsxODQ0Njc0MzUyNTA5NjkzMDY3MikKCmNwdWlkID0gMQpLREI6IGVudGVyOiBwYW5pYwpw YW5pYzogYnVmd3JpdGU6IGJ1ZmZlciBpcyBub3QgYnVzeT8/PwpjcHVpZCA9IDEKS0RCOiBlbnRl cjogcGFuaWMKQ29weXJpZ2h0IChjKSAxOTkyLTIwMTAgVGhlIEZyZWVCU0QgUHJvamVjdC4KQ29w eXJpZ2h0IChjKSAxOTc5LCAxOTgwLCAxOTgzLCAxOTg2LCAxOTg4LCAxOTg5LCAxOTkxLCAxOTky LCAxOTkzLCAxOTk0CglUaGUgUmVnZW50cyBvZiB0aGUgVW5pdmVyc2l0eSBvZiBDYWxpZm9ybmlh LiBBbGwgcmlnaHRzIHJlc2VydmVkLgpGcmVlQlNEIGlzIGEgcmVnaXN0ZXJlZCB0cmFkZW1hcmsg b2YgVGhlIEZyZWVCU0QgRm91bmRhdGlvbi4KRnJlZUJTRCA5LjAtQ1VSUkVOVCAjMCByMjEyMDc0 TTogU3VuIFNlcCAxMiAxODo0ODozNiBVVEMgMjAxMAogICAgcm9vdEB4YW5hZHU6L3Vzci9vYmov aGVhZF9vbGQvc3lzL0RUUkFDRTIgYW1kNjQKV0FSTklORzogV0lUTkVTUyBvcHRpb24gZW5hYmxl ZCwgZXhwZWN0IHJlZHVjZWQgcGVyZm9ybWFuY2UuCkNQVTogQU1EIE9wdGVyb24odG0pIFByb2Nl c3NvciAyNTAgKDI0MTEuMTYtTUh6IEs4LWNsYXNzIENQVSkKICBPcmlnaW4gPSAiQXV0aGVudGlj QU1EIiAgSWQgPSAweDIwZjUxICBGYW1pbHkgPSBmICBNb2RlbCA9IDI1ICBTdGVwcGluZyA9IDEK ICBGZWF0dXJlcz0weDc4YmZiZmY8RlBVLFZNRSxERSxQU0UsVFNDLE1TUixQQUUsTUNFLENYOCxB UElDLFNFUCxNVFJSLFBHRSxNQ0EsQ01PVixQQVQsUFNFMzYsQ0xGTFVTSCxNTVgsRlhTUixTU0Us U1NFMj4KICBGZWF0dXJlczI9MHgxPFNTRTM+CiAgQU1EIEZlYXR1cmVzPTB4ZTI1MDA4MDA8U1lT Q0FMTCxOWCxNTVgrLEZGWFNSLExNLDNETm93ISssM0ROb3chPgogIEFNRCBGZWF0dXJlczI9MHgx 
[remainder of base64-encoded attachment: a verbose boot log from host "xanadu" (amd64 GENERIC kernel built with WITNESS/INVARIANTS debugging, two-CPU nVidia nForce CK804 system, bge(4) NICs, ZFS filesystem version 5 / storage pool version 28), followed by a series of WITNESS "lock order reversal" reports with KDB stack backtraces in the ZFS and NFS server code paths, ending in "panic: Solaris(panic): zfs: accessing past end of object ffffff80627acd50/57 (size=1652215463 access=0+18446743525605952880)" at uptime 5m51s; the kernel config and an empty ddb capture buffer are appended.]
--001485f7cbcab469c904901de88d--
From owner-freebsd-fs@FreeBSD.ORG Mon Sep 13 07:58:31 2010 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 0C3881065675 for ; Mon, 13 Sep 2010 07:58:31 +0000 (UTC) (envelope-from pjd@garage.freebsd.pl)
Received: from mail.garage.freebsd.pl (60.wheelsystems.com [83.12.187.60]) by mx1.freebsd.org (Postfix) with ESMTP id AEE168FC12 for ; Mon, 13 Sep 2010 07:58:30 +0000 (UTC)
Received: by mail.garage.freebsd.pl (Postfix, from userid 65534) id 81A3845CA0; Mon, 13 Sep 2010 09:58:28 +0200 (CEST)
Received: from localhost (pdawidek.whl [10.0.1.1]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mail.garage.freebsd.pl (Postfix) with ESMTP id 6A0994569A; Mon, 13 Sep 2010 09:58:23 +0200 (CEST)
Date: Mon, 13 Sep 2010 09:58:10 +0200 From: Pawel Jakub Dawidek To: "James R.
Van Artsdalen" Message-ID: <20100913075810.GA2098@garage.freebsd.pl> References: <20100831215915.GE1932@garage.freebsd.pl> <4C8DA535.7050007@jrv.org> Mime-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="ZPt4rx8FFjLCG7dd" Content-Disposition: inline In-Reply-To: <4C8DA535.7050007@jrv.org> User-Agent: Mutt/1.4.2.3i X-PGP-Key-URL: http://people.freebsd.org/~pjd/pjd.asc X-OS: FreeBSD 9.0-CURRENT amd64 X-Spam-Checker-Version: SpamAssassin 3.0.4 (2005-06-05) on mail.garage.freebsd.pl X-Spam-Level: X-Spam-Status: No, score=-5.9 required=4.5 tests=ALL_TRUSTED,BAYES_00 autolearn=ham version=3.0.4 Cc: freebsd-fs@FreeBSD.org Subject: Re: ZFS v28: ZFS recv abort X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 13 Sep 2010 07:58:31 -0000 --ZPt4rx8FFjLCG7dd Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On Sun, Sep 12, 2010 at 11:14:45PM -0500, James R. Van Artsdalen wrote: > amd64, SVN 212080 with pjd's original v28 patch >=20 > /sbin/zfs aborts receiving an incrementing stream. >=20 > bigback:/root# zfs send -R -I @then bigtex@now | ssh kraken /sbin/zfs > recv -dvF bigz > Assertion failed: (!clp->cl_alldependents), file > /usr/src/cddl/lib/libzfs/../../../cddl/contrib/opensolaris/lib/libzfs/com= mon/libzfs_changelist.c, > line 470. Could you provide output of the following commands: # zfs get -r all bigtex # zfs get -r all bigz --=20 Pawel Jakub Dawidek http://www.wheelsystems.com pjd@FreeBSD.org http://www.FreeBSD.org FreeBSD committer Am I Evil? Yes, I Am! --ZPt4rx8FFjLCG7dd Content-Type: application/pgp-signature Content-Disposition: inline -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.14 (FreeBSD) iEYEARECAAYFAkyN2ZIACgkQForvXbEpPzQZgACeP2GaomjZSLkJqwb4fDlc6jrR 2CsAniRSTblqMcBcfuVChtGr7yRrhnzp =sq53 -----END PGP SIGNATURE----- --ZPt4rx8FFjLCG7dd-- From owner-freebsd-fs@FreeBSD.ORG Mon Sep 13 11:06:54 2010 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id D2A9010656B5 for ; Mon, 13 Sep 2010 11:06:54 +0000 (UTC) (envelope-from owner-bugmaster@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id A528E8FC1E for ; Mon, 13 Sep 2010 11:06:54 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id o8DB6sqW001877 for ; Mon, 13 Sep 2010 11:06:54 GMT (envelope-from owner-bugmaster@FreeBSD.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id o8DB6s6R001875 for freebsd-fs@FreeBSD.org; Mon, 13 Sep 2010 11:06:54 GMT (envelope-from owner-bugmaster@FreeBSD.org) Date: Mon, 13 Sep 2010 11:06:54 GMT Message-Id: <201009131106.o8DB6s6R001875@freefall.freebsd.org> X-Authentication-Warning: freefall.freebsd.org: gnats set sender to owner-bugmaster@FreeBSD.org using -f From: FreeBSD bugmaster To: freebsd-fs@FreeBSD.org Cc: Subject: Current problem reports assigned to freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 13 Sep 2010 11:06:54 -0000 Note: to view an individual 
PR, use: http://www.freebsd.org/cgi/query-pr.cgi?pr=(number). The following is a listing of current problems submitted by FreeBSD users. These represent problem reports covering all versions including experimental development code and obsolete releases. S Tracker Resp. Description -------------------------------------------------------------------------------- o kern/150390 fs [zfs] zfs deadlock when arcmsr reports drive faulted o kern/150336 fs [nfs] mountd/nfsd became confused; refused to reload n o kern/150207 fs zpool import -d /dev tries to open weird devices o kern/149855 fs [gvinum] growfs causes fsck to report errors in Filesy o kern/149495 fs [zfs] chflags sappend on zfs not working right o kern/149173 fs [patch] [zfs] make OpenSolaris installa o kern/149022 fs [hang] File system operations hangs with suspfs state o kern/149015 fs [zfs] [patch] misc fixes for ZFS code to build on Glib o kern/149014 fs [zfs] [patch] declarations in ZFS libraries/utilities o kern/149013 fs [zfs] [patch] make ZFS makefiles use the libraries fro o kern/148504 fs [zfs] ZFS' zpool does not allow replacing drives to be o kern/148490 fs [zfs]: zpool attach - resilver bidirectionally, and re o kern/148368 fs [zfs] ZFS hanging forever on 8.1-PRERELEASE o bin/148296 fs [zfs] [loader] [patch] Very slow probe in /usr/src/sys o kern/148204 fs [nfs] UDP NFS causes overload o kern/148138 fs [zfs] zfs raidz pool commands freeze o kern/147903 fs [zfs] [panic] Kernel panics on faulty zfs device o kern/147881 fs [zfs] [patch] ZFS "sharenfs" doesn't allow different " o kern/147790 fs [zfs] zfs set acl(mode|inherit) fails on existing zfs o kern/147420 fs [ufs] [panic] ufs_dirbad, nullfs, jail panic (corrupt o kern/147292 fs [nfs] [patch] readahead missing in nfs client options o kern/146941 fs [zfs] [panic] Kernel Double Fault - Happens constantly o kern/146786 fs [zfs] zpool import hangs with checksum errors o kern/146708 fs [ufs] [panic] Kernel panic in softdep_disk_write_compl o kern/146528 fs [zfs] Severe memory leak in ZFS on i386 o kern/146502 fs [nfs] FreeBSD 8 NFS Client Connection to Server o kern/146375 fs [nfs] [patch] Typos in macro variables names in sys/fs s kern/145712 fs [zfs] cannot offline two drives in a raidz2 configurat o kern/145411 fs [xfs] [panic] Kernel panics shortly after mounting an o bin/145309 fs bsdlabel: Editing disk label invalidates the whole dev o kern/145272 fs [zfs] [panic] Panic during boot when accessing zfs on o kern/145246 fs [ufs] dirhash in 7.3 gratuitously frees hashes when it o kern/145238 fs [zfs] [panic] kernel panic on zpool clear tank o kern/145229 fs [zfs] Vast differences in ZFS ARC behavior between 8.0 o kern/145189 fs [nfs] nfsd performs abysmally under load o kern/144929 fs [ufs] [lor] vfs_bio.c + ufs_dirhash.c o kern/144458 fs [nfs] [patch] nfsd fails as a kld p kern/144447 fs [zfs] sharenfs fsunshare() & fsshare_main() non functi o kern/144416 fs [panic] Kernel panic on online filesystem optimization s kern/144415 fs [zfs] [panic] kernel panics on boot after zfs crash o kern/144234 fs [zfs] Cannot boot machine with recent gptzfsboot code o kern/143825 fs [nfs] [panic] Kernel panic on NFS client o kern/143345 fs [ext2fs] [patch] extfs minor header cleanups to better o kern/143212 fs [nfs] NFSv4 client strange work ... 
o kern/143184 fs [zfs] [lor] zfs/bufwait LOR o kern/142924 fs [ext2fs] [patch] Small cleanup for the inode struct in o kern/142914 fs [zfs] ZFS performance degradation over time o kern/142878 fs [zfs] [vfs] lock order reversal o kern/142597 fs [ext2fs] ext2fs does not work on filesystems with real o kern/142489 fs [zfs] [lor] allproc/zfs LOR o kern/142466 fs Update 7.2 -> 8.0 on Raid 1 ends with screwed raid [re o kern/142401 fs [ntfs] [patch] Minor updates to NTFS from NetBSD o kern/142306 fs [zfs] [panic] ZFS drive (from OSX Leopard) causes two o kern/142068 fs [ufs] BSD labels are got deleted spontaneously o kern/141897 fs [msdosfs] [panic] Kernel panic. msdofs: file name leng o kern/141463 fs [nfs] [panic] Frequent kernel panics after upgrade fro o kern/141305 fs [zfs] FreeBSD ZFS+sendfile severe performance issues ( o kern/141091 fs [patch] [nullfs] fix panics with DIAGNOSTIC enabled o kern/141086 fs [nfs] [panic] panic("nfs: bioread, not dir") on FreeBS o kern/141010 fs [zfs] "zfs scrub" fails when backed by files in UFS2 o kern/140888 fs [zfs] boot fail from zfs root while the pool resilveri o kern/140661 fs [zfs] [patch] /boot/loader fails to work on a GPT/ZFS- o kern/140640 fs [zfs] snapshot crash o kern/140134 fs [msdosfs] write and fsck destroy filesystem integrity o kern/140068 fs [smbfs] [patch] smbfs does not allow semicolon in file o kern/139725 fs [zfs] zdb(1) dumps core on i386 when examining zpool c o kern/139715 fs [zfs] vfs.numvnodes leak on busy zfs o bin/139651 fs [nfs] mount(8): read-only remount of NFS volume does n o kern/139597 fs [patch] [tmpfs] tmpfs initializes va_gen but doesn't u o kern/139564 fs [zfs] [panic] 8.0-RC1 - Fatal trap 12 at end of shutdo o kern/139407 fs [smbfs] [panic] smb mount causes system crash if remot o kern/139363 fs [nfs] diskless root nfs mount from non FreeBSD server o kern/138790 fs [zfs] ZFS ceases caching when mem demand is high o kern/138421 fs [ufs] [patch] remove UFS label limitations o kern/138202 fs mount_msdosfs(1) see only 2Gb f kern/137037 fs [zfs] [hang] zfs rollback on root causes FreeBSD to fr o kern/136968 fs [ufs] [lor] ufs/bufwait/ufs (open) o kern/136945 fs [ufs] [lor] filedesc structure/ufs (poll) o kern/136944 fs [ffs] [lor] bufwait/snaplk (fsync) o kern/136873 fs [ntfs] Missing directories/files on NTFS volume o kern/136865 fs [nfs] [patch] NFS exports atomic and on-the-fly atomic o kern/136470 fs [nfs] Cannot mount / in read-only, over NFS o kern/135546 fs [zfs] zfs.ko module doesn't ignore zpool.cache filenam o kern/135469 fs [ufs] [panic] kernel crash on md operation in ufs_dirb o kern/135050 fs [zfs] ZFS clears/hides disk errors on reboot o kern/134491 fs [zfs] Hot spares are rather cold... 
o kern/133676 fs [smbfs] [panic] umount -f'ing a vnode-based memory dis o kern/133614 fs [panic] panic: ffs_truncate: read-only filesystem o kern/133174 fs [msdosfs] [patch] msdosfs must support utf-encoded int f kern/133150 fs [zfs] Page fault with ZFS on 7.1-RELEASE/amd64 while w o kern/132960 fs [ufs] [panic] panic:ffs_blkfree: freeing free frag o kern/132397 fs reboot causes filesystem corruption (failure to sync b o kern/132331 fs [ufs] [lor] LOR ufs and syncer o kern/132237 fs [msdosfs] msdosfs has problems to read MSDOS Floppy o kern/132145 fs [panic] File System Hard Crashes o kern/131441 fs [unionfs] [nullfs] unionfs and/or nullfs not combineab o kern/131360 fs [nfs] poor scaling behavior of the NFS server under lo o kern/131342 fs [nfs] mounting/unmounting of disks causes NFS to fail o bin/131341 fs makefs: error "Bad file descriptor" on the mount poin o kern/130920 fs [msdosfs] cp(1) takes 100% CPU time while copying file o kern/130210 fs [nullfs] Error by check nullfs o kern/129760 fs [nfs] after 'umount -f' of a stale NFS share FreeBSD l o kern/129488 fs [smbfs] Kernel "bug" when using smbfs in smbfs_smb.c: o kern/129231 fs [ufs] [patch] New UFS mount (norandom) option - mostly o kern/129152 fs [panic] non-userfriendly panic when trying to mount(8) o kern/129059 fs [zfs] [patch] ZFS bootloader whitelistable via WITHOUT f kern/128829 fs smbd(8) causes periodic panic on 7-RELEASE o kern/127787 fs [lor] [ufs] Three LORs: vfslock/devfs/vfslock, ufs/vfs o kern/127420 fs [gjournal] [panic] Journal overflow on gmirrored gjour o kern/127029 fs [panic] mount(8): trying to mount a write protected zi o kern/126287 fs [ufs] [panic] Kernel panics while mounting an UFS file o kern/125895 fs [ffs] [panic] kernel: panic: ffs_blkfree: freeing free s kern/125738 fs [zfs] [request] SHA256 acceleration in ZFS p kern/124621 fs [ext3] [patch] Cannot mount ext2fs partition f bin/124424 fs [zfs] zfs(8): zfs list -r shows strange snapshots' siz o kern/123939 fs [msdosfs] corrupts new files o kern/122380 fs [ffs] ffs_valloc:dup alloc (Soekris 4801/7.0/USB Flash o bin/122172 fs [fs]: amd(8) automount daemon dies on 6.3-STABLE i386, o bin/121898 fs [nullfs] pwd(1)/getcwd(2) fails with Permission denied o bin/121779 fs [ufs] snapinfo(8) (and related tools?) 
only work for t o bin/121366 fs [zfs] [patch] Automatic disk scrubbing from periodic(8 o bin/121072 fs [smbfs] mount_smbfs(8) cannot normally convert the cha f kern/120991 fs [panic] [fs] [snapshot] System crashes when manipulati o kern/120483 fs [ntfs] [patch] NTFS filesystem locking changes o kern/120482 fs [ntfs] [patch] Sync style changes between NetBSD and F f kern/119735 fs [zfs] geli + ZFS + samba starting on boot panics 7.0-B o kern/118912 fs [2tb] disk sizing/geometry problem with large array o kern/118713 fs [minidump] [patch] Display media size required for a k o bin/118249 fs mv(1): moving a directory changes its mtime o kern/118107 fs [ntfs] [panic] Kernel panic when accessing a file at N o kern/117954 fs [ufs] dirhash on very large directories blocks the mac o bin/117315 fs [smbfs] mount_smbfs(8) and related options can't mount o kern/117314 fs [ntfs] Long-filename only NTFS fs'es cause kernel pani o kern/117158 fs [zfs] zpool scrub causes panic if geli vdevs detach on o bin/116980 fs [msdosfs] [patch] mount_msdosfs(8) resets some flags f o conf/116931 fs lack of fsck_cd9660 prevents mounting iso images with p kern/116608 fs [msdosfs] [patch] msdosfs fails to check mount options o kern/116583 fs [ffs] [hang] System freezes for short time when using o kern/116170 fs [panic] Kernel panic when mounting /tmp o kern/115645 fs [snapshots] [panic] lockmgr: thread 0xc4c00d80, not ex o bin/115361 fs [zfs] mount(8) gets into a state where it won't set/un o kern/114955 fs [cd9660] [patch] [request] support for mask,dirmask,ui o kern/114847 fs [ntfs] [patch] [request] dirmask support for NTFS ala o kern/114676 fs [ufs] snapshot creation panics: snapacct_ufs2: bad blo o bin/114468 fs [patch] [request] add -d option to umount(8) to detach o kern/113852 fs [smbfs] smbfs does not properly implement DFS referral o bin/113838 fs [patch] [request] mount(8): add support for relative p o bin/113049 fs [patch] [request] make quot(8) use getopt(3) and show o kern/112658 fs [smbfs] [patch] smbfs and caching problems (resolves b o kern/111843 fs [msdosfs] Long Names of files are incorrectly created o kern/111782 fs [ufs] dump(8) fails horribly for large filesystems s bin/111146 fs [2tb] fsck(8) fails on 6T filesystem o kern/109024 fs [msdosfs] [iconv] mount_msdosfs: msdosfs_iconv: Operat o kern/109010 fs [msdosfs] can't mv directory within fat32 file system o bin/107829 fs [2TB] fdisk(8): invalid boundary checking in fdisk / w o bin/107692 fs newfs(8): newfs -O 1 doesn't create consistent filesys o kern/106107 fs [ufs] left-over fsck_snapshot after unfinished backgro o kern/106030 fs [ufs] [panic] panic in ufs from geom when a dead disk o kern/104406 fs [ufs] Processes get stuck in "ufs" state under persist o kern/104133 fs [ext2fs] EXT2FS module corrupts EXT2/3 filesystems o kern/103035 fs [ntfs] Directories in NTFS mounted disc images appear o kern/101324 fs [smbfs] smbfs sometimes not case sensitive when it's s o kern/99290 fs [ntfs] mount_ntfs ignorant of cluster sizes o kern/97377 fs [ntfs] [patch] syntax cleanup for ntfs_ihash.c o kern/95222 fs [iso9660] File sections on ISO9660 level 3 CDs ignored o kern/94849 fs [ufs] rename on UFS filesystem is not atomic o kern/94769 fs [ufs] Multiple file deletions on multi-snapshotted fil o kern/94733 fs [smbfs] smbfs may cause double unlock o kern/93942 fs [vfs] [patch] panic: ufs_dirbad: bad dir (patch from D o kern/92272 fs [ffs] [hang] Filling a filesystem while creating a sna f kern/91568 fs [ufs] [panic] writing to UFS/softupdates DVD media in o 
kern/91134 fs [smbfs] [patch] Preserve access and modification time a kern/90815 fs [smbfs] [patch] SMBFS with character conversions somet o kern/88657 fs [smbfs] windows client hang when browsing a samba shar o kern/88266 fs [smbfs] smbfs does not implement UIO_NOCOPY and sendfi o kern/87859 fs [smbfs] System reboot while umount smbfs. o kern/86587 fs [msdosfs] rm -r /PATH fails with lots of small files o kern/85326 fs [smbfs] [panic] saving a file via samba to an overquot o kern/84589 fs [2TB] 5.4-STABLE unresponsive during background fsck 2 o kern/80088 fs [smbfs] Incorrect file time setting on NTFS mounted vi o kern/73484 fs [ntfs] Kernel panic when doing `ls` from the client si o bin/73019 fs [ufs] fsck_ufs(8) cannot alloc 607016868 bytes for ino o kern/71774 fs [ntfs] NTFS cannot "see" files on a WinXP filesystem o kern/68978 fs [panic] [ufs] crashes with failing hard disk, loose po o kern/65920 fs [nwfs] Mounted Netware filesystem behaves strange o kern/65901 fs [smbfs] [patch] smbfs fails fsx write/truncate-down/tr a docs/61716 fs [patch] newfs(8) code and manpage are out of sync o kern/61503 fs [smbfs] mount_smbfs does not work as non-root o kern/55617 fs [smbfs] Accessing an nsmb-mounted drive via a smb expo o kern/51685 fs [hang] Unbounded inode allocation causes kernel to loc o kern/51583 fs [nullfs] [patch] allow to work with devices and socket o kern/36566 fs [smbfs] System reboot with dead smb mount and umount o kern/33464 fs [ufs] soft update inconsistencies after system crash o kern/18874 fs [2TB] 32bit NFS servers export wrong negative values t 194 problems total. From owner-freebsd-fs@FreeBSD.ORG Mon Sep 13 13:00:19 2010 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id A84E21065679; Mon, 13 Sep 2010 13:00:19 +0000 (UTC) (envelope-from gjb@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 7B0438FC21; Mon, 13 Sep 2010 13:00:19 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id o8DD0JHP019155; Mon, 13 Sep 2010 13:00:19 GMT (envelope-from gjb@freefall.freebsd.org) Received: (from gjb@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id o8DD0JJY019116; Mon, 13 Sep 2010 13:00:19 GMT (envelope-from gjb) Date: Mon, 13 Sep 2010 13:00:19 GMT Message-Id: <201009131300.o8DD0JJY019116@freefall.freebsd.org> To: gjb@FreeBSD.org, freebsd-fs@FreeBSD.org, gjb@FreeBSD.org From: gjb@FreeBSD.org Cc: Subject: Re: docs/61716: [patch] newfs(8) code and manpage are out of sync X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 13 Sep 2010 13:00:19 -0000 Synopsis: [patch] newfs(8) code and manpage are out of sync Responsible-Changed-From-To: freebsd-fs->gjb Responsible-Changed-By: gjb Responsible-Changed-When: Mon Sep 13 12:59:45 UTC 2010 Responsible-Changed-Why: I'll take this. 
http://www.freebsd.org/cgi/query-pr.cgi?pr=61716
From owner-freebsd-fs@FreeBSD.ORG Mon Sep 13 14:20:05 2010 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 975BC106564A for ; Mon, 13 Sep 2010 14:20:05 +0000 (UTC) (envelope-from gnats@FreeBSD.org)
Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 6B8A68FC0A for ; Mon, 13 Sep 2010 14:20:05 +0000 (UTC)
Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id o8DEK5ej003579 for ; Mon, 13 Sep 2010 14:20:05 GMT (envelope-from gnats@freefall.freebsd.org)
Received: (from gnats@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id o8DEK5Tw003578; Mon, 13 Sep 2010 14:20:05 GMT (envelope-from gnats)
Date: Mon, 13 Sep 2010 14:20:05 GMT Message-Id: <201009131420.o8DEK5Tw003578@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org From: Nick Hibma Cc: Subject: Re: bin/135710: mount(8): mount -t tmpfs does not follow 'size' option
X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: Nick Hibma List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 13 Sep 2010 14:20:05 -0000
The following reply was made to PR bin/135710; it has been noted by GNATS.
From: Nick Hibma To: bug-followup@FreeBSD.org, mikej@paymentallianceintl.com Cc: Subject: Re: bin/135710: mount(8): mount -t tmpfs does not follow 'size' option Date: Mon, 13 Sep 2010 16:01:06 +0200
So, what exactly is one to do to set the size to a specific amount? Or is it automatic? I cannot find ANY information on this (the size of tmpfs).
Nick Hibma AnyWi Technologies
From owner-freebsd-fs@FreeBSD.ORG Mon Sep 13 15:03:44 2010 Return-Path: Delivered-To: freebsd-fs@FreeBSD.ORG
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id B2D26106564A; Mon, 13 Sep 2010 15:03:44 +0000 (UTC) (envelope-from olli@lurza.secnetix.de)
Received: from lurza.secnetix.de (lurza.secnetix.de [IPv6:2a01:170:102f::2]) by mx1.freebsd.org (Postfix) with ESMTP id 2C1A08FC14; Mon, 13 Sep 2010 15:03:43 +0000 (UTC)
Received: from lurza.secnetix.de (localhost [127.0.0.1]) by lurza.secnetix.de (8.14.3/8.14.3) with ESMTP id o8DF3RhL039704; Mon, 13 Sep 2010 17:03:42 +0200 (CEST) (envelope-from oliver.fromme@secnetix.de)
Received: (from olli@localhost) by lurza.secnetix.de (8.14.3/8.14.3/Submit) id o8DF3Qau039703; Mon, 13 Sep 2010 17:03:26 +0200 (CEST) (envelope-from olli)
Date: Mon, 13 Sep 2010 17:03:26 +0200 (CEST) Message-Id: <201009131503.o8DF3Qau039703@lurza.secnetix.de> From: Oliver Fromme To: freebsd-stable@FreeBSD.ORG, freebsd-fs@FreeBSD.ORG, rmacklem@uoguelph.ca In-Reply-To: <404412916.782668.1284306279949.JavaMail.root@erie.cs.uoguelph.ca> X-Newsgroups: list.freebsd-stable User-Agent: tin/1.8.3-20070201 ("Scotasay") (UNIX) (FreeBSD/6.4-PRERELEASE-20080904 (i386)) MIME-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 8bit X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.3.5 (lurza.secnetix.de [127.0.0.1]); Mon, 13 Sep 2010 17:03:43 +0200 (CEST) Cc: Subject: Re: Why is NFSv4 so slow?
X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: freebsd-stable@FreeBSD.ORG, freebsd-fs@FreeBSD.ORG, rmacklem@uoguelph.ca List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 13 Sep 2010 15:03:44 -0000 Rick Macklem wrote: > Btw, if anyone who didn't see the posting on freebsd-fs and would > like to run a quick test, it would be appreciated. > Bascially do both kinds of mount using a FreeBSD8.1 or later client > and then read a greater than 100Mbyte file with dd. > > # mount -t nfs -o nfsv3 :/path / > - cd anywhere in mount that has > 100Mbyte file > # dd if= of=/dev/null bs=1m > # umount / > > Then repeat with > # mount -t newnfs -o nfsv3 :/path / > > and post the results along with the client machine's info > (machine arch/# of cores/memory/net interface used for NFS traffic). > > Thanks in advance to anyone who runs the test, rick Ok ... NFS server: - FreeBSD 8.1-PRERELEASE-20100620 i386 - intel Atom 330 (1.6 GHz dual-core with HT --> 4-way SMP) - 4 GB RAM - re0: NFS client: - FreeBSD 8.1-STABLE-20100908 i386 - AMD Phenom II X6 1055T (2.8 GHz + "Turbo Core", six-core) - 4 GB RAM - re0: The machines are connected through a Netgear GS108T gigabit ethernet switch. I umounted and re-mounted the NFS path after every single dd(1) command, so the data actually comes from the server instead of from the local cache. I also made sure that the file was in the cache on the server, so the server's disk speed is irrelevant. Testing with "mount -t nfs": 183649990 bytes transferred in 2.596677 secs (70725002 bytes/sec) 183649990 bytes transferred in 2.578746 secs (71216779 bytes/sec) 183649990 bytes transferred in 2.561857 secs (71686277 bytes/sec) 183649990 bytes transferred in 2.629028 secs (69854708 bytes/sec) 183649990 bytes transferred in 2.535422 secs (72433702 bytes/sec) Testing with "mount -t newnfs": 183649990 bytes transferred in 5.361544 secs (34253192 bytes/sec) 183649990 bytes transferred in 5.401471 secs (33999996 bytes/sec) 183649990 bytes transferred in 5.052138 secs (36350946 bytes/sec) 183649990 bytes transferred in 5.311821 secs (34573829 bytes/sec) 183649990 bytes transferred in 5.537337 secs (33165760 bytes/sec) So, nfs is roughly twice as fast as newnfs, indeed. Best regards Oliver -- Oliver Fromme, secnetix GmbH & Co. KG, Marktplatz 29, 85567 Grafing b. M. Handelsregister: Registergericht Muenchen, HRA 74606, Geschäftsfuehrung: secnetix Verwaltungsgesellsch. mbH, Handelsregister: Registergericht Mün- chen, HRB 125758, Geschäftsführer: Maik Bachmann, Olaf Erb, Ralf Gebhart FreeBSD-Dienstleistungen, -Produkte und mehr: http://www.secnetix.de/bsd "A language that doesn't have everything is actually easier to program in than some that do." -- Dennis M. 
Ritchie From owner-freebsd-fs@FreeBSD.ORG Mon Sep 13 15:15:36 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id A0126106567A; Mon, 13 Sep 2010 15:15:36 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) Received: from esa-annu.mail.uoguelph.ca (esa-annu.mail.uoguelph.ca [131.104.91.36]) by mx1.freebsd.org (Postfix) with ESMTP id 46BD88FC22; Mon, 13 Sep 2010 15:15:35 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: ApwEAJbcjUyDaFvO/2dsb2JhbACDGp8dsQ+RKYEigyp0BIon X-IronPort-AV: E=Sophos;i="4.56,359,1280721600"; d="scan'208";a="91665124" Received: from erie.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.206]) by esa-annu-pri.mail.uoguelph.ca with ESMTP; 13 Sep 2010 11:15:34 -0400 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id 5C860B3F32; Mon, 13 Sep 2010 11:15:34 -0400 (EDT) Date: Mon, 13 Sep 2010 11:15:34 -0400 (EDT) From: Rick Macklem To: freebsd-stable@FreeBSD.ORG, freebsd-fs@FreeBSD.ORG, rmacklem@uoguelph.ca Message-ID: <1846953836.819261.1284390934350.JavaMail.root@erie.cs.uoguelph.ca> In-Reply-To: <201009131503.o8DF3Qau039703@lurza.secnetix.de> MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [24.65.230.102] X-Mailer: Zimbra 6.0.7_GA_2476.RHEL4 (ZimbraWebClient - SAF3 (Mac)/6.0.7_GA_2473.RHEL4_64) Cc: Subject: Re: Why is NFSv4 so slow? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 13 Sep 2010 15:15:36 -0000 > > Ok ... > > NFS server: > - FreeBSD 8.1-PRERELEASE-20100620 i386 > - intel Atom 330 (1.6 GHz dual-core with HT --> 4-way SMP) > - 4 GB RAM > - re0: > > NFS client: > - FreeBSD 8.1-STABLE-20100908 i386 > - AMD Phenom II X6 1055T (2.8 GHz + "Turbo Core", six-core) > - 4 GB RAM > - re0: > > The machines are connected through a Netgear GS108T > gigabit ethernet switch. > > I umounted and re-mounted the NFS path after every single > dd(1) command, so the data actually comes from the server > instead of from the local cache. I also made sure that > the file was in the cache on the server, so the server's > disk speed is irrelevant. > > Testing with "mount -t nfs": > > 183649990 bytes transferred in 2.596677 secs (70725002 bytes/sec) > 183649990 bytes transferred in 2.578746 secs (71216779 bytes/sec) > 183649990 bytes transferred in 2.561857 secs (71686277 bytes/sec) > 183649990 bytes transferred in 2.629028 secs (69854708 bytes/sec) > 183649990 bytes transferred in 2.535422 secs (72433702 bytes/sec) > > Testing with "mount -t newnfs": > > 183649990 bytes transferred in 5.361544 secs (34253192 bytes/sec) > 183649990 bytes transferred in 5.401471 secs (33999996 bytes/sec) > 183649990 bytes transferred in 5.052138 secs (36350946 bytes/sec) > 183649990 bytes transferred in 5.311821 secs (34573829 bytes/sec) > 183649990 bytes transferred in 5.537337 secs (33165760 bytes/sec) > > So, nfs is roughly twice as fast as newnfs, indeed. > > Best regards > Oliver > Thanks for doing the test. I think I can find out what causes the factor of 2 someday. What is really weird is that some people see several orders of magnitude slower (a few Mbytes/sec). 
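(For anyone else who wants to repeat the comparison Oliver ran, a rough sketch of the procedure is below. The server name, export path and file name are placeholders, not values from this thread; the umount between the two runs is there so the second read is not served out of the client's local cache.)

  #!/bin/sh
  # Rough sketch of the nfs vs. newnfs read test -- SERVER, EXPORT and
  # BIGFILE are placeholders, adjust them for your own setup.
  SERVER=your-nfs-server
  EXPORT=/data
  BIGFILE=some/large/file        # >100 MB, path relative to the export
  MNT=/mnt/nfstest

  mkdir -p $MNT
  for FSTYPE in nfs newnfs; do
      mount -t $FSTYPE -o nfsv3 $SERVER:$EXPORT $MNT
      echo "=== $FSTYPE ==="
      dd if=$MNT/$BIGFILE of=/dev/null bs=1m
      # umount so the next run cannot be satisfied from the local cache
      umount $MNT
  done

Comparing the two dd transfer rates printed by the loop is exactly the nfs-vs-newnfs comparison reported in this thread.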
Your case was also useful, because you are using the same net interface/driver as the original report of a few Mbytes/sec, so it doesn't appear to be an re problem. Have a good week, rick From owner-freebsd-fs@FreeBSD.ORG Mon Sep 13 15:20:04 2010 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id D8EDF1065674 for ; Mon, 13 Sep 2010 15:20:04 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id ACF538FC19 for ; Mon, 13 Sep 2010 15:20:04 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id o8DFK4s7064881 for ; Mon, 13 Sep 2010 15:20:04 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id o8DFK46F064880; Mon, 13 Sep 2010 15:20:04 GMT (envelope-from gnats) Date: Mon, 13 Sep 2010 15:20:04 GMT Message-Id: <201009131520.o8DFK46F064880@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org From: Oliver Fromme Cc: Subject: Re: bin/135710: mount(8): mount -t tmpfs does not follow 'size' ?option X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: Oliver Fromme List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 13 Sep 2010 15:20:04 -0000 The following reply was made to PR bin/135710; it has been noted by GNATS. From: Oliver Fromme To: bug-followup@FreeBSD.ORG, Nick Hibma Cc: Subject: Re: bin/135710: mount(8): mount -t tmpfs does not follow 'size' ?option Date: Mon, 13 Sep 2010 17:16:01 +0200 (CEST) Nick Hibma wrote: > So, what exactly is one to do to set the size to a specific amount? > Or is it automatic? I cannot find ANY information on this (the size > of tmpfs). Did you read Nathanael's reply? I think it explains it all: http://lists.freebsd.org/pipermail/freebsd-fs/2009-June/006368.html Best regards Oliver -- Oliver Fromme, secnetix GmbH & Co. KG, Marktplatz 29, 85567 Grafing b. M. Handelsregister: Registergericht Muenchen, HRA 74606, Geschäftsfuehrung: secnetix Verwaltungsgesellsch. mbH, Handelsregister: Registergericht Mün- chen, HRB 125758, Geschäftsführer: Maik Bachmann, Olaf Erb, Ralf Gebhart FreeBSD-Dienstleistungen, -Produkte und mehr: http://www.secnetix.de/bsd "... there are two ways of constructing a software design: One way is to make it so simple that there are _obviously_ no deficiencies and the other way is to make it so complicated that there are no _obvious_ deficiencies." -- C.A.R. 
Hoare, ACM Turing Award Lecture, 1980 From owner-freebsd-fs@FreeBSD.ORG Mon Sep 13 15:24:08 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id A0EE0106564A; Mon, 13 Sep 2010 15:24:08 +0000 (UTC) (envelope-from avg@freebsd.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id 7CA7C8FC0C; Mon, 13 Sep 2010 15:24:07 +0000 (UTC) Received: from odyssey.starpoint.kiev.ua (alpha-e.starpoint.kiev.ua [212.40.38.101]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id SAA21864; Mon, 13 Sep 2010 18:24:02 +0300 (EEST) (envelope-from avg@freebsd.org) Message-ID: <4C8E4212.30000@freebsd.org> Date: Mon, 13 Sep 2010 18:24:02 +0300 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; U; FreeBSD amd64; en-US; rv:1.9.2.9) Gecko/20100909 Lightning/1.0b2 Thunderbird/3.1.3 MIME-Version: 1.0 To: Steven Hartland References: <5DB6E7C798E44D33A05673F4B773405E@multiplay.co.uk><4C85E91E.1010602@icyb.net.ua> <4C873914.40404@freebsd.org><20100908084855.GF2465@deviant.kiev.zoral.com.ua> <4C874F00.3050605@freebsd.org> <4C8D087B.5040404@freebsd.org> <03537796FAB54E02959E2D64FC83004F@multiplay.co.uk> <4C8D280F.3040803@freebsd.org> <3FBF66BF11AA4CBBA6124CA435A4A31B@multiplay.co.uk> In-Reply-To: <3FBF66BF11AA4CBBA6124CA435A4A31B@multiplay.co.uk> X-Enigmail-Version: 1.1.2 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org, Pawel Jakub Dawidek , jhell Subject: Re: zfs very poor performance compared to ufs due to lack of cache? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 13 Sep 2010 15:24:08 -0000 on 13/09/2010 00:01 Steven Hartland said the following: > ----- Original Message ----- From: "Andriy Gapon" >> >> All :-) >> Revision of your code, all the extra patches, workload, graphs of ARC and memory >> dynamics and that's just for the start. >> Then, analysis similar to that of Wiktor. E.g. trying to test with a single >> file and then removing it, or better yet, examining with DTrace actual code >> paths taken from sendfile(2). > > All those have been given in past posts on this thread, but that's quite fragmented, > sorry about that, so here's the current summary for reference:- > > The machine is a stream server with its job being to serve mp4 http streams via > nginx. It also exports the fs via nfs to an encoding box which does all the grunt > work of creating the streams, but that doesn't seem relevant here as this was > not in use during these tests. > > We currently have two such machines one which has been updated to zfs and one > which is still on ufs. After upgrading to 8.1-RELEASE and zfs all seemed ok until we > had a bit of a traffic hike at which point we noticed the machine in question really > struggling even though it was serving less than 100 clients at under 3mbps for > a few popular streams which should have all easily fitted in cache. > > Upon investigation it seems that zfs wasn't caching anything so all streams where > being read direct from disk overloading the areca controller backed with a 7 disk > RAID6 volume. 
> > After my original post we've done a number of upgrades and we are now currently > running 8-STABLE as of the 06/09 plus the following > http://people.freebsd.org/~mm/patches/zfs/v15/stable-8-v15.patch > http://people.freebsd.org/~mm/patches/zfs/zfs_metaslab_v2.patch > http://people.freebsd.org/~mm/patches/zfs/zfs_abe_stat_rrwlock.patch > needfree.patch and vm_paging_needed.patch posted by jhell > >> --- a/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c >> +++ b/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c >> @@ -500,6 +500,7 @@ again: >> sched_unpin(); >> } >> VM_OBJECT_LOCK(obj); >> + if (error == 0) >> + vm_page_set_validclean(m, off, bytes); >> vm_page_wakeup(m); >> if (error == 0) >> uio->uio_resid -= bytes; I'd really prefer to see description of your sources as svn revision rXXXXX plus http link to a diff of your actual sources to that revision. That would greatly help to see what you actually have, and what you don't have. > When nginx is active and using sendfile we see a large amount of memory, equivalent > to the size of the files being accessed it seems, slip into inactive according to > top and > the size of arc drop to the at most the minimum configured and some times even less. > > The machine now has 7GB or ram and these are the load.conf settings currently in > use:- > # As we have battery backed cache we can do this > vfs.zfs.cache_flush_disable=1 > vfs.zfs.prefetch_disable=0 > # Physical Memory * 1.5 > vm.kmem_size="11G" > vfs.zfs.arc_min="5G" > vfs.zfs.arc_max="6656M" > vfs.zfs.vdev.cache.size="20M" > > Currently arc_summary reports the following after been idle for several hours:- > ARC Size: > Current Size: 76.92% 5119.85M (arcsize) > Target Size: (Adaptive) 76.92% 5120.00M (c) > Min Size (Hard Limit): 76.92% 5120.00M (c_min) > Max Size (High Water): ~1:1 6656.00M (c_max) > > Column details as requested previously:- > cnt, time, kstat.zfs.misc.arcstats.size, vm.stats.vm.v_pdwakeups, > vm.stats.vm.v_cache_count, vm.stats.vm.v_inactive_count, > vm.stats.vm.v_active_count, vm.stats.vm.v_wire_count, > vm.stats.vm.v_free_count > 1,1284323760,5368902272,72,49002,156676,27241,1505466,32523 > 2,1284323797,5368675288,73,51593,156193,27612,1504846,30682 > 3,1284323820,5368675288,73,51478,156248,27649,1504874,30671 > 4,1284323851,5368670688,74,22994,184834,27609,1504794,30698 > 5,1284323868,5368670688,74,22990,184838,27605,1504792,30698 > 6,1284324024,5368679992,74,22246,184624,27663,1505177,31171 > 7,1284324057,5368679992,74,22245,184985,27663,1504844,31170 > > Point notes: > 1. Initial values > 2. single file request size: 692M > 3. repeat request #2 > 4. request for second file 205M > 5. repeat request #4 > 6. multi request #2 > 7. complete Graphs look prettier :-) I used drraw to visualize rrdtool data. Well, I don't see anything unusual in these numbers. E.g. contrary to what you implied by saying that the patch hasn't changed anything, I do not see page counts changing much after each iteration of sending the same file. Also, during the test you seem to have sufficiently high amount of free and cached pages to not trigger ARC shrinkage or inactive/active recycling. 
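(As an aside, collecting the columns listed above needs nothing more than a small loop around sysctl(8). A rough sketch, assuming a one-minute sample interval and an arbitrary output path:)

  #!/bin/sh
  # Rough sketch: sample the counters discussed above once a minute
  # into a CSV file.
  OUT=/var/tmp/arc-vm-stats.csv
  OIDS="kstat.zfs.misc.arcstats.size vm.stats.vm.v_pdwakeups \
        vm.stats.vm.v_cache_count vm.stats.vm.v_inactive_count \
        vm.stats.vm.v_active_count vm.stats.vm.v_wire_count \
        vm.stats.vm.v_free_count"
  while true; do
      LINE=$(date +%s)
      for OID in $OIDS; do
          LINE="$LINE,$(sysctl -n $OID)"
      done
      echo "$LINE" >> $OUT
      sleep 60
  done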
> top details after tests:- > Mem: 106M Active, 723M Inact, 5878M Wired, 87M Cache, 726M Buf, 124M Free > Swap: 4096M Total, 836K Used, 4095M Free > > arc_summary snip after test > ARC Size: > Current Size: 76.92% 5119.97M (arcsize) > Target Size: (Adaptive) 76.92% 5120.09M (c) > Min Size (Hard Limit): 76.92% 5120.00M (c_min) > Max Size (High Water): ~1:1 6656.00M (c_max) > > If I turn the box on so it gets a real range of requests, after about an hour I > see something > like:- > Mem: 104M Active, 2778M Inact, 3065M Wired, 20M Cache, 726M Buf, 951M Free > Swap: 4096M Total, 4096M Free > > ARC Size: > Current Size: 34.37% 2287.36M (arcsize) > Target Size: (Adaptive) 100.00% 6656.00M (c) > Min Size (Hard Limit): 76.92% 5120.00M (c_min) > Max Size (High Water): ~1:1 6656.00M (c_max) > > As you can see the size of ARC has even dropped below c_min. The results of the > live test > where gathered directly after a reboot, in case that's relevant. Well, I would love to see the mentioned above graphs for this real test load. Going below c_min likely means that you don't have all the latest stable/8 ZFS code, but i am not sure. > If someone could suggest a set of tests that would help I'll be happy to run them but > from what's been said thus far is seems that the use of sendfile is forcing memory > use > other than that coming from arc which is what's expected? > > Would running the same test with sendfile disabled in nginx help? The more test data the better, we could have some base for comparison and separation of general ARC issues from sendfile-specific issues. Thanks! -- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Mon Sep 13 16:06:02 2010 Return-Path: Delivered-To: freebsd-fs@FreeBSD.ORG Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 751ED1065741 for ; Mon, 13 Sep 2010 16:06:02 +0000 (UTC) (envelope-from rick@rix.kiwi-computer.com) Received: from rix.kiwi-computer.com (66-191-70-202.static.stcd.mn.charter.com [66.191.70.202]) by mx1.freebsd.org (Postfix) with SMTP id 0C6CC8FC08 for ; Mon, 13 Sep 2010 16:06:00 +0000 (UTC) Received: (qmail 97207 invoked by uid 2000); 13 Sep 2010 15:39:18 -0000 Date: Mon, 13 Sep 2010 10:39:18 -0500 From: "Rick C. Petty" To: Rick Macklem Message-ID: <20100913153918.GA96692@rix.kiwi-computer.com> References: <201009131503.o8DF3Qau039703@lurza.secnetix.de> <1846953836.819261.1284390934350.JavaMail.root@erie.cs.uoguelph.ca> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <1846953836.819261.1284390934350.JavaMail.root@erie.cs.uoguelph.ca> User-Agent: Mutt/1.4.2.3i Cc: freebsd-fs@FreeBSD.ORG, freebsd-stable@FreeBSD.ORG Subject: Re: Why is NFSv4 so slow? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: rick-freebsd2009@kiwi-computer.com List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 13 Sep 2010 16:06:02 -0000 On Mon, Sep 13, 2010 at 11:15:34AM -0400, Rick Macklem wrote: > > > > instead of from the local cache. I also made sure that > > the file was in the cache on the server, so the server's > > disk speed is irrelevant. > > > > So, nfs is roughly twice as fast as newnfs, indeed. Hmm, I have the same network switch as Oliver, and I wasn't caching the file on the server before. 
When I cache the file on the server, I get about 1 MiB/s faster throughput, so that doesn't seem to make the difference to me (but with higher throughputs, I would imagine it would). > Thanks for doing the test. I think I can find out what causes the > factor of 2 someday. What is really weird is that some people see > several orders of magnitude slower (a few Mbytes/sec). > > Your case was also useful, because you are using the same net > interface/driver as the original report of a few Mbytes/sec, so it > doesn't appear to be an re problem. I believe I said something to that effect. :-P The problem I have is that the magnitude of throughput varies randomly. Sometimes I can repeat the test and see 3-4 MB/s. Then my server's motherboard failed last week so I swapped things around and now I have 9-10 MB/s on the same client (but using 100Mbit interface instead of gigabit, so those speeds make sense). One thing I noticed is the lag seems to have disappeared after the reboots. Another thing I had to change was that I was using an NFSv3 mount for /home (with the v3 client, not the experimental v3/v4 client) and now I'm using NFSv4 mounts exclusively. Too much hardware changed because of that board failing (AHCI was randomly dropping disks, and it got to the point that it wouldn't pick up drives after a cold start and then the board failed to POST 11 of 12 times), so I haven't been able to reliably reproduce any problems. I also had to reboot the "bad" client because of the broken NFSv3 mountpoints, and the server was auto-upgraded to a newer 8.1-stable (I often run "make buildworld kernel" regularly, so any reboots will automatically have a newer kernel). There's definite evidence that the newnfs mounts are slower than plain nfs, and sometimes orders of magnitude slower (as others have shown). But the old nfs is so broken in other ways that I'd prefer slower yet more stable. Thanks again for all your help, Rick! -- Rick C. 
Petty From owner-freebsd-fs@FreeBSD.ORG Tue Sep 14 17:40:59 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id A7EDB106564A; Tue, 14 Sep 2010 17:40:59 +0000 (UTC) (envelope-from prvs=1873ccaa52=killing@multiplay.co.uk) Received: from mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) by mx1.freebsd.org (Postfix) with ESMTP id 106EE8FC26; Tue, 14 Sep 2010 17:40:58 +0000 (UTC) X-MDAV-Processed: mail1.multiplay.co.uk, Tue, 14 Sep 2010 18:30:06 +0100 X-Spam-Processed: mail1.multiplay.co.uk, Tue, 14 Sep 2010 18:30:06 +0100 X-Spam-Checker-Version: SpamAssassin 3.2.5 (2008-06-10) on mail1.multiplay.co.uk X-Spam-Level: X-Spam-Status: No, score=-5.0 required=6.0 tests=USER_IN_WHITELIST shortcircuit=ham autolearn=disabled version=3.2.5 Received: from r2d2 by mail1.multiplay.co.uk (MDaemon PRO v10.0.4) with ESMTP id md50011232724.msg; Tue, 14 Sep 2010 18:30:06 +0100 X-Authenticated-Sender: Killing@multiplay.co.uk X-MDRemoteIP: 188.220.16.49 X-Return-Path: prvs=1873ccaa52=killing@multiplay.co.uk X-Envelope-From: killing@multiplay.co.uk Message-ID: From: "Steven Hartland" To: "Andriy Gapon" References: <5DB6E7C798E44D33A05673F4B773405E@multiplay.co.uk><4C85E91E.1010602@icyb.net.ua> <4C873914.40404@freebsd.org><20100908084855.GF2465@deviant.kiev.zoral.com.ua> <4C874F00.3050605@freebsd.org> <4C8D087B.5040404@freebsd.org> <03537796FAB54E02959E2D64FC83004F@multiplay.co.uk> <4C8D280F.3040803@freebsd.org> <3FBF66BF11AA4CBBA6124CA435A4A31B@multiplay.co.uk> <4C8E4212.30000@freebsd.org> Date: Tue, 14 Sep 2010 18:30:07 +0100 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="utf-8"; reply-type=original Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.5931 Cc: freebsd-fs@freebsd.org, Pawel Jakub Dawidek , jhell Subject: Re: zfs very poor performance compared to ufs due to lack of cache? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 14 Sep 2010 17:40:59 -0000 ----- Original Message ----- From: "Andriy Gapon" > I'd really prefer to see description of your sources as svn revision rXXXXX plus > http link to a diff of your actual sources to that revision. > That would greatly help to see what you actually have, and what you don't have. The zfs files don't seem to have any svn revision information in them. Is there something else that would id the revision of sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c or the svn revision of the version of stable in use? > Well, I would love to see the mentioned above graphs for this real test load. > Going below c_min likely means that you don't have all the latest stable/8 ZFS > code, but i am not sure. It defintely is :( If its relavent the source was downloaded via cvsup from the uk mirror. >> If someone could suggest a set of tests that would help I'll be happy to run them but >> from what's been said thus far is seems that the use of sendfile is forcing memory >> use >> other than that coming from arc which is what's expected? >> >> Would running the same test with sendfile disabled in nginx help? > > The more test data the better, we could have some base for comparison and > separation of general ARC issues from sendfile-specific issues. 
Going to run the following tests:- 1. run a live test with "sendfile off" in the nginx config 2. run a live test with "sendfile on" in the nginx config. During these tests I'm going to monitor the following every minute:- time, kstat.zfs.misc.arcstats.size, vm.stats.vm.v_pdwakeups, vm.stats.vm.v_cache_count, vm.stats.vm.v_inactive_count, vm.stats.vm.v_active_count, vm.stats.vm.v_wire_count, vm.stats.vm.v_free_count Anything else that should be monitored? Before each test the machine will be rebooted to try to ensure as direct a comparison as possible. Anything else that I should add / change before running said tests or should monitor? Regards Steve ================================================ This e.mail is private and confidential between Multiplay (UK) Ltd. and the person or entity to whom it is addressed. In the event of misdirection, the recipient is prohibited from using, copying, printing or otherwise disseminating it or any information contained in it. In the event of misdirection, illegible or incomplete transmission please telephone +44 845 868 1337 or return the E.mail to postmaster@multiplay.co.uk. From owner-freebsd-fs@FreeBSD.ORG Tue Sep 14 18:36:55 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 4BBB7106566B for ; Tue, 14 Sep 2010 18:36:55 +0000 (UTC) (envelope-from zeus@relay.ibs.dn.ua) Received: from relay.ibs.dn.ua (relay1.ibs.dn.ua [91.216.196.25]) by mx1.freebsd.org (Postfix) with ESMTP id AA7178FC17 for ; Tue, 14 Sep 2010 18:36:54 +0000 (UTC) Received: from relay.ibs.dn.ua (localhost [127.0.0.1]) by relay.ibs.dn.ua with ESMTP id o8EIL9n1034529 for ; Tue, 14 Sep 2010 21:21:09 +0300 (EEST) Received: (from zeus@localhost) by relay.ibs.dn.ua (8.14.4/8.14.4/Submit) id o8EIL9Gl034528 for freebsd-fs@freebsd.org; Tue, 14 Sep 2010 21:21:09 +0300 (EEST) Date: Tue, 14 Sep 2010 21:21:09 +0300 From: Zeus V Panchenko To: freebsd-fs@freebsd.org Message-ID: <20100914182109.GA31403@relay.ibs.dn.ua> Mime-Version: 1.0 Content-Type: text/plain; charset=koi8-r Content-Disposition: inline User-Agent: Mutt/1.4.2.3i X-Operating-System: FreeBSD 8.1-RELEASE X-Editor: GNU Emacs 23.2.1 Subject: is there graphics images "oriented" fs? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 14 Sep 2010 18:36:55 -0000 Hi All, can anybody advice, is there file system kind/type expecially for storage and management photo images? what i mean is the usage of EXIF tags ... some hashing maybe ... or tool to crawl photo bank or mp3 collections on hdd and to hash some desired data (EXIFs again)? especially inegratable with SQL may be ... any idea please? -- Zeus V. 
Panchenko IT Dpt., IBS ltd GMT+2 (EET) From owner-freebsd-fs@FreeBSD.ORG Wed Sep 15 07:42:28 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id CBD811065670 for ; Wed, 15 Sep 2010 07:42:28 +0000 (UTC) (envelope-from avg@freebsd.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id 1AA668FC0A for ; Wed, 15 Sep 2010 07:42:27 +0000 (UTC) Received: from porto.topspin.kiev.ua (porto-e.starpoint.kiev.ua [212.40.38.100]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id KAA00433; Wed, 15 Sep 2010 10:42:25 +0300 (EEST) (envelope-from avg@freebsd.org) Received: from localhost.topspin.kiev.ua ([127.0.0.1]) by porto.topspin.kiev.ua with esmtp (Exim 4.34 (FreeBSD)) id 1OvmdN-00003U-6q; Wed, 15 Sep 2010 10:42:25 +0300 Message-ID: <4C9078E0.2050402@freebsd.org> Date: Wed, 15 Sep 2010 10:42:24 +0300 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; U; FreeBSD amd64; en-US; rv:1.9.2.9) Gecko/20100912 Lightning/1.0b2 Thunderbird/3.1.3 MIME-Version: 1.0 To: Steven Hartland References: <5DB6E7C798E44D33A05673F4B773405E@multiplay.co.uk><4C85E91E.1010602@icyb.net.ua> <4C873914.40404@freebsd.org><20100908084855.GF2465@deviant.kiev.zoral.com.ua> <4C874F00.3050605@freebsd.org> <4C8D087B.5040404@freebsd.org> <03537796FAB54E02959E2D64FC83004F@multiplay.co.uk> <4C8D280F.3040803@freebsd.org> <3FBF66BF11AA4CBBA6124CA435A4A31B@multiplay.co.uk> <4C8E4212.30000@freebsd.org> In-Reply-To: X-Enigmail-Version: 1.1.2 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org Subject: Re: zfs very poor performance compared to ufs due to lack of cache? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 15 Sep 2010 07:42:28 -0000 on 14/09/2010 20:30 Steven Hartland said the following: > > ----- Original Message ----- From: "Andriy Gapon" >> I'd really prefer to see description of your sources as svn revision rXXXXX plus >> http link to a diff of your actual sources to that revision. >> That would greatly help to see what you actually have, and what you don't have. > > The zfs files don't seem to have any svn revision information in them. Is there > something > else that would id the revision of > sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c > or the svn revision of the version of stable in use? I don't know. I haven't used cvsup since the switch to svn, but generally in CVS each file has its own independent revision, so it's impossible to tell that the whole tree is at revision R. Maybe you could get yourself another source tree by doing svn checkout and applying the patches there. Then you could update it with svn update. For you branch URI is svn://svn.freebsd.org/base/stable/8 You would be able to get a diff of your local changes with svn diff command. >> Well, I would love to see the mentioned above graphs for this real test load. >> Going below c_min likely means that you don't have all the latest stable/8 ZFS >> code, but i am not sure. > > It defintely is :( > If its relavent the source was downloaded via cvsup from the uk mirror. 
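(For reference, the svn checkout / svn diff workflow suggested a few lines up comes down to a handful of commands. A sketch, with the checkout path and diff file name as placeholders; the devel/subversion port is assumed, since the base system of 8.x does not ship an svn client:)

  # Check out a pristine stable/8 tree so svn can report both the exact
  # base revision and a diff of any local modifications.
  svn checkout svn://svn.freebsd.org/base/stable/8 /usr/src-svn
  cd /usr/src-svn
  svn info | grep '^Revision'          # the rXXXXX the tree is based on
  # ...apply the local ZFS patches mentioned earlier in the thread...
  svn diff > /var/tmp/local-zfs-changes.diff
  svn update                           # later, to pull in newer stable/8 code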
> >>> If someone could suggest a set of tests that would help I'll be happy to run >>> them but >>> from what's been said thus far is seems that the use of sendfile is forcing >>> memory >>> use >>> other than that coming from arc which is what's expected? >>> >>> Would running the same test with sendfile disabled in nginx help? >> >> The more test data the better, we could have some base for comparison and >> separation of general ARC issues from sendfile-specific issues. > > Going to run the following tests:- > 1. run a live test with "sendfile off" in the nginx config > 2. run a live test with "sendfile on" in the nginx config. > > During these tests I'm going to monitor the following every minute:- > time, kstat.zfs.misc.arcstats.size, vm.stats.vm.v_pdwakeups, > vm.stats.vm.v_cache_count, vm.stats.vm.v_inactive_count, > vm.stats.vm.v_active_count, vm.stats.vm.v_wire_count, > vm.stats.vm.v_free_count > > Anything else that should be monitored? > > Before each test the machine will be rebooted to try to ensure as direct a > comparison > as possible. > > Anything else that I should add / change before running said tests or should > monitor? This sounds sufficiently good. If you could arrange to draw the graphs of the data it would be terrific :) -- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Wed Sep 15 08:07:41 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 93DE9106566C for ; Wed, 15 Sep 2010 08:07:41 +0000 (UTC) (envelope-from jdc@koitsu.dyndns.org) Received: from qmta03.emeryville.ca.mail.comcast.net (qmta03.emeryville.ca.mail.comcast.net [76.96.30.32]) by mx1.freebsd.org (Postfix) with ESMTP id 781FF8FC1E for ; Wed, 15 Sep 2010 08:07:41 +0000 (UTC) Received: from omta16.emeryville.ca.mail.comcast.net ([76.96.30.72]) by qmta03.emeryville.ca.mail.comcast.net with comcast id 6w6D1f0011ZMdJ4A3w7h1z; Wed, 15 Sep 2010 08:07:41 +0000 Received: from koitsu.dyndns.org ([98.248.41.155]) by omta16.emeryville.ca.mail.comcast.net with comcast id 6w7g1f0023LrwQ28cw7gr5; Wed, 15 Sep 2010 08:07:40 +0000 Received: by icarus.home.lan (Postfix, from userid 1000) id 13D4D9B423; Wed, 15 Sep 2010 01:07:40 -0700 (PDT) Date: Wed, 15 Sep 2010 01:07:40 -0700 From: Jeremy Chadwick To: Andriy Gapon Message-ID: <20100915080740.GA55725@icarus.home.lan> References: <20100908084855.GF2465@deviant.kiev.zoral.com.ua> <4C874F00.3050605@freebsd.org> <4C8D087B.5040404@freebsd.org> <03537796FAB54E02959E2D64FC83004F@multiplay.co.uk> <4C8D280F.3040803@freebsd.org> <3FBF66BF11AA4CBBA6124CA435A4A31B@multiplay.co.uk> <4C8E4212.30000@freebsd.org> <4C9078E0.2050402@freebsd.org> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <4C9078E0.2050402@freebsd.org> User-Agent: Mutt/1.5.20 (2009-06-14) Cc: freebsd-fs@freebsd.org Subject: Re: zfs very poor performance compared to ufs due to lack of cache? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 15 Sep 2010 08:07:41 -0000 On Wed, Sep 15, 2010 at 10:42:24AM +0300, Andriy Gapon wrote: > on 14/09/2010 20:30 Steven Hartland said the following: > > Going to run the following tests:- > > 1. run a live test with "sendfile off" in the nginx config > > 2. run a live test with "sendfile on" in the nginx config. 
> > > > During these tests I'm going to monitor the following every minute:- > > time, kstat.zfs.misc.arcstats.size, vm.stats.vm.v_pdwakeups, > > vm.stats.vm.v_cache_count, vm.stats.vm.v_inactive_count, > > vm.stats.vm.v_active_count, vm.stats.vm.v_wire_count, > > vm.stats.vm.v_free_count > > > > Anything else that should be monitored? > > > > Before each test the machine will be rebooted to try to ensure as direct a > > comparison > > as possible. > > > > Anything else that I should add / change before running said tests or should > > monitor? > > This sounds sufficiently good. > If you could arrange to draw the graphs of the data it would be terrific :) Please be aware the OP is using RRDTool to store the sample data, which means the values you see in the graphs are going to be averaged unless he's taken the time to use MIN/MAX/LAST on both the CF and the DS (there is a difference): > > Now monitoring these each minute to an rrd and text file and updated > > 8-STABLE ... What I'm trying to say: averaged data may not show you what you're looking for, depending on what that is. :-) -- | Jeremy Chadwick jdc@parodius.com | | Parodius Networking http://www.parodius.com/ | | UNIX Systems Administrator Mountain View, CA, USA | | Making life hard for others since 1977. PGP: 4BD6C0CB | From owner-freebsd-fs@FreeBSD.ORG Wed Sep 15 08:28:40 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 43C771065670 for ; Wed, 15 Sep 2010 08:28:40 +0000 (UTC) (envelope-from bsdunix44@gmail.com) Received: from mail-iw0-f182.google.com (mail-iw0-f182.google.com [209.85.214.182]) by mx1.freebsd.org (Postfix) with ESMTP id 0BA5D8FC0A for ; Wed, 15 Sep 2010 08:28:39 +0000 (UTC) Received: by iwn34 with SMTP id 34so7886915iwn.13 for ; Wed, 15 Sep 2010 01:28:39 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:received:received:message-id:from:to :content-type:content-transfer-encoding:mime-version:subject:date :x-mailer; bh=SA17GQq+Zqb/kiIqHU1JPV3yi7eP/s+4Wadme2mA2J4=; b=mcVcGHJAoOr4r6NNtiSY1mVEkeDP3bwtXat060FOquQ4l+T+aZpUl2Jhh5P4zcHmUm RTJ1ODaoS/TRDmqpkoIAsurhM2rZdb1wmNMAM9P+qp1Abo9MqjBLmVzwgnjm2U1F1aBS 2ZEqJwo0B1vBwRu54Miar2VaWRY5u3wF1xj0A= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=message-id:from:to:content-type:content-transfer-encoding :mime-version:subject:date:x-mailer; b=OJSn0tPQ5oGNFCO+NXjBNFM0pjFSAXBYaC0aOdw8ISK3aDhp5OoSIRy5kuBC90zD09 QSqv32i1ProqS3kSlVivo7dQWVDX++5QGFgPOfYOrVCO2cCXzrkwi7DTVFx74W5wMKvJ nXDy97/9IMgmrYuIGeMPxYE+fE7RkVlKL3ers= Received: by 10.231.157.135 with SMTP id b7mr1211823ibx.164.1284537949570; Wed, 15 Sep 2010 01:05:49 -0700 (PDT) Received: from [192.168.1.4] (ip98-164-15-137.ks.ks.cox.net [98.164.15.137]) by mx.google.com with ESMTPS id r3sm1029714ibk.13.2010.09.15.01.05.47 (version=TLSv1/SSLv3 cipher=RC4-MD5); Wed, 15 Sep 2010 01:05:48 -0700 (PDT) Message-Id: <82EA2358-F5E5-4CEE-91AC-4211C04F22FD@gmail.com> From: Chris Watson To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes Content-Transfer-Encoding: 7bit Mime-Version: 1.0 (Apple Message framework v936) Date: Wed, 15 Sep 2010 03:05:46 -0500 X-Mailer: Apple Mail (2.936) Subject: ZFS I/O Throughput question.. 
X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 15 Sep 2010 08:28:41 -0000 I have been testing ZFS on a home box now for a few days and I have a question that is perplexing me. Everything I have read on ZFS says in almost every case mirroring is faster than raidz. So I initially setup a 2x2 Raid 10 striped mirror. Like so: priyanka# zpool status pool: tank state: ONLINE scrub: none requested config: NAME STATE READ WRITE CKSUM tank ONLINE 0 0 0 mirror ONLINE 0 0 0 ada2 ONLINE 0 0 0 ada3 ONLINE 0 0 0 mirror ONLINE 0 0 0 ada4 ONLINE 0 0 0 ada5 ONLINE 0 0 0 errors: No known data errors priyanka# With this configuration I am getting the following throughput for reads: priyanka# dd if=/dev/zero of=/tank/Aperture/test01 bs=1m count=10000 10000+0 records in 10000+0 records out 10485760000 bytes transferred in 98.533820 secs (106417878 bytes/sec) priyanka# And for reads: priyanka# dd if=/tank/Aperture/test01 of=/dev/null bs=1m 10000+0 records in 10000+0 records out 10485760000 bytes transferred in 50.309988 secs (208423027 bytes/sec) priyanka# So basically 100MB/writes, 200MB/reads. I thought the disks I have would do a little better than that assuming from much of the zfs literature proclaiming mirroring to be fastest with more I/O and more OPS/sec. Well I decided to blow away the mirror and instead do a 4 disk raidz to see just how much faster mirroring was with ZFS vs raidz. This is where I was blown away and more than a little confused. priyanka# zpool status pool: tank state: ONLINE scrub: none requested config: NAME STATE READ WRITE CKSUM tank ONLINE 0 0 0 raidz1 ONLINE 0 0 0 ada2 ONLINE 0 0 0 ada3 ONLINE 0 0 0 ada4 ONLINE 0 0 0 ada5 ONLINE 0 0 0 errors: No known data errors priyanka# Write performance: priyanka# dd if=/dev/zero of=/tank/test.001 bs=1m count=10000 10000+0 records in 10000+0 records out 10485760000 bytes transferred in 34.310930 secs (305609903 bytes/sec) priyanka# Read performance: priyanka# dd if=/tank/test.001 of=/dev/null bs=1m count=10000 10000+0 records in 10000+0 records out 10485760000 bytes transferred in 31.463025 secs (333272467 bytes/sec) priyanka# Say whaaaaaat?! Perhaps I am completely misunderstanding every zfs admin guide, FAQ and paper on ZFS. But everything I have read says mirroring should be much faster than a raidz and should almost always be preferred. Which clearly from above is not the case. The only thing I can think of is that the dd "benchmark" is not accurate because it is writing data sequentially? Which is the place raidz has an edge over mirroring, again from what I have read. But the above is not so much an 'edge' in performance as much as a complete and total data rape. So my question is, is everything i've read about ZFS and mirroring vs raidz wrong? Is the benchmark horribly flawed? Is raidz actually faster versus mirroring? Does FreeBSD perform some kind of voodoo h0h0magic that makes raidz perform much better than mirroring in ZFS than other platforms? Or am I just having a really weird dream and none of this is real. Thank you for any comments or light anyone cares to share on this. Chris The pertinent specs for the machine are: Copyright (c) 1992-2010 The FreeBSD Project. Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994 The Regents of the University of California. All rights reserved. FreeBSD is a registered trademark of The FreeBSD Foundation. 
FreeBSD 8.1-STABLE #8: Sun Sep 12 01:00:49 CDT 2010 root@priyanka.open-systems.net:/usr/obj/usr/src/sys/PRIYANKA amd64 Timecounter "i8254" frequency 1193182 Hz quality 0 CPU: AMD Phenom(tm) II X4 965 Processor (3415.13-MHz K8-class CPU) Origin = "AuthenticAMD" Id = 0x100f43 Family = 10 Model = 4 Stepping = 3 Features = 0x178bfbff < FPU ,VME ,DE ,PSE ,TSC ,MSR ,PAE ,MCE ,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CLFLUSH,MMX,FXSR,SSE,SSE2,HTT> Features2=0x802009 AMD Features=0xee500800 AMD Features2 = 0x37ff TSC: P-state invariant real memory = 8589934592 (8192 MB) avail memory = 8259407872 (7876 MB) ACPI APIC Table: FreeBSD/SMP: Multiprocessor System Detected: 4 CPUs FreeBSD/SMP: 1 package(s) x 4 core(s) cpu0 (BSP): APIC ID: 0 cpu1 (AP): APIC ID: 1 cpu2 (AP): APIC ID: 2 cpu3 (AP): APIC ID: 3 ioapic0: Changing APIC ID to 2 ioapic0 irqs 0-23 on motherboard cryptosoft0: on motherboard acpi0: on motherboard acpi0: [ITHREAD] acpi0: Power Button (fixed) acpi0: reservation of 0, a0000 (3) failed acpi0: reservation of 100000, cfce0000 (3) failed Timecounter "ACPI-fast" frequency 3579545 Hz quality 1000 acpi_timer0: <32-bit timer at 3.579545MHz> port 0x4008-0x400b on acpi0 cpu0: on acpi0 cpu1: on acpi0 cpu2: on acpi0 cpu3: on acpi0 acpi_hpet0: iomem 0xfed00000-0xfed003ff on acpi0 Timecounter "HPET" frequency 14318180 Hz quality 900 acpi_button0: on acpi0 pcib0: port 0xcf8-0xcff on acpi0 pci0: on pcib0 pcib1: irq 18 at device 2.0 on pci0 pci1: on pcib1 vgapci0: port 0xee00-0xeeff mem 0xd0000000-0xdfffffff,0xfdde0000-0xfddeffff irq 18 at device 0.0 on pci1 hdac0: mem 0xfddfc000-0xfddfffff irq 19 at device 0.1 on pci1 hdac0: HDA Driver Revision: 20100226_0142 hdac0: [ITHREAD] pcib2: irq 16 at device 4.0 on pci0 pci2: on pcib2 pci2: at device 0.0 (no driver attached) pcib3: irq 17 at device 5.0 on pci0 pci3: on pcib3 ahci0: port 0xcf00-0xcf07,0xce00-0xce03,0xcd00-0xcd07,0xcc00-0xcc03,0xcb00-0xcb0f mem 0xfdaff000-0xfdaff7ff irq 17 at device 0.0 on pci3 ahci0: [ITHREAD] ahci0: AHCI v1.20 with 8 6Gbps ports, Port Multiplier not supported ahcich0: at channel 0 on ahci0 ahcich0: [ITHREAD] ahcich1: at channel 1 on ahci0 ahcich1: [ITHREAD] ahcich2: at channel 2 on ahci0 ahcich2: [ITHREAD] ahcich3: at channel 3 on ahci0 ahcich3: [ITHREAD] ahcich4: at channel 4 on ahci0 ahcich4: [ITHREAD] ahcich5: at channel 5 on ahci0 ahcich5: [ITHREAD] ahcich6: at channel 6 on ahci0 ahcich6: [ITHREAD] ahcich7: at channel 7 on ahci0 ahcich7: [ITHREAD] pcib4: irq 19 at device 7.0 on pci0 pci4: on pcib4 re0: port 0xbe00-0xbeff mem 0xfd7ff000-0xfd7fffff,0xfd7f8000-0xfd7fbfff irq 19 at device 0.0 on pci4 re0: Using 1 MSI messages re0: Chip rev. 0x28000000 re0: MAC rev. 0x00000000 miibus0: on re0 rgephy0: PHY 1 on miibus0 rgephy0: 10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, 1000baseT, 1000baseT-FDX, auto re0: Ethernet address: 6c:f0:49:5e:70:dd re0: [FILTER] pcib5: irq 17 at device 9.0 on pci0 pci5: on pcib5 re1: port 0xae00-0xaeff mem 0xfd3ff000-0xfd3fffff,0xfd3f8000-0xfd3fbfff irq 17 at device 0.0 on pci5 re1: Using 1 MSI messages re1: Chip rev. 0x28000000 re1: MAC rev. 
0x00000000 miibus1: on re1 rgephy1: PHY 1 on miibus1 rgephy1: 10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, 1000baseT, 1000baseT-FDX, auto re1: Ethernet address: 6c:f0:49:5e:70:cd re1: [FILTER] pcib6: irq 18 at device 10.0 on pci0 pci6: on pcib6 pci6: at device 0.0 (no driver attached) ahci1: port 0xff00-0xff07,0xfe00-0xfe03,0xfd00-0xfd07,0xfc00-0xfc03,0xfb00-0xfb0f mem 0xfe02f000-0xfe02f3ff irq 22 at device 17.0 on pci0 ahci1: [ITHREAD] ahci1: AHCI v1.10 with 6 3Gbps ports, Port Multiplier supported ahcich8: at channel 0 on ahci1 ahcich8: [ITHREAD] ahcich9: at channel 1 on ahci1 ahcich9: [ITHREAD] ahcich10: at channel 2 on ahci1 ahcich10: [ITHREAD] ahcich11: at channel 3 on ahci1 ahcich11: [ITHREAD] ahcich12: at channel 4 on ahci1 ahcich12: [ITHREAD] ahcich13: at channel 5 on ahci1 ahcich13: [ITHREAD] ohci0: mem 0xfe02e000-0xfe02efff irq 16 at device 18.0 on pci0 ohci0: [ITHREAD] usbus0: on ohci0 ohci1: mem 0xfe02d000-0xfe02dfff irq 16 at device 18.1 on pci0 ohci1: [ITHREAD] usbus1: on ohci1 ehci0: mem 0xfe02c000-0xfe02c0ff irq 17 at device 18.2 on pci0 ehci0: [ITHREAD] usbus2: EHCI version 1.0 usbus2: on ehci0 ohci2: mem 0xfe02b000-0xfe02bfff irq 18 at device 19.0 on pci0 ohci2: [ITHREAD] usbus3: on ohci2 ohci3: mem 0xfe02a000-0xfe02afff irq 18 at device 19.1 on pci0 ohci3: [ITHREAD] usbus4: on ohci3 ehci1: mem 0xfe029000-0xfe0290ff irq 19 at device 19.2 on pci0 ehci1: [ITHREAD] usbus5: EHCI version 1.0 usbus5: on ehci1 pci0: at device 20.0 (no driver attached) pci0: at device 20.1 (no driver attached) hdac1: mem 0xfe024000-0xfe027fff irq 16 at device 20.2 on pci0 hdac1: HDA Driver Revision: 20100226_0142 hdac1: [ITHREAD] isab0: at device 20.3 on pci0 isa0: on isab0 pcib7: at device 20.4 on pci0 pci7: on pcib7 fwohci0: mem 0xfd5ff000-0xfd5ff7ff, 0xfd5f8000-0xfd5fbfff irq 22 at device 14.0 on pci7 fwohci0: [ITHREAD] fwohci0: OHCI version 1.10 (ROM=0) fwohci0: No. of Isochronous channels is 4. fwohci0: EUI64 00:5b:e0:78:00:6c:f0:49 fwohci0: Phy 1394a available S400, 3 ports. fwohci0: Link S400, max_rec 2048 bytes. 
firewire0: on fwohci0 fwe0: on firewire0 if_fwe0: Fake Ethernet address: 02:5b:e0:6c:f0:49 fwe0: Ethernet address: 02:5b:e0:6c:f0:49 fwip0: on firewire0 fwip0: Firewire address: 00:5b:e0:78:00:6c:f0:49 @ 0xfffe00000000, S400, maxrec 2048 dcons_crom0: on firewire0 dcons_crom0: bus_addr 0xcfd2c000 fwohci0: Initiate bus reset fwohci0: fwohci_intr_core: BUS reset fwohci0: fwohci_intr_core: node_id=0x00000000, SelfID Count=1, CYCLEMASTER mode ohci4: mem 0xfe028000-0xfe028fff irq 18 at device 20.5 on pci0 ohci4: [ITHREAD] usbus6: on ohci4 amdtemp0: on hostb4 atrtc0: port 0x70-0x73 on acpi0 orm0: at iomem 0xd0000-0xd2fff on isa0 sc0: at flags 0x100 on isa0 sc0: VGA <16 virtual consoles, flags=0x300> vga0: at port 0x3c0-0x3df iomem 0xa0000-0xbffff on isa0 atkbdc0: at port 0x60,0x64 on isa0 atkbd0: irq 1 on atkbdc0 kbd0 at atkbd0 atkbd0: [GIANT-LOCKED] atkbd0: [ITHREAD] psm0: irq 12 on atkbdc0 psm0: [GIANT-LOCKED] psm0: [ITHREAD] psm0: model Generic PS/2 mouse, device ID 0 hwpstate0: on cpu0 Timecounters tick every 1.000 msec firewire0: 1 nodes, maxhop <= 0 cable IRM irm(0) (me) firewire0: bus manager 0 hdac0: HDA Codec #0: ATI R6xx HDMI usbus0: 12Mbps Full Speed USB v1.0 usbus1: 12Mbps Full Speed USB v1.0 usbus2: 480Mbps High Speed USB v2.0 usbus3: 12Mbps Full Speed USB v1.0 usbus4: 12Mbps Full Speed USB v1.0 usbus5: 480Mbps High Speed USB v2.0 usbus6: 12Mbps Full Speed USB v1.0 pcm0: at cad 0 nid 1 on hdac0 hdac1: HDA Codec #0: Realtek ALC889 pcm1: at cad 0 nid 1 on hdac1 pcm2: at cad 0 nid 1 on hdac1 pcm3: at cad 0 nid 1 on hdac1 pcm4: at cad 0 nid 1 on hdac1 ugen0.1: at usbus0 uhub0: on usbus0 ugen1.1: at usbus1 uhub1: on usbus1 ugen2.1: at usbus2 uhub2: on usbus2 ugen3.1: at usbus3 uhub3: on usbus3 ugen4.1: at usbus4 uhub4: on usbus4 ugen5.1: at usbus5 uhub5: on usbus5 ugen6.1: at usbus6 uhub6: on usbus6 uhub6: 2 ports with 2 removable, self powered uhub0: 3 ports with 3 removable, self powered uhub1: 3 ports with 3 removable, self powered uhub3: 3 ports with 3 removable, self powered uhub4: 3 ports with 3 removable, self powered ada0 at ahcich8 bus 0 scbus8 target 0 lun 0 ada0: ATA-8 SATA 2.x device ada0: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes) ada0: Command Queueing enabled ada0: 953869MB (1953525168 512 byte sectors: 16H 63S/T 16383C) ada1 at ahcich9 bus 0 scbus9 target 0 lun 0 ada1: ATA-8 SATA 2.x device ada1: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes) ada1: Command Queueing enabled ada1: 953869MB (1953525168 512 byte sectors: 16H 63S/T 16383C) ada2 at ahcich10 bus 0 scbus10 target 0 lun 0 ada2: ATA-8 SATA 2.x device ada2: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes) ada2: Command Queueing enabled ada2: 953869MB (1953525168 512 byte sectors: 16H 63S/T 16383C) ada3 at ahcich11 bus 0 scbus11 target 0 lun 0 ada3: ATA-8 SATA 2.x device ada3: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes) ada3: Command Queueing enabled ada3: 953869MB (1953525168 512 byte sectors: 16H 63S/T 16383C) ada4 at ahcich12 bus 0 scbus12 target 0 lun 0 ada4: ATA-8 SATA 2.x device ada4: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes) ada4: Command Queueing enabled ada4: 953869MB (1953525168 512 byte sectors: 16H 63S/T 16383C) ada5 at ahcich13 bus 0 scbus13 target 0 lun 0 ada5: ATA-8 SATA 2.x device ada5: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes) ada5: Command Queueing enabled ada5: 953869MB (1953525168 512 byte sectors: 16H 63S/T 16383C) SMP: AP CPU #1 Launched! SMP: AP CPU #3 Launched! SMP: AP CPU #2 Launched! GEOM_MIRROR: Device mirror/gm0 launched (2/2). 
GEOM: mirror/gm0s1: geometry does not match label (16h,63s != 255h,63s). Root mount waiting for: usbus5 usbus2 Root mount waiting for: usbus5 usbus2 uhub2: 6 ports with 6 removable, self powered uhub5: 6 ports with 6 removable, self powered Trying to mount root from ufs:/dev/mirror/gm0s1a ugen1.2: at usbus1 ums0: on usbus1 ums0: 16 buttons and [XYZT] coordinates ID=0 GEOM_ELI: Device mirror/gm0s1b.eli created. GEOM_ELI: Encryption: AES-CBC 256 GEOM_ELI: Crypto: software WARNING: /usr/src was not properly dismounted WARNING: /var was not properly dismounted ugen1.3: at usbus1 ukbd0: on usbus1 kbd1 at ukbd0 ZFS filesystem version 3 ZFS storage pool version 14 From owner-freebsd-fs@FreeBSD.ORG Wed Sep 15 08:29:05 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 8808C106564A; Wed, 15 Sep 2010 08:29:05 +0000 (UTC) (envelope-from prvs=1874f602db=killing@multiplay.co.uk) Received: from mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) by mx1.freebsd.org (Postfix) with ESMTP id F08568FC1B; Wed, 15 Sep 2010 08:29:04 +0000 (UTC) X-MDAV-Processed: mail1.multiplay.co.uk, Wed, 15 Sep 2010 09:18:42 +0100 X-Spam-Processed: mail1.multiplay.co.uk, Wed, 15 Sep 2010 09:18:41 +0100 X-Spam-Checker-Version: SpamAssassin 3.2.5 (2008-06-10) on mail1.multiplay.co.uk X-Spam-Level: X-Spam-Status: No, score=-5.0 required=6.0 tests=USER_IN_WHITELIST shortcircuit=ham autolearn=disabled version=3.2.5 Received: from r2d2 by mail1.multiplay.co.uk (MDaemon PRO v10.0.4) with ESMTP id md50011246658.msg; Wed, 15 Sep 2010 09:18:41 +0100 X-Authenticated-Sender: Killing@multiplay.co.uk X-MDRemoteIP: 188.220.16.49 X-Return-Path: prvs=1874f602db=killing@multiplay.co.uk X-Envelope-From: killing@multiplay.co.uk Message-ID: <706A632A08354EC09DDFBD56FFEA4DB1@multiplay.co.uk> From: "Steven Hartland" To: "Jeremy Chadwick" , "Andriy Gapon" References: <20100908084855.GF2465@deviant.kiev.zoral.com.ua> <4C874F00.3050605@freebsd.org> <4C8D087B.5040404@freebsd.org> <03537796FAB54E02959E2D64FC83004F@multiplay.co.uk> <4C8D280F.3040803@freebsd.org> <3FBF66BF11AA4CBBA6124CA435A4A31B@multiplay.co.uk> <4C8E4212.30000@freebsd.org> <4C9078E0.2050402@freebsd.org> <20100915080740.GA55725@icarus.home.lan> Date: Wed, 15 Sep 2010 09:18:39 +0100 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="iso-8859-1"; reply-type=original Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.5931 Cc: freebsd-fs@freebsd.org Subject: Re: zfs very poor performance compared to ufs due to lack of cache? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 15 Sep 2010 08:29:05 -0000 ----- Original Message ----- From: "Jeremy Chadwick" > Please be aware the OP is using RRDTool to store the sample data, which > means the values you see in the graphs are going to be averaged unless > he's taken the time to use MIN/MAX/LAST on both the CF and the DS (there > is a difference): I have both rrd and text logs, gonna be lazy and using the text + excel to graph I think ;-) >> > Now monitoring these each minute to an rrd and text file and updated >> > 8-STABLE ... 
> > What I'm trying to say: averaged data may not show you what you're > looking for, depending on what that is. :-) NP using the text log will solve this. ================================================ This e.mail is private and confidential between Multiplay (UK) Ltd. and the person or entity to whom it is addressed. In the event of misdirection, the recipient is prohibited from using, copying, printing or otherwise disseminating it or any information contained in it. In the event of misdirection, illegible or incomplete transmission please telephone +44 845 868 1337 or return the E.mail to postmaster@multiplay.co.uk. From owner-freebsd-fs@FreeBSD.ORG Wed Sep 15 08:45:18 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 3240B1065674 for ; Wed, 15 Sep 2010 08:45:18 +0000 (UTC) (envelope-from ticso@cicely7.cicely.de) Received: from raven.bwct.de (raven.bwct.de [85.159.14.73]) by mx1.freebsd.org (Postfix) with ESMTP id B74AF8FC08 for ; Wed, 15 Sep 2010 08:45:17 +0000 (UTC) Received: from mail.cicely.de ([10.1.1.37]) by raven.bwct.de (8.13.4/8.13.4) with ESMTP id o8F8jFQU016807 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK); Wed, 15 Sep 2010 10:45:15 +0200 (CEST) (envelope-from ticso@cicely7.cicely.de) Received: from cicely7.cicely.de (cicely7.cicely.de [10.1.1.9]) by mail.cicely.de (8.14.4/8.14.4) with ESMTP id o8F8j1X8061884 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Wed, 15 Sep 2010 10:45:01 +0200 (CEST) (envelope-from ticso@cicely7.cicely.de) Received: from cicely7.cicely.de (localhost [127.0.0.1]) by cicely7.cicely.de (8.14.2/8.14.2) with ESMTP id o8F8j1wP022926; Wed, 15 Sep 2010 10:45:01 +0200 (CEST) (envelope-from ticso@cicely7.cicely.de) Received: (from ticso@localhost) by cicely7.cicely.de (8.14.2/8.14.2/Submit) id o8F8j1U5022925; Wed, 15 Sep 2010 10:45:01 +0200 (CEST) (envelope-from ticso) Date: Wed, 15 Sep 2010 10:45:01 +0200 From: Bernd Walter To: Chris Watson Message-ID: <20100915084501.GF17282@cicely7.cicely.de> References: <82EA2358-F5E5-4CEE-91AC-4211C04F22FD@gmail.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <82EA2358-F5E5-4CEE-91AC-4211C04F22FD@gmail.com> X-Operating-System: FreeBSD cicely7.cicely.de 7.0-STABLE i386 User-Agent: Mutt/1.5.11 X-Spam-Status: No, score=-2.9 required=5.0 tests=ALL_TRUSTED=-1, BAYES_00=-1.9, T_RP_MATCHES_RCVD=-0.01 autolearn=unavailable version=3.3.0 X-Spam-Checker-Version: SpamAssassin 3.3.0 (2010-01-18) on spamd.cicely.de Cc: freebsd-fs@freebsd.org Subject: Re: ZFS I/O Throughput question.. X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: ticso@cicely.de List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 15 Sep 2010 08:45:18 -0000 On Wed, Sep 15, 2010 at 03:05:46AM -0500, Chris Watson wrote: > I have been testing ZFS on a home box now for a few days and I have a > question that is perplexing me. Everything I have read on ZFS says in > almost every case mirroring is faster than raidz. So I initially setup > a 2x2 Raid 10 striped mirror. 
Like so: > > priyanka# zpool status > pool: tank > state: ONLINE > scrub: none requested > config: > > NAME STATE READ WRITE CKSUM > tank ONLINE 0 0 0 > mirror ONLINE 0 0 0 > ada2 ONLINE 0 0 0 > ada3 ONLINE 0 0 0 > mirror ONLINE 0 0 0 > ada4 ONLINE 0 0 0 > ada5 ONLINE 0 0 0 > > errors: No known data errors > priyanka# > > With this configuration I am getting the following throughput for reads: > > priyanka# dd if=/dev/zero of=/tank/Aperture/test01 bs=1m count=10000 > 10000+0 records in > 10000+0 records out > 10485760000 bytes transferred in 98.533820 secs (106417878 bytes/sec) > priyanka# > > And for reads: > > priyanka# dd if=/tank/Aperture/test01 of=/dev/null bs=1m > 10000+0 records in > 10000+0 records out > 10485760000 bytes transferred in 50.309988 secs (208423027 bytes/sec) > priyanka# > > So basically 100MB/writes, 200MB/reads. Not surprising - two disks in parallel are used to write data. Probably it might have been layed out over the stripe set, so that actually twice the number of disks could have been used, but this optimization for single linear file access is bad for random performance, since you need to seek all drives. > I thought the disks I have would do a little better than that assuming > from much of the zfs literature proclaiming mirroring to be fastest > with more I/O and more OPS/sec. Well I decided to blow away the mirror > and instead do a 4 disk raidz to see just how much faster mirroring > was with ZFS vs raidz. This is where I was blown away and more than a > little confused. > > priyanka# zpool status > pool: tank > state: ONLINE > scrub: none requested > config: > > NAME STATE READ WRITE CKSUM > tank ONLINE 0 0 0 > raidz1 ONLINE 0 0 0 > ada2 ONLINE 0 0 0 > ada3 ONLINE 0 0 0 > ada4 ONLINE 0 0 0 > ada5 ONLINE 0 0 0 > > errors: No known data errors > priyanka# > > Write performance: > > priyanka# dd if=/dev/zero of=/tank/test.001 bs=1m count=10000 > 10000+0 records in > 10000+0 records out > 10485760000 bytes transferred in 34.310930 secs (305609903 bytes/sec) > priyanka# You basicly have 3 drives to write too - the parity disk writes redundand data, so it doesn't add to the bandwidth. > Read performance: > > priyanka# dd if=/tank/test.001 of=/dev/null bs=1m count=10000 > 10000+0 records in > 10000+0 records out > 10485760000 bytes transferred in 31.463025 secs (333272467 bytes/sec) > priyanka# Now you have 4 drives to read from. The problem however is that you seek all four drives. But you get the same pessimisation for random access as if your mirror would have been used spreading data over all disks. The only difference is that with a single raidz you don't have a choice anymore. > Say whaaaaaat?! Perhaps I am completely misunderstanding every zfs > admin guide, FAQ and paper on ZFS. But everything I have read says > mirroring should be much faster than a raidz and should almost always > be preferred. Which clearly from above is not the case. The only thing > I can think of is that the dd "benchmark" is not accurate because it > is writing data sequentially? Which is the place raidz has an edge > over mirroring, again from what I have read. But the above is not so > much an 'edge' in performance as much as a complete and total data > rape. So my question is, is everything i've read about ZFS and > mirroring vs raidz wrong? Is the benchmark horribly flawed? Is raidz > actually faster versus mirroring? Does FreeBSD perform some kind of > voodoo h0h0magic that makes raidz perform much better than mirroring > in ZFS than other platforms? 
Or am I just having a really weird dream > and none of this is real. That's exactly the point - your dd benchmark only tests a very specific case, whichin fact might match your application, but in almost every use case you access multiple files at the same time and then it is good to seek drives independly. Just repeat the same test with two files written/read at the same time and you should easily see a major difference. You should also note that all the cases where linear reads are faster than a single drive only works because of very agressive prereading. The faster your drives are and the more drives you have the prereading must be more agressive to still get a win - in the 4 disk raidz read case you already seem to have reached some kind of limitation. -- B.Walter http://www.bwct.de Modbus/TCP Ethernet I/O Baugruppen, ARM basierte FreeBSD Rechner uvm. From owner-freebsd-fs@FreeBSD.ORG Wed Sep 15 10:32:25 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 9069E1065695; Wed, 15 Sep 2010 10:32:25 +0000 (UTC) (envelope-from prvs=1874f602db=killing@multiplay.co.uk) Received: from mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) by mx1.freebsd.org (Postfix) with ESMTP id D10878FC19; Wed, 15 Sep 2010 10:32:24 +0000 (UTC) X-MDAV-Processed: mail1.multiplay.co.uk, Wed, 15 Sep 2010 11:32:19 +0100 X-Spam-Processed: mail1.multiplay.co.uk, Wed, 15 Sep 2010 11:32:19 +0100 X-Spam-Checker-Version: SpamAssassin 3.2.5 (2008-06-10) on mail1.multiplay.co.uk X-Spam-Level: X-Spam-Status: No, score=-5.0 required=6.0 tests=USER_IN_WHITELIST shortcircuit=ham autolearn=disabled version=3.2.5 Received: from r2d2 by mail1.multiplay.co.uk (MDaemon PRO v10.0.4) with ESMTP id md50011248038.msg; Wed, 15 Sep 2010 11:32:18 +0100 X-Authenticated-Sender: Killing@multiplay.co.uk X-MDRemoteIP: 188.220.16.49 X-Return-Path: prvs=1874f602db=killing@multiplay.co.uk X-Envelope-From: killing@multiplay.co.uk Message-ID: From: "Steven Hartland" To: "Steven Hartland" , "Andriy Gapon" References: <5DB6E7C798E44D33A05673F4B773405E@multiplay.co.uk><4C85E91E.1010602@icyb.net.ua><4C873914.40404@freebsd.org><20100908084855.GF2465@deviant.kiev.zoral.com.ua><4C874F00.3050605@freebsd.org><4C8D087B.5040404@freebsd.org><03537796FAB54E02959E2D64FC83004F@multiplay.co.uk><4C8D280F.3040803@freebsd.org><3FBF66BF11AA4CBBA6124CA435A4A31B@multiplay.co.uk><4C8E4212.30000@freebsd.org> Date: Wed, 15 Sep 2010 11:32:18 +0100 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="utf-8"; reply-type=response Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.5931 Cc: freebsd-fs@freebsd.org, jhell , Pawel Jakub Dawidek Subject: Re: zfs very poor performance compared to ufs due to lack of cache? 
X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 15 Sep 2010 10:32:25 -0000 Ok, the results are in; the conclusions I can see from the data, others may see more, are:- === Common === * arc size on boot is ~180M with a target size of 6.5G === sendfile on === * arc size increases on demand but peaks at the min value * The difference between min and max arc is taken up by inactive pages * vm page daemon wakeups sit at a constant level once the machine has filled memory, and swap is never fully emptied. === sendfile off === * arc size increases on demand all the way up to the max value * vm cache count stays at almost zero all the time === conclusion === The interaction of zfs and sendfile is causing large amounts of memory to end up in the inactive pool, and only the use of a hard min arc limit ensures that zfs forces the vm to release said memory so that it can be used by the zfs arc. The source data, xls's and exported graphs can be found here:- http://www.multiplaygameservers.com/dropzone/zfs-sendfile-results.zip Regards Steve ================================================ This e.mail is private and confidential between Multiplay (UK) Ltd. and the person or entity to whom it is addressed. In the event of misdirection, the recipient is prohibited from using, copying, printing or otherwise disseminating it or any information contained in it. In the event of misdirection, illegible or incomplete transmission please telephone +44 845 868 1337 or return the E.mail to postmaster@multiplay.co.uk. From owner-freebsd-fs@FreeBSD.ORG Wed Sep 15 10:40:03 2010 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 333B41065673 for ; Wed, 15 Sep 2010 10:40:03 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 063C88FC25 for ; Wed, 15 Sep 2010 10:40:03 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id o8FAe2Fd060989 for ; Wed, 15 Sep 2010 10:40:02 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id o8FAe2U4060988; Wed, 15 Sep 2010 10:40:02 GMT (envelope-from gnats) Date: Wed, 15 Sep 2010 10:40:02 GMT Message-Id: <201009151040.o8FAe2U4060988@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org From: dfilter@FreeBSD.ORG (dfilter service) Cc: Subject: Re: kern/141305: commit references a PR X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: dfilter service List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 15 Sep 2010 10:40:03 -0000 The following reply was made to PR kern/141305; it has been noted by GNATS.
From: dfilter@FreeBSD.ORG (dfilter service) To: bug-followup@FreeBSD.org Cc: Subject: Re: kern/141305: commit references a PR Date: Wed, 15 Sep 2010 10:31:35 +0000 (UTC) Author: avg Date: Wed Sep 15 10:31:27 2010 New Revision: 212650 URL: http://svn.freebsd.org/changeset/base/212650 Log: tmpfs, zfs + sendfile: mark page bits as valid after populating it with data Otherwise, adding insult to injury, in addition to double-caching of data we would always copy the data into a vnode's vm object page from backend. This is specific to sendfile case only (VOP_READ with UIO_NOCOPY). PR: kern/141305 Reported by: Wiktor Niesiobedzki Reviewed by: alc Tested by: tools/regression/sockets/sendfile MFC after: 2 weeks Modified: head/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c head/sys/fs/tmpfs/tmpfs_vnops.c Modified: head/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c ============================================================================== --- head/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c Wed Sep 15 10:18:18 2010 (r212649) +++ head/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c Wed Sep 15 10:31:27 2010 (r212650) @@ -498,6 +498,8 @@ again: sched_unpin(); } VM_OBJECT_LOCK(obj); + if (error == 0) + vm_page_set_valid(m, off, bytes); vm_page_wakeup(m); if (error == 0) uio->uio_resid -= bytes; Modified: head/sys/fs/tmpfs/tmpfs_vnops.c ============================================================================== --- head/sys/fs/tmpfs/tmpfs_vnops.c Wed Sep 15 10:18:18 2010 (r212649) +++ head/sys/fs/tmpfs/tmpfs_vnops.c Wed Sep 15 10:31:27 2010 (r212650) @@ -562,6 +562,8 @@ lookupvpg: sf_buf_free(sf); sched_unpin(); VM_OBJECT_LOCK(vobj); + if (error == 0) + vm_page_set_valid(m, offset, tlen); vm_page_wakeup(m); VM_OBJECT_UNLOCK(vobj); return (error); _______________________________________________ svn-src-all@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/svn-src-all To unsubscribe, send any mail to "svn-src-all-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Wed Sep 15 10:46:37 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id EAF8F1065694 for ; Wed, 15 Sep 2010 10:46:37 +0000 (UTC) (envelope-from jdc@koitsu.dyndns.org) Received: from qmta05.emeryville.ca.mail.comcast.net (qmta05.emeryville.ca.mail.comcast.net [76.96.30.48]) by mx1.freebsd.org (Postfix) with ESMTP id CC1208FC20 for ; Wed, 15 Sep 2010 10:46:37 +0000 (UTC) Received: from omta06.emeryville.ca.mail.comcast.net ([76.96.30.51]) by qmta05.emeryville.ca.mail.comcast.net with comcast id 6ymd1f00216AWCUA5ymdPt; Wed, 15 Sep 2010 10:46:37 +0000 Received: from koitsu.dyndns.org ([98.248.41.155]) by omta06.emeryville.ca.mail.comcast.net with comcast id 6ymc1f0013LrwQ28SymctR; Wed, 15 Sep 2010 10:46:37 +0000 Received: by icarus.home.lan (Postfix, from userid 1000) id EA4D89B423; Wed, 15 Sep 2010 03:46:35 -0700 (PDT) Date: Wed, 15 Sep 2010 03:46:35 -0700 From: Jeremy Chadwick To: Steven Hartland Message-ID: <20100915104635.GA59871@icarus.home.lan> References: <20100908084855.GF2465@deviant.kiev.zoral.com.ua> <4C874F00.3050605@freebsd.org> <4C8D087B.5040404@freebsd.org> <03537796FAB54E02959E2D64FC83004F@multiplay.co.uk> <4C8D280F.3040803@freebsd.org> <3FBF66BF11AA4CBBA6124CA435A4A31B@multiplay.co.uk> <4C8E4212.30000@freebsd.org> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.5.20 
(2009-06-14) Cc: freebsd-fs@freebsd.org, jhell , Pawel Jakub Dawidek , Andriy Gapon Subject: Re: zfs very poor performance compared to ufs due to lack of cache? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 15 Sep 2010 10:46:38 -0000 On Wed, Sep 15, 2010 at 11:32:18AM +0100, Steven Hartland wrote: > Ok the results are in, the conclusions I can see from the data, others may see > more, are:- > > === Common === > * arc size on boot is ~180M with a target size of 6.5G > > === sendfile on === > * arc size increases on demand but peaks the min value > * The difference between min and max arc is taken up by inactive pages > * vm page daemon wakeups sit at a constant level once the machine has > filled memory and never fully empties swap. > > === sendfile off === > * arc size increases on demand all the way up the the max value > * vm cache count stays at almost zero all the time > > === conclusion === > The interaction of zfs and sendfile is causing large amounts of memory > to end up in the inactive pool and only the use of a hard min arc limit is > ensuring that zfs forces the vm to release said memory so that it can be > used by zfs arc. > > The source data, xls's and exported graphs can be found here:- > http://www.multiplaygameservers.com/dropzone/zfs-sendfile-results.zip Looks like Andriy just committed something to HEAD/CURRENT which might address this: http://www.freebsd.org/cgi/query-pr.cgi?pr=kern/141305 Commit: Author: avg Date: Wed Sep 15 10:31:27 2010 New Revision: 212650 URL: http://svn.freebsd.org/changeset/base/212650 -- | Jeremy Chadwick jdc@parodius.com | | Parodius Networking http://www.parodius.com/ | | UNIX Systems Administrator Mountain View, CA, USA | | Making life hard for others since 1977. 
PGP: 4BD6C0CB | From owner-freebsd-fs@FreeBSD.ORG Wed Sep 15 10:54:37 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 43466106566C; Wed, 15 Sep 2010 10:54:37 +0000 (UTC) (envelope-from prvs=1874f602db=killing@multiplay.co.uk) Received: from mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) by mx1.freebsd.org (Postfix) with ESMTP id 8401B8FC1C; Wed, 15 Sep 2010 10:54:36 +0000 (UTC) X-MDAV-Processed: mail1.multiplay.co.uk, Wed, 15 Sep 2010 11:54:32 +0100 X-Spam-Processed: mail1.multiplay.co.uk, Wed, 15 Sep 2010 11:54:31 +0100 X-Spam-Checker-Version: SpamAssassin 3.2.5 (2008-06-10) on mail1.multiplay.co.uk X-Spam-Level: X-Spam-Status: No, score=-5.0 required=6.0 tests=USER_IN_WHITELIST shortcircuit=ham autolearn=disabled version=3.2.5 Received: from r2d2 by mail1.multiplay.co.uk (MDaemon PRO v10.0.4) with ESMTP id md50011248086.msg; Wed, 15 Sep 2010 11:54:30 +0100 X-Authenticated-Sender: Killing@multiplay.co.uk X-MDRemoteIP: 188.220.16.49 X-Return-Path: prvs=1874f602db=killing@multiplay.co.uk X-Envelope-From: killing@multiplay.co.uk Message-ID: <8E233260F0334BC58B2C07F383939F8E@multiplay.co.uk> From: "Steven Hartland" To: "Jeremy Chadwick" References: <20100908084855.GF2465@deviant.kiev.zoral.com.ua> <4C874F00.3050605@freebsd.org> <4C8D087B.5040404@freebsd.org> <03537796FAB54E02959E2D64FC83004F@multiplay.co.uk> <4C8D280F.3040803@freebsd.org> <3FBF66BF11AA4CBBA6124CA435A4A31B@multiplay.co.uk> <4C8E4212.30000@freebsd.org> <20100915104635.GA59871@icarus.home.lan> Date: Wed, 15 Sep 2010 11:54:32 +0100 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="iso-8859-1"; reply-type=original Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.5931 Cc: freebsd-fs@freebsd.org, jhell , Pawel Jakub Dawidek , Andriy Gapon Subject: Re: zfs very poor performance compared to ufs due to lack of cache? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 15 Sep 2010 10:54:37 -0000 ----- Original Message ----- From: "Jeremy Chadwick" > Looks like Andriy just committed something to HEAD/CURRENT which might > address this: > http://www.freebsd.org/cgi/query-pr.cgi?pr=kern/141305 Already running that as part of the patches unfortunately, it doesn't seem to have any effect. Regards Steve ================================================ This e.mail is private and confidential between Multiplay (UK) Ltd. and the person or entity to whom it is addressed. In the event of misdirection, the recipient is prohibited from using, copying, printing or otherwise disseminating it or any information contained in it. In the event of misdirection, illegible or incomplete transmission please telephone +44 845 868 1337 or return the E.mail to postmaster@multiplay.co.uk. 
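A minimal sketch, not part of the original thread: the ARC and VM counters this discussion keeps referring to (ARC size and limits, active/inactive/cache/free page counts) can be logged to a plain text file with a small loop like the one below. It assumes a FreeBSD 8.x system with ZFS loaded; the log path and sampling interval are arbitrary example values.

#!/bin/sh
# Sample the ZFS ARC size/limits and the VM page-queue counters every 10
# seconds and append them to a text log for later graphing.
LOG=/var/tmp/arc-vm-stats.log
while true; do
    date "+%Y-%m-%d %H:%M:%S" >> "$LOG"
    sysctl kstat.zfs.misc.arcstats.size \
           kstat.zfs.misc.arcstats.c_min \
           kstat.zfs.misc.arcstats.c_max \
           vm.stats.vm.v_active_count \
           vm.stats.vm.v_inactive_count \
           vm.stats.vm.v_cache_count \
           vm.stats.vm.v_free_count >> "$LOG"
    sleep 10
done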
From owner-freebsd-fs@FreeBSD.ORG Wed Sep 15 04:11:54 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id EC9E2106566B for ; Wed, 15 Sep 2010 04:11:53 +0000 (UTC) (envelope-from warinthepocket@gmail.com) Received: from mail-wy0-f182.google.com (mail-wy0-f182.google.com [74.125.82.182]) by mx1.freebsd.org (Postfix) with ESMTP id 859708FC22 for ; Wed, 15 Sep 2010 04:11:53 +0000 (UTC) Received: by wyb33 with SMTP id 33so9697677wyb.13 for ; Tue, 14 Sep 2010 21:11:52 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:received:date:message-id :subject:from:to:content-type; bh=ZaQ/9Fo//ClFb5lm1/qQu4q5yJIFKRi2iq4fgLbKhkI=; b=dZmNqyzVul/EF5V1zdVQX/xhIes3so53VwZyU6sF1ls8qJbZoq46Q6JttyMx7gfEn1 6kGm6X9zq/L62Hu+qk9iGyRSJfWFMz8pnZkkJNXCHZ7++Kd2h6IV79hfpPDslFiRlvCE 9m0/CLP4Ci5RWF2kUC8ej+0in+CDaEDhxus0o= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:date:message-id:subject:from:to:content-type; b=TxdHmcPpBi67oMaOK0AmsIl2KRZbzr8DJozIMeG2AInPamsCh71XVgcLZuErmFnOUT wHp/gpYuOm/lm3jga2JBr44+gpqNauzvFf+uAxIP6U8XuizuR75IhL5PRQI3GOMiv6fx 41AEi7VjZ/r5r1RdRd8AmEfpn1NC0B2WPJTZs= MIME-Version: 1.0 Received: by 10.216.231.83 with SMTP id k61mr4696890weq.88.1284522018297; Tue, 14 Sep 2010 20:40:18 -0700 (PDT) Received: by 10.216.232.140 with HTTP; Tue, 14 Sep 2010 20:40:18 -0700 (PDT) Date: Tue, 14 Sep 2010 22:40:18 -0500 Message-ID: From: Dan Davis To: freebsd-fs@freebsd.org X-Mailman-Approved-At: Wed, 15 Sep 2010 11:03:25 +0000 Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Subject: RAID-Z pool causing system lockups X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 15 Sep 2010 04:11:54 -0000 Hello there, I'm experiencing the same as Sergey http://lists.freebsd.org/pipermail/freebsd-fs/2010-September/009288.html The pool that I'm having trouble started as v13 under FBSD 8R back in December. It worked fine after moving to 8.1R. Only after upgrading to v14 in August I have been unable to access the pool at all. I've tried the disks over two separate machines running 8.1R with the same results. ZFS will recognize the pool as present after "zpool import," but any attempts to import them or mount them causes the system to hang. I ran dd to see if I could access the disks, along with checking their SMART records and they seem to be working just fine. Please let me know with what system information I can give you that will be of help. The kernel isn't giving me any panic screens when this happens, unfortunately. Blessings, -Dan D. -- http://alpha2delta.blogspot.com/ Is it faith to understand nothing, and merely submit your convictions implicitly to the Church? 
-John Calvin From owner-freebsd-fs@FreeBSD.ORG Wed Sep 15 11:07:53 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 693581065670; Wed, 15 Sep 2010 11:07:53 +0000 (UTC) (envelope-from avg@freebsd.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id 4267C8FC08; Wed, 15 Sep 2010 11:07:52 +0000 (UTC) Received: from porto.topspin.kiev.ua (porto-e.starpoint.kiev.ua [212.40.38.100]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id OAA05671; Wed, 15 Sep 2010 14:07:47 +0300 (EEST) (envelope-from avg@freebsd.org) Received: from localhost.topspin.kiev.ua ([127.0.0.1]) by porto.topspin.kiev.ua with esmtp (Exim 4.34 (FreeBSD)) id 1Ovpq6-0000N2-Oy; Wed, 15 Sep 2010 14:07:46 +0300 Message-ID: <4C90A901.5000200@freebsd.org> Date: Wed, 15 Sep 2010 14:07:45 +0300 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; U; FreeBSD amd64; en-US; rv:1.9.2.9) Gecko/20100912 Lightning/1.0b2 Thunderbird/3.1.3 MIME-Version: 1.0 To: Steven Hartland References: <20100908084855.GF2465@deviant.kiev.zoral.com.ua> <4C874F00.3050605@freebsd.org> <4C8D087B.5040404@freebsd.org> <03537796FAB54E02959E2D64FC83004F@multiplay.co.uk> <4C8D280F.3040803@freebsd.org> <3FBF66BF11AA4CBBA6124CA435A4A31B@multiplay.co.uk> <4C8E4212.30000@freebsd.org> <20100915104635.GA59871@icarus.home.lan> <8E233260F0334BC58B2C07F383939F8E@multiplay.co.uk> In-Reply-To: <8E233260F0334BC58B2C07F383939F8E@multiplay.co.uk> X-Enigmail-Version: 1.1.2 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org, jhell , Pawel Jakub Dawidek Subject: Re: zfs very poor performance compared to ufs due to lack of cache? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 15 Sep 2010 11:07:53 -0000 on 15/09/2010 13:54 Steven Hartland said the following: > > Already running that as part of the patches unfortunately, it doesn't seem > to have any effect. Well, it does have an effect for me in strictly controlled micro-test. 
-- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Wed Sep 15 11:08:35 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 8FD841065672; Wed, 15 Sep 2010 11:08:35 +0000 (UTC) (envelope-from avg@freebsd.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id 90DEE8FC18; Wed, 15 Sep 2010 11:08:34 +0000 (UTC) Received: from porto.topspin.kiev.ua (porto-e.starpoint.kiev.ua [212.40.38.100]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id OAA05688; Wed, 15 Sep 2010 14:08:32 +0300 (EEST) (envelope-from avg@freebsd.org) Received: from localhost.topspin.kiev.ua ([127.0.0.1]) by porto.topspin.kiev.ua with esmtp (Exim 4.34 (FreeBSD)) id 1Ovpqp-0000NF-R7; Wed, 15 Sep 2010 14:08:31 +0300 Message-ID: <4C90A92F.2090303@freebsd.org> Date: Wed, 15 Sep 2010 14:08:31 +0300 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; U; FreeBSD amd64; en-US; rv:1.9.2.9) Gecko/20100912 Lightning/1.0b2 Thunderbird/3.1.3 MIME-Version: 1.0 To: Steven Hartland References: <5DB6E7C798E44D33A05673F4B773405E@multiplay.co.uk><4C85E91E.1010602@icyb.net.ua><4C873914.40404@freebsd.org><20100908084855.GF2465@deviant.kiev.zoral.com.ua><4C874F00.3050605@freebsd.org><4C8D087B.5040404@freebsd.org><03537796FAB54E02959E2D64FC83004F@multiplay.co.uk><4C8D280F.3040803@freebsd.org><3FBF66BF11AA4CBBA6124CA435A4A31B@multiplay.co.uk><4C8E4212.30000@freebsd.org> In-Reply-To: X-Enigmail-Version: 1.1.2 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org, jhell , Pawel Jakub Dawidek Subject: Re: zfs very poor performance compared to ufs due to lack of cache? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 15 Sep 2010 11:08:35 -0000 on 15/09/2010 13:32 Steven Hartland said the following: > The source data, xls's and exported graphs can be found here:- I don't see source data in the archive. 
> http://www.multiplaygameservers.com/dropzone/zfs-sendfile-results.zip -- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Wed Sep 15 11:56:19 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 250401065670; Wed, 15 Sep 2010 11:56:19 +0000 (UTC) (envelope-from prvs=1874f602db=killing@multiplay.co.uk) Received: from mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) by mx1.freebsd.org (Postfix) with ESMTP id 661BC8FC12; Wed, 15 Sep 2010 11:56:18 +0000 (UTC) X-MDAV-Processed: mail1.multiplay.co.uk, Wed, 15 Sep 2010 12:56:13 +0100 X-Spam-Processed: mail1.multiplay.co.uk, Wed, 15 Sep 2010 12:56:12 +0100 X-Spam-Checker-Version: SpamAssassin 3.2.5 (2008-06-10) on mail1.multiplay.co.uk X-Spam-Level: X-Spam-Status: No, score=-5.0 required=6.0 tests=USER_IN_WHITELIST shortcircuit=ham autolearn=disabled version=3.2.5 Received: from r2d2 by mail1.multiplay.co.uk (MDaemon PRO v10.0.4) with ESMTP id md50011248317.msg; Wed, 15 Sep 2010 12:56:12 +0100 X-Authenticated-Sender: Killing@multiplay.co.uk X-MDRemoteIP: 188.220.16.49 X-Return-Path: prvs=1874f602db=killing@multiplay.co.uk X-Envelope-From: killing@multiplay.co.uk Message-ID: <412E0DD28FEF4E25AB786C2B204D4BB5@multiplay.co.uk> From: "Steven Hartland" To: "Andriy Gapon" References: <5DB6E7C798E44D33A05673F4B773405E@multiplay.co.uk><4C85E91E.1010602@icyb.net.ua><4C873914.40404@freebsd.org><20100908084855.GF2465@deviant.kiev.zoral.com.ua><4C874F00.3050605@freebsd.org><4C8D087B.5040404@freebsd.org><03537796FAB54E02959E2D64FC83004F@multiplay.co.uk><4C8D280F.3040803@freebsd.org><3FBF66BF11AA4CBBA6124CA435A4A31B@multiplay.co.uk><4C8E4212.30000@freebsd.org> <4C90A92F.2090303@freebsd.org> Date: Wed, 15 Sep 2010 12:56:08 +0100 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="utf-8"; reply-type=original Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.5931 Cc: freebsd-fs@freebsd.org, jhell , Pawel Jakub Dawidek Subject: Re: zfs very poor performance compared to ufs due to lack of cache? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 15 Sep 2010 11:56:19 -0000 Source data is in the files: sendfile-on-arcstats-idle.txt and sendfile-off-arcstats-idle.txt For reference:- start = results just after boot inprog = snap shot while test is running stop = results just after new requests where disabled idle = test has finished and connections have had chance to finish and the machine return to an idle state. Regards Steve ----- Original Message ----- From: "Andriy Gapon" To: "Steven Hartland" Cc: ; "Pawel Jakub Dawidek" ; "jhell" Sent: Wednesday, September 15, 2010 12:08 PM Subject: Re: zfs very poor performance compared to ufs due to lack of cache? > on 15/09/2010 13:32 Steven Hartland said the following: >> The source data, xls's and exported graphs can be found here:- > > I don't see source data in the archive. > >> http://www.multiplaygameservers.com/dropzone/zfs-sendfile-results.zip ================================================ This e.mail is private and confidential between Multiplay (UK) Ltd. and the person or entity to whom it is addressed. 
In the event of misdirection, the recipient is prohibited from using, copying, printing or otherwise disseminating it or any information contained in it. In the event of misdirection, illegible or incomplete transmission please telephone +44 845 868 1337 or return the E.mail to postmaster@multiplay.co.uk. From owner-freebsd-fs@FreeBSD.ORG Wed Sep 15 11:58:04 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id D16B1106564A; Wed, 15 Sep 2010 11:58:04 +0000 (UTC) (envelope-from avg@freebsd.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id BCF7C8FC1F; Wed, 15 Sep 2010 11:58:03 +0000 (UTC) Received: from porto.topspin.kiev.ua (porto-e.starpoint.kiev.ua [212.40.38.100]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id OAA06885; Wed, 15 Sep 2010 14:58:00 +0300 (EEST) (envelope-from avg@freebsd.org) Received: from localhost.topspin.kiev.ua ([127.0.0.1]) by porto.topspin.kiev.ua with esmtp (Exim 4.34 (FreeBSD)) id 1Ovqci-0000RB-IE; Wed, 15 Sep 2010 14:58:00 +0300 Message-ID: <4C90B4C8.90203@freebsd.org> Date: Wed, 15 Sep 2010 14:58:00 +0300 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; U; FreeBSD amd64; en-US; rv:1.9.2.9) Gecko/20100912 Lightning/1.0b2 Thunderbird/3.1.3 MIME-Version: 1.0 To: Steven Hartland References: <5DB6E7C798E44D33A05673F4B773405E@multiplay.co.uk><4C85E91E.1010602@icyb.net.ua><4C873914.40404@freebsd.org><20100908084855.GF2465@deviant.kiev.zoral.com.ua><4C874F00.3050605@freebsd.org><4C8D087B.5040404@freebsd.org><03537796FAB54E02959E2D64FC83004F@multiplay.co.uk><4C8D280F.3040803@freebsd.org><3FBF66BF11AA4CBBA6124CA435A4A31B@multiplay.co.uk><4C8E4212.30000@freebsd.org> In-Reply-To: X-Enigmail-Version: 1.1.2 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org, jhell , Pawel Jakub Dawidek Subject: Re: zfs very poor performance compared to ufs due to lack of cache? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 15 Sep 2010 11:58:05 -0000 on 15/09/2010 13:32 Steven Hartland said the following: > === conclusion === > The interaction of zfs and sendfile is causing large amounts of memory > to end up in the inactive pool and only the use of a hard min arc limit is > ensuring that zfs forces the vm to release said memory so that it can be > used by zfs arc. Memory ends up as inactive because of how sendfile works. It first pulls data into a page cache as active pages. After pages are not used for a while, they become inactive. Pagedaemon can further recycle inactive pages, but only if there is any shortage. In your situation there is no shortage, so pages just stay there, but are ready to be reclaimed (or re-activated) at any moment. They are not a waste! Just a form of a cache. If ARC size doesn't grow in that condition, then it means that ZFS simply doesn't need it to. General problem of double-caching with ZFS still remains and will remain and nobody promised to fix that. I.e. with sendfile (or mmap) you will end up with two copies of data, one in page cache and the other in ARC. That happens on Solaris too, no magic. The things I am trying to fix are: 1. Interaction between ARC and the rest of VM during page shortage; you don't seem to have much of that, so you don't see it. 
Besides, your range for ARC size is quite narrow and your workload is so peculiar that your setup is not the best one for testing this. 2. Copying of data from ARC to page cache each time the same data is served by sendfile. You won't see much changes without monitoring ARC hits as Wiktor has suggested. In bad case there would be many hits because the same data is constantly copied from ARC to page cache (and that simply kills any benefit sendfile may have). In good case there would be much less hits, because data is not copied, but is served directly from page cache. > The source data, xls's and exported graphs can be found here:- > http://www.multiplaygameservers.com/dropzone/zfs-sendfile-results.zip So, what problem, performance or otherwise, do you perceive with your system's behavior? Because I don't see any. To summarize: 1. With sendfile enabled you will have two copies of actively served data in RAM, but perhaps slightly faster performance, because of avoiding another copy to mbuf in sendfile(2). 2. With sendfile disabled, you will have one copy of actively served data in RAM (in ARC), but perhaps slightly slower performance because of a need to make a copy to mbuf. Which would serve you better depends on size of your hot data vs RAM size, and on actual benefit from avoiding the copying to mbuf. I have never measured the latter, so I don't have any real numbers. >From your graphs it seems that your hot data (multiplied by two) is larger than what your RAM can accommodate, so you should benefit from disabling sendfile. -- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Wed Sep 15 11:59:51 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 6AC2D1065679; Wed, 15 Sep 2010 11:59:51 +0000 (UTC) (envelope-from avg@freebsd.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id 66A0B8FC18; Wed, 15 Sep 2010 11:59:50 +0000 (UTC) Received: from porto.topspin.kiev.ua (porto-e.starpoint.kiev.ua [212.40.38.100]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id OAA06910; Wed, 15 Sep 2010 14:59:48 +0300 (EEST) (envelope-from avg@freebsd.org) Received: from localhost.topspin.kiev.ua ([127.0.0.1]) by porto.topspin.kiev.ua with esmtp (Exim 4.34 (FreeBSD)) id 1OvqeR-0000RZ-Pa; Wed, 15 Sep 2010 14:59:47 +0300 Message-ID: <4C90B533.9030909@freebsd.org> Date: Wed, 15 Sep 2010 14:59:47 +0300 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; U; FreeBSD amd64; en-US; rv:1.9.2.9) Gecko/20100912 Lightning/1.0b2 Thunderbird/3.1.3 MIME-Version: 1.0 To: Steven Hartland References: <5DB6E7C798E44D33A05673F4B773405E@multiplay.co.uk><4C85E91E.1010602@icyb.net.ua><4C873914.40404@freebsd.org><20100908084855.GF2465@deviant.kiev.zoral.com.ua><4C874F00.3050605@freebsd.org><4C8D087B.5040404@freebsd.org><03537796FAB54E02959E2D64FC83004F@multiplay.co.uk><4C8D280F.3040803@freebsd.org><3FBF66BF11AA4CBBA6124CA435A4A31B@multiplay.co.uk><4C8E4212.30000@freebsd.org> <4C90A92F.2090303@freebsd.org> <412E0DD28FEF4E25AB786C2B204D4BB5@multiplay.co.uk> In-Reply-To: <412E0DD28FEF4E25AB786C2B204D4BB5@multiplay.co.uk> X-Enigmail-Version: 1.1.2 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org, jhell , Pawel Jakub Dawidek Subject: Re: zfs very poor performance compared to ufs due to lack of cache? 
X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 15 Sep 2010 11:59:51 -0000 on 15/09/2010 14:56 Steven Hartland said the following: > Source data is in the files: sendfile-on-arcstats-idle.txt and > sendfile-off-arcstats-idle.txt Miscommunication :( I thought 'source data' was data on your source code, not source data for the graphs. Sorry about that. > For reference:- > start = results just after boot > inprog = snap shot while test is running > stop = results just after new requests where disabled > idle = test has finished and connections have had chance to finish and the machine > return to an idle state. -- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Wed Sep 15 13:42:28 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id E074D1065670; Wed, 15 Sep 2010 13:42:28 +0000 (UTC) (envelope-from prvs=1874f602db=killing@multiplay.co.uk) Received: from mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) by mx1.freebsd.org (Postfix) with ESMTP id 259738FC17; Wed, 15 Sep 2010 13:42:27 +0000 (UTC) X-MDAV-Processed: mail1.multiplay.co.uk, Wed, 15 Sep 2010 14:42:22 +0100 X-Spam-Processed: mail1.multiplay.co.uk, Wed, 15 Sep 2010 14:42:22 +0100 X-Spam-Checker-Version: SpamAssassin 3.2.5 (2008-06-10) on mail1.multiplay.co.uk X-Spam-Level: X-Spam-Status: No, score=-5.0 required=6.0 tests=USER_IN_WHITELIST shortcircuit=ham autolearn=disabled version=3.2.5 Received: from r2d2 by mail1.multiplay.co.uk (MDaemon PRO v10.0.4) with ESMTP id md50011248828.msg; Wed, 15 Sep 2010 14:42:20 +0100 X-Authenticated-Sender: Killing@multiplay.co.uk X-MDRemoteIP: 188.220.16.49 X-Return-Path: prvs=1874f602db=killing@multiplay.co.uk X-Envelope-From: killing@multiplay.co.uk Message-ID: <6DFACB27CA8A4A22898BC81E55C4FD36@multiplay.co.uk> From: "Steven Hartland" To: "Andriy Gapon" References: <5DB6E7C798E44D33A05673F4B773405E@multiplay.co.uk><4C85E91E.1010602@icyb.net.ua><4C873914.40404@freebsd.org><20100908084855.GF2465@deviant.kiev.zoral.com.ua><4C874F00.3050605@freebsd.org><4C8D087B.5040404@freebsd.org><03537796FAB54E02959E2D64FC83004F@multiplay.co.uk><4C8D280F.3040803@freebsd.org><3FBF66BF11AA4CBBA6124CA435A4A31B@multiplay.co.uk><4C8E4212.30000@freebsd.org> <4C90B4C8.90203@freebsd.org> Date: Wed, 15 Sep 2010 14:42:08 +0100 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="utf-8"; reply-type=original Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.5931 Cc: freebsd-fs@freebsd.org, jhell , Pawel Jakub Dawidek Subject: Re: zfs very poor performance compared to ufs due to lack of cache? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 15 Sep 2010 13:42:29 -0000 ----- Original Message ----- From: "Andriy Gapon" > on 15/09/2010 13:32 Steven Hartland said the following: >> === conclusion === >> The interaction of zfs and sendfile is causing large amounts of memory >> to end up in the inactive pool and only the use of a hard min arc limit is >> ensuring that zfs forces the vm to release said memory so that it can be >> used by zfs arc. 
> > Memory ends up as inactive because of how sendfile works. It first pulls data > into a page cache as active pages. After pages are not used for a while, they > become inactive. Pagedaemon can further recycle inactive pages, but only if > there is any shortage. In your situation there is no shortage, so pages just > stay there, but are ready to be reclaimed (or re-activated) at any moment. > They are not a waste! Just a form of a cache. That doesnt seem to explain why without setting a min arc cache the io to disk went nuts even though only a few files where being requested. This however was prior to the upgrade to stable and all patches so I think I need remove the configured min for arc from loader and retest with the current code base to confirm this is still an issue. > If ARC size doesn't grow in that condition, then it means that ZFS simply > doesn't need it to. So what your saying is that even with zero arc there should be no IO required as it should come direct from inactive pages? Another reason to retest with no hard coded arc settings. > General problem of double-caching with ZFS still remains and will remain and > nobody promised to fix that. > I.e. with sendfile (or mmap) you will end up with two copies of data, one in > page cache and the other in ARC. That happens on Solaris too, no magic. Obviously this is quite an issue as a 1GB source file will require 2GB of memory to stream hence totally outweighing any benefit of the zero copy sendfile offers? > The things I am trying to fix are: > 1. Interaction between ARC and the rest of VM during page shortage; you don't > seem to have much of that, so you don't see it. Besides, your range for ARC > size is quite narrow and your workload is so peculiar that your setup is not the > best one for testing this. Indeed we have no other memory pressures, but holding two copies of the data is an issue. This doesn't seem to be the case in ufs so where's the difference? > 2. Copying of data from ARC to page cache each time the same data is served by > sendfile. You won't see much changes without monitoring ARC hits as Wiktor has > suggested. In bad case there would be many hits because the same data is > constantly copied from ARC to page cache (and that simply kills any benefit > sendfile may have). In good case there would be much less hits, because data is > not copied, but is served directly from page cache. Indeed. Where would this need to be addressed as ufs doesn't suffer from this? >> The source data, xls's and exported graphs can be found here:- >> http://www.multiplaygameservers.com/dropzone/zfs-sendfile-results.zip > > So, what problem, performance or otherwise, do you perceive with your system's > behavior? Because I don't see any. The initial problem was that with a default config, ie no hard coded min or max on arc the machine very quickly becomes seriously IO bottlenecked which simply doesn't happen on ufs. Now we have a very simple setup so we can make sensible values for min / max but it still means that for every file being sent when sendfile is enabled: 1. There are two copies in memory which is still going to mean that only half the amount files can be successfully cached and served without resorting to disk IO. 2. sendfile isn't achieving what it states it should be i.e. a zero-copy. Does this explain the other odd behaviour we noticed, high CPU usage from nginx? > To summarize: > 1. 
With sendfile enabled you will have two copies of actively served data in > RAM, but perhaps slightly faster performance, because of avoiding another copy > to mbuf in sendfile(2). > 2. With sendfile disabled, you will have one copy of actively served data in RAM > (in ARC), but perhaps slightly slower performance because of a need to make a > copy to mbuf. > > Which would serve you better depends on size of your hot data vs RAM size, and > on actual benefit from avoiding the copying to mbuf. I have never measured the > latter, so I don't have any real numbers. > From your graphs it seems that your hot data (multiplied by two) is larger than > what your RAM can accommodate, so you should benefit from disabling sendfile. This is what I thought, memory pressure has been eased from the initial problem point due to a memory increase from 4 - 7GB in the machine in question, but it seems at this point both 1 and 2 are far from ideal situations both having fairly serious side effects on memory use / bandwidth and possibly CPU, especially as hot data vs. clients is never going to be static ratio and hence both are going to fall down at some point :( I suspect this is going to be effecting quite a few users with nginx and others that use sendfile for high performance file transmission becoming more and more popular as is zfs. So the question is how do we remove these unexpected bottlenecks and make zfs as efficient as ufs when sendfile is used? Regards Steve ================================================ This e.mail is private and confidential between Multiplay (UK) Ltd. and the person or entity to whom it is addressed. In the event of misdirection, the recipient is prohibited from using, copying, printing or otherwise disseminating it or any information contained in it. In the event of misdirection, illegible or incomplete transmission please telephone +44 845 868 1337 or return the E.mail to postmaster@multiplay.co.uk. 
From owner-freebsd-fs@FreeBSD.ORG Wed Sep 15 13:43:37 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 80F431065673; Wed, 15 Sep 2010 13:43:37 +0000 (UTC) (envelope-from prvs=1874f602db=killing@multiplay.co.uk) Received: from mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) by mx1.freebsd.org (Postfix) with ESMTP id BDED58FC21; Wed, 15 Sep 2010 13:43:36 +0000 (UTC) X-MDAV-Processed: mail1.multiplay.co.uk, Wed, 15 Sep 2010 14:43:32 +0100 X-Spam-Processed: mail1.multiplay.co.uk, Wed, 15 Sep 2010 14:43:32 +0100 X-Spam-Checker-Version: SpamAssassin 3.2.5 (2008-06-10) on mail1.multiplay.co.uk X-Spam-Level: X-Spam-Status: No, score=-5.0 required=6.0 tests=USER_IN_WHITELIST shortcircuit=ham autolearn=disabled version=3.2.5 Received: from r2d2 by mail1.multiplay.co.uk (MDaemon PRO v10.0.4) with ESMTP id md50011248856.msg; Wed, 15 Sep 2010 14:43:31 +0100 X-Authenticated-Sender: Killing@multiplay.co.uk X-MDRemoteIP: 188.220.16.49 X-Return-Path: prvs=1874f602db=killing@multiplay.co.uk X-Envelope-From: killing@multiplay.co.uk Message-ID: From: "Steven Hartland" To: "Andriy Gapon" References: <5DB6E7C798E44D33A05673F4B773405E@multiplay.co.uk><4C85E91E.1010602@icyb.net.ua><4C873914.40404@freebsd.org><20100908084855.GF2465@deviant.kiev.zoral.com.ua><4C874F00.3050605@freebsd.org><4C8D087B.5040404@freebsd.org><03537796FAB54E02959E2D64FC83004F@multiplay.co.uk><4C8D280F.3040803@freebsd.org><3FBF66BF11AA4CBBA6124CA435A4A31B@multiplay.co.uk><4C8E4212.30000@freebsd.org> <4C90A92F.2090303@freebsd.org> <412E0DD28FEF4E25AB786C2B204D4BB5@multiplay.co.uk> <4C90B533.9030909@freebsd.org> Date: Wed, 15 Sep 2010 14:43:26 +0100 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="utf-8"; reply-type=original Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.5931 Cc: freebsd-fs@freebsd.org, jhell , Pawel Jakub Dawidek Subject: Re: zfs very poor performance compared to ufs due to lack of cache? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 15 Sep 2010 13:43:37 -0000 ----- Original Message ----- From: "Andriy Gapon" To: "Steven Hartland" Cc: ; "Pawel Jakub Dawidek" ; "jhell" Sent: Wednesday, September 15, 2010 12:59 PM Subject: Re: zfs very poor performance compared to ufs due to lack of cache? > on 15/09/2010 14:56 Steven Hartland said the following: >> Source data is in the files: sendfile-on-arcstats-idle.txt and >> sendfile-off-arcstats-idle.txt > > Miscommunication :( > I thought 'source data' was data on your source code, not source data for the > graphs. Sorry about that. Hehe no problem would you like me to tar up our current zfs source? Regards Steve ================================================ This e.mail is private and confidential between Multiplay (UK) Ltd. and the person or entity to whom it is addressed. In the event of misdirection, the recipient is prohibited from using, copying, printing or otherwise disseminating it or any information contained in it. In the event of misdirection, illegible or incomplete transmission please telephone +44 845 868 1337 or return the E.mail to postmaster@multiplay.co.uk. 
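A hedged aside, not from the original thread: the "hard min arc limit" and the loader-configured ARC minimum discussed earlier in this thread are set through loader tunables. The values below are placeholders for illustration only, not the settings used on the machine being discussed.

# /boot/loader.conf
vfs.zfs.arc_min="2147483648"   # 2 GB floor: ZFS tries to keep at least this much ARC
vfs.zfs.arc_max="4294967296"   # 4 GB ceiling: the ARC will not grow past this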
From owner-freebsd-fs@FreeBSD.ORG Wed Sep 15 14:09:58 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 59448106566C; Wed, 15 Sep 2010 14:09:58 +0000 (UTC) (envelope-from avg@freebsd.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id 321BD8FC18; Wed, 15 Sep 2010 14:09:56 +0000 (UTC) Received: from odyssey.starpoint.kiev.ua (alpha-e.starpoint.kiev.ua [212.40.38.101]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id RAA09692; Wed, 15 Sep 2010 17:09:37 +0300 (EEST) (envelope-from avg@freebsd.org) Message-ID: <4C90D3A1.7030008@freebsd.org> Date: Wed, 15 Sep 2010 17:09:37 +0300 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; U; FreeBSD amd64; en-US; rv:1.9.2.9) Gecko/20100909 Lightning/1.0b2 Thunderbird/3.1.3 MIME-Version: 1.0 To: Steven Hartland References: <5DB6E7C798E44D33A05673F4B773405E@multiplay.co.uk><4C85E91E.1010602@icyb.net.ua><4C873914.40404@freebsd.org><20100908084855.GF2465@deviant.kiev.zoral.com.ua><4C874F00.3050605@freebsd.org><4C8D087B.5040404@freebsd.org><03537796FAB54E02959E2D64FC83004F@multiplay.co.uk><4C8D280F.3040803@freebsd.org><3FBF66BF11AA4CBBA6124CA435A4A31B@multiplay.co.uk><4C8E4212.30000@freebsd.org> <4C90B4C8.90203@freebsd.org> <6DFACB27CA8A4A22898BC81E55C4FD36@multiplay.co.uk> In-Reply-To: <6DFACB27CA8A4A22898BC81E55C4FD36@multiplay.co.uk> X-Enigmail-Version: 1.1.2 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org, jhell , Pawel Jakub Dawidek Subject: Re: zfs very poor performance compared to ufs due to lack of cache? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 15 Sep 2010 14:09:58 -0000 on 15/09/2010 16:42 Steven Hartland said the following: > ----- Original Message ----- From: "Andriy Gapon" >> on 15/09/2010 13:32 Steven Hartland said the following: >>> === conclusion === >>> The interaction of zfs and sendfile is causing large amounts of memory >>> to end up in the inactive pool and only the use of a hard min arc limit is >>> ensuring that zfs forces the vm to release said memory so that it can be >>> used by zfs arc. >> >> Memory ends up as inactive because of how sendfile works. It first pulls data >> into a page cache as active pages. After pages are not used for a while, they >> become inactive. Pagedaemon can further recycle inactive pages, but only if >> there is any shortage. In your situation there is no shortage, so pages just >> stay there, but are ready to be reclaimed (or re-activated) at any moment. >> They are not a waste! Just a form of a cache. > > That doesnt seem to explain why without setting a min arc cache the io to disk > went nuts even though only a few files where being requested. > > This however was prior to the upgrade to stable and all patches so I think I need > remove the configured min for arc from loader and retest with the current code > base to confirm this is still an issue. Right, I described behavior that you should see after the patches are applied. Before patches it's too easy to drive ARC size into the ground. >> If ARC size doesn't grow in that condition, then it means that ZFS simply >> doesn't need it to. 
> > So what your saying is that even with zero arc there should be no IO required > as it should come direct from inactive pages? Another reason to retest with no > hard coded arc settings. No, I am not saying that. >> General problem of double-caching with ZFS still remains and will remain and >> nobody promised to fix that. >> I.e. with sendfile (or mmap) you will end up with two copies of data, one in >> page cache and the other in ARC. That happens on Solaris too, no magic. > > Obviously this is quite an issue as a 1GB source file will require 2GB of memory > to stream hence totally outweighing any benefit of the zero copy sendfile offers? I can't quite compare oranges to apples or speed to size, so that's up for you to decide in your particular situation. >> The things I am trying to fix are: >> 1. Interaction between ARC and the rest of VM during page shortage; you don't >> seem to have much of that, so you don't see it. Besides, your range for ARC >> size is quite narrow and your workload is so peculiar that your setup is not the >> best one for testing this. > > Indeed we have no other memory pressures, but holding two copies of the data is > an issue. This doesn't seem to be the case in ufs so where's the difference? UFS doesn't have its own dedicate private cache like ARC. It uses buffer cache system which means unified cache. >> 2. Copying of data from ARC to page cache each time the same data is served by >> sendfile. You won't see much changes without monitoring ARC hits as Wiktor has >> suggested. In bad case there would be many hits because the same data is >> constantly copied from ARC to page cache (and that simply kills any benefit >> sendfile may have). In good case there would be much less hits, because data is >> not copied, but is served directly from page cache. > > Indeed. Where would this need to be addressed as ufs doesn't suffer from this? In ZFS. But I don't think that this is going to happen any time soon if at all. Authors of ZFS specifically chose to use a dedicated cache, which is ARC. Talk to them, or don't use ZFS, or get used to it. ARC has a price, but it supposedly has benefits too. Changing ZFS to use buffer cache is a lot of work and effectively means not using ARC, IMO. >>> The source data, xls's and exported graphs can be found here:- >>> http://www.multiplaygameservers.com/dropzone/zfs-sendfile-results.zip >> >> So, what problem, performance or otherwise, do you perceive with your system's >> behavior? Because I don't see any. > > The initial problem was that with a default config, ie no hard coded min or max on > arc > the machine very quickly becomes seriously IO bottlenecked which simply doesn't > happen on ufs. Well, I thought that you hurried when you applied the patches and changed the settings at the same time. This made it impossible for you to judge properly what patches do and don't do for you. > Now we have a very simple setup so we can make sensible values for min / max but > it still means that for every file being sent when sendfile is enabled: > 1. There are two copies in memory which is still going to mean that only half the > amount files can be successfully cached and served without resorting to disk IO. Can't really say, depends on the size of the files. Though, it's approximately a half of what could have fit in memory with e.g. UFS, yes. > 2. sendfile isn't achieving what it states it should be i.e. a zero-copy. Does > this explain > the other odd behaviour we noticed, high CPU usage from nginx? 
sendfile should achieve zero copy with all the patches applied once both copies of data are settled in memory. If you have insufficient memory to hold the workset, then that's a different issue of moving competing data in and out of memory. And that may explain the CPU load, but it's just a speculation. >> To summarize: >> 1. With sendfile enabled you will have two copies of actively served data in >> RAM, but perhaps slightly faster performance, because of avoiding another copy >> to mbuf in sendfile(2). >> 2. With sendfile disabled, you will have one copy of actively served data in RAM >> (in ARC), but perhaps slightly slower performance because of a need to make a >> copy to mbuf. >> >> Which would serve you better depends on size of your hot data vs RAM size, and >> on actual benefit from avoiding the copying to mbuf. I have never measured the >> latter, so I don't have any real numbers. >> From your graphs it seems that your hot data (multiplied by two) is larger than >> what your RAM can accommodate, so you should benefit from disabling sendfile. > > This is what I thought, memory pressure has been eased from the initial problem point > due to a memory increase from 4 - 7GB in the machine in question, but it seems at > this point both 1 and 2 are far from ideal situations both having fairly serious > side effects > on memory use / bandwidth and possibly CPU, especially as hot data vs. clients is > never > going to be static ratio and hence both are going to fall down at some point :( > > I suspect this is going to be effecting quite a few users with nginx and others > that use > sendfile for high performance file transmission becoming more and more popular as is > zfs. > > So the question is how do we remove these unexpected bottlenecks and make zfs as > efficient as ufs when sendfile is used? At present I don't see any other way but brute force - throw even more RAM at the problem. Perhaps, a miracle would happen and someone would post patches that radically change ZFS behavior with respect to caches. But I don't expect it (pessimist/realist). 
-- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Wed Sep 15 14:10:56 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id B1D651065675 for ; Wed, 15 Sep 2010 14:10:56 +0000 (UTC) (envelope-from avg@freebsd.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id 010388FC1C for ; Wed, 15 Sep 2010 14:10:55 +0000 (UTC) Received: from odyssey.starpoint.kiev.ua (alpha-e.starpoint.kiev.ua [212.40.38.101]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id RAA09731; Wed, 15 Sep 2010 17:10:53 +0300 (EEST) (envelope-from avg@freebsd.org) Message-ID: <4C90D3ED.6050108@freebsd.org> Date: Wed, 15 Sep 2010 17:10:53 +0300 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; U; FreeBSD amd64; en-US; rv:1.9.2.9) Gecko/20100909 Lightning/1.0b2 Thunderbird/3.1.3 MIME-Version: 1.0 To: Steven Hartland References: <5DB6E7C798E44D33A05673F4B773405E@multiplay.co.uk><4C85E91E.1010602@icyb.net.ua><4C873914.40404@freebsd.org><20100908084855.GF2465@deviant.kiev.zoral.com.ua><4C874F00.3050605@freebsd.org><4C8D087B.5040404@freebsd.org><03537796FAB54E02959E2D64FC83004F@multiplay.co.uk><4C8D280F.3040803@freebsd.org><3FBF66BF11AA4CBBA6124CA435A4A31B@multiplay.co.uk><4C8E4212.30000@freebsd.org> <4C90A92F.2090303@freebsd.org> <412E0DD28FEF4E25AB786C2B204D4BB5@multiplay.co.uk> <4C90B533.9030909@freebsd.org> In-Reply-To: X-Enigmail-Version: 1.1.2 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org Subject: Re: zfs very poor performance compared to ufs due to lack of cache? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 15 Sep 2010 14:10:56 -0000 on 15/09/2010 16:43 Steven Hartland said the following: > > Hehe no problem would you like me to tar up our current zfs source? No, thanks. rXXXX plus diff will be sufficient. (Yes, I am that stubborn). 
-- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Wed Sep 15 15:04:52 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 802D1106564A; Wed, 15 Sep 2010 15:04:52 +0000 (UTC) (envelope-from prvs=1874f602db=killing@multiplay.co.uk) Received: from mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) by mx1.freebsd.org (Postfix) with ESMTP id B54288FC1A; Wed, 15 Sep 2010 15:04:51 +0000 (UTC) X-MDAV-Processed: mail1.multiplay.co.uk, Wed, 15 Sep 2010 16:04:46 +0100 X-Spam-Processed: mail1.multiplay.co.uk, Wed, 15 Sep 2010 16:04:46 +0100 X-Spam-Checker-Version: SpamAssassin 3.2.5 (2008-06-10) on mail1.multiplay.co.uk X-Spam-Level: X-Spam-Status: No, score=-5.0 required=6.0 tests=USER_IN_WHITELIST shortcircuit=ham autolearn=disabled version=3.2.5 Received: from r2d2 by mail1.multiplay.co.uk (MDaemon PRO v10.0.4) with ESMTP id md50011249194.msg; Wed, 15 Sep 2010 16:04:44 +0100 X-Authenticated-Sender: Killing@multiplay.co.uk X-MDRemoteIP: 188.220.16.49 X-Return-Path: prvs=1874f602db=killing@multiplay.co.uk X-Envelope-From: killing@multiplay.co.uk Message-ID: <0B1A90A08DFE4ADA9540F9F3846FDF38@multiplay.co.uk> From: "Steven Hartland" To: "Andriy Gapon" References: <5DB6E7C798E44D33A05673F4B773405E@multiplay.co.uk><4C85E91E.1010602@icyb.net.ua><4C873914.40404@freebsd.org><20100908084855.GF2465@deviant.kiev.zoral.com.ua><4C874F00.3050605@freebsd.org><4C8D087B.5040404@freebsd.org><03537796FAB54E02959E2D64FC83004F@multiplay.co.uk><4C8D280F.3040803@freebsd.org><3FBF66BF11AA4CBBA6124CA435A4A31B@multiplay.co.uk><4C8E4212.30000@freebsd.org> <4C90B4C8.90203@freebsd.org> <6DFACB27CA8A4A22898BC81E55C4FD36@multiplay.co.uk> <4C90D3A1.7030008@freebsd.org> Date: Wed, 15 Sep 2010 16:04:42 +0100 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="utf-8"; reply-type=original Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.5931 Cc: freebsd-fs@freebsd.org, jhell , Pawel Jakub Dawidek Subject: Re: zfs very poor performance compared to ufs due to lack of cache? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 15 Sep 2010 15:04:52 -0000 ----- Original Message ----- From: "Andriy Gapon" >> Indeed. Where would this need to be addressed as ufs doesn't suffer from this? > > In ZFS. But I don't think that this is going to happen any time soon if at all. > Authors of ZFS specifically chose to use a dedicated cache, which is ARC. > Talk to them, or don't use ZFS, or get used to it. > ARC has a price, but it supposedly has benefits too. > Changing ZFS to use buffer cache is a lot of work and effectively means not using > ARC, IMO. Hmm, so taking a different track on the issue is the a way to make sendfile use data directly from ARC instead of having to copy it first? > Well, I thought that you hurried when you applied the patches and changed the > settings at the same time. This made it impossible for you to judge properly what > patches do and don't do for you. No hurry just applying the patches that where suggested, retest, apply new retest etc but in parrallel been reading up on the arc tunables. 
>> Now we have a very simple setup so we can make sensible values for min / max but >> it still means that for every file being sent when sendfile is enabled: >> 1. There are two copies in memory which is still going to mean that only half the >> amount of files can be successfully cached and served without resorting to disk IO. > > Can't really say, depends on the size of the files. > Though, it's approximately a half of what could have fit in memory with e.g. UFS, yes. Out of interest, if a copy of the data is being made from ARC, what ties those two copies together, in order to prevent the next request for the same file having to create a third copy etc... >> 2. sendfile isn't achieving what it states it should be i.e. a zero-copy. Does >> this explain >> the other odd behaviour we noticed, high CPU usage from nginx? > > sendfile should achieve zero copy with all the patches applied once both copies of > data are settled in memory. If you have insufficient memory to hold the workset, > then that's a different issue of moving competing data in and out of memory. And > that may explain the CPU load, but it's just a speculation. Yes, more investigation needed ;-) > At present I don't see any other way but brute force - throw even more RAM at the > problem. > > Perhaps, a miracle would happen and someone would post patches that radically > change ZFS behavior with respect to caches. But I don't expect it > (pessimist/realist). Or alternatively, make sendfile work directly from ARC - would that be possible? Thanks for all the info :) Regards Steve ================================================ This e.mail is private and confidential between Multiplay (UK) Ltd. and the person or entity to whom it is addressed. In the event of misdirection, the recipient is prohibited from using, copying, printing or otherwise disseminating it or any information contained in it. In the event of misdirection, illegible or incomplete transmission please telephone +44 845 868 1337 or return the E.mail to postmaster@multiplay.co.uk.
From owner-freebsd-fs@FreeBSD.ORG Wed Sep 15 15:07:38 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 40F481065670 for ; Wed, 15 Sep 2010 15:07:38 +0000 (UTC) (envelope-from freebsd-fs@m.gmane.org) Received: from lo.gmane.org (lo.gmane.org [80.91.229.12]) by mx1.freebsd.org (Postfix) with ESMTP id B813A8FC0C for ; Wed, 15 Sep 2010 15:07:37 +0000 (UTC) Received: from list by lo.gmane.org with local (Exim 4.69) (envelope-from ) id 1OvtaA-0005hz-Uk for freebsd-fs@freebsd.org; Wed, 15 Sep 2010 17:07:34 +0200 Received: from lara.cc.fer.hr ([161.53.72.113]) by main.gmane.org with esmtp (Gmexim 0.1 (Debian)) id 1AlnuQ-0007hv-00 for ; Wed, 15 Sep 2010 17:07:34 +0200 Received: from ivoras by lara.cc.fer.hr with local (Gmexim 0.1 (Debian)) id 1AlnuQ-0007hv-00 for ; Wed, 15 Sep 2010 17:07:34 +0200 X-Injected-Via-Gmane: http://gmane.org/ To: freebsd-fs@freebsd.org From: Ivan Voras Date: Wed, 15 Sep 2010 17:07:28 +0200 Lines: 43 Message-ID: References: <5DB6E7C798E44D33A05673F4B773405E@multiplay.co.uk><4C85E91E.1010602@icyb.net.ua><4C873914.40404@freebsd.org><20100908084855.GF2465@deviant.kiev.zoral.com.ua><4C874F00.3050605@freebsd.org><4C8D087B.5040404@freebsd.org><03537796FAB54E02959E2D64FC83004F@multiplay.co.uk><4C8D280F.3040803@freebsd.org><3FBF66BF11AA4CBBA6124CA435A4A31B@multiplay.co.uk><4C8E4212.30000@freebsd.org> <4C90B4C8.90203@freebsd.org> <6DFACB27CA8A4A22898BC81E55C4FD36@multiplay.co.uk> <4C90D3A1.7030008@freebsd.org> Mime-Version: 1.0 Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 7bit X-Complaints-To: usenet@dough.gmane.org X-Gmane-NNTP-Posting-Host: lara.cc.fer.hr User-Agent: Mozilla/5.0 (X11; U; FreeBSD amd64; en-US; rv:1.9.1.9) Gecko/20100518 Thunderbird/3.0.4 In-Reply-To: <4C90D3A1.7030008@freebsd.org> X-Enigmail-Version: 1.0.1 Subject: Re: zfs very poor performance compared to ufs due to lack of cache? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 15 Sep 2010 15:07:38 -0000 On 09/15/10 16:09, Andriy Gapon wrote: > on 15/09/2010 16:42 Steven Hartland said the following: >>> General problem of double-caching with ZFS still remains and will remain and >>> nobody promised to fix that. >>> I.e. with sendfile (or mmap) you will end up with two copies of data, one in >>> page cache and the other in ARC. That happens on Solaris too, no magic. >> Obviously this is quite an issue as a 1GB source file will require 2GB of memory >> to stream hence totally outweighing any benefit of the zero copy sendfile offers? >> Indeed. Where would this need to be addressed as ufs doesn't suffer from this? > > In ZFS. But I don't think that this is going to happen any time soon if at all. > Authors of ZFS specifically chose to use a dedicated cache, which is ARC. > Talk to them, or don't use ZFS, or get used to it. > ARC has a price, but it supposedly has benefits too. (replying for the OPs benefit) This has been a question since the beginnings of ZFS on Solaris - the authors wanted their own control over the cache and hence the ARC was implemented (modified from the IBM's original). This decision has been contested as possibly ineffective but at the end it stayed. 
Here are some Googled references: http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg08692.html http://mail.opensolaris.org/pipermail/zfs-discuss/2009-July/029370.html There are also some problems which are curiously similar to ones people complain about in FreeBSD+ZFS: http://mail.opensolaris.org/pipermail/zfs-discuss/2009-July/029654.html http://mail.opensolaris.org/pipermail/zfs-discuss/2009-July/029362.html Random other references: http://www.almaden.ibm.com/cs/people/dmodha/arcfast.pdf http://nilesh-joshi.blogspot.com/2010/07/zfs-revisited.html http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg38362.html http://www.thezonemanager.com/2009/03/filesystem-cache-optimization.html From owner-freebsd-fs@FreeBSD.ORG Wed Sep 15 15:15:56 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id A69C8106564A; Wed, 15 Sep 2010 15:15:56 +0000 (UTC) (envelope-from avg@freebsd.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id 986AE8FC16; Wed, 15 Sep 2010 15:15:55 +0000 (UTC) Received: from odyssey.starpoint.kiev.ua (alpha-e.starpoint.kiev.ua [212.40.38.101]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id SAA11033; Wed, 15 Sep 2010 18:15:52 +0300 (EEST) (envelope-from avg@freebsd.org) Message-ID: <4C90E328.20606@freebsd.org> Date: Wed, 15 Sep 2010 18:15:52 +0300 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; U; FreeBSD amd64; en-US; rv:1.9.2.9) Gecko/20100909 Lightning/1.0b2 Thunderbird/3.1.3 MIME-Version: 1.0 To: Steven Hartland References: <5DB6E7C798E44D33A05673F4B773405E@multiplay.co.uk><4C85E91E.1010602@icyb.net.ua><4C873914.40404@freebsd.org><20100908084855.GF2465@deviant.kiev.zoral.com.ua><4C874F00.3050605@freebsd.org><4C8D087B.5040404@freebsd.org><03537796FAB54E02959E2D64FC83004F@multiplay.co.uk><4C8D280F.3040803@freebsd.org><3FBF66BF11AA4CBBA6124CA435A4A31B@multiplay.co.uk><4C8E4212.30000@freebsd.org> <4C90B4C8.90203@freebsd.org> <6DFACB27CA8A4A22898BC81E55C4FD36@multiplay.co.uk> <4C90D3A1.7030008@freebsd.org> <0B1A90A08DFE4ADA9540F9F3846FDF38@multiplay.co.uk> In-Reply-To: <0B1A90A08DFE4ADA9540F9F3846FDF38@multiplay.co.uk> X-Enigmail-Version: 1.1.2 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org, jhell , Pawel Jakub Dawidek Subject: Re: zfs very poor performance compared to ufs due to lack of cache? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 15 Sep 2010 15:15:56 -0000 on 15/09/2010 18:04 Steven Hartland said the following: > ----- Original Message ----- From: "Andriy Gapon" > >>> Indeed. Where would this need to be addressed as ufs doesn't suffer from this? >> >> In ZFS. But I don't think that this is going to happen any time soon if at all. >> Authors of ZFS specifically chose to use a dedicated cache, which is ARC. >> Talk to them, or don't use ZFS, or get used to it. >> ARC has a price, but it supposedly has benefits too. >> Changing ZFS to use buffer cache is a lot of work and effectively means not using >> ARC, IMO. > > Hmm, so taking a different track on the issue is the a way to make sendfile use data > directly from ARC instead of having to copy it first? Well, theoretically everything is possible, but I am not sure if it's feasible. 
It's a lot of work anyways, it should be a very specialized sendfile and a lot if inter-layer knowledge and dependencies. Don't hold your breath for it. >> Well, I thought that you hurried when you applied the patches and changed the >> settings at the same time. This made it impossible for you to judge properly what >> patches do and don't do for you. > > No hurry just applying the patches that where suggested, retest, apply new retest > etc but > in parrallel been reading up on the arc tunables. > >>> Now we have a very simple setup so we can make sensible values for min / max but >>> it still means that for every file being sent when sendfile is enabled: >>> 1. There are two copies in memory which is still going to mean that only half the >>> amount files can be successfully cached and served without resorting to disk IO. >> >> Can't really say, depends on the size of the files. >> Though, it's approximately a half of what could have fit in memory with e.g. >> UFS, yes. > > Out of interest if a copy of the data is being made from ARC whats ties those > two copies together, in order to prevent the next request for the same file having to > create a third copy etc... Read about FreeBSD VM, particularly about a notion of VM object. http://www.informit.com/store/product.aspx?isbn=0201702452 >>> 2. sendfile isn't achieving what it states it should be i.e. a zero-copy. Does >>> this explain >>> the other odd behaviour we noticed, high CPU usage from nginx? >> >> sendfile should achieve zero copy with all the patches applied once both copies of >> data are settled in memory. If you have insufficient memory to hold the workset, >> then that's a different issue of moving competing data in and out of memory. And >> that may explain the CPU load, but it's just a speculation. > > Yes, more investigation needed ;-) > >> At present I don't see any other way but brute force - throw even more RAM at the >> problem. >> >> Perhaps, a miracle would happen and someone would post patches that radically >> change ZFS behavior with respect to caches. But I don't expect it >> (pessimist/realist). > > Or alternatively make sendfile work directly from ARC, would that be possible? I'll be glad to review the patches :-) > Thanks for all the info :) You are welcome. You did a lot of testing and investigative work here and I hope this thread will be useful for other people researching the topic. 
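One rough way to see the two copies being discussed (just a sketch, using whatever large file you already serve from the ZFS dataset): read the file once, then compare the ARC size against the page-cache figures top reports:
  sysctl kstat.zfs.misc.arcstats.size   # the copy held by the ARC
  top -b | head -8                      # Active/Inact/Wired include the page-cache copy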
-- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Wed Sep 15 15:29:01 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 4F2621065672 for ; Wed, 15 Sep 2010 15:29:01 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) Received: from esa-annu.mail.uoguelph.ca (esa-annu.mail.uoguelph.ca [131.104.91.36]) by mx1.freebsd.org (Postfix) with ESMTP id 0AF818FC32 for ; Wed, 15 Sep 2010 15:29:00 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: ApwEABuDkEyDaFvO/2dsb2JhbACDG587sjKSGYEigyt0BIoshHc X-IronPort-AV: E=Sophos;i="4.56,371,1280721600"; d="scan'208";a="91964071" Received: from erie.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.206]) by esa-annu-pri.mail.uoguelph.ca with ESMTP; 15 Sep 2010 11:29:00 -0400 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id 0AFC5B3F21; Wed, 15 Sep 2010 11:29:00 -0400 (EDT) Date: Wed, 15 Sep 2010 11:28:59 -0400 (EDT) From: Rick Macklem To: Eric Crist , Thomas Johnson Message-ID: <1260697257.960376.1284564539991.JavaMail.root@erie.cs.uoguelph.ca> In-Reply-To: MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [24.65.230.102] X-Mailer: Zimbra 6.0.7_GA_2476.RHEL4 (ZimbraWebClient - SAF3 (Mac)/6.0.7_GA_2473.RHEL4_64) Cc: freebsd-fs@freebsd.org Subject: Re: NFS nfs_getpages errors X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 15 Sep 2010 15:29:01 -0000 > Hey folks, > > We've got 4 servers running FreeBSD 8.1-RELEASE which PXE boot with > NFS root. On these machines, we run proftpd and apache 2.2. Over the > past couple weeks, we've seen a ton of errors as follows: > > Sep 14 20:28:59 lion-3 proftpd[31761]: 0.0.0.0 > (folsom-1-red.claimlynx.com[216.17.68.130]) - ProFTPD terminating > (signal 11) > Sep 14 20:28:59 lion-3 kernel: nfs_getpages: error 1046353552 > Sep 14 20:28:59 lion-3 kernel: vm_fault: pager read error, pid 31761 > (proftpd) > Sep 14 20:28:59 lion-3 kernel: Sep 14 20:28:59 lion-3 proftpd[31761]: > 0.0.0.0 (folsom-1-red.claimlynx.com[216.17.68.130]) - ProFTPD > terminating (signal 11) > Sep 14 20:28:59 lion-3 kernel: nfs_getpages: error 1046353552 > Sep 14 20:28:59 lion-3 kernel: vm_fault: pager read error, pid 31761 > (proftpd) > Sep 14 20:28:59 lion-3 kernel: pid 31761 (proftpd), uid 0: exited on > signal 11 > > These, in this case, occurred on three of the four machines until > midnight after which all three of the machines had proftpd exit on > signal 11. The message above was for child processes. At midnight, the > logfile rotated, and newsyslog sent singal 1 to the parent process, > which I think finally finished it off. The fourth machine remained > running and did not display these messages. > > The number following 'nfs_getpages: error' changes for each cycle and > I'm not certain if any of them repeat. > Well, at a quick glance, those errors seem to be coming from the NFS server in a read reply. Also, the error values seem bogus, since they should be small positive numbers (1<->70 + a few just above 10000). Could you possibly get a packet capture when one of these happens? ("tcpdump -s -0 -w xxx host " would suffice, but you need to have it running when the error occurs. 
If you can reproduce it by talking to the proftpd server, so the tcpdump doesn't run for too long, that would be best.) You can look in the tcpdump via wireshark and see what it being returned for the Read RPCs at that time. (You can email me the "xxx" packet trace as an attachment and I can look at it, if you get that far.) rick ps: Otherwise, I'd go look at your NFS server and see if it's logging errors or if there are indications of problems. From owner-freebsd-fs@FreeBSD.ORG Wed Sep 15 15:30:09 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id DC0BD106564A for ; Wed, 15 Sep 2010 15:30:09 +0000 (UTC) (envelope-from ecrist@claimlynx.com) Received: from na3sys009aog102.obsmtp.com (na3sys009aog102.obsmtp.com [74.125.149.69]) by mx1.freebsd.org (Postfix) with ESMTP id 931C58FC2A for ; Wed, 15 Sep 2010 15:30:09 +0000 (UTC) Received: from source ([209.85.213.173]) by na3sys009aob102.postini.com ([74.125.148.12]) with SMTP ID DSNKTJDmgBTOuSsuZjBLnWQECLwRhWAFf7Tz@postini.com; Wed, 15 Sep 2010 08:30:09 PDT Received: by mail-yx0-f173.google.com with SMTP id 7so127100yxs.4 for ; Wed, 15 Sep 2010 08:30:08 -0700 (PDT) Received: by 10.151.14.15 with SMTP id r15mr2125501ybi.75.1284562899659; Wed, 15 Sep 2010 08:01:39 -0700 (PDT) Received: from swordfish.ply.claimlynx.com (mtka.claimlynx.com [74.95.66.25]) by mx.google.com with ESMTPS id m11sm2316194ybn.4.2010.09.15.08.01.37 (version=TLSv1/SSLv3 cipher=RC4-MD5); Wed, 15 Sep 2010 08:01:38 -0700 (PDT) Content-Type: text/plain; charset=us-ascii Mime-Version: 1.0 (Apple Message framework v1081) From: Eric Crist Date: Wed, 15 Sep 2010 10:01:36 -0500 Content-Transfer-Encoding: quoted-printable Message-Id: To: freebsd-fs@freebsd.org X-Mailer: Apple Mail (2.1081) Cc: Thomas Johnson Subject: NFS nfs_getpages errors X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: Eric Crist , Thomas Johnson List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 15 Sep 2010 15:30:09 -0000 Hey folks, We've got 4 servers running FreeBSD 8.1-RELEASE which PXE boot with NFS = root. On these machines, we run proftpd and apache 2.2. Over the past = couple weeks, we've seen a ton of errors as follows: Sep 14 20:28:59 lion-3 proftpd[31761]: 0.0.0.0 = (folsom-1-red.claimlynx.com[216.17.68.130]) - ProFTPD terminating = (signal 11)=20 Sep 14 20:28:59 lion-3 kernel: nfs_getpages: error 1046353552 Sep 14 20:28:59 lion-3 kernel: vm_fault: pager read error, pid 31761 = (proftpd) Sep 14 20:28:59 lion-3 kernel: Sep 14 20:28:59 lion-3 proftpd[31761]: = 0.0.0.0 (folsom-1-red.claimlynx.com[216.17.68.130]) - ProFTPD = terminating (signal 11)=20 Sep 14 20:28:59 lion-3 kernel: nfs_getpages: error 1046353552 Sep 14 20:28:59 lion-3 kernel: vm_fault: pager read error, pid 31761 = (proftpd) Sep 14 20:28:59 lion-3 kernel: pid 31761 (proftpd), uid 0: exited on = signal 11 These, in this case, occurred on three of the four machines until = midnight after which all three of the machines had proftpd exit on = signal 11. The message above was for child processes. At midnight, the = logfile rotated, and newsyslog sent singal 1 to the parent process, = which I think finally finished it off. The fourth machine remained = running and did not display these messages. The number following 'nfs_getpages: error' changes for each cycle and = I'm not certain if any of them repeat. 
root@lion-3:~-> uname -a FreeBSD lion-3.claimlynx.com 8.1-RELEASE FreeBSD 8.1-RELEASE #2: Mon Aug = 2 12:50:40 CDT 2010 = root@jaguar-1.claimlynx.com:/usr/obj/usr/src/sys/GENERIC-CARP amd64 --- Eric F Crist System Administrator ClaimLynx, Inc (952) 593-5969 x2301 From owner-freebsd-fs@FreeBSD.ORG Wed Sep 15 15:38:21 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id DD4961065675; Wed, 15 Sep 2010 15:38:21 +0000 (UTC) (envelope-from avg@freebsd.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id EA7618FC12; Wed, 15 Sep 2010 15:38:20 +0000 (UTC) Received: from odyssey.starpoint.kiev.ua (alpha-e.starpoint.kiev.ua [212.40.38.101]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id SAA11317; Wed, 15 Sep 2010 18:38:17 +0300 (EEST) (envelope-from avg@freebsd.org) Message-ID: <4C90E869.8000400@freebsd.org> Date: Wed, 15 Sep 2010 18:38:17 +0300 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; U; FreeBSD amd64; en-US; rv:1.9.2.9) Gecko/20100909 Lightning/1.0b2 Thunderbird/3.1.3 MIME-Version: 1.0 To: Steven Hartland References: <5DB6E7C798E44D33A05673F4B773405E@multiplay.co.uk><4C85E91E.1010602@icyb.net.ua><4C873914.40404@freebsd.org><20100908084855.GF2465@deviant.kiev.zoral.com.ua><4C874F00.3050605@freebsd.org><4C8D087B.5040404@freebsd.org><03537796FAB54E02959E2D64FC83004F@multiplay.co.uk><4C8D280F.3040803@freebsd.org><3FBF66BF11AA4CBBA6124CA435A4A31B@multiplay.co.uk><4C8E4212.30000@freebsd.org> <4C90B4C8.90203@freebsd.org> <6DFACB27CA8A4A22898BC81E55C4FD36@multiplay.co.uk> <4C90D3A1.7030008@freebsd.org> <0B1A90A08DFE4ADA9540F9F3846FDF38@multiplay.co.uk> <4C90E328.20606@freebsd.org> In-Reply-To: <4C90E328.20606@freebsd.org> X-Enigmail-Version: 1.1.2 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org, jhell , Pawel Jakub Dawidek , freebsd-net@freebsd.org Subject: Re: zfs very poor performance compared to ufs due to lack of cache? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 15 Sep 2010 15:38:21 -0000 on 15/09/2010 18:15 Andriy Gapon said the following: > on 15/09/2010 18:04 Steven Hartland said the following: >> Hmm, so taking a different track on the issue is the a way to make sendfile use data >> directly from ARC instead of having to copy it first? > > Well, theoretically everything is possible, but I am not sure if it's feasible. > It's a lot of work anyways, it should be a very specialized sendfile and a lot if > inter-layer knowledge and dependencies. > Don't hold your breath for it. Perhaps some middle-ground solution can be developed with less effort. This solution would be specific to filesystems that don't use buffer cache, so it wouldn't touch any pages, but instead it would use regular VOP_READ into a mbuf. So, there would be copying, but page caches won't be unnecessarily "polluted" with second copy of the data and this all would happen in kernel giving an advantage over userland solution with read(2)+send(2). Having said that, I see that OpenSolaris has a mechanism for something like that. The mechanism can either globally enabled or enabled for file over certain size. The mechanism uses dedicated kernel threads that get data using direct I/O, buffer and send it. 
That's my impression from a quick look, I may have gotten things wrong. -- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Wed Sep 15 15:40:02 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id D143F106564A; Wed, 15 Sep 2010 15:40:02 +0000 (UTC) (envelope-from doublef-ctm@yandex.ru) Received: from forward17.mail.yandex.net (forward17.mail.yandex.net [95.108.253.142]) by mx1.freebsd.org (Postfix) with ESMTP id 7A8828FC13; Wed, 15 Sep 2010 15:40:02 +0000 (UTC) Received: from smtp19.mail.yandex.net (smtp19.mail.yandex.net [95.108.252.19]) by forward17.mail.yandex.net (Yandex) with ESMTP id 83592A58C04; Wed, 15 Sep 2010 19:40:00 +0400 (MSD) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yandex.ru; s=mail; t=1284565200; bh=4jrE3qOvQLOFWRNCvsfxZwCCnmxYqqz3lizfFOinhgc=; h=Date:From:To:Cc:Subject:Message-ID:References:MIME-Version: Content-Type:In-Reply-To; b=eLfUg+mCZiQLuiPUmfU3CHuiq8X4qaYfej0h2J1Dum7IWiA+yA2L8f9aFa832YRpr opndvCn0XhImHo/srCvcffdISuzJfetPu+wCkXMI+BmhM9qzdYvyTuKm1msOcX4J/Q Atf1NXu4Z4FlNSu5/LVmHeZq9oCEqut1VDch8AiQ= Received: from nautilus (unknown [178.155.116.41]) by smtp19.mail.yandex.net (Yandex) with ESMTPA id 540D2287009E; Wed, 15 Sep 2010 19:40:00 +0400 (MSD) Received: by nautilus (Postfix, from userid 1001) id 113B51DD43E; Wed, 15 Sep 2010 19:39:59 +0400 (MSD) Date: Wed, 15 Sep 2010 19:39:58 +0400 From: Sergey Zaharchenko To: freebsd-fs@freebsd.org, freebsd-stable@freebsd.org Message-ID: <20100915153958.GA3256@nautilus.vmks.ru> References: <20100907164204.GA2571@nautilus.vmks.ru> <20100908065222.GA2522@nautilus.vmks.ru> <20100908103338.GA5091@nautilus.vmks.ru> <20100908104200.GA36566@icarus.home.lan> <20100909165829.GA2602@nautilus.vmks.ru> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="EeQfGwPcQSOJBaQU" Content-Disposition: inline In-Reply-To: <20100909165829.GA2602@nautilus.vmks.ru> X-Listening-To: Silence User-Agent: Mutt/1.5.20 (2009-06-14) X-Yandex-TimeMark: 1284565200 X-Yandex-Spam: 1 X-Yandex-Front: smtp19.mail.yandex.net Cc: Dan Davis Subject: Re: 8.1-RELEASE ZFS hangs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 15 Sep 2010 15:40:02 -0000 --EeQfGwPcQSOJBaQU Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable Hello list(s), After all it turned out that the HighPoint controller was buggy. When we later configured it in RAID5 mode, it reported drive faults for random drives that were OK. I don't get how/why it worked in geom stripe mode. Anyway, we switched to a 3ware card and ZFS works happily with it out of the box without any kernel tuning (I may do some fine-tuning later). So, I'm sorry for suspecting ZFS, and thanks to Pawel for maintaining it! However, seems like that there are other people, like Tim and Dan, who are having a similar problem, but without noticeable hardware relation. Maybe there is something to be learnt from their input. 
Sorry for the noise and thanks again, --=20 Sergey Zaharchenko --EeQfGwPcQSOJBaQU Content-Type: application/pgp-signature Content-Disposition: inline -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.14 (FreeBSD) iEUEARECAAYFAkyQ6M4ACgkQwo7hT/9lVdyAYwCePiXdrIlE4F3BapCA8KgFaMNp kzMAmP6T5cAZpDjQsR8tNWV5NxxU5SQ= =Q2Jb -----END PGP SIGNATURE----- --EeQfGwPcQSOJBaQU-- From owner-freebsd-fs@FreeBSD.ORG Wed Sep 15 15:44:13 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id A4BE910656A6 for ; Wed, 15 Sep 2010 15:44:13 +0000 (UTC) (envelope-from ecrist@secure-computing.net) Received: from kenny.secure-computing.net (unknown [IPv6:2001:470:1f11:463::210]) by mx1.freebsd.org (Postfix) with ESMTP id 588788FC14 for ; Wed, 15 Sep 2010 15:44:13 +0000 (UTC) Received: from swordfish.ply.claimlynx.com (mtka.claimlynx.com [74.95.66.25]) (using TLSv1 with cipher AES128-SHA (128/128 bits)) (No client certificate requested) (Authenticated sender: ecrist@secure-computing.net) by kenny.secure-computing.net (Postfix) with ESMTP id 907FF2E06D; Wed, 15 Sep 2010 10:44:12 -0500 (CDT) Mime-Version: 1.0 (Apple Message framework v1081) Content-Type: text/plain; charset=us-ascii From: Eric Crist In-Reply-To: <4C90E88D.9050608@comcast.net> Date: Wed, 15 Sep 2010 10:44:11 -0500 Content-Transfer-Encoding: quoted-printable Message-Id: References: <1260697257.960376.1284564539991.JavaMail.root@erie.cs.uoguelph.ca> <4C90E88D.9050608@comcast.net> To: Steve Polyack X-Mailer: Apple Mail (2.1081) Cc: freebsd-fs@freebsd.org, Thomas Johnson Subject: Re: NFS nfs_getpages errors X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 15 Sep 2010 15:44:13 -0000 On Sep 15, 2010, at 10:38:53, Steve Polyack wrote: > On 09/15/10 11:28, Rick Macklem wrote: >>> Hey folks, >>>=20 >>> We've got 4 servers running FreeBSD 8.1-RELEASE which PXE boot with >>> NFS root. On these machines, we run proftpd and apache 2.2. Over the >>> past couple weeks, we've seen a ton of errors as follows: >>>=20 >>> Sep 14 20:28:59 lion-3 proftpd[31761]: 0.0.0.0 >>> (folsom-1-red.claimlynx.com[216.17.68.130]) - ProFTPD terminating >>> (signal 11) >>> Sep 14 20:28:59 lion-3 kernel: nfs_getpages: error 1046353552 >>> Sep 14 20:28:59 lion-3 kernel: vm_fault: pager read error, pid 31761 >>> (proftpd) >>> Sep 14 20:28:59 lion-3 kernel: Sep 14 20:28:59 lion-3 = proftpd[31761]: >>> 0.0.0.0 (folsom-1-red.claimlynx.com[216.17.68.130]) - ProFTPD >>> terminating (signal 11) >>> Sep 14 20:28:59 lion-3 kernel: nfs_getpages: error 1046353552 >>> Sep 14 20:28:59 lion-3 kernel: vm_fault: pager read error, pid 31761 >>> (proftpd) >>> Sep 14 20:28:59 lion-3 kernel: pid 31761 (proftpd), uid 0: exited on >>> signal 11 >>>=20 >>> These, in this case, occurred on three of the four machines until >>> midnight after which all three of the machines had proftpd exit on >>> signal 11. The message above was for child processes. At midnight, = the >>> logfile rotated, and newsyslog sent singal 1 to the parent process, >>> which I think finally finished it off. The fourth machine remained >>> running and did not display these messages. >>>=20 >>> The number following 'nfs_getpages: error' changes for each cycle = and >>> I'm not certain if any of them repeat. 
>>>=20 >> Well, at a quick glance, those errors seem to be coming from the NFS >> server in a read reply. Also, the error values seem bogus, since they >> should be small positive numbers (1<->70 + a few just above 10000). > We see these errors on some 8.1 clients as well: > nfs_getpages: error 1110586608 > nfs_getpages: error 1108948624 > vm_fault: pager read error, pid 56216 (php) > nfs_getpages: error 1114969744 > vm_fault: pager read error, pid 54770 (php) > nfs_getpages: error 1137006224 > vm_fault: pager read error, pid 50578 (php) >=20 > They do not show up often, so we haven't spent much time looking into = it (no tcpdumps yet). Our NFS server is a 8-STABLE system backed by = ZFS, so maybe its related to that (again :) ). >=20 > Eric, is your NFS server backed by ZFS as well? >=20 > The NFS server doesn't seem to be logging any errors, but the = ret-failed count is always increasing: > Server Info: > Getattr Setattr Lookup Readlink Read Write Create = Remove > 543523097 14397049 1949982185 6380 17587820 14002952 8980955 = 8070238 > Rename Link Symlink Mkdir Rmdir Readdir RdirPlus = Access > 6966495 9 1668 1117125 904969 5567689 22307 = 184929325 > Mknod Fsstat Fsinfo PathConf Commit > 0 338500745 57 0 7129262 > Server Ret-Failed > 29089796 > Server Faults > 0 > Server Cache Stats: > Inprog Idem Non-idem Misses > 0 0 0 0 > Server Write Gathering: > WriteOps WriteRPC Opsaved > 14001235 14002952 1717 >=20 >> Could you possibly get a packet capture when one of these happens? >> ("tcpdump -s -0 -w xxx host" would suffice, but you need = to >> have it running when the error occurs. If you can reproduce it by >> talking to the proftpd server, so the tcpdump doesn't run for too >> long, that would be best.) >>=20 >> You can look in the tcpdump via wireshark and see what it being = returned >> for the Read RPCs at that time. (You can email me the "xxx" packet = trace >> as an attachment and I can look at it, if you get that far.) >>=20 >> rick >> ps: Otherwise, I'd go look at your NFS server and see if it's logging >> errors or if there are indications of problems. The NFS server is logging nothing at all related to NFS. It *is* = running 8.1-RC2, so there is potential for an update. If/when we notice = these errors again, we'll try to get a packet capture and forward it to = you. Our NFS server is backed by ZFS, as well. 
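Something along these lines, presumably (the interface name and server address are placeholders; -s 0 captures whole packets so the Read replies stay intact):
  tcpdump -s 0 -i em0 -w nfs-err.pcap host nfs-server
  # afterwards open nfs-err.pcap in wireshark and filter on "nfs" to look at the Read RPC replies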
Eric From owner-freebsd-fs@FreeBSD.ORG Wed Sep 15 15:54:42 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id AFCE11065670 for ; Wed, 15 Sep 2010 15:54:42 +0000 (UTC) (envelope-from korvus@comcast.net) Received: from mx04.pub.collaborativefusion.com (mx04.pub.collaborativefusion.com [206.210.72.84]) by mx1.freebsd.org (Postfix) with ESMTP id 68D668FC16 for ; Wed, 15 Sep 2010 15:54:42 +0000 (UTC) Received: from [192.168.2.164] ([206.210.89.202]) by mx04.pub.collaborativefusion.com (StrongMail Enterprise 4.1.1.4(4.1.1.4-47689)); Wed, 15 Sep 2010 11:20:14 -0400 X-VirtualServerGroup: Default X-MailingID: 00000::00000::00000::00000::::2974 X-SMHeaderMap: mid="X-MailingID" X-Destination-ID: freebsd-fs@freebsd.org X-SMFBL: ZnJlZWJzZC1mc0BmcmVlYnNkLm9yZw== Message-ID: <4C90E88D.9050608@comcast.net> Date: Wed, 15 Sep 2010 11:38:53 -0400 From: Steve Polyack User-Agent: Mozilla/5.0 (X11; U; FreeBSD amd64; en-US; rv:1.9.2.7) Gecko/20100805 Lightning/1.0b2 Thunderbird/3.1.1 MIME-Version: 1.0 To: Rick Macklem References: <1260697257.960376.1284564539991.JavaMail.root@erie.cs.uoguelph.ca> In-Reply-To: <1260697257.960376.1284564539991.JavaMail.root@erie.cs.uoguelph.ca> Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 7bit Cc: Eric Crist , freebsd-fs@freebsd.org, Thomas Johnson Subject: Re: NFS nfs_getpages errors X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 15 Sep 2010 15:54:42 -0000 On 09/15/10 11:28, Rick Macklem wrote: >> Hey folks, >> >> We've got 4 servers running FreeBSD 8.1-RELEASE which PXE boot with >> NFS root. On these machines, we run proftpd and apache 2.2. Over the >> past couple weeks, we've seen a ton of errors as follows: >> >> Sep 14 20:28:59 lion-3 proftpd[31761]: 0.0.0.0 >> (folsom-1-red.claimlynx.com[216.17.68.130]) - ProFTPD terminating >> (signal 11) >> Sep 14 20:28:59 lion-3 kernel: nfs_getpages: error 1046353552 >> Sep 14 20:28:59 lion-3 kernel: vm_fault: pager read error, pid 31761 >> (proftpd) >> Sep 14 20:28:59 lion-3 kernel: Sep 14 20:28:59 lion-3 proftpd[31761]: >> 0.0.0.0 (folsom-1-red.claimlynx.com[216.17.68.130]) - ProFTPD >> terminating (signal 11) >> Sep 14 20:28:59 lion-3 kernel: nfs_getpages: error 1046353552 >> Sep 14 20:28:59 lion-3 kernel: vm_fault: pager read error, pid 31761 >> (proftpd) >> Sep 14 20:28:59 lion-3 kernel: pid 31761 (proftpd), uid 0: exited on >> signal 11 >> >> These, in this case, occurred on three of the four machines until >> midnight after which all three of the machines had proftpd exit on >> signal 11. The message above was for child processes. At midnight, the >> logfile rotated, and newsyslog sent singal 1 to the parent process, >> which I think finally finished it off. The fourth machine remained >> running and did not display these messages. >> >> The number following 'nfs_getpages: error' changes for each cycle and >> I'm not certain if any of them repeat. >> > Well, at a quick glance, those errors seem to be coming from the NFS > server in a read reply. Also, the error values seem bogus, since they > should be small positive numbers (1<->70 + a few just above 10000). 
We see these errors on some 8.1 clients as well: nfs_getpages: error 1110586608 nfs_getpages: error 1108948624 vm_fault: pager read error, pid 56216 (php) nfs_getpages: error 1114969744 vm_fault: pager read error, pid 54770 (php) nfs_getpages: error 1137006224 vm_fault: pager read error, pid 50578 (php) They do not show up often, so we haven't spent much time looking into it (no tcpdumps yet). Our NFS server is a 8-STABLE system backed by ZFS, so maybe its related to that (again :) ). Eric, is your NFS server backed by ZFS as well? The NFS server doesn't seem to be logging any errors, but the ret-failed count is always increasing: Server Info: Getattr Setattr Lookup Readlink Read Write Create Remove 543523097 14397049 1949982185 6380 17587820 14002952 8980955 8070238 Rename Link Symlink Mkdir Rmdir Readdir RdirPlus Access 6966495 9 1668 1117125 904969 5567689 22307 184929325 Mknod Fsstat Fsinfo PathConf Commit 0 338500745 57 0 7129262 Server Ret-Failed 29089796 Server Faults 0 Server Cache Stats: Inprog Idem Non-idem Misses 0 0 0 0 Server Write Gathering: WriteOps WriteRPC Opsaved 14001235 14002952 1717 > Could you possibly get a packet capture when one of these happens? > ("tcpdump -s -0 -w xxx host" would suffice, but you need to > have it running when the error occurs. If you can reproduce it by > talking to the proftpd server, so the tcpdump doesn't run for too > long, that would be best.) > > You can look in the tcpdump via wireshark and see what it being returned > for the Read RPCs at that time. (You can email me the "xxx" packet trace > as an attachment and I can look at it, if you get that far.) > > rick > ps: Otherwise, I'd go look at your NFS server and see if it's logging > errors or if there are indications of problems. > > From owner-freebsd-fs@FreeBSD.ORG Wed Sep 15 16:01:00 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id CCF481065672; Wed, 15 Sep 2010 16:01:00 +0000 (UTC) (envelope-from avg@freebsd.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id 9CBC88FC2A; Wed, 15 Sep 2010 16:00:59 +0000 (UTC) Received: from odyssey.starpoint.kiev.ua (alpha-e.starpoint.kiev.ua [212.40.38.101]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id TAA11738; Wed, 15 Sep 2010 19:00:56 +0300 (EEST) (envelope-from avg@freebsd.org) Message-ID: <4C90EDB8.3040709@freebsd.org> Date: Wed, 15 Sep 2010 19:00:56 +0300 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; U; FreeBSD amd64; en-US; rv:1.9.2.9) Gecko/20100909 Lightning/1.0b2 Thunderbird/3.1.3 MIME-Version: 1.0 To: Steven Hartland References: <5DB6E7C798E44D33A05673F4B773405E@multiplay.co.uk><4C85E91E.1010602@icyb.net.ua><4C873914.40404@freebsd.org><20100908084855.GF2465@deviant.kiev.zoral.com.ua><4C874F00.3050605@freebsd.org><4C8D087B.5040404@freebsd.org><03537796FAB54E02959E2D64FC83004F@multiplay.co.uk><4C8D280F.3040803@freebsd.org><3FBF66BF11AA4CBBA6124CA435A4A31B@multiplay.co.uk><4C8E4212.30000@freebsd.org> <4C90B4C8.90203@freebsd.org> <6DFACB27CA8A4A22898BC81E55C4FD36@multiplay.co.uk> <4C90D3A1.7030008@freebsd.org> <0B1A90A08DFE4ADA9540F9F3846FDF38@multiplay.co.uk> In-Reply-To: <0B1A90A08DFE4ADA9540F9F3846FDF38@multiplay.co.uk> X-Enigmail-Version: 1.1.2 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org, jhell , Pawel Jakub Dawidek Subject: Re: zfs very poor performance compared to ufs due to 
lack of cache? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 15 Sep 2010 16:01:00 -0000 on 15/09/2010 18:04 Steven Hartland said the following: > Hmm, so taking a different track on the issue is the a way to make sendfile use data > directly from ARC instead of having to copy it first? Or even try the opposite, if your version of ZFS permits it. You can set primarycache=metadata on the filesystem where you have the data that you serve via sendfile. With that setting it shouldn't get cached in ARC, but it should be still cached in VM cache, so you should get UFS-like behavior. Will you test it? :) -- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Wed Sep 15 16:20:04 2010 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 954BD1065670 for ; Wed, 15 Sep 2010 16:20:04 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 839078FC17 for ; Wed, 15 Sep 2010 16:20:04 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id o8FGK4Uj018369 for ; Wed, 15 Sep 2010 16:20:04 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id o8FGK4pX018368; Wed, 15 Sep 2010 16:20:04 GMT (envelope-from gnats) Date: Wed, 15 Sep 2010 16:20:04 GMT Message-Id: <201009151620.o8FGK4pX018368@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org From: dfilter@FreeBSD.ORG (dfilter service) Cc: Subject: Re: kern/145778: commit references a PR X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: dfilter service List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 15 Sep 2010 16:20:04 -0000 The following reply was made to PR kern/145778; it has been noted by GNATS. 
From: dfilter@FreeBSD.ORG (dfilter service) To: bug-followup@FreeBSD.org Cc: Subject: Re: kern/145778: commit references a PR Date: Wed, 15 Sep 2010 16:10:50 +0000 (UTC) Author: mm Date: Wed Sep 15 16:10:38 2010 New Revision: 212670 URL: http://svn.freebsd.org/changeset/base/212670 Log: MFC r210398: Enable fake resolving of SMB RIDs by using nulldomain and UID_NOBODY - fixes panics when Solaris/OpenSolaris pools that contain files uploaded with the SMB protocol are accessed Enable seting/unsetting the sharesmb property (dummy action) - allows users who import pools from Solaris/Opensolaris to unset the sharesmb property and get rid of annoying messages PR: kern/145778, kern/148709 Approved by: pjd, delphij (mentor)) Modified: stable/8/cddl/contrib/opensolaris/lib/libzfs/common/libzfs_dataset.c stable/8/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_fuid.c Directory Properties: stable/8/cddl/contrib/opensolaris/ (props changed) stable/8/sys/ (props changed) stable/8/sys/amd64/include/xen/ (props changed) stable/8/sys/cddl/contrib/opensolaris/ (props changed) stable/8/sys/contrib/dev/acpica/ (props changed) stable/8/sys/contrib/pf/ (props changed) stable/8/sys/dev/xen/xenpci/ (props changed) Modified: stable/8/cddl/contrib/opensolaris/lib/libzfs/common/libzfs_dataset.c ============================================================================== --- stable/8/cddl/contrib/opensolaris/lib/libzfs/common/libzfs_dataset.c Wed Sep 15 16:05:51 2010 (r212669) +++ stable/8/cddl/contrib/opensolaris/lib/libzfs/common/libzfs_dataset.c Wed Sep 15 16:10:38 2010 (r212670) @@ -1265,7 +1265,6 @@ zfs_prop_set(zfs_handle_t *zhp, const ch case ZFS_PROP_XATTR: case ZFS_PROP_VSCAN: case ZFS_PROP_NBMAND: - case ZFS_PROP_SHARESMB: (void) snprintf(errbuf, sizeof (errbuf), "property '%s' not supported on FreeBSD", propname); ret = zfs_error(hdl, EZFS_PERM, errbuf); Modified: stable/8/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_fuid.c ============================================================================== --- stable/8/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_fuid.c Wed Sep 15 16:05:51 2010 (r212669) +++ stable/8/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_fuid.c Wed Sep 15 16:10:38 2010 (r212670) @@ -410,7 +410,7 @@ zfs_fuid_map_id(zfsvfs_t *zfsvfs, uint64 domain = zfs_fuid_find_by_idx(zfsvfs, index); ASSERT(domain != NULL); -#ifdef TODO +#ifdef sun if (type == ZFS_OWNER || type == ZFS_ACE_USER) { (void) kidmap_getuidbysid(crgetzone(cr), domain, FUID_RID(fuid), &id); @@ -418,9 +418,9 @@ zfs_fuid_map_id(zfsvfs_t *zfsvfs, uint64 (void) kidmap_getgidbysid(crgetzone(cr), domain, FUID_RID(fuid), &id); } -#else - panic(__func__); -#endif +#else /* sun */ + id = UID_NOBODY; +#endif /* sun */ return (id); } @@ -514,21 +514,21 @@ zfs_fuid_create_cred(zfsvfs_t *zfsvfs, z if (!zfsvfs->z_use_fuids || !IS_EPHEMERAL(id)) return ((uint64_t)id); -#ifdef TODO +#ifdef sun ksid = crgetsid(cr, (type == ZFS_OWNER) ? 
KSID_OWNER : KSID_GROUP); VERIFY(ksid != NULL); rid = ksid_getrid(ksid); domain = ksid_getdomain(ksid); - +#else /* sun */ + rid = UID_NOBODY; + domain = nulldomain; +#endif /* sun */ idx = zfs_fuid_find_by_domain(zfsvfs, domain, &kdomain, B_TRUE); zfs_fuid_node_add(fuidp, kdomain, rid, idx, id, type); return (FUID_ENCODE(idx, rid)); -#else - panic(__func__); -#endif } /* @@ -597,7 +597,7 @@ zfs_fuid_create(zfsvfs_t *zfsvfs, uint64 }; domain = fuidp->z_domain_table[idx -1]; } else { -#ifdef TODO +#ifdef sun if (type == ZFS_OWNER || type == ZFS_ACE_USER) status = kidmap_getsidbyuid(crgetzone(cr), id, &domain, &rid); @@ -606,6 +606,7 @@ zfs_fuid_create(zfsvfs_t *zfsvfs, uint64 &domain, &rid); if (status != 0) { +#endif /* sun */ /* * When returning nobody we will need to * make a dummy fuid table entry for logging @@ -613,10 +614,9 @@ zfs_fuid_create(zfsvfs_t *zfsvfs, uint64 */ rid = UID_NOBODY; domain = nulldomain; +#ifdef sun } -#else - panic(__func__); -#endif +#endif /* sun */ } idx = zfs_fuid_find_by_domain(zfsvfs, domain, &kdomain, B_TRUE); _______________________________________________ svn-src-all@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/svn-src-all To unsubscribe, send any mail to "svn-src-all-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Wed Sep 15 16:27:15 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id DB23D1065675 for ; Wed, 15 Sep 2010 16:27:14 +0000 (UTC) (envelope-from andre@freebsd.org) Received: from c00l3r.networx.ch (c00l3r.networx.ch [62.48.2.2]) by mx1.freebsd.org (Postfix) with ESMTP id 45E8A8FC26 for ; Wed, 15 Sep 2010 16:27:13 +0000 (UTC) Received: (qmail 72596 invoked from network); 15 Sep 2010 15:55:19 -0000 Received: from localhost (HELO [127.0.0.1]) ([127.0.0.1]) (envelope-sender ) by c00l3r.networx.ch (qmail-ldap-1.03) with SMTP for ; 15 Sep 2010 15:55:19 -0000 Message-ID: <4C90EDA1.6020501@freebsd.org> Date: Wed, 15 Sep 2010 18:00:33 +0200 From: Andre Oppermann User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.8) Gecko/20100802 Thunderbird/3.1.2 MIME-Version: 1.0 To: Andriy Gapon References: <5DB6E7C798E44D33A05673F4B773405E@multiplay.co.uk><4C85E91E.1010602@icyb.net.ua><4C873914.40404@freebsd.org><20100908084855.GF2465@deviant.kiev.zoral.com.ua><4C874F00.3050605@freebsd.org><4C8D087B.5040404@freebsd.org><03537796FAB54E02959E2D64FC83004F@multiplay.co.uk><4C8D280F.3040803@freebsd.org><3FBF66BF11AA4CBBA6124CA435A4A31B@multiplay.co.uk><4C8E4212.30000@freebsd.org> <4C90B4C8.90203@freebsd.org> <6DFACB27CA8A4A22898BC81E55C4FD36@multiplay.co.uk> <4C90D3A1.7030008@freebsd.org> <0B1A90A08DFE4ADA9540F9F3846FDF38@multiplay.co.uk> <4C90E328.20606@freebsd.org> <4C90E869.8000400@freebsd.org> In-Reply-To: <4C90E869.8000400@freebsd.org> Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org, jhell , Pawel Jakub Dawidek , freebsd-net@freebsd.org Subject: Re: zfs very poor performance compared to ufs due to lack of cache? 
X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 15 Sep 2010 16:27:15 -0000 On 15.09.2010 17:38, Andriy Gapon wrote: > on 15/09/2010 18:15 Andriy Gapon said the following: >> on 15/09/2010 18:04 Steven Hartland said the following: >>> Hmm, so taking a different track on the issue is the a way to make sendfile use data >>> directly from ARC instead of having to copy it first? >> >> Well, theoretically everything is possible, but I am not sure if it's feasible. >> It's a lot of work anyways, it should be a very specialized sendfile and a lot if >> inter-layer knowledge and dependencies. >> Don't hold your breath for it. > > Perhaps some middle-ground solution can be developed with less effort. > This solution would be specific to filesystems that don't use buffer cache, so it > wouldn't touch any pages, but instead it would use regular VOP_READ into a mbuf. > So, there would be copying, but page caches won't be unnecessarily "polluted" with > second copy of the data and this all would happen in kernel giving an advantage > over userland solution with read(2)+send(2). Is there a quick way of deciding within sendfile(2) whether a file resides on a filesystem that doesn't use the buffer cache? > Having said that, I see that OpenSolaris has a mechanism for something like that. > The mechanism can either globally enabled or enabled for file over certain size. > The mechanism uses dedicated kernel threads that get data using direct I/O, buffer > and send it. > That's my impression from a quick look, I may have gotten things wrong. -- Andre From owner-freebsd-fs@FreeBSD.ORG Wed Sep 15 16:31:53 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 03B151065670; Wed, 15 Sep 2010 16:31:53 +0000 (UTC) (envelope-from avg@freebsd.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id D75928FC12; Wed, 15 Sep 2010 16:31:51 +0000 (UTC) Received: from odyssey.starpoint.kiev.ua (alpha-e.starpoint.kiev.ua [212.40.38.101]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id TAA12300; Wed, 15 Sep 2010 19:31:50 +0300 (EEST) (envelope-from avg@freebsd.org) Message-ID: <4C90F4F6.209@freebsd.org> Date: Wed, 15 Sep 2010 19:31:50 +0300 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; U; FreeBSD amd64; en-US; rv:1.9.2.9) Gecko/20100909 Lightning/1.0b2 Thunderbird/3.1.3 MIME-Version: 1.0 To: Andre Oppermann References: <5DB6E7C798E44D33A05673F4B773405E@multiplay.co.uk><4C85E91E.1010602@icyb.net.ua><4C873914.40404@freebsd.org><20100908084855.GF2465@deviant.kiev.zoral.com.ua><4C874F00.3050605@freebsd.org><4C8D087B.5040404@freebsd.org><03537796FAB54E02959E2D64FC83004F@multiplay.co.uk><4C8D280F.3040803@freebsd.org><3FBF66BF11AA4CBBA6124CA435A4A31B@multiplay.co.uk><4C8E4212.30000@freebsd.org> <4C90B4C8.90203@freebsd.org> <6DFACB27CA8A4A22898BC81E55C4FD36@multiplay.co.uk> <4C90D3A1.7030008@freebsd.org> <0B1A90A08DFE4ADA9540F9F3846FDF38@multiplay.co.uk> <4C90E328.20606@freebsd.org> <4C90E869.8000400@freebsd.org> <4C90EDA1.6020501@freebsd.org> In-Reply-To: <4C90EDA1.6020501@freebsd.org> X-Enigmail-Version: 1.1.2 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org, freebsd-net@freebsd.org Subject: Re: zfs very poor performance compared to ufs due 
to lack of cache? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 15 Sep 2010 16:31:53 -0000 on 15/09/2010 19:00 Andre Oppermann said the following: > Is there a quick way of deciding within sendfile(2) whether a file resides > on a filesystem that doesn't use the buffer cache? I don't know of any reliable way to do it. -- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Wed Sep 15 17:38:41 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 16A431065672; Wed, 15 Sep 2010 17:38:41 +0000 (UTC) (envelope-from prvs=1874f602db=killing@multiplay.co.uk) Received: from mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) by mx1.freebsd.org (Postfix) with ESMTP id 4FEC08FC0A; Wed, 15 Sep 2010 17:38:39 +0000 (UTC) X-MDAV-Processed: mail1.multiplay.co.uk, Wed, 15 Sep 2010 18:38:35 +0100 X-Spam-Processed: mail1.multiplay.co.uk, Wed, 15 Sep 2010 18:38:35 +0100 X-Spam-Checker-Version: SpamAssassin 3.2.5 (2008-06-10) on mail1.multiplay.co.uk X-Spam-Level: X-Spam-Status: No, score=-5.0 required=6.0 tests=USER_IN_WHITELIST shortcircuit=ham autolearn=disabled version=3.2.5 Received: from r2d2 by mail1.multiplay.co.uk (MDaemon PRO v10.0.4) with ESMTP id md50011249725.msg; Wed, 15 Sep 2010 18:38:35 +0100 X-Authenticated-Sender: Killing@multiplay.co.uk X-MDRemoteIP: 188.220.16.49 X-Return-Path: prvs=1874f602db=killing@multiplay.co.uk X-Envelope-From: killing@multiplay.co.uk Message-ID: From: "Steven Hartland" To: "Andriy Gapon" References: <5DB6E7C798E44D33A05673F4B773405E@multiplay.co.uk><4C85E91E.1010602@icyb.net.ua><4C873914.40404@freebsd.org><20100908084855.GF2465@deviant.kiev.zoral.com.ua><4C874F00.3050605@freebsd.org><4C8D087B.5040404@freebsd.org><03537796FAB54E02959E2D64FC83004F@multiplay.co.uk><4C8D280F.3040803@freebsd.org><3FBF66BF11AA4CBBA6124CA435A4A31B@multiplay.co.uk><4C8E4212.30000@freebsd.org> <4C90B4C8.90203@freebsd.org> <6DFACB27CA8A4A22898BC81E55C4FD36@multiplay.co.uk> <4C90D3A1.7030008@freebsd.org> <0B1A90A08DFE4ADA9540F9F3846FDF38@multiplay.co.uk> <4C90EDB8.3040709@freebsd.org> Date: Wed, 15 Sep 2010 18:38:36 +0100 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="utf-8"; reply-type=original Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.5931 Cc: freebsd-fs@freebsd.org, jhell , Pawel Jakub Dawidek Subject: Re: zfs very poor performance compared to ufs due to lack of cache? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 15 Sep 2010 17:38:41 -0000 ----- Original Message ----- From: "Andriy Gapon" To: "Steven Hartland" Cc: ; "Pawel Jakub Dawidek" ; "jhell" Sent: Wednesday, September 15, 2010 5:00 PM Subject: Re: zfs very poor performance compared to ufs due to lack of cache? > on 15/09/2010 18:04 Steven Hartland said the following: >> Hmm, so taking a different track on the issue is the a way to make sendfile use data >> directly from ARC instead of having to copy it first? > > Or even try the opposite, if your version of ZFS permits it. 
> You can set primarycache=metadata on the filesystem where you have the data that > you serve via sendfile. With that setting it shouldn't get cached in ARC, but it > should be still cached in VM cache, so you should get UFS-like behavior. > > Will you test it? :) Interesting, the same for secondarycache? Regards Steve ================================================ This e.mail is private and confidential between Multiplay (UK) Ltd. and the person or entity to whom it is addressed. In the event of misdirection, the recipient is prohibited from using, copying, printing or otherwise disseminating it or any information contained in it. In the event of misdirection, illegible or incomplete transmission please telephone +44 845 868 1337 or return the E.mail to postmaster@multiplay.co.uk. From owner-freebsd-fs@FreeBSD.ORG Wed Sep 15 17:45:24 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 425211065672 for ; Wed, 15 Sep 2010 17:45:24 +0000 (UTC) (envelope-from prvs=1874f602db=killing@multiplay.co.uk) Received: from mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) by mx1.freebsd.org (Postfix) with ESMTP id A650D8FC0C for ; Wed, 15 Sep 2010 17:45:23 +0000 (UTC) X-MDAV-Processed: mail1.multiplay.co.uk, Wed, 15 Sep 2010 18:45:19 +0100 X-Spam-Processed: mail1.multiplay.co.uk, Wed, 15 Sep 2010 18:45:18 +0100 X-Spam-Checker-Version: SpamAssassin 3.2.5 (2008-06-10) on mail1.multiplay.co.uk X-Spam-Level: X-Spam-Status: No, score=-5.0 required=6.0 tests=USER_IN_WHITELIST shortcircuit=ham autolearn=disabled version=3.2.5 Received: from r2d2 by mail1.multiplay.co.uk (MDaemon PRO v10.0.4) with ESMTP id md50011249751.msg; Wed, 15 Sep 2010 18:45:17 +0100 X-Authenticated-Sender: Killing@multiplay.co.uk X-MDRemoteIP: 188.220.16.49 X-Return-Path: prvs=1874f602db=killing@multiplay.co.uk X-Envelope-From: killing@multiplay.co.uk Message-ID: <1E6C45FC4BEF44B5A99B9DBC4ACD1744@multiplay.co.uk> From: "Steven Hartland" To: "Sergey Zaharchenko" , , References: <20100907164204.GA2571@nautilus.vmks.ru><20100908065222.GA2522@nautilus.vmks.ru><20100908103338.GA5091@nautilus.vmks.ru><20100908104200.GA36566@icarus.home.lan><20100909165829.GA2602@nautilus.vmks.ru> <20100915153958.GA3256@nautilus.vmks.ru> Date: Wed, 15 Sep 2010 18:45:15 +0100 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="iso-8859-1"; reply-type=original Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.5931 Cc: Dan Davis Subject: Re: 8.1-RELEASE ZFS hangs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 15 Sep 2010 17:45:24 -0000 ----- Original Message ----- From: "Sergey Zaharchenko" > After all it turned out that the HighPoint controller was buggy. When we > later configured it in RAID5 mode, it reported drive faults for random > drives that were OK. I don't get how/why it worked in geom stripe mode. > Anyway, we switched to a 3ware card and ZFS works happily with it out of > the box without any kernel tuning (I may do some fine-tuning later). Out of interest which controller, and what size / manufacture disks? 
Regards Steve ================================================ This e.mail is private and confidential between Multiplay (UK) Ltd. and the person or entity to whom it is addressed. In the event of misdirection, the recipient is prohibited from using, copying, printing or otherwise disseminating it or any information contained in it. In the event of misdirection, illegible or incomplete transmission please telephone +44 845 868 1337 or return the E.mail to postmaster@multiplay.co.uk. From owner-freebsd-fs@FreeBSD.ORG Wed Sep 15 17:48:09 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id A9FF61065695 for ; Wed, 15 Sep 2010 17:48:09 +0000 (UTC) (envelope-from avg@freebsd.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id EC06E8FC22 for ; Wed, 15 Sep 2010 17:48:08 +0000 (UTC) Received: from odyssey.starpoint.kiev.ua (alpha-e.starpoint.kiev.ua [212.40.38.101]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id UAA13650; Wed, 15 Sep 2010 20:48:05 +0300 (EEST) (envelope-from avg@freebsd.org) Message-ID: <4C9106D5.3000100@freebsd.org> Date: Wed, 15 Sep 2010 20:48:05 +0300 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; U; FreeBSD amd64; en-US; rv:1.9.2.9) Gecko/20100909 Lightning/1.0b2 Thunderbird/3.1.3 MIME-Version: 1.0 To: Steven Hartland References: <5DB6E7C798E44D33A05673F4B773405E@multiplay.co.uk><4C85E91E.1010602@icyb.net.ua><4C873914.40404@freebsd.org><20100908084855.GF2465@deviant.kiev.zoral.com.ua><4C874F00.3050605@freebsd.org><4C8D087B.5040404@freebsd.org><03537796FAB54E02959E2D64FC83004F@multiplay.co.uk><4C8D280F.3040803@freebsd.org><3FBF66BF11AA4CBBA6124CA435A4A31B@multiplay.co.uk><4C8E4212.30000@freebsd.org> <4C90B4C8.90203@freebsd.org> <6DFACB27CA8A4A22898BC81E55C4FD36@multiplay.co.uk> <4C90D3A1.7030008@freebsd.org> <0B1A90A08DFE4ADA9540F9F3846FDF38@multiplay.co.uk> <4C90EDB8.3040709@freebsd.org> In-Reply-To: X-Enigmail-Version: 1.1.2 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org Subject: Re: zfs very poor performance compared to ufs due to lack of cache? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 15 Sep 2010 17:48:09 -0000 on 15/09/2010 20:38 Steven Hartland said the following: > ----- Original Message ----- From: "Andriy Gapon" > To: "Steven Hartland" > Cc: ; "Pawel Jakub Dawidek" ; "jhell" > > Sent: Wednesday, September 15, 2010 5:00 PM > Subject: Re: zfs very poor performance compared to ufs due to lack of cache? > > >> on 15/09/2010 18:04 Steven Hartland said the following: >>> Hmm, so taking a different track on the issue is the a way to make sendfile use >>> data >>> directly from ARC instead of having to copy it first? >> >> Or even try the opposite, if your version of ZFS permits it. >> You can set primarycache=metadata on the filesystem where you have the data that >> you serve via sendfile. With that setting it shouldn't get cached in ARC, but it >> should be still cached in VM cache, so you should get UFS-like behavior. >> >> Will you test it? :) > > Interesting, the same for secondarycache? Do you have it (L2ARC) ? Anyways, L2ARC is not in RAM. 
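For concreteness (dataset and device names below are made up for illustration): primarycache and secondarycache are per-dataset properties, and secondarycache only matters once a cache (L2ARC) device has actually been added to the pool:
  zfs set primarycache=metadata tank/www
  zfs get primarycache,secondarycache tank/www
  # only relevant if an L2ARC device exists, e.g.:
  zpool add tank cache da6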
-- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Wed Sep 15 19:52:43 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 03EDF1065672; Wed, 15 Sep 2010 19:52:43 +0000 (UTC) (envelope-from doublef-ctm@yandex.ru) Received: from forward6.mail.yandex.net (forward6.mail.yandex.net [77.88.60.125]) by mx1.freebsd.org (Postfix) with ESMTP id 9F5EB8FC17; Wed, 15 Sep 2010 19:52:42 +0000 (UTC) Received: from smtp6.mail.yandex.net (smtp6.mail.yandex.net [77.88.61.56]) by forward6.mail.yandex.net (Yandex) with ESMTP id C8F98BB0745; Wed, 15 Sep 2010 23:52:40 +0400 (MSD) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yandex.ru; s=mail; t=1284580360; bh=tphABPQW9zxzYm2PUp+Qd59yef1wUtX1qzLOekvw8OE=; h=Date:From:To:Cc:Subject:Message-ID:References:MIME-Version: Content-Type:In-Reply-To; b=R9uJ0QPFo38aL5A093Kj52TmiCweX4wK6paCXVSzMOuBwXlfOT/8w1UZE2b9kqHYI 03Qm10zToYl1bZTiBaQ4Rd5Ue0BC5B49vJjw1MtZyajSNKESU4N7EoZ6y1AcYoKSwb 6IJVWYRxqgRJPQ8NYihwWKZfDIsRBi5dec2UhmgQ= Received: from nautilus (unknown [178.155.116.41]) by smtp6.mail.yandex.net (Yandex) with ESMTPA id 9E8A932804D; Wed, 15 Sep 2010 23:52:40 +0400 (MSD) Received: by nautilus (Postfix, from userid 1001) id 08D621DD41D; Wed, 15 Sep 2010 23:52:40 +0400 (MSD) Date: Wed, 15 Sep 2010 23:52:39 +0400 From: Sergey Zaharchenko To: Steven Hartland Message-ID: <20100915195239.GA2604@nautilus.vmks.ru> References: <20100907164204.GA2571@nautilus.vmks.ru> <20100908065222.GA2522@nautilus.vmks.ru> <20100908103338.GA5091@nautilus.vmks.ru> <20100908104200.GA36566@icarus.home.lan> <20100909165829.GA2602@nautilus.vmks.ru> <20100915153958.GA3256@nautilus.vmks.ru> <1E6C45FC4BEF44B5A99B9DBC4ACD1744@multiplay.co.uk> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="k1lZvvs/B4yU6o8G" Content-Disposition: inline In-Reply-To: <1E6C45FC4BEF44B5A99B9DBC4ACD1744@multiplay.co.uk> X-Listening-To: Silence User-Agent: Mutt/1.5.20 (2009-06-14) X-Yandex-TimeMark: 1284580360 X-Yandex-Spam: 1 X-Yandex-Front: smtp6.mail.yandex.net Cc: freebsd-fs@freebsd.org, freebsd-stable@freebsd.org Subject: Re: 8.1-RELEASE ZFS hangs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 15 Sep 2010 19:52:43 -0000 --k1lZvvs/B4yU6o8G Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable Hello Steven! Wed, Sep 15, 2010 at 06:45:15PM +0100 you wrote: > ----- Original Message ----- From: "Sergey Zaharchenko" > > > >After all it turned out that the HighPoint controller was buggy. When we > >later configured it in RAID5 mode, it reported drive faults for random > >drives that were OK. I don't get how/why it worked in geom stripe mode. > >Anyway, we switched to a 3ware card and ZFS works happily with it out of > >the box without any kernel tuning (I may do some fine-tuning later). > > Out of interest which controller, and what size / manufacture disks? I assume you mean the working config. The broken one has been described earlier. twa0@pci0:3:3:0: class=0x010400 card=0x100213c1 chip=0x100213c1 rev=0x00 hdr=0x00 vendor = '3ware Inc' device = 'SATA/PATA Storage Controller (9000 series)' class = mass storage subclass = RAID 8 identical 1907729MB (~2TB) Seagate drives.
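That listing is pciconf(8) output; anyone wanting to report the same details can pull them with something along these lines - the device names will of course be whatever the controller and its exported units attach as:

# pciconf -lv | grep -B1 -A4 'mass storage'   # controller vendor/device strings
# camcontrol devlist                          # logical units/disks the controller exports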
--=20 Sergey Zaharchenko --k1lZvvs/B4yU6o8G Content-Type: application/pgp-signature Content-Disposition: inline -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.14 (FreeBSD) iEYEARECAAYFAkyRJAcACgkQwo7hT/9lVdwsIQCdHBJibWno8N2b3cbab7CPVEGj bv0An2xpWjs0kJJ1mOd0Q4tveN/cYZBH =n+bG -----END PGP SIGNATURE----- --k1lZvvs/B4yU6o8G-- From owner-freebsd-fs@FreeBSD.ORG Wed Sep 15 20:00:42 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id DF74A1065696; Wed, 15 Sep 2010 20:00:42 +0000 (UTC) (envelope-from prvs=1874f602db=killing@multiplay.co.uk) Received: from mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) by mx1.freebsd.org (Postfix) with ESMTP id F2B7E8FC0A; Wed, 15 Sep 2010 20:00:41 +0000 (UTC) X-MDAV-Processed: mail1.multiplay.co.uk, Wed, 15 Sep 2010 21:00:36 +0100 X-Spam-Processed: mail1.multiplay.co.uk, Wed, 15 Sep 2010 21:00:36 +0100 X-Spam-Checker-Version: SpamAssassin 3.2.5 (2008-06-10) on mail1.multiplay.co.uk X-Spam-Level: X-Spam-Status: No, score=-5.0 required=6.0 tests=USER_IN_WHITELIST shortcircuit=ham autolearn=disabled version=3.2.5 Received: from r2d2 by mail1.multiplay.co.uk (MDaemon PRO v10.0.4) with ESMTP id md50011249987.msg; Wed, 15 Sep 2010 21:00:36 +0100 X-Authenticated-Sender: Killing@multiplay.co.uk X-MDRemoteIP: 188.220.16.49 X-Return-Path: prvs=1874f602db=killing@multiplay.co.uk X-Envelope-From: killing@multiplay.co.uk Message-ID: <3F29E8CED7B24805B2D93F62A4EC9559@multiplay.co.uk> From: "Steven Hartland" To: "Andriy Gapon" References: <5DB6E7C798E44D33A05673F4B773405E@multiplay.co.uk><4C85E91E.1010602@icyb.net.ua><4C873914.40404@freebsd.org><20100908084855.GF2465@deviant.kiev.zoral.com.ua><4C874F00.3050605@freebsd.org><4C8D087B.5040404@freebsd.org><03537796FAB54E02959E2D64FC83004F@multiplay.co.uk><4C8D280F.3040803@freebsd.org><3FBF66BF11AA4CBBA6124CA435A4A31B@multiplay.co.uk><4C8E4212.30000@freebsd.org> <4C90B4C8.90203@freebsd.org> <6DFACB27CA8A4A22898BC81E55C4FD36@multiplay.co.uk> <4C90D3A1.7030008@freebsd.org> <0B1A90A08DFE4ADA9540F9F3846FDF38@multiplay.co.uk> <4C90EDB8.3040709@freebsd.org> Date: Wed, 15 Sep 2010 21:00:38 +0100 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="utf-8"; reply-type=original Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.5931 Cc: freebsd-fs@freebsd.org, jhell , Pawel Jakub Dawidek Subject: Re: zfs very poor performance compared to ufs due to lack of cache? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 15 Sep 2010 20:00:43 -0000 ----- Original Message ----- From: "Andriy Gapon" > Or even try the opposite, if your version of ZFS permits it. > You can set primarycache=metadata on the filesystem where you have the data that > you serve via sendfile. With that setting it shouldn't get cached in ARC, but it > should be still cached in VM cache, so you should get UFS-like behavior. > > Will you test it? :) Ok given this a whirl, don't have the full results just yet but does seem that buf cache is not used at all? Mem: 32M Active, 1378M Inact, 159M Wired, 120K Cache, 21M Buf, 5348M Free Swap: 4096M Total, 4096M Free It also appears to have totally destroyed overall disk IO performance, particularly for small reads e.g. 
those used in cat. I just tried to cat a 1.8GB file from disk to /dev/null which I would usually expect to ~170MB/s on initial read but I only to got 1.8MB/s even though the disk subsystem was doing 200MB/s to sustain this request. Massive over read which is then just thrown away because there is no data cache? This seems backed up by using dd with a block size over 128K (zfs block size I believe?) results in normal performance of 180MB/s but repeat runs still only get the same, where on a similar box with ufs I see 1.3GB/s on repeat runs. Going to pull the test there as the machines struggling to keep up with even 10 clients. Next test I think should be sendfile on but with no special zfs loader.conf options to see what really happens to arc without any limits. I've got a suspicion that I may end up with close to or zero arc due to inact memory use. Regards Steve ================================================ This e.mail is private and confidential between Multiplay (UK) Ltd. and the person or entity to whom it is addressed. In the event of misdirection, the recipient is prohibited from using, copying, printing or otherwise disseminating it or any information contained in it. In the event of misdirection, illegible or incomplete transmission please telephone +44 845 868 1337 or return the E.mail to postmaster@multiplay.co.uk. From owner-freebsd-fs@FreeBSD.ORG Wed Sep 15 20:05:20 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 014891065670 for ; Wed, 15 Sep 2010 20:05:20 +0000 (UTC) (envelope-from avg@freebsd.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id 3C9BF8FC19 for ; Wed, 15 Sep 2010 20:05:18 +0000 (UTC) Received: from porto.topspin.kiev.ua (porto-e.starpoint.kiev.ua [212.40.38.100]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id XAA15945; Wed, 15 Sep 2010 23:05:16 +0300 (EEST) (envelope-from avg@freebsd.org) Received: from localhost.topspin.kiev.ua ([127.0.0.1]) by porto.topspin.kiev.ua with esmtp (Exim 4.34 (FreeBSD)) id 1OvyEG-00018A-Ex; Wed, 15 Sep 2010 23:05:16 +0300 Message-ID: <4C9126FB.2020707@freebsd.org> Date: Wed, 15 Sep 2010 23:05:15 +0300 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; U; FreeBSD amd64; en-US; rv:1.9.2.9) Gecko/20100912 Lightning/1.0b2 Thunderbird/3.1.3 MIME-Version: 1.0 To: Steven Hartland References: <5DB6E7C798E44D33A05673F4B773405E@multiplay.co.uk><4C85E91E.1010602@icyb.net.ua><4C873914.40404@freebsd.org><20100908084855.GF2465@deviant.kiev.zoral.com.ua><4C874F00.3050605@freebsd.org><4C8D087B.5040404@freebsd.org><03537796FAB54E02959E2D64FC83004F@multiplay.co.uk><4C8D280F.3040803@freebsd.org><3FBF66BF11AA4CBBA6124CA435A4A31B@multiplay.co.uk><4C8E4212.30000@freebsd.org> <4C90B4C8.90203@freebsd.org> <6DFACB27CA8A4A22898BC81E55C4FD36@multiplay.co.uk> <4C90D3A1.7030008@freebsd.org> <0B1A90A08DFE4ADA9540F9F3846FDF38@multiplay.co.uk> <4C90EDB8.3040709@freebsd.org> <3F29E8CED7B24805B2D93F62A4EC9559@multiplay.co.uk> In-Reply-To: <3F29E8CED7B24805B2D93F62A4EC9559@multiplay.co.uk> X-Enigmail-Version: 1.1.2 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org Subject: Re: zfs very poor performance compared to ufs due to lack of cache? 
X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 15 Sep 2010 20:05:20 -0000 on 15/09/2010 23:00 Steven Hartland said the following: > ----- Original Message ----- From: "Andriy Gapon" >> Or even try the opposite, if your version of ZFS permits it. >> You can set primarycache=metadata on the filesystem where you have the data that >> you serve via sendfile. With that setting it shouldn't get cached in ARC, but it >> should be still cached in VM cache, so you should get UFS-like behavior. >> >> Will you test it? :) > > Ok given this a whirl, don't have the full results just yet but does seem that > buf cache is not > used at all? > > Mem: 32M Active, 1378M Inact, 159M Wired, 120K Cache, 21M Buf, 5348M Free > Swap: 4096M Total, 4096M Free This was with sendfile enabled? -- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Wed Sep 15 20:52:35 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 0970D106564A; Wed, 15 Sep 2010 20:52:35 +0000 (UTC) (envelope-from jhellenthal@gmail.com) Received: from mail-ww0-f50.google.com (mail-ww0-f50.google.com [74.125.82.50]) by mx1.freebsd.org (Postfix) with ESMTP id 2178E8FC12; Wed, 15 Sep 2010 20:52:33 +0000 (UTC) Received: by wwb13 with SMTP id 13so27199wwb.31 for ; Wed, 15 Sep 2010 13:52:32 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:received:received:sender:message-id:date:from :user-agent:mime-version:to:cc:subject:references:in-reply-to :x-enigmail-version:content-type:content-transfer-encoding; bh=vDXQRKpffjfd7McJiqVYx9WWn5pELJmEW9KCL5LnOOw=; b=vIZxt5de/evYrZZosQ2Dn7+iOtUsnUXKfjjbMwJgXahcLY4cGtSzL0sIplG5d7Oc+I IKR4HdHOFhZG8ABOZE2VE45jUlaMkI+RjDZbWCwbYowQBmFOuKpAvxVJwQecfVspbPtj vYOeFUUu083eddE1+UA3bsbRRQLetIra4PFN0= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=sender:message-id:date:from:user-agent:mime-version:to:cc:subject :references:in-reply-to:x-enigmail-version:content-type :content-transfer-encoding; b=pid3Y4cDetmstQJnBH+k1zy4ZjeZjF+yyPojnvqaZLe3vMN7efWX23HvmVV6qXszg2 CQHRHu4/1f1ym4qABgtQwuzH80uHJnyrtzmwscZQjKIc2SdLukRai3f0/ab1jQy3ec9n nx7fSukAzB5dQUqn0m7Uf0Aq9IhyrA031njO8= Received: by 10.227.129.13 with SMTP id m13mr1467216wbs.9.1284583930706; Wed, 15 Sep 2010 13:52:10 -0700 (PDT) Received: from centel.dataix.local (adsl-99-181-146-122.dsl.klmzmi.sbcglobal.net [99.181.146.122]) by mx.google.com with ESMTPS id w31sm1617920wbd.3.2010.09.15.13.52.08 (version=SSLv3 cipher=RC4-MD5); Wed, 15 Sep 2010 13:52:09 -0700 (PDT) Sender: "J. 
Hellenthal" Message-ID: <4C9131F6.10807@DataIX.net> Date: Wed, 15 Sep 2010 16:52:06 -0400 From: jhell User-Agent: Mozilla/5.0 (X11; U; FreeBSD i386; en-US; rv:1.9.2.9) Gecko/20100908 Lightning/1.0b1 Thunderbird MIME-Version: 1.0 To: Steven Hartland References: <20100908084855.GF2465@deviant.kiev.zoral.com.ua> <4C874F00.3050605@freebsd.org> <4C8D087B.5040404@freebsd.org> <03537796FAB54E02959E2D64FC83004F@multiplay.co.uk> <4C8D280F.3040803@freebsd.org> <3FBF66BF11AA4CBBA6124CA435A4A31B@multiplay.co.uk> <4C8E4212.30000@freebsd.org> <20100915104635.GA59871@icarus.home.lan> <8E233260F0334BC58B2C07F383939F8E@multiplay.co.uk> In-Reply-To: <8E233260F0334BC58B2C07F383939F8E@multiplay.co.uk> X-Enigmail-Version: 1.1.2 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org, Pawel Jakub Dawidek , Andriy Gapon Subject: Re: zfs very poor performance compared to ufs due to lack of cache? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 15 Sep 2010 20:52:35 -0000 On 09/15/2010 06:54, Steven Hartland wrote: > ----- Original Message ----- From: "Jeremy Chadwick" > >> Looks like Andriy just committed something to HEAD/CURRENT which might >> address this: >> http://www.freebsd.org/cgi/query-pr.cgi?pr=kern/141305 > > Already running that as part of the patches unfortunately, it doesn't seem > to have any effect. > Is it ? vm_page_set_validclean(m, off, bytes); I recall you saying that you added this from earlier in the thread. could be wrong though but what Andriy committed was the following. or ? vm_page_set_valid(m, off, bytes); Regards, -- jhell,v From owner-freebsd-fs@FreeBSD.ORG Wed Sep 15 21:25:34 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 46A051065698; Wed, 15 Sep 2010 21:25:34 +0000 (UTC) (envelope-from prvs=1874f602db=killing@multiplay.co.uk) Received: from mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) by mx1.freebsd.org (Postfix) with ESMTP id A41AD8FC19; Wed, 15 Sep 2010 21:25:33 +0000 (UTC) X-MDAV-Processed: mail1.multiplay.co.uk, Wed, 15 Sep 2010 22:25:28 +0100 X-Spam-Processed: mail1.multiplay.co.uk, Wed, 15 Sep 2010 22:25:28 +0100 X-Spam-Checker-Version: SpamAssassin 3.2.5 (2008-06-10) on mail1.multiplay.co.uk X-Spam-Level: X-Spam-Status: No, score=-5.0 required=6.0 tests=USER_IN_WHITELIST shortcircuit=ham autolearn=disabled version=3.2.5 Received: from r2d2 by mail1.multiplay.co.uk (MDaemon PRO v10.0.4) with ESMTP id md50011250178.msg; Wed, 15 Sep 2010 22:25:27 +0100 X-Authenticated-Sender: Killing@multiplay.co.uk X-MDRemoteIP: 188.220.16.49 X-Return-Path: prvs=1874f602db=killing@multiplay.co.uk X-Envelope-From: killing@multiplay.co.uk Message-ID: <63568211614648FC97E3F98BEFF3A014@multiplay.co.uk> From: "Steven Hartland" To: "Andriy Gapon" References: <5DB6E7C798E44D33A05673F4B773405E@multiplay.co.uk><4C85E91E.1010602@icyb.net.ua><4C873914.40404@freebsd.org><20100908084855.GF2465@deviant.kiev.zoral.com.ua><4C874F00.3050605@freebsd.org><4C8D087B.5040404@freebsd.org><03537796FAB54E02959E2D64FC83004F@multiplay.co.uk><4C8D280F.3040803@freebsd.org><3FBF66BF11AA4CBBA6124CA435A4A31B@multiplay.co.uk><4C8E4212.30000@freebsd.org> <4C90B4C8.90203@freebsd.org> <6DFACB27CA8A4A22898BC81E55C4FD36@multiplay.co.uk> <4C90D3A1.7030008@freebsd.org> 
<0B1A90A08DFE4ADA9540F9F3846FDF38@multiplay.co.uk> <4C90EDB8.3040709@freebsd.org> <3F29E8CED7B24805B2D93F62A4EC9559@multiplay.co.uk> <4C9126FB.2020707@freebsd.org> Date: Wed, 15 Sep 2010 22:25:27 +0100 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="utf-8"; reply-type=original Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.5931 Cc: freebsd-fs@freebsd.org Subject: Re: zfs very poor performance compared to ufs due to lack of cache? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 15 Sep 2010 21:25:34 -0000 ----- Original Message ----- From: "Andriy Gapon" >> Ok given this a whirl, don't have the full results just yet but does seem that >> buf cache is not >> used at all? >> >> Mem: 32M Active, 1378M Inact, 159M Wired, 120K Cache, 21M Buf, 5348M Free >> Swap: 4096M Total, 4096M Free > > This was with sendfile enabled? Sorry yes it was. ================================================ This e.mail is private and confidential between Multiplay (UK) Ltd. and the person or entity to whom it is addressed. In the event of misdirection, the recipient is prohibited from using, copying, printing or otherwise disseminating it or any information contained in it. In the event of misdirection, illegible or incomplete transmission please telephone +44 845 868 1337 or return the E.mail to postmaster@multiplay.co.uk. From owner-freebsd-fs@FreeBSD.ORG Wed Sep 15 21:30:55 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 5B5EE1065670; Wed, 15 Sep 2010 21:30:55 +0000 (UTC) (envelope-from prvs=1874f602db=killing@multiplay.co.uk) Received: from mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) by mx1.freebsd.org (Postfix) with ESMTP id 979F98FC0C; Wed, 15 Sep 2010 21:30:54 +0000 (UTC) X-MDAV-Processed: mail1.multiplay.co.uk, Wed, 15 Sep 2010 22:30:49 +0100 X-Spam-Processed: mail1.multiplay.co.uk, Wed, 15 Sep 2010 22:30:49 +0100 X-Spam-Checker-Version: SpamAssassin 3.2.5 (2008-06-10) on mail1.multiplay.co.uk X-Spam-Level: X-Spam-Status: No, score=-5.0 required=6.0 tests=USER_IN_WHITELIST shortcircuit=ham autolearn=disabled version=3.2.5 Received: from r2d2 by mail1.multiplay.co.uk (MDaemon PRO v10.0.4) with ESMTP id md50011250192.msg; Wed, 15 Sep 2010 22:30:49 +0100 X-Authenticated-Sender: Killing@multiplay.co.uk X-MDRemoteIP: 188.220.16.49 X-Return-Path: prvs=1874f602db=killing@multiplay.co.uk X-Envelope-From: killing@multiplay.co.uk Message-ID: From: "Steven Hartland" To: "jhell" References: <20100908084855.GF2465@deviant.kiev.zoral.com.ua> <4C874F00.3050605@freebsd.org> <4C8D087B.5040404@freebsd.org> <03537796FAB54E02959E2D64FC83004F@multiplay.co.uk> <4C8D280F.3040803@freebsd.org> <3FBF66BF11AA4CBBA6124CA435A4A31B@multiplay.co.uk> <4C8E4212.30000@freebsd.org> <20100915104635.GA59871@icarus.home.lan> <8E233260F0334BC58B2C07F383939F8E@multiplay.co.uk> <4C9131F6.10807@DataIX.net> Date: Wed, 15 Sep 2010 22:30:33 +0100 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="iso-8859-1"; reply-type=original Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE 
V6.00.2900.5931 Cc: freebsd-fs@freebsd.org, Pawel Jakub Dawidek , Andriy Gapon Subject: Re: zfs very poor performance compared to ufs due to lack of cache? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 15 Sep 2010 21:30:55 -0000 ----- Original Message ----- From: "jhell" jhell@DataIX.net > On 09/15/2010 06:54, Steven Hartland wrote: >> ----- Original Message ----- From: "Jeremy Chadwick" >> >>> Looks like Andriy just committed something to HEAD/CURRENT which might >>> address this: >>> http://www.freebsd.org/cgi/query-pr.cgi?pr=kern/141305 >> >> Already running that as part of the patches unfortunately, it doesn't seem >> to have any effect. >> > > Is it ? vm_page_set_validclean(m, off, bytes); > I recall you saying that you added this from earlier in the thread. > could be wrong though but what Andriy committed was the following. > > or ? vm_page_set_valid(m, off, bytes); Ahh good catch I have: if (error == 0) vm_page_set_validclean(m, off, bytes); and not as mentioned by 141305: if (error == 0) vm_page_set_valid(m, off, bytes); Which should it actaully be? Regards Steve ================================================ This e.mail is private and confidential between Multiplay (UK) Ltd. and the person or entity to whom it is addressed. In the event of misdirection, the recipient is prohibited from using, copying, printing or otherwise disseminating it or any information contained in it. In the event of misdirection, illegible or incomplete transmission please telephone +44 845 868 1337 or return the E.mail to postmaster@multiplay.co.uk. From owner-freebsd-fs@FreeBSD.ORG Wed Sep 15 21:54:43 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 91EEB1065670; Wed, 15 Sep 2010 21:54:43 +0000 (UTC) (envelope-from jhellenthal@gmail.com) Received: from mail-vw0-f54.google.com (mail-vw0-f54.google.com [209.85.212.54]) by mx1.freebsd.org (Postfix) with ESMTP id 102678FC14; Wed, 15 Sep 2010 21:54:42 +0000 (UTC) Received: by vws7 with SMTP id 7so475775vws.13 for ; Wed, 15 Sep 2010 14:54:42 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:received:received:sender:message-id:date:from :user-agent:mime-version:to:cc:subject:references:in-reply-to :x-enigmail-version:content-type:content-transfer-encoding; bh=J7KvFj3tyTrFcbcBBQYB9+BvevJnPQ+hLO+VzJzMvbg=; b=I94H6ff0VYpIE0VEpwD2U9QoARibOd1BXvgkNN1pIWnt1lTnhxT8xXNXL8mWTPcQrg OliLcguWKTi1CQtK8gdqqooB1EJ55h8vqePzZf3r23ZF+7jERyqbiDm6u2tMJOzneJ8h lUK8OjlFVQi/OJsCf7njpyIGdbT5C9Bw4kvfQ= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=sender:message-id:date:from:user-agent:mime-version:to:cc:subject :references:in-reply-to:x-enigmail-version:content-type :content-transfer-encoding; b=IhmXNPCEjFsI6eJ8//gfO69KTOfKojU3gVgOK2F0XF2cf3r0xez3B/ZcK8E4tZ5NlL KhQX2BQMm909qE54pcSQS/mCR2qIHxIRmfWGYXVg2Op+ocFT9gV7sQj6vKmaSQZAoNxS U90SdfxIisGOykk6EaeD1qjWAs4CHTGVl0AMo= Received: by 10.220.49.16 with SMTP id t16mr1311709vcf.59.1284587682377; Wed, 15 Sep 2010 14:54:42 -0700 (PDT) Received: from centel.dataix.local (adsl-99-181-146-122.dsl.klmzmi.sbcglobal.net [99.181.146.122]) by mx.google.com with ESMTPS id a15sm1025352vci.13.2010.09.15.14.54.40 (version=SSLv3 cipher=RC4-MD5); Wed, 15 Sep 2010 14:54:41 -0700 (PDT) Sender: "J. 
Hellenthal" Message-ID: <4C91409F.9090204@DataIX.net> Date: Wed, 15 Sep 2010 17:54:39 -0400 From: jhell User-Agent: Mozilla/5.0 (X11; U; FreeBSD i386; en-US; rv:1.9.2.9) Gecko/20100908 Lightning/1.0b1 Thunderbird MIME-Version: 1.0 To: Steven Hartland References: <20100908084855.GF2465@deviant.kiev.zoral.com.ua> <4C874F00.3050605@freebsd.org> <4C8D087B.5040404@freebsd.org> <03537796FAB54E02959E2D64FC83004F@multiplay.co.uk> <4C8D280F.3040803@freebsd.org> <3FBF66BF11AA4CBBA6124CA435A4A31B@multiplay.co.uk> <4C8E4212.30000@freebsd.org> <20100915104635.GA59871@icarus.home.lan> <8E233260F0334BC58B2C07F383939F8E@multiplay.co.uk> <4C9131F6.10807@DataIX.net> In-Reply-To: X-Enigmail-Version: 1.1.2 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org, Pawel Jakub Dawidek , Andriy Gapon Subject: Re: zfs very poor performance compared to ufs due to lack of cache? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 15 Sep 2010 21:54:43 -0000 On 09/15/2010 17:30, Steven Hartland wrote: > ----- Original Message ----- From: "jhell" jhell@DataIX.net > >> On 09/15/2010 06:54, Steven Hartland wrote: >>> ----- Original Message ----- From: "Jeremy Chadwick" >>> >>>> Looks like Andriy just committed something to HEAD/CURRENT which might >>>> address this: >>>> http://www.freebsd.org/cgi/query-pr.cgi?pr=kern/141305 >>> >>> Already running that as part of the patches unfortunately, it doesn't >>> seem >>> to have any effect. >>> >> >> Is it ? vm_page_set_validclean(m, off, bytes); >> I recall you saying that you added this from earlier in the thread. >> could be wrong though but what Andriy committed was the following. >> >> or ? vm_page_set_valid(m, off, bytes); > > > Ahh good catch I have: > if (error == 0) > vm_page_set_validclean(m, off, bytes); > > and not as mentioned by 141305: > if (error == 0) > vm_page_set_valid(m, off, bytes); > > Which should it actaully be? > Looking at the manual page vm_page_bits(9) I don't see a vm_page_is_validclean so really would it have a effect ?. 
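A quick way to see which variant a given tree actually carries - assuming the change in question is the one in zfs_vnops.c that the PR patches - is to grep the source; the pattern matches both spellings since one is a prefix of the other:

# grep -n 'vm_page_set_valid' /usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c
# man 9 vm_page_bits    # describes the vm_page_set_valid family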
-- jhell,v From owner-freebsd-fs@FreeBSD.ORG Wed Sep 15 23:38:33 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 00C291065674 for ; Wed, 15 Sep 2010 23:38:33 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) Received: from esa-annu.mail.uoguelph.ca (esa-annu.mail.uoguelph.ca [131.104.91.36]) by mx1.freebsd.org (Postfix) with ESMTP id B6E8F8FC16 for ; Wed, 15 Sep 2010 23:38:32 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: ApwEAPL1kEyDaFvO/2dsb2JhbACDG59JtRmSRIEigyt0BIoshHc X-IronPort-AV: E=Sophos;i="4.56,373,1280721600"; d="scan'208";a="92030025" Received: from erie.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.206]) by esa-annu-pri.mail.uoguelph.ca with ESMTP; 15 Sep 2010 19:38:28 -0400 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id 9D568B3F36; Wed, 15 Sep 2010 19:38:28 -0400 (EDT) Date: Wed, 15 Sep 2010 19:38:28 -0400 (EDT) From: Rick Macklem To: Steve Polyack Message-ID: <1847513949.997169.1284593908573.JavaMail.root@erie.cs.uoguelph.ca> In-Reply-To: <4C90E88D.9050608@comcast.net> MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [24.65.230.102] X-Mailer: Zimbra 6.0.7_GA_2476.RHEL4 (ZimbraWebClient - SAF3 (Mac)/6.0.7_GA_2473.RHEL4_64) Cc: Eric Crist , freebsd-fs@freebsd.org, Thomas Johnson Subject: Re: NFS nfs_getpages errors X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 15 Sep 2010 23:38:33 -0000 > We see these errors on some 8.1 clients as well: > nfs_getpages: error 1110586608 > nfs_getpages: error 1108948624 > vm_fault: pager read error, pid 56216 (php) > nfs_getpages: error 1114969744 > vm_fault: pager read error, pid 54770 (php) > nfs_getpages: error 1137006224 > vm_fault: pager read error, pid 50578 (php) > > They do not show up often, so we haven't spent much time looking into > it > (no tcpdumps yet). Our NFS server is a 8-STABLE system backed by ZFS, > so maybe its related to that (again :) ). > > Eric, is your NFS server backed by ZFS as well? > > The NFS server doesn't seem to be logging any errors, but the > ret-failed > count is always increasing: > ret-failed doesn't really tell us anything. As I understand it, any error return, such as ENOENT, EACCES,... is being counted. (ie. legit) You could try switching to the exp. server. That would tell us if the problem is specific to the regular server or not. To switch the server over: - create an empty stable restart file # install -o root -g wheel -m 600 /dev/null /var/db/nfs-stablerestart - either set nfsv4_server_enable="YES" in /etc/rc.conf or add "-e" to both mountd and nfsd It had been stable for others (of course your mmv:-) and should be fine for NFSv3 (ie. you don't have to use NFSv4). rick ps: Use "nfsstat -e -s" for stats related to the exp. server. 
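Putting that together, one way to wire it up via rc.conf (the rc.d restarts are just a sketch; a reboot does the same job) would be:

# install -o root -g wheel -m 600 /dev/null /var/db/nfs-stablerestart
# echo 'nfsv4_server_enable="YES"' >> /etc/rc.conf
# /etc/rc.d/mountd restart
# /etc/rc.d/nfsd restart
# nfsstat -e -s    # confirm the experimental server is now answering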
From owner-freebsd-fs@FreeBSD.ORG Wed Sep 15 23:49:40 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id BB545106564A for ; Wed, 15 Sep 2010 23:49:40 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) Received: from esa-jnhn.mail.uoguelph.ca (esa-jnhn.mail.uoguelph.ca [131.104.91.44]) by mx1.freebsd.org (Postfix) with ESMTP id 7A0E58FC24 for ; Wed, 15 Sep 2010 23:49:40 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: ApwEANL3kEyDaFvO/2dsb2JhbACDG59JtR+SRoEigyt0BIoshHc X-IronPort-AV: E=Sophos;i="4.56,373,1280721600"; d="scan'208";a="94034453" Received: from erie.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.206]) by esa-jnhn-pri.mail.uoguelph.ca with ESMTP; 15 Sep 2010 19:49:39 -0400 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id 9708DB3F26; Wed, 15 Sep 2010 19:49:39 -0400 (EDT) Date: Wed, 15 Sep 2010 19:49:39 -0400 (EDT) From: Rick Macklem To: Eric Crist Message-ID: <349221090.997567.1284594579563.JavaMail.root@erie.cs.uoguelph.ca> In-Reply-To: MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [24.65.230.102] X-Mailer: Zimbra 6.0.7_GA_2476.RHEL4 (ZimbraWebClient - SAF3 (Mac)/6.0.7_GA_2473.RHEL4_64) Cc: freebsd-fs@freebsd.org, Thomas Johnson Subject: Re: NFS nfs_getpages errors X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 15 Sep 2010 23:49:40 -0000 > > The NFS server is logging nothing at all related to NFS. It *is* > running 8.1-RC2, so there is potential for an update. If/when we > notice these errors again, we'll try to get a packet capture and > forward it to you. Our NFS server is backed by ZFS, as well. > > Eric I don't think there are any server fixes that would be relevant. I can't think of anything between 8.1-RC2 and 8.1 release and the only two post-8.1 release server patches are: - A fix for the regular server so it doesn't get into an infinite loop in the DRC code. (It stops servicing all NFS requests when this bug happened.) - A fix for the exp. server specific to NFSv4 and ZFS. Feel free to add these two patches from head/current (the files are sys/rpc/replay.c and sys/fs/nfs/nfsdport.h), but I'm almost sure they won't help with this. 
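For completeness, if someone does want to pull those two fixes in, a rough (untested) way - the /usr/src-head path is made up, substitute wherever a head checkout lives - is simply:

# cp /usr/src-head/sys/rpc/replay.c      /usr/src/sys/rpc/replay.c
# cp /usr/src-head/sys/fs/nfs/nfsdport.h /usr/src/sys/fs/nfs/nfsdport.h
# cd /usr/src && make buildkernel && make installkernel

Cherry-picking just the relevant diff hunks instead of copying whole files is safer if the trees have diverged.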
rick From owner-freebsd-fs@FreeBSD.ORG Thu Sep 16 06:36:06 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id B97D3106566C; Thu, 16 Sep 2010 06:36:06 +0000 (UTC) (envelope-from avg@freebsd.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id 854B98FC0C; Thu, 16 Sep 2010 06:36:05 +0000 (UTC) Received: from porto.topspin.kiev.ua (porto-e.starpoint.kiev.ua [212.40.38.100]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id JAA23084; Thu, 16 Sep 2010 09:35:58 +0300 (EEST) (envelope-from avg@freebsd.org) Received: from localhost.topspin.kiev.ua ([127.0.0.1]) by porto.topspin.kiev.ua with esmtp (Exim 4.34 (FreeBSD)) id 1Ow84c-00044a-5f; Thu, 16 Sep 2010 09:35:58 +0300 Message-ID: <4C91BACD.3080501@freebsd.org> Date: Thu, 16 Sep 2010 09:35:57 +0300 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; U; FreeBSD amd64; en-US; rv:1.9.2.9) Gecko/20100912 Lightning/1.0b2 Thunderbird/3.1.3 MIME-Version: 1.0 To: jhell References: <20100908084855.GF2465@deviant.kiev.zoral.com.ua> <4C874F00.3050605@freebsd.org> <4C8D087B.5040404@freebsd.org> <03537796FAB54E02959E2D64FC83004F@multiplay.co.uk> <4C8D280F.3040803@freebsd.org> <3FBF66BF11AA4CBBA6124CA435A4A31B@multiplay.co.uk> <4C8E4212.30000@freebsd.org> <20100915104635.GA59871@icarus.home.lan> <8E233260F0334BC58B2C07F383939F8E@multiplay.co.uk> <4C9131F6.10807@DataIX.net> <4C91409F.9090204@DataIX.net> In-Reply-To: <4C91409F.9090204@DataIX.net> X-Enigmail-Version: 1.1.2 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org, Pawel Jakub Dawidek Subject: Re: zfs very poor performance compared to ufs due to lack of cache? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 16 Sep 2010 06:36:06 -0000 on 16/09/2010 00:54 jhell said the following: > On 09/15/2010 17:30, Steven Hartland wrote: >> ----- Original Message ----- From: "jhell" jhell@DataIX.net >> >>> On 09/15/2010 06:54, Steven Hartland wrote: >>>> ----- Original Message ----- From: "Jeremy Chadwick" >>>> >>>>> Looks like Andriy just committed something to HEAD/CURRENT which might >>>>> address this: >>>>> http://www.freebsd.org/cgi/query-pr.cgi?pr=kern/141305 >>>> >>>> Already running that as part of the patches unfortunately, it doesn't >>>> seem >>>> to have any effect. >>>> >>> >>> Is it ? vm_page_set_validclean(m, off, bytes); >>> I recall you saying that you added this from earlier in the thread. >>> could be wrong though but what Andriy committed was the following. >>> >>> or ? vm_page_set_valid(m, off, bytes); >> >> >> Ahh good catch I have: >> if (error == 0) >> vm_page_set_validclean(m, off, bytes); >> >> and not as mentioned by 141305: >> if (error == 0) >> vm_page_set_valid(m, off, bytes); >> >> Which should it actaully be? >> > > Looking at the manual page vm_page_bits(9) I don't see a > vm_page_is_validclean so really would it have a effect ?. > > Maybe the man page doesn't have it, but the function is real :-) In this case it actually doesn't matter much which one to use, but what was committed is more correct (as you might have expected). 
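Both variants are easy to confirm in the stock VM code, man page or not:

# grep -n 'vm_page_set_valid' /usr/src/sys/vm/vm_page.c    # matches both vm_page_set_valid and vm_page_set_validclean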
-- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Thu Sep 16 10:04:42 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 4B8F91065670; Thu, 16 Sep 2010 10:04:42 +0000 (UTC) (envelope-from jhellenthal@gmail.com) Received: from mail-yw0-f54.google.com (mail-yw0-f54.google.com [209.85.213.54]) by mx1.freebsd.org (Postfix) with ESMTP id C3EF08FC1B; Thu, 16 Sep 2010 10:04:41 +0000 (UTC) Received: by ywt2 with SMTP id 2so419387ywt.13 for ; Thu, 16 Sep 2010 03:04:41 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:received:received:sender:message-id:date:from :user-agent:mime-version:to:cc:subject:references:in-reply-to :x-enigmail-version:content-type:content-transfer-encoding; bh=agYwHdPPzDLiFEEwp2iy65kUdHMqN/y+hJkCWw8CLaU=; b=EFEeVGJQk7IiX3zWDU/SsGTJu/xz3HkAQew/wSiLXAaBsOZKCMITUDHtk9jSm1zAvO 3EoNrG6qIZHgbIxkY/RU0sGI900qWGpTeFVDaus81L0CACkMCIwGikauX+zUFw9Pcm5q EK2T0H2wn5o1zQqWfes/nOdp/7NlumRNazNoA= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=sender:message-id:date:from:user-agent:mime-version:to:cc:subject :references:in-reply-to:x-enigmail-version:content-type :content-transfer-encoding; b=dG0pxRkn5gN6Q/C4g0ORy4H9SZQQEd8gcTfUQvNcewJIDdUe43RZpFElHUe4m64zdc 9uJvFlUB+rYC06M/UwpgHs+bdwT8wtuFzgQqyYWtROt2jY9+uWmO4ibD3l630hxInTpo NGM2YgXYJizmg9rxoqspYgcqXSLif+gE9zDgQ= Received: by 10.151.62.5 with SMTP id p5mr3432456ybk.55.1284631480835; Thu, 16 Sep 2010 03:04:40 -0700 (PDT) Received: from centel.dataix.local (adsl-99-181-146-122.dsl.klmzmi.sbcglobal.net [99.181.146.122]) by mx.google.com with ESMTPS id t20sm7317013ybm.5.2010.09.16.03.04.37 (version=SSLv3 cipher=RC4-MD5); Thu, 16 Sep 2010 03:04:39 -0700 (PDT) Sender: "J. Hellenthal" Message-ID: <4C91EBB4.9080304@DataIX.net> Date: Thu, 16 Sep 2010 06:04:36 -0400 From: jhell User-Agent: Mozilla/5.0 (X11; U; FreeBSD i386; en-US; rv:1.9.2.9) Gecko/20100908 Lightning/1.0b1 Thunderbird MIME-Version: 1.0 To: Andriy Gapon References: <20100908084855.GF2465@deviant.kiev.zoral.com.ua> <4C874F00.3050605@freebsd.org> <4C8D087B.5040404@freebsd.org> <03537796FAB54E02959E2D64FC83004F@multiplay.co.uk> <4C8D280F.3040803@freebsd.org> <3FBF66BF11AA4CBBA6124CA435A4A31B@multiplay.co.uk> <4C8E4212.30000@freebsd.org> <20100915104635.GA59871@icarus.home.lan> <8E233260F0334BC58B2C07F383939F8E@multiplay.co.uk> <4C9131F6.10807@DataIX.net> <4C91409F.9090204@DataIX.net> <4C91BACD.3080501@freebsd.org> In-Reply-To: <4C91BACD.3080501@freebsd.org> X-Enigmail-Version: 1.1.2 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org, Pawel Jakub Dawidek Subject: Re: zfs very poor performance compared to ufs due to lack of cache? 
X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 16 Sep 2010 10:04:42 -0000 On 09/16/2010 02:35, Andriy Gapon wrote: > on 16/09/2010 00:54 jhell said the following: >> On 09/15/2010 17:30, Steven Hartland wrote: >>> ----- Original Message ----- From: "jhell" jhell@DataIX.net >>> >>>> On 09/15/2010 06:54, Steven Hartland wrote: >>>>> ----- Original Message ----- From: "Jeremy Chadwick" >>>>> >>>>>> Looks like Andriy just committed something to HEAD/CURRENT which might >>>>>> address this: >>>>>> http://www.freebsd.org/cgi/query-pr.cgi?pr=kern/141305 >>>>> >>>>> Already running that as part of the patches unfortunately, it doesn't >>>>> seem >>>>> to have any effect. >>>>> >>>> >>>> Is it ? vm_page_set_validclean(m, off, bytes); >>>> I recall you saying that you added this from earlier in the thread. >>>> could be wrong though but what Andriy committed was the following. >>>> >>>> or ? vm_page_set_valid(m, off, bytes); >>> >>> >>> Ahh good catch I have: >>> if (error == 0) >>> vm_page_set_validclean(m, off, bytes); >>> >>> and not as mentioned by 141305: >>> if (error == 0) >>> vm_page_set_valid(m, off, bytes); >>> >>> Which should it actaully be? >>> >> >> Looking at the manual page vm_page_bits(9) I don't see a >> vm_page_is_validclean so really would it have a effect ?. >> >> > > Maybe the man page doesn't have it, but the function is real :-) > In this case it actually doesn't matter much which one to use, but what was > committed is more correct (as you might have expected). > Yeah that's what I thought since the data is clean in the first place that extra ability to zero off the end bits shouldn't ever need to happen. Notice though I mixed up vm_page_set* with vm_page_is*, I must have been sleeping during that point ;). 
-- jhell,v From owner-freebsd-fs@FreeBSD.ORG Thu Sep 16 10:58:40 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id F087C1065672 for ; Thu, 16 Sep 2010 10:58:39 +0000 (UTC) (envelope-from mavbsd@gmail.com) Received: from mail-bw0-f54.google.com (mail-bw0-f54.google.com [209.85.214.54]) by mx1.freebsd.org (Postfix) with ESMTP id 7009C8FC18 for ; Thu, 16 Sep 2010 10:58:38 +0000 (UTC) Received: by bwz15 with SMTP id 15so1822272bwz.13 for ; Thu, 16 Sep 2010 03:58:38 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:received:received:sender:message-id:date:from :user-agent:mime-version:to:cc:subject:references:in-reply-to :x-enigmail-version:content-type; bh=lw9P/9sS0E1J6VOSCTHSkbA1wHPZCXa0lJ4+Hpn22Fw=; b=n8zwZLx3Mk/dTu64KORhmSkUUeZCfQo7Df8h8hbstMeOrC/WGorCmyIea2y8c2uQnU ySoXMZm8yz1vyrQY7JjnCVQFkHD4SJS8yPUAkWob7CPHOh7QgEp9B85p6d1YIZHRJYGw mLOPB9eAyXHxSD1ePodbiowJhhJL1ZpEn/Jw4= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=sender:message-id:date:from:user-agent:mime-version:to:cc:subject :references:in-reply-to:x-enigmail-version:content-type; b=o0fAmfFjB3TYkfTNZy89H/o06cFoPc4WtWJivK5zNebqznFSREZl6g4MXabqOGllhd 6zNZnOb52Stp4Rqt8aGKaG2lo/BwqNeo97zGrr3VgcgGX47/eE6Nw7vp4QiSNeW2SXQC pytGVK5etK2RJXAACi5qmEncYarGhL7r7CvrE= Received: by 10.223.108.212 with SMTP id g20mr1201257fap.47.1284634718088; Thu, 16 Sep 2010 03:58:38 -0700 (PDT) Received: from mavbook2.mavhome.dp.ua (pc.mavhome.dp.ua [212.86.226.226]) by mx.google.com with ESMTPS id b11sm1071226faq.6.2010.09.16.03.58.35 (version=SSLv3 cipher=RC4-MD5); Thu, 16 Sep 2010 03:58:36 -0700 (PDT) Sender: Alexander Motin Message-ID: <4C91F845.4010100@FreeBSD.org> Date: Thu, 16 Sep 2010 13:58:13 +0300 From: Alexander Motin User-Agent: Thunderbird 2.0.0.23 (X11/20091212) MIME-Version: 1.0 To: a.smith@ukgrid.net References: <20100909140000.5744370gkyqv4eo0@webmail2.ukgrid.net> <20100909182318.11133lqu4q4u1mw4@webmail2.ukgrid.net> <4C89D6A8.1080107@icyb.net.ua> <20100910143900.20382xl5bl6oo9as@webmail2.ukgrid.net> <20100910141127.GA13056@icarus.home.lan> <20100910155510.11831w104qjpyc4g@webmail2.ukgrid.net> <20100910152544.GA14636@icarus.home.lan> <20100910173912.205969tzhjiovf8c@webmail2.ukgrid.net> <4C8A6B26.8050305@icyb.net.ua> <20100910184921.16956kbaskhrsmg4@webmail2.ukgrid.net> <4C8A7B20.7090408@FreeBSD.org> In-Reply-To: <4C8A7B20.7090408@FreeBSD.org> X-Enigmail-Version: 0.96.0 Content-Type: multipart/mixed; boundary="------------050703070900040004070005" Cc: freebsd-fs@freebsd.org, Andriy Gapon Subject: Re: ZFS related kernel panic X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 16 Sep 2010 10:58:40 -0000 This is a multi-part message in MIME format. --------------050703070900040004070005 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Alexander Motin wrote: > It looks like during timeout handling (it is quite complicated process > when port multiplier is used) some request was completed twice. So > original problem is probably in hardware (try to check/replace cables, > multiplier, ...), that caused timeout, but the fact that drive was > unable to handle it is probably a siis(4) driver bug. Thanks to console access provided, I have found the reason of crash. 
Attached patch should fix it. Patched system successfully runs the stress test for 45 minutes now, compared to crashing in a few minutes without it. Also I've found that the timeouts reported by the driver are not fatal. Affected commands complete correctly: as soon as a timeout is detected, the driver freezes new incoming requests to resolve the situation and, as a result, idles the bus. These timeouts are, I think, caused by some congestion on the SATA interface, probably due to the port multiplier. This panic could be triggered only by such fake timeouts, not real ones. -- Alexander Motin --------------050703070900040004070005 Content-Type: text/plain; name="siis.c.patch" Content-Transfer-Encoding: 7bit Content-Disposition: inline; filename="siis.c.patch" --- siis.c.debug 2010-09-16 11:11:59.000000000 +0100 +++ siis.c 2010-09-16 11:12:31.000000000 +0100 @@ -1209,6 +1209,7 @@ siis_end_transaction(struct siis_slot *s device_t dev = slot->dev; struct siis_channel *ch = device_get_softc(dev); union ccb *ccb = slot->ccb; + int lastto; mtx_assert(&ch->mtx, MA_OWNED); bus_dmamap_sync(ch->dma.work_tag, ch->dma.work_map, @@ -1292,11 +1293,6 @@ siis_end_transaction(struct siis_slot *s ch->oslots &= ~(1 << slot->slot); ch->rslots &= ~(1 << slot->slot); ch->aslots &= ~(1 << slot->slot); - if (et != SIIS_ERR_TIMEOUT) { - if (ch->toslots == (1 << slot->slot)) - xpt_release_simq(ch->sim, TRUE); - ch->toslots &= ~(1 << slot->slot); - } slot->state = SIIS_SLOT_EMPTY; slot->ccb = NULL; /* Update channel stats. */ @@ -1305,6 +1301,13 @@ siis_end_transaction(struct siis_slot *s (ccb->ataio.cmd.flags & CAM_ATAIO_FPDMA)) { ch->numtslots[ccb->ccb_h.target_id]--; } + /* Cancel timeout state if request completed normally. */ + if (et != SIIS_ERR_TIMEOUT) { + lastto = (ch->toslots == (1 << slot->slot)); + ch->toslots &= ~(1 << slot->slot); + if (lastto) + xpt_release_simq(ch->sim, TRUE); + } /* If it was our READ LOG command - process it.
*/ if (ch->readlog) { siis_process_read_log(dev, ccb); --------------050703070900040004070005-- From owner-freebsd-fs@FreeBSD.ORG Thu Sep 16 11:17:49 2010 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 09E8A1065679; Thu, 16 Sep 2010 11:17:49 +0000 (UTC) (envelope-from avg@icyb.net.ua) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id 1CE6B8FC13; Thu, 16 Sep 2010 11:17:47 +0000 (UTC) Received: from odyssey.starpoint.kiev.ua (alpha-e.starpoint.kiev.ua [212.40.38.101]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id OAA29537; Thu, 16 Sep 2010 14:17:46 +0300 (EEST) (envelope-from avg@icyb.net.ua) Message-ID: <4C91FCD9.1000203@icyb.net.ua> Date: Thu, 16 Sep 2010 14:17:45 +0300 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; U; FreeBSD amd64; en-US; rv:1.9.2.9) Gecko/20100909 Lightning/1.0b2 Thunderbird/3.1.3 MIME-Version: 1.0 To: Alexander Motin References: <20100909140000.5744370gkyqv4eo0@webmail2.ukgrid.net> <20100909182318.11133lqu4q4u1mw4@webmail2.ukgrid.net> <4C89D6A8.1080107@icyb.net.ua> <20100910143900.20382xl5bl6oo9as@webmail2.ukgrid.net> <20100910141127.GA13056@icarus.home.lan> <20100910155510.11831w104qjpyc4g@webmail2.ukgrid.net> <20100910152544.GA14636@icarus.home.lan> <20100910173912.205969tzhjiovf8c@webmail2.ukgrid.net> <4C8A6B26.8050305@icyb.net.ua> <20100910184921.16956kbaskhrsmg4@webmail2.ukgrid.net> <4C8A7B20.7090408@FreeBSD.org> <4C91F845.4010100@FreeBSD.org> In-Reply-To: <4C91F845.4010100@FreeBSD.org> X-Enigmail-Version: 1.1.2 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@FreeBSD.org, a.smith@ukgrid.net Subject: Re: ZFS related kernel panic X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 16 Sep 2010 11:17:49 -0000 on 16/09/2010 13:58 Alexander Motin said the following: > Thanks to console access provided, I have found the reason of crash. > Attached patch should fix it. Patched system successfully runs the > stress test for 45 minutes now, comparing to crashing in few minutes > without it. > > Also I've found that timeouts reported by the driver are not fatal. > Affected commands are correctly completing as soon as after detecting > time out driver freezes new incoming requests to resolve situation, and > as result, idling the bus. ones. These timeouts I think caused by some > congestion on SATA interface, that probably caused by port multiplier. > This panic could be triggered only by such fake timeouts, not the real Can the same happen with ahci (in theory)? Thanks a lot! 
-- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Thu Sep 16 11:21:12 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 4C67310656C2 for ; Thu, 16 Sep 2010 11:21:12 +0000 (UTC) (envelope-from mavbsd@gmail.com) Received: from mail-bw0-f54.google.com (mail-bw0-f54.google.com [209.85.214.54]) by mx1.freebsd.org (Postfix) with ESMTP id C225F8FC15 for ; Thu, 16 Sep 2010 11:21:11 +0000 (UTC) Received: by bwz15 with SMTP id 15so1842638bwz.13 for ; Thu, 16 Sep 2010 04:21:10 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:received:received:sender:message-id:date:from :user-agent:mime-version:to:cc:subject:references:in-reply-to :x-enigmail-version:content-type:content-transfer-encoding; bh=4ZloGiDxWPJ7aVdASGPX4+9bbPqkdqEdG4w546/qYLI=; b=tdN8GnX2jSjz5VFjZtno+LC0urVj6SGJ+HIjCgMv861Aom9qd0raTPCfSEF/20D/i6 IihR+dQpHgKOYLBfis++EL5l4LY/D9gdtB1k763ITM7EEhwUDUOzDRweY+jPT8LNBWkh PJSFCyeL1Qedoay/wduOcoBDOprPn0kTqPJLQ= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=sender:message-id:date:from:user-agent:mime-version:to:cc:subject :references:in-reply-to:x-enigmail-version:content-type :content-transfer-encoding; b=Y9H7dpYzVgAx+eUPzwbUhYBRDpM3E+oSUuyHrRfljU7N2P0oFPYL4CpB1qhwovhwgj QLFHoP/lw5RiA3ySxWNpRHoyY2hVTy1D9ZPLWltTZ2pkOTO3wI8p+qFfak5YS03JA9+o FUgbmK1+Qxfrbo9KIXGCRxaOAipnSKTPqhaGU= Received: by 10.204.113.20 with SMTP id y20mr2279182bkp.170.1284636070610; Thu, 16 Sep 2010 04:21:10 -0700 (PDT) Received: from mavbook2.mavhome.dp.ua (pc.mavhome.dp.ua [212.86.226.226]) by mx.google.com with ESMTPS id s34sm2381465bkk.1.2010.09.16.04.21.08 (version=SSLv3 cipher=RC4-MD5); Thu, 16 Sep 2010 04:21:09 -0700 (PDT) Sender: Alexander Motin Message-ID: <4C91FD8E.8080201@FreeBSD.org> Date: Thu, 16 Sep 2010 14:20:46 +0300 From: Alexander Motin User-Agent: Thunderbird 2.0.0.23 (X11/20091212) MIME-Version: 1.0 To: Andriy Gapon References: <20100909140000.5744370gkyqv4eo0@webmail2.ukgrid.net> <20100909182318.11133lqu4q4u1mw4@webmail2.ukgrid.net> <4C89D6A8.1080107@icyb.net.ua> <20100910143900.20382xl5bl6oo9as@webmail2.ukgrid.net> <20100910141127.GA13056@icarus.home.lan> <20100910155510.11831w104qjpyc4g@webmail2.ukgrid.net> <20100910152544.GA14636@icarus.home.lan> <20100910173912.205969tzhjiovf8c@webmail2.ukgrid.net> <4C8A6B26.8050305@icyb.net.ua> <20100910184921.16956kbaskhrsmg4@webmail2.ukgrid.net> <4C8A7B20.7090408@FreeBSD.org> <4C91F845.4010100@FreeBSD.org> <4C91FCD9.1000203@icyb.net.ua> In-Reply-To: <4C91FCD9.1000203@icyb.net.ua> X-Enigmail-Version: 0.96.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@FreeBSD.org, a.smith@ukgrid.net Subject: Re: ZFS related kernel panic X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 16 Sep 2010 11:21:12 -0000 Andriy Gapon wrote: > on 16/09/2010 13:58 Alexander Motin said the following: >> Thanks to console access provided, I have found the reason of crash. >> Attached patch should fix it. Patched system successfully runs the >> stress test for 45 minutes now, comparing to crashing in few minutes >> without it. >> >> Also I've found that timeouts reported by the driver are not fatal. 
>> Affected commands are correctly completing as soon as after detecting >> time out driver freezes new incoming requests to resolve situation, and >> as result, idling the bus. ones. These timeouts I think caused by some >> congestion on SATA interface, that probably caused by port multiplier. >> This panic could be triggered only by such fake timeouts, not the real > > Can the same happen with ahci (in theory)? Yes, but only on AHCI controllers with FIS-based switching support. At this moment there is only one such chip - 6Gbps Marvell 88SE912x. Same patch should apply ahci(4) also. -- Alexander Motin From owner-freebsd-fs@FreeBSD.ORG Thu Sep 16 11:43:49 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 2CF931065675; Thu, 16 Sep 2010 11:43:49 +0000 (UTC) (envelope-from a.smith@ukgrid.net) Received: from mx0.ukgrid.net (mx0.ukgrid.net [89.21.28.37]) by mx1.freebsd.org (Postfix) with ESMTP id 843ED8FC1D; Thu, 16 Sep 2010 11:43:48 +0000 (UTC) Received: from [89.21.28.38] (port=44015 helo=omicron.ukgrid.net) by mx0.ukgrid.net with esmtp (Exim 4.72; FreeBSD) envelope-from a.smith@ukgrid.net id 1OwCsV-000Bxd-Bd; Thu, 16 Sep 2010 12:43:47 +0100 Received: from voip.ukgrid.net (voip.ukgrid.net [89.107.16.9]) by webmail2.ukgrid.net (Horde Framework) with HTTP; Thu, 16 Sep 2010 12:43:47 +0100 Message-ID: <20100916124347.21133zzmy7ucn30g@webmail2.ukgrid.net> Date: Thu, 16 Sep 2010 12:43:47 +0100 From: a.smith@ukgrid.net To: Alexander Motin References: <20100909140000.5744370gkyqv4eo0@webmail2.ukgrid.net> <20100909182318.11133lqu4q4u1mw4@webmail2.ukgrid.net> <4C89D6A8.1080107@icyb.net.ua> <20100910143900.20382xl5bl6oo9as@webmail2.ukgrid.net> <20100910141127.GA13056@icarus.home.lan> <20100910155510.11831w104qjpyc4g@webmail2.ukgrid.net> <20100910152544.GA14636@icarus.home.lan> <20100910173912.205969tzhjiovf8c@webmail2.ukgrid.net> <4C8A6B26.8050305@icyb.net.ua> <20100910184921.16956kbaskhrsmg4@webmail2.ukgrid.net> <4C8A7B20.7090408@FreeBSD.org> <4C91F845.4010100@FreeBSD.org> In-Reply-To: <4C91F845.4010100@FreeBSD.org> MIME-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1; DelSp="Yes"; format="flowed" Content-Disposition: inline Content-Transfer-Encoding: 7bit User-Agent: Internet Messaging Program (IMP) H3 (4.3.7) / FreeBSD-8.0 Cc: freebsd-fs@freebsd.org, Andriy Gapon Subject: Re: ZFS related kernel panic X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 16 Sep 2010 11:43:49 -0000 Quoting Alexander Motin : > Thanks to console access provided, I have found the reason of crash. > Attached patch should fix it. Patched system successfully runs the > stress test for 45 minutes now, comparing to crashing in few minutes > without it. Hi, so to apply this patch (to my other systems), I need to apply this patch and then do I do a recompile of the entire kernel? thanks Andy. 
From owner-freebsd-fs@FreeBSD.ORG Thu Sep 16 11:46:09 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id B96FC106564A for ; Thu, 16 Sep 2010 11:46:09 +0000 (UTC) (envelope-from mavbsd@gmail.com) Received: from mail-bw0-f54.google.com (mail-bw0-f54.google.com [209.85.214.54]) by mx1.freebsd.org (Postfix) with ESMTP id 394F28FC2A for ; Thu, 16 Sep 2010 11:46:09 +0000 (UTC) Received: by bwz15 with SMTP id 15so1866058bwz.13 for ; Thu, 16 Sep 2010 04:46:08 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:received:received:sender:message-id:date:from :user-agent:mime-version:to:cc:subject:references:in-reply-to :x-enigmail-version:content-type:content-transfer-encoding; bh=L4BO+qiCQvL8ypvEj6eJm3sMqtMlfna/6fLAk5+jxH8=; b=Img1aZoDNXE9RhG3iftfYFh0qvD7lovHIQEvrpqmDP09hdC4yQT2Ly/WbTaqBMTGCr ob5oyhaunfrRX+WxElYqi6NZgZPB9KbNvyOl7Jm1fBWj0I/buE2/BiGicY3GjsplmPAz Q0rqT0dsOIFbgolOSLSu6su1d3LYuqjyM1gMk= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=sender:message-id:date:from:user-agent:mime-version:to:cc:subject :references:in-reply-to:x-enigmail-version:content-type :content-transfer-encoding; b=Y5jiurKoYZJ/ZcxK8vQuKJebgS/VO2+CAsCtJQHYS6Q6o/7S/b79XTgJjrK0RAzjvi ncH3i8/ivd/5IqOfo7H9WMwUaVRs20U28eWCnqg+yazXBTsXnSVsQ2U1QSPS5r/4PiwD s7Bx63/QFiglKV5PfCdFpxCtpgaSkTizSq4Qk= Received: by 10.204.48.75 with SMTP id q11mr2693627bkf.0.1284637568251; Thu, 16 Sep 2010 04:46:08 -0700 (PDT) Received: from mavbook2.mavhome.dp.ua (pc.mavhome.dp.ua [212.86.226.226]) by mx.google.com with ESMTPS id f18sm2401414bkf.3.2010.09.16.04.46.01 (version=SSLv3 cipher=RC4-MD5); Thu, 16 Sep 2010 04:46:04 -0700 (PDT) Sender: Alexander Motin Message-ID: <4C920363.5070306@FreeBSD.org> Date: Thu, 16 Sep 2010 14:45:39 +0300 From: Alexander Motin User-Agent: Thunderbird 2.0.0.23 (X11/20091212) MIME-Version: 1.0 To: a.smith@ukgrid.net References: <20100909140000.5744370gkyqv4eo0@webmail2.ukgrid.net> <20100909182318.11133lqu4q4u1mw4@webmail2.ukgrid.net> <4C89D6A8.1080107@icyb.net.ua> <20100910143900.20382xl5bl6oo9as@webmail2.ukgrid.net> <20100910141127.GA13056@icarus.home.lan> <20100910155510.11831w104qjpyc4g@webmail2.ukgrid.net> <20100910152544.GA14636@icarus.home.lan> <20100910173912.205969tzhjiovf8c@webmail2.ukgrid.net> <4C8A6B26.8050305@icyb.net.ua> <20100910184921.16956kbaskhrsmg4@webmail2.ukgrid.net> <4C8A7B20.7090408@FreeBSD.org> <4C91F845.4010100@FreeBSD.org> <20100916124347.21133zzmy7ucn30g@webmail2.ukgrid.net> In-Reply-To: <20100916124347.21133zzmy7ucn30g@webmail2.ukgrid.net> X-Enigmail-Version: 0.96.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org, Andriy Gapon Subject: Re: ZFS related kernel panic X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 16 Sep 2010 11:46:09 -0000 a.smith@ukgrid.net wrote: > Quoting Alexander Motin : >> Thanks to console access provided, I have found the reason of crash. >> Attached patch should fix it. Patched system successfully runs the >> stress test for 45 minutes now, comparing to crashing in few minutes >> without it. > > so to apply this patch (to my other systems), I need to apply this > patch and then do I do a recompile of the entire kernel? 
If you use siis(4) as mosule - you may rebuild only one module. Otherwise - entire kernel. -- Alexander Motin From owner-freebsd-fs@FreeBSD.ORG Thu Sep 16 12:13:24 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 576561065695; Thu, 16 Sep 2010 12:13:24 +0000 (UTC) (envelope-from a.smith@ukgrid.net) Received: from mx0.ukgrid.net (mx0.ukgrid.net [89.21.28.37]) by mx1.freebsd.org (Postfix) with ESMTP id 0C1A98FC12; Thu, 16 Sep 2010 12:13:24 +0000 (UTC) Received: from [89.21.28.38] (port=53664 helo=omicron.ukgrid.net) by mx0.ukgrid.net with esmtp (Exim 4.72; FreeBSD) envelope-from a.smith@ukgrid.net id 1OwDL9-000CaX-B2; Thu, 16 Sep 2010 13:13:23 +0100 Received: from voip.ukgrid.net (voip.ukgrid.net [89.107.16.9]) by webmail2.ukgrid.net (Horde Framework) with HTTP; Thu, 16 Sep 2010 13:13:23 +0100 Message-ID: <20100916131323.14772kmekjpzi6uc@webmail2.ukgrid.net> Date: Thu, 16 Sep 2010 13:13:23 +0100 From: a.smith@ukgrid.net To: Alexander Motin References: <20100909140000.5744370gkyqv4eo0@webmail2.ukgrid.net> <20100909182318.11133lqu4q4u1mw4@webmail2.ukgrid.net> <4C89D6A8.1080107@icyb.net.ua> <20100910143900.20382xl5bl6oo9as@webmail2.ukgrid.net> <20100910141127.GA13056@icarus.home.lan> <20100910155510.11831w104qjpyc4g@webmail2.ukgrid.net> <20100910152544.GA14636@icarus.home.lan> <20100910173912.205969tzhjiovf8c@webmail2.ukgrid.net> <4C8A6B26.8050305@icyb.net.ua> <20100910184921.16956kbaskhrsmg4@webmail2.ukgrid.net> <4C8A7B20.7090408@FreeBSD.org> <4C91F845.4010100@FreeBSD.org> In-Reply-To: <4C91F845.4010100@FreeBSD.org> MIME-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1; DelSp="Yes"; format="flowed" Content-Disposition: inline Content-Transfer-Encoding: 7bit User-Agent: Internet Messaging Program (IMP) H3 (4.3.7) / FreeBSD-8.0 Cc: freebsd-fs@freebsd.org, Andriy Gapon Subject: Re: ZFS related kernel panic X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 16 Sep 2010 12:13:24 -0000 Quoting Alexander Motin : > Thanks to console access provided, I have found the reason of crash. > Attached patch should fix it. Patched system successfully runs the > stress test for 45 minutes now, comparing to crashing in few minutes > without it. > Hi Alexander, regarding the patch, if I apply the patch to another system it applies fine but the siis.c is not the same as the one on the system you have been testing. Can you confirm if this is correct? cksum of the siis.c on the test system is: 346699298 55045 ./siis.c cksum of the siis.c after applying the attached patch file is: 269993354 54059 ./siis.c thanks Andy. 
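For anyone wanting to try the fix on another box, the rebuild Alexander describes would look roughly like the following on an 8.x source tree. The patch file name is only an example, the -p level depends on how the diff was generated, and the steps assume siis(4) is loaded as a module rather than compiled into the kernel:

    # cd /usr/src
    # patch -p0 < /path/to/siis-timeout.patch
    # cd /usr/src/sys/modules/siis
    # make && make install
    # reboot    (the controller driver should not be unloaded while its disks are in use)

If siis(4) is built statically into the kernel, a full "make buildkernel KERNCONF=YOURKERNEL && make installkernel" plus a reboot is needed instead.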
From owner-freebsd-fs@FreeBSD.ORG Thu Sep 16 12:19:08 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 497471065675 for ; Thu, 16 Sep 2010 12:19:08 +0000 (UTC) (envelope-from mavbsd@gmail.com) Received: from mail-bw0-f54.google.com (mail-bw0-f54.google.com [209.85.214.54]) by mx1.freebsd.org (Postfix) with ESMTP id BDA418FC12 for ; Thu, 16 Sep 2010 12:19:07 +0000 (UTC) Received: by bwz15 with SMTP id 15so1902194bwz.13 for ; Thu, 16 Sep 2010 05:19:06 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:received:received:sender:message-id:date:from :user-agent:mime-version:to:cc:subject:references:in-reply-to :x-enigmail-version:content-type:content-transfer-encoding; bh=aN5wCAkXcFd+5SQBBWtfaUcJw/tJNK4f8vM5d630Qtw=; b=A7wfaIMybr4vbxIBeoSR52u4zJcjNsRTRxjAh9sVwfL81pvxC7OBuoKpwR330Y1PJq 1+btAhKiO/d1UAdhyhTpQ1Xppn9XWTanRvizi1K8qmPTDhFCnZh4sMHMRyX47aBmRoSP QK0gfp3ZcsGi9sJfk3ErMTFXbdxAPKjERReTI= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=sender:message-id:date:from:user-agent:mime-version:to:cc:subject :references:in-reply-to:x-enigmail-version:content-type :content-transfer-encoding; b=SSu/OSFvKUuL8S8ymjogWBPU31bELO7y/8+fz4t42RwhhfI2+blP5RGfNfKW6SjM5c b4hoeOCJ3NhfegP1zGLmhzqjoU+V/NL+/h9BFfNdWRp+VYPXAYCCywQRe6u2YFhj6aDZ 6IaG3OvYZMLf7l7nGTxGQJdGtSsLkRdErFVT4= Received: by 10.204.131.132 with SMTP id x4mr2566801bks.50.1284639546614; Thu, 16 Sep 2010 05:19:06 -0700 (PDT) Received: from mavbook2.mavhome.dp.ua (pc.mavhome.dp.ua [212.86.226.226]) by mx.google.com with ESMTPS id x13sm2428010bki.0.2010.09.16.05.19.04 (version=SSLv3 cipher=RC4-MD5); Thu, 16 Sep 2010 05:19:05 -0700 (PDT) Sender: Alexander Motin Message-ID: <4C920B22.1060804@FreeBSD.org> Date: Thu, 16 Sep 2010 15:18:42 +0300 From: Alexander Motin User-Agent: Thunderbird 2.0.0.23 (X11/20091212) MIME-Version: 1.0 To: a.smith@ukgrid.net References: <20100909140000.5744370gkyqv4eo0@webmail2.ukgrid.net> <20100909182318.11133lqu4q4u1mw4@webmail2.ukgrid.net> <4C89D6A8.1080107@icyb.net.ua> <20100910143900.20382xl5bl6oo9as@webmail2.ukgrid.net> <20100910141127.GA13056@icarus.home.lan> <20100910155510.11831w104qjpyc4g@webmail2.ukgrid.net> <20100910152544.GA14636@icarus.home.lan> <20100910173912.205969tzhjiovf8c@webmail2.ukgrid.net> <4C8A6B26.8050305@icyb.net.ua> <20100910184921.16956kbaskhrsmg4@webmail2.ukgrid.net> <4C8A7B20.7090408@FreeBSD.org> <4C91F845.4010100@FreeBSD.org> <20100916131323.14772kmekjpzi6uc@webmail2.ukgrid.net> In-Reply-To: <20100916131323.14772kmekjpzi6uc@webmail2.ukgrid.net> X-Enigmail-Version: 0.96.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org, Andriy Gapon Subject: Re: ZFS related kernel panic X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 16 Sep 2010 12:19:08 -0000 a.smith@ukgrid.net wrote: > Quoting Alexander Motin : >> Thanks to console access provided, I have found the reason of crash. >> Attached patch should fix it. Patched system successfully runs the >> stress test for 45 minutes now, comparing to crashing in few minutes >> without it. > > regarding the patch, if I apply the patch to another system it applies > fine but the siis.c is not the same as the one on the system you have > been testing. 
Can you confirm if this is correct? > cksum of the siis.c on the test system is: > > 346699298 55045 ./siis.c > > cksum of the siis.c after applying the attached patch file is: > > 269993354 54059 ./siis.c On the test system left many additional debug. -- Alexander Motin From owner-freebsd-fs@FreeBSD.ORG Thu Sep 16 17:00:53 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 127181065679 for ; Thu, 16 Sep 2010 17:00:53 +0000 (UTC) (envelope-from daniel@digsys.bg) Received: from relay2.digsys.bg (varna.digsys.bg [192.92.129.9]) by mx1.freebsd.org (Postfix) with ESMTP id 85EBE8FC15 for ; Thu, 16 Sep 2010 17:00:52 +0000 (UTC) Received: from dcave.digsys.bg (daniel@dcave.digsys.bg [192.92.129.5]) by relay2.digsys.bg (8.14.4/8.14.4) with ESMTP id o8GGWZD9057725 for ; Thu, 16 Sep 2010 19:32:36 +0300 (EEST) (envelope-from daniel@digsys.bg) Message-ID: <4C9246A3.9050802@digsys.bg> Date: Thu, 16 Sep 2010 19:32:35 +0300 From: Daniel Kalchev User-Agent: Mozilla/5.0 (X11; U; FreeBSD amd64; en-US; rv:1.9.2.9) Gecko/20100908 Thunderbird/3.1.3 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=windows-1251; format=flowed Content-Transfer-Encoding: quoted-printable Subject: Swap on ZFS Volume still panics? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 16 Sep 2010 17:00:53 -0000 Have been using swap on ZFS for a while. 8 GB RAM, in=20 /boot/loader.conf only: vm.kmem_size=3D"12G" KDE workstation with two 500GB drives in ZFS mirror. Swap volume is NAME PROPERTY VALUE SOURCE storage/swap type volume - storage/swap creation =D1=F0 =DE=ED=E8 16 14:17 2010 - storage/swap used 16G - storage/swap available 58,6G - storage/swap referenced 3,99G - storage/swap compressratio 1.00x - storage/swap reservation none default storage/swap volsize 16G - storage/swap volblocksize 8K - storage/swap checksum off local storage/swap compression on local storage/swap readonly off default storage/swap shareiscsi off default storage/swap copies 1 default storage/swap refreservation 16G local storage/swap primarycache all default storage/swap secondarycache all default storage/swap usedbysnapshots 0 - storage/swap usedbydataset 3,99G - storage/swap usedbychildren 0 - storage/swap usedbyrefreservation 12,0G - storage/swap org.freebsd:swap on local Just added compression by the way, haven't yet seen it in action.=20 Sometimes, it swaps up to 2-4 GB. Swapping is noticeably slow. Never had hangs or kernel panics. 
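For reference, a swap zvol like the one listed above is normally created along these lines (pool name and size are just the ones from this example; org.freebsd:swap=on is the property the FreeBSD rc scripts look for to enable the volume as swap at boot):

    # zfs create -V 16G -o org.freebsd:swap=on -o checksum=off storage/swap
    # swapon /dev/zvol/storage/swap

The refreservation seen in the property listing is set automatically when the volume is created with -V.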
Daniel From owner-freebsd-fs@FreeBSD.ORG Thu Sep 16 18:57:40 2010 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 23DC6106564A; Thu, 16 Sep 2010 18:57:40 +0000 (UTC) (envelope-from arundel@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id EF4ED8FC12; Thu, 16 Sep 2010 18:57:39 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id o8GIvdIn025888; Thu, 16 Sep 2010 18:57:39 GMT (envelope-from arundel@freefall.freebsd.org) Received: (from arundel@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id o8GIvdLg025884; Thu, 16 Sep 2010 18:57:39 GMT (envelope-from arundel) Date: Thu, 16 Sep 2010 18:57:39 GMT Message-Id: <201009161857.o8GIvdLg025884@freefall.freebsd.org> To: arundel@FreeBSD.org, freebsd-fs@FreeBSD.org, freebsd-geom@FreeBSD.org From: arundel@FreeBSD.org Cc: Subject: Re: kern/127420: [geom] [gjournal] [panic] Journal overflow on gmirrored gjournal X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 16 Sep 2010 18:57:40 -0000 Old Synopsis: [gjournal] [panic] Journal overflow on gmirrored gjournal New Synopsis: [geom] [gjournal] [panic] Journal overflow on gmirrored gjournal Responsible-Changed-From-To: freebsd-fs->freebsd-geom Responsible-Changed-By: arundel Responsible-Changed-When: Thu Sep 16 18:56:53 UTC 2010 Responsible-Changed-Why: This one looks more geom than fs related. http://www.freebsd.org/cgi/query-pr.cgi?pr=127420 From owner-freebsd-fs@FreeBSD.ORG Thu Sep 16 19:22:31 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id B34131065670 for ; Thu, 16 Sep 2010 19:22:31 +0000 (UTC) (envelope-from andriy@irbisnet.com) Received: from smtp108.rog.mail.re2.yahoo.com (smtp108.rog.mail.re2.yahoo.com [68.142.225.206]) by mx1.freebsd.org (Postfix) with SMTP id 4BA648FC1C for ; Thu, 16 Sep 2010 19:22:30 +0000 (UTC) Received: (qmail 79353 invoked from network); 16 Sep 2010 19:22:30 -0000 Received: from smtp.irbisnet.com (andriy@99.235.226.221 with login) by smtp108.rog.mail.re2.yahoo.com with SMTP; 16 Sep 2010 12:22:30 -0700 PDT X-Yahoo-SMTP: dz9sigaswBA5kWoYWVTZrGHmIs2vaKgG1w-- X-YMail-OSG: TASJdbIVM1lJJUi6..eXrwXc5narOaNrmzPlTjJaN3EL2W3 BsiV2nL1liNqblcXwPGhzX_Xv1vtkoUyS0QbDbbCkiizxNzqYcF4HggNd5t3 O3hRpHSILr8oXHy_Vsv.o9jTGUoS2pSREzgRHbyPjbMijPyznmcXSzxJmzXB P08eJZSK8OeCmbGIz9a9TclsZZUWIt528no_LumPVXQDr4qpcHd9D49bymc5 _FQu0WdmSwFn5UMlhMY.p2DR97Bssm7M4a6zYV8HIFvAlgy.4GHVssse3KvC wmvkV9YTort2xTxN_vbtjh1HEQQr4v6UpsLPIpLNUVLhJoTVG19nlw04qjZH 85ORHWQqybWJhL5GdhXpD9o1wdj8q9C6djg-- X-Yahoo-Newman-Property: ymail-3 Received: from prime.irbisnet.com (prime.irbisnet.vpn [10.78.76.4]) by smtp.irbisnet.com (Postfix) with ESMTPSA id C7CFA11425 for ; Thu, 16 Sep 2010 15:22:29 -0400 (EDT) Content-Type: text/plain; charset=utf-8; format=flowed; delsp=yes To: "freebsd-fs@freebsd.org" Date: Thu, 16 Sep 2010 15:22:27 -0400 MIME-Version: 1.0 Content-Transfer-Encoding: 7bit From: "Andriy Bakay" Message-ID: User-Agent: Opera Mail/10.61 (FreeBSD) Subject: ZFS + GELI data integrity X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list 
List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 16 Sep 2010 19:22:31 -0000 Hi list(s), I am using ZFS on top of GELI. Does exists any practical reason to enable GELI data authentication (data integrity) underneath of ZFS? I understand GELI data integrity is cryptographically strong -- up to HMAC/SHA512, but ZFS has SHA256 checksum. GELI linked data to sector and will detect if somebody move data around, but my understanding is to move data around consistently one need to decrypt it which is very difficult. Correct me if I wrong. Any thoughts? Thanks, Andriy From owner-freebsd-fs@FreeBSD.ORG Thu Sep 16 22:15:21 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id AC5551065696 for ; Thu, 16 Sep 2010 22:15:21 +0000 (UTC) (envelope-from bra@fsn.hu) Received: from people.fsn.hu (people.fsn.hu [195.228.252.137]) by mx1.freebsd.org (Postfix) with ESMTP id 140E68FC14 for ; Thu, 16 Sep 2010 22:15:20 +0000 (UTC) Received: by people.fsn.hu (Postfix, from userid 1001) id E1AFC44374D; Thu, 16 Sep 2010 23:57:29 +0200 (CEST) X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.2 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MF-ACE0E1EA [pR: 18.7462] X-CRM114-CacheID: sfid-20100916_23572_2D60D4D9 X-CRM114-Status: Good ( pR: 18.7462 ) X-DSPAM-Result: Whitelisted X-DSPAM-Processed: Thu Sep 16 23:57:29 2010 X-DSPAM-Confidence: 0.7620 X-DSPAM-Probability: 0.0000 X-DSPAM-Signature: 4c9292c9191461017366706 X-DSPAM-Factors: 27, From*Attila Nagy , 0.00061, wrote+>, 0.00213, >+>, 0.00381, >+>, 0.00381, wrote, 0.00389, >+I, 0.00490, >+the, 0.00708, the+>, 0.00760, that+>, 0.01000, stuff, 0.01000, both+the, 0.01000, >+is, 0.01000, that+to, 0.01000, see+if, 0.01000, the+local, 0.01000, devices, 0.01000, reproduce, 0.01000, reproduce, 0.01000, heavy, 0.99000, and+3, 0.01000, Subject*file, 0.01000, Subject*file, 0.01000, 32), 0.01000, causing, 0.01000, Subject*can+be, 0.99000, ZFS, 0.01000, X-Spambayes-Classification: ham; 0.00 Message-ID: <4C9292C4.5090300@fsn.hu> Date: Thu, 16 Sep 2010 23:57:24 +0200 From: Attila Nagy User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.23) Gecko/20090817 Thunderbird/2.0.0.23 Mnenhy/0.7.6.0 MIME-Version: 1.0 To: Rick Macklem References: <853573529.515333.1281496508531.JavaMail.root@erie.cs.uoguelph.ca> In-Reply-To: <853573529.515333.1281496508531.JavaMail.root@erie.cs.uoguelph.ca> Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org Subject: Re: NFS problem: file doesn't appear in file listing, but can be accessed directly X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 16 Sep 2010 22:15:21 -0000 Hi, Sorry for the delay, there was a long vacation and then the usual catch-up problems... On 08/11/2010 05:15 AM, Rick Macklem wrote: > Ok, so it seems to be a server issue. Since there are quite a few files > missing, I'm surprised others aren't seeing problmes? > > I suspect it is some interaction between ZFS and the NFS server that > is causing this, but that's just a hunch. Possibly a problem w.r.t. > how the directory offset cookies are handled. 
> > Can you conveniently move the directory to a UFS2 volume and export > that, to see if the problem then goes away? Only to an md or zvol backed one, but that shouldn't affect the result, I think. BTW, all of this stuff is happening while doing maildir directory move between servers over NFS (using rsync, yeah I know this sounds silly, but we have the reasons for doing that) and checking the results on both sides with find and diff. On the source there is UFS where I can't see this problem. > Also, what architecture is the server running? (I'm wondering if > it might be an endianness or 32/64 bit issue related to the > directory offset cookies.? Just wild guesses at this point.) amd64 on all machines. > If you can't move the directory to UFS2 or that doesn't fix > the problem, all I can think to do is write/run a little program > locally on the server that does getdirentries() on the directory, > to try and spot something that might confuse the NFS server. (I can > write such a program for you, but I'd like to hear if it is a ZFS > specific problem first. > > Before doing an UFS2 copy, I've copied the problematic directory tree to a new location on the same ZFS volume. Surprise: it's OK! I get the same answer for "find . -type f | wc -l" on both the NFS client and directly on the server. I've tried to copy the directory via NFS with rsync (the way I produced this problematic directory and the others), without luck. I'm confused. Maybe it would be the best to try to reproduce it with similar usage pattern, but it seems to be hard on non real machines. The machine itself does heavy IO on the local file system (this is the hard thing to reproduce closely), while another machine does rsync over NFS copies (but I've tried direct rsync over ssh copies with the same result) and then checks the two directories with find (file listing) and diff. This process is done on many "threads" (up to 32) and there are occasional differences, where the copy stops because the source file listing is not the same as on the destination. The ZFS itself has 35 disks, one log and 3 cache devices. 
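Since the problem looks ZFS-specific so far, here is roughly what the little getdirentries() test program Rick mentions could look like (an illustrative sketch only, not his code): it walks a directory with getdirentries(2) and prints each entry plus the offset cookie returned after every buffer-full, so the raw directory contents and cookies on the server can be compared against what the NFS client ends up listing. Build with "cc -o dirwalk dirwalk.c" and run it against the problematic directory on the server.

/* dirwalk.c - dump directory entries and offset cookies via getdirentries(2) */
#include <sys/types.h>
#include <dirent.h>
#include <err.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int
main(int argc, char **argv)
{
        char buf[16384];        /* arbitrary read buffer size */
        long base = 0;          /* directory offset cookie */
        int fd, n;

        if (argc != 2)
                errx(1, "usage: dirwalk <directory>");
        if ((fd = open(argv[1], O_RDONLY)) == -1)
                err(1, "open %s", argv[1]);

        while ((n = getdirentries(fd, buf, sizeof(buf), &base)) > 0) {
                char *p = buf;

                while (p < buf + n) {
                        struct dirent *dp = (struct dirent *)p;

                        if (dp->d_reclen == 0)
                                break;          /* paranoia: avoid looping forever */
                        if (dp->d_fileno != 0) /* 0 means a deleted/empty slot */
                                printf("ino %u type %u name %.*s\n",
                                    (unsigned)dp->d_fileno,
                                    (unsigned)dp->d_type,
                                    (int)dp->d_namlen, dp->d_name);
                        p += dp->d_reclen;
                }
                printf("-- offset cookie after this block: %ld\n", base);
        }
        if (n == -1)
                err(1, "getdirentries");
        close(fd);
        return (0);
}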
From owner-freebsd-fs@FreeBSD.ORG Fri Sep 17 05:18:42 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id B3C311065679 for ; Fri, 17 Sep 2010 05:18:42 +0000 (UTC) (envelope-from gvidals@gmail.com) Received: from mail-bw0-f54.google.com (mail-bw0-f54.google.com [209.85.214.54]) by mx1.freebsd.org (Postfix) with ESMTP id 3BFE48FC15 for ; Fri, 17 Sep 2010 05:18:42 +0000 (UTC) Received: by bwz15 with SMTP id 15so2951116bwz.13 for ; Thu, 16 Sep 2010 22:18:41 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:received:reply-to:date :message-id:subject:from:to:content-type; bh=qVrYXwPbnaLV48s8lA4Nhk4IDhW/98VAzRH/8WX0u2Q=; b=YzyCgFUPUq01dzwqGcWYLR0HCMJ+eMFuRceTx5X+Kq+PyM79WzfC3JwCMB79DmpOmO 2DXfsDnBinohzqS8sWQss9NTH//hjtSmug9vBe6u/hIW78jr0w5QGLCyYhrEAeaW64S+ iPzYOvxZANeHGtSB0gHyKZGVxPEfGQOUPCcc8= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:reply-to:date:message-id:subject:from:to:content-type; b=GDWfZK/E5RaBE1NzUgeGz5JVaOXjfYhxeeh01NrSOYjumGo90O8ZKvCUGF3zkt6QBs buqtjywXgQ3I4GGYw2gvQeS25lXvAmj6B3tKctQFpAGVcn6xfMEAUjLauOCfZt53k47+ 44/zcYdzP/9Mj2JyR7uqUbpfv3I2OA79SuUT4= MIME-Version: 1.0 Received: by 10.239.180.140 with SMTP id i12mr264658hbg.140.1284700720780; Thu, 16 Sep 2010 22:18:40 -0700 (PDT) Received: by 10.239.153.75 with HTTP; Thu, 16 Sep 2010 22:18:40 -0700 (PDT) Date: Thu, 16 Sep 2010 22:18:40 -0700 Message-ID: From: Gil Vidals To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Subject: what happens to pool if ZIL dies on ZFS v14 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: gil@vidals.net List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 17 Sep 2010 05:18:42 -0000 I read a forum post claiming that FreeBSD's ZFS v13 will not continue to function normally when the dedicated ZIL devices dies or goes away. Apparently the Solaris version of ZFS does support losing the ZIL. So can somebody confirm what happens in FreeBSD 8.1 (ZFS v14)? Here's the forum post: *If you are going to split the ZIL onto a separate device, then you ***MUST* ** make it a mirrored vdev. If the ZIL device ever dies, the entire pool goes with it!! ZFSv13 (in FreeBSD 8) doesn't support the removal of ZIL devices.* http://forums.freebsd.org/showthread.php?t=9859 Thanks for your responses. 
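In zpool terms, the mirrored slog the forum post insists on is added like this (pool and device names are only examples):

    # zpool add tank log mirror da1 da2

rather than the single-device form "zpool add tank log da1", which on ZFS v13/v14 leaves the whole pool hostage to that one SSD.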
--Gil Vidals / VMRacks.com From owner-freebsd-fs@FreeBSD.ORG Fri Sep 17 05:39:06 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id EED731065670 for ; Fri, 17 Sep 2010 05:39:06 +0000 (UTC) (envelope-from gvidals@gmail.com) Received: from mail-bw0-f54.google.com (mail-bw0-f54.google.com [209.85.214.54]) by mx1.freebsd.org (Postfix) with ESMTP id 800A98FC12 for ; Fri, 17 Sep 2010 05:39:06 +0000 (UTC) Received: by bwz15 with SMTP id 15so2961247bwz.13 for ; Thu, 16 Sep 2010 22:39:05 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:received:reply-to:date :message-id:subject:from:to:content-type; bh=1pUSxqiOrFf0PuXAhgzsEfhp8Nc/HRV99WdEQSgM+O8=; b=u/+GMK5dXapVijuhQJRxMgfo6d53ODND2SxQ18I3bZwlcsh4otChjI9kAxVp/uyGfX wyV/FldH3mVvMSIQjfFHgLDvY/NvaIlljQj2R+AR8XQJuMZSyalZFb+/mPMI9EQ9H8/u qcTI5+h8c61njhl3NIIpDXmATawMEUL7Q9uno= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:reply-to:date:message-id:subject:from:to:content-type; b=q5QHUDkzkHwBlkUoY2j8QrDEG6h59omoBz2jvXbP5DG3jqbvTuViIUo/KdpGN6O68Z f6S5z2hEnqlcl67giI0W0wYSkq+zCBVBc09aRtM6B6R+lBnOIzdTQIbex/OFPA+757un 89ubY3bDZOUuCMrNuM5PBHzFJRUCM0cyPYbOc= MIME-Version: 1.0 Received: by 10.239.136.72 with SMTP id g8mr194674hbg.191.1284701945116; Thu, 16 Sep 2010 22:39:05 -0700 (PDT) Received: by 10.239.153.75 with HTTP; Thu, 16 Sep 2010 22:39:05 -0700 (PDT) Date: Thu, 16 Sep 2010 22:39:05 -0700 Message-ID: From: Gil Vidals To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Subject: vm.kmem_size for stability to avoid kernel panics? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: gil@vidals.net List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 17 Sep 2010 05:39:07 -0000 The ZFS Tuning guide says that no tuning may be necessary on servers with 2+ GB of RAM. *"FreeBSD 7.2+ has improved kernel memory allocation strategy and no tuning may be necessary on systems with more than 2 GB of RAM."* However, is it advisable to put an upper limit for these two parameters to ensure stability and avoid kernel panics? vm.kmem_size vm.kmem_size_max Thank you. 
Gil Vidals / VMRacks.com From owner-freebsd-fs@FreeBSD.ORG Fri Sep 17 06:12:18 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 8E4A01065672 for ; Fri, 17 Sep 2010 06:12:18 +0000 (UTC) (envelope-from jdc@koitsu.dyndns.org) Received: from qmta05.emeryville.ca.mail.comcast.net (qmta05.emeryville.ca.mail.comcast.net [76.96.30.48]) by mx1.freebsd.org (Postfix) with ESMTP id 754FA8FC17 for ; Fri, 17 Sep 2010 06:12:18 +0000 (UTC) Received: from omta06.emeryville.ca.mail.comcast.net ([76.96.30.51]) by qmta05.emeryville.ca.mail.comcast.net with comcast id 7gf11f00216AWCUA5iCH0z; Fri, 17 Sep 2010 06:12:17 +0000 Received: from koitsu.dyndns.org ([98.248.41.155]) by omta06.emeryville.ca.mail.comcast.net with comcast id 7iCG1f0093LrwQ28SiCHJv; Fri, 17 Sep 2010 06:12:17 +0000 Received: by icarus.home.lan (Postfix, from userid 1000) id AFE539B427; Thu, 16 Sep 2010 23:12:16 -0700 (PDT) Date: Thu, 16 Sep 2010 23:12:16 -0700 From: Jeremy Chadwick To: gil@vidals.net Message-ID: <20100917061216.GA44936@icarus.home.lan> References: MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.5.20 (2009-06-14) Cc: freebsd-fs@freebsd.org Subject: Re: vm.kmem_size for stability to avoid kernel panics? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 17 Sep 2010 06:12:18 -0000 On Thu, Sep 16, 2010 at 10:39:05PM -0700, Gil Vidals wrote: > The ZFS Tuning guide says that no tuning may be necessary on servers with 2+ > GB of RAM. > > *"FreeBSD 7.2+ has improved kernel memory allocation strategy and no tuning > may be necessary on systems with more than 2 GB of RAM."* > However, is it advisable to put an upper limit for these two parameters to > ensure stability and avoid kernel panics? > > vm.kmem_size > vm.kmem_size_max It depends entirely on what OS version -- and build date -- you're using, in addition to architecture (i386 vs. amd64). Please provide more information about your system. There were changes to the underlying VM to extend vm.kmem_size_max's limit, which got committed sometime during the 7.2-STABLE (I think) cycle. However, vm.kmem_size still needs to be adjusted on both RELENG_7 and RELENG_8. You *do not* (and should not) need to adjust vm.kmem_size_max. Another ZFS-centric tunable you should adjust is vfs.zfs.arc_max -- but again, the functionality of this tunable depends exactly on what OS version and date of build you're using. The functionality in vfs.zfs.arc_max was changed from being a "high watermark" to a hard limit due to people continuing to experience "kmem map too small" panics. I think I may have posted to the list long ago about when this was changed; I don't remember the date off the top of my head. I do know that for RELENG_8, vfs.zfs.arc_max is a hard limit. Each person's hardware, environment, and workload is different, so your tuning will vary. This is the ZFS-related tuning bits we use on our amd64 RELENG_7 and RELENG_8 systems which have 4GB physical RAM installed: # Increase vm.kmem_size to allow for ZFS ARC to utilise more memory. 
vm.kmem_size="4096M" vfs.zfs.arc_max="3584M" # Disable ZFS prefetching # http://southbrain.com/south/2008/04/the-nightmare-comes-slowly-zfs.html # Increases overall speed of ZFS, but when disk flushing/writes occur, # system is less responsive (due to extreme disk I/O). # NOTE: 8.0-RC1 disables this by default on systems <= 4GB RAM anyway # NOTE: System has 8GB of RAM, so prefetch would be enabled by default. vfs.zfs.prefetch_disable="1" # Disable UMA (uma(9)) for ZFS; amd64 was moved to exclusively use UMA # on 2010/05/24. # http://lists.freebsd.org/pipermail/freebsd-stable/2010-June/057162.html vfs.zfs.zio.use_uma="0" # Decrease ZFS txg timeout value from 30 (default) to 5 seconds. This # should increase throughput and decrease the "bursty" stalls that # happen during immense I/O with ZFS. # http://lists.freebsd.org/pipermail/freebsd-fs/2009-December/007343.html # http://lists.freebsd.org/pipermail/freebsd-fs/2009-December/007355.html vfs.zfs.txg.timeout="5" -- | Jeremy Chadwick jdc@parodius.com | | Parodius Networking http://www.parodius.com/ | | UNIX Systems Administrator Mountain View, CA, USA | | Making life hard for others since 1977. PGP: 4BD6C0CB | From owner-freebsd-fs@FreeBSD.ORG Fri Sep 17 07:20:08 2010 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id E78FD106566C for ; Fri, 17 Sep 2010 07:20:08 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id D84328FC14 for ; Fri, 17 Sep 2010 07:20:08 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id o8H7K87D004058 for ; Fri, 17 Sep 2010 07:20:08 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id o8H7K802004052; Fri, 17 Sep 2010 07:20:08 GMT (envelope-from gnats) Date: Fri, 17 Sep 2010 07:20:08 GMT Message-Id: <201009170720.o8H7K802004052@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org From: dfilter@FreeBSD.ORG (dfilter service) Cc: Subject: Re: kern/138790: commit references a PR X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: dfilter service List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 17 Sep 2010 07:20:09 -0000 The following reply was made to PR kern/138790; it has been noted by GNATS. From: dfilter@FreeBSD.ORG (dfilter service) To: bug-followup@FreeBSD.org Cc: Subject: Re: kern/138790: commit references a PR Date: Fri, 17 Sep 2010 07:14:16 +0000 (UTC) Author: avg Date: Fri Sep 17 07:14:07 2010 New Revision: 212780 URL: http://svn.freebsd.org/changeset/base/212780 Log: zfs arc_reclaim_needed: more reasonable threshold for available pages vm_paging_target() is not a trigger of any kind for pageademon, but rather a "soft" target for it when it's already triggered. Thus, trying to keep 2048 pages above that level at the expense of ARC was simply driving ARC size into the ground even with normal memory loads. Instead, use a threshold at which a pagedaemon scan is triggered, so that ARC reclaiming helps with pagedaemon's task, but the latter still recycles active and inactive pages. 
PR: kern/146410, kern/138790 MFC after: 3 weeks Modified: head/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c Modified: head/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c ============================================================================== --- head/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c Fri Sep 17 04:55:01 2010 (r212779) +++ head/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c Fri Sep 17 07:14:07 2010 (r212780) @@ -2161,10 +2161,10 @@ arc_reclaim_needed(void) return (0); /* - * If pages are needed or we're within 2048 pages - * of needing to page need to reclaim + * Cooperate with pagedaemon when it's time for it to scan + * and reclaim some pages. */ - if (vm_pages_needed || (vm_paging_target() > -2048)) + if (vm_paging_need()) return (1); #if 0 _______________________________________________ svn-src-all@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/svn-src-all To unsubscribe, send any mail to "svn-src-all-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Fri Sep 17 07:40:08 2010 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 8D6BC1065728 for ; Fri, 17 Sep 2010 07:40:08 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 7D4838FC13 for ; Fri, 17 Sep 2010 07:40:08 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id o8H7e8I5044916 for ; Fri, 17 Sep 2010 07:40:08 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id o8H7e86f044915; Fri, 17 Sep 2010 07:40:08 GMT (envelope-from gnats) Date: Fri, 17 Sep 2010 07:40:08 GMT Message-Id: <201009170740.o8H7e86f044915@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org From: dfilter@FreeBSD.ORG (dfilter service) Cc: Subject: Re: kern/138790: commit references a PR X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: dfilter service List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 17 Sep 2010 07:40:08 -0000 The following reply was made to PR kern/138790; it has been noted by GNATS. From: dfilter@FreeBSD.ORG (dfilter service) To: bug-followup@FreeBSD.org Cc: Subject: Re: kern/138790: commit references a PR Date: Fri, 17 Sep 2010 07:34:57 +0000 (UTC) Author: avg Date: Fri Sep 17 07:34:50 2010 New Revision: 212783 URL: http://svn.freebsd.org/changeset/base/212783 Log: zfs arc_reclaim_needed: fix typo in mismerge in r212780 PR: kern/146410, kern/138790 MFC after: 3 weeks X-MFC with: r212780 Modified: head/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c Modified: head/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c ============================================================================== --- head/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c Fri Sep 17 07:20:20 2010 (r212782) +++ head/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c Fri Sep 17 07:34:50 2010 (r212783) @@ -2160,7 +2160,7 @@ arc_reclaim_needed(void) * Cooperate with pagedaemon when it's time for it to scan * and reclaim some pages. 
*/ - if (vm_paging_need()) + if (vm_paging_needed()) return (1); #if 0 _______________________________________________ svn-src-all@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/svn-src-all To unsubscribe, send any mail to "svn-src-all-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Fri Sep 17 08:11:37 2010 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 06155106566C; Fri, 17 Sep 2010 08:11:37 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id D12328FC16; Fri, 17 Sep 2010 08:11:36 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id o8H8Ba7g087393; Fri, 17 Sep 2010 08:11:36 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id o8H8BasU087389; Fri, 17 Sep 2010 08:11:36 GMT (envelope-from linimon) Date: Fri, 17 Sep 2010 08:11:36 GMT Message-Id: <201009170811.o8H8BasU087389@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Cc: Subject: Re: kern/150501: [zfs] ZFS vdev failure vdev.bad_label on amd64 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 17 Sep 2010 08:11:37 -0000 Old Synopsis: ZFS vdev failure vdev.bad_label on amd64 New Synopsis: [zfs] ZFS vdev failure vdev.bad_label on amd64 Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Fri Sep 17 08:11:22 UTC 2010 Responsible-Changed-Why: Over to maintainer(s). 
http://www.freebsd.org/cgi/query-pr.cgi?pr=150501 From owner-freebsd-fs@FreeBSD.ORG Fri Sep 17 08:12:06 2010 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 5AF1C1065670; Fri, 17 Sep 2010 08:12:06 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 31DC58FC24; Fri, 17 Sep 2010 08:12:06 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id o8H8C6Gx087465; Fri, 17 Sep 2010 08:12:06 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id o8H8C67q087460; Fri, 17 Sep 2010 08:12:06 GMT (envelope-from linimon) Date: Fri, 17 Sep 2010 08:12:06 GMT Message-Id: <201009170812.o8H8C67q087460@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Cc: Subject: Re: kern/150503: [zfs] ZFS disks are UNAVAIL and corrupted after reboot X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 17 Sep 2010 08:12:06 -0000 Old Synopsis: ZFS disks are UNAVAIL and corrupted after reboot New Synopsis: [zfs] ZFS disks are UNAVAIL and corrupted after reboot Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Fri Sep 17 08:11:48 UTC 2010 Responsible-Changed-Why: Over to maintainer(s). http://www.freebsd.org/cgi/query-pr.cgi?pr=150503 From owner-freebsd-fs@FreeBSD.ORG Fri Sep 17 08:24:39 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 1756D106566C for ; Fri, 17 Sep 2010 08:24:39 +0000 (UTC) (envelope-from avg@icyb.net.ua) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id 443C28FC1B for ; Fri, 17 Sep 2010 08:24:37 +0000 (UTC) Received: from porto.topspin.kiev.ua (porto-e.starpoint.kiev.ua [212.40.38.100]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id LAA18815; Fri, 17 Sep 2010 11:24:31 +0300 (EEST) (envelope-from avg@icyb.net.ua) Received: from localhost.topspin.kiev.ua ([127.0.0.1]) by porto.topspin.kiev.ua with esmtp (Exim 4.34 (FreeBSD)) id 1OwWFD-0008C0-4q; Fri, 17 Sep 2010 11:24:31 +0300 Message-ID: <4C9325BE.7000101@icyb.net.ua> Date: Fri, 17 Sep 2010 11:24:30 +0300 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; U; FreeBSD amd64; en-US; rv:1.9.2.9) Gecko/20100912 Lightning/1.0b2 Thunderbird/3.1.3 MIME-Version: 1.0 To: Jeremy Chadwick References: <20100917061216.GA44936@icarus.home.lan> In-Reply-To: <20100917061216.GA44936@icarus.home.lan> X-Enigmail-Version: 1.1.2 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Cc: gil@vidals.net, freebsd-fs@freebsd.org Subject: Re: vm.kmem_size for stability to avoid kernel panics? 
X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 17 Sep 2010 08:24:39 -0000 on 17/09/2010 09:12 Jeremy Chadwick said the following: > # Disable UMA (uma(9)) for ZFS; amd64 was moved to exclusively use UMA > # on 2010/05/24. > # http://lists.freebsd.org/pipermail/freebsd-stable/2010-June/057162.html > vfs.zfs.zio.use_uma="0" I think that that commit was reverted and zero is a default value now. -- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Fri Sep 17 08:39:53 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id D45AA1065670 for ; Fri, 17 Sep 2010 08:39:53 +0000 (UTC) (envelope-from jdc@koitsu.dyndns.org) Received: from qmta03.emeryville.ca.mail.comcast.net (qmta03.emeryville.ca.mail.comcast.net [76.96.30.32]) by mx1.freebsd.org (Postfix) with ESMTP id AF0FA8FC18 for ; Fri, 17 Sep 2010 08:39:53 +0000 (UTC) Received: from omta20.emeryville.ca.mail.comcast.net ([76.96.30.87]) by qmta03.emeryville.ca.mail.comcast.net with comcast id 7kfs1f0051smiN4A3kfs6r; Fri, 17 Sep 2010 08:39:52 +0000 Received: from koitsu.dyndns.org ([98.248.41.155]) by omta20.emeryville.ca.mail.comcast.net with comcast id 7kfr1f0043LrwQ28gkfrBc; Fri, 17 Sep 2010 08:39:52 +0000 Received: by icarus.home.lan (Postfix, from userid 1000) id 5B97D9B427; Fri, 17 Sep 2010 01:39:51 -0700 (PDT) Date: Fri, 17 Sep 2010 01:39:51 -0700 From: Jeremy Chadwick To: Andriy Gapon Message-ID: <20100917083951.GA48183@icarus.home.lan> References: <20100917061216.GA44936@icarus.home.lan> <4C9325BE.7000101@icyb.net.ua> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <4C9325BE.7000101@icyb.net.ua> User-Agent: Mutt/1.5.20 (2009-06-14) Cc: gil@vidals.net, freebsd-fs@freebsd.org Subject: Re: vm.kmem_size for stability to avoid kernel panics? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 17 Sep 2010 08:39:53 -0000 On Fri, Sep 17, 2010 at 11:24:30AM +0300, Andriy Gapon wrote: > on 17/09/2010 09:12 Jeremy Chadwick said the following: > > # Disable UMA (uma(9)) for ZFS; amd64 was moved to exclusively use UMA > > # on 2010/05/24. > > # http://lists.freebsd.org/pipermail/freebsd-stable/2010-June/057162.html > > vfs.zfs.zio.use_uma="0" > > I think that that commit was reverted and zero is a default value now. Yeah, it was. However, because it's impossible to determine in every single case what the kernel build date is on a readers' machine (ex. people who search Google, find the mailing list post, etc.), I mention it as a safety net. I should have mentioned this one as well (I don't include it in our loader.conf because our RELENG_8 systems all have kernels built after the below commit) -- the recent change of vfs.zfs.vdev.max_pending's value, from 35 to 10: vfs.zfs.vdev.max_pending="10" And that's based on this commit: http://www.freebsd.org/cgi/cvsweb.cgi/src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_queue.c#rev1.4.2.1 I say this while on a soapbox, but I'm trying to do so politely. This is why I continue to harp/rant/whine about the need for better communication about all the changes that happen, especially in ZFS, to the RELENG_x (non-HEAD) branches. 
That means literally every commit. Most users/admins do not follow commits or mailing lists, instead resorting to Google to find solutions, etc... I also know that committers/engineers can't dedicate that amount of time either. I just wish we could find a compromise that works well. A lot of places use blogs for this task. -- | Jeremy Chadwick jdc@parodius.com | | Parodius Networking http://www.parodius.com/ | | UNIX Systems Administrator Mountain View, CA, USA | | Making life hard for others since 1977. PGP: 4BD6C0CB | From owner-freebsd-fs@FreeBSD.ORG Fri Sep 17 08:59:40 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id BE4F8106564A for ; Fri, 17 Sep 2010 08:59:40 +0000 (UTC) (envelope-from alexander@leidinger.net) Received: from mail.ebusiness-leidinger.de (mail.ebusiness-leidinger.de [217.11.53.44]) by mx1.freebsd.org (Postfix) with ESMTP id 508648FC15 for ; Fri, 17 Sep 2010 08:59:39 +0000 (UTC) Received: from outgoing.leidinger.net (p57B3ADE4.dip.t-dialin.net [87.179.173.228]) by mail.ebusiness-leidinger.de (Postfix) with ESMTPSA id 9D18884400D; Fri, 17 Sep 2010 10:59:36 +0200 (CEST) Received: from webmail.leidinger.net (webmail.leidinger.net [192.168.1.102]) by outgoing.leidinger.net (Postfix) with ESMTP id 82E7423C7; Fri, 17 Sep 2010 10:59:33 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=Leidinger.net; s=outgoing-alex; t=1284713973; bh=Uq5m2Q+FnH7KSXxkqZhJj9C64KsAZnSBOlS9G+JZgRg=; h=Message-ID:Date:From:To:Cc:Subject:References:In-Reply-To: MIME-Version:Content-Type:Content-Transfer-Encoding; b=gabsnbsjlmTfqfLDt6cvhMDTnGS/JWZFjm6BYWMa6XzacP7viuS0jJKmzmchwWwD5 NKYpqISEPmiNp1YDuvDur3Ikfy6G0jDC0TFPLDgzHelOe8C+OPrRoHVhxJFT8EcPxp 7NWSP3W2hlWu7E8kcClsxhnddJ6ORITCFmOe2Tzv1IpC/rQPhvXquO0s0T1Efgxye2 k9XBSPm7LRsLPKnht9aE4XyzsHfe11NzzaKAs/1gH4NBYPBStT61kJTXSYFPBVVsbe HrgQcTaCG6w7CM0Su6CpKYHeMS0N7fACMAGVjNpNrgUXhXoKclQJxAMrvx8EpyKHZ4 Gj/blN2PtoUSQ== Received: (from www@localhost) by webmail.leidinger.net (8.14.4/8.13.8/Submit) id o8H8xWKm050598; Fri, 17 Sep 2010 10:59:32 +0200 (CEST) (envelope-from Alexander@Leidinger.net) Received: from pslux.ec.europa.eu (pslux.ec.europa.eu [158.169.9.14]) by webmail.leidinger.net (Horde Framework) with HTTP; Fri, 17 Sep 2010 10:59:32 +0200 Message-ID: <20100917105932.2049587u8fwz4pog@webmail.leidinger.net> Date: Fri, 17 Sep 2010 10:59:32 +0200 From: Alexander Leidinger To: gil@vidals.net, Gil Vidals References: In-Reply-To: MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8; DelSp="Yes"; format="flowed" Content-Disposition: inline Content-Transfer-Encoding: 7bit User-Agent: Dynamic Internet Messaging Program (DIMP) H3 (1.1.4) X-EBL-MailScanner-Information: Please contact the ISP for more information X-EBL-MailScanner-ID: 9D18884400D.A8165 X-EBL-MailScanner: Found to be clean X-EBL-MailScanner-SpamCheck: not spam, spamhaus-ZEN, SpamAssassin (not cached, score=-1.1, required 6, autolearn=disabled, ALL_TRUSTED -1.00, DKIM_SIGNED 0.10, DKIM_VALID -0.10, DKIM_VALID_AU -0.10) X-EBL-MailScanner-From: alexander@leidinger.net X-EBL-MailScanner-Watermark: 1285318777.90982@ssZtouasrrSuV3uyyg4U1A X-EBL-Spam-Status: No X-Mailman-Approved-At: Fri, 17 Sep 2010 10:58:15 +0000 Cc: freebsd-fs@freebsd.org Subject: Re: what happens to pool if ZIL dies on ZFS v14 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: 
, X-List-Received-Date: Fri, 17 Sep 2010 08:59:40 -0000 Quoting Gil Vidals (from Thu, 16 Sep 2010 22:18:40 -0700): > I read a forum post claiming that FreeBSD's ZFS v13 will not continue to > function normally when the dedicated ZIL devices dies or goes away. > Apparently the Solaris version of ZFS does support losing the ZIL. So can > somebody confirm what happens in FreeBSD 8.1 (ZFS v14)? > > Here's the forum post: > *If you are going to split the ZIL onto a separate device, then you ***MUST* > ** make it a mirrored vdev. If the ZIL device ever dies, the entire pool > goes with it!! ZFSv13 (in FreeBSD 8) doesn't support the removal of ZIL > devices.* > > http://forums.freebsd.org/showthread.php?t=9859 No matter if it is Solaris or FreeBSD, the _the_ ZIL dies, the pool dies too. For this reason the recommendation is to mirror any additional ZIL device on any System, to prevent a disk-crash to render the pool useless. A completely different matter is that ZIL devices can not be removed ("administratively removed" is different from "suddenly dead", as in the first case the system can move data from the device to be removed away to the real storage space in the pool). This is true even for -current (ATM). There is work on the way to update the ZFS to a version which allows the removal of ZIL devices (and more). Bye, Alexander. -- Stability itself is nothing else than a more sluggish motion. http://www.Leidinger.net Alexander @ Leidinger.net: PGP ID = B0063FE7 http://www.FreeBSD.org netchild @ FreeBSD.org : PGP ID = 72077137 From owner-freebsd-fs@FreeBSD.ORG Fri Sep 17 15:35:06 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id ADCB71065679 for ; Fri, 17 Sep 2010 15:35:06 +0000 (UTC) (envelope-from gvidals@gmail.com) Received: from mail-ew0-f54.google.com (mail-ew0-f54.google.com [209.85.215.54]) by mx1.freebsd.org (Postfix) with ESMTP id 3BC468FC16 for ; Fri, 17 Sep 2010 15:35:05 +0000 (UTC) Received: by ewy22 with SMTP id 22so1288045ewy.13 for ; Fri, 17 Sep 2010 08:35:05 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:received:reply-to :in-reply-to:references:date:message-id:subject:from:to:content-type; bh=KiU0I8tD/S/9eferFpoCgPdc+1olvTss12VM527RE5c=; b=QbciECutPDZ64V+GCR95aC4UkcZxp65FjfBO0QB96eMfBP0aQbZ57+3cr6eIUh2Cr8 xHpy5KlXzFIBd9CIbvo+Oe/ZQT+qGm5eBqiNnufpQM3PV/Dgy7p44Ea65QUQPLPAj6TP 6ikvGKwHP/tYqENaG5Det18M1alKMUABosk3c= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:reply-to:in-reply-to:references:date:message-id :subject:from:to:content-type; b=kvLoilgYqX5YzbEzZY9o/G0V+uYOhe5ctDPVVam1MTWWei9CNtnlYRuXfeTEqQ45Ur 29J+IJVYx9M6UCNE5pdSkk58+iEJ/orCm/jcFM4Zoa6zNpVwzo846b1h7YPtHLZzsjBN 7u4dLxTXmidiKKiQrsTb4vODFa0dNtrhjPXhE= MIME-Version: 1.0 Received: by 10.239.137.3 with SMTP id j3mr297484hbj.66.1284737704796; Fri, 17 Sep 2010 08:35:04 -0700 (PDT) Received: by 10.239.153.75 with HTTP; Fri, 17 Sep 2010 08:35:04 -0700 (PDT) In-Reply-To: <4C9385B0.2080909@shatow.net> References: <4C9385B0.2080909@shatow.net> Date: Fri, 17 Sep 2010 08:35:04 -0700 Message-ID: From: Gil Vidals To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Subject: Re: what happens to pool if ZIL dies on ZFS v14 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: gil@vidals.net List-Id: 
Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 17 Sep 2010 15:35:06 -0000 Bryan thank you for the detailed answer. Assuming the ZIL SSD died, what steps would I follow to recover the pool? (i hope it is recoverable). -Gil Vidals / VMRacks.com On Fri, Sep 17, 2010 at 8:13 AM, Bryan Drewery wrote: > Gil Vidals wrote: > >> I read a forum post claiming that FreeBSD's ZFS v13 will not continue to >> function normally when the dedicated ZIL devices dies or goes away. >> Apparently the Solaris version of ZFS does support losing the ZIL. So can >> somebody confirm what happens in FreeBSD 8.1 (ZFS v14)? >> >> >> > Yes that is correct. You should mirror your log devices to avoid problems. > > zfs v19 allows removing log devices - so mirroring will not be necessary > after that point. > opensolaris$ zpool upgrade -v|grep Log > 19 Log device removal > > > Bryan > From owner-freebsd-fs@FreeBSD.ORG Fri Sep 17 15:40:40 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 33CFE106567A for ; Fri, 17 Sep 2010 15:40:40 +0000 (UTC) (envelope-from bryan@shatow.net) Received: from secure.xzibition.com (secure.xzibition.com [173.160.118.92]) by mx1.freebsd.org (Postfix) with ESMTP id D4B228FC08 for ; Fri, 17 Sep 2010 15:40:39 +0000 (UTC) DomainKey-Signature: a=rsa-sha1; c=nofws; d=shatow.net; h=message-id :date:from:mime-version:to:cc:subject:references:in-reply-to :content-type:content-transfer-encoding; q=dns; s=sweb; b=ubLG7C J94tpeQN0CLGtORdpeVsQKVK2+bWseHZOvXVBvUAeah9TQP2rn1ed09FJToFt2yv MFaQ8GAfd8mZRPW/Xu8z9+oicg2+NG8Y8YKWU9oCVk5qvWFHbMaF+dTALnqDO5/X tENofR5PgyKXMxPyJz4Jp5uSnorUJPGzIYFhY= DKIM-Signature: v=1; a=rsa-sha256; c=simple; d=shatow.net; h=message-id :date:from:mime-version:to:cc:subject:references:in-reply-to :content-type:content-transfer-encoding; s=sweb; bh=gy4Pf93g0ppO xyBotQDPE286TKwRbpXRFAPuXGqPQtk=; b=2WIN7gYl4u/FPDxlvhCbX6jalN12 7cDER7XluySgwsV4nl5JG2YQvLzywTcEkVj8TYHM3nQAe+N3/N8H5HXxok0p5I/C Gv+tckCgSMg4RVQzcqqmUfzjolYYrYz3G7f1EJdn7j8UEg1p2T8IWD4/yLM/LBnh CoZKDfSn4GMMkvo= Received: (qmail 72754 invoked from network); 17 Sep 2010 10:13:57 -0500 Received: from unknown (HELO ?192.168.0.201?) (bryan@shatow.net@74.94.87.209) by sweb.xzibition.com with ESMTPA; 17 Sep 2010 10:13:57 -0500 Message-ID: <4C9385B0.2080909@shatow.net> Date: Fri, 17 Sep 2010 10:13:52 -0500 From: Bryan Drewery User-Agent: Thunderbird 2.0.0.24 (Windows/20100228) MIME-Version: 1.0 To: gil@vidals.net References: In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org Subject: Re: what happens to pool if ZIL dies on ZFS v14 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 17 Sep 2010 15:40:40 -0000 Gil Vidals wrote: > I read a forum post claiming that FreeBSD's ZFS v13 will not continue to > function normally when the dedicated ZIL devices dies or goes away. > Apparently the Solaris version of ZFS does support losing the ZIL. So can > somebody confirm what happens in FreeBSD 8.1 (ZFS v14)? > > Yes that is correct. You should mirror your log devices to avoid problems. zfs v19 allows removing log devices - so mirroring will not be necessary after that point. 
opensolaris$ zpool upgrade -v|grep Log 19 Log device removal Bryan From owner-freebsd-fs@FreeBSD.ORG Fri Sep 17 16:17:13 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 1001410656A8 for ; Fri, 17 Sep 2010 16:17:13 +0000 (UTC) (envelope-from fjwcash@gmail.com) Received: from mail-ey0-f182.google.com (mail-ey0-f182.google.com [209.85.215.182]) by mx1.freebsd.org (Postfix) with ESMTP id 954C58FC14 for ; Fri, 17 Sep 2010 16:17:12 +0000 (UTC) Received: by eyx24 with SMTP id 24so1322652eyx.13 for ; Fri, 17 Sep 2010 09:17:11 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:received:in-reply-to :references:date:message-id:subject:from:to:content-type; bh=eaQIGIn1Y8x0vjevO0X0frOGGfVgvoCgPJUim3f1JP8=; b=gl4kxp9PendPhCVnmWsd/fF1xeQ0w4E8lArw2f/BhffvwI5UpSH/pqcvjkvBNISBGm DbxOHEXDKP/kDtBq5yQfYMEdn8A6RZ1REjnP+IJy32mI0+q2+BdGN03K9BODnQqkkd/q Q3SWel5ZXyFGqBfSA565tFwlKnufQoaK5WuK0= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :content-type; b=DrSFYRDalpkS+nSfUVi9EDywwSsEuUkim7dcKr79emRyWHh0RGhQP8Q96h3dvjVTxX f711DseMmlqyDPTEnJROicdEpeOE+wWbRWRd5jT9216XtlpnRZDaBWM6iUyY8tPbbLG1 h48QGEfglF4cH6+nMlllcLccLg9poy5cAW1lY= MIME-Version: 1.0 Received: by 10.223.105.84 with SMTP id s20mr2189478fao.10.1284740231646; Fri, 17 Sep 2010 09:17:11 -0700 (PDT) Received: by 10.223.110.197 with HTTP; Fri, 17 Sep 2010 09:17:11 -0700 (PDT) In-Reply-To: References: <4C9385B0.2080909@shatow.net> Date: Fri, 17 Sep 2010 09:17:11 -0700 Message-ID: From: Freddie Cash To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=UTF-8 Subject: Re: what happens to pool if ZIL dies on ZFS v14 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 17 Sep 2010 16:17:13 -0000 On Fri, Sep 17, 2010 at 8:35 AM, Gil Vidals wrote: > Bryan thank you for the detailed answer. > > Assuming the ZIL SSD died, what steps would I follow to recover the pool? (i > hope it is recoverable). If you are running ZFSv1 through ZFSv18 and your log device dies, your pool is dead, gone, unrecoverable, no secret prize, no continues, do not pass go, etc, etc, etc. If you are running ZFSv19 or newer and your log device dies, you can remove the dead device and carry on. You will lose any data that was in the ZIL, but the pool will be intact. Simple as that. 
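On a v19 or newer pool, the recovery Freddie describes is just removing the dead vdev (device name is an example):

    # zpool remove tank da1
    # zpool status tank

Note that zpool remove only applies to log, cache and spare devices, not to regular data vdevs, and on v13/v14 pools it will refuse to remove a log device at all.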
-- Freddie Cash fjwcash@gmail.com From owner-freebsd-fs@FreeBSD.ORG Fri Sep 17 16:18:50 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 5325B10656FF for ; Fri, 17 Sep 2010 16:18:50 +0000 (UTC) (envelope-from jdc@koitsu.dyndns.org) Received: from qmta06.westchester.pa.mail.comcast.net (qmta06.westchester.pa.mail.comcast.net [76.96.62.56]) by mx1.freebsd.org (Postfix) with ESMTP id E9FAA8FC2C for ; Fri, 17 Sep 2010 16:18:49 +0000 (UTC) Received: from omta14.westchester.pa.mail.comcast.net ([76.96.62.60]) by qmta06.westchester.pa.mail.comcast.net with comcast id 7njY1f00A1HzFnQ56sJpMe; Fri, 17 Sep 2010 16:18:49 +0000 Received: from koitsu.dyndns.org ([98.248.41.155]) by omta14.westchester.pa.mail.comcast.net with comcast id 7sJo1f00l3LrwQ23asJpWt; Fri, 17 Sep 2010 16:18:49 +0000 Received: by icarus.home.lan (Postfix, from userid 1000) id 8ECCF9B427; Fri, 17 Sep 2010 09:18:47 -0700 (PDT) Date: Fri, 17 Sep 2010 09:18:47 -0700 From: Jeremy Chadwick To: Freddie Cash Message-ID: <20100917161847.GA58503@icarus.home.lan> References: <4C9385B0.2080909@shatow.net> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.5.20 (2009-06-14) Cc: freebsd-fs@freebsd.org Subject: Re: what happens to pool if ZIL dies on ZFS v14 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 17 Sep 2010 16:18:50 -0000 On Fri, Sep 17, 2010 at 09:17:11AM -0700, Freddie Cash wrote: > On Fri, Sep 17, 2010 at 8:35 AM, Gil Vidals wrote: > > Bryan thank you for the detailed answer. > > > > Assuming the ZIL SSD died, what steps would I follow to recover the pool? (i > > hope it is recoverable). > > If you are running ZFSv1 through ZFSv18 and your log device dies, your > pool is dead, gone, unrecoverable, no secret prize, no continues, do > not pass go, etc, etc, etc. > > If you are running ZFSv19 or newer and your log device dies, you can > remove the dead device and carry on. You will lose any data that was > in the ZIL, but the pool will be intact. Given the severity of this predicament, then why is it people are disabling the ZIL (via vfs.zfs.zil_disable=1) ? -- | Jeremy Chadwick jdc@parodius.com | | Parodius Networking http://www.parodius.com/ | | UNIX Systems Administrator Mountain View, CA, USA | | Making life hard for others since 1977. 
PGP: 4BD6C0CB | From owner-freebsd-fs@FreeBSD.ORG Fri Sep 17 16:21:56 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id C635C1065672 for ; Fri, 17 Sep 2010 16:21:56 +0000 (UTC) (envelope-from fjwcash@gmail.com) Received: from mail-bw0-f54.google.com (mail-bw0-f54.google.com [209.85.214.54]) by mx1.freebsd.org (Postfix) with ESMTP id 526428FC13 for ; Fri, 17 Sep 2010 16:21:55 +0000 (UTC) Received: by bwz15 with SMTP id 15so3534807bwz.13 for ; Fri, 17 Sep 2010 09:21:55 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:received:in-reply-to :references:date:message-id:subject:from:to:content-type :content-transfer-encoding; bh=dUImTFLc1fcFAJlnwx1jc8pouYkLpEwGRWmRdddazYQ=; b=hxP88UUHbtK2sv3un97i44qy2OSQpm+fPk1LSQCvHOvwe4lajBmZuaFSn3/HkuHP8a BpNFW1eD6NxCm+Zc4h+W8cZjPZ6uJJB1Txlc9GyJa4qodoZedvb//t5q6a1ZWwzog+pP DxHdP/lGokB7zRfaHlXPriE4e7cNVuhbMzEBk= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :content-type:content-transfer-encoding; b=jKiy5d2ZaAAjqLr1b8NCCMwtizbZ0pe85maH84fx5Xk0BQw8r594O/7rA0tGKkarl4 aSrDrdCyVM9GAGLzSgv5VMT3yEjOJglh+Lv05T3p4hEVXmWF6gr60HeZ7R4o+Te7imVx 601V+QUiwcxldBbpW8qBu+YfL5HbapW14z/+Q= MIME-Version: 1.0 Received: by 10.223.106.209 with SMTP id y17mr2074781fao.105.1284740514998; Fri, 17 Sep 2010 09:21:54 -0700 (PDT) Received: by 10.223.110.197 with HTTP; Fri, 17 Sep 2010 09:21:54 -0700 (PDT) In-Reply-To: <20100917161847.GA58503@icarus.home.lan> References: <4C9385B0.2080909@shatow.net> <20100917161847.GA58503@icarus.home.lan> Date: Fri, 17 Sep 2010 09:21:54 -0700 Message-ID: From: Freddie Cash To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Subject: Re: what happens to pool if ZIL dies on ZFS v14 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 17 Sep 2010 16:21:56 -0000 On Fri, Sep 17, 2010 at 9:18 AM, Jeremy Chadwick wrote: > On Fri, Sep 17, 2010 at 09:17:11AM -0700, Freddie Cash wrote: >> On Fri, Sep 17, 2010 at 8:35 AM, Gil Vidals wrote: >> > Bryan thank you for the detailed answer. >> > >> > Assuming the ZIL SSD died, what steps would I follow to recover the po= ol? (i >> > hope it is recoverable). >> >> If you are running ZFSv1 through ZFSv18 and your log device dies, your >> pool is dead, gone, unrecoverable, no secret prize, no continues, do >> not pass go, etc, etc, etc. >> >> If you are running ZFSv19 or newer and your log device dies, you can >> remove the dead device and carry on. =C2=A0You will lose any data that w= as >> in the ZIL, but the pool will be intact. > > Given the severity of this predicament, then why is it people are > disabling the ZIL (via vfs.zfs.zil_disable=3D1) ? I'm not sure what you mean by that. This (dead ZIL =3D=3D dead pool) only applies to separate log (slog) device= s. 
--=20 Freddie Cash fjwcash@gmail.com From owner-freebsd-fs@FreeBSD.ORG Fri Sep 17 16:42:34 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 563BF106564A for ; Fri, 17 Sep 2010 16:42:34 +0000 (UTC) (envelope-from jdc@koitsu.dyndns.org) Received: from qmta01.westchester.pa.mail.comcast.net (qmta01.westchester.pa.mail.comcast.net [76.96.62.16]) by mx1.freebsd.org (Postfix) with ESMTP id F28FF8FC14 for ; Fri, 17 Sep 2010 16:42:33 +0000 (UTC) Received: from omta02.westchester.pa.mail.comcast.net ([76.96.62.19]) by qmta01.westchester.pa.mail.comcast.net with comcast id 7o8s1f0070QuhwU51siah6; Fri, 17 Sep 2010 16:42:34 +0000 Received: from koitsu.dyndns.org ([98.248.41.155]) by omta02.westchester.pa.mail.comcast.net with comcast id 7sdZ1f00B3LrwQ23NsdZk5; Fri, 17 Sep 2010 16:37:34 +0000 Received: by icarus.home.lan (Postfix, from userid 1000) id 097609B427; Fri, 17 Sep 2010 09:37:32 -0700 (PDT) Date: Fri, 17 Sep 2010 09:37:32 -0700 From: Jeremy Chadwick To: Freddie Cash Message-ID: <20100917163732.GA59537@icarus.home.lan> References: <4C9385B0.2080909@shatow.net> <20100917161847.GA58503@icarus.home.lan> MIME-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 Content-Disposition: inline Content-Transfer-Encoding: 8bit In-Reply-To: User-Agent: Mutt/1.5.20 (2009-06-14) Cc: freebsd-fs@freebsd.org Subject: Re: what happens to pool if ZIL dies on ZFS v14 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 17 Sep 2010 16:42:34 -0000 On Fri, Sep 17, 2010 at 09:21:54AM -0700, Freddie Cash wrote: > On Fri, Sep 17, 2010 at 9:18 AM, Jeremy Chadwick > wrote: > > On Fri, Sep 17, 2010 at 09:17:11AM -0700, Freddie Cash wrote: > >> On Fri, Sep 17, 2010 at 8:35 AM, Gil Vidals wrote: > >> > Bryan thank you for the detailed answer. > >> > > >> > Assuming the ZIL SSD died, what steps would I follow to recover the pool? (i > >> > hope it is recoverable). > >> > >> If you are running ZFSv1 through ZFSv18 and your log device dies, your > >> pool is dead, gone, unrecoverable, no secret prize, no continues, do > >> not pass go, etc, etc, etc. > >> > >> If you are running ZFSv19 or newer and your log device dies, you can > >> remove the dead device and carry on.  You will lose any data that was > >> in the ZIL, but the pool will be intact. > > > > Given the severity of this predicament, then why is it people are > > disabling the ZIL (via vfs.zfs.zil_disable=1) ? > > I'm not sure what you mean by that. > > This (dead ZIL == dead pool) only applies to separate log (slog) devices. I was under the impression ZFS still managed to utilise the ZIL when a pool didn't have any "log" devices associated with it (possibly some sort of statically-allocated amount of RAM?) You can search the FreeBSD lists for people continually advocating vfs.zfs.zil_disable=1. There's even a couple blog posts from engineers talking about how the only way to get their filers to behave decently was to disable the ZIL[1][2][3]. In most (every?) cases I've seen where someone advocates disabling the ZIL, pool details aren't provided, which leads me to believe their pools have no "log" devices. Here's a better way to phrase my question: does vfs.zfs.zil_disable=1 do anything if there aren't any "log" devices in use (in any pool)? 
[1]: http://jmlittle.blogspot.com/2010/03/zfs-log-devices-review-of-ddrdrive-x1.html [2]: http://blogs.sun.com/erickustarz/entry/zil_disable [3]: http://weblog.etherized.com/posts/130 -- | Jeremy Chadwick jdc@parodius.com | | Parodius Networking http://www.parodius.com/ | | UNIX Systems Administrator Mountain View, CA, USA | | Making life hard for others since 1977. PGP: 4BD6C0CB | From owner-freebsd-fs@FreeBSD.ORG Fri Sep 17 16:46:06 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 8240D1065670 for ; Fri, 17 Sep 2010 16:46:06 +0000 (UTC) (envelope-from bryan@shatow.net) Received: from secure.xzibition.com (secure.xzibition.com [173.160.118.92]) by mx1.freebsd.org (Postfix) with ESMTP id 2A3008FC25 for ; Fri, 17 Sep 2010 16:46:05 +0000 (UTC) DomainKey-Signature: a=rsa-sha1; c=nofws; d=shatow.net; h=message-id :date:from:mime-version:to:cc:subject:references:in-reply-to :content-type:content-transfer-encoding; q=dns; s=sweb; b=kgoQUL L23Liec7BLHfbddaqx7zHhJNQjeIHV/A74pZfq16QI5HDcOfPN26rxDee9jgLE0a Hd2EitfLHxNP8na2RJxwaZTFAKJnj8EQVIkjl5gYXcU5FwnEKDdNUvYGVCSXx4Ub 3/Ev2vhbXC68BH8XnqYJjfopm7nFcaAxfhxZo= DKIM-Signature: v=1; a=rsa-sha256; c=simple; d=shatow.net; h=message-id :date:from:mime-version:to:cc:subject:references:in-reply-to :content-type:content-transfer-encoding; s=sweb; bh=eANGZ+t8cp1j kUZTn27H5bs0wdvIkvglPAE84steywg=; b=iikKk/W3USMfhEAb7H/Ht5VP5mDa CnSIqQh8ND6fdrwbddh2/BPEOgDR9mxKBeadc0taW7g4DXx9cmz7cXKzl9sJnHmt gcu+meCRXmCLOJtgumowjeNv9nb66Q1Lvw3fsU9oSVxI7db/QvXcaQrL1svpYFjX iuKPUXVwnhbxTHg= Received: (qmail 12263 invoked from network); 17 Sep 2010 11:46:04 -0500 Received: from unknown (HELO ?192.168.0.201?) (bryan@shatow.net@74.94.87.209) by sweb.xzibition.com with ESMTPA; 17 Sep 2010 11:46:04 -0500 Message-ID: <4C939B47.6030701@shatow.net> Date: Fri, 17 Sep 2010 11:45:59 -0500 From: Bryan Drewery User-Agent: Thunderbird 2.0.0.24 (Windows/20100228) MIME-Version: 1.0 To: Jeremy Chadwick References: <4C9385B0.2080909@shatow.net> <20100917161847.GA58503@icarus.home.lan> <20100917163732.GA59537@icarus.home.lan> In-Reply-To: <20100917163732.GA59537@icarus.home.lan> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org Subject: Re: what happens to pool if ZIL dies on ZFS v14 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 17 Sep 2010 16:46:06 -0000 > I was under the impression ZFS still managed to utilise the ZIL when a > pool didn't have any "log" devices associated with it (possibly some > sort of statically-allocated amount of RAM?) > > You can search the FreeBSD lists for people continually advocating > vfs.zfs.zil_disable=1. There's even a couple blog posts from engineers > talking about how the only way to get their filers to behave decently > was to disable the ZIL[1][2][3]. In most (every?) cases I've seen where > someone advocates disabling the ZIL, pool details aren't provided, which > leads me to believe their pools have no "log" devices. > > Here's a better way to phrase my question: does vfs.zfs.zil_disable=1 do > anything if there aren't any "log" devices in use (in any pool)? 
> > > [1]: http://jmlittle.blogspot.com/2010/03/zfs-log-devices-review-of-ddrdrive-x1.html > [2]: http://blogs.sun.com/erickustarz/entry/zil_disable > [3]: http://weblog.etherized.com/posts/130 > > The ZIL is still used even without a dedicated log device. Disabling it is *stupid* in most cases. Same goes for disabling the ARC. There is a lot of FUD out there regarding ZFS tuning. The bottom line: don't tune; add more RAM. Bryan From owner-freebsd-fs@FreeBSD.ORG Fri Sep 17 16:47:34 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 3B751106564A for ; Fri, 17 Sep 2010 16:47:34 +0000 (UTC) (envelope-from gvidals@gmail.com) Received: from mail-ey0-f182.google.com (mail-ey0-f182.google.com [209.85.215.182]) by mx1.freebsd.org (Postfix) with ESMTP id B51D18FC25 for ; Fri, 17 Sep 2010 16:47:33 +0000 (UTC) Received: by eyx24 with SMTP id 24so1343632eyx.13 for ; Fri, 17 Sep 2010 09:47:32 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:received:reply-to :in-reply-to:references:date:message-id:subject:from:to:content-type; bh=iqej5zkzuxskJkuwLuntFCneCblNXycHl9Gcqv77MLM=; b=prhi2Oaamf80nGZCdYtCwaiUq3tJEcpiiT37z2bl5EukfUPMPcnlj5R1u242W2AZC5 l+lpSVvvIrkB6qAqdGTDBeJSODNTNGoWa8iGfQsfnMxVXVwxDw6Z/tjDRK1IohIsWiOj jYemxIMXNMR62Dz75GRR8bTB7NRzV2hjj54Wo= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:reply-to:in-reply-to:references:date:message-id :subject:from:to:content-type; b=xqZe7dGnXkx4oHUMsL6BmrT+9E4ECJVJOAqI/fLPCnRHij+RqgiKd6OtdSiGhUGNaa CaAGiKcPJYHbScx/DfU4G/f1TltCoWXJFdqLVTwv+Hrr6WcHcAfndkfGbSM0dfl0UEvo NGR70LBjFs6GEUzukuh3bwwyxEuPdKFOk5MPg= MIME-Version: 1.0 Received: by 10.239.129.195 with SMTP id 3mr324469hbg.22.1284742052277; Fri, 17 Sep 2010 09:47:32 -0700 (PDT) Received: by 10.239.153.75 with HTTP; Fri, 17 Sep 2010 09:47:32 -0700 (PDT) In-Reply-To: References: <4C9385B0.2080909@shatow.net> <20100917161847.GA58503@icarus.home.lan> Date: Fri, 17 Sep 2010 09:47:32 -0700 Message-ID: From: Gil Vidals To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Subject: Re: what happens to pool if ZIL dies on ZFS v14 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: gil@vidals.net List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 17 Sep 2010 16:47:34 -0000 First, let me say that I'm receiving excellent input from the FreeBSD community. I'm new to FreeBSD and ZFS and this mailing list has been very helpful. I'm running ZFSv14 on FreeBSD 8.1 AMD64 with 8GB of DDR3 RAM with two SSDs - one for the ZIL and the other for the L2ARC cache. zambia# zpool iostat -v 1 1 capacity operations bandwidth pool used avail read write read write ---------------- ----- ----- ----- ----- ----- ----- tank 6.57G 921G 0 11 116K 438K mirror 6.57G 921G 0 5 116K 229K label/disk1 - - 0 3 57.9K 229K label/disk2 - - 0 3 57.8K 229K label/zilcache 136K 59.5G 0 6 17 209K cache - - - - - - label/l2cache 59.6G 8.50K 0 0 31.5K 48.9K ---------------- ----- ----- ----- ----- ----- ----- Observing the ZIL Cache, I see it being used very sparingly. And now that I know the SSD slog must be mirrored in ZFS < v19, I think the best course of action (assuming I'm not buying more equipment) is to mirror the ZIL SSD and abandon the L2ARC altogether. 
Won't RAM be used for L2ARC instead? --Gil Vidals / VMRacks.com On Fri, Sep 17, 2010 at 9:21 AM, Freddie Cash wrote: > On Fri, Sep 17, 2010 at 9:18 AM, Jeremy Chadwick > wrote: > > On Fri, Sep 17, 2010 at 09:17:11AM -0700, Freddie Cash wrote: > >> On Fri, Sep 17, 2010 at 8:35 AM, Gil Vidals wrote: > >> > Bryan thank you for the detailed answer. > >> > > >> > Assuming the ZIL SSD died, what steps would I follow to recover the > pool? (i > >> > hope it is recoverable). > >> > >> If you are running ZFSv1 through ZFSv18 and your log device dies, your > >> pool is dead, gone, unrecoverable, no secret prize, no continues, do > >> not pass go, etc, etc, etc. > >> > >> If you are running ZFSv19 or newer and your log device dies, you can > >> remove the dead device and carry on. You will lose any data that was > >> in the ZIL, but the pool will be intact. > > > > Given the severity of this predicament, then why is it people are > > disabling the ZIL (via vfs.zfs.zil_disable=1) ? > > I'm not sure what you mean by that. > > This (dead ZIL == dead pool) only applies to separate log (slog) devices. > > -- > Freddie Cash > fjwcash@gmail.com > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@FreeBSD.ORG Fri Sep 17 16:49:03 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 1ECC21065673 for ; Fri, 17 Sep 2010 16:49:03 +0000 (UTC) (envelope-from bryan@shatow.net) Received: from secure.xzibition.com (secure.xzibition.com [173.160.118.92]) by mx1.freebsd.org (Postfix) with ESMTP id BB3ED8FC15 for ; Fri, 17 Sep 2010 16:49:02 +0000 (UTC) DomainKey-Signature: a=rsa-sha1; c=nofws; d=shatow.net; h=message-id :date:from:mime-version:to:cc:subject:references:in-reply-to :content-type:content-transfer-encoding; q=dns; s=sweb; b=VKYWnj ATuDel4sHrEmnZkSHpGix0s8tSwac5sPEYrIWQBkpdvDcyg8eXvl2Cwa5FovwGdL 6kra+kI0ms83WVW8oudEs5x8iTkQacDp0fjn9MzaSRLPYnaMyauPVpgslEgX+/+d 7xXAC0s8q5g2vbhNi0wV0nXyJbbYx0qc92VI0= DKIM-Signature: v=1; a=rsa-sha256; c=simple; d=shatow.net; h=message-id :date:from:mime-version:to:cc:subject:references:in-reply-to :content-type:content-transfer-encoding; s=sweb; bh=2hnwgnN1VShF O4JE25Sd7ULWPG3ntDsbJAny47UBM6o=; b=bj78LK1vGpt9H1FBD3F9b5wYi/im V0FGRkHAY0PIvCIdsHMuwF1CFlcbR6h4MqPQQ/MkGv4j1j87QtVV295yCtxIyoMH /kGvaQiwAmvTlFherHv/gqOwt0dwexC05roljxZJFjldkR3gL6cXTHdA4tqRxf8h LBXt8JeNg2FZ97o= Received: (qmail 15785 invoked from network); 17 Sep 2010 11:49:01 -0500 Received: from unknown (HELO ?192.168.0.201?) 
(bryan@shatow.net@74.94.87.209) by sweb.xzibition.com with ESMTPA; 17 Sep 2010 11:49:01 -0500 Message-ID: <4C939BF8.7060105@shatow.net> Date: Fri, 17 Sep 2010 11:48:56 -0500 From: Bryan Drewery User-Agent: Thunderbird 2.0.0.24 (Windows/20100228) MIME-Version: 1.0 To: Jeremy Chadwick References: <4C9385B0.2080909@shatow.net> <20100917161847.GA58503@icarus.home.lan> <20100917163732.GA59537@icarus.home.lan> <4C939B47.6030701@shatow.net> In-Reply-To: <4C939B47.6030701@shatow.net> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org Subject: Re: what happens to pool if ZIL dies on ZFS v14 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 17 Sep 2010 16:49:03 -0000 > > The ZIL is still used even without a dedicated log device. Disabling > it is *stupid* in most cases. > Same goes for disabling the ARC. > > There is a lot of FUD out there regarding ZFS tuning. The bottom line: > don't tune; add more RAM. > http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Disabling_the_ZIL_.28Don.27t.29 From owner-freebsd-fs@FreeBSD.ORG Fri Sep 17 17:03:56 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 308801065694 for ; Fri, 17 Sep 2010 17:03:56 +0000 (UTC) (envelope-from fjwcash@gmail.com) Received: from mail-ey0-f182.google.com (mail-ey0-f182.google.com [209.85.215.182]) by mx1.freebsd.org (Postfix) with ESMTP id B0FA28FC0A for ; Fri, 17 Sep 2010 17:03:55 +0000 (UTC) Received: by eyx24 with SMTP id 24so1353014eyx.13 for ; Fri, 17 Sep 2010 10:03:54 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:received:in-reply-to :references:date:message-id:subject:from:to:content-type :content-transfer-encoding; bh=Ar8Gn3h8buZ8O1b58Ev77hHwGRPDy85BVSfVVna5yB4=; b=DnyMP0xJLY0uusxTJz/b/5h7O8eGD3au4kJH6DY9CfgXgBXtU6OHRqinwxYNnArwlZ z42/aOEGqSDme+T5NqTlsWCmOzHu5cL61uzCL9TDgD3rIM2kLsjFz4AqNS0Og2fwa+EC UcuvGb1a619iYOh+XKAI1dLFlwlw0Au3wLoYo= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :content-type:content-transfer-encoding; b=uONXS2DEwYYnXiY3HGiLMShnnq06Kx++OtxM1EaqLQ0zxDFiNHQQ/EXGbaxErDrXNf n1Xups1i8l7EskLyw7wkZcRRxPGMqfTNIbE+c5qpYSn4hsbDFE+vrkK+IL/a/4kg2oUn QHpm6FweTa55fOiV6CEjEArEd6xcGmtSEx3zY= MIME-Version: 1.0 Received: by 10.223.124.70 with SMTP id t6mr42020far.80.1284743034350; Fri, 17 Sep 2010 10:03:54 -0700 (PDT) Received: by 10.223.110.197 with HTTP; Fri, 17 Sep 2010 10:03:54 -0700 (PDT) In-Reply-To: <20100917163732.GA59537@icarus.home.lan> References: <4C9385B0.2080909@shatow.net> <20100917161847.GA58503@icarus.home.lan> <20100917163732.GA59537@icarus.home.lan> Date: Fri, 17 Sep 2010 10:03:54 -0700 Message-ID: From: Freddie Cash To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Subject: Re: what happens to pool if ZIL dies on ZFS v14 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 17 Sep 2010 17:03:56 -0000 On Fri, Sep 17, 2010 at 9:37 AM, Jeremy Chadwick wrote: > I was under the 
> impression ZFS still managed to utilise the ZIL when a
> pool didn't have any "log" devices associated with it (possibly some
> sort of statically-allocated amount of RAM?)

Yes, there has been a ZIL in ZFS from the very beginning. Originally, it was part of the pool. Then the ability to have separate log (slog) devices was added. At the time that slog support was added, the recommendation was to mirror the slogs, since losing the slog would kill the entire pool. Then in ZFSv19, support for removing slogs and booting with a dead slog was added, so one could run with non-mirrored slogs. Disabling the ZIL has no bearing on whether or not the pool dies when the slog dies.

> You can search the FreeBSD lists for people continually advocating
> vfs.zfs.zil_disable=1. There's even a couple blog posts from engineers
> talking about how the only way to get their filers to behave decently
> was to disable the ZIL[1][2][3]. In most (every?) cases I've seen where
> someone advocates disabling the ZIL, pool details aren't provided, which
> leads me to believe their pools have no "log" devices.

Correct. However, disabling the ZIL is orthogonal to having slog devices and whether or not the pool dies when the slog dies. You can have separate log devices, then disable the ZIL, but the slog devices will remain as part of the pool, just unused. For example, in ZFSv14, if you have a separate log device, then disable the ZIL, then the slog device dies (or you physically remove it), and you reboot ... your pool is dead as the log device is inaccessible. (There may be rare occasions when you may be able to boot without the slog if there's no data in the slog, but I wouldn't count on it.)

> Here's a better way to phrase my question: does vfs.zfs.zil_disable=1 do
> anything if there aren't any "log" devices in use (in any pool)?

Yes, it disables the in-pool ZIL. ZFS always has a ZIL unless you disable it. The only difference is where the ZIL is located (in-pool or on separate device).
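A concrete sketch of the two separate knobs being discussed (the vdev names are invented; vfs.zfs.zil_disable is the 8.x-era tunable and is normally set at boot time from loader.conf):

# /boot/loader.conf -- switches the ZIL off entirely, whether it lives
# in-pool or on a slog; it does NOT remove any slog vdev from the pool
vfs.zfs.zil_disable="1"

# adding a separate log vdev; on a v14/v15 pool this really should be a
# mirror, since losing an unmirrored slog loses the pool
zpool add tank log mirror label/slog0 label/slog1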
--=20 Freddie Cash fjwcash@gmail.com From owner-freebsd-fs@FreeBSD.ORG Fri Sep 17 17:10:00 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 954261065670 for ; Fri, 17 Sep 2010 17:10:00 +0000 (UTC) (envelope-from fjwcash@gmail.com) Received: from mail-ey0-f182.google.com (mail-ey0-f182.google.com [209.85.215.182]) by mx1.freebsd.org (Postfix) with ESMTP id 214A58FC13 for ; Fri, 17 Sep 2010 17:09:59 +0000 (UTC) Received: by eyx24 with SMTP id 24so1356418eyx.13 for ; Fri, 17 Sep 2010 10:09:59 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:received:in-reply-to :references:date:message-id:subject:from:to:content-type :content-transfer-encoding; bh=3FQ6UxuSDDcQJHqHoRVGq5GOYsljiStzbETa5t3vb5g=; b=CAgHPIcvYkx2/iidAAQnHvnO/4U3UpjHfunQN43unDEu2T39n1rAlrNiv3uWDcYG1w LRmpRCmvO8EitwYuruHTO1mliRHM252BhcZFeJwT0HV1Ez5WLHkfVqw4gLHRcCckvW+x qcket5UjwK0r/E4aqmU/40htPIaREvM25XmYc= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :content-type:content-transfer-encoding; b=hmx8/vSbG2eucLCWf4tGBEmNJQF8xQdf9xe2HPv1OhlibWFkr4ALQgRfLqQhukt9f5 VKYHX2oywXBp08p3JviZUqsJs3L7pqvZjpkgnJZxbQY42Wq1aedNifQ5dmH8Sk2Y8DDx utuYvTSyX/h7na6bWB9j/kg8jhdWGIM1WPlyM= MIME-Version: 1.0 Received: by 10.223.124.141 with SMTP id u13mr989192far.32.1284743398958; Fri, 17 Sep 2010 10:09:58 -0700 (PDT) Received: by 10.223.110.197 with HTTP; Fri, 17 Sep 2010 10:09:58 -0700 (PDT) In-Reply-To: References: <4C9385B0.2080909@shatow.net> <20100917161847.GA58503@icarus.home.lan> Date: Fri, 17 Sep 2010 10:09:58 -0700 Message-ID: From: Freddie Cash To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Subject: Re: what happens to pool if ZIL dies on ZFS v14 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 17 Sep 2010 17:10:00 -0000 On Fri, Sep 17, 2010 at 9:47 AM, Gil Vidals wrote: > First, let me say that I'm receiving excellent input from the FreeBSD > community. I'm new to FreeBSD and ZFS and this mailing list has been very > helpful. > > I'm running ZFSv14 on FreeBSD 8.1 AMD64 with 8GB of DDR3 RAM with two SSD= s - > one for the ZIL and the other for the L2ARC cache. 
>
> zambia# zpool iostat -v 1 1
>                     capacity     operations    bandwidth
> pool              used  avail   read  write   read  write
> ----------------  -----  -----  -----  -----  -----  -----
> tank              6.57G   921G      0     11   116K   438K
>   mirror          6.57G   921G      0      5   116K   229K
>     label/disk1       -      -      0      3  57.9K   229K
>     label/disk2       -      -      0      3  57.8K   229K
>   label/zilcache   136K  59.5G      0      6     17   209K
> cache                 -      -      -      -      -      -
>   label/l2cache   59.6G  8.50K      0      0  31.5K  48.9K
> ----------------  -----  -----  -----  -----  -----  -----
>
> Observing the ZIL Cache, I see it being used very sparingly. And now that I
> know the SSD slog must be mirrored in ZFS < v19, I think the best course of
> action (assuming I'm not buying more equipment) is to mirror the ZIL SSD and
> abandon the L2ARC altogether. Won't RAM be used for L2ARC instead?

The ZIL is only used for synchronous writes, and does not need to be very large. I forget the formula for determining the exact size of a ZIL (something along the lines of the max amount of data you can write in 30 seconds), but it's rarely more than 4 GB and usually in the 1-2 GB range.

If possible, you'd be better off rebuilding your pool like so:
  mirror disk1 and disk2
  slice both SSDs into two: 4-8 GB for ZIL, rest for L2ARC
  mirror zilcache1 zilcache2
  add l2cache1 l2cache2 (don't mirror them)

That way, you have a mirrored ZIL, and double the L2ARC. However, since it takes around 270 bytes of RAM for every object in the L2ARC, you'll want to make sure you have lots of RAM to manage it (or, possibly, make 3 slices on the SSDs and use the third for swap?).
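To put commands to that layout (purely illustrative -- ada2/ada3, the 8 GB slog size and the GPT labels are placeholders, and it assumes the pool is recreated after the data has been copied off, since a v14 pool can't have its existing log vdev removed):

# slice each SSD: a small slog partition plus the rest for L2ARC
gpart create -s gpt ada2
gpart add -t freebsd-zfs -l slog0 -s 8G ada2
gpart add -t freebsd-zfs -l l2arc0 ada2
# repeat for ada3 with labels slog1 / l2arc1

# recreate the pool: mirrored data disks, mirrored slog, striped L2ARC
zpool create tank mirror label/disk1 label/disk2 \
    log mirror gpt/slog0 gpt/slog1 \
    cache gpt/l2arc0 gpt/l2arc1

As a rough check on the RAM cost: two ~55 GB L2ARC partitions filled with 128 KB records is on the order of 900,000 entries, i.e. a bit over 200 MB of ARC headers at ~270 bytes each, and with smaller records the overhead climbs quickly.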
--=20 Freddie Cash fjwcash@gmail.com From owner-freebsd-fs@FreeBSD.ORG Fri Sep 17 17:24:44 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 332DE1065672 for ; Fri, 17 Sep 2010 17:24:44 +0000 (UTC) (envelope-from olivier@gid0.org) Received: from mail-pv0-f182.google.com (mail-pv0-f182.google.com [74.125.83.182]) by mx1.freebsd.org (Postfix) with ESMTP id 0FF518FC22 for ; Fri, 17 Sep 2010 17:24:43 +0000 (UTC) Received: by pvc21 with SMTP id 21so842641pvc.13 for ; Fri, 17 Sep 2010 10:24:43 -0700 (PDT) MIME-Version: 1.0 Received: by 10.114.112.15 with SMTP id k15mr5751719wac.183.1284742894661; Fri, 17 Sep 2010 10:01:34 -0700 (PDT) Received: by 10.231.168.202 with HTTP; Fri, 17 Sep 2010 10:01:34 -0700 (PDT) In-Reply-To: References: <4C9385B0.2080909@shatow.net> <20100917161847.GA58503@icarus.home.lan> Date: Fri, 17 Sep 2010 19:01:34 +0200 Message-ID: From: Olivier Smedts To: gil@vidals.net Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable Cc: freebsd-fs@freebsd.org Subject: Re: what happens to pool if ZIL dies on ZFS v14 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 17 Sep 2010 17:24:44 -0000 2010/9/17 Gil Vidals : > First, let me say that I'm receiving excellent input from the FreeBSD > community. I'm new to FreeBSD and ZFS and this mailing list has been very > helpful. > > I'm running ZFSv14 on FreeBSD 8.1 AMD64 with 8GB of DDR3 RAM with two SSD= s - > one for the ZIL and the other for the L2ARC cache. > > zambia# zpool iostat -v 1 1 > =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 capacity =A0 =A0 operations =A0 = =A0bandwidth > pool =A0 =A0 =A0 =A0 =A0 =A0 =A0 used =A0avail =A0 read =A0write =A0 read= =A0write > ---------------- =A0----- =A0----- =A0----- =A0----- =A0----- =A0----- > tank =A0 =A0 =A0 =A0 =A0 =A0 =A06.57G =A0 921G =A0 =A0 =A00 =A0 =A0 11 = =A0 116K =A0 438K > =A0mirror =A0 =A0 =A0 =A0 =A06.57G =A0 921G =A0 =A0 =A00 =A0 =A0 =A05 =A0= 116K =A0 229K > =A0 =A0label/disk1 =A0 =A0 =A0 - =A0 =A0 =A0- =A0 =A0 =A00 =A0 =A0 =A03 = =A057.9K =A0 229K > =A0 =A0label/disk2 =A0 =A0 =A0 - =A0 =A0 =A0- =A0 =A0 =A00 =A0 =A0 =A03 = =A057.8K =A0 229K > =A0label/zilcache =A0 136K =A059.5G =A0 =A0 =A00 =A0 =A0 =A06 =A0 =A0 17 = =A0 209K > cache =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 - =A0 =A0 =A0- =A0 =A0 =A0- =A0 =A0= =A0- =A0 =A0 =A0- =A0 =A0 =A0- > =A0label/l2cache =A0 59.6G =A08.50K =A0 =A0 =A00 =A0 =A0 =A00 =A031.5K = =A048.9K > ---------------- =A0----- =A0----- =A0----- =A0----- =A0----- =A0----- > > Observing the ZIL Cache, I see it being used very sparingly. And now that= I > know the SSD slog must be mirrored in ZFS < v19, I think the best course = of > action (assuming I'm not buying more equipment) is to mirror the ZIL SSD = and > abandon the L2ARC altogether. Won't RAM be used for L2ARC instead? Maybe 64G of ZIL is a bit much for your workload, too (I saw somewhere it must be the size of 30s of sustained write to your pool). You could also make two partitions on your SSDs, mirror the ZIL on one partition of each SSD, and add the remaining partitions (not mirrored - useless) for the L2ARC. I'm assuming you have identical SSDs. If you have for example en MLC and an SLC, prefer the SLC for ZIL. 
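To put a number on that rule of thumb (assuming the writes arrive over a single gigabit link, which is only a guess about this setup): ~110 MB/s * 30 s is about 3.3 GB, so the ~60 GB SSD currently dedicated to the ZIL is almost entirely idle space -- a partition of a few GB is plenty.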
Cheers > > --Gil Vidals / VMRacks.com > > On Fri, Sep 17, 2010 at 9:21 AM, Freddie Cash wrote: > >> On Fri, Sep 17, 2010 at 9:18 AM, Jeremy Chadwick >> wrote: >> > On Fri, Sep 17, 2010 at 09:17:11AM -0700, Freddie Cash wrote: >> >> On Fri, Sep 17, 2010 at 8:35 AM, Gil Vidals wrote= : >> >> > Bryan thank you for the detailed answer. >> >> > >> >> > Assuming the ZIL SSD died, what steps would I follow to recover the >> pool? (i >> >> > hope it is recoverable). >> >> >> >> If you are running ZFSv1 through ZFSv18 and your log device dies, you= r >> >> pool is dead, gone, unrecoverable, no secret prize, no continues, do >> >> not pass go, etc, etc, etc. >> >> >> >> If you are running ZFSv19 or newer and your log device dies, you can >> >> remove the dead device and carry on. =A0You will lose any data that w= as >> >> in the ZIL, but the pool will be intact. >> > >> > Given the severity of this predicament, then why is it people are >> > disabling the ZIL (via vfs.zfs.zil_disable=3D1) ? >> >> I'm not sure what you mean by that. >> >> This (dead ZIL =3D=3D dead pool) only applies to separate log (slog) dev= ices. >> >> -- >> Freddie Cash >> fjwcash@gmail.com >> _______________________________________________ >> freebsd-fs@freebsd.org mailing list >> http://lists.freebsd.org/mailman/listinfo/freebsd-fs >> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" >> > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > --=20 Olivier Smedts=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 = =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0=A0 _ =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0= =A0 ASCII ribbon campaign ( ) e-mail: olivier@gid0.org=A0 =A0 =A0 =A0 - against HTML email & vCards=A0 X www: http://www.gid0.org=A0 =A0 - against proprietary attachments / \ =A0 "Il y a seulement 10 sortes de gens dans le monde : =A0 ceux qui comprennent le binaire, =A0 et ceux qui ne le comprennent pas." From owner-freebsd-fs@FreeBSD.ORG Fri Sep 17 18:41:18 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 3F53A106564A for ; Fri, 17 Sep 2010 18:41:18 +0000 (UTC) (envelope-from obrien@NUXI.org) Received: from dragon.nuxi.org (trang.nuxi.org [74.95.12.85]) by mx1.freebsd.org (Postfix) with ESMTP id 030F98FC1C for ; Fri, 17 Sep 2010 18:41:17 +0000 (UTC) Received: from dragon.nuxi.org (obrien@localhost [127.0.0.1]) by dragon.nuxi.org (8.14.4/8.14.4) with ESMTP id o8HI7cUf051716 for ; Fri, 17 Sep 2010 11:07:38 -0700 (PDT) (envelope-from obrien@dragon.nuxi.org) Received: (from obrien@localhost) by dragon.nuxi.org (8.14.4/8.14.4/Submit) id o8HI7cEg051715 for freebsd-fs@freebsd.org; Fri, 17 Sep 2010 11:07:38 -0700 (PDT) (envelope-from obrien) Date: Fri, 17 Sep 2010 11:07:38 -0700 From: "David O'Brien" To: freebsd-fs@freebsd.org Message-ID: <20100917180738.GA51572@dragon.NUXI.org> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline X-Operating-System: FreeBSD 9.0-CURRENT X-to-the-FBI-CIA-and-NSA: HI! HOW YA DOIN? 
User-Agent: Mutt/1.5.16 (2007-06-09) Subject: [PATCH] replace INVARIANTS+panic() with KASSERT X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: obrien@freebsd.org List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 17 Sep 2010 18:41:18 -0000 This patch changes most of the "asserts" and panic() within #ifdef INVARIANTS of olden years with KASSERTS. In doing so, it also changes some '"%s: blah", "thing"' with just '"thing: blah"' to make grep'ing easier. Some "notyet" code from the early 2000s is also reaped. Some sysctls are also added to make it easier to change some diagnostics values at runtime vs. I believe using the debugger to change them. thoughts? -- -- David (obrien@FreeBSD.org) Index: IVs/ffs/ffs_softdep.c =================================================================== --- ufs/ffs/ffs_softdep.c (revision 212799) +++ ufs/ffs/ffs_softdep.c (working copy) @@ -6015,11 +6015,9 @@ handle_complete_freeblocks(freeblks) vput(vp); } -#ifdef INVARIANTS - if (freeblks->fb_chkcnt != 0 && - ((fs->fs_flags & FS_UNCLEAN) == 0 || (flags & LK_NOWAIT) != 0)) - printf("handle_workitem_freeblocks: block count\n"); -#endif /* INVARIANTS */ + KASSERT(freeblks->fb_chkcnt != 0 && + ((fs->fs_flags & FS_UNCLEAN) == 0 || (flags & LK_NOWAIT) != 0), + ("handle_workitem_freeblocks: block count")); ACQUIRE_LOCK(&lk); /* @@ -6089,12 +6087,7 @@ indir_trunc(freework, dbn, lbn) * a complete copy of the indirect block in memory for our use. * Otherwise we have to read the blocks in from the disk. */ -#ifdef notyet - bp = getblk(freeblks->fb_devvp, dbn, (int)fs->fs_bsize, 0, 0, - GB_NOCREAT); -#else bp = incore(&freeblks->fb_devvp->v_bufobj, dbn); -#endif ACQUIRE_LOCK(&lk); if (bp != NULL && (wk = LIST_FIRST(&bp->b_dep)) != NULL) { if (wk->wk_type != D_INDIRDEP || @@ -6109,10 +6102,6 @@ indir_trunc(freework, dbn, lbn) ump->um_numindirdeps -= 1; FREE_LOCK(&lk); } else { -#ifdef notyet - if (bp) - brelse(bp); -#endif FREE_LOCK(&lk); if (bread(freeblks->fb_devvp, dbn, (int)fs->fs_bsize, NOCRED, &bp) != 0) { @@ -7847,7 +7836,9 @@ handle_workitem_freefile(freefile) { struct workhead wkhd; struct fs *fs; +#ifdef DEBUG struct inodedep *idp; +#endif struct ufsmount *ump; int error; @@ -8164,29 +8155,26 @@ initiate_write_inodeblock_ufs1(inodedep, */ for (deplist = 0, adp = TAILQ_FIRST(&inodedep->id_inoupdt); adp; adp = TAILQ_NEXT(adp, ad_next)) { + KASSERT(deplist != 0 && prevlbn >= adp->ad_offset, + ("softdep_write_inodeblock: lbn order")); #ifdef INVARIANTS - if (deplist != 0 && prevlbn >= adp->ad_offset) - panic("softdep_write_inodeblock: lbn order"); prevlbn = adp->ad_offset; - if (adp->ad_offset < NDADDR && - dp->di_db[adp->ad_offset] != adp->ad_newblkno) - panic("%s: direct pointer #%jd mismatch %d != %jd", - "softdep_write_inodeblock", - (intmax_t)adp->ad_offset, - dp->di_db[adp->ad_offset], - (intmax_t)adp->ad_newblkno); - if (adp->ad_offset >= NDADDR && - dp->di_ib[adp->ad_offset - NDADDR] != adp->ad_newblkno) - panic("%s: indirect pointer #%jd mismatch %d != %jd", - "softdep_write_inodeblock", - (intmax_t)adp->ad_offset - NDADDR, - dp->di_ib[adp->ad_offset - NDADDR], - (intmax_t)adp->ad_newblkno); + KASSERT(adp->ad_offset < NDADDR && + dp->di_db[adp->ad_offset] != adp->ad_newblkno, + ("softdep_write_inodeblock: direct pointer #%jd mismatch %d != %jd", + (intmax_t)adp->ad_offset, dp->di_db[adp->ad_offset], + (intmax_t)adp->ad_newblkno)); + KASSERT(adp->ad_offset >= NDADDR && + dp->di_ib[adp->ad_offset - 
NDADDR] != adp->ad_newblkno, + ("softdep_write_inodeblock: indirect pointer #%jd mismatch %d != %jd", + (intmax_t)adp->ad_offset - NDADDR, + dp->di_ib[adp->ad_offset - NDADDR], + (intmax_t)adp->ad_newblkno)); deplist |= 1 << adp->ad_offset; - if ((adp->ad_state & ATTACHED) == 0) - panic("softdep_write_inodeblock: Unknown state 0x%x", - adp->ad_state); #endif /* INVARIANTS */ + KASSERT((adp->ad_state & ATTACHED) == 0, + ("softdep_write_inodeblock: Unknown state 0x%x", + adp->ad_state)); adp->ad_state &= ~ATTACHED; adp->ad_state |= UNDONE; } @@ -8206,18 +8194,14 @@ initiate_write_inodeblock_ufs1(inodedep, continue; dp->di_size = fs->fs_bsize * adp->ad_offset + adp->ad_oldsize; for (i = adp->ad_offset + 1; i < NDADDR; i++) { -#ifdef INVARIANTS - if (dp->di_db[i] != 0 && (deplist & (1 << i)) == 0) - panic("softdep_write_inodeblock: lost dep1"); -#endif /* INVARIANTS */ + KASSERT(dp->di_db[i] != 0 && (deplist & (1 << i)) == 0, + ("softdep_write_inodeblock: lost dep1")); dp->di_db[i] = 0; } for (i = 0; i < NIADDR; i++) { -#ifdef INVARIANTS - if (dp->di_ib[i] != 0 && - (deplist & ((1 << NDADDR) << i)) == 0) - panic("softdep_write_inodeblock: lost dep2"); -#endif /* INVARIANTS */ + KASSERT(dp->di_ib[i] != 0 && + (deplist & ((1 << NDADDR) << i)) == 0, + ("softdep_write_inodeblock: lost dep2")); dp->di_ib[i] = 0; } return; @@ -8345,21 +8329,20 @@ initiate_write_inodeblock_ufs2(inodedep, */ for (deplist = 0, adp = TAILQ_FIRST(&inodedep->id_extupdt); adp; adp = TAILQ_NEXT(adp, ad_next)) { + KASSERT(deplist != 0 && prevlbn >= adp->ad_offset, + ("softdep_write_inodeblock: lbn order")); #ifdef INVARIANTS - if (deplist != 0 && prevlbn >= adp->ad_offset) - panic("softdep_write_inodeblock: lbn order"); prevlbn = adp->ad_offset; - if (dp->di_extb[adp->ad_offset] != adp->ad_newblkno) - panic("%s: direct pointer #%jd mismatch %jd != %jd", - "softdep_write_inodeblock", - (intmax_t)adp->ad_offset, - (intmax_t)dp->di_extb[adp->ad_offset], - (intmax_t)adp->ad_newblkno); + KASSERT(dp->di_extb[adp->ad_offset] != adp->ad_newblkno, + ("softdep_write_inodeblock: direct pointer #%jd mismatch %jd != %jd", + (intmax_t)adp->ad_offset, + (intmax_t)dp->di_extb[adp->ad_offset], + (intmax_t)adp->ad_newblkno)); deplist |= 1 << adp->ad_offset; - if ((adp->ad_state & ATTACHED) == 0) - panic("softdep_write_inodeblock: Unknown state 0x%x", - adp->ad_state); #endif /* INVARIANTS */ + KASSERT((adp->ad_state & ATTACHED) == 0, + ("softdep_write_inodeblock: Unknown state 0x%x", + adp->ad_state)); adp->ad_state &= ~ATTACHED; adp->ad_state |= UNDONE; } @@ -8377,10 +8360,9 @@ initiate_write_inodeblock_ufs2(inodedep, continue; dp->di_extsize = fs->fs_bsize * adp->ad_offset + adp->ad_oldsize; for (i = adp->ad_offset + 1; i < NXADDR; i++) { -#ifdef INVARIANTS - if (dp->di_extb[i] != 0 && (deplist & (1 << i)) == 0) - panic("softdep_write_inodeblock: lost dep1"); -#endif /* INVARIANTS */ + KASSERT(dp->di_extb[i] != 0 && + (deplist & (1 << i)) == 0, + ("softdep_write_inodeblock: lost dep1")); dp->di_extb[i] = 0; } lastadp = NULL; @@ -8404,29 +8386,27 @@ initiate_write_inodeblock_ufs2(inodedep, */ for (deplist = 0, adp = TAILQ_FIRST(&inodedep->id_inoupdt); adp; adp = TAILQ_NEXT(adp, ad_next)) { + KASSERT(deplist != 0 && prevlbn >= adp->ad_offset, + ("softdep_write_inodeblock: lbn order")); #ifdef INVARIANTS - if (deplist != 0 && prevlbn >= adp->ad_offset) - panic("softdep_write_inodeblock: lbn order"); prevlbn = adp->ad_offset; - if (adp->ad_offset < NDADDR && - dp->di_db[adp->ad_offset] != adp->ad_newblkno) - panic("%s: direct pointer #%jd 
mismatch %jd != %jd", - "softdep_write_inodeblock", - (intmax_t)adp->ad_offset, - (intmax_t)dp->di_db[adp->ad_offset], - (intmax_t)adp->ad_newblkno); - if (adp->ad_offset >= NDADDR && - dp->di_ib[adp->ad_offset - NDADDR] != adp->ad_newblkno) - panic("%s indirect pointer #%jd mismatch %jd != %jd", - "softdep_write_inodeblock:", - (intmax_t)adp->ad_offset - NDADDR, - (intmax_t)dp->di_ib[adp->ad_offset - NDADDR], - (intmax_t)adp->ad_newblkno); + KASSERT(adp->ad_offset < NDADDR && + dp->di_db[adp->ad_offset] != adp->ad_newblkno, + ("softdep_write_inodeblock: direct pointer #%jd mismatch %jd != %jd", + (intmax_t)adp->ad_offset, + (intmax_t)dp->di_db[adp->ad_offset], + (intmax_t)adp->ad_newblkno)); + KASSERT(adp->ad_offset >= NDADDR && + dp->di_ib[adp->ad_offset - NDADDR] != adp->ad_newblkno, + ("softdep_write_inodeblock: indirect pointer #%jd mismatch %jd != %jd", + (intmax_t)adp->ad_offset - NDADDR, + (intmax_t)dp->di_ib[adp->ad_offset - NDADDR], + (intmax_t)adp->ad_newblkno)); deplist |= 1 << adp->ad_offset; - if ((adp->ad_state & ATTACHED) == 0) - panic("softdep_write_inodeblock: Unknown state 0x%x", - adp->ad_state); #endif /* INVARIANTS */ + KASSERT((adp->ad_state & ATTACHED) == 0, + ("softdep_write_inodeblock: Unknown state 0x%x", + adp->ad_state)); adp->ad_state &= ~ATTACHED; adp->ad_state |= UNDONE; } @@ -8446,18 +8426,14 @@ initiate_write_inodeblock_ufs2(inodedep, continue; dp->di_size = fs->fs_bsize * adp->ad_offset + adp->ad_oldsize; for (i = adp->ad_offset + 1; i < NDADDR; i++) { -#ifdef INVARIANTS - if (dp->di_db[i] != 0 && (deplist & (1 << i)) == 0) - panic("softdep_write_inodeblock: lost dep2"); -#endif /* INVARIANTS */ + KASSERT(dp->di_db[i] != 0 && (deplist & (1 << i)) == 0, + ("softdep_write_inodeblock: lost dep2")); dp->di_db[i] = 0; } for (i = 0; i < NIADDR; i++) { -#ifdef INVARIANTS - if (dp->di_ib[i] != 0 && - (deplist & ((1 << NDADDR) << i)) == 0) - panic("softdep_write_inodeblock: lost dep3"); -#endif /* INVARIANTS */ + KASSERT(dp->di_ib[i] != 0 && + (deplist & ((1 << NDADDR) << i)) == 0, + ("softdep_write_inodeblock: lost dep3")); dp->di_ib[i] = 0; } return; Index: ufs/ffs/ffs_vnops.c =================================================================== --- ufs/ffs/ffs_vnops.c (revision 212799) +++ ufs/ffs/ffs_vnops.c (working copy) @@ -465,10 +465,9 @@ ffs_read(ap) seqcount = ap->a_ioflag >> IO_SEQSHIFT; ip = VTOI(vp); -#ifdef INVARIANTS - if (uio->uio_rw != UIO_READ) - panic("ffs_read: mode"); + KASSERT(uio->uio_rw != UIO_READ, ("ffs_read: mode")); +#ifdef INVARIANTS if (vp->v_type == VLNK) { if ((int)ip->i_size < vp->v_mount->mnt_maxsymlinklen) panic("ffs_read: short symlink"); @@ -667,10 +666,7 @@ ffs_write(ap) seqcount = ap->a_ioflag >> IO_SEQSHIFT; ip = VTOI(vp); -#ifdef INVARIANTS - if (uio->uio_rw != UIO_WRITE) - panic("ffs_write: mode"); -#endif + KASSERT(uio->uio_rw != UIO_WRITE, ("ffs_write: mode")); switch (vp->v_type) { case VREG: @@ -884,11 +880,9 @@ ffs_extread(struct vnode *vp, struct uio fs = ip->i_fs; dp = ip->i_din2; -#ifdef INVARIANTS - if (uio->uio_rw != UIO_READ || fs->fs_magic != FS_UFS2_MAGIC) - panic("ffs_extread: mode"); + KASSERT(uio->uio_rw != UIO_READ || fs->fs_magic != FS_UFS2_MAGIC, + ("ffs_extread: mode")); -#endif orig_resid = uio->uio_resid; KASSERT(orig_resid >= 0, ("ffs_extread: uio->uio_resid < 0")); if (orig_resid == 0) @@ -1036,10 +1030,8 @@ ffs_extwrite(struct vnode *vp, struct ui fs = ip->i_fs; dp = ip->i_din2; -#ifdef INVARIANTS - if (uio->uio_rw != UIO_WRITE || fs->fs_magic != FS_UFS2_MAGIC) - panic("ffs_extwrite: mode"); -#endif 
+ KASSERT(uio->uio_rw != UIO_WRITE || fs->fs_magic != FS_UFS2_MAGIC, + ("ffs_extwrite: mode")); if (ioflag & IO_APPEND) uio->uio_offset = dp->di_extsize; Index: ufs/ffs/ffs_alloc.c =================================================================== --- ufs/ffs/ffs_alloc.c (revision 212799) +++ ufs/ffs/ffs_alloc.c (working copy) @@ -257,9 +257,9 @@ ffs_realloccg(ip, lbprev, bprev, bpref, bp = NULL; ump = ip->i_ump; mtx_assert(UFS_MTX(ump), MA_OWNED); + KASSERT(vp->v_mount->mnt_kern_flag & MNTK_SUSPENDED, + ("ffs_realloccg: allocation on suspended filesystem")); #ifdef INVARIANTS - if (vp->v_mount->mnt_kern_flag & MNTK_SUSPENDED) - panic("ffs_realloccg: allocation on suspended filesystem"); if ((u_int)osize > fs->fs_bsize || fragoff(fs, osize) != 0 || (u_int)nsize > fs->fs_bsize || fragoff(fs, nsize) != 0) { printf( @@ -268,9 +268,8 @@ ffs_realloccg(ip, lbprev, bprev, bpref, nsize, fs->fs_fsmnt); panic("ffs_realloccg: bad size"); } - if (cred == NOCRED) - panic("ffs_realloccg: missing credential"); #endif /* INVARIANTS */ + KASSERT(cred == NOCRED, ("ffs_realloccg: missing credential")); reclaimed = 0; retry: if (priv_check_cred(cred, PRIV_VFS_BLOCKRESERVE, 0) && @@ -455,7 +454,11 @@ static int doreallocblks = 1; SYSCTL_INT(_vfs_ffs, OID_AUTO, doreallocblks, CTLFLAG_RW, &doreallocblks, 0, ""); #ifdef DEBUG -static volatile int prtrealloc = 0; +static int prtrealloc = 0; +static SYSCTL_NODE(_vfs_ffs, OID_AUTO, diagnostics, CTLFLAG_RW, 0, + "FFS filesystem diagnostics"); +SYSCTL_INT(_vfs_ffs_diagnostics, OID_AUTO, prtrealloc, CTLFLAG_RW, &prtrealloc, + 0, ""); #endif int @@ -517,14 +520,14 @@ ffs_reallocblks_ufs1(ap) dbtofsb(fs, buflist->bs_children[i]->b_blkno), fs->fs_bsize)) panic("ffs_reallocblks: unallocated block 1"); for (i = 1; i < len; i++) - if (buflist->bs_children[i]->b_lblkno != start_lbn + i) - panic("ffs_reallocblks: non-logical cluster"); + KASSERT(buflist->bs_children[i]->b_lblkno != start_lbn + i, + ("ffs_reallocblks: non-logical cluster")); blkno = buflist->bs_children[0]->b_blkno; ssize = fsbtodb(fs, fs->fs_frag); for (i = 1; i < len - 1; i++) - if (buflist->bs_children[i]->b_blkno != blkno + (i * ssize)) - panic("ffs_reallocblks: non-physical cluster %d", i); -#endif + KASSERT(buflist->bs_children[i]->b_blkno != blkno + (i * ssize), + ("ffs_reallocblks: non-physical cluster %d", i)); +#endif /* INVARIANTS */ /* * If the latest allocation is in a new cylinder group, assume that * the filesystem has decided to move and do not force it back to @@ -557,11 +560,9 @@ ffs_reallocblks_ufs1(ap) if (end_lvl == 0 || (idp = &end_ap[end_lvl - 1])->in_off + 1 >= len) { ssize = len; } else { -#ifdef INVARIANTS - if (start_lvl > 0 && - start_ap[start_lvl - 1].in_lbn == idp->in_lbn) - panic("ffs_reallocblk: start == end"); -#endif + KASSERT(start_lvl > 0 && + start_ap[start_lvl - 1].in_lbn == idp->in_lbn, + ("INVARIANT: ffs_reallocblk: start == end")); ssize = len - (idp->in_off + 1); if (bread(vp, idp->in_lbn, (int)fs->fs_bsize, NOCRED, &ebp)) goto fail; @@ -598,13 +599,11 @@ ffs_reallocblks_ufs1(ap) bap = ebap; soff = -i; } -#ifdef INVARIANTS - if (!ffs_checkblk(ip, - dbtofsb(fs, buflist->bs_children[i]->b_blkno), fs->fs_bsize)) - panic("ffs_reallocblks: unallocated block 2"); - if (dbtofsb(fs, buflist->bs_children[i]->b_blkno) != *bap) - panic("ffs_reallocblks: alloc mismatch"); -#endif + KASSERT(!ffs_checkblk(ip, + dbtofsb(fs, buflist->bs_children[i]->b_blkno), + fs->fs_bsize), ("ffs_reallocblks: unallocated block 2")); + KASSERT(dbtofsb(fs, buflist->bs_children[i]->b_blkno) != *bap, + 
("ffs_reallocblks: alloc mismatch")); #ifdef DEBUG if (prtrealloc) printf(" %d,", *bap); @@ -664,12 +663,10 @@ ffs_reallocblks_ufs1(ap) dbtofsb(fs, buflist->bs_children[i]->b_blkno), fs->fs_bsize, ip->i_number, NULL); buflist->bs_children[i]->b_blkno = fsbtodb(fs, blkno); -#ifdef INVARIANTS - if (!ffs_checkblk(ip, - dbtofsb(fs, buflist->bs_children[i]->b_blkno), fs->fs_bsize)) - panic("ffs_reallocblks: unallocated block 3"); -#endif -#ifdef DEBUG + KASSERT(!ffs_checkblk(ip, + dbtofsb(fs, buflist->bs_children[i]->b_blkno), + fs->fs_bsize), ("ffs_reallocblks: unallocated block 3")); +#ifdef DIAGNOSTIC if (prtrealloc) printf(" %d,", blkno); #endif @@ -721,18 +718,18 @@ ffs_reallocblks_ufs2(ap) end_lbn = start_lbn + len - 1; #ifdef INVARIANTS for (i = 0; i < len; i++) - if (!ffs_checkblk(ip, - dbtofsb(fs, buflist->bs_children[i]->b_blkno), fs->fs_bsize)) - panic("ffs_reallocblks: unallocated block 1"); + KASSERT(!ffs_checkblk(ip, + dbtofsb(fs, buflist->bs_children[i]->b_blkno), + fs->fs_bsize), ("ffs_reallocblks: unallocated block 1")); for (i = 1; i < len; i++) - if (buflist->bs_children[i]->b_lblkno != start_lbn + i) - panic("ffs_reallocblks: non-logical cluster"); + KASSERT(buflist->bs_children[i]->b_lblkno != start_lbn + i, + ("ffs_reallocblks: non-logical cluster")); blkno = buflist->bs_children[0]->b_blkno; ssize = fsbtodb(fs, fs->fs_frag); for (i = 1; i < len - 1; i++) - if (buflist->bs_children[i]->b_blkno != blkno + (i * ssize)) - panic("ffs_reallocblks: non-physical cluster %d", i); -#endif + KASSERT(buflist->bs_children[i]->b_blkno != blkno + (i * ssize), + ("ffs_reallocblks: non-physical cluster %d", i)); +#endif /* INVARIANTS */ /* * If the latest allocation is in a new cylinder group, assume that * the filesystem has decided to move and do not force it back to @@ -765,11 +762,9 @@ ffs_reallocblks_ufs2(ap) if (end_lvl == 0 || (idp = &end_ap[end_lvl - 1])->in_off + 1 >= len) { ssize = len; } else { -#ifdef INVARIANTS - if (start_lvl > 0 && - start_ap[start_lvl - 1].in_lbn == idp->in_lbn) - panic("ffs_reallocblk: start == end"); -#endif + KASSERT(start_lvl > 0 && + start_ap[start_lvl - 1].in_lbn == idp->in_lbn, + ("INVARIANT: ffs_reallocblk: start == end")); ssize = len - (idp->in_off + 1); if (bread(vp, idp->in_lbn, (int)fs->fs_bsize, NOCRED, &ebp)) goto fail; @@ -806,13 +801,11 @@ ffs_reallocblks_ufs2(ap) bap = ebap; soff = -i; } -#ifdef INVARIANTS - if (!ffs_checkblk(ip, - dbtofsb(fs, buflist->bs_children[i]->b_blkno), fs->fs_bsize)) - panic("ffs_reallocblks: unallocated block 2"); - if (dbtofsb(fs, buflist->bs_children[i]->b_blkno) != *bap) - panic("ffs_reallocblks: alloc mismatch"); -#endif + KASSERT(!ffs_checkblk(ip, + dbtofsb(fs, buflist->bs_children[i]->b_blkno), fs->fs_bsize), + ("ffs_reallocblks: unallocated block 2")); + KASSERT(dbtofsb(fs, buflist->bs_children[i]->b_blkno) != *bap, + ("ffs_reallocblks: alloc mismatch")); #ifdef DEBUG if (prtrealloc) printf(" %jd,", (intmax_t)*bap); @@ -872,11 +865,9 @@ ffs_reallocblks_ufs2(ap) dbtofsb(fs, buflist->bs_children[i]->b_blkno), fs->fs_bsize, ip->i_number, NULL); buflist->bs_children[i]->b_blkno = fsbtodb(fs, blkno); -#ifdef INVARIANTS - if (!ffs_checkblk(ip, - dbtofsb(fs, buflist->bs_children[i]->b_blkno), fs->fs_bsize)) - panic("ffs_reallocblks: unallocated block 3"); -#endif + KASSERT(!ffs_checkblk(ip, + dbtofsb(fs, buflist->bs_children[i]->b_blkno), + fs->fs_bsize), ("ffs_reallocblks: unallocated block 3")); #ifdef DEBUG if (prtrealloc) printf(" %jd,", (intmax_t)blkno); @@ -1280,10 +1271,8 @@ ffs_hashalloc(ip, cg, pref, 
size, rsize, u_int i, icg = cg; mtx_assert(UFS_MTX(ip->i_ump), MA_OWNED); -#ifdef INVARIANTS - if (ITOV(ip)->v_mount->mnt_kern_flag & MNTK_SUSPENDED) - panic("ffs_hashalloc: allocation on suspended filesystem"); -#endif + KASSERT(ITOV(ip)->v_mount->mnt_kern_flag & MNTK_SUSPENDED, + ("ffs_hashalloc: allocation on suspended filesystem")); fs = ip->i_fs; /* * 1: preferred cylinder group Index: ufs/ffs/ffs_balloc.c =================================================================== --- ufs/ffs/ffs_balloc.c (revision 212799) +++ ufs/ffs/ffs_balloc.c (working copy) @@ -226,10 +226,8 @@ ffs_balloc_ufs1(struct vnode *vp, off_t pref = 0; if ((error = ufs_getlbns(vp, lbn, indirs, &num)) != 0) return(error); -#ifdef INVARIANTS - if (num < 1) - panic ("ffs_balloc_ufs1: ufs_getlbns returned indirect block"); -#endif + KASSERT(num < 1, + ("ffs_balloc_ufs1: ufs_getlbns returned indirect block")); saved_inbdflush = ~TDP_INBDFLUSH | (curthread->td_pflags & TDP_INBDFLUSH); curthread->td_pflags |= TDP_INBDFLUSH; @@ -737,10 +735,8 @@ ffs_balloc_ufs2(struct vnode *vp, off_t pref = 0; if ((error = ufs_getlbns(vp, lbn, indirs, &num)) != 0) return(error); -#ifdef INVARIANTS - if (num < 1) - panic ("ffs_balloc_ufs2: ufs_getlbns returned indirect block"); -#endif + KASSERT(num < 1, + ("ffs_balloc_ufs2: ufs_getlbns returned indirect block")); saved_inbdflush = ~TDP_INBDFLUSH | (curthread->td_pflags & TDP_INBDFLUSH); curthread->td_pflags |= TDP_INBDFLUSH; Index: ufs/ffs/ffs_inode.c =================================================================== --- ufs/ffs/ffs_inode.c (revision 212799) +++ ufs/ffs/ffs_inode.c (working copy) @@ -243,10 +243,8 @@ ffs_truncate(vp, length, flags, cred, td if (vp->v_type == VLNK && (ip->i_size < vp->v_mount->mnt_maxsymlinklen || datablocks == 0)) { -#ifdef INVARIANTS - if (length != 0) - panic("ffs_truncate: partial truncate of symlink"); -#endif + KASSERT(length != 0, + ("ffs_truncate: partial truncate of symlink")); bzero(SHORTLINK(ip), (u_int)ip->i_size); ip->i_size = 0; DIP_SET(ip, i_size, 0); @@ -516,16 +514,15 @@ ffs_truncate(vp, length, flags, cred, td done: #ifdef INVARIANTS for (level = SINGLE; level <= TRIPLE; level++) - if (newblks[NDADDR + level] != DIP(ip, i_ib[level])) - panic("ffs_truncate1"); + KASSERT(newblks[NDADDR + level] != DIP(ip, i_ib[level]), + ("ffs_truncate1")); for (i = 0; i < NDADDR; i++) - if (newblks[i] != DIP(ip, i_db[i])) - panic("ffs_truncate2"); + KASSERT(newblks[i] != DIP(ip, i_db[i]), ("ffs_truncate2")); BO_LOCK(bo); - if (length == 0 && + KASSERT(length == 0 && (fs->fs_magic != FS_UFS2_MAGIC || ip->i_din2->di_extsize == 0) && - (bo->bo_dirty.bv_cnt > 0 || bo->bo_clean.bv_cnt > 0)) - panic("ffs_truncate3"); + (bo->bo_dirty.bv_cnt > 0 || bo->bo_clean.bv_cnt > 0), + ("ffs_truncate3")); BO_UNLOCK(bo); #endif /* INVARIANTS */ /* Index: ufs/ffs/ffs_snapshot.c =================================================================== --- ufs/ffs/ffs_snapshot.c (revision 212799) +++ ufs/ffs/ffs_snapshot.c (working copy) @@ -49,6 +49,7 @@ __FBSDID("$FreeBSD$"); #include #include #include +#include #include #include #include @@ -178,17 +179,13 @@ static int ffs_bp_snapblk(struct vnode * * penalty that this imposes, the following flag allows this * crash persistence to be disabled. 
*/ -int dopersistence = 0; - -#ifdef DEBUG -#include +static int dopersistence = 0; SYSCTL_INT(_debug, OID_AUTO, dopersistence, CTLFLAG_RW, &dopersistence, 0, ""); static int snapdebug = 0; SYSCTL_INT(_debug, OID_AUTO, snapdebug, CTLFLAG_RW, &snapdebug, 0, ""); int collectsnapstats = 0; SYSCTL_INT(_debug, OID_AUTO, collectsnapstats, CTLFLAG_RW, &collectsnapstats, 0, ""); -#endif /* DEBUG */ /* * Create a snapshot file and initialize it for the filesystem. @@ -2306,10 +2303,8 @@ ffs_copyonwrite(devvp, bp) blkno=((ufs2_daddr_t *)(ibp->b_data))[indiroff]; bqrelse(ibp); } -#ifdef INVARIANTS - if (blkno == BLK_SNAP && bp->b_lblkno >= 0) - panic("ffs_copyonwrite: bad copy block"); -#endif + KASSERT(blkno == BLK_SNAP && bp->b_lblkno >= 0, + ("ffs_copyonwrite: bad copy block")); if (blkno != 0) continue; /* From owner-freebsd-fs@FreeBSD.ORG Fri Sep 17 18:49:41 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 6ADFB1065672 for ; Fri, 17 Sep 2010 18:49:41 +0000 (UTC) (envelope-from mckusick@mckusick.com) Received: from chez.mckusick.com (chez.mckusick.com [64.81.247.49]) by mx1.freebsd.org (Postfix) with ESMTP id 34C878FC08 for ; Fri, 17 Sep 2010 18:49:40 +0000 (UTC) Received: from chez.mckusick.com (localhost [127.0.0.1]) by chez.mckusick.com (8.14.3/8.14.3) with ESMTP id o8HIneVw099913; Fri, 17 Sep 2010 11:49:40 -0700 (PDT) (envelope-from mckusick@chez.mckusick.com) Message-Id: <201009171849.o8HIneVw099913@chez.mckusick.com> To: obrien@freebsd.org In-reply-to: <20100917180738.GA51572@dragon.NUXI.org> Date: Fri, 17 Sep 2010 11:49:40 -0700 From: Kirk McKusick Cc: freebsd-fs@freebsd.org Subject: Re: [PATCH] replace INVARIANTS+panic() with KASSERT X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 17 Sep 2010 18:49:41 -0000 > Date: Fri, 17 Sep 2010 11:07:38 -0700 > From: "David O'Brien" > To: freebsd-fs@freebsd.org > Subject: [PATCH] replace INVARIANTS+panic() with KASSERT > > This patch changes most of the "asserts" and panic() within > #ifdef INVARIANTS of olden years with KASSERTS. > > In doing so, it also changes some '"%s: blah", "thing"' with just > '"thing: blah"' to make grep'ing easier. > > Some "notyet" code from the early 2000s is also reaped. > > Some sysctls are also added to make it easier to change some diagnostics > values at runtime vs. I believe using the debugger to change them. > > thoughts? > -- > -- David (obrien@FreeBSD.org) > > <<< patch followed >>> Your changes look like a good step forward. Especially since most folks do not include INVARIANTS these days expecting that KASSERTS will cover them. 
Kirk McKusick From owner-freebsd-fs@FreeBSD.ORG Fri Sep 17 19:09:13 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id F2881106564A for ; Fri, 17 Sep 2010 19:09:12 +0000 (UTC) (envelope-from kostikbel@gmail.com) Received: from mail.zoral.com.ua (mx0.zoral.com.ua [91.193.166.200]) by mx1.freebsd.org (Postfix) with ESMTP id 74D8A8FC15 for ; Fri, 17 Sep 2010 19:09:11 +0000 (UTC) Received: from deviant.kiev.zoral.com.ua (root@deviant.kiev.zoral.com.ua [10.1.1.148]) by mail.zoral.com.ua (8.14.2/8.14.2) with ESMTP id o8HJ97qP089652 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Fri, 17 Sep 2010 22:09:07 +0300 (EEST) (envelope-from kostikbel@gmail.com) Received: from deviant.kiev.zoral.com.ua (kostik@localhost [127.0.0.1]) by deviant.kiev.zoral.com.ua (8.14.4/8.14.4) with ESMTP id o8HJ97MO021636; Fri, 17 Sep 2010 22:09:07 +0300 (EEST) (envelope-from kostikbel@gmail.com) Received: (from kostik@localhost) by deviant.kiev.zoral.com.ua (8.14.4/8.14.4/Submit) id o8HJ97dm021635; Fri, 17 Sep 2010 22:09:07 +0300 (EEST) (envelope-from kostikbel@gmail.com) X-Authentication-Warning: deviant.kiev.zoral.com.ua: kostik set sender to kostikbel@gmail.com using -f Date: Fri, 17 Sep 2010 22:09:07 +0300 From: Kostik Belousov To: "David O'Brien" Message-ID: <20100917190907.GQ2389@deviant.kiev.zoral.com.ua> References: <20100917180738.GA51572@dragon.NUXI.org> Mime-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="nWEzmRaGLXxZdI3i" Content-Disposition: inline In-Reply-To: <20100917180738.GA51572@dragon.NUXI.org> User-Agent: Mutt/1.4.2.3i X-Virus-Scanned: clamav-milter 0.95.2 at skuns.kiev.zoral.com.ua X-Virus-Status: Clean X-Spam-Status: No, score=-3.4 required=5.0 tests=ALL_TRUSTED,AWL,BAYES_00, DNS_FROM_OPENWHOIS autolearn=no version=3.2.5 X-Spam-Checker-Version: SpamAssassin 3.2.5 (2008-06-10) on skuns.kiev.zoral.com.ua Cc: freebsd-fs@freebsd.org Subject: Re: [PATCH] replace INVARIANTS+panic() with KASSERT X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 17 Sep 2010 19:09:13 -0000 --nWEzmRaGLXxZdI3i Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On Fri, Sep 17, 2010 at 11:07:38AM -0700, David O'Brien wrote: > This patch changes most of the "asserts" and panic() within > #ifdef INVARIANTS of olden years with KASSERTS. >=20 > In doing so, it also changes some '"%s: blah", "thing"' with just > '"thing: blah"' to make grep'ing easier. >=20 > Some "notyet" code from the early 2000s is also reaped. >=20 > Some sysctls are also added to make it easier to change some diagnostics > values at runtime vs. I believe using the debugger to change them. >=20 > thoughts? 
> -- > -- David (obrien@FreeBSD.org) > > > Index: ufs/ffs/ffs_softdep.c > =================================================================== > --- ufs/ffs/ffs_softdep.c (revision 212799) > +++ ufs/ffs/ffs_softdep.c (working copy) > @@ -6015,11 +6015,9 @@ handle_complete_freeblocks(freeblks) > vput(vp); > } > > -#ifdef INVARIANTS > - if (freeblks->fb_chkcnt != 0 && > - ((fs->fs_flags & FS_UNCLEAN) == 0 || (flags & LK_NOWAIT) != 0)) > - printf("handle_workitem_freeblocks: block count\n"); > -#endif /* INVARIANTS */ > + KASSERT(freeblks->fb_chkcnt != 0 && > + ((fs->fs_flags & FS_UNCLEAN) == 0 || (flags & LK_NOWAIT) != 0), > + ("handle_workitem_freeblocks: block count")); Isn't this inverted? I believe that all conditions for panics/printfs should be inverted in KASSERTs. > > ACQUIRE_LOCK(&lk); > /* > @@ -6089,12 +6087,7 @@ indir_trunc(freework, dbn, lbn) > * a complete copy of the indirect block in memory for our use. > * Otherwise we have to read the blocks in from the disk. > */ > -#ifdef notyet > - bp = getblk(freeblks->fb_devvp, dbn, (int)fs->fs_bsize, 0, 0, > - GB_NOCREAT); > -#else > bp = incore(&freeblks->fb_devvp->v_bufobj, dbn); > -#endif > ACQUIRE_LOCK(&lk); > if (bp != NULL && (wk = LIST_FIRST(&bp->b_dep)) != NULL) { > if (wk->wk_type != D_INDIRDEP || > @@ -6109,10 +6102,6 @@ indir_trunc(freework, dbn, lbn) > ump->um_numindirdeps -= 1; > FREE_LOCK(&lk); > } else { > -#ifdef notyet > - if (bp) > - brelse(bp); > -#endif Please leave both notyet blocks in indir_trunc() as is. There are patches in progress that change these fragments, and you have presumably seen them.
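To spell out the inversion: KASSERT(9) panics when its condition is false, so the condition passed to KASSERT() must be the invariant that is expected to hold, the logical negation of the old panic()/printf() test. A sketch only, reusing the ffs_balloc_ufs1() hunk already shown in the patch:

	/* Old style: panic when the bad case is detected. */
#ifdef INVARIANTS
	if (num < 1)
		panic("ffs_balloc_ufs1: ufs_getlbns returned indirect block");
#endif

	/* Correct conversion: assert the condition that must hold. */
	KASSERT(num >= 1,
	    ("ffs_balloc_ufs1: ufs_getlbns returned indirect block"));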
--nWEzmRaGLXxZdI3i Content-Type: application/pgp-signature Content-Disposition: inline -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (FreeBSD) iEYEARECAAYFAkyTvNMACgkQC3+MBN1Mb4iaoACgo9laYXI+5727zodwTlog26pl 3U8AoISPP2jXWjGF4a6+1qQZa1CzrI85 =ERrv -----END PGP SIGNATURE----- --nWEzmRaGLXxZdI3i-- From owner-freebsd-fs@FreeBSD.ORG Fri Sep 17 19:16:33 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 8499B106564A for ; Fri, 17 Sep 2010 19:16:33 +0000 (UTC) (envelope-from pjd@garage.freebsd.pl) Received: from mail.garage.freebsd.pl (60.wheelsystems.com [83.12.187.60]) by mx1.freebsd.org (Postfix) with ESMTP id 216D38FC08 for ; Fri, 17 Sep 2010 19:16:32 +0000 (UTC) Received: by mail.garage.freebsd.pl (Postfix, from userid 65534) id 8EAAE45E11; Fri, 17 Sep 2010 21:16:31 +0200 (CEST) Received: from localhost (chello089077043238.chello.pl [89.77.43.238]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mail.garage.freebsd.pl (Postfix) with ESMTP id 263B245C99; Fri, 17 Sep 2010 21:16:26 +0200 (CEST) Date: Fri, 17 Sep 2010 21:16:09 +0200 From: Pawel Jakub Dawidek To: David O'Brien Message-ID: <20100917191609.GA1902@garage.freebsd.pl> References: <20100917180738.GA51572@dragon.NUXI.org> Mime-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="0OAP2g/MAC+5xKAE" Content-Disposition: inline In-Reply-To: <20100917180738.GA51572@dragon.NUXI.org> User-Agent: Mutt/1.4.2.3i X-PGP-Key-URL: http://people.freebsd.org/~pjd/pjd.asc X-OS: FreeBSD 9.0-CURRENT amd64 X-Spam-Checker-Version: SpamAssassin 3.0.4 (2005-06-05) on mail.garage.freebsd.pl X-Spam-Level: X-Spam-Status: No, score=-0.6 required=4.5 tests=BAYES_00,RCVD_IN_SORBS_DUL autolearn=no version=3.0.4 Cc: freebsd-fs@freebsd.org Subject: Re: [PATCH] replace INVARIANTS+panic() with KASSERT X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 17 Sep 2010 19:16:33 -0000 --0OAP2g/MAC+5xKAE Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On Fri, Sep 17, 2010 at 11:07:38AM -0700, David O'Brien wrote: > This patch changes most of the "asserts" and panic() within > #ifdef INVARIANTS of olden years with KASSERTS. >=20 > In doing so, it also changes some '"%s: blah", "thing"' with just > '"thing: blah"' to make grep'ing easier. >=20 > Some "notyet" code from the early 2000s is also reaped. >=20 > Some sysctls are also added to make it easier to change some diagnostics > values at runtime vs. I believe using the debugger to change them. >=20 > thoughts? David, have you actually tried to boot with your patch in place? Every single change you made is wrong. You converted: if (cond) panic("message"); to: KASSERT(cond, "message"); But assertions don't work this way. 
It should be: KASSERT(!cond, "message"); One more thing: > -#ifdef INVARIANTS > - if (freeblks->fb_chkcnt !=3D 0 &&=20 > - ((fs->fs_flags & FS_UNCLEAN) =3D=3D 0 || (flags & LK_NOWAIT) !=3D 0= )) > - printf("handle_workitem_freeblocks: block count\n"); > -#endif /* INVARIANTS */ > + KASSERT(freeblks->fb_chkcnt !=3D 0 && > + ((fs->fs_flags & FS_UNCLEAN) =3D=3D 0 || (flags & LK_NOWAIT) !=3D 0= ), > + ("handle_workitem_freeblocks: block count")); You replaced printf() with KASSERT(9) here, not panic(9). --=20 Pawel Jakub Dawidek http://www.wheelsystems.com pjd@FreeBSD.org http://www.FreeBSD.org FreeBSD committer Am I Evil? Yes, I Am! --0OAP2g/MAC+5xKAE Content-Type: application/pgp-signature Content-Disposition: inline -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.14 (FreeBSD) iEYEARECAAYFAkyTvnkACgkQForvXbEpPzRrlgCgvcS7Mgq5v52ct5Vc57L2+U8/ 8lsAoJBYLkOrgdTymPVWqZVrsYuH+jOH =ASuH -----END PGP SIGNATURE----- --0OAP2g/MAC+5xKAE-- From owner-freebsd-fs@FreeBSD.ORG Fri Sep 17 19:30:02 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 251731065672 for ; Fri, 17 Sep 2010 19:30:02 +0000 (UTC) (envelope-from pjd@garage.freebsd.pl) Received: from mail.garage.freebsd.pl (60.wheelsystems.com [83.12.187.60]) by mx1.freebsd.org (Postfix) with ESMTP id BF2648FC0A for ; Fri, 17 Sep 2010 19:30:01 +0000 (UTC) Received: by mail.garage.freebsd.pl (Postfix, from userid 65534) id 3F62145C98; Fri, 17 Sep 2010 21:30:00 +0200 (CEST) Received: from localhost (chello089077043238.chello.pl [89.77.43.238]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mail.garage.freebsd.pl (Postfix) with ESMTP id E6F8945683; Fri, 17 Sep 2010 21:29:54 +0200 (CEST) Date: Fri, 17 Sep 2010 21:29:38 +0200 From: Pawel Jakub Dawidek To: Andriy Bakay Message-ID: <20100917192938.GB1902@garage.freebsd.pl> References: Mime-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="JP+T4n/bALQSJXh8" Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.4.2.3i X-PGP-Key-URL: http://people.freebsd.org/~pjd/pjd.asc X-OS: FreeBSD 9.0-CURRENT amd64 X-Spam-Checker-Version: SpamAssassin 3.0.4 (2005-06-05) on mail.garage.freebsd.pl X-Spam-Level: X-Spam-Status: No, score=-0.6 required=4.5 tests=BAYES_00,RCVD_IN_SORBS_DUL autolearn=no version=3.0.4 Cc: "freebsd-fs@freebsd.org" Subject: Re: ZFS + GELI data integrity X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 17 Sep 2010 19:30:02 -0000 --JP+T4n/bALQSJXh8 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On Thu, Sep 16, 2010 at 03:22:27PM -0400, Andriy Bakay wrote: > Hi list(s), >=20 > I am using ZFS on top of GELI. Does exists any practical reason to enable= =20 > GELI data authentication (data integrity) underneath of ZFS? I understand= =20 > GELI data integrity is cryptographically strong -- up to HMAC/SHA512, but= =20 > ZFS has SHA256 checksum. GELI linked data to sector and will detect if = =20 > somebody move data around, but my understanding is to move data around = =20 > consistently one need to decrypt it which is very difficult. Correct me i= f =20 > I wrong. >=20 > Any thoughts? 
ZFS blocks form a Merkle tree (http://en.wikipedia.org/wiki/Hash_tree), so if you're using a cryptographically strong hash, like sha256, within your pool, I believe it is safe not to use GELI data authentication, but only encryption. Note that I'm not a cryptographer and this is quite a complex scenario, so what I believe here might not be true. Alternatively you could use GELI authentication and turn off ZFS checksum. When I personally use ZFS on top of GELI, I do just that: GELI does encryption only and ZFS does authentication with SHA256 checksum. -- Pawel Jakub Dawidek http://www.wheelsystems.com pjd@FreeBSD.org http://www.FreeBSD.org FreeBSD committer Am I Evil? Yes, I Am! --JP+T4n/bALQSJXh8 Content-Type: application/pgp-signature Content-Disposition: inline -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.14 (FreeBSD) iEYEARECAAYFAkyTwaEACgkQForvXbEpPzQIbQCgjA89ID5Jep0BoeeC2kilB8j7 Of4AnRqOnbvFwRE1t+iFkfkCAVXbbofG =sLC7 -----END PGP SIGNATURE----- --JP+T4n/bALQSJXh8-- From owner-freebsd-fs@FreeBSD.ORG Fri Sep 17 19:35:44 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id A610D1065674 for ; Fri, 17 Sep 2010 19:35:44 +0000 (UTC) (envelope-from pjd@garage.freebsd.pl) Received: from mail.garage.freebsd.pl (60.wheelsystems.com [83.12.187.60]) by mx1.freebsd.org (Postfix) with ESMTP id 508498FC15 for ; Fri, 17 Sep 2010 19:35:44 +0000 (UTC) Received: by mail.garage.freebsd.pl (Postfix, from userid 65534) id BDCF945D8D; Fri, 17 Sep 2010 21:35:42 +0200 (CEST) Received: from localhost (chello089077043238.chello.pl [89.77.43.238]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mail.garage.freebsd.pl (Postfix) with ESMTP id 9F96C45685; Fri, 17 Sep 2010 21:35:37 +0200 (CEST) Date: Fri, 17 Sep 2010 21:35:21 +0200 From: Pawel Jakub Dawidek To: Chris Watson Message-ID: <20100917193521.GC1902@garage.freebsd.pl> References: <82EA2358-F5E5-4CEE-91AC-4211C04F22FD@gmail.com> Mime-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="Bu8it7iiRSEf40bY" Content-Disposition: inline In-Reply-To: <82EA2358-F5E5-4CEE-91AC-4211C04F22FD@gmail.com> User-Agent: Mutt/1.4.2.3i X-PGP-Key-URL: http://people.freebsd.org/~pjd/pjd.asc X-OS: FreeBSD 9.0-CURRENT amd64 X-Spam-Checker-Version: SpamAssassin 3.0.4 (2005-06-05) on mail.garage.freebsd.pl X-Spam-Level: X-Spam-Status: No, score=-0.6 required=4.5 tests=BAYES_00,RCVD_IN_SORBS_DUL autolearn=no version=3.0.4 Cc: freebsd-fs@freebsd.org Subject: Re: ZFS I/O Throughput question.. X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 17 Sep 2010 19:35:44 -0000 --Bu8it7iiRSEf40bY Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On Wed, Sep 15, 2010 at 03:05:46AM -0500, Chris Watson wrote: > I have been testing ZFS on a home box now for a few days and I have a > question that is perplexing me. Everything I have read on ZFS says in > almost every case mirroring is faster than raidz. So I initially setup > a 2x2 Raid 10 striped mirror. Like so: [...] Could you try running something like this: # apply "dd if=/dev/ada%1 of=/dev/null bs=1m count=5000 &" 2 3 4 5 This will tell us how much of total throughput do you have.
If you can destroy your data, you may also try this: # apply "dd if=3D/dev/null of=3D/dev/ada%1 bs=3D1m count=3D5000 &" 2 3 4 5 If you disks cannot work at full speed in parallel this might explain what you're seeing. Mirror send to disk twice as much data as it receives and RAIDZ sends only 33% more data in four disk case. And no, there are neither special RAIDZ optimizations not special mirror pesimizations in FreeBSD. --=20 Pawel Jakub Dawidek http://www.wheelsystems.com pjd@FreeBSD.org http://www.FreeBSD.org FreeBSD committer Am I Evil? Yes, I Am! --Bu8it7iiRSEf40bY Content-Type: application/pgp-signature Content-Disposition: inline -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.14 (FreeBSD) iEYEARECAAYFAkyTwvgACgkQForvXbEpPzQInwCgqQ/Xo425FBWSH9tua2Da+tpr uWwAn1XeNmFtTtJJfbk9f9o4CGBm2Il2 =RUn9 -----END PGP SIGNATURE----- --Bu8it7iiRSEf40bY-- From owner-freebsd-fs@FreeBSD.ORG Fri Sep 17 20:09:56 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 77BB6106566C; Fri, 17 Sep 2010 20:09:56 +0000 (UTC) (envelope-from bsdunix44@gmail.com) Received: from mail-yw0-f54.google.com (mail-yw0-f54.google.com [209.85.213.54]) by mx1.freebsd.org (Postfix) with ESMTP id 192228FC0C; Fri, 17 Sep 2010 20:09:55 +0000 (UTC) Received: by ywt2 with SMTP id 2so1063017ywt.13 for ; Fri, 17 Sep 2010 13:09:55 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:received:received:references:in-reply-to :mime-version:content-transfer-encoding:content-type:message-id:cc :x-mailer:from:subject:date:to; bh=qsDznvMWu+D0BTK2jSXhTjOBXcGm/GVWr0BaQEX8N9U=; b=fZgS8816LfVy+ZteGvWwJEc181NPpq1+qevUSJUIg7OGrZgTtfh7k0VeQvbl32ouzy N0OjwqTAFaywLhWy3326SeNYOBrSYjtaL7Tppac01a3LdVtWj3Eh8T9vK/XXJP0ssZXW kzGHkpV9/5TUG73tltU+NFMj29L6qBcbJ7758= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=references:in-reply-to:mime-version:content-transfer-encoding :content-type:message-id:cc:x-mailer:from:subject:date:to; b=aIGfqfn1lSqdouJx+ZrQLt7NFL8HwmoVqnAjGlgkYYPqbvr+TnktkjDQ8ZYvSYFjH6 s1ECEYE14phAmLM+TauaS0YLM0Q4ZEpqYrcnxB6jaI7o3vhWxs2/ZwLb6qRkC1jNlgu7 hU4wrKUjp2s+oqqyQGz4Gi2SPGWn58yYaSsZA= Received: by 10.150.185.18 with SMTP id i18mr5693919ybf.327.1284754193217; Fri, 17 Sep 2010 13:09:53 -0700 (PDT) Received: from [10.1.140.232] (mobile-166-137-142-254.mycingular.net [166.137.142.254]) by mx.google.com with ESMTPS id t16sm5273671ybm.10.2010.09.17.13.09.49 (version=TLSv1/SSLv3 cipher=RC4-MD5); Fri, 17 Sep 2010 13:09:51 -0700 (PDT) References: <82EA2358-F5E5-4CEE-91AC-4211C04F22FD@gmail.com> <20100917193521.GC1902@garage.freebsd.pl> In-Reply-To: <20100917193521.GC1902@garage.freebsd.pl> Mime-Version: 1.0 (iPhone Mail 8B117) Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset=us-ascii Message-Id: <42855753-ED49-4850-8E9F-DB3DEB984E36@gmail.com> X-Mailer: iPhone Mail (8B117) From: Christopher Watson Date: Fri, 17 Sep 2010 15:10:09 -0500 To: Pawel Jakub Dawidek Cc: "freebsd-fs@freebsd.org" Subject: Re: ZFS I/O Throughput question.. X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 17 Sep 2010 20:09:56 -0000 I'll be able to test that in 8 hours. Thanks for the reply! I'll post result= s then. 
Also, what *is* the recommended way to get a more accurate test of f= ile I/O regarding zfs? dd clearly isn't the best tool. Postmark? Blogbench? Chris=20 Sent from my iPhone On Sep 17, 2010, at 2:35 PM, Pawel Jakub Dawidek wrote: > On Wed, Sep 15, 2010 at 03:05:46AM -0500, Chris Watson wrote: >> I have been testing ZFS on a home box now for a few days and I have a =20= >> question that is perplexing me. Everything I have read on ZFS says in =20= >> almost every case mirroring is faster than raidz. So I initially setup =20= >> a 2x2 Raid 10 striped mirror. Like so: > [...] >=20 > Could you try running something like this: >=20 > # apply "dd if=3D/dev/ada%1 of=3D/dev/null bs=3D1m count=3D5000 &" 2 3 4= 5 >=20 > This will tell us how much of total throughput do you have. > If you can destroy your data, you may also try this: >=20 > # apply "dd if=3D/dev/null of=3D/dev/ada%1 bs=3D1m count=3D5000 &" 2 3 4= 5 >=20 > If you disks cannot work at full speed in parallel this might explain > what you're seeing. Mirror send to disk twice as much data as it > receives and RAIDZ sends only 33% more data in four disk case. >=20 > And no, there are neither special RAIDZ optimizations not special mirror > pesimizations in FreeBSD. >=20 > --=20 > Pawel Jakub Dawidek http://www.wheelsystems.com > pjd@FreeBSD.org http://www.FreeBSD.org > FreeBSD committer Am I Evil? Yes, I Am! From owner-freebsd-fs@FreeBSD.ORG Fri Sep 17 20:10:43 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id E42861065670 for ; Fri, 17 Sep 2010 20:10:43 +0000 (UTC) (envelope-from pjd@garage.freebsd.pl) Received: from mail.garage.freebsd.pl (60.wheelsystems.com [83.12.187.60]) by mx1.freebsd.org (Postfix) with ESMTP id 8D4508FC16 for ; Fri, 17 Sep 2010 20:10:43 +0000 (UTC) Received: by mail.garage.freebsd.pl (Postfix, from userid 65534) id 5B68C45C98; Fri, 17 Sep 2010 22:10:42 +0200 (CEST) Received: from localhost (chello089077043238.chello.pl [89.77.43.238]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mail.garage.freebsd.pl (Postfix) with ESMTP id 56A0345684; Fri, 17 Sep 2010 22:10:37 +0200 (CEST) Date: Fri, 17 Sep 2010 22:10:21 +0200 From: Pawel Jakub Dawidek To: Chris Watson Message-ID: <20100917201021.GD1902@garage.freebsd.pl> References: <82EA2358-F5E5-4CEE-91AC-4211C04F22FD@gmail.com> <20100917193521.GC1902@garage.freebsd.pl> Mime-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="hxkXGo8AKqTJ+9QI" Content-Disposition: inline In-Reply-To: <20100917193521.GC1902@garage.freebsd.pl> User-Agent: Mutt/1.4.2.3i X-PGP-Key-URL: http://people.freebsd.org/~pjd/pjd.asc X-OS: FreeBSD 9.0-CURRENT amd64 X-Spam-Checker-Version: SpamAssassin 3.0.4 (2005-06-05) on mail.garage.freebsd.pl X-Spam-Level: X-Spam-Status: No, score=-0.6 required=4.5 tests=BAYES_00,RCVD_IN_SORBS_DUL autolearn=no version=3.0.4 Cc: freebsd-fs@freebsd.org Subject: Re: ZFS I/O Throughput question.. 
X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 17 Sep 2010 20:10:44 -0000 --hxkXGo8AKqTJ+9QI Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On Fri, Sep 17, 2010 at 09:35:21PM +0200, Pawel Jakub Dawidek wrote: > On Wed, Sep 15, 2010 at 03:05:46AM -0500, Chris Watson wrote: > > I have been testing ZFS on a home box now for a few days and I have a = =20 > > question that is perplexing me. Everything I have read on ZFS says in = =20 > > almost every case mirroring is faster than raidz. So I initially setup = =20 > > a 2x2 Raid 10 striped mirror. Like so: > [...] >=20 > Could you try running something like this: >=20 > # apply "dd if=3D/dev/ada%1 of=3D/dev/null bs=3D1m count=3D5000 &" 2 3 4= 5 >=20 > This will tell us how much of total throughput do you have. > If you can destroy your data, you may also try this: >=20 > # apply "dd if=3D/dev/null of=3D/dev/ada%1 bs=3D1m count=3D5000 &" 2 3 4= 5 # apply "dd if=3D/dev/zero of=3D/dev/ada%1 bs=3D1m count=3D5000 &" 2 3 4 5 Thanks to se@ for noticing this. > If you disks cannot work at full speed in parallel this might explain > what you're seeing. Mirror send to disk twice as much data as it > receives and RAIDZ sends only 33% more data in four disk case. >=20 > And no, there are neither special RAIDZ optimizations not special mirror > pesimizations in FreeBSD. --=20 Pawel Jakub Dawidek http://www.wheelsystems.com pjd@FreeBSD.org http://www.FreeBSD.org FreeBSD committer Am I Evil? Yes, I Am! --hxkXGo8AKqTJ+9QI Content-Type: application/pgp-signature Content-Disposition: inline -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.14 (FreeBSD) iEYEARECAAYFAkyTyywACgkQForvXbEpPzQNuwCfbYa9OSUjAQQDIfLY2RBnVh9E y2kAn3DdRqbA6PTH3qRYHwe1LoA5WqSy =jbFu -----END PGP SIGNATURE----- --hxkXGo8AKqTJ+9QI-- From owner-freebsd-fs@FreeBSD.ORG Fri Sep 17 22:58:09 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id E0C21106564A; Fri, 17 Sep 2010 22:58:09 +0000 (UTC) (envelope-from mm@FreeBSD.org) Received: from mail.vx.sk (mail.vx.sk [IPv6:2a01:4f8:100:1043::3]) by mx1.freebsd.org (Postfix) with ESMTP id 74F0F8FC08; Fri, 17 Sep 2010 22:58:09 +0000 (UTC) Received: from core.vx.sk (localhost [127.0.0.1]) by mail.vx.sk (Postfix) with ESMTP id 8EF0711ADA3; Sat, 18 Sep 2010 00:58:08 +0200 (CEST) X-Virus-Scanned: amavisd-new at mail.vx.sk Received: from mail.vx.sk ([127.0.0.1]) by core.vx.sk (mail.vx.sk [127.0.0.1]) (amavisd-new, port 10024) with LMTP id jwifWSqFIo3e; Sat, 18 Sep 2010 00:57:55 +0200 (CEST) Received: from [10.9.8.1] (188-167-78-139.dynamic.chello.sk [188.167.78.139]) by mail.vx.sk (Postfix) with ESMTPSA id CFA4F11AD6C; Sat, 18 Sep 2010 00:57:55 +0200 (CEST) Message-ID: <4C93F274.6080303@FreeBSD.org> Date: Sat, 18 Sep 2010 00:57:56 +0200 From: Martin Matuska User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; sk; rv:1.8.1.23) Gecko/20090812 Lightning/0.9 Thunderbird/2.0.0.23 Mnenhy/0.7.5.0 MIME-Version: 1.0 To: Josh Paetzel References: <4C8D234F.40204@quip.cz> <201009121640.39157.josh@tcbug.org> In-Reply-To: <201009121640.39157.josh@tcbug.org> X-Enigmail-Version: 1.1.1 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org, Robert Watson Subject: Re: FreeNAS vs OpenSolaris vs 
Nexenta ZFS Benchmarks X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 17 Sep 2010 22:58:10 -0000 >From the ZFS point of view, we have v15 now in 8-STABLE but the most important part are the speed improvements merged from recent ZFS (new metaslab code for significantly faster writes, new ACL caching and stat() speedup). pjd's recent very experimental patch enables ZFS v28 (the very latest). Anyway, there will be always some speed penalty compared to OpenSolaris (which is now dead and going to be continued in Openindiana and Illumos). Another point is this test was done via iSCSI (initiator). The choice of the used network card, its drivers and tuning of network parameters have to be considered as well. I have access to a Promise Storage Array with iSCSI and another one with SAS with the same drives - and I must say iSCSI was slow in my tests (SAS was really fast). On 12. 9. 2010 23:40, Josh Paetzel wrote: > > I'll respond and say that the current FreeNAS is based on FreeBSD 7, where ZFS > was an experimental filesystem. I think a system based on FreeBSD 8 will > provide a better comparison. > > I'm a tad confused about the whole "sharing a ZFS filesystem over iSCSI". I > thought iSCSI was used to eport LUNs that you then put a filesystem on with a > client. > > iSCSI on FreeBSD is fairly slow compared to other solutions, I think there is > some very preliminary work to fix that going on. > From owner-freebsd-fs@FreeBSD.ORG Sat Sep 18 00:51:41 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 4E054106564A for ; Sat, 18 Sep 2010 00:51:41 +0000 (UTC) (envelope-from jhellenthal@gmail.com) Received: from mail-gw0-f54.google.com (mail-gw0-f54.google.com [74.125.83.54]) by mx1.freebsd.org (Postfix) with ESMTP id E3B9E8FC0C for ; Sat, 18 Sep 2010 00:51:40 +0000 (UTC) Received: by gwb15 with SMTP id 15so1119745gwb.13 for ; Fri, 17 Sep 2010 17:51:40 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:received:received:sender:message-id:date:from :user-agent:mime-version:to:cc:subject:references:in-reply-to :x-enigmail-version:content-type:content-transfer-encoding; bh=kMO5UDbKLrYX+Thhr3UUCoBAIE5AhPCsvopkBNwguCk=; b=jhQ7eKvKDYP8spoicbmFXp3Df5SQlI9wi3oFWJaS9ZRAR+pL9FhfDvtX4jX6jOGC64 6d5bypeLAfsePwwHp+zJmieQ5WlHZN6iYvAS2AR5gadJq+AQDe2LUaEZX/QYJ2GAHVgY PDDsz4EOfmVAR7z5+bOp+yH4udU28fI886yYs= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=sender:message-id:date:from:user-agent:mime-version:to:cc:subject :references:in-reply-to:x-enigmail-version:content-type :content-transfer-encoding; b=l12fq1+WOXkMFHNDFKnIPZt9mvrJkGFM0TUOQ5OVBwi1yYYnilVRngiW71Emv+HGmx wp4w5dNxXWyy/R6ULpvWkmSXOTEhoEgxEul+zFbAt8iCSlgrxAgpqNFvtuPmDUgAi6Jc dBPMRPEuj9k2O4lYw64BDCfqwpW2D1tNPexRU= Received: by 10.101.95.9 with SMTP id x9mr6419911anl.36.1284771099919; Fri, 17 Sep 2010 17:51:39 -0700 (PDT) Received: from centel.dataix.local (adsl-99-181-146-122.dsl.klmzmi.sbcglobal.net [99.181.146.122]) by mx.google.com with ESMTPS id w6sm7028101anb.23.2010.09.17.17.51.37 (version=SSLv3 cipher=RC4-MD5); Fri, 17 Sep 2010 17:51:38 -0700 (PDT) Sender: "J. 
Hellenthal" Message-ID: <4C940D18.30808@DataIX.net> Date: Fri, 17 Sep 2010 20:51:36 -0400 From: jhell User-Agent: Mozilla/5.0 (X11; U; FreeBSD i386; en-US; rv:1.9.2.9) Gecko/20100917 Lightning/1.0b1 Thunderbird MIME-Version: 1.0 To: Bryan Drewery References: <4C9385B0.2080909@shatow.net> <20100917161847.GA58503@icarus.home.lan> <20100917163732.GA59537@icarus.home.lan> <4C939B47.6030701@shatow.net> <4C939BF8.7060105@shatow.net> In-Reply-To: <4C939BF8.7060105@shatow.net> X-Enigmail-Version: 1.1.2 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org Subject: Re: what happens to pool if ZIL dies on ZFS v14 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 18 Sep 2010 00:51:41 -0000 On 09/17/2010 12:48, Bryan Drewery wrote: >> >> The ZIL is still used even without a dedicated log device. Disabling >> it is *stupid* in most cases. >> Same goes for disabling the ARC. >> >> There is a lot of FUD out there regarding ZFS tuning. The bottom line: >> don't tune; add more RAM. >> > http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Disabling_the_ZIL_.28Don.27t.29 > This article is what I think of every time I see a user say they need to get a bigger ZIL vdev or slog in other terms. Ive seen people solely buy a 200GB drive just for the purpose of a slog. This is a sheer waste. If your going to buy drive or even use an existing one then partition that drive with one partition of 512MB, which you will never get to the end of, and then use the rest for a quick backup drive. -- jhell,v From owner-freebsd-fs@FreeBSD.ORG Sat Sep 18 00:55:35 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 3F257106564A for ; Sat, 18 Sep 2010 00:55:35 +0000 (UTC) (envelope-from jhellenthal@gmail.com) Received: from mail-yw0-f54.google.com (mail-yw0-f54.google.com [209.85.213.54]) by mx1.freebsd.org (Postfix) with ESMTP id E046C8FC08 for ; Sat, 18 Sep 2010 00:55:34 +0000 (UTC) Received: by ywt2 with SMTP id 2so1123066ywt.13 for ; Fri, 17 Sep 2010 17:55:34 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:received:received:sender:message-id:date:from :user-agent:mime-version:to:cc:subject:references:in-reply-to :x-enigmail-version:content-type:content-transfer-encoding; bh=VTjJIthja9mK9Wy6GW7jxa/UJX4Q/mynH643QybPFwE=; b=Aw8NLKdM8sg+SD220sKWCUJQ3YfjityRKCRAk5F+L+q31JjdgaeKBZG8onu7t8Pf7Z 5TwCyIpmGUPfI7V0LXvAhYpCaQiSlfedvJ+EpCae1NKrjbBQ8LhHgQslGdzcrltYr7Sg IOJwHfWh/Po547Z/gupcb2xJ0cqaA4+7zFldY= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=sender:message-id:date:from:user-agent:mime-version:to:cc:subject :references:in-reply-to:x-enigmail-version:content-type :content-transfer-encoding; b=k2ciQYp39o5gIXbyDa/FVSyZDSgokLWKwJqvvkopJaoZ/7KezXQhTBcz0mHMEBAVEC oSNikKm4odCdpilM5jUle9ejJW/8jHO5j5NZYCeuIw9nLLBMx/14z0I0+IX+tOm99Mop qua7KaoVYwHPJMGqTmrhCpuPDmQKiuueyDPbc= Received: by 10.150.47.37 with SMTP id u37mr6697324ybu.47.1284771333937; Fri, 17 Sep 2010 17:55:33 -0700 (PDT) Received: from centel.dataix.local (adsl-99-181-146-122.dsl.klmzmi.sbcglobal.net [99.181.146.122]) by mx.google.com with ESMTPS id w3sm261477ybi.7.2010.09.17.17.55.32 (version=SSLv3 cipher=RC4-MD5); Fri, 17 Sep 2010 17:55:33 -0700 (PDT) Sender: "J. 
Hellenthal" Message-ID: <4C940E02.1010405@DataIX.net> Date: Fri, 17 Sep 2010 20:55:30 -0400 From: jhell User-Agent: Mozilla/5.0 (X11; U; FreeBSD i386; en-US; rv:1.9.2.9) Gecko/20100917 Lightning/1.0b1 Thunderbird MIME-Version: 1.0 To: Freddie Cash References: <4C9385B0.2080909@shatow.net> <20100917161847.GA58503@icarus.home.lan> In-Reply-To: X-Enigmail-Version: 1.1.2 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org Subject: Re: what happens to pool if ZIL dies on ZFS v14 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 18 Sep 2010 00:55:35 -0000 On 09/17/2010 13:09, Freddie Cash wrote: > On Fri, Sep 17, 2010 at 9:47 AM, Gil Vidals wrote: >> First, let me say that I'm receiving excellent input from the FreeBSD >> community. I'm new to FreeBSD and ZFS and this mailing list has been very >> helpful. >> >> I'm running ZFSv14 on FreeBSD 8.1 AMD64 with 8GB of DDR3 RAM with two SSDs - >> one for the ZIL and the other for the L2ARC cache. >> >> zambia# zpool iostat -v 1 1 >> capacity operations bandwidth >> pool used avail read write read write >> ---------------- ----- ----- ----- ----- ----- ----- >> tank 6.57G 921G 0 11 116K 438K >> mirror 6.57G 921G 0 5 116K 229K >> label/disk1 - - 0 3 57.9K 229K >> label/disk2 - - 0 3 57.8K 229K >> label/zilcache 136K 59.5G 0 6 17 209K >> cache - - - - - - >> label/l2cache 59.6G 8.50K 0 0 31.5K 48.9K >> ---------------- ----- ----- ----- ----- ----- ----- >> >> Observing the ZIL Cache, I see it being used very sparingly. And now that I >> know the SSD slog must be mirrored in ZFS < v19, I think the best course of >> action (assuming I'm not buying more equipment) is to mirror the ZIL SSD and >> abandon the L2ARC altogether. Won't RAM be used for L2ARC instead? > > The ZIL is only used for synchronous writes, and does not need to be > very large. I forget the formula for determining the exact size of a > ZIL (something along the lines of the max amount of data you can write > in 30 seconds), but it's rarely more than 4 GB and usually in the 1-2 > GB range. > > If possible, you'd be better off rebuilding your pool like so: > mirror disk1 and disk2 > slice both SSDs into two: 4-8 GB for ZIL, rest for L2ARC > mirror zilcache1 zilcache2 > add l2cache1 l2cache2 (don't mirror them) > > That way, you have a mirrored ZIL, and double the L2ARC. However, > since it takes around 270 bytes of RAM for every object in the L2ARC, > you'll want to make sure you have lots of RAM to manage it (or, > possibly, make 3 slices on the SSDs and use the third for swap?). 
> http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Disabling_the_ZIL_.28Don.27t.29 -- jhell,v From owner-freebsd-fs@FreeBSD.ORG Sat Sep 18 01:53:47 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id E24281065673 for ; Sat, 18 Sep 2010 01:53:47 +0000 (UTC) (envelope-from andriy@irbisnet.com) Received: from smtp103.rog.mail.re2.yahoo.com (smtp103.rog.mail.re2.yahoo.com [206.190.36.81]) by mx1.freebsd.org (Postfix) with SMTP id A6BBD8FC0A for ; Sat, 18 Sep 2010 01:53:46 +0000 (UTC) Received: (qmail 93157 invoked from network); 18 Sep 2010 01:53:45 -0000 Received: from smtp.irbisnet.com (andriy@99.235.226.221 with login) by smtp103.rog.mail.re2.yahoo.com with SMTP; 17 Sep 2010 18:53:45 -0700 PDT X-Yahoo-SMTP: dz9sigaswBA5kWoYWVTZrGHmIs2vaKgG1w-- X-YMail-OSG: _cRN7ZYVM1mzU4nuNIwmkWbg652UM7t29BrZJksm_69WDli D1MzLKGcgW1.jCAa9Azi.fTT3.OmPWhOjkFit_IRalu4qRhEExhsXWVkR0No nsZR8kzrmKk6_pVypodHxhO_g7Uh_PPr2eW60pO5NIJty1KlngNrjTI_rWfA Myj2t59LvkWQLD2c66qb6RukcF6JPlL8ZxbKTGko5kVylwT2gOCmmFmKtdjZ OLLSTwLxk99Vb4AEAh7ODpbVmLmTBsrHmoKrf3FDew6nQH.blLp35RbzOEem k0G_BsTuTrn3qrnpax.OpiKDXjSU6A5fb_PBDTC3viXY2gqn_rr2dIOpD8nk m3wlf_15pguNXJh9obNdttWHLnebvj1HUUjke3WyNIq9hqm.ltTbKoTYu9Qh SIQjLzTTWv9QYrZmYyGV0vgIqOMP59aAgBVGOzHMBWOkX X-Yahoo-Newman-Property: ymail-3 Received: from prime.irbisnet.com (prime.irbisnet.vpn [10.78.76.4]) by smtp.irbisnet.com (Postfix) with ESMTPSA id DEACA11425; Fri, 17 Sep 2010 21:53:43 -0400 (EDT) Content-Type: text/plain; charset=utf-8; format=flowed; delsp=yes To: "Pawel Jakub Dawidek" References: <20100917192938.GB1902@garage.freebsd.pl> Date: Fri, 17 Sep 2010 21:53:36 -0400 MIME-Version: 1.0 Content-Transfer-Encoding: 7bit From: "Andriy Bakay" Message-ID: In-Reply-To: <20100917192938.GB1902@garage.freebsd.pl> User-Agent: Opera Mail/10.61 (FreeBSD) Cc: "freebsd-fs@freebsd.org" Subject: Re: ZFS + GELI data integrity X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 18 Sep 2010 01:53:48 -0000 Thanks, Pawel for detailed answer. Turn off ZFS checksum is not a option at least for me, because I will loose self healing I guess. But (ZFS with SHA256) + (GELI only encryption) sounds good. I have another question. I read on OpenSolaris ZFS Dedup FAQ, they used not very efficient implementation of ZFS SHA256 checksum: "However, ZFS uses its own copy of SHA256 and doesn't currently use a crypto accelerator or crypto framework." http://hub.opensolaris.org/bin/view/Community+Group+zfs/dedup What about FreeBSD implementation of ZFS SHA256 checksum? Thanks, Andriy On Fri, 17 Sep 2010 15:29:38 -0400, Pawel Jakub Dawidek wrote: > On Thu, Sep 16, 2010 at 03:22:27PM -0400, Andriy Bakay wrote: >> Hi list(s), >> >> I am using ZFS on top of GELI. Does exists any practical reason to >> enable >> GELI data authentication (data integrity) underneath of ZFS? I >> understand >> GELI data integrity is cryptographically strong -- up to HMAC/SHA512, >> but >> ZFS has SHA256 checksum. GELI linked data to sector and will detect if >> somebody move data around, but my understanding is to move data around >> consistently one need to decrypt it which is very difficult. Correct me >> if >> I wrong. >> >> Any thoughts? 
> > ZFS blocks form z merkle tree (http://en.wikipedia.org/wiki/Hash_tree), > so if you're using cryptographically strong hash, like sha256 within > your pool, I believe it is safe not to use GELI data authentication, but > only encryption. Note, that I'm not cryptographer and this is quite > complex scenario, so what I believe in here might not be true. > Alternatively you could use GELI authetication and turn off ZFS > checksum. When I personally use ZFS on top of GELI, I do just that: GELI > does encryption only and ZFS does authentication with SHA256 checksum. > -- Using Opera's revolutionary email client: http://www.opera.com/mail/ From owner-freebsd-fs@FreeBSD.ORG Sat Sep 18 04:42:01 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 271591065693; Sat, 18 Sep 2010 04:42:00 +0000 (UTC) (envelope-from bsdunix44@gmail.com) Received: from mail-iw0-f182.google.com (mail-iw0-f182.google.com [209.85.214.182]) by mx1.freebsd.org (Postfix) with ESMTP id C6BAA8FC0A; Sat, 18 Sep 2010 04:41:59 +0000 (UTC) Received: by iwn34 with SMTP id 34so3004007iwn.13 for ; Fri, 17 Sep 2010 21:41:59 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:received:received:cc:message-id:from:to :in-reply-to:content-type:content-transfer-encoding:mime-version :subject:date:references:x-mailer; bh=QCXG8l+1NUrIJpVuvcTO6L4V3qBW/n8czEPvgc8xQRA=; b=RNM/dC5c73bHLymRJoJxfbi71YMDdk39lsA9v0prlWW1lApl1Ad7VL214RBEMSMTGj uzX9M/ixNzmPUqDIA0eWCG9Dydt6Hj7N1Jh028Kh4q7VONYOoNVVVqyg3TlK9zpoMvGE 6LXnAi57eVL17ITJ4A3I6tyfiIGAOsYgKMkuM= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=cc:message-id:from:to:in-reply-to:content-type :content-transfer-encoding:mime-version:subject:date:references :x-mailer; b=GTuQrsO3yVh6A8M0g8LDMh41cwMcqoFXqN61P1WerLnJSclJoTkAMfJOKFIIbzNpgL HqvcNleotKW+l03s9S7V1szMJo+vxjer8BNQM2EQIaf9SGWQGTKwkdnVV+JVfwOmPUCm jtX5EugkM/l6LNQSgy/fvBIsDcot6k4ZZ/sO0= Received: by 10.231.11.197 with SMTP id u5mr4385141ibu.41.1284784918851; Fri, 17 Sep 2010 21:41:58 -0700 (PDT) Received: from [192.168.1.4] (ip98-164-15-137.ks.ks.cox.net [98.164.15.137]) by mx.google.com with ESMTPS id g31sm4459676ibh.16.2010.09.17.21.41.57 (version=TLSv1/SSLv3 cipher=RC4-MD5); Fri, 17 Sep 2010 21:41:57 -0700 (PDT) Message-Id: From: Chris Watson To: Pawel Jakub Dawidek In-Reply-To: <20100917201021.GD1902@garage.freebsd.pl> Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes Content-Transfer-Encoding: 7bit Mime-Version: 1.0 (Apple Message framework v936) Date: Fri, 17 Sep 2010 23:41:50 -0500 References: <82EA2358-F5E5-4CEE-91AC-4211C04F22FD@gmail.com> <20100917193521.GC1902@garage.freebsd.pl> <20100917201021.GD1902@garage.freebsd.pl> X-Mailer: Apple Mail (2.936) Cc: freebsd-fs@freebsd.org Subject: Re: ZFS I/O Throughput question.. 
X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 18 Sep 2010 04:42:01 -0000 priyanka# zpool status pool: tank state: ONLINE scrub: none requested config: NAME STATE READ WRITE CKSUM tank ONLINE 0 0 0 raidz1 ONLINE 0 0 0 ada2 ONLINE 0 0 0 ada3 ONLINE 0 0 0 ada4 ONLINE 0 0 0 ada5 ONLINE 0 0 0 errors: No known data errors priyanka#apply "dd if=/dev/ada%1 of=/dev/null bs=1m count=5000 &" 2 3 4 5 [1] 73068 [1] 73071 [1] 73074 [1] 73077 priyanka# 5000+0 records in 5000+0 records out 5242880000 bytes transferred in 35.093718 secs (149396538 bytes/sec) 5000+0 records in 5000+0 records out 5242880000 bytes transferred in 35.374131 secs (148212262 bytes/sec) 5000+0 records in 5000+0 records out 5242880000 bytes transferred in 36.654998 secs (143033154 bytes/sec) 5000+0 records in 5000+0 records out 5242880000 bytes transferred in 38.558150 secs (135973328 bytes/sec) priyanka# apply "dd if=/dev/zero of=/dev/ada%1 bs=1m count=5000 &" 2 3 4 5 [1] 73130 [1] 73133 [1] 73136 [1] 73139 priyanka# 5000+0 records in 5000+0 records out 5242880000 bytes transferred in 35.149276 secs (149160398 bytes/sec) 5000+0 records in 5000+0 records out 5242880000 bytes transferred in 35.786368 secs (146504949 bytes/sec) 5000+0 records in 5000+0 records out 5242880000 bytes transferred in 36.552717 secs (143433387 bytes/sec) 5000+0 records in 5000+0 records out 5242880000 bytes transferred in 40.162832 secs (130540595 bytes/sec) priyanka# On Sep 17, 2010, at 3:10 PM, Pawel Jakub Dawidek wrote: > On Fri, Sep 17, 2010 at 09:35:21PM +0200, Pawel Jakub Dawidek wrote: >> On Wed, Sep 15, 2010 at 03:05:46AM -0500, Chris Watson wrote: >>> I have been testing ZFS on a home box now for a few days and I >>> have a >>> question that is perplexing me. Everything I have read on ZFS says >>> in >>> almost every case mirroring is faster than raidz. So I initially >>> setup >>> a 2x2 Raid 10 striped mirror. Like so: >> [...] >> >> Could you try running something like this: >> >> # apply "dd if=/dev/ada%1 of=/dev/null bs=1m count=5000 &" 2 3 4 5 >> >> This will tell us how much of total throughput do you have. >> If you can destroy your data, you may also try this: >> >> # apply "dd if=/dev/null of=/dev/ada%1 bs=1m count=5000 &" 2 3 4 5 > > # apply "dd if=/dev/zero of=/dev/ada%1 bs=1m count=5000 &" 2 3 4 5 > > Thanks to se@ for noticing this. > >> If you disks cannot work at full speed in parallel this might explain >> what you're seeing. Mirror send to disk twice as much data as it >> receives and RAIDZ sends only 33% more data in four disk case. >> >> And no, there are neither special RAIDZ optimizations not special >> mirror >> pesimizations in FreeBSD. > > -- > Pawel Jakub Dawidek http://www.wheelsystems.com > pjd@FreeBSD.org http://www.FreeBSD.org > FreeBSD committer Am I Evil? Yes, I Am! 
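For reference, apply(1) substitutes each trailing argument for %1 in turn, so the read test above (assuming the pool members really are ada2 through ada5, as the zpool status output shows) expands to four dd processes running in parallel, one per disk:

dd if=/dev/ada2 of=/dev/null bs=1m count=5000 &
dd if=/dev/ada3 of=/dev/null bs=1m count=5000 &
dd if=/dev/ada4 of=/dev/null bs=1m count=5000 &
dd if=/dev/ada5 of=/dev/null bs=1m count=5000 &

The roughly 130-150 MB/s per disk reported above was therefore measured with all four disks active at once, which is the parallel-throughput figure the earlier question was asking about.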
From owner-freebsd-fs@FreeBSD.ORG Sat Sep 18 15:48:19 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 5360D106566C for ; Sat, 18 Sep 2010 15:48:19 +0000 (UTC) (envelope-from bfriesen@simple.dallas.tx.us) Received: from blade.simplesystems.org (blade.simplesystems.org [65.66.246.74]) by mx1.freebsd.org (Postfix) with ESMTP id EAF3F8FC20 for ; Sat, 18 Sep 2010 15:48:18 +0000 (UTC) Received: from freddy.simplesystems.org (freddy.simplesystems.org [65.66.246.65]) by blade.simplesystems.org (8.13.8+Sun/8.13.8) with ESMTP id o8IFNtln028051; Sat, 18 Sep 2010 10:23:56 -0500 (CDT) Date: Sat, 18 Sep 2010 10:23:55 -0500 (CDT) From: Bob Friesenhahn X-X-Sender: bfriesen@freddy.simplesystems.org To: Jeremy Chadwick In-Reply-To: <20100917163732.GA59537@icarus.home.lan> Message-ID: References: <4C9385B0.2080909@shatow.net> <20100917161847.GA58503@icarus.home.lan> <20100917163732.GA59537@icarus.home.lan> User-Agent: Alpine 2.01 (GSO 1266 2009-07-14) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.2.2 (blade.simplesystems.org [65.66.246.90]); Sat, 18 Sep 2010 10:23:56 -0500 (CDT) Cc: freebsd-fs@freebsd.org Subject: Re: what happens to pool if ZIL dies on ZFS v14 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 18 Sep 2010 15:48:19 -0000 On Fri, 17 Sep 2010, Jeremy Chadwick wrote: >> >> This (dead ZIL == dead pool) only applies to separate log (slog) devices. > > I was under the impression ZFS still managed to utilise the ZIL when a > pool didn't have any "log" devices associated with it (possibly some > sort of statically-allocated amount of RAM?) To clarify, the ZIL (ZFS Intent Log) is a non-volatile log of pending (uncommitted) synchronous write requests. ZFS always has one. A synchronous write does not return until the data is at least written into the ZIL. If you "disable ZIL" then you are pretending that synchronous writes were immediately written (even when they were not). This will not endanger your pool, but recently requested synchronous writes may be lost (just as recent asynchronous writes may be lost) if the system loses power, or spontaneously reboots. By default, ZFS will buffer up to 30 seconds of writes (async + sync) and in fact zfs writes are coherent so that synchronous writes are treated the same as asynchronous writes. The only difference is that when a synchronous write completes, it is cleared from the ZIL. The ZIL is used to replay buffered synchronous writes which did not complete prior to a system crash or unexpected reboot. The 30 seconds of buffering only occurs on systems with a very large amount of RAM and/or a relatively slow write rate. Otherwise, zfs will write data much more often. If a system has limited RAM, then it will also buffer less data in the ZIL since it needs to write more often. Bob -- Bob Friesenhahn bfriesen@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
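To see where the ZIL enters the picture from an application's point of view, here is a minimal userland sketch (the path /tank/db/journal is hypothetical and error handling is trimmed): the write(2) below returns as soon as the data is buffered in memory, while the fsync(2) does not return until the pending records for that file have been committed to the ZIL, on the dedicated log device if one is configured, otherwise in log blocks allocated from the main pool.

#include <fcntl.h>
#include <unistd.h>

int
main(void)
{
	const char buf[] = "transaction record\n";
	/* Hypothetical file on a ZFS dataset. */
	int fd = open("/tank/db/journal", O_WRONLY | O_CREAT | O_APPEND, 0644);

	if (fd == -1)
		return (1);
	/* Asynchronous write: returns once the data is cached; it reaches
	 * stable storage no later than the next transaction group commit. */
	if (write(fd, buf, sizeof(buf) - 1) == -1)
		return (1);
	/* Synchronous point: blocks until the ZIL holds the record. */
	if (fsync(fd) == -1)
		return (1);
	close(fd);
	return (0);
}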