From owner-freebsd-doc@FreeBSD.ORG Tue Nov 5 05:08:48 2013
Message-ID: <52787D62.5020607@allanjude.com>
Date: Tue, 05 Nov 2013 00:08:50 -0500
From: Allan Jude
To: freebsd-doc@FreeBSD.org
Subject: ZFS Handbook Update

Attached find ~320 new lines and 87 modified lines of the ZFS chapter of
the FreeBSD Handbook that I wrote on the plane to and from the FreeBSD
20th Anniversary Party.

Note: this is for, and is a patch against, the projects/zfsupdate-201307
branch.

-- 
Allan Jude

[Attachment: zfs_freebsdparty_content.patch]

Index: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml
===================================================================
--- projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml	(revision 43100)
+++ projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml	(working copy)
@@ -22,22 +22,35 @@
       Reuschling
       Written by
+
+      Warren
+      Block
+      Written by
+

   The Z File System (<acronym>ZFS</acronym>)

   The Z File System
-    (ZFS) was developed at &sun; to address many of
-    the problems with current file systems.  There were three major
-    design goals:
+    (ZFS) was originally developed at &sun; to
+    address many of the problems with then-current file systems.
+    Development has since moved to the Open-ZFS Project.  For more on
+    past and future development, see
+    .  The three major design
+    goals of ZFS are:

-      Data integrity: checksums are created when data is written
-      and checked when data is read.  If on-disk data corruption is
-      detected, the user is notified and recovery methods are
-      initiated.
+      Data integrity: All data that is stored on
+      ZFS includes a checksum of the data.  When
+      data is written the checksum is calculated and written along
+      with the data.  When that data is later read back, the
+      checksum is calculated again and if the values do not match an
+      error is returned.  ZFS will attempt to
+      automatically correct the error if there is sufficient
+      redundancy available.
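To see the data-integrity behavior described above in practice, a scrub
re-reads every block in a pool and verifies it against its stored checksum;
repaired blocks show up in the CKSUM column of zpool status. The pool name
mypool below is only an example.

    # zpool scrub mypool
    # zpool status mypool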
@@ -48,7 +61,13 @@

-      Performance:
+      Performance: ZFS features a number of
+      optional caching mechanisms to provide increased performance.
+      In addition to an advanced in-memory read cache known as the
+      ARC, there is also the
+      optional L2ARC read
+      cache and the ZIL
+      synchronous write cache.

@@ -360,13 +379,15 @@

 &prompt.root; zpool status -x

-      If all pools are healthy and everything is normal, the
-      message indicates that:
+      If all pools are Online and everything is
+      normal, the message indicates that:

 all pools are healthy

-      If there is an issue, perhaps a disk has gone offline,
-      the pool state will look similar to:
+      If there is an issue, perhaps a disk is in the Offline state, the pool
+      state will look similar to:

   pool: storage
  state: DEGRADED
@@ -474,7 +495,14 @@
 <command>zpool</command> Administration

-
+      The administration of ZFS is divided between two main
+      utilities.  The zpool utility controls
+      the operation of the pool and deals with adding, removing,
+      replacing, and managing disks, while the zfs utility
+      deals with creating, destroying, and managing datasets (both
+      filesystems and volumes).

 Creating & Destroying Storage Pools
@@ -496,7 +524,15 @@
 instead the data must be backed up and the pool recreated.

-
+      A ZFS pool that is no longer needed can be destroyed so
+      that the disks making up the pool can be reused in another
+      pool or for other purposes.  Destroying a pool involves
+      unmounting all of the datasets in that pool.  If the datasets
+      are in use, the unmount operation will fail and the pool will
+      not be destroyed.  The destruction of the pool can be forced
+      with the parameter; however, this can cause
+      undefined behavior in the applications which had open files on
+      those datasets.

@@ -504,9 +540,9 @@

 Adding disks to a zpool can be broken down
 into two separate cases: attaching a disk to an
-      existing vdev with the zpool attach
-      command, or adding vdevs to the pool with the
-      zpool add command.  Only some
+      existing vdev with zpool attach,
+      or adding vdevs to the pool with
+      zpool add.  Only some
 vdev types allow disks to be added to the vdev after
 creation.
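As a quick sketch of the two cases described above, using hypothetical pool
and device names: attaching a second disk to an existing single-disk vdev
converts it into a mirror, while zpool add stripes a whole new mirror vdev
into the pool.

    # zpool attach mypool ada1p3 ada2p3
    # zpool add mypool mirror ada3p3 ada4p3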
@@ -525,23 +561,16 @@
 can only be removed from a mirror if there is enough
 remaining redundancy.

-      Creating a ZFS Storage Pool (zpool)
-      involves making a number of decisions that are relatively
-      permanent.  Although additional vdevs can be added to a pool,
-      the layout of the pool cannot be changed once the pool has
-      been created, instead the data must be backed up and the pool
-      recreated.  Currently, devices cannot be removed from a
-      zpool.

-      Replacing a Working Devices
+      Replacing a Functioning Device

 There are a number of situations in which it may be
 desirable to replace a disk with a different disk.  This
 process requires connecting the new disk at the same time as
-      the disk to be replaced.  The
-      zpool replace command will copy all of the
+      the disk to be replaced.
+      zpool replace will copy all of the
 data from the old disk to the new one.  After this operation
 completes, the old disk is disconnected from the vdev.  If the
 new disk is larger than the old disk, it may be possible to grow the
 zpool, using the new space.  See
@@ -551,12 +580,27 @@
 Dealing with Failed Devices

-      When a disk fails and the physical device is replaced, ZFS
-      must be told to begin the
+      When a disk in a ZFS pool fails, the vdev that the disk
+      belongs to will enter the Degraded state.  In this
+      state, all of the data stored on the vdev is still available,
+      but performance may be impacted because missing data will need
+      to be calculated from the available redundancy.  To restore
+      the vdev to a fully functional state the failed physical
+      device will need to be replaced, and ZFS must be
+      instructed to begin the
 resilver operation, where data
 that was on the failed device will be recalculated
-      from the available redundancy and written to the new
-      device.
+      from the available redundancy and written to the replacement
+      device.  Once this process has completed the vdev will return
+      to Online status.  If
+      the vdev does not have any redundancy, or if multiple devices
+      have failed and there is insufficient redundancy to
+      compensate, the pool will enter the Faulted state.  If a
+      sufficient number of devices cannot be reconnected to the pool
+      then the pool will be inoperative, and data will need to be
+      restored from backups.

@@ -565,12 +609,14 @@
 The usable size of a redundant ZFS
 pool is limited by the size of the smallest device in the
 vdev.  If each device in the vdev is replaced sequentially,
 after the smallest device
-      has completed the replace or resilver operation, the pool
+      has completed the replace or resilver operation, the pool can
 grow based on the size of the new smallest device.
-      This expansion can be triggered with the
-      zpool online command with the -e flag on
+      This expansion can be triggered by using zpool
+      online with -e on each device.  After the expansion of each device,
-      the additional space will be available in the pool.
+      the additional space will become available in the pool.

@@ -585,7 +631,8 @@
 &man.zpool.8;).  When a dataset has open files, can be
 used to force the export of a pool.  causes the datasets to be forcibly
-      unmounted.  This can have unexpected side effects.
+      unmounted, which can cause undefined behavior in the
+      applications which had open files on those datasets.

 Importing a pool automatically mounts the datasets.  This may
 not be the desired behavior, and can be prevented with -N.
@@ -604,16 +651,17 @@

 After upgrading &os;, or if a pool has been imported from
 a system using an older version of ZFS, the pool
-      must be manually upgraded to the latest version of ZFS.  This
-      process is unreversible.  Consider whether the pool may ever need
-      to be imported on an older system before upgrading.  An upgrade
+      can be manually upgraded to the latest version of ZFS.
+      Consider whether the pool may ever need
+      to be imported on an older system before upgrading.
+      The upgrade process is irreversible and cannot be undone.

 The newer features of ZFS will not be available until
-      the zpool upgrade command has completed.
-      will the newer features of ZFS be available.
+      zpool upgrade has completed.  can
 be used to see what new
-      features will be provided by upgrading.
+      features will be provided by upgrading, as well as which features are already
+      supported by the existing version.
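A typical upgrade session might look like the following; the first command
only lists the ZFS versions and features supported by the running system,
while the second performs the irreversible upgrade of a single (hypothetical)
pool.

    # zpool upgrade -v
    # zpool upgrade mypool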
@@ -627,11 +675,13 @@

 ZFS has a built-in monitoring system that can display
 statistics about I/O happening on the pool in real-time.
-      Additionally, it shows the free and used space on the pool and
-      how much I/O bandwidth is currently utilized for read and
-      write operations.  By default, all pools in the system will be
-      monitored and displayed.  A pool name can be provided to monitor
-      just that single pool.  A basic example:
+      It shows the amount of free and used space on the pool, how
+      many read and write operations are being performed per second,
+      and how much I/O bandwidth is currently being utilized for
+      read and write operations.  By default, all pools in the system will be
+      monitored and displayed.  A pool name can be provided
+      as part of the command to monitor just that specific
+      pool.  A basic example:

 &prompt.root; zpool iostat
               capacity     operations    bandwidth
@@ -639,8 +689,9 @@
 ----------  -----  -----  -----  -----  -----  -----
 data         288G  1.53T      2     11  11.3K  57.1K

-      To continuously monitor I/O activity on the pool, specify
-      a number as the last parameter, indicating the number of seconds
+      To continuously monitor I/O activity on the pool,
+      a number can be specified as the last parameter, indicating
+      the interval in seconds
 to wait between updates.  ZFS will print the next statistic
 line after each interval.  Press

 Even more detailed pool I/O statistics can be
-      displayed with parameter.
-      Each storage device in the pool will be shown with a
-      separate statistic line.  This is helpful to
-      determine reads and writes on devices that slow down I/O on
-      the whole pool.  The following example shows a
-      mirrored pool consisting of two devices.  For each of these,
-      a separate line is shown with the current I/O
-      activity.
+      displayed with .  In this case
+      each storage device in the pool will be shown with a
+      corresponding statistics line.  This is helpful to
+      determine how many read and write operations are being
+      performed on each device, and can help determine if any
+      specific device is slowing down I/O on the entire pool.  The
+      following example shows a mirrored pool consisting of two
+      devices.  For each of these, a separate line is shown with
+      the current I/O activity.

 &prompt.root; zpool iostat -v
               capacity     operations    bandwidth
@@ -674,25 +726,86 @@
 Splitting a Storage Pool

-
+      A ZFS pool consisting of one or more mirror vdevs can be
+      split into a second pool.  The last member of each mirror
+      (unless otherwise specified) is detached and used to create a
+      new pool containing the same data.  It is recommended that
+      the operation first be attempted with the
+      parameter.  This will print out the details of the proposed
+      operation without actually performing it.  This helps
+      ensure the operation will happen as expected.
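As a sketch with hypothetical pool names: if the installed version supports
a dry run, zpool split -n prints the layout of the proposed new pool without
detaching anything, and the same command without -n performs the actual
split.

    # zpool split -n mypool newpool
    # zpool split mypool newpool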
 <command>zfs</command> Administration

-
+      The zfs utility is responsible for
+      creating, destroying, and managing all ZFS
+      datasets that exist within a pool.  The pool is managed using
+      the zpool
+      command.

 Creating & Destroying Datasets

-
+      Unlike with traditional disks and volume managers, space
+      in ZFS is not preallocated, allowing
+      additional file systems to be created at any time.  With
+      traditional file systems, once all of the space was
+      partitioned and assigned to a file system, there was no way to
+      add an additional file system without adding a new disk.
+      ZFS also allows you to set a number of
+      properties on each dataset.  These properties
+      include features like compression, deduplication, caching and
+      quotas, as well as other useful properties like readonly,
+      case sensitivity, network file sharing and mount point.  Each
+      separate dataset can be administered, delegated, replicated,
+      snapshotted, jailed, and destroyed as a unit.
+      There are many advantages to creating a separate dataset for
+      each different type or set of files.  The only drawback to
+      having an extremely large number of datasets is that some
+      commands like zfs list will be slower,
+      and the mounting of an extremely large number of datasets
+      (100s or 1000s) can make the &os; boot process take
+      longer.

+      Destroying a dataset is much quicker than deleting all
+      of the files that reside on the dataset, as it does not
+      involve scanning all of the files and updating all of the
+      corresponding metadata.  In modern versions of
+      ZFS the zfs destroy
+      operation is asynchronous, and the free space may take several
+      minutes to appear in the pool.  The freeing
+      property, accessible with zpool get freeing
+      poolname, indicates how much
+      space remains to be reclaimed as blocks are freed in the background.
+      If there are child datasets, such as snapshots or other
+      datasets, then the parent cannot be destroyed.  To destroy a
+      dataset and all of its children, use the
+      parameter to recursively destroy the dataset and all of its
+      children.  The parameters can be used
+      to not actually perform the destruction, but instead list
+      which datasets and snapshots would be destroyed and, in the
+      case of snapshots, how much space would be reclaimed by
+      proceeding with the destruction.

 Creating & Destroying Volumes

-
+      A volume is a special type of ZFS
+      dataset.  Rather than being mounted as a file system, it is
+      exposed as a block device under
+      /dev/zvol/poolname/dataset.
+      This allows the volume to be used for other file systems, to
+      back the disks of a virtual machine, or to be exported using
+      protocols like iSCSI or HAST.

 A volume can be formatted with any filesystem on top of it.
 This will appear to the user as if they are working with
@@ -714,18 +827,46 @@
 /dev/zvol/tank/fat32  249M   24k  249M     0%    /mnt
 &prompt.root; mount | grep fat32
 /dev/zvol/tank/fat32 on /mnt (msdosfs, local)
+
+      Destroying a volume is much the same as destroying a
+      regular filesystem dataset.  The operation is nearly
+      instantaneous, but it may take several minutes for the free
+      space to be reclaimed in the background.
+

 Renaming a Dataset

-
+      The name of a dataset can be changed using zfs
+      rename.  The rename command can also be used to
+      change the parent of a dataset.  Renaming a dataset to be
+      under a different parent dataset will change the value of
+      those properties that are inherited from the parent dataset.
+      When a dataset is renamed, it is unmounted and then remounted
+      in the new location (inherited from the new parent dataset).  This
+      behavior can be prevented using the
+      parameter.  Due to the nature of snapshots, they cannot be
+      renamed outside of the parent dataset.  To rename a recursive
+      snapshot, specify the parameter, and all
+      snapshots with the same name under the parent dataset will be
+      renamed.
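For example, with hypothetical dataset names, the first command moves a
dataset under a different parent, and the second renames a snapshot on a
dataset and all of its children.

    # zfs rename mypool/usr/home/bob mypool/home/bob
    # zfs rename -r mypool/home@2013-11-04 mypool/home@before-upgrade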
 Setting Dataset Properties

-
+      Each ZFS dataset has a number of
+      properties to control its behavior.  Most properties are
+      automatically inherited from the parent dataset, but can be
+      overridden locally.  Set a property on a dataset with
+      zfs set
+      property=value
+      dataset.  Most properties
+      have a limited set of valid values; zfs get
+      will display each possible property and its valid values.
+      Most properties can be reverted to their inherited values
+      using zfs inherit.

 It is possible to set user-defined properties in ZFS.
 They become part of the dataset configuration and can be used
@@ -743,13 +884,55 @@
 Managing Snapshots

-
+      Snapshots are one
+      of the most powerful features of ZFS.  A
+      snapshot provides a point-in-time copy of the dataset that the
+      parent dataset can be rolled back to if required.  Create a
+      snapshot with zfs snapshot
+      dataset@snapshotname.
+      Specifying the parameter will recursively
+      create a snapshot with the same name on all child
+      datasets.

+      By default, snapshots are mounted in a hidden directory
+      under the parent dataset: .zfs/snapshot/snapshotname.
+      Individual files can easily be restored to a previous state by
+      copying them from the snapshot back to the parent dataset.  It
+      is also possible to revert the entire dataset back to the
+      point-in-time of the snapshot using zfs
+      rollback.

+      Snapshots consume space based on how much the parent file
+      system has changed since the time of the snapshot.  The
+      written property of a snapshot tracks how
+      much space is being used by a snapshot.

+      To destroy a snapshot and recover the space consumed by
+      the overwritten or deleted files, run zfs destroy
+      dataset@snapshot.
+      The parameter will recursively remove all
+      snapshots with the same name under the parent dataset.  Adding
+      the parameters to the destroy command
+      will display a list of the snapshots that would be deleted and
+      an estimate of how much space would be reclaimed by proceeding
+      with the destroy operation.

 Managing Clones

-
+      A clone is a copy of a snapshot that is treated more like
+      a regular dataset.  Unlike a snapshot, a clone is not read-only,
+      is mounted, and can have its own properties.  Once a
+      clone has been created, the snapshot it was created from
+      cannot be destroyed.  The child/parent relationship between
+      the clone and the snapshot can be reversed using zfs
+      promote.  After a clone has been promoted, the
+      snapshot becomes a child of the clone, rather than of the
+      original parent dataset.  This will change how the space is
+      accounted, but not actually change the amount of space
+      consumed.

@@ -761,6 +944,18 @@
 Dataset, User and Group Quotas

+      Dataset
+      quotas can be used to restrict the amount of space
+      that can be consumed by a particular dataset.  Reference Quotas work in
+      very much the same way, except they only count the space used
+      by the dataset itself, excluding snapshots and child
+      datasets.  Similarly, user and group quotas can be used
+      to prevent users or groups from consuming all of the available
+      space in the pool or dataset.

 To enforce a dataset quota of 10 GB for
 storage/home/bob, use the
 following:
@@ -861,7 +1056,13 @@
 Reservations

-
+      Reservations
+      guarantee a minimum amount of space will always be available
+      to a dataset.  The reserved space will not
+      be available to any other dataset.  This feature can be
+      especially useful to ensure that users cannot consume all of
+      the free space, leaving none for an important dataset or log
+      files.

 The general format of the
 reservation property is
@@ -878,7 +1079,8 @@

 The same principle can be applied to the
 refreservation property for setting a
-      refreservation, with the general format
+      Reference
+      Reservation, with the general format
 refreservation=size.
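Continuing the storage/home/bob example used in this chapter, a reservation
and a refreservation are set like any other property and can be inspected
with zfs get.

    # zfs set reservation=10G storage/home/bob
    # zfs set refreservation=10G storage/home/bob
    # zfs get reservation,refreservation storage/home/bob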
 To check if any reservations or refreservations exist on
@@ -898,7 +1100,18 @@
 Deduplication

-
+      When enabled, Deduplication uses
+      the checksum of each block to detect duplicate blocks.  When a
+      new block is about to be written and it is determined to be a
+      duplicate of an existing block, rather than writing the same
+      data again, ZFS just references the
+      existing data on disk an additional time.  This can offer
+      tremendous space savings if your data contains many discrete
+      copies of the same information.  Deduplication requires an
+      extremely large amount of memory, and most of the space
+      savings can be had without the extra cost by enabling
+      compression instead.

 To activate deduplication, you simply need to set the
 following property on the target pool.
@@ -986,6 +1199,22 @@
 thumb, compression should be used first before deduplication
 due to the lower memory requirements.
+
+
+    ZFS and Jails
+
+      zfs jail and the corresponding
+      jailed property are used to delegate a
+      ZFS dataset to a Jail.  zfs jail
+      jailid attaches a dataset
+      to the specified jail, and zfs unjail
+      detaches it.  In order for the dataset to be administered from
+      within a jail, the jailed property must be
+      set.  Once a dataset is jailed, it can no longer be mounted on
+      the host, because the jail administrator may have set
+      unacceptable mount points.
+

@@ -1170,6 +1399,12 @@
 Best Practices Guide
+
+
+    History of <acronym>ZFS</acronym>
+
+
+

@@ -1344,28 +1579,23 @@
 Log

-            ZFS Log Devices, also known as ZFS Intent Log
-            (ZIL) move the intent log from
-            the regular pool devices to a dedicated device.
-            The ZIL accelerates synchronous
-            transactions by using storage devices (such as
-            SSDs) that are faster than
-            those used for the main pool.  When data is being
-            written and the application requests a guarantee
-            that the data has been safely stored, the data is
-            written to the faster ZIL
-            storage, then later flushed out to the regular
-            disks, greatly reducing the latency of synchronous
-            writes.  Log devices can be mirrored, but
-            RAID-Z is not supported.  If
-            multiple log devices are used, writes will be load
-            balanced across them.
+            (ZIL) move the intent log from
+            the regular pool devices to a dedicated device,
+            typically an SSD.
+            Having a dedicated log
+            device can significantly improve the performance
+            of applications with a high volume of synchronous
+            writes, especially databases.  Log devices can be
+            mirrored, but RAID-Z is not
+            supported.  If multiple log devices are used,
+            writes will be load balanced across them.

 Cache

-            Adding a cache
 vdev to a zpool will add the storage of the cache to
-            the L2ARC.  Cache devices
+            the L2ARC.  Cache devices
 cannot be mirrored.  Since a cache device only
 stores additional copies of existing data, there is
 no risk of data loss.
@@ -1446,6 +1676,26 @@

+        ZIL
+
+          The ZIL accelerates synchronous
+          transactions by using storage devices (such as
+          SSDs) that are faster than those used
+          for the main storage pool.  When data is being written
+          and the application requests a synchronous write (a
+          guarantee that the data has been safely stored to disk
+          rather than only cached to be written later), the data
+          is written to the faster ZIL storage,
+          then later flushed out to the regular disks, greatly
+          reducing the latency and increasing performance.
+          Only synchronous workloads, such as databases,
+          will benefit from a ZIL.  Regular
+          asynchronous writes such as copying files will not use
+          the ZIL at all.
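For illustration (the device name is hypothetical), a dedicated log device
is added to an existing pool with zpool add; using log mirror with two
devices would create a mirrored log instead.

    # zpool add mypool log ada4p1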
+
+
+        Copy-On-Write

 Unlike a traditional file system, when data is
@@ -1481,12 +1731,24 @@
 properties on a child to override the defaults
 inherited from the parents and grandparents.
 ZFS also allows administration of
-          datasets and their children to be delegated.
+          datasets and their children to be delegated.

-        Volume
+        Filesystem

+          A ZFS dataset is most often used
+          as a file system.  Like most other file systems, a
+          ZFS file system is mounted somewhere
+          in the system's directory hierarchy and contains files
+          and directories of its own with permissions, flags, and
+          other metadata.
+
+
+
+        Volume
+
 In addition to regular file system datasets,
 ZFS can also create volumes, which are
 block devices.  Volumes have many of the same
@@ -1802,6 +2064,63 @@
 remaining drives) to the new drive is called
 resilvering.
+
+
+        Online
+
+          A ZFS pool or vdev that is in the
+          Online state has all of its member
+          devices connected and fully operational.  Individual
+          devices in the Online state are
+          functioning normally.
+
+
+        Offline
+
+          Individual devices can be put in an
+          Offline state by the administrator if
+          there is sufficient redundancy to avoid putting the pool
+          or vdev into a Faulted state.  An
+          administrator may choose to offline a disk in
+          preparation for replacing it, or to make it easier to
+          identify.
+
+
+        Degraded
+
+          A ZFS pool or vdev that is in the
+          Degraded state has one or more disks
+          that have been disconnected or have failed.  The pool is
+          still usable; however, if additional devices fail, the pool
+          could become unrecoverable.  Reconnecting the missing
+          device(s) or replacing the failed disks will return the
+          pool to an Online state after
+          the reconnected or new device has completed the Resilver
+          process.
+
+
+        Faulted
+
+          A ZFS pool or vdev that is in the
+          Faulted state is no longer
+          operational and the data residing on it can no longer
+          be accessed.  A pool or vdev enters the
+          Faulted state when the number of
+          missing or failed devices exceeds the level of
+          redundancy in the vdev.  If missing devices can be
+          reconnected, the pool will return to an Online state.  If
+          there is insufficient redundancy to compensate for the
+          number of failed disks, then the contents of the pool
+          are lost and will need to be restored from
+          backups.
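Tying the states above back to the zpool commands covered earlier (pool and
device names hypothetical): a disk can be taken offline before it is
physically removed, replaced with a new device, and the pool returns to
Online once the resilver completes.

    # zpool offline storage ada1p3
    # zpool replace storage ada1p3 ada4p3
    # zpool status storage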