Date:      Tue, 20 Oct 2015 10:51:09 -0500
From:      Karl Denninger <karl@denninger.net>
To:        freebsd-fs@freebsd.org
Subject:   Re: Panic in ZFS during zfs recv (while snapshots being destroyed)
Message-ID:  <562662ED.1070302@denninger.net>
In-Reply-To: <561FB7D0.4080107@denninger.net>
References:  <55BB443E.8040801@denninger.net> <55CF7926.1030901@denninger.net> <55DF7191.2080409@denninger.net> <sig.0681f4fd27.ADD991B6-BCF2-4B11-A5D6-EF1DB585AA33@chittenden.org> <55DF76AA.3040103@denninger.net> <561FB7D0.4080107@denninger.net>

More data on this crash from this morning; I caught it in-process this
time and know exactly where it was in the backup script when it detonated.

Here's the section of the script that was running when it blew up:

for i in $ZFS
do
        echo Processing $i

        FILESYS=`echo $i|cut -d/ -f2`

        zfs list $i@zfs-base >/dev/null 2>&1
        result=$?
        if [ $result -eq 1 ];
        then
                echo "Make and send zfs-base snapshot"
                zfs snapshot -r $i@zfs-base
                zfs send -R $i@zfs-base | zfs receive -Fuvd $BACKUP
        else
                base_only=`zfs get -H com.backup:base-only $i|grep true`
                result=$?
                if [ $result -eq 1 ];
                then
                        echo "Bring incremental backup up to date"
                        old_present=`zfs list $i@zfs-old >/dev/null 2>&1`
                        old=$?
                        if [ $old -eq 0 ];
                        then
                                echo "Previous incremental sync was
interrupted; resume"
                        else
                                zfs rename -r $i@zfs-base $i@zfs-old
                                zfs snapshot -r $i@zfs-base
                        fi
                        zfs send -RI $i@zfs-old $i@zfs-base | zfs receive -Fudv $BACKUP
                        result=$?
                        if [ $result -eq 0 ];
                        then
                                zfs destroy -r $i@zfs-old
                                zfs destroy -r $BACKUP/$FILESYS@zfs-old
                        else
                                echo "STOP - - ERROR RUNNING COPY on $i!!=
!!"
                                exit 1
                        fi
                else
                        echo "Base-only backup configured for $i"
                fi
        fi
        echo
done
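
(One obvious hardening of that script -- just a sketch, not something
I've deployed -- would be to re-check after the recursive destroy that
the snapshot name is actually gone, since a lingering name is precisely
the state I'm seeing:

        # Hypothetical check: after destroying, make sure the name no
        # longer answers to zfs list before trusting the next run's logic.
        zfs destroy -r $i@zfs-old
        if zfs list $i@zfs-old >/dev/null 2>&1
        then
                echo "WARNING: $i@zfs-old still listed after destroy!"
                exit 1
        fi

Though if the name only lingers transiently, that check could race as
well.)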

And the output on the console when it happened:

Begin local ZFS backup by SEND
Run backups of default [zsr/R/10.2-STABLE zsr/ssd-txlog zs/archive
zs/colo-archive zs/disk zs/pgsql zs/pics dbms/ticker]
Tue Oct 20 10:14:09 CDT 2015

Import backup pool
Imported; ready to proceed
Processing zsr/R/10.2-STABLE
Bring incremental backup up to date
_*Previous incremental sync was interrupted; resume*_
attempting destroy backup/R/10.2-STABLE@zfs-auto-snap_daily-2015-10-13-00h07
success
attempting destroy
backup/R/10.2-STABLE@zfs-auto-snap_hourly-2015-10-18-12h04
success
*local fs backup/R/10.2-STABLE does not have fromsnap (zfs-old in stream);*
*must have been deleted locally; ignoring*
receiving incremental stream of
zsr/R/10.2-STABLE@zfs-auto-snap_daily-2015-10-18-00h07 into
backup/R/10.2-STABLE@zfs-auto-snap_daily-2015-10-18-00h07
snap backup/R/10.2-STABLE@zfs-auto-snap_daily-2015-10-18-00h07 already
exists; ignoring
received 0B stream in 1 seconds (0B/sec)
receiving incremental stream of
zsr/R/10.2-STABLE@zfs-auto-snap_weekly-2015-10-18-00h14 into
backup/R/10.2-STABLE@zfs-auto-snap_weekly-2015-10-18-00h14
snap backup/R/10.2-STABLE@zfs-auto-snap_weekly-2015-10-18-00h14 already
exists; ignoring
received 0B stream in 1 seconds (0B/sec)
receiving incremental stream of zsr/R/10.2-STABLE@zfs-base into
backup/R/10.2-STABLE@zfs-base
snap backup/R/10.2-STABLE@zfs-base already exists; ignoring

That bolded pair of lines should _*not*_ be there.  They mean that the
snapshot "zfs-old" is still on the source volume, but it shouldn't be,
because after each copy completes we destroy it on both the source and
the destination.  Further, when the send was attempted, even though the
snapshot name (zfs-old) was found by zfs list, /*the data allegedly
associated with the phantom snapshot that shouldn't exist was not
there*/ (second bolded line).

                        zfs send -RI $i@zfs-old $i@zfs-base | zfs receive -Fudv $BACKUP
                        result=$?
                        if [ $result -eq 0 ];
                        then
                                *zfs destroy -r $i@zfs-old*
                                *zfs destroy -r $BACKUP/$FILESYS@zfs-old*
                        else
                                echo "STOP - - ERROR RUNNING COPY on $i!!!!"
                                exit 1
                        fi

I don't have the trace from the last run (I didn't save it) *but there
were no errors returned by it* as it was run by hand (from the console)
while I was watching it.
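
(For anyone who wants to probe this state by hand the next time it
appears, the obvious read-only checks -- a sketch, using the snapshot
names from the script above -- would be:

        # Is the phantom name still visible?
        zfs list -t snapshot -r zsr/R/10.2-STABLE | grep zfs-old
        # If so, what does it claim about its data?
        zfs get creation,used,referenced zsr/R/10.2-STABLE@zfs-old

I'll capture that output if/when it recurs.)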

This STRONGLY implies that the zfs destroy /allegedly/ succeeded (it ran
without an error and actually removed the snapshot's data) but left the
snapshot _*name*_ on the volume, since the script was able to find it,
and then when the backup script referenced that name in the incremental
send, the data set was invalid, producing the resulting panic.

The bad news is that the pool shows no faults, and a scrub which took
place a few days ago says the pool is fine.  If I re-run the backup I
suspect it will properly complete (it always has in the past, as a
resume from the interrupted one) but clearly, whatever is going on here
looks like some sort of race in the destroy commands (which _*are*_
being run recursively) that leaves the snapshot name on the volume but
releases the data stored in it -- hence the panic when that name is
then referenced.
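
(When I get time on the test box, a crude way to try to provoke such a
race -- purely a sketch, with a throwaway pool and hypothetical names --
might be to hammer recursive snapshot create/destroy while a send/recv
of the same tree is in flight:

        # Hypothetical stress loop against a scratch pool "test"
        zfs snapshot -r test/fs@zfs-base
        while true
        do
                zfs snapshot -r test/fs@race
                zfs destroy -r test/fs@race
        done &
        zfs send -R test/fs@zfs-base | zfs receive -Fuvd testback
        kill $!    # stop the stress loop once the send/recv finishes

No guarantee that reproduces it, but it's the shape of the race I
suspect.)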

I have NOT run a scrub on the pool in an attempt to preserve whatever
evidence may be there; if there is something that I can look at with zdb
or similar I'll leave this alone for a bit -- the machine came back up
and is running fine.

This is a production system but I can take it down off-hours for a short
time if there is a need to run something in single-user mode using zdb
to try to figure out what's going on.  There have been no known hardware
issues with it and all the other pools (including the ones on the same
host adapter) are and have been running without incident.
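
As a starting point -- a sketch only, and read-only as far as I know --
I was thinking of something along these lines against the affected
dataset, though I'm open to better incantations:

        # Summarize what zdb sees for the source filesystem's datasets
        zdb -d zsr/R/10.2-STABLE
        # Dump object detail for the suspect snapshot, if the name resolves
        zdb -dddd zsr/R/10.2-STABLE@zfs-old

I don't know whether zdb will show anything useful for a snapshot whose
name survives but whose data is gone, which is why I'm asking first.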

Ideas?


Fatal trap 12: page fault while in kernel mode
cpuid = 9; apic id = 21
fault virtual address   = 0x378
fault code              = supervisor read data, page not present
instruction pointer     = 0x20:0xffffffff80940ae0
stack pointer           = 0x28:0xfffffe0668018680
frame pointer           = 0x28:0xfffffe0668018700
code segment            = base 0x0, limit 0xfffff, type 0x1b
                        = DPL 0, pres 1, long 1, def32 0, gran 1
processor eflags        = interrupt enabled, resume, IOPL = 0
current process         = 70921 (zfs)
trap number             = 12
panic: page fault
cpuid = 9
KDB: stack backtrace:
db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame
0xfffffe0668018160
kdb_backtrace() at kdb_backtrace+0x39/frame 0xfffffe0668018210
vpanic() at vpanic+0x126/frame 0xfffffe0668018250
panic() at panic+0x43/frame 0xfffffe06680182b0
trap_fatal() at trap_fatal+0x36b/frame 0xfffffe0668018310
trap_pfault() at trap_pfault+0x2ed/frame 0xfffffe06680183b0
trap() at trap+0x47a/frame 0xfffffe06680185c0
calltrap() at calltrap+0x8/frame 0xfffffe06680185c0
--- trap 0xc, rip = 0xffffffff80940ae0, rsp = 0xfffffe0668018680, rbp =
0xfffffe0668018700 ---
*__mtx_lock_sleep() at __mtx_lock_sleep+0x1b0/frame 0xfffffe0668018700*
*dounmount() at dounmount+0x58/frame 0xfffffe0668018780*
*zfs_unmount_snap() at zfs_unmount_snap+0x114/frame 0xfffffe06680187a0*
*dsl_dataset_user_release_impl() at
dsl_dataset_user_release_impl+0x103/frame 0xfffffe0668018920*
*dsl_dataset_user_release_onexit() at
dsl_dataset_user_release_onexit+0x5c/frame 0xfffffe0668018940*
*zfs_onexit_destroy() at zfs_onexit_destroy+0x56/frame 0xfffffe0668018970*
*zfsdev_close() at zfsdev_close+0x52/frame 0xfffffe0668018990*
devfs_fpdrop() at devfs_fpdrop+0xa9/frame 0xfffffe06680189b0
devfs_close_f() at devfs_close_f+0x45/frame 0xfffffe06680189e0
_fdrop() at _fdrop+0x29/frame 0xfffffe0668018a00
closef() at closef+0x21e/frame 0xfffffe0668018a90
closefp() at closefp+0x98/frame 0xfffffe0668018ad0
amd64_syscall() at amd64_syscall+0x35d/frame 0xfffffe0668018bf0
Xfast_syscall() at Xfast_syscall+0xfb/frame 0xfffffe0668018bf0
--- syscall (6, FreeBSD ELF64, sys_close), rip = 0x801a01f5a, rsp =
0x7fffffffd1e8, rbp = 0x7fffffffd200 ---
Uptime: 5d0h58m0s



On 10/15/2015 09:27, Karl Denninger wrote:
> Got another one of these this morning, after a long absence...
>
> Same symptom -- happened during a backup (send/receive) and it's in
> the same place -- when the snapshot is unmounted.  I have a clean dump
> and this is against a quite-recent checkout, so the
> most-currently-rolled forward ZFS changes are in this kernel.
>
>
> Fatal trap 12: page fault while in kernel mode
> cpuid = 14; apic id = 34
> fault virtual address   = 0x378
> fault code              = supervisor read data, page not present
> instruction pointer     = 0x20:0xffffffff80940ae0
> stack pointer           = 0x28:0xfffffe066796c680
> frame pointer           = 0x28:0xfffffe066796c700
> code segment            = base 0x0, limit 0xfffff, type 0x1b
>                         = DPL 0, pres 1, long 1, def32 0, gran 1
> processor eflags        = interrupt enabled, resume, IOPL = 0
> current process         = 81187 (zfs)
> trap number             = 12
> panic: page fault
> cpuid = 14
> KDB: stack backtrace:
> db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame
> 0xfffffe066796c160
> kdb_backtrace() at kdb_backtrace+0x39/frame 0xfffffe066796c210
> vpanic() at vpanic+0x126/frame 0xfffffe066796c250
> panic() at panic+0x43/frame 0xfffffe066796c2b0
> trap_fatal() at trap_fatal+0x36b/frame 0xfffffe066796c310
> trap_pfault() at trap_pfault+0x2ed/frame 0xfffffe066796c3b0
> trap() at trap+0x47a/frame 0xfffffe066796c5c0
> calltrap() at calltrap+0x8/frame 0xfffffe066796c5c0
> --- trap 0xc, rip = 0xffffffff80940ae0, rsp = 0xfffffe066796c680, rbp
> = 0xfffffe066796c700 ---
> __mtx_lock_sleep() at __mtx_lock_sleep+0x1b0/frame 0xfffffe066796c700
> dounmount() at dounmount+0x58/frame 0xfffffe066796c780
> zfs_unmount_snap() at zfs_unmount_snap+0x114/frame 0xfffffe066796c7a0
> dsl_dataset_user_release_impl() at
> dsl_dataset_user_release_impl+0x103/frame 0xfffffe066796c920
> dsl_dataset_user_release_onexit() at
> dsl_dataset_user_release_onexit+0x5c/frame 0xfffffe066796c940
> zfs_onexit_destroy() at zfs_onexit_destroy+0x56/frame 0xfffffe066796c970
> zfsdev_close() at zfsdev_close+0x52/frame 0xfffffe066796c990
> devfs_fpdrop() at devfs_fpdrop+0xa9/frame 0xfffffe066796c9b0
> devfs_close_f() at devfs_close_f+0x45/frame 0xfffffe066796c9e0
> _fdrop() at _fdrop+0x29/frame 0xfffffe066796ca00
> closef() at closef+0x21e/frame 0xfffffe066796ca90
> closefp() at closefp+0x98/frame 0xfffffe066796cad0
> amd64_syscall() at amd64_syscall+0x35d/frame 0xfffffe066796cbf0
> Xfast_syscall() at Xfast_syscall+0xfb/frame 0xfffffe066796cbf0
> --- syscall (6, FreeBSD ELF64, sys_close), rip = 0x801a01f5a, rsp =
> 0x7fffffffd728, rbp = 0x7fffffffd740 ---
> Uptime: 3d15h37m26s
>
>
> On 8/27/2015 15:44, Karl Denninger wrote:
>> No, but that does sound like it might be involved.....
>>
>> And yeah, this did start when I moved the root pool to a mirrored pair
>> of Intel 530s off a pair of spinning-rust WD RE4s.... (The 530s are darn
>> nice performance-wise, reasonably inexpensive and thus very suitable for
>> a root filesystem drive and they also pass the "pull the power cord"
>> test, incidentally.)
>>
>> You may be onto something -- I'll try shutting it off, but due to the
>> fact that I can't make this happen and it's an "every week or two" panic,
>> but ALWAYS when the zfs send | zfs recv is running AND it's always on
>> the same filesystem, it will be a fair while before I know if it's fixed
>> (like over a month, given the usual pattern here, as that would be 4
>> "average" periods without a panic).....
>>
>> I also wonder if I could tune this out with some of the other TRIM
>> parameters instead of losing it entirely.
>>
>> vfs.zfs.trim.max_interval: 1
>> vfs.zfs.trim.timeout: 30
>> vfs.zfs.trim.txg_delay: 32
>> vfs.zfs.trim.enabled: 1
>> vfs.zfs.vdev.trim_max_pending: 10000
>> vfs.zfs.vdev.trim_max_active: 64
>> vfs.zfs.vdev.trim_min_active: 1
>>
>> That it's panic'ing on a mtx_lock_sleep might point this way.... the
>> trace shows it coming from a zfs_onexit_destroy, which ends up calling
>> zfs_unmount_snap() and then it blows in dounmount() while executing
>> mtx_lock_sleep().
>>
>> I do wonder if I'm begging for new and innovative performance issues if
>> I run with TRIM off for an extended period of time, however..... :-)
>>
>>
>> On 8/27/2015 15:30, Sean Chittenden wrote:
>>> Have you tried disabling TRIM?  We recently ran into an issue where a
>>> `zfs delete` on a large dataset caused the host to panic because TRIM
>>> was tripping over the ZFS deadman timer.  Disabling TRIM worked as a
>>> valid workaround for us.  You mentioned a recent move to SSDs, so this
>>> can happen, esp. after the drive has experienced a little bit of actual
>>> work.  -sc
>>>
>>>
>>> --
>>> Sean Chittenden
>>> sean@chittenden.org
>>>
>>>
>>>> On Aug 27, 2015, at 13:22, Karl Denninger <karl@denninger.net> wrote:
>>>>
>>>> On 8/15/2015 12:38, Karl Denninger wrote:
>>>>> Update:
>>>>>
>>>>> This /appears /to be related to attempting to send or receive a
>>>>> /cloned /snapshot.
>>>>>
>>>>> I use /beadm /to manage boot environments and the crashes have all
>>>>> come while send/recv-ing the root pool, which is the one where these
>>>>> clones get created.  It is /not /consistent within a given snapshot
>>>>> when it crashes and a second attempt (which does a "recovery"
>>>>> send/receive) succeeds every time -- I've yet to have it panic twice
>>>>> sequentially.
>>>>>
>>>>> I surmise that the problem comes about when a file in the cloned
>>>>> snapshot is modified, but this is a guess at this point.
>>>>>
>>>>> I'm going to try to force replication of the problem on my test system.
>>>>>
>>>>> On 7/31/2015 04:47, Karl Denninger wrote:
>>>>>> I have an automated script that runs zfs send/recv copies to bring a
>>>>>> backup data set into congruence with the running copies nightly.  The
>>>>>> source has automated snapshots running on a fairly frequent basis
>>>>>> through zfs-auto-snapshot.
>>>>>>
>>>>>> Recently I have started having a panic show up about once a week during
>>>>>> the backup run, but it's inconsistent.  It is in the same place, but I
>>>>>> cannot force it to repeat.
>>>>>>
>>>>>> The trap itself is a page fault in kernel mode in the zfs code at
>>>>>> zfs_unmount_snap(); here's the traceback from the kvm (sorry for the
>>>>>> image link but I don't have a better option right now.)
>>>>>>
>>>>>> I'll try to get a dump, this is a production machine with encrypted swap
>>>>>> so it's not normally turned on.
>>>>>>
>>>>>> Note that the pool that appears to be involved (the backup pool) has
>>>>>> passed a scrub and thus I would assume the on-disk structure is ok.....
>>>>>> but that might be an unfair assumption.  It is always occurring in the
>>>>>> same dataset although there are a half-dozen that are sync'd -- if this
>>>>>> one (the first one) successfully completes during the run then all the
>>>>>> rest will as well (that is, whenever I restart the process it has always
>>>>>> failed here.)  The source pool is also clean and passes a scrub.
>>>>>>
>>>>>> traceback is at http://www.denninger.net/kvmimage.png; apologies for the
>>>>>> image traceback but this is coming from a remote KVM.
>>>>>>
>>>>>> I first saw this on 10.1-STABLE and it is still happening on FreeBSD
>>>>>> 10.2-PRERELEASE #9 r285890M, which I updated to in an attempt to see if
>>>>>> the problem was something that had been addressed.
>>>>>>
>>>>>>
>>>>> -- 
>>>>> Karl Denninger
>>>>> karl@denninger.net <mailto:karl@denninger.net>
>>>>> /The Market Ticker/
>>>>> /[S/MIME encrypted email preferred]/
>>>> Second update: I have now taken another panic on 10.2-STABLE, same deal,
>>>> but without any cloned snapshots in the source image.  I had thought that
>>>> removing cloned snapshots might eliminate the issue; that is now out the
>>>> window.
>>>>
>>>> It ONLY happens on this one filesystem (the root one, incidentally)
>>>> which is fairly-recently created as I moved this machine from spinning
>>>> rust to SSDs for the OS and root pool -- and only when it is being
>>>> backed up by using zfs send | zfs recv (with the receive going to a
>>>> different pool in the same machine.)  I have yet to be able to provoke
>>>> it when using zfs send to copy to a different machine on the same LAN,
>>>> but given that it is not able to be reproduced on demand I can't be
>>>> certain it's timing related (e.g. performance between the two pools in
>>>> question) or just that I haven't hit the unlucky combination.
>>>>
>>>> This looks like some sort of race condition and I will continue to see
>>>> if I can craft a case to make it occur "on demand"
>>>>
>>>> -- 
>>>> Karl Denninger
>>>> karl@denninger.net <mailto:karl@denninger.net>
>>>> /The Market Ticker/
>>>> /[S/MIME encrypted email preferred]/
>>>
>
> -- 
> Karl Denninger
> karl@denninger.net <mailto:karl@denninger.net>
> /The Market Ticker/
> /[S/MIME encrypted email preferred]/

-- 
Karl Denninger
karl@denninger.net <mailto:karl@denninger.net>
/The Market Ticker/
/[S/MIME encrypted email preferred]/
