Date:      Thu, 03 Apr 2014 14:42:41 -0500
From:      Karl Denninger <karl@denninger.net>
To:        freebsd-stable@freebsd.org
Subject:   Re: What's up with the swapping since 10/stable
Message-ID:  <533DB9B1.3070500@denninger.net>
In-Reply-To: <A4BE503B-ADA9-4F61-893E-79A5F30728A2@bridgenet.se>
References:  <CA+D9QhvDsTwosUxUeL2U05dMt+Ke6kY5BYCNjJo8e8TsfZTsXg@mail.gmail.com> <A4BE503B-ADA9-4F61-893E-79A5F30728A2@bridgenet.se>


You mention that you're running ZFS -- if so, see here:

http://www.freebsd.org/cgi/query-pr.cgi?pr=187594

With this change in my kernel and more than a week of uptime on a very
busy production machine running an Internet-facing web service, PostgreSQL,
and serving local Windows clients over Samba:

[karl@NewFS ~]$ pstat -s
Device          1K-blocks     Used    Avail Capacity
/dev/gpt/swap1.eli  67108864        0 67108864     0%
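
If rebuilding the kernel with that patch isn't practical, a common interim
workaround is to cap the ARC in /boot/loader.conf so it can't crowd
application memory out to swap; a minimal sketch (the 4 GB value is only
an example -- size it for your workload):

    # /boot/loader.conf
    # cap the ZFS ARC; value is in bytes (4 GB here)
    vfs.zfs.arc_max="4294967296"

Reboot for the tunable to take effect, then verify with
"sysctl vfs.zfs.arc_max".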



On 4/3/2014 2:32 PM, Johan Broman wrote:
> Hi!
>
> I'm seeing the same thing since upgrading to 10/stable. Things seem to
> need swap although there is still available memory. I tend not to use
> swap on my virtual instances, but I've seen error messages like this
> since upgrading to 10/stable:
>
> pid 3028 (mysqld), uid 88, was killed: out of swap space
>
> Mem: 24M Active, 8012K Inact, 109M Wired, 2176K Cache, 69M Buf, 433M Free
>
>
> Looks like there should be enough memory to start mysql… (the above
> instance is a t1.micro FreeBSD AMI running on AWS EC2, created by Colin
> Percival)
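>
> To see what the pager actually has available, the vm.stats.vm counters
> help (values are page counts; multiply by v_page_size for bytes), e.g.:
>
>   sysctl vm.stats.vm.v_page_size vm.stats.vm.v_free_count \
>       vm.stats.vm.v_inactive_count vm.stats.vm.v_cache_count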
>
> Something seems to have changed since FreeBSD 9 in terms of the memory
> manager / page eviction.
>
> Anyone else seeing this? Is it now impossible to run FreeBSD without a
> swap partition (and/or file)? This happens on my server as well, which
> has 8 GB RAM and plenty of free RAM…
>
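> If it turns out some swap really is needed, an md-backed swap file
> avoids repartitioning; a rough sketch (path and size are only examples):
>
>   dd if=/dev/zero of=/usr/swap0 bs=1m count=4096   # 4 GB backing file
>   chmod 0600 /usr/swap0
>   mdconfig -a -t vnode -f /usr/swap0 -u 0          # attach as /dev/md0
>   swapon /dev/md0
>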
> I don't want to start guessing, but perhaps this happens when there is
> some memory fragmentation…? I need to verify whether this is the case,
> though.
>
> Thanks
> Johan
>
>
> On 02 Feb 2014, at 18:00, Matthias Gamsjager <mgamsjager@gmail.com> wrote:
>
>> Hi,
>>
>> My ZFS NAS box seems to use some swap since the upgrade to 10/stable.
>> This machine runs just a couple of hours per week, and with 9/stable I
>> never witnessed any swapping when serving media files.
>>
>> The first thing that caught my eye was the difference between ARC and
>> Wired. At some point there is a 1+ GB difference, while all this
>> machine is doing is serving a single 10 GB mkv via AFP.
>>
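>> One quick way to watch that gap is to sample the ARC size next to the
>> wired page count (arcstats.size is in bytes; multiply v_wire_count by
>> the page size to compare), e.g.:
>>
>>   while :; do
>>     sysctl -n kstat.zfs.misc.arcstats.size vm.stats.vm.v_wire_count
>>     sleep 5
>>   done
>>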
>> The problem is that at some point performance degrades to the point
>> where streaming isn't possible.
>>
>> This is after a couple of videos watched and a scrub that is 99% done.
>>
>> No ZFS tuning in /boot/loader.conf
>>
>> last pid:  2571;  load averages:  0.19,  0.20,  0.19  up 0+04:06:20  17:55:43
>>
>> 42 processes:  1 running, 41 sleeping
>>
>> CPU:  0.0% user,  0.0% nice,  2.3% system,  0.0% interrupt, 97.7% idle
>>
>> Mem: 32M Active, 14M Inact, 7563M Wired, 16M Cache, 273M Buf, 303M Free
>>
>> ARC: 6065M Total, 2142M MFU, 3309M MRU, 50K Anon, 136M Header, 478M Other
>>
>> Swap: 4096M Total, 66M Used, 4030M Free, 1% Inuse
>>
>>
>> System Information:
>>
>>
>> Kernel Version:  1000702 (osreldate)
>>
>> Hardware Platform:  amd64
>>
>> Processor Architecture:  amd64
>>
>>
>> ZFS Storage pool Version: 5000
>>
>> ZFS Filesystem Version:  5
>>
>>
>> FreeBSD 10.0-STABLE #0 r261210: Mon Jan 27 15:19:13 CET 2014 matty
>>
>> 5:57PM  up  4:08, 2 users, load averages: 0.31, 0.23, 0.21
>>
>>
>> ------------------------------------------------------------------------
>>
>>
>> System Memory:
>>
>>
>> 0.41% 32.43 MiB Active, 0.18% 14.11 MiB Inact
>>
>> 95.39% 7.39 GiB Wired, 0.21% 16.37 MiB Cache
>>
>> 3.81% 301.97 MiB Free, 0.01% 784.00 KiB Gap
>>
>>
>> Real Installed:  8.00 GiB
>>
>> Real Available:  99.50% 7.96 GiB
>>
>> Real Managed:  97.28% 7.74 GiB
>>
>>
>> Logical Total:  8.00 GiB
>>
>> Logical Used:  95.94% 7.68 GiB
>>
>> Logical Free:  4.06% 332.45 MiB
>>
>>
>> Kernel Memory:   196.21 MiB
>>
>> Data:  79.49% 155.96 MiB
>>
>> Text:  20.51% 40.25 MiB
>>
>>
>> Kernel Memory Map:  7.74 GiB
>>
>> Size:  71.72% 5.55 GiB
>>
>> Free:  28.28% 2.19 GiB
>>
>>
>> ------------------------------------------------------------------------
>>
>>
>> ARC Summary: (HEALTHY)
>>
>> Memory Throttle Count:  0
>>
>>
>> ARC Misc:
>>
>> Deleted:  34.10k
>>
>> Recycle Misses:  102.86k
>>
>> Mutex Misses:  10
>>
>> Evict Skips:  989.63k
>>
>>
>> ARC Size:  87.94% 5.93 GiB
>>
>> Target Size: (Adaptive) 90.63% 6.11 GiB
>>
>> Min Size (Hard Limit): 12.50% 863.10 MiB
>>
>> Max Size (High Water): 8:1 6.74 GiB
>>
>>
>> ARC Size Breakdown:
>>
>> Recently Used Cache Size: 65.86% 4.02 GiB
>>
>> Frequently Used Cache Size: 34.14% 2.09 GiB
>>
>>
>> ARC Hash Breakdown:
>>
>> Elements Max:  594.22k
>>
>> Elements Current: 100.00% 594.21k
>>
>> Collisions:  609.54k
>>
>> Chain Max:  15
>>
>> Chains:   122.92k
>>
>>
>> ------------------------------------------------------------------------
>>
>>
>> ARC Efficiency:   4.19m
>>
>> Cache Hit Ratio: 83.08% 3.48m
>>
>> Cache Miss Ratio: 16.92% 708.94k
>>
>> Actual Hit Ratio: 73.81% 3.09m
>>
>>
>> Data Demand Efficiency: 79.24% 456.96k
>>
>> Data Prefetch Efficiency: 2.94% 90.16k
>>
>>
>> CACHE HITS BY CACHE LIST:
>>
>>   Anonymously Used: 8.80% 306.18k
>>
>>   Most Recently Used: 23.42% 815.06k
>>
>>   Most Frequently Used: 65.43% 2.28m
>>
>>   Most Recently Used Ghost: 0.41% 14.36k
>>
>>   Most Frequently Used Ghost: 1.94% 67.65k
>>
>>
>> CACHE HITS BY DATA TYPE:
>>
>>   Demand Data:  10.40% 362.08k
>>
>>   Prefetch Data: 0.08% 2.65k
>>
>>   Demand Metadata: 76.84% 2.67m
>>
>>   Prefetch Metadata: 12.68% 441.47k
>>
>>
>> CACHE MISSES BY DATA TYPE:
>>
>>   Demand Data:  13.38% 94.88k
>>
>>   Prefetch Data: 12.34% 87.51k
>>
>>   Demand Metadata: 34.54% 244.88k
>>
>>   Prefetch Metadata: 39.73% 281.67k
>>
>>
>> ------------------------------------------------------------------------
>>
>>
>> L2ARC is disabled
>>
>>
>> ------------------------------------------------------------------------
>>
>>
>> File-Level Prefetch: (HEALTHY)
>>
>>
>> DMU Efficiency:   9.57m
>>
>> Hit Ratio:  73.77% 7.06m
>>
>> Miss Ratio:  26.23% 2.51m
>>
>>
>> Colinear:  2.51m
>>
>>   Hit Ratio:  0.06% 1.54k
>>
>>   Miss Ratio:  99.94% 2.51m
>>
>>
>> Stride:   6.92m
>>
>>   Hit Ratio:  99.99% 6.92m
>>
>>   Miss Ratio:  0.01% 594
>>
>>
>> DMU Misc:
>>
>> Reclaim:  2.51m
>>
>>   Successes:  0.85% 21.28k
>>
>>   Failures:  99.15% 2.49m
>>
>>
>> Streams:  137.84k
>>
>>   +Resets:  0.06% 79
>>
>>   -Resets:  99.94% 137.76k
>>
>>   Bogus:  0
>>
>>
>> ------------------------------------------------------------------------
>>
>>
>> VDEV cache is disabled
>>
>>
>> ------------------------------------------------------------------------
>>
>>
>> ZFS Tunables (sysctl):
>>
>> kern.maxusers                           845
>>
>> vm.kmem_size                            8313913344
>>
>> vm.kmem_size_scale                      1
>>
>> vm.kmem_size_min                        0
>>
>> vm.kmem_size_max                        1319413950874
>>
>> vfs.zfs.arc_max                         7240171520
>>
>> vfs.zfs.arc_min                         905021440
>>
>> vfs.zfs.arc_meta_used                   2166001368
>>
>> vfs.zfs.arc_meta_limit                  1810042880
>>
>> vfs.zfs.l2arc_write_max                 8388608
>>
>> vfs.zfs.l2arc_write_boost               8388608
>>
>> vfs.zfs.l2arc_headroom                  2
>>
>> vfs.zfs.l2arc_feed_secs                 1
>>
>> vfs.zfs.l2arc_feed_min_ms               200
>>
>> vfs.zfs.l2arc_noprefetch                1
>>
>> vfs.zfs.l2arc_feed_again                1
>>
>> vfs.zfs.l2arc_norw                      1
>>
>> vfs.zfs.anon_size                       51200
>>
>> vfs.zfs.anon_metadata_lsize             0
>>
>> vfs.zfs.anon_data_lsize                 0
>>
>> vfs.zfs.mru_size                        3476498432
>>
>> vfs.zfs.mru_metadata_lsize              1319031808
>>
>> vfs.zfs.mru_data_lsize                  2150589440
>>
>> vfs.zfs.mru_ghost_size                  361860096
>>
>> vfs.zfs.mru_ghost_metadata_lsize        210866688
>>
>> vfs.zfs.mru_ghost_data_lsize            150993408
>>
>> vfs.zfs.mfu_size                        2246172672
>>
>> vfs.zfs.mfu_metadata_lsize              32768
>>
>> vfs.zfs.mfu_data_lsize                  2050486272
>>
>> vfs.zfs.mfu_ghost_size                  6198800896
>>
>> vfs.zfs.mfu_ghost_metadata_lsize        2818404864
>>
>> vfs.zfs.mfu_ghost_data_lsize            3380396032
>>
>> vfs.zfs.l2c_only_size                   0
>>
>> vfs.zfs.dedup.prefetch                  1
>>
>> vfs.zfs.nopwrite_enabled                1
>>
>> vfs.zfs.mdcomp_disable                  0
>>
>> vfs.zfs.prefetch_disable                0
>>
>> vfs.zfs.zfetch.max_streams              8
>>
>> vfs.zfs.zfetch.min_sec_reap             2
>>
>> vfs.zfs.zfetch.block_cap                256
>>
>> vfs.zfs.zfetch.array_rd_sz              1048576
>>
>> vfs.zfs.top_maxinflight                 32
>>
>> vfs.zfs.resilver_delay                  2
>>
>> vfs.zfs.scrub_delay                     4
>>
>> vfs.zfs.scan_idle                       50
>>
>> vfs.zfs.scan_min_time_ms                1000
>>
>> vfs.zfs.free_min_time_ms                1000
>>
>> vfs.zfs.resilver_min_time_ms            3000
>>
>> vfs.zfs.no_scrub_io                     0
>>
>> vfs.zfs.no_scrub_prefetch               0
>>
>> vfs.zfs.metaslab.gang_bang              131073
>>
>> vfs.zfs.metaslab.debug                  0
>>
>> vfs.zfs.metaslab.df_alloc_threshold     131072
>>
>> vfs.zfs.metaslab.df_free_pct            4
>>
>> vfs.zfs.metaslab.min_alloc_size         10485760
>>
>> vfs.zfs.metaslab.prefetch_limit         3
>>
>> vfs.zfs.metaslab.smo_bonus_pct          150
>>
>> vfs.zfs.mg_alloc_failures               8
>>
>> vfs.zfs.write_to_degraded               0
>>
>> vfs.zfs.check_hostid                    1
>>
>> vfs.zfs.recover                         0
>>
>> vfs.zfs.deadman_synctime_ms             1000000
>>
>> vfs.zfs.deadman_checktime_ms            5000
>>
>> vfs.zfs.deadman_enabled                 1
>>
>> vfs.zfs.space_map_last_hope             0
>>
>> vfs.zfs.txg.timeout                     5
>>
>> vfs.zfs.vdev.cache.max                  16384
>>
>> vfs.zfs.vdev.cache.size                 0
>>
>> vfs.zfs.vdev.cache.bshift               16
>>
>> vfs.zfs.vdev.trim_on_init               1
>>
>> vfs.zfs.vdev.max_active                 1000
>>
>> vfs.zfs.vdev.sync_read_min_active       10
>>
>> vfs.zfs.vdev.sync_read_max_active       10
>>
>> vfs.zfs.vdev.sync_write_min_active      10
>>
>> vfs.zfs.vdev.sync_write_max_active      10
>>
>> vfs.zfs.vdev.async_read_min_active      1
>>
>> vfs.zfs.vdev.async_read_max_active      3
>>
>> vfs.zfs.vdev.async_write_min_active     1
>>
>> vfs.zfs.vdev.async_write_max_active     10
>>
>> vfs.zfs.vdev.scrub_min_active           1
>>
>> vfs.zfs.vdev.scrub_max_active           2
>>
>> vfs.zfs.vdev.aggregation_limit          131072
>>
>> vfs.zfs.vdev.read_gap_limit             32768
>>
>> vfs.zfs.vdev.write_gap_limit            4096
>>
>> vfs.zfs.vdev.bio_flush_disable          0
>>
>> vfs.zfs.vdev.bio_delete_disable         0
>>
>> vfs.zfs.vdev.trim_max_bytes             2147483648
>>
>> vfs.zfs.vdev.trim_max_pending           64
>>
>> vfs.zfs.max_auto_ashift                 13
>>
>> vfs.zfs.zil_replay_disable              0
>>
>> vfs.zfs.cache_flush_disable             0
>>
>> vfs.zfs.zio.use_uma                     1
>>
>> vfs.zfs.zio.exclude_metadata            0
>>
>> vfs.zfs.sync_pass_deferred_free         2
>>
>> vfs.zfs.sync_pass_dont_compress         5
>>
>> vfs.zfs.sync_pass_rewrite               2
>>
>> vfs.zfs.snapshot_list_prefetch          0
>>
>> vfs.zfs.super_owner                     0
>>
>> vfs.zfs.debug                           0
>>
>> vfs.zfs.version.ioctl                   3
>>
>> vfs.zfs.version.acl                     1
>>
>> vfs.zfs.version.spa                     5000
>>
>> vfs.zfs.version.zpl                     5
>>
>> vfs.zfs.trim.enabled                    1
>>
>> vfs.zfs.trim.txg_delay                  32
>>
>> vfs.zfs.trim.timeout                    30
>>
>> vfs.zfs.trim.max_interval               1
>>
>>
>> ------------------------------------------------------------------------

-- 
-- Karl
karl@denninger.net


