Date:      Thu, 3 Apr 2014 21:32:54 +0200
From:      Johan Broman <johan@bridgenet.se>
To:        stable-list freebsd <freebsd-stable@freebsd.org>
Cc:        Matthias Gamsjager <mgamsjager@gmail.com>
Subject:   Re: What's up with the swapping since 10/stable
Message-ID:  <A4BE503B-ADA9-4F61-893E-79A5F30728A2@bridgenet.se>
In-Reply-To: <CA+D9QhvDsTwosUxUeL2U05dMt+Ke6kY5BYCNjJo8e8TsfZTsXg@mail.gmail.com>
References:  <CA+D9QhvDsTwosUxUeL2U05dMt+Ke6kY5BYCNjJo8e8TsfZTsXg@mail.gmail.com>

Hi!

I'm seeing the same thing since upgrading to 10/stable. Things seem to
need swap although there is still available memory. I tend not to use
swap on my virtual instances, but I've seen error messages like this
since upgrading to 10/stable:

pid 3028 (mysqld), uid 88, was killed: out of swap space

Mem: 24M Active, 8012K Inact, 109M Wired, 2176K Cache, 69M Buf, 433M Free
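(For reference, you can sum the reclaimable classes from that top(1) line to see how far the system is from genuine memory pressure. A minimal sh sketch, with the figures above hard-coded in MiB since the vm.stats.vm.* sysctls are FreeBSD-only:)

```shell
#!/bin/sh
# Figures from the top(1) line above, in MiB (8012K Inact and 2176K Cache rounded).
active=24
inact=8
wired=109
cache=2
free=433

# Memory the VM could hand out without touching swap: Free + Cache + Inact.
reclaimable=$((free + cache + inact))
echo "reclaimable ~${reclaimable}M"

# On a live FreeBSD box the raw page counts come from sysctls such as
#   sysctl vm.stats.vm.v_free_count vm.stats.vm.v_inactive_count hw.pagesize
# (counts are in pages; multiply by hw.pagesize for bytes).
```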


Looks like there should be enough memory to start mysqld… (the above
instance is a t1.micro FreeBSD AMI running on AWS EC2, created by Colin
Percival)

Something seems to have changed since FreeBSD 9 in terms of the memory
manager / page eviction.

Anyone else seeing this? Is it now impossible to run FreeBSD without a
swap partition (and/or file)? This happens on my server as well, which
has 8GB RAM and plenty of free RAM…
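(As a workaround, a swap file can at least be added without repartitioning. A sketch of the md(4)-backed /etc/fstab approach from the Handbook; the path and 1 GB size are placeholder examples:)

```shell
# /etc/fstab entry for an md(4)-backed swap file (path, size, and unit
# number are examples). The backing file is created once, e.g.:
#   dd if=/dev/zero of=/usr/swap0 bs=1m count=1024 && chmod 0600 /usr/swap0
md99    none    swap    sw,file=/usr/swap0,late 0       0
```

(The `late` option defers activation until filesystems are mounted, which a file-backed swap device needs; if memory serves it is available on 10.x.)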

I don't want to start guessing, but perhaps this happens when there is
some memory fragmentation…? I need to verify whether this is the case,
though.

Thanks
Johan


On 02 Feb 2014, at 18:00, Matthias Gamsjager <mgamsjager@gmail.com> wrote:

> Hi,
> 
> My ZFS NAS box seems to use some swap since the upgrade to 10/stable. This
> machine only runs a couple of hours per week, and with 9/stable I never
> witnessed any swapping when serving media files.
> 
> The first thing that caught my eye was the difference between ARC and
> Wired. At some point there is a 1+ GB difference, while all this machine
> does is serve a single 10GB mkv via AFP.
> 
> The problem is that at some point performance degrades to the point that
> streaming isn't possible.
> 
> This is after a couple of videos watched and a scrub 99% done.
> 
> No ZFS tuning in /boot/loader.conf.
> 
> last pid:  2571;  load averages:  0.19,  0.20,  0.19              up 0+04:06:20  17:55:43
> 
> 42 processes:  1 running, 41 sleeping
> 
> CPU:  0.0% user,  0.0% nice,  2.3% system,  0.0% interrupt, 97.7% idle
> 
> Mem: 32M Active, 14M Inact, 7563M Wired, 16M Cache, 273M Buf, 303M Free
> 
> ARC: 6065M Total, 2142M MFU, 3309M MRU, 50K Anon, 136M Header, 478M Other
> 
> Swap: 4096M Total, 66M Used, 4030M Free, 1% Inuse
> 
> System Information:
> 
> Kernel Version:  1000702 (osreldate)
> Hardware Platform:  amd64
> Processor Architecture:  amd64
> 
> ZFS Storage pool Version: 5000
> ZFS Filesystem Version:  5
> 
> FreeBSD 10.0-STABLE #0 r261210: Mon Jan 27 15:19:13 CET 2014 matty
> 
> 5:57PM  up  4:08, 2 users, load averages: 0.31, 0.23, 0.21
> 
> ------------------------------------------------------------------------
> 
> System Memory:
> 
> 0.41% 32.43 MiB Active, 0.18% 14.11 MiB Inact
> 95.39% 7.39 GiB Wired, 0.21% 16.37 MiB Cache
> 3.81% 301.97 MiB Free, 0.01% 784.00 KiB Gap
> 
> Real Installed:  8.00 GiB
> Real Available:  99.50% 7.96 GiB
> Real Managed:  97.28% 7.74 GiB
> 
> Logical Total:  8.00 GiB
> Logical Used:  95.94% 7.68 GiB
> Logical Free:  4.06% 332.45 MiB
> 
> Kernel Memory:   196.21 MiB
> Data:  79.49% 155.96 MiB
> Text:  20.51% 40.25 MiB
> 
> Kernel Memory Map:  7.74 GiB
> Size:  71.72% 5.55 GiB
> Free:  28.28% 2.19 GiB
> 
> ------------------------------------------------------------------------
> 
> ARC Summary: (HEALTHY)
> 
> Memory Throttle Count:  0
> 
> ARC Misc:
> Deleted:  34.10k
> Recycle Misses:  102.86k
> Mutex Misses:  10
> Evict Skips:  989.63k
> 
> ARC Size:  87.94% 5.93 GiB
> Target Size: (Adaptive) 90.63% 6.11 GiB
> Min Size (Hard Limit): 12.50% 863.10 MiB
> Max Size (High Water): 8:1 6.74 GiB
> 
> ARC Size Breakdown:
> Recently Used Cache Size: 65.86% 4.02 GiB
> Frequently Used Cache Size: 34.14% 2.09 GiB
> 
> ARC Hash Breakdown:
> Elements Max:  594.22k
> Elements Current: 100.00% 594.21k
> Collisions:  609.54k
> Chain Max:  15
> Chains:   122.92k
> 
> ------------------------------------------------------------------------
> 
> ARC Efficiency:   4.19m
> Cache Hit Ratio: 83.08% 3.48m
> Cache Miss Ratio: 16.92% 708.94k
> Actual Hit Ratio: 73.81% 3.09m
> 
> Data Demand Efficiency: 79.24% 456.96k
> Data Prefetch Efficiency: 2.94% 90.16k
> 
> CACHE HITS BY CACHE LIST:
>  Anonymously Used: 8.80% 306.18k
>  Most Recently Used: 23.42% 815.06k
>  Most Frequently Used: 65.43% 2.28m
>  Most Recently Used Ghost: 0.41% 14.36k
>  Most Frequently Used Ghost: 1.94% 67.65k
> 
> CACHE HITS BY DATA TYPE:
>  Demand Data:  10.40% 362.08k
>  Prefetch Data: 0.08% 2.65k
>  Demand Metadata: 76.84% 2.67m
>  Prefetch Metadata: 12.68% 441.47k
> 
> CACHE MISSES BY DATA TYPE:
>  Demand Data:  13.38% 94.88k
>  Prefetch Data: 12.34% 87.51k
>  Demand Metadata: 34.54% 244.88k
>  Prefetch Metadata: 39.73% 281.67k
> 
> ------------------------------------------------------------------------
> 
> L2ARC is disabled
> 
> ------------------------------------------------------------------------
> 
> File-Level Prefetch: (HEALTHY)
> 
> DMU Efficiency:   9.57m
> Hit Ratio:  73.77% 7.06m
> Miss Ratio:  26.23% 2.51m
> 
> Colinear:  2.51m
>  Hit Ratio:  0.06% 1.54k
>  Miss Ratio:  99.94% 2.51m
> 
> Stride:   6.92m
>  Hit Ratio:  99.99% 6.92m
>  Miss Ratio:  0.01% 594
> 
> DMU Misc:
> Reclaim:  2.51m
>  Successes:  0.85% 21.28k
>  Failures:  99.15% 2.49m
> 
> Streams:  137.84k
>  +Resets:  0.06% 79
>  -Resets:  99.94% 137.76k
>  Bogus:  0
> 
> ------------------------------------------------------------------------
> 
> VDEV cache is disabled
> 
> ------------------------------------------------------------------------
> 
> ZFS Tunables (sysctl):
> kern.maxusers                           845
> vm.kmem_size                            8313913344
> vm.kmem_size_scale                      1
> vm.kmem_size_min                        0
> vm.kmem_size_max                        1319413950874
> vfs.zfs.arc_max                         7240171520
> vfs.zfs.arc_min                         905021440
> vfs.zfs.arc_meta_used                   2166001368
> vfs.zfs.arc_meta_limit                  1810042880
> vfs.zfs.l2arc_write_max                 8388608
> vfs.zfs.l2arc_write_boost               8388608
> vfs.zfs.l2arc_headroom                  2
> vfs.zfs.l2arc_feed_secs                 1
> vfs.zfs.l2arc_feed_min_ms               200
> vfs.zfs.l2arc_noprefetch                1
> vfs.zfs.l2arc_feed_again                1
> vfs.zfs.l2arc_norw                      1
> vfs.zfs.anon_size                       51200
> vfs.zfs.anon_metadata_lsize             0
> vfs.zfs.anon_data_lsize                 0
> vfs.zfs.mru_size                        3476498432
> vfs.zfs.mru_metadata_lsize              1319031808
> vfs.zfs.mru_data_lsize                  2150589440
> vfs.zfs.mru_ghost_size                  361860096
> vfs.zfs.mru_ghost_metadata_lsize        210866688
> vfs.zfs.mru_ghost_data_lsize            150993408
> vfs.zfs.mfu_size                        2246172672
> vfs.zfs.mfu_metadata_lsize              32768
> vfs.zfs.mfu_data_lsize                  2050486272
> vfs.zfs.mfu_ghost_size                  6198800896
> vfs.zfs.mfu_ghost_metadata_lsize        2818404864
> vfs.zfs.mfu_ghost_data_lsize            3380396032
> vfs.zfs.l2c_only_size                   0
> vfs.zfs.dedup.prefetch                  1
> vfs.zfs.nopwrite_enabled                1
> vfs.zfs.mdcomp_disable                  0
> vfs.zfs.prefetch_disable                0
> vfs.zfs.zfetch.max_streams              8
> vfs.zfs.zfetch.min_sec_reap             2
> vfs.zfs.zfetch.block_cap                256
> vfs.zfs.zfetch.array_rd_sz              1048576
> vfs.zfs.top_maxinflight                 32
> vfs.zfs.resilver_delay                  2
> vfs.zfs.scrub_delay                     4
> vfs.zfs.scan_idle                       50
> vfs.zfs.scan_min_time_ms                1000
> vfs.zfs.free_min_time_ms                1000
> vfs.zfs.resilver_min_time_ms            3000
> vfs.zfs.no_scrub_io                     0
> vfs.zfs.no_scrub_prefetch               0
> vfs.zfs.metaslab.gang_bang              131073
> vfs.zfs.metaslab.debug                  0
> vfs.zfs.metaslab.df_alloc_threshold     131072
> vfs.zfs.metaslab.df_free_pct            4
> vfs.zfs.metaslab.min_alloc_size         10485760
> vfs.zfs.metaslab.prefetch_limit         3
> vfs.zfs.metaslab.smo_bonus_pct          150
> vfs.zfs.mg_alloc_failures               8
> vfs.zfs.write_to_degraded               0
> vfs.zfs.check_hostid                    1
> vfs.zfs.recover                         0
> vfs.zfs.deadman_synctime_ms             1000000
> vfs.zfs.deadman_checktime_ms            5000
> vfs.zfs.deadman_enabled                 1
> vfs.zfs.space_map_last_hope             0
> vfs.zfs.txg.timeout                     5
> vfs.zfs.vdev.cache.max                  16384
> vfs.zfs.vdev.cache.size                 0
> vfs.zfs.vdev.cache.bshift               16
> vfs.zfs.vdev.trim_on_init               1
> vfs.zfs.vdev.max_active                 1000
> vfs.zfs.vdev.sync_read_min_active       10
> vfs.zfs.vdev.sync_read_max_active       10
> vfs.zfs.vdev.sync_write_min_active      10
> vfs.zfs.vdev.sync_write_max_active      10
> vfs.zfs.vdev.async_read_min_active      1
> vfs.zfs.vdev.async_read_max_active      3
> vfs.zfs.vdev.async_write_min_active     1
> vfs.zfs.vdev.async_write_max_active     10
> vfs.zfs.vdev.scrub_min_active           1
> vfs.zfs.vdev.scrub_max_active           2
> vfs.zfs.vdev.aggregation_limit          131072
> vfs.zfs.vdev.read_gap_limit             32768
> vfs.zfs.vdev.write_gap_limit            4096
> vfs.zfs.vdev.bio_flush_disable          0
> vfs.zfs.vdev.bio_delete_disable         0
> vfs.zfs.vdev.trim_max_bytes             2147483648
> vfs.zfs.vdev.trim_max_pending           64
> vfs.zfs.max_auto_ashift                 13
> vfs.zfs.zil_replay_disable              0
> vfs.zfs.cache_flush_disable             0
> vfs.zfs.zio.use_uma                     1
> vfs.zfs.zio.exclude_metadata            0
> vfs.zfs.sync_pass_deferred_free         2
> vfs.zfs.sync_pass_dont_compress         5
> vfs.zfs.sync_pass_rewrite               2
> vfs.zfs.snapshot_list_prefetch          0
> vfs.zfs.super_owner                     0
> vfs.zfs.debug                           0
> vfs.zfs.version.ioctl                   3
> vfs.zfs.version.acl                     1
> vfs.zfs.version.spa                     5000
> vfs.zfs.version.zpl                     5
> vfs.zfs.trim.enabled                    1
> vfs.zfs.trim.txg_delay                  32
> vfs.zfs.trim.timeout                    30
> vfs.zfs.trim.max_interval               1
> 
> ------------------------------------------------------------------------
> _______________________________________________
> freebsd-stable@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-stable
> To unsubscribe, send any mail to "freebsd-stable-unsubscribe@freebsd.org"
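(Since the quoted report notes no ZFS tuning in /boot/loader.conf, the usual first mitigation in this situation is to cap the ARC so Wired memory cannot crowd out userland. A hedged sketch; the 6G figure is an arbitrary illustration for an 8GB box, not a recommendation:)

```shell
# /boot/loader.conf -- cap the ZFS ARC (example value; tune per workload).
vfs.zfs.arc_max="6G"
```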




Want to link to this message? Use this URL: <https://mail-archive.FreeBSD.org/cgi/mid.cgi?A4BE503B-ADA9-4F61-893E-79A5F30728A2>