Date: Tue, 4 Oct 2011 17:45:36 -0400
From: Dave Cundiff <syshackmin@gmail.com>
To: questions@freebsd.org
Cc: freebsd-fs@freebsd.org
Subject: Re: ZFS Write Lockup
Message-ID: <CAKHEz2Y7qE-7%2BAmhB5oEsZszzM907LfPrmFz8UV7jtjdHF%2BNPg@mail.gmail.com>
In-Reply-To: <CAKHEz2a%2BRFmcCyEMnooDmb8vERA-qg0A474LZ9mLtPvoij8Xmw@mail.gmail.com>
References: <CAKHEz2a%2BRFmcCyEMnooDmb8vERA-qg0A474LZ9mLtPvoij8Xmw@mail.gmail.com>
Hi,

Decided to cross-post this over here as well, since it seems like it could be
something actually wrong and not me being an idiot. Feel free to let me know
if I'm an idiot. :)

gstat is showing almost no IO hitting da1, which is my zpool (21-disk RAID50,
3x7), yet the zvols are all backed up:

 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
    0    272      7     71   11.6    264   4439    0.2    4.8| da1
    0      4      2     26    6.7      2     23  557.0  126.7| zvol/san/a2s61
    0     14      0      0    0.0     14     86   91.5  132.5| zvol/san/a2s66
    1     14      0      1   13.8     14    154  100.6  140.2| zvol/san/solsemi1
    1     19      1      5    0.1     18    156   76.6  139.8| zvol/san/solman1
    1      6      1     26    8.4      5    112  275.3  140.9| zvol/san/a2s62
    1     16      1      5    9.1     16    317   88.1  139.7| zvol/san/solman2
    1     29      1      2    6.6     29    214   48.8  139.8| zvol/san/solsemi2
    1      7      1      2    8.5      6     50  232.5  140.4| zvol/san/solman4

I've tweaked only a few settings from default.

[root@san2 ~]# cat /boot/loader.conf
console="comconsole,vidconsole"
comconsole_speed="115200"
vm.kmem_size="30G"
vfs.zfs.arc_max="22G"
kern.hz=100
loader_logo="beastie"

[root@san2 ~]# cat /etc/sysctl.conf
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.sendspace=65536
net.inet.tcp.recvspace=131072

[root@san2 ~]# zpool status
  pool: san
 state: ONLINE
 scrub: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        san          ONLINE       0     0     0
          da1        ONLINE       0     0     0
        logs
          mirror     ONLINE       0     0     0
            ad6s1b   ONLINE       0     0     0
            ad14s1b  ONLINE       0     0     0
        cache
          ad6s1d     ONLINE       0     0     0
          ad14s1d    ONLINE       0     0     0

errors: No known data errors

All my volumes are the same. I don't manually adjust any properties.

[root@san2 ~]# zfs get all san
NAME  PROPERTY              VALUE                  SOURCE
san   type                  filesystem             -
san   creation              Tue Feb  8  9:58 2011  -
san   used                  9.10T                  -
san   available             3.33T                  -
san   referenced            221M                   -
san   compressratio         1.00x                  -
san   mounted               yes                    -
san   quota                 none                   default
san   reservation           none                   default
san   recordsize            128K                   default
san   mountpoint            /san                   default
san   sharenfs              off                    default
san   checksum              off                    local
san   compression           off                    default
san   atime                 on                     default
san   devices               on                     default
san   exec                  on                     default
san   setuid                on                     default
san   readonly              off                    default
san   jailed                off                    default
san   snapdir               hidden                 default
san   aclmode               groupmask              default
san   aclinherit            restricted             default
san   canmount              on                     default
san   shareiscsi            off                    default
san   xattr                 off                    temporary
san   copies                1                      default
san   version               4                      -
san   utf8only              off                    -
san   normalization         none                   -
san   casesensitivity       sensitive              -
san   vscan                 off                    default
san   nbmand                off                    default
san   sharesmb              off                    default
san   refquota              none                   default
san   refreservation        none                   default
san   primarycache          all                    default
san   secondarycache        all                    default
san   usedbysnapshots       0                      -
san   usedbydataset         221M                   -
san   usedbychildren        9.10T                  -
san   usedbyrefreservation  0                      -

[root@san2 ~]# zfs get all san/a2s66
NAME       PROPERTY              VALUE                  SOURCE
san/a2s66  type                  volume                 -
san/a2s66  creation              Wed Sep 21 16:25 2011  -
san/a2s66  used                  770G                   -
san/a2s66  available             3.33T                  -
san/a2s66  referenced            753G                   -
san/a2s66  compressratio         1.00x                  -
san/a2s66  reservation           none                   default
san/a2s66  volsize               750G                   -
san/a2s66  volblocksize          4K                     -
san/a2s66  checksum              off                    inherited from san
san/a2s66  compression           off                    default
san/a2s66  readonly              off                    default
san/a2s66  shareiscsi            off                    default
san/a2s66  copies                1                      default
san/a2s66  refreservation        none                   default
san/a2s66  primarycache          all                    default
san/a2s66  secondarycache        all                    default
san/a2s66  usedbysnapshots       17.3G                  -
san/a2s66  usedbydataset         753G                   -
san/a2s66  usedbychildren       0                       -
san/a2s66  usedbyrefreservation  0                      -
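(For anyone who wants to double-check the "all volumes are the same" claim, a
quick loop along these lines should do it. This is only a sketch; the property
list here is arbitrary, not anything special about my setup.)

# List every zvol under the pool and dump a few properties for each.
# -H drops headers so the output stays grep/awk friendly.
for vol in $(zfs list -H -r -t volume -o name san); do
    zfs get -H -o name,property,value volblocksize,checksum,compression "$vol"
done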
last pid: 60292;  load averages:  0.96,  0.67,  0.80    up 9+17:31:59  17:41:52
63 processes:  2 running, 61 sleeping
CPU:  1.3% user,  0.0% nice, 46.4% system,  1.1% interrupt, 51.2% idle
Mem: 37M Active, 32M Inact, 22G Wired, 15M Cache, 1940M Buf, 1075M Free
Swap: 28G Total, 13M Used, 28G Free
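(Most of that system time is the zio threads I mention in the quoted mail
below; I'm just watching them with plain top, along these lines. The -o cpu
flag is only there to force CPU sorting and may not be strictly needed.)

# Show system processes (-S) and individual threads (-H), sorted by CPU.
# During a lockup the zio_write_issue kernel threads climb to the top.
top -S -H -o cpu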
[root@san2 ~]# sysctl -a | grep zfs
vfs.zfs.l2c_only_size: 81245392896
vfs.zfs.mfu_ghost_data_lsize: 51142656
vfs.zfs.mfu_ghost_metadata_lsize: 10687021568
vfs.zfs.mfu_ghost_size: 10738164224
vfs.zfs.mfu_data_lsize: 757547008
vfs.zfs.mfu_metadata_lsize: 954693120
vfs.zfs.mfu_size: 2612401664
vfs.zfs.mru_ghost_data_lsize: 1983434752
vfs.zfs.mru_ghost_metadata_lsize: 3657913344
vfs.zfs.mru_ghost_size: 5641348096
vfs.zfs.mru_data_lsize: 9817952768
vfs.zfs.mru_metadata_lsize: 395397632
vfs.zfs.mru_size: 10833757184
vfs.zfs.anon_data_lsize: 0
vfs.zfs.anon_metadata_lsize: 0
vfs.zfs.anon_size: 34037760
vfs.zfs.l2arc_norw: 1
vfs.zfs.l2arc_feed_again: 1
vfs.zfs.l2arc_noprefetch: 0
vfs.zfs.l2arc_feed_min_ms: 200
vfs.zfs.l2arc_feed_secs: 1
vfs.zfs.l2arc_headroom: 2
vfs.zfs.l2arc_write_boost: 8388608
vfs.zfs.l2arc_write_max: 8388608
vfs.zfs.arc_meta_limit: 5905580032
vfs.zfs.arc_meta_used: 5906093432
vfs.zfs.mdcomp_disable: 0
vfs.zfs.arc_min: 2952790016
vfs.zfs.arc_max: 23622320128
vfs.zfs.zfetch.array_rd_sz: 1048576
vfs.zfs.zfetch.block_cap: 256
vfs.zfs.zfetch.min_sec_reap: 2
vfs.zfs.zfetch.max_streams: 8
vfs.zfs.prefetch_disable: 1
vfs.zfs.check_hostid: 1
vfs.zfs.recover: 0
vfs.zfs.txg.write_limit_override: 0
vfs.zfs.txg.synctime: 5
vfs.zfs.txg.timeout: 30
vfs.zfs.scrub_limit: 10
vfs.zfs.vdev.cache.bshift: 16
vfs.zfs.vdev.cache.size: 10485760
vfs.zfs.vdev.cache.max: 16384
vfs.zfs.vdev.aggregation_limit: 131072
vfs.zfs.vdev.ramp_rate: 2
vfs.zfs.vdev.time_shift: 6
vfs.zfs.vdev.min_pending: 4
vfs.zfs.vdev.max_pending: 10
vfs.zfs.cache_flush_disable: 0
vfs.zfs.zil_disable: 0
vfs.zfs.zio.use_uma: 0
vfs.zfs.version.zpl: 4
vfs.zfs.version.spa: 15
vfs.zfs.version.dmu_backup_stream: 1
vfs.zfs.version.dmu_backup_header: 2
vfs.zfs.version.acl: 1
vfs.zfs.debug: 0
vfs.zfs.super_owner: 0
kstat.zfs.misc.zfetchstats.hits: 1622932834
kstat.zfs.misc.zfetchstats.misses: 300700562
kstat.zfs.misc.zfetchstats.colinear_hits: 144156
kstat.zfs.misc.zfetchstats.colinear_misses: 300556406
kstat.zfs.misc.zfetchstats.stride_hits: 1138458507
kstat.zfs.misc.zfetchstats.stride_misses: 386271
kstat.zfs.misc.zfetchstats.reclaim_successes: 5313527
kstat.zfs.misc.zfetchstats.reclaim_failures: 295242879
kstat.zfs.misc.zfetchstats.streams_resets: 141691
kstat.zfs.misc.zfetchstats.streams_noresets: 484474231
kstat.zfs.misc.zfetchstats.bogus_streams: 0
kstat.zfs.misc.arcstats.hits: 2877951340
kstat.zfs.misc.arcstats.misses: 677132553
kstat.zfs.misc.arcstats.demand_data_hits: 1090801028
kstat.zfs.misc.arcstats.demand_data_misses: 142078773
kstat.zfs.misc.arcstats.demand_metadata_hits: 760631826
kstat.zfs.misc.arcstats.demand_metadata_misses: 15429069
kstat.zfs.misc.arcstats.prefetch_data_hits: 77566631
kstat.zfs.misc.arcstats.prefetch_data_misses: 412415335
kstat.zfs.misc.arcstats.prefetch_metadata_hits: 948951855
kstat.zfs.misc.arcstats.prefetch_metadata_misses: 107209376
kstat.zfs.misc.arcstats.mru_hits: 762645834
kstat.zfs.misc.arcstats.mru_ghost_hits: 25063620
kstat.zfs.misc.arcstats.mfu_hits: 1792233827
kstat.zfs.misc.arcstats.mfu_ghost_hits: 107949685
kstat.zfs.misc.arcstats.allocated: 1220368604
kstat.zfs.misc.arcstats.deleted: 881708637
kstat.zfs.misc.arcstats.stolen: 324197286
kstat.zfs.misc.arcstats.recycle_miss: 393866103
kstat.zfs.misc.arcstats.mutex_miss: 47835019
kstat.zfs.misc.arcstats.evict_skip: 16800403516
kstat.zfs.misc.arcstats.evict_l2_cached: 3404346428416
kstat.zfs.misc.arcstats.evict_l2_eligible: 906780261888
kstat.zfs.misc.arcstats.evict_l2_ineligible: 1712274098176
kstat.zfs.misc.arcstats.hash_elements: 12932367
kstat.zfs.misc.arcstats.hash_elements_max: 26675689
kstat.zfs.misc.arcstats.hash_collisions: 1195027725
kstat.zfs.misc.arcstats.hash_chains: 524288
kstat.zfs.misc.arcstats.hash_chain_max: 114
kstat.zfs.misc.arcstats.p: 13900189206
kstat.zfs.misc.arcstats.c: 16514222070
kstat.zfs.misc.arcstats.c_min: 2952790016
kstat.zfs.misc.arcstats.c_max: 23622320128
kstat.zfs.misc.arcstats.size: 16514197960
kstat.zfs.misc.arcstats.hdr_size: 698646840
kstat.zfs.misc.arcstats.data_size: 13480204800
kstat.zfs.misc.arcstats.other_size: 222586112
kstat.zfs.misc.arcstats.l2_hits: 236859220
kstat.zfs.misc.arcstats.l2_misses: 440273314
kstat.zfs.misc.arcstats.l2_feeds: 998879
kstat.zfs.misc.arcstats.l2_rw_clash: 41492
kstat.zfs.misc.arcstats.l2_read_bytes: 1523423294976
kstat.zfs.misc.arcstats.l2_write_bytes: 2108729975808
kstat.zfs.misc.arcstats.l2_writes_sent: 908755
kstat.zfs.misc.arcstats.l2_writes_done: 908755
kstat.zfs.misc.arcstats.l2_writes_error: 0
kstat.zfs.misc.arcstats.l2_writes_hdr_miss: 125029
kstat.zfs.misc.arcstats.l2_evict_lock_retry: 78155
kstat.zfs.misc.arcstats.l2_evict_reading: 52
kstat.zfs.misc.arcstats.l2_free_on_write: 735076
kstat.zfs.misc.arcstats.l2_abort_lowmem: 2368
kstat.zfs.misc.arcstats.l2_cksum_bad: 9
kstat.zfs.misc.arcstats.l2_io_error: 0
kstat.zfs.misc.arcstats.l2_size: 88680833024
kstat.zfs.misc.arcstats.l2_hdr_size: 2275280224
kstat.zfs.misc.arcstats.memory_throttle_count: 0
kstat.zfs.misc.arcstats.l2_write_trylock_fail: 160181805
kstat.zfs.misc.arcstats.l2_write_passed_headroom: 48073379
kstat.zfs.misc.arcstats.l2_write_spa_mismatch: 0
kstat.zfs.misc.arcstats.l2_write_in_l2: 101326826532
kstat.zfs.misc.arcstats.l2_write_io_in_progress: 3016312
kstat.zfs.misc.arcstats.l2_write_not_cacheable: 16631379447
kstat.zfs.misc.arcstats.l2_write_full: 158541
kstat.zfs.misc.arcstats.l2_write_buffer_iter: 998879
kstat.zfs.misc.arcstats.l2_write_pios: 908755
kstat.zfs.misc.arcstats.l2_write_buffer_bytes_scanned: 881025143301120
kstat.zfs.misc.arcstats.l2_write_buffer_list_iter: 62580701
kstat.zfs.misc.arcstats.l2_write_buffer_list_null_iter: 2040782
kstat.zfs.misc.vdev_cache_stats.delegations: 2167916
kstat.zfs.misc.vdev_cache_stats.hits: 2801310
kstat.zfs.misc.vdev_cache_stats.misses: 5448597
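(For reference, the snapshot-delete workaround I describe in the quoted mail
below amounts to something like this. It's only a sketch: going after the two
oldest snapshots is an arbitrary choice on my part; the mail just says "a
couple snapshot deletes".)

# List snapshots under the pool oldest-first and destroy the two oldest.
# -H drops headers, -s creation sorts ascending by creation time.
zfs list -H -r -t snapshot -o name -s creation san | head -n 2 | \
    while read snap; do
        zfs destroy "$snap"
    done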
On Tue, Oct 4, 2011 at 2:43 AM, Dave Cundiff <syshackmin@gmail.com> wrote:
> Hi,
>
> I'm running 8.2-RELEASE and running into an IO lockup on ZFS that is
> happening pretty regularly. The system is stock except for the
> following set in loader.conf:
>
> vm.kmem_size="30G"
> vfs.zfs.arc_max="22G"
> kern.hz=100
>
> I know the kmem settings aren't SUPPOSED to be necessary now, but my
> ZFS boxes were crashing until I added them. The machine has 24 gigs of
> RAM. The kern.hz=100 was to stretch out the l2arc bug that pops up at
> 28 days with it set to 1000.
>
> [root@san2 ~]# zpool status
>   pool: san
>  state: ONLINE
>  scrub: none requested
> config:
>
>         NAME         STATE     READ WRITE CKSUM
>         san          ONLINE       0     0     0
>           da1        ONLINE       0     0     0
>         logs
>           mirror     ONLINE       0     0     0
>             ad6s1b   ONLINE       0     0     0
>             ad14s1b  ONLINE       0     0     0
>         cache
>           ad6s1d     ONLINE       0     0     0
>           ad14s1d    ONLINE       0     0     0
>
> errors: No known data errors
>
>
> Here's a zpool iostat from a machine in trouble.
>
> san         9.08T  3.55T      0      0      0  7.92K
> san         9.08T  3.55T      0    447      0  5.77M
> san         9.08T  3.55T      0    309      0  2.83M
> san         9.08T  3.55T      0      0      0      0
> san         9.08T  3.55T     62      0  2.22M      0
> san         9.08T  3.55T      0      2      0  23.5K
> san         9.08T  3.55T      0      0      0      0
> san         9.08T  3.55T      0      0      0      0
> san         9.08T  3.55T      0    254      0  6.62M
> san         9.08T  3.55T      0    249      0  3.16M
> san         9.08T  3.55T      0      0      0      0
> san         9.08T  3.55T     34      0   491K      0
> san         9.08T  3.55T      0      6      0  62.7K
> san         9.08T  3.55T      0      0      0      0
> san         9.08T  3.55T      0     85      0  6.59M
> san         9.08T  3.55T      0      0      0      0
> san         9.08T  3.55T      0    452      0  4.88M
> san         9.08T  3.55T    109      0  3.12M      0
> san         9.08T  3.55T      0      0      0      0
> san         9.08T  3.55T      0      0      0  7.84K
> san         9.08T  3.55T      0    434      0  6.41M
> san         9.08T  3.55T      0      0      0      0
> san         9.08T  3.55T      0    304      0  2.90M
> san         9.08T  3.55T     37      0   628K      0
>
> It's supposed to look like:
>
> san         9.07T  3.56T    162    167  3.75M  6.09M
> san         9.07T  3.56T      5      0  47.4K      0
> san         9.07T  3.56T     19      0   213K      0
> san         9.07T  3.56T    120      0  3.26M      0
> san         9.07T  3.56T     92      0   741K      0
> san         9.07T  3.56T    114      0  2.86M      0
> san         9.07T  3.56T     72      0   579K      0
> san         9.07T  3.56T     14      0   118K      0
> san         9.07T  3.56T     24      0   213K      0
> san         9.07T  3.56T     25      0   324K      0
> san         9.07T  3.56T      8      0   126K      0
> san         9.07T  3.56T     28      0   505K      0
> san         9.07T  3.56T     15      0   126K      0
> san         9.07T  3.56T     11      0   158K      0
> san         9.07T  3.56T     19      0   356K      0
> san         9.07T  3.56T    198      0  3.55M      0
> san         9.07T  3.56T     21      0   173K      0
> san         9.07T  3.56T     18      0   150K      0
> san         9.07T  3.56T     23      0   260K      0
> san         9.07T  3.56T      9      0  78.3K      0
> san         9.07T  3.56T     21      0   173K      0
> san         9.07T  3.56T      2  4.59K  16.8K   142M
> san         9.07T  3.56T     12      0   103K      0
> san         9.07T  3.56T     26    454   312K  4.35M
> san         9.07T  3.56T    111      0  3.34M      0
> san         9.07T  3.56T     28      0   870K      0
> san         9.07T  3.56T     75      0  3.88M      0
> san         9.07T  3.56T     43      0  1.22M      0
> san         9.07T  3.56T     26      0   270K      0
>
> I don't know what triggers the problem but I know how to fix it. If I
> perform a couple snapshot deletes the IO will come back in line every
> single time. Fortunately I have LOTS of snapshots to delete.
>
> [root@san2 ~]# zfs list -r -t snapshot | wc -l
>     5236
> [root@san2 ~]# zfs list -r -t volume | wc -l
>       17
>
> Being fairly new to FreeBSD and ZFS I'm pretty clueless on where to
> begin tracking this down. I've been staring at gstat trying to see if
> a zvol is getting a big burst of writes that may be flooding the drive
> controller but I haven't caught anything yet. top -S -H shows
> zio_write_issue threads consuming massive amounts of CPU during the
> lockup. Normally they sit around 5-10%. Any suggestions on where I
> could start to track this down would be greatly appreciated.
>

--
Dave Cundiff
System Administrator
A2Hosting, Inc
http://www.a2hosting.com