From owner-freebsd-bugs@freebsd.org Sat Mar  7 15:24:20 2020
From: bugzilla-noreply@freebsd.org
To: bugs@FreeBSD.org
Subject: [Bug 244656] ZFS resilver in progress but which disks? And no progress?
Date: Sat, 07 Mar 2020 15:24:19 +0000

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=244656

            Bug ID: 244656
           Summary: ZFS resilver in progress but which disks? And no
                    progress?
           Product: Base System
           Version: 12.1-RELEASE
          Hardware: Any
                OS: Any
            Status: New
          Severity: Affects Some People
          Priority: ---
         Component: kern
          Assignee: bugs@FreeBSD.org
          Reporter: pen@lysator.liu.se

FreeBSD 12.1-RELEASE-p2

One of our servers seems to have developed some kind of problem in one of
the ZFS pools. It claims to be resilvering data, but I can't see any
progress at all in the output from "zpool status". "zpool iostat" and
"iostat -x" do indicate a lot of reads being done, though.
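A minimal way to confirm whether the scan is advancing at all is to sample
the scan counters twice and compare them. The following is a rough /bin/sh
sketch: it assumes the "scanned at ... issued at ..." wording that
"zpool status" prints below, and reuses this report's pool name DATA2.

  #!/bin/sh
  # Capture the resilver counter line twice, 60 seconds apart.
  # Byte-identical samples suggest the scan is stalled.
  POOL=DATA2
  before=$(zpool status "$POOL" | grep 'scanned at')
  sleep 60
  after=$(zpool status "$POOL" | grep 'scanned at')
  printf 'before: %s\nafter:  %s\n' "$before" "$after"
  [ "$before" = "$after" ] && echo "scan counters unchanged after 60s"

The exact wording of the counter line differs between ZFS versions, so
treat the grep pattern as an assumption that may need adjusting.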
# zpool status -v DATA2
  pool: DATA2
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sat Mar  7 13:12:06 2020
        10.4M scanned at 0/s, 56.1M issued at 0/s, 390T total
        0 resilvered, 0.00% done, no estimated completion time
config:

        NAME                      STATE     READ WRITE CKSUM
        DATA2                     ONLINE       0     0     0
          raidz2-0                ONLINE       0     0     0
            diskid/DISK-7PH8LBKG  ONLINE       0     0     0
            diskid/DISK-7PH8N1EG  ONLINE       0     0     0
            diskid/DISK-7PH3EZDG  ONLINE       0     0     0
            diskid/DISK-7PGN0VJG  ONLINE       0     0     0
            diskid/DISK-7PH8LVMG  ONLINE       0     0     0
            diskid/DISK-7PH8MZVG  ONLINE       0     0     0
            diskid/DISK-7PH5RASG  ONLINE       0     0     0
            diskid/DISK-7PH3EVJG  ONLINE       0     0     0
          raidz2-1                ONLINE       0     0     0
            diskid/DISK-7PH3EXRG  ONLINE       0     0     0
            diskid/DISK-7PH8LV5G  ONLINE       0     0     0
            diskid/DISK-7PH3EX2G  ONLINE       0     0     0
            diskid/DISK-7PH77Z6G  ONLINE       0     0     0
            diskid/DISK-7PH8LWXG  ONLINE       0     0     0
            diskid/DISK-7PH81L1G  ONLINE       0     0     0
            diskid/DISK-7PH8N0BG  ONLINE       0     0     0
            diskid/DISK-7PH8MNMG  ONLINE       0     0     0
          raidz2-2                ONLINE       0     0     0
            diskid/DISK-7PH3EWHG  ONLINE       0     0     0
            diskid/DISK-7PH8LTWG  ONLINE       0     0     0
            diskid/DISK-7PH5R02G  ONLINE       0     0     0
            diskid/DISK-7PH8L88G  ONLINE       0     0     0
            diskid/DISK-7PH81MUG  ONLINE       0     0     0
            diskid/DISK-7PH8LXMG  ONLINE       0     0     0
            diskid/DISK-7PH8N26G  ONLINE       0     0     0
            diskid/DISK-7PH93ZWG  ONLINE       0     0     0
          raidz2-3                ONLINE       0     0     0
            diskid/DISK-7PH8LEVG  ONLINE       0     0     0
            diskid/DISK-7PH8MYKG  ONLINE       0     0     0
            diskid/DISK-7PH8MXRG  ONLINE       0     0     0
            diskid/DISK-7PH4TB1G  ONLINE       0     0     0
            diskid/DISK-7PH8G8XG  ONLINE       0     0     0
            diskid/DISK-7PH9SP9G  ONLINE       0     0     0
            diskid/DISK-7PH8LARG  ONLINE       0     0     0
            diskid/DISK-7PH8KRYG  ONLINE       0     0     0
          raidz2-4                ONLINE       0     0     0
            diskid/DISK-7PH8LWMG  ONLINE       0     0     0
            diskid/DISK-7PH8LVXG  ONLINE       0     0     0
            diskid/DISK-7PH8LU1G  ONLINE       0     0     0
            diskid/DISK-7PH9451G  ONLINE       0     0     0
            diskid/DISK-7PH8ESMG  ONLINE       0     0     0
            diskid/DISK-7PH3EWVG  ONLINE       0     0     0
            diskid/DISK-7PH8EMKG  ONLINE       0     0     0
            diskid/DISK-7PH8LG9G  ONLINE       0     0     0
          raidz2-5                ONLINE       0     0     0
            diskid/DISK-7PH93X6G  ONLINE       0     0     0
            diskid/DISK-7PH8MXJG  ONLINE       0     0     0
            diskid/DISK-7PH3EY8G  ONLINE       0     0     0
            diskid/DISK-7PHAHG3G  ONLINE       0     0     0
            diskid/DISK-7PH8LVYG  ONLINE       0     0     0
            diskid/DISK-7PH8L8KG  ONLINE       0     0     0
            diskid/DISK-7PH8LVNG  ONLINE       0     0     0
            diskid/DISK-7PH8L9SG  ONLINE       0     0     0
        spares
          diskid/DISK-7PH8G10G    AVAIL
          diskid/DISK-7PH8GJ2G    AVAIL
          diskid/DISK-7PH8MZ8G    AVAIL
          diskid/DISK-7PGRM7TG    AVAIL
          diskid/DISK-7PH7UUAG    AVAIL
          diskid/DISK-7PH8GJTG    AVAIL

errors: Permanent errors have been detected in the following files:

        :<0x0>
        :<0xc49c12>
        :<0x19>
        :<0x1d>
        :<0x20>
        :<0x65714b>
        :<0x657157>
        :<0x65717b>
        :<0x657193>
        :<0xc530fe>
        DATA2/filur06.it.liu.se/DATA/old/isy/nobackup-server@auto-2020-01-21.23:00:00:/cvl/GARNICS_DATA/20120123sorted/plant__69683/pos1/cam9181310_TS1327623708156387.tif
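On the "which disks?" part of the summary: "zpool status" normally marks a
device that is actively being rebuilt with a "(resilvering)" note next to
the leaf vdev, and a disk replacement in progress shows up as a
"replacing-N" vdev. Neither marker appears above. A quick recheck along
these lines (a best-effort sh sketch, reusing this pool's name) confirms
whether any individual disk is tagged:

  # Look for per-device resilver/replace markers in the pool
  zpool status -v DATA2 | egrep 'resilvering|replacing' \
      || echo "no device is tagged as resilvering"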
# procstat -kka | egrep zfs
    0 100212 kernel           zfsvfs              mi_switch+0xe2 sleepq_wait+0x2c _sleep+0x247 taskqueue_thread_loop+0xf1 fork_exit+0x83 fork_trampoline+0xe
    0 100519 kernel           zfs_vn_rele_taskq   mi_switch+0xe2 sleepq_wait+0x2c _sleep+0x247 taskqueue_thread_loop+0xf1 fork_exit+0x83 fork_trampoline+0xe
    0 100891 kernel           zfs_vn_rele_taskq   mi_switch+0xe2 sleepq_wait+0x2c _sleep+0x247 taskqueue_thread_loop+0xf1 fork_exit+0x83 fork_trampoline+0xe
    0 102051 kernel           zfs_vn_rele_taskq   mi_switch+0xe2 sleepq_wait+0x2c _sleep+0x247 taskqueue_thread_loop+0xf1 fork_exit+0x83 fork_trampoline+0xe
   32 100190 zfskern          solthread 0xfffffff mi_switch+0xe2 sleepq_timedwait+0x2f _cv_timedwait_sbt+0x17a zthr_procedure+0xff fork_exit+0x83 fork_trampoline+0xe
   32 100191 zfskern          solthread 0xfffffff mi_switch+0xe2 sleepq_timedwait+0x2f _cv_timedwait_sbt+0x17a zthr_procedure+0xff fork_exit+0x83 fork_trampoline+0xe
   32 100192 zfskern          arc_dnlc_evicts_thr mi_switch+0xe2 sleepq_wait+0x2c _cv_wait+0x152 arc_dnlc_evicts_thread+0x14f fork_exit+0x83 fork_trampoline+0xe
   32 100194 zfskern          dbuf_evict_thread   mi_switch+0xe2 sleepq_timedwait+0x2f _cv_timedwait_sbt+0x17a dbuf_evict_thread+0x1c8 fork_exit+0x83 fork_trampoline+0xe
   32 100211 zfskern          l2arc_feed_thread   mi_switch+0xe2 sleepq_timedwait+0x2f _cv_timedwait_sbt+0x17a l2arc_feed_thread+0x239 fork_exit+0x83 fork_trampoline+0xe
   32 100482 zfskern          trim zroot          mi_switch+0xe2 sleepq_timedwait+0x2f _cv_timedwait_sbt+0x17a trim_thread+0x120 fork_exit+0x83 fork_trampoline+0xe
   32 100552 zfskern          txg_thread_enter    mi_switch+0xe2 sleepq_wait+0x2c _cv_wait+0x152 txg_quiesce_thread+0xbb fork_exit+0x83 fork_trampoline+0xe
   32 100553 zfskern          txg_thread_enter    mi_switch+0xe2 sleepq_timedwait+0x2f _cv_timedwait_sbt+0x17a txg_sync_thread+0x47f fork_exit+0x83 fork_trampoline+0xe
   32 100554 zfskern          solthread 0xfffffff mi_switch+0xe2 sleepq_wait+0x2c _cv_wait+0x152 zthr_procedure+0x117 fork_exit+0x83 fork_trampoline+0xe
   32 100555 zfskern          solthread 0xfffffff mi_switch+0xe2 sleepq_wait+0x2c _cv_wait+0x152 zthr_procedure+0x117 fork_exit+0x83 fork_trampoline+0xe
   32 100808 zfskern          trim DATA2          mi_switch+0xe2 sleepq_timedwait+0x2f _cv_timedwait_sbt+0x17a trim_thread+0x120 fork_exit+0x83 fork_trampoline+0xe
   32 101517 zfskern          txg_thread_enter    mi_switch+0xe2 sleepq_wait+0x2c _cv_wait+0x152 txg_quiesce_thread+0xbb fork_exit+0x83 fork_trampoline+0xe
   32 101518 zfskern          txg_thread_enter    mi_switch+0xe2 sleepq_wait+0x2c _cv_wait+0x152 zio_wait+0x9b dbuf_read+0x669 dnode_hold_impl+0x1af dmu_bonus_hold+0x1d dsl_deadlist_open+0x49 dsl_dataset_hold_obj+0x3d5 dsl_scan_sync+0xf65 spa_sync+0xb57 txg_sync_thread+0x238 fork_exit+0x83 fork_trampoline+0xe
   32 101519 zfskern          solthread 0xfffffff mi_switch+0xe2 sleepq_wait+0x2c _cv_wait+0x152 zthr_procedure+0x117 fork_exit+0x83 fork_trampoline+0xe
   32 101520 zfskern          solthread 0xfffffff mi_switch+0xe2 sleepq_wait+0x2c _cv_wait+0x152 zthr_procedure+0x117 fork_exit+0x83 fork_trampoline+0xe
   32 102005 zfskern          trim DATA3          mi_switch+0xe2 sleepq_timedwait+0x2f _cv_timedwait_sbt+0x17a trim_thread+0x120 fork_exit+0x83 fork_trampoline+0xe
   32 102261 zfskern          txg_thread_enter    mi_switch+0xe2 sleepq_wait+0x2c _cv_wait+0x152 txg_quiesce_thread+0xbb fork_exit+0x83 fork_trampoline+0xe
   32 102262 zfskern          txg_thread_enter    mi_switch+0xe2 sleepq_wait+0x2c _sleep+0x247 taskqueue_quiesce+0x114 dsl_scan_sync+0xd2a spa_sync+0xb57 txg_sync_thread+0x238 fork_exit+0x83 fork_trampoline+0xe
   32 102263 zfskern          solthread 0xfffffff mi_switch+0xe2 sleepq_wait+0x2c _cv_wait+0x152 zthr_procedure+0x117 fork_exit+0x83 fork_trampoline+0xe
   32 102264 zfskern          solthread 0xfffffff mi_switch+0xe2 sleepq_wait+0x2c _cv_wait+0x152 zthr_procedure+0x117 fork_exit+0x83 fork_trampoline+0xe
   32 102265 zfskern          zvol DATA3/sekur-is mi_switch+0xe2 sleepq_wait+0x2c _sleep+0x247 zvol_geom_worker+0x16d fork_exit+0x83 fork_trampoline+0xe
   32 102266 zfskern          zvol DATA3/sekur-is mi_switch+0xe2 sleepq_wait+0x2c _sleep+0x247 zvol_geom_worker+0x16d fork_exit+0x83 fork_trampoline+0xe
   32 102267 zfskern          zvol DATA3/sekur-is mi_switch+0xe2 sleepq_wait+0x2c _sleep+0x247 zvol_geom_worker+0x16d fork_exit+0x83 fork_trampoline+0xe
...
   32 102415 zfskern          zvol DATA3/sekur-is mi_switch+0xe2 sleepq_wait+0x2c _sleep+0x247 zvol_geom_worker+0x16d fork_exit+0x83 fork_trampoline+0xe
   32 102416 zfskern          zvol DATA3/sekur-is mi_switch+0xe2 sleepq_wait+0x2c _sleep+0x247 zvol_geom_worker+0x16d fork_exit+0x83 fork_trampoline+0xe
 2640 100568 zfsd             -                   mi_switch+0xe2 sleepq_catch_signals+0x425 sleepq_wait_sig+0xf _cv_wait_sig+0x154 seltdwait+0xbf kern_poll+0x43d sys_poll+0x50 amd64_syscall+0x364 fast_syscall_common+0x101
20980 100567 zfs.orig         -                   zap_cursor_retrieve+0x70 zap_value_search+0x8f dsl_dataset_get_snapname+0x90 dsl_dataset_name+0x34 dsl_dataset_stats+0xf4 dmu_objset_stats+0x1d zfs_ioc_objset_stats_impl+0x50 zfs_ioc_dataset_list_next+0x143 zfsdev_ioctl+0x72e devfs_ioctl+0xad VOP_IOCTL_APV+0x7c vn_ioctl+0x16a devfs_ioctl_f+0x1f kern_ioctl+0x2be sys_ioctl+0x15d amd64_syscall+0x364 fast_syscall_common+0x101
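In the stacks above, one txg_sync thread (tid 101518, presumably DATA2's,
since that is the pool being resilvered) is blocked in dsl_scan_sync ->
zio_wait, i.e. the pool's sync thread is waiting on a read issued by the
scan. Re-sampling the kernel stacks a few times can show whether it is
stuck on a single I/O or merely progressing slowly. A sketch, with pid 32
taken from the zfskern entries above:

  #!/bin/sh
  # Sample the zfskern kernel stacks three times, 10 seconds apart.
  # A txg_sync/dsl_scan stack that never changes suggests a hung zio.
  for i in 1 2 3; do
      date
      procstat -kk 32 | egrep 'txg_sync|dsl_scan'
      sleep 10
  done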
# zpool iostat -v DATA2 10
                              capacity     operations    bandwidth
pool                        alloc   free   read  write   read  write
------------------------   -----  -----  -----  -----  -----  -----
DATA2                        390T  44.8T    871      0  3.68M  5.56K
  raidz2                    66.4T  6.10T    167      0   726K    883
    diskid/DISK-7PH8LBKG        -      -     22      0  92.6K    287
    diskid/DISK-7PH8N1EG        -      -     22      0  92.4K    296
    diskid/DISK-7PH3EZDG        -      -     22      0  93.6K    284
    diskid/DISK-7PGN0VJG        -      -     22      0  92.6K    248
    diskid/DISK-7PH8LVMG        -      -     22      0  92.9K    251
    diskid/DISK-7PH8MZVG        -      -     22      0  93.2K    269
    diskid/DISK-7PH5RASG        -      -     22      0  92.5K    284
    diskid/DISK-7PH3EVJG        -      -     22      0  92.3K    287
  raidz2                    66.3T  6.16T    163      0   714K    883
    diskid/DISK-7PH3EXRG        -      -     21      0  91.7K    248
    diskid/DISK-7PH8LV5G        -      -     21      0  91.9K    251
    diskid/DISK-7PH3EX2G        -      -     21      0  91.0K    275
    diskid/DISK-7PH77Z6G        -      -     21      0  92.1K    287
    diskid/DISK-7PH8LWXG        -      -     21      0  90.8K    320
    diskid/DISK-7PH81L1G        -      -     21      0  91.0K    284
    diskid/DISK-7PH8N0BG        -      -     21      0  91.7K    263
    diskid/DISK-7PH8MNMG        -      -     21      0  91.5K    266
  raidz2                    66.3T  6.22T    166      0   719K   1013
    diskid/DISK-7PH3EWHG        -      -     22      0  92.3K    246
    diskid/DISK-7PH8LTWG        -      -     21      0  91.0K    231
    diskid/DISK-7PH5R02G        -      -     21      0  91.8K    231
    diskid/DISK-7PH8L88G        -      -     21      0  93.1K    219
    diskid/DISK-7PH81MUG        -      -     21      0  92.1K    225
    diskid/DISK-7PH8LXMG        -      -     21      0  92.1K    237
    diskid/DISK-7PH8N26G        -      -     21      0  92.5K    237
    diskid/DISK-7PH93ZWG        -      -     21      0  91.0K    240
  raidz2                    66.4T  6.14T    132      0   581K   1013
    diskid/DISK-7PH8LEVG        -      -     17      0  73.0K    237
    diskid/DISK-7PH8MYKG        -      -     17      0  73.3K    219
    diskid/DISK-7PH8MXRG        -      -     17      0  73.4K    192
    diskid/DISK-7PH4TB1G        -      -     17      0  74.5K    222
    diskid/DISK-7PH8G8XG        -      -     17      0  73.9K    231
    diskid/DISK-7PH9SP9G        -      -     17      0  73.4K    219
    diskid/DISK-7PH8LARG        -      -     17      0  73.8K    201
    diskid/DISK-7PH8KRYG        -      -     17      0  73.6K    231
  raidz2                    62.4T  10.1T     69      0   298K   1013
    diskid/DISK-7PH8LWMG        -      -      9      0  38.4K    210
    diskid/DISK-7PH8LVXG        -      -      9      0  38.3K    225
    diskid/DISK-7PH8LU1G        -      -      9      0  37.7K    213
    diskid/DISK-7PH9451G        -      -      9      0  38.2K    210
    diskid/DISK-7PH8ESMG        -      -      9      0  38.2K    237
    diskid/DISK-7PH3EWVG        -      -      9      0  37.8K    240
    diskid/DISK-7PH8EMKG        -      -      9      0  38.1K    228
    diskid/DISK-7PH8LG9G        -      -      9      0  38.2K    213
  raidz2                    62.7T  10.1T    172      0   733K    883
    diskid/DISK-7PH93X6G        -      -     22      0  94.1K    246
    diskid/DISK-7PH8MXJG        -      -     22      0  93.7K    266
    diskid/DISK-7PH3EY8G        -      -     22      0  93.2K    257
    diskid/DISK-7PHAHG3G        -      -     22      0  93.9K    281
    diskid/DISK-7PH8LVYG        -      -     22      0  93.9K    281
    diskid/DISK-7PH8L8KG        -      -     22      0  94.4K    237
    diskid/DISK-7PH8LVNG        -      -     22      0  93.1K    243
    diskid/DISK-7PH8L9SG        -      -     22      0  93.4K    266
------------------------   -----  -----  -----  -----  -----  -----

-- 
You are receiving this mail because:
You are the assignee for the bug.