From: Mark Millard <marklmi@yahoo.com>
Date: Tue, 11 Sep 2018 19:24:00 -0700
Subject: Re: FYI: devel/kyua 14 failures for head -r338518M based build in a
 Pine64+ 2GB (aarch64 / cortexA53 / A64) context [md related processes left
 waiting (and more)]
To: freebsd-arm, FreeBSD Current
Message-Id: <3E3E4449-E132-4F59-87C8-22A0AD2092BF@yahoo.com>

[After the run, top -CawSopid shows something interesting/odd: lots of
g_eli[?] and md?? processes are still around, in geli:w state for the
g_eli[?] ones and in mdwait for the md?? ones. Also there are 4
processes in aiordy state.]

On 2018-Sep-11, at 8:48 AM, Mark Millard wrote:

> [Adding a listing of the broken tests, but ignoring the sys/cddl/zfs/
> ones. lib/libc/string/memcmp_test:diff is one of them.]
>
> On 2018-Sep-11, at 2:44 AM, Mark Millard wrote:
>
>> [No zfs use, just a UFS e.MMC filesystem on a microsd adapter.]
>>
>> I got 14 failures. I've not enabled any configuration properties.
>>
>> I do not know if official devel/kyua tests are part of the head ->
>> stable transition for any tier or not. I'm not claiming to know if
>> anything here could be a significant issue.
>>
>> Someone may want to test an official aarch64 build rather than presume
>> that my personal build is good enough. But I expect that its results
>> should be strongly suggestive, even if an official test run uses a more
>> normal-for-FreeBSD configuration of an aarch64 system.
>>
>> The e.MMC is V5.1, is operating in DDR52 mode, and is faster than normal
>> configurations for the Pine64+ 2GB. TRIM is in use for the UFS file
>> system. This might let some things pass that otherwise would time out.
>>
>>
>> ===> Failed tests
>> lib/libc/resolv/resolv_test:getaddrinfo_test  ->  failed: /usr/src/lib/libc/tests/resolv/resolv_test.c:299: run_tests(_hostlist_file, METHOD_GETADDRINFO) == 0 not met  [98.834s]
>> lib/libc/ssp/ssp_test:vsnprintf  ->  failed: atf-check failed; see the output of the test for details  [0.107s]
>> lib/libc/ssp/ssp_test:vsprintf  ->  failed: atf-check failed; see the output of the test for details  [0.105s]
>> lib/libproc/proc_test:symbol_lookup  ->  failed: /usr/src/lib/libproc/tests/proc_test.c:143: memcmp(sym, &tsym, sizeof(*sym)) != 0  [0.057s]
>> lib/msun/trig_test:accuracy  ->  failed: 3 checks failed; see output for more details  [0.013s]
>> lib/msun/trig_test:special  ->  failed: 8 checks failed; see output for more details  [0.013s]
>> local/kyua/utils/stacktrace_test:dump_stacktrace__integration  ->  failed: Line 391: atf::utils::grep_file("#0", exit_handle.stderr_file().str()) not met  [4.015s]
>> local/kyua/utils/stacktrace_test:dump_stacktrace__ok  ->  failed: Line 420: atf::utils::grep_file("^frame 1$", exit_handle.stderr_file().str()) not met  [4.470s]
>> local/kyua/utils/stacktrace_test:dump_stacktrace_if_available__append  ->  failed: Line 560: atf::utils::grep_file("frame 1", exit_handle.stderr_file().str()) not met  [4.522s]
>> local/kyua/utils/stacktrace_test:find_core__found__long  ->  failed: Core dumped, but no candidates found  [3.988s]
>> local/kyua/utils/stacktrace_test:find_core__found__short  ->  failed: Core dumped, but no candidates found  [4.014s]
>> sys/kern/ptrace_test:ptrace__PT_STEP_with_signal  ->  failed: /usr/src/tests/sys/kern/ptrace_test.c:3465: WSTOPSIG(status) == SIGABRT not met  [0.017s]
>> usr.bin/indent/functional_test:nsac  ->  failed: atf-check failed; see the output of the test for details  [0.151s]
>> usr.bin/indent/functional_test:sac  ->  failed: atf-check failed; see the output of the test for details  [0.150s]
>> ===> Summary
>> Results read from /root/.kyua/store/results.usr_tests.20180911-070147-413583.db
>> Test cases: 7301 total, 212 skipped, 37 expected failures, 116 broken, 14 failed
>> Total time: 6688.125s
>>
>>
>> I'll note that the console reported over 73720 messages like the one
>> below (with actual figures where I've listed ????'s):
>>
>> md????.eli: Failed to authenticate ???? bytes of data at offset ????.
>>
>> There are also device created and destroyed/removed notices with related
>> material. Overall there were over 84852 lines reported with "GEOM_ELI:"
>> on the line.
>>
>> This did not prevent tests from passing.
>>
>> (The huge console output is unfortunate in my view: it makes finding
>> interesting console messages a problem while watching messages
>> go by.)
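[Side note on the console noise: something like the following should make
the log reviewable after the fact, assuming the kernel messages were also
captured by syslog in /var/log/messages (that path is an assumption;
adjust for the local syslog setup):

  # count the GEOM_ELI lines, then page through whatever else is left
  grep -c 'GEOM_ELI:' /var/log/messages
  grep -v 'GEOM_ELI:' /var/log/messages | less

That does not help while watching messages scroll by live, of course.]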
>>
>>
>>
>> I did get the console message block:
>>
>> kern.ipc.maxpipekva exceeded; see tuning(7)
>> Freed UMA keg (rtentry) was not empty (18 items). Lost 1 pages of memory.
>> Sep 11 01:36:25 pine64 kernel: nd6_dad_timer: called with non-tentative address (epair2Freed UMA keg (rtentry) was not empty (18 items). Lost 1 pages of memory.
>> a)
>> Freed UMA keg (rtentry) was not empty (18 items). Lost 1 pages of memory.
>> Freed UMA keg (rtentry) was not empty (18 items). Lost 1 pages of memory.
>> Freed UMA keg (rtentry) was not empty (18 items). Lost 1 pages of memory.
>> Freed UMA keg (rtentry) was not empty (18 items). Lost 1 pages of memory.
>> Freed UMA keg (rtentry) was not empty (18 items). Lost 1 pages of memory.
>> Freed UMA keg (rtentry) was not empty (18 items). Lost 1 pages of memory.
>> Freed UMA keg (rtentry) was not empty (18 items). Lost 1 pages of memory.
>> Freed UMA keg (rtentry) was not empty (18 items). Lost 1 pages of memory.
>> Freed UMA keg (rtentry) was not empty (18 items). Lost 1 pages of memory.
>> Freed UMA keg (rtentry) was not empty (18 items). Lost 1 pages of memory.
>> Freed UMA keg (rtentry) was not empty (18 items). Lost 1 pages of memory.
>> Freed UMA keg (rtentry) was not empty (18 items). Lost 1 pages of memory.
>> Freed UMA keg (rtentry) was not empty (18 items). Lost 1 pages of memory.
>> Freed UMA keg (rtentry) was not empty (18 items). Lost 1 pages of memory.
>> Freed UMA keg (rtentry) was not empty (18 items). Lost 1 pages of memory.
>> Freed UMA keg (rtentry) was not empty (18 items). Lost 1 pages of memory.
>>
>> But no failure reports seemed to be associated.
>>
>> Still, I wonder if the block of messages is significant.
>>
>>
>> Some other console messages seen (extracted from various places):
>>
>> GEOM_MIRROR: Request failed (error=5). md29[READ(offset=524288, length=2048)]
>> GEOM_MIRROR: Request failed (error=6). md28[READ(offset=1048576, length=2048)]
>> GEOM_MIRROR: Request failed (error=5). md28[WRITE(offset=0, length=2048)]
>> GEOM_MIRROR: Cannot write metadata on md29 (device=mirror.KRYGpE, error=5).
>> GEOM_MIRROR: Cannot update metadata on disk md29 (error=5).
>> GEOM_MIRROR: Request failed (error=5). md28[READ(offset=0, length=131072)]
>> GEOM_MIRROR: Synchronization request failed (error=5). mirror/mirror.YQGUHJ[READ(offset=0, length=131072)]
>> GEOM_MIRROR: Request failed (error=5). md29[READ(offset=0, length=131072)]
>>
>> Again no failure reports seemed to be associated.
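[For reference on the kern.ipc.maxpipekva message above: the limit and its
current use can be inspected directly, and raising it is a boot-time
tunable change per tuning(7). A minimal sketch (the value shown is only an
illustration, not a recommendation):

  # current limit and current pipe KVA use
  sysctl kern.ipc.maxpipekva kern.ipc.pipekva
  # to raise it: add a line like the following to /boot/loader.conf
  # and reboot (example value only):
  #   kern.ipc.maxpipekva="67108864"
]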
>>
>>
>> Some or all of the following may be normal/expected:
>>
>> Sep 11 00:05:44 pine64 kernel: pid 21057 (process_test), uid 0: exited on signal 3 (core dumped)
>> Sep 11 00:05:49 pine64 kernel: pid 21071 (sanity_test), uid 0: exited on signal 6 (core dumped)
>> Sep 11 00:05:54 pine64 kernel: pid 21074 (sanity_test), uid 0: exited on signal 6 (core dumped)
>> Sep 11 00:05:58 pine64 kernel: pid 21077 (sanity_test), uid 0: exited on signal 6 (core dumped)
>> Sep 11 00:06:03 pine64 kernel: pid 21080 (sanity_test), uid 0: exited on signal 6 (core dumped)
>> Sep 11 00:06:44 pine64 kernel: pid 23170 (cpp_helpers), uid 0: exited on signal 6 (core dumped)
>> Sep 11 00:06:49 pine64 kernel: pid 23306 (c_helpers), uid 977: exited on signal 6 (core dumped)
>> Sep 11 00:06:54 pine64 kernel: pid 23308 (cpp_helpers), uid 977: exited on signal 6 (core dumped)
>> Sep 11 00:18:44 pine64 kernel: pid 38227 (assert_test), uid 0: exited on signal 6
>> Sep 11 00:51:38 pine64 kernel: pid 39883 (getenv_test), uid 0: exited on signal 11 (core dumped)
>> Sep 11 00:51:51 pine64 kernel: pid 40063 (memcmp_test), uid 0: exited on signal 6 (core dumped)
>> Sep 11 00:53:26 pine64 kernel: pid 40627 (wait_test), uid 0: exited on signal 11 (core dumped)
>> Sep 11 00:53:27 pine64 kernel: pid 40632 (wait_test), uid 0: exited on signal 3
>> Sep 11 00:53:27 pine64 kernel: pid 40634 (wait_test), uid 0: exited on signal 3
>> Sep 11 07:53:32 pine64 h_fgets[41013]: stack overflow detected; terminated
>> Sep 11 00:53:32 pine64 kernel: pid 41013 (h_fgets), uid 0: exited on signal 6
>> Sep 11 07:53:33 pine64 h_gets[41049]: stack overflow detected; terminated
>> Sep 11 00:53:33 pine64 kernel: pid 41049 (h_gets), uid 0: exited on signal 6
>> Sep 11 07:53:33 pine64 h_memcpy[41066]: stack overflow detected; terminated
>> Sep 11 00:53:33 pine64 kernel: pid 41066 (h_memcpy), uid 0: exited on signal 6
>> Sep 11 07:53:33 pine64 h_memmove[41083]: stack overflow detected; terminated
>> Sep 11 00:53:33 pine64 kernel: pid 41083 (h_memmove), uid 0: exited on signal 6
>> Sep 11 07:53:33 pine64 h_memset[41100]: stack overflow detected; terminated
>> Sep 11 00:53:33 pine64 kernel: pid 41100 (h_memset), uid 0: exited on signal 6
>> Sep 11 07:53:33 pine64 h_read[41135]: stack overflow detected; terminated
>> Sep 11 00:53:33 pine64 kernel: pid 41135 (h_read), uid 0: exited on signal 6
>> Sep 11 07:53:33 pine64 h_readlink[41152]: stack overflow detected; terminated
>> Sep 11 00:53:33 pine64 kernel: pid 41152 (h_readlink), uid 0: exited on signal 6
>> Sep 11 07:53:33 pine64 h_snprintf[41169]: stack overflow detected; terminated
>> Sep 11 00:53:33 pine64 kernel: pid 41169 (h_snprintf), uid 0: exited on signal 6
>> Sep 11 07:53:34 pine64 h_sprintf[41186]: stack overflow detected; terminated
>> Sep 11 00:53:34 pine64 kernel: pid 41186 (h_sprintf), uid 0: exited on signal 6
>> Sep 11 07:53:34 pine64 h_stpcpy[41203]: stack overflow detected; terminated
>> Sep 11 00:53:34 pine64 kernel: pid 41203 (h_stpcpy), uid 0: exited on signal 6
>> Sep 11 07:53:34 pine64 h_stpncpy[41220]: stack overflow detected; terminated
>> Sep 11 00:53:34 pine64 kernel: pid 41220 (h_stpncpy), uid 0: exited on signal 6
>> Sep 11 07:53:34 pine64 h_strcat[41237]: stack overflow detected; terminated
>> Sep 11 00:53:34 pine64 kernel: pid 41237 (h_strcat), uid 0: exited on signal 6
>> Sep 11 07:53:34 pine64 h_strcpy[41254]: stack overflow detected; terminated
>> Sep 11 00:53:34 pine64 kernel: pid 41254 (h_strcpy), uid 0: exited on signal 6
>> Sep 11 07:53:34 pine64 h_strncat[41271]: stack overflow detected; terminated
>> Sep 11 00:53:34 pine64 kernel: pid 41271 (h_strncat), uid 0: exited on signal 6
>> Sep 11 07:53:34 pine64 h_strncpy[41288]: stack overflow detected; terminated
>> Sep 11 00:53:34 pine64 kernel: pid 41288 (h_strncpy), uid 0: exited on signal 6
>> Sep 11 00:53:41 pine64 kernel: pid 41478 (target_prog), uid 0: exited on signal 5 (core dumped)
>> Sep 11 00:56:53 pine64 kernel: pid 43967 (exponential_test), uid 0: exited on signal 6 (core dumped)
>> Sep 11 00:56:58 pine64 kernel: pid 43972 (fenv_test), uid 0: exited on signal 6 (core dumped)
>> Sep 11 00:57:02 pine64 kernel: pid 43974 (fma_test), uid 0: exited on signal 6 (core dumped)
>> Sep 11 00:57:07 pine64 kernel: pid 43990 (invtrig_test), uid 0: exited on signal 6 (core dumped)
>> Sep 11 00:57:13 pine64 kernel: pid 44067 (logarithm_test), uid 0: exited on signal 6 (core dumped)
>> Sep 11 00:57:17 pine64 kernel: pid 44069 (lrint_test), uid 0: exited on signal 6 (core dumped)
>> Sep 11 00:57:21 pine64 kernel: pid 44073 (nearbyint_test), uid 0: exited on signal 6 (core dumped)
>> Sep 11 00:57:26 pine64 kernel: pid 44075 (next_test), uid 0: exited on signal 6 (core dumped)
>> Sep 11 00:57:31 pine64 kernel: pid 44100 (rem_test), uid 0: exited on signal 6 (core dumped)
>> Sep 11 00:57:43 pine64 kernel: pid 44248 (exhaust_test), uid 0: exited on signal 11 (core dumped)
>>
>> I'm not sure that they all would be expected.
>
> ===> Broken tests
> lib/libc/string/memcmp_test:diff  ->  broken: Premature exit; test case received signal 6 (core dumped)  [3.962s]
> lib/libregex/exhaust_test:regcomp_too_big  ->  broken: Premature exit; test case received signal 11 (core dumped)  [8.997s]
> lib/msun/exponential_test:main  ->  broken: Received signal 6  [3.893s]
> lib/msun/fenv_test:main  ->  broken: Received signal 6  [4.326s]
> lib/msun/fma_test:main  ->  broken: Received signal 6  [4.315s]
> lib/msun/invtrig_test:main  ->  broken: Received signal 6  [4.345s]
> lib/msun/logarithm_test:main  ->  broken: Received signal 6  [3.921s]
> lib/msun/lrint_test:main  ->  broken: Received signal 6  [4.416s]
> lib/msun/nearbyint_test:main  ->  broken: Received signal 6  [4.389s]
> lib/msun/next_test:main  ->  broken: Received signal 6  [4.401s]
> lib/msun/rem_test:main  ->  broken: Received signal 6  [4.385s]
> sbin/growfs/legacy_test:main  ->  broken: TAP test program yielded invalid data: Load of '/tmp/kyua.5BsFl9/3782/stdout.txt' failed: Reported plan differs from actual executed tests  [0.476s]
>
> sys/cddl/zfs/ ones ignored here: no zfs context.
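For digging into any single failed or broken case later, re-running just
that test case seems the easiest route; a sketch of what I would try
(paths assume the stock /usr/tests layout and the results file named in
the summary above):

  cd /usr/tests
  kyua debug lib/msun/fenv_test:main
  # or pull the already-captured output for all failed/broken cases:
  kyua report --verbose \
    --results-file=/root/.kyua/store/results.usr_tests.20180911-070147-413583.db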
One more thing of note after kyua completed (the Pine64+ 2GB has been
mostly idle since then), top shows:

last pid: 59782;  load averages: 0.22, 0.25, 0.19    up 0+19:13:11  19:11:36
122 processes: 2 running, 119 sleeping, 1 waiting
CPU:  0.0% user,  0.0% nice,  0.0% system,  0.1% interrupt, 99.9% idle
Mem: 2164K Active, 1474M Inact, 14M Laundry, 365M Wired, 202M Buf, 122M Free
Swap: 3584M Total, 3584M Free

  PID USERNAME    THR PRI NICE   SIZE    RES  SWAP STATE   C   TIME    CPU COMMAND
82157 root          1  20    -      0    16K     0 geli:w  3   0:00  0.00% [g_eli[3] md27]
82156 root          1  20    -      0    16K     0 geli:w  2   0:00  0.00% [g_eli[2] md27]
82155 root          1  20    -      0    16K     0 geli:w  1   0:00  0.00% [g_eli[1] md27]
82154 root          1  20    -      0    16K     0 geli:w  0   0:00  0.00% [g_eli[0] md27]
82147 root          1  -8    -      0    16K     0 mdwait  0   0:00  0.00% [md27]
82001 root          1  -8    -      0    16K     0 mdwait  3   0:00  0.00% [md26]
81941 root          1  20    -      0    16K     0 geli:w  3   0:00  0.00% [g_eli[3] md25]
81940 root          1  20    -      0    16K     0 geli:w  2   0:00  0.00% [g_eli[2] md25]
81939 root          1  20    -      0    16K     0 geli:w  1   0:00  0.00% [g_eli[1] md25]
81938 root          1  20    -      0    16K     0 geli:w  0   0:00  0.00% [g_eli[0] md25]
81925 root          1  -8    -      0    16K     0 mdwait  1   0:00  0.00% [md25]
81777 root          1  20    -      0    16K     0 geli:w  3   0:00  0.00% [g_eli[3] md24p1]
81776 root          1  20    -      0    16K     0 geli:w  2   0:00  0.00% [g_eli[2] md24p1]
81775 root          1  20    -      0    16K     0 geli:w  1   0:00  0.00% [g_eli[1] md24p1]
81774 root          1  20    -      0    16K     0 geli:w  0   0:00  0.00% [g_eli[0] md24p1]
81701 root          1  -8    -      0    16K     0 mdwait  2   0:00  0.00% [md24]
81598 root          1  -8    -      0    16K     0 mdwait  0   0:00  0.00% [md23]
72532 root          1  -8    -      0    16K     0 mdwait  0   0:01  0.00% [md22]
70666 root          1  -8    -      0    16K     0 mdwait  2   0:01  0.00% [md21]
70485 root          1  20    -      0    16K     0 geli:w  3   0:00  0.00% [g_eli[3] md20]
70484 root          1  20    -      0    16K     0 geli:w  2   0:00  0.00% [g_eli[2] md20]
70483 root          1  20    -      0    16K     0 geli:w  1   0:00  0.00% [g_eli[1] md20]
70482 root          1  20    -      0    16K     0 geli:w  0   0:00  0.00% [g_eli[0] md20]
70479 root          1  -8    -      0    16K     0 mdwait  2   0:00  0.00% [md20]
70413 root          1  20    -      0    16K     0 geli:w  3   0:00  0.00% [g_eli[3] md19.nop]
70412 root          1  20    -      0    16K     0 geli:w  2   0:00  0.00% [g_eli[2] md19.nop]
70411 root          1  20    -      0    16K     0 geli:w  1   0:00  0.00% [g_eli[1] md19.nop]
70410 root          1  20    -      0    16K     0 geli:w  0   0:00  0.00% [g_eli[0] md19.nop]
70393 root          1  -8    -      0    16K     0 mdwait  3   0:00  0.00% [md19]
70213 root          1  20    -      0    16K     0 geli:w  3   0:00  0.00% [g_eli[3] md18]
70212 root          1  20    -      0    16K     0 geli:w  2   0:00  0.00% [g_eli[2] md18]
70211 root          1  20    -      0    16K     0 geli:w  1   0:00  0.00% [g_eli[1] md18]
70210 root          1  20    -      0    16K     0 geli:w  0   0:00  0.00% [g_eli[0] md18]
70193 root          1  -8    -      0    16K     0 mdwait  2   0:00  0.00% [md18]
70088 root          1  -8    -      0    16K     0 mdwait  2   0:00  0.00% [md17]
59763 root          1  -8    -      0    16K     0 mdwait  3   0:01  0.00% [md16]
49482 root          1  -8    -      0    16K     0 mdwait  2   0:01  0.00% [md15]
27196 root          1  -8    -      0    16K     0 mdwait  0   0:04  0.00% [md14]
27018 root          1  -8    -      0    16K     0 mdwait  0   0:00  0.00% [md13]
26956 root          1  -8    -      0    16K     0 mdwait  0   0:00  0.00% [md12]
26364 root          1  -8    -      0    16K     0 mdwait  0   0:00  0.00% [md11]
16100 root          1  -8    -      0    16K     0 mdwait  2   0:03  0.00% [md10]
15556 root          1  -8    -      0    16K     0 mdwait  0   0:00  0.00% [md9]
15498 root          1  20    -      0    16K     0 geli:w  3   0:00  0.00% [g_eli[3] md8]
15497 root          1  20    -      0    16K     0 geli:w  2   0:00  0.00% [g_eli[2] md8]
15496 root          1  20    -      0    16K     0 geli:w  1   0:00  0.00% [g_eli[1] md8]
15495 root          1  20    -      0    16K     0 geli:w  0   0:00  0.00% [g_eli[0] md8]
15462 root          1  -8    -      0    16K     0 mdwait  2   0:00  0.00% [md8]
13400 root          1  -8    -      0    16K     0 mdwait  0   0:00  0.00% [md7]
13101 root          1  -8    -      0    16K     0 mdwait  2   0:00  0.00% [md6]
13005 root          1  20    -      0    16K     0 geli:w  3   0:00  0.00% [g_eli[3] md5]
13004 root          1  20    -      0    16K     0 geli:w  2   0:00  0.00% [g_eli[2] md5]
13003 root          1  20    -      0    16K     0 geli:w  1   0:00  0.00% [g_eli[1] md5]
13002 root          1  20    -      0    16K     0 geli:w  0   0:00  0.00% [g_eli[0] md5]
12995 root          1  -8    -      0    16K     0 mdwait  0   0:00  0.00% [md5]
12877 root          1  -8    -      0    16K     0 mdwait  3   0:00  0.00% [md4]
12719 root          1  -8    -      0    16K     0 mdwait  0   0:00  0.00% [md3]
12621 root          1  -8    -      0    16K     0 mdwait  0   0:00  0.00% [md2]
12559 root          1  20    -      0    16K     0 geli:w  3   0:00  0.00% [g_eli[3] md1]
12558 root          1  20    -      0    16K     0 geli:w  2   0:00  0.00% [g_eli[2] md1]
12557 root          1  20    -      0    16K     0 geli:w  1   0:00  0.00% [g_eli[1] md1]
12556 root          1  20    -      0    16K     0 geli:w  0   0:00  0.00% [g_eli[0] md1]
12549 root          1  -8    -      0    16K     0 mdwait  3   0:00  0.00% [md1]
12477 root          1  -8    -      0    16K     0 mdwait  2   0:00  0.00% [md0]
 1345 root          1 -16    -      0    16K     0 aiordy  1   0:00  0.00% [aiod4]
 1344 root          1 -16    -      0    16K     0 aiordy  3   0:00  0.00% [aiod3]
 1343 root          1 -16    -      0    16K     0 aiordy  2   0:00  0.00% [aiod2]
 1342 root          1 -16    -      0    16K     0 aiordy  0   0:00  0.00% [aiod1]
34265 root          1  20    0    14M  2668K     0 CPU3    3   3:10  0.28% top -CawSores
34243 root          1  23    0    12M  1688K     0 wait    2   0:00  0.00% su (sh)
34242 markmi        1  20    0    13M  1688K     0 wait    2   0:00  0.00% su
34236 markmi        1  21    0    12M  1688K     0 wait    0   0:00  0.00% -sh (sh)
34235 markmi        1  20    0    20M  1312K     0 select  1   0:09  0.01% sshd: markmi@pts/1 (sshd)
34230 root          1  21    0    20M  3460K     0 select  3   0:00  0.00% sshd: markmi [priv] (sshd)
  898 root          1  52    0    12M  1688K     0 ttyin   1   0:00  0.00% su (sh)
  897 markmi        1  21    0    13M  1688K     0 wait    2   0:00  0.00% su
  889 markmi        1  26    0    12M  1688K     0 wait    3   0:00  0.00% -sh (sh)
  888 markmi        1  20    0    21M  1016K     0 select  1   0:03  0.00% sshd: markmi@pts/0 (sshd)
  885 root          1  23    0    20M  3460K     0 select  2   0:00  0.00% sshd: markmi [priv] (sshd)
  836 root          1  20    0    12M  2164K     0 ttyin   0   0:03  0.00% -sh (sh)
  835 root          1  20    0    13M  1688K     0 wait    1   0:00  0.00% login [pam] (login)
  785 root          1  52    0    11M   884K     0 nanslp  0   0:01  0.00% /usr/sbin/cron -s
  781 smmsp         1  20    0    15M   796K     0 pause   3   0:00  0.00% sendmail: Queue runner@00:30:00 for /var/spool/clientmqueue (sendmail)
  778 root          1  20    0    15M  1832K     0 select  2   0:02  0.00% sendmail: accepting connections (sendmail)
  775 root          1  20    0    19M   788K     0 select  1   0:00  0.00% /usr/sbin/sshd
  731 root          1  20    0    18M    18M     0 select  3   0:07  0.01% /usr/sbin/ntpd -p /var/db/ntp/ntpd.pid -c /etc/ntp.conf -g
  694 root         32  52    0    11M  1112K     0 rpcsvc  0   0:00  0.00% nfsd: server (nfsd)

After the run, top -CawSopid shows something interesting/odd: lots of
g_eli[?] and md?? processes are still around, in geli:w state for the
g_eli[?] ones and in mdwait for the md?? ones. There are also 4 aiod?
processes in the aiordy state.

===
Mark Millard
marklmi at yahoo.com
( dsl-only.net went away in early 2018-Mar)
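P.S. If the leftover md units ever need to be cleaned up by hand rather
than by rebooting, something along these lines should work (untested in
this situation; it assumes the md units really are orphaned test devices
and force-detaches them, and geli providers layered on partitions or
.nop devices, such as md24p1 or md19.nop above, would need their own
geli detach first):

  for u in $(mdconfig -l); do
    geli detach -f ${u}.eli 2>/dev/null
    mdconfig -d -u ${u} -o force
  done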