From: Rick Macklem <rmacklem@uoguelph.ca>
To: Konstantin Belousov
Cc: Ryan Libby, "freebsd-current@FreeBSD.org"
Subject: Re: r358252 causes intermittent hangs where processes are stuck sleeping on btalloc
Date: Wed, 27 May 2020 00:10:51 +0000
References: <20200521101428.GC64045@kib.kiev.ua>, <20200523105601.GN64045@kib.kiev.ua>
In-Reply-To: <20200523105601.GN64045@kib.kiev.ua>
Konstantin Belousov wrote:
>On Fri, May 22, 2020 at 11:46:26PM +0000, Rick Macklem wrote:
>> Konstantin Belousov wrote:
>> >On Wed, May 20, 2020 at 11:58:50PM -0700, Ryan Libby wrote:
>> >> On Wed, May 20, 2020 at 6:04 PM Rick Macklem wrote:
>> >> >
>> >> > Hi,
>> >> >
>> >> > Since I hadn't upgraded a kernel through the winter, it took me a while
>> >> > to bisect this, but r358252 seems to be the culprit.
No longer true. I succeeded in reproducing the hang today running an
r358251 kernel.

I haven't had much luck so far, but see below for what I have learned.

>> >> >
>> >> > If I do a kernel build over NFS using my not so big Pentium 4 (single core,
>> >> > 1.25Gbytes RAM, i386), about every second attempt will hang.
>> >> > When I do a "ps" in the debugger, I see processes sleeping on btalloc.
>> >> > If I revert to r358251, I cannot reproduce this.
As above, this is no longer true.

>> >> >
>> >> > Any ideas?
>> >> >
>> >> > I can easily test any change you might suggest to see if it fixes the
>> >> > problem.
>> >> >
>> >> > If you want more debug info, let me know, since I can easily
>> >> > reproduce it.
>> >> >
>> >> > Thanks, rick
>> >>
>> >> Nothing obvious to me. I can maybe try a repro on a VM...
>> >>
>> >> ddb ps, acttrace, alltrace, show all vmem, show page would be welcome.
>> >>
>> >> "btalloc" is "We're either out of address space or lost a fill race."
From what I see, I think it is "out of address space".
For one of the hangs, when I did "show alllocks", everything except the
intr thread was waiting for the exclusive sx lock @ vm/vm_map.c:4761.

>> >
>> >Yes, I would not be surprised to be out of something on a 1G i386 machine.
>> >Please also add 'show alllocks'.
>> Ok, I used an up to date head kernel and it took longer to reproduce a hang.
Go down to Kostik's comment about kern.maxvnodes for the rest of what I've
learned. (The time it takes to reproduce one of these varies greatly, but I
usually get one within 3 cycles of a full kernel build over NFS. I have had it
happen once when doing a kernel build over UFS.)

>> This time, none of the processes are stuck on "btalloc".
> I'll try and give you most of the above, but since I have to type it in by hand
> from the screen, I might not get it all. (I'm no real typist;-)
> > show alllocks
> exclusive lockmgr ufs (ufs) r = 0 locked @ kern/vfs_subr.c:3259
> exclusive lockmgr nfs (nfs) r = 0 locked @ kern/vfs_lookup.c:737
> exclusive sleep mutex kernel arena domain (kernel arena domain) r = 0 locked @ kern/subr_vmem.c:1343
> exclusive lockmgr bufwait (bufwait) r = 0 locked @ kern/vfs_bio.c:1663
> exclusive lockmgr ufs (ufs) r = 0 locked @ kern/vfs_subr.c:2930
> exclusive lockmgr syncer (syncer) r = 0 locked @ kern/vfs_subr.c:2474
> Process 12 (intr) thread 0x.. (1000008)
> exclusive sleep mutex Giant (Giant) r = 0 locked @ kern/kern_intr.c:1152
>
> > ps
> - Not going to list them all, but here are the ones that seem interesting...
>     18  0  0  0  DL  vlruwt  0x11d939cc  [vnlru]
>     16  0  0  0  DL  (threaded)          [bufdaemon]
> 100069        D   qsleep               [bufdaemon]
> 100074        D   -                    [bufspacedaemon-0]
> 100084        D   sdflush  0x11923284  [/ worker]
> - and more of these for the other UFS file systems
>      9  0  0  0  DL  psleep  0x1e2f830  [vmdaemon]
>      8  0  0  0  DL  (threaded)         [pagedaemon]
> 100067        D   psleep   0x1e2e95c   [dom0]
> 100072        D   launds   0x1e2e968   [laundry: dom0]
> 100073        D   umarcl   0x12cc720   [uma]
> … a bunch of usb and cam ones
> 100025        D   -        0x1b2ee40   [doneq0]
> …
>     12  0  0  0  RL  (threaded)         [intr]
> 100007        I                        [swi6: task queue]
> 100008        Run CPU 0                [swi6: Giant taskq]
> …
> 100000        D   swapin   0x1d96dfc   [swapper]
> - and a bunch more in D state.
> Does this mean the swapper was trying to swap in?
>
> > acttrace
> - just shows the keyboard
> kdb_enter() at kdb_enter+0x35/frame
> vt_kbdevent() at vt_kbdevent+0x329/frame
> kbdmux_intr() at kbdmux_intr+0x19/frame
> taskqueue_run_locked() at taskqueue_run_locked+0x175/frame
> taskqueue_run() at taskqueue_run+0x44/frame
> taskqueue_swi_giant_run(0) at taskqueue_swi_giant_run+0xe/frame
> ithread_loop() at ithread_loop+0x237/frame
> fork_exit() at fork_exit+0x6c/frame
> fork_trampoline() at 0x../frame
>
> > show all vmem
> vmem 0x.. 'transient arena'
>   quantum:   4096
>   size:      23592960
>   inuse:     0
>   free:      23592960
>   busy tags: 0
>   free tags: 2
>             inuse  size  free  size
>   16777216      0     0     1  23592960
> vmem 0x.. 'buffer arena'
>   quantum:   4096
>   size:      94683136
>   inuse:     94502912
>   free:      180224
>   busy tags: 1463
>   free tags: 3
>             inuse  size      free  size
>      16384      2     32768     1  16384
>      32768     39   1277952     1  32768
>      65536   1422  93192192     0  0
>     131072      0         0     1  131072
> vmem 0x.. 'i386trampoline'
>   quantum:   1
>   size:      24576
>   inuse:     20860
>   free:      3716
>   busy tags: 9
>   free tags: 3
>             inuse  size   free  size
>         32      1     48     1  52
>         64      2    208     0  0
>        128      2    280     0  0
>       2048      1   2048     1  3664
>       4096      2   8192     0  0
>       8192      1  10084     0  0
> vmem 0x.. 'kernel rwx arena'
>   quantum:   4096
>   size:      0
>   inuse:     0
>   free:      0
>   busy tags: 0
>   free tags: 0
> vmem 0x.. 'kernel arena dom'
>   quantum:   4096
>   size:      56623104
>   inuse:     56582144
>>  free:      40960
>>  busy tags: 11224
>>  free tags: 3
>I think this is the trouble.
>
>Did you try reducing kern.maxvnodes?
>What is the default value for the knob on your machine?
The default is 84342.
I have tried 64K, 32K and 128K and they all hung sooner or later.
For the 32K case, I did see vnodes being recycled for a while before it got
hung, so it isn't just when it hits the limit.

Although it is much easier for me to reproduce on an NFS mount, I did see
a hang while doing a kernel build on UFS (no NFS mount on the machine at
that time).

So, I now know that the problem pre-dates r358252 and is not NFS specific.

I'm not bisecting back further to try and isolate the commit that causes this.
(Unfortunately, each test cycle can take days. I now know that I have to do
several of these kernel builds, which take hours each, to see if a hang is
going to happen.)

I'll post if/when I have more, rick

>We scaled maxvnodes for ZFS and UFS; might be NFS is even more demanding,
>having larger node size.

>             inuse  size      free  size
>       4096  11091  45428736     0  0
>       8192     63    516096     0  0
>      16384     12    196608     0  0
>      32768      6    196608     0  0
>      40960      2     81920     1  40960
>      65536     16   1048576     0  0
>      94208      1     94208     0  0
>     110592      1    110592     0  0
>     131072     15   2441216     0  0
>     262144     15   3997696     0  0
>     524288      1    524288     0  0
>    1048576      1   1945600     0  0
> vmem 0x.. 'kernel arena'
>   quantum:   4096
>   size:      390070272
>   inuse:     386613248
>   free:      3457024
>   busy tags: 873
>   free tags: 3
>             inuse  size       free  size
>       4096     35     143360     1  4096
>       8192      2      16384     2  16384
>      12288      1      12288     0  0
>      16384     30     491520     0  0
>      20480    140    2867200     0  0
>      65536      1      65536     0  0
>     131072    631   82706432     0  0
>    1048576      0          0     1  1339392
>    2097152     27   56623104     1  2097152
>    8388608      1   13774848     0  0
>   16777216      3   74883072     0  0
>   33554432      1   36753408     0  0
>   67108864      1  118276096     0  0
>
> > alltrace
> - I can't face typing too much more, but I'll put a few
> here that look interesting
>
> - for csh
> sched_switch()
> mi_switch()
> kern_yield()
> getblkx()
> breadn_flags()
> ffs_update()
> ufs_inactive()
> VOP_INACTIVE()
> vinactivef()
> vput_final()
> vm_object_deallocate()
> vm_map_process_deferred()
> kern_munmap()
> sys_munmap()
>
> - for cc
> sched_switch()
> mi_switch()
> sleepq_switch()
> sleepq_timedwait()
> _sleep()
> pause_sbt()
> vmem_bt_alloc()
> keg_alloc_slab()
> zone_import()
> cache_alloc()
> cache_alloc_retry()
> uma_zalloc_arg()
> bt_fill()
> vmem_xalloc()
> vmem_alloc()
> kmem_alloc()
> kmem_malloc_domainset()
> page_alloc()
> keg_alloc_slab()
> zone_import()
> cache_alloc()
> cache_alloc_retry()
> uma_zalloc_arg()
> nfscl_nget()
> nfs_create()
> vop_sigdefer()
> nfs_vnodeops_bypass()
> VOP_CREATE_APV()
> vn_open_cred()
> vn_open()
> kern_openat()
> sys_openat()
>
> Then there are a bunch of these... for cc, make
> sched_switch()
> mi_switch()
> sleepq_switch()
> sleepq_catch_signals()
> sleepq_wait_sig()
> kern_wait6()
> sys_wait4()
>
> - for vnlru
> sched_switch()
> mi_switch()
> sleepq_switch()
> sleepq_timedwait()
> _sleep()
> vnlru_proc()
> fork_exit()
> fork_trampoline()
>
> - for syncer
> sched_switch()
> mi_switch()
> critical_exit_preempt()
> intr_event_handle()
> intr_execute_handlers()
> lapic_handle_intr()
> Xapic_isr1()
> - interrupt
> memset()
> cache_alloc()
> cache_alloc_retry()
> uma_zalloc_arg()
> vmem_xalloc()
> vmem_bt_alloc()
> keg_alloc_slab()
> zone_import()
> cache_alloc()
> cache_alloc_retry()
> uma_zalloc_arg()
> bt_fill()
> vmem_xalloc()
> vmem_alloc()
> bufkva_alloc()
> getnewbuf()
> getblkx()
> breadn_flags()
> ffs_update()
> ffs_sync()
> sync_fsync()
> VOP_FSYNC_APV()
> sched_sync()
> fork_exit()
> fork_trampoline()
>
> - For bufdaemon (a bunch of these)
> sched_switch()
> mi_switch()
> sleepq_switch()
> sleepq_timedwait()
> _sleep()
> buf_daemon()
> fork_exit()
> fork_trampoline()
>
> vmdaemon and pagedaemon are basically just like above,
> sleeping in
> vm_daemon()
> or
> vm_pageout_worker()
> or
> vm_pageout_laundry_worker()
> or
> uma_reclaim_worker()
>
> That's all the typing I can take right now.
> I can probably make this happen again if you want more specific stuff.
>
> rick
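[For readers skimming the archive: the "show all vmem" figures quoted above are
enough to check the out-of-address-space theory with simple arithmetic. The
sketch below is illustrative Python, not part of the original exchange; the
sizes are copied from the ddb output in this mail, and the dictionary layout is
mine, not ddb's format.]

```python
# Compute how close each vmem arena is to exhaustion, using the
# (size, inuse) byte counts quoted from ddb's "show all vmem" above.
arenas = {
    "transient arena":  (23592960, 0),
    "buffer arena":     (94683136, 94502912),
    "kernel arena dom": (56623104, 56582144),  # the suspect arena
    "kernel arena":     (390070272, 386613248),
}

for name, (size, inuse) in arenas.items():
    free = size - inuse
    pct = 100.0 * inuse / size if size else 0.0
    print(f"{name:18s} size={size:>10d} free={free:>9d} {pct:6.2f}% in use")
```

With roughly 40 KB free out of a 54 MB arena, any request for a larger
contiguous chunk of kernel address space has to sleep in vmem, which matches
the btalloc stacks above; the knob Kostik refers to is adjusted at runtime
with sysctl kern.maxvnodes=<value>.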