From: Cy Schubert
Reply-to: Cy Schubert
To: Conrad Meyer
cc: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: Re: svn commit: r353429 - in head: share/man/man4 sys/kern sys/vm
In-reply-to: <201910110131.x9B1VV1R047982@repo.freebsd.org>
References: <201910110131.x9B1VV1R047982@repo.freebsd.org>
Message-Id: <201910110420.x9B4KinB006128@slippy.cwsent.com>
Date: Thu, 10 Oct 2019 21:20:44 -0700

In message <201910110131.x9B1VV1R047982@repo.freebsd.org>, Conrad Meyer writes:
> Author: cem
> Date: Fri Oct 11 01:31:31 2019
> New Revision: 353429
> URL: https://svnweb.freebsd.org/changeset/base/353429
>
> Log:
>   ddb: Add CSV option, sorting to 'show (malloc|uma)'
>
>   Add /i option for machine-parseable CSV output.  This allows ready
>   copy/pasting into more sophisticated tooling outside of DDB.
>
>   Add total zone size ("Memory Use") as a new column for UMA.
>
>   For both, sort the displayed list on size (print the largest zones/types
>   first).  This is handy for quickly diagnosing "where has my memory gone?"
>   at a high level.
>
>   Submitted by:	Emily Pettigrew (earlier version)
>   Sponsored by:	Dell EMC Isilon
>
> Modified:
>   head/share/man/man4/ddb.4
>   head/sys/kern/kern_malloc.c
>   head/sys/vm/uma_core.c
>
> Modified: head/share/man/man4/ddb.4
> ==============================================================================
> --- head/share/man/man4/ddb.4	Fri Oct 11 00:02:00 2019	(r353428)
> +++ head/share/man/man4/ddb.4	Fri Oct 11 01:31:31 2019	(r353429)
> @@ -60,7 +60,7 @@
>  .\"
>  .\" $FreeBSD$
>  .\"
> -.Dd September 9, 2019
> +.Dd October 10, 2019
>  .Dt DDB 4
>  .Os
>  .Sh NAME
> @@ -806,11 +806,15 @@ is included in the kernel.
>  .It Ic show Cm locktree
>  .\"
>  .Pp
> -.It Ic show Cm malloc
> +.It Ic show Cm malloc Ns Op Li / Ns Cm i
>  Prints
>  .Xr malloc 9
>  memory allocator statistics.
> -The output format is as follows:
> +If the
> +.Cm i
> +modifier is specified, format output as machine-parseable comma-separated
> +values ("CSV").
> +The output columns are as follows:
>  .Pp
>  .Bl -tag -compact -offset indent -width "Requests"
>  .It Ic Type
> @@ -1076,11 +1080,15 @@ Currently, those are:
>  .Xr rmlock 9 .
>  .\"
>  .Pp
> -.It Ic show Cm uma
> +.It Ic show Cm uma Ns Op Li / Ns Cm i
>  Show UMA allocator statistics.
> -Output consists five columns:
> +If the
> +.Cm i
> +modifier is specified, format output as machine-parseable comma-separated
> +values ("CSV").
> +The output contains the following columns:
>  .Pp
> -.Bl -tag -compact -offset indent -width "Requests"
> +.Bl -tag -compact -offset indent -width "Total Mem"
>  .It Cm "Zone"
>  Name of the UMA zone.
>  The same string that was passed to
> @@ -1094,9 +1102,18 @@ Number of slabs being currently used.
>  Number of free slabs within the UMA zone.
>  .It Cm "Requests"
>  Number of allocations requests to the given zone.
> +.It Cm "Total Mem"
> +Total memory in use (either allocated or free) by a zone, in bytes.
> +.It Cm "XFree"
> +Number of free slabs within the UMA zone that were freed on a different NUMA
> +domain than allocated.
> +(The count in the
> +.Cm "Free"
> +column is inclusive of
> +.Cm "XFree" . )
>  .El
>  .Pp
> -The very same information might be gathered in the userspace
> +The same information might be gathered in the userspace
>  with the help of
>  .Dq Nm vmstat Fl z .
>  .\"
>
> Modified: head/sys/kern/kern_malloc.c
> ==============================================================================
> --- head/sys/kern/kern_malloc.c	Fri Oct 11 00:02:00 2019	(r353428)
> +++ head/sys/kern/kern_malloc.c	Fri Oct 11 01:31:31 2019	(r353429)
> @@ -1205,35 +1205,90 @@ restart:
>  }
>
>  #ifdef DDB
> +static int64_t
> +get_malloc_stats(const struct malloc_type_internal *mtip, uint64_t *allocs,
> +    uint64_t *inuse)
> +{
> +	const struct malloc_type_stats *mtsp;
> +	uint64_t frees, alloced, freed;
> +	int i;
> +
> +	*allocs = 0;
> +	frees = 0;
> +	alloced = 0;
> +	freed = 0;
> +	for (i = 0; i <= mp_maxid; i++) {
> +		mtsp = zpcpu_get_cpu(mtip->mti_stats, i);
> +
> +		*allocs += mtsp->mts_numallocs;
> +		frees += mtsp->mts_numfrees;
> +		alloced += mtsp->mts_memalloced;
> +		freed += mtsp->mts_memfreed;
> +	}
> +	*inuse = *allocs - frees;
> +	return (alloced - freed);
> +}
> +
>  DB_SHOW_COMMAND(malloc, db_show_malloc)
>  {
> -	struct malloc_type_internal *mtip;
> -	struct malloc_type_stats *mtsp;
> +	const char *fmt_hdr, *fmt_entry;
>  	struct malloc_type *mtp;
> -	uint64_t allocs, frees;
> -	uint64_t alloced, freed;
> -	int i;
> +	uint64_t allocs, inuse;
> +	int64_t size;
> +	/* variables for sorting */
> +	struct malloc_type *last_mtype, *cur_mtype;
> +	int64_t cur_size, last_size;
> +	int ties;
>
> -	db_printf("%18s %12s %12s %12s\n", "Type", "InUse", "MemUse",
> -	    "Requests");
> -	for (mtp = kmemstatistics; mtp != NULL; mtp = mtp->ks_next) {
> -		mtip = (struct malloc_type_internal *)mtp->ks_handle;
> -		allocs = 0;
> -		frees = 0;
> -		alloced = 0;
> -		freed = 0;
> -		for (i = 0; i <= mp_maxid; i++) {
> -			mtsp = zpcpu_get_cpu(mtip->mti_stats, i);
> -			allocs += mtsp->mts_numallocs;
> -			frees += mtsp->mts_numfrees;
> -			alloced += mtsp->mts_memalloced;
> -			freed += mtsp->mts_memfreed;
> +	if (modif[0] == 'i') {
> +		fmt_hdr = "%s,%s,%s,%s\n";
> +		fmt_entry = "\"%s\",%ju,%jdK,%ju\n";
> +	} else {
> +		fmt_hdr = "%18s %12s %12s %12s\n";
> +		fmt_entry = "%18s %12ju %12jdK %12ju\n";
> +	}
> +
> +	db_printf(fmt_hdr, "Type", "InUse", "MemUse", "Requests");
> +
> +	/* Select sort, largest size first. */
> +	last_mtype = NULL;
> +	last_size = INT64_MAX;
> +	for (;;) {
> +		cur_mtype = NULL;
> +		cur_size = -1;
> +		ties = 0;
> +
> +		for (mtp = kmemstatistics; mtp != NULL; mtp = mtp->ks_next) {
> +			/*
> +			 * In the case of size ties, print out mtypes
> +			 * in the order they are encountered.  That is,
> +			 * when we encounter the most recently output
> +			 * mtype, we have already printed all preceding
> +			 * ties, and we must print all following ties.
> +			 */
> +			if (mtp == last_mtype) {
> +				ties = 1;
> +				continue;
> +			}
> +			size = get_malloc_stats(mtp->ks_handle, &allocs,
> +			    &inuse);
> +			if (size > cur_size && size < last_size + ties) {
> +				cur_size = size;
> +				cur_mtype = mtp;
> +			}
>  		}
> -		db_printf("%18s %12ju %12juK %12ju\n",
> -		    mtp->ks_shortdesc, allocs - frees,
> -		    (alloced - freed + 1023) / 1024, allocs);
> +		if (cur_mtype == NULL)
> +			break;
> +
> +		size = get_malloc_stats(cur_mtype->ks_handle, &allocs, &inuse);
> +		db_printf(fmt_entry, cur_mtype->ks_shortdesc, inuse,
> +		    howmany(size, 1024), allocs);
> +
>  		if (db_pager_quit)
>  			break;
> +
> +		last_mtype = cur_mtype;
> +		last_size = cur_size;
>  	}
>  }
>
>
> Modified: head/sys/vm/uma_core.c
> ==============================================================================
> --- head/sys/vm/uma_core.c	Fri Oct 11 00:02:00 2019	(r353428)
> +++ head/sys/vm/uma_core.c	Fri Oct 11 01:31:31 2019	(r353429)
> @@ -4341,39 +4341,100 @@ uma_dbg_free(uma_zone_t zone, uma_slab_t slab, void *i
>  #endif /* INVARIANTS */
>
>  #ifdef DDB
> +static int64_t
> +get_uma_stats(uma_keg_t kz, uma_zone_t z, uint64_t *allocs, uint64_t *used,
> +    uint64_t *sleeps, uint64_t *xdomain, long *cachefree)

The xdomain and cachefree parameters are reversed by the callers of this
function.  It is probably simpler to change the definition here than the two
call sites below.

> +{
> +	uint64_t frees;
> +	int i;
> +
> +	if (kz->uk_flags & UMA_ZFLAG_INTERNAL) {
> +		*allocs = counter_u64_fetch(z->uz_allocs);
> +		frees = counter_u64_fetch(z->uz_frees);
> +		*sleeps = z->uz_sleeps;
> +		*cachefree = 0;
> +		*xdomain = 0;
> +	} else
> +		uma_zone_sumstat(z, cachefree, allocs, &frees, sleeps,
> +		    xdomain);
> +	if (!((z->uz_flags & UMA_ZONE_SECONDARY) &&
> +	    (LIST_FIRST(&kz->uk_zones) != z)))
> +		*cachefree += kz->uk_free;
> +	for (i = 0; i < vm_ndomains; i++)
> +		*cachefree += z->uz_domain[i].uzd_nitems;
> +	*used = *allocs - frees;
> +	return (((int64_t)*used + *cachefree) * kz->uk_size);
> +}
> +
>  DB_SHOW_COMMAND(uma, db_show_uma)
>  {
> +	const char *fmt_hdr, *fmt_entry;
>  	uma_keg_t kz;
>  	uma_zone_t z;
> -	uint64_t allocs, frees, sleeps, xdomain;
> +	uint64_t allocs, used, sleeps, xdomain;
>  	long cachefree;
> -	int i;
> +	/* variables for sorting */
> +	uma_keg_t cur_keg;
> +	uma_zone_t cur_zone, last_zone;
> +	int64_t cur_size, last_size, size;
> +	int ties;
>
> -	db_printf("%18s %8s %8s %8s %12s %8s %8s %8s\n", "Zone", "Size", "Used",
> -	    "Free", "Requests", "Sleeps", "Bucket", "XFree");
> -	LIST_FOREACH(kz, &uma_kegs, uk_link) {
> -		LIST_FOREACH(z, &kz->uk_zones, uz_link) {
> -			if (kz->uk_flags & UMA_ZFLAG_INTERNAL) {
> -				allocs = counter_u64_fetch(z->uz_allocs);
> -				frees = counter_u64_fetch(z->uz_frees);
> -				sleeps = z->uz_sleeps;
> -				cachefree = 0;
> -			} else
> -				uma_zone_sumstat(z, &cachefree, &allocs,
> -				    &frees, &sleeps, &xdomain);
> -			if (!((z->uz_flags & UMA_ZONE_SECONDARY) &&
> -			    (LIST_FIRST(&kz->uk_zones) != z)))
> -				cachefree += kz->uk_free;
> -			for (i = 0; i < vm_ndomains; i++)
> -				cachefree += z->uz_domain[i].uzd_nitems;
> +	/* /i option produces machine-parseable CSV output */
> +	if (modif[0] == 'i') {
> +		fmt_hdr = "%s,%s,%s,%s,%s,%s,%s,%s,%s\n";
> +		fmt_entry = "\"%s\",%ju,%jd,%ld,%ju,%ju,%u,%jd,%ju\n";
> +	} else {
> +		fmt_hdr = "%18s %6s %7s %7s %11s %7s %7s %10s %8s\n";
> +		fmt_entry = "%18s %6ju %7jd %7ld %11ju %7ju %7u %10jd %8ju\n";
> +	}
>
> -			db_printf("%18s %8ju %8jd %8ld %12ju %8ju %8u %8ju\n",
> -			    z->uz_name, (uintmax_t)kz->uk_size,
> -			    (intmax_t)(allocs - frees), cachefree,
> -			    (uintmax_t)allocs, sleeps, z->uz_count, xdomain);
> -			if (db_pager_quit)
> -				return;
> +	db_printf(fmt_hdr, "Zone", "Size", "Used", "Free", "Requests",
> +	    "Sleeps", "Bucket", "Total Mem", "XFree");
> +
> +	/* Sort the zones with largest size first. */
> +	last_zone = NULL;
> +	last_size = INT64_MAX;
> +	for (;;) {
> +		cur_zone = NULL;
> +		cur_size = -1;
> +		ties = 0;
> +		LIST_FOREACH(kz, &uma_kegs, uk_link) {
> +			LIST_FOREACH(z, &kz->uk_zones, uz_link) {
> +				/*
> +				 * In the case of size ties, print out zones
> +				 * in the order they are encountered.  That is,
> +				 * when we encounter the most recently output
> +				 * zone, we have already printed all preceding
> +				 * ties, and we must print all following ties.
> +				 */
> +				if (z == last_zone) {
> +					ties = 1;
> +					continue;
> +				}
> +				size = get_uma_stats(kz, z, &allocs, &used,
> +				    &sleeps, &cachefree, &xdomain);

The cachefree and xdomain arguments here are reversed relative to the function
definition above.

> +				if (size > cur_size && size < last_size + ties)
> +				{
> +					cur_size = size;
> +					cur_zone = z;
> +					cur_keg = kz;
> +				}
> +			}
>  		}
> +		if (cur_zone == NULL)
> +			break;
> +
> +		size = get_uma_stats(cur_keg, cur_zone, &allocs, &used,
> +		    &sleeps, &cachefree, &xdomain);

Again, cachefree and xdomain are reversed relative to the function definition
above.

> +		db_printf(fmt_entry, cur_zone->uz_name,
> +		    (uintmax_t)cur_keg->uk_size, (intmax_t)used, cachefree,
> +		    (uintmax_t)allocs, (uintmax_t)sleeps,
> +		    (unsigned)cur_zone->uz_count, (intmax_t)size, xdomain);
> +
> +		if (db_pager_quit)
> +			return;
> +		last_zone = cur_zone;
> +		last_size = cur_size;
>  	}
>  }
>

-- 
Cheers,
Cy Schubert
FreeBSD UNIX:   Web:  http://www.FreeBSD.org

        The need of the many outweighs the greed of the few.
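
P.S.  To make the parameter-order point concrete, here is a minimal
standalone sketch (hypothetical names and dummy values only, not the kernel
code) of the suggested fix: declare the trailing out-parameters in the order
the existing callers already pass them, so neither call site needs to change.

/*
 * Illustration only: get_stats() and its numbers are stand-ins for
 * get_uma_stats() and the per-zone counters.
 */
#include <stdint.h>
#include <stdio.h>

/* Trailing out-parameters declared in the order the callers pass them. */
static int64_t
get_stats(uint64_t *allocs, uint64_t *used, uint64_t *sleeps,
    long *cachefree, uint64_t *xdomain)
{

	*allocs = 100;
	*used = 60;
	*sleeps = 0;
	*cachefree = 40;
	*xdomain = 5;
	/* Total bytes: (used + cached-free items) times a made-up item size. */
	return (((int64_t)*used + *cachefree) * 64);
}

int
main(void)
{
	uint64_t allocs, used, sleeps, xdomain;
	long cachefree;
	int64_t size;

	/* &cachefree before &xdomain, mirroring the order db_show_uma() uses. */
	size = get_stats(&allocs, &used, &sleeps, &cachefree, &xdomain);
	printf("size %jd used %ju cachefree %ld xdomain %ju\n",
	    (intmax_t)size, (uintmax_t)used, cachefree, (uintmax_t)xdomain);
	return (0);
}

With the definition in this order, the pointer types at both call sites match
the prototype as written, and no cast or call-site change would be needed.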