Date:      Wed, 22 Mar 2006 10:24:07 -0800
From:      Jason Evans <jasone@FreeBSD.org>
To:        John Baldwin <jhb@freebsd.org>
Cc:        John-Mark Gurney <gurney_j@resnet.uoregon.edu>, freebsd-current@freebsd.org
Subject:   Re: core dumps are HUGE...
Message-ID:  <44219647.8010602@FreeBSD.org>
In-Reply-To: <200603221019.43713.jhb@freebsd.org>
References:  <20060321184019.GX35129@funkthat.com>	<1EB2EEE3-855C-4B76-81A6-1880526797CE@freebsd.org>	<44215B1B.1080104@mac.com> <200603221019.43713.jhb@freebsd.org>

John Baldwin wrote:
> I think the better path is to provide sparse coredumps.  I.e., when dumping a
> core, leave the parts of the process map that are mapped but have no backing
> store yet (b/c the pages haven't been touched) sparse by not writing to them,
> but just seeking past them.  This doesn't require complicating the malloc
> implementation just for the sake of a core dump on a CF device.

I like this solution too.  I'm sure there are people who could make this 
change with much less effort than me though. =)

It looks like the necessary changes are in 
sys/kern/imgact_elf.c:coredump().  That code writes a segment at a time, 
but it would need to be modified to look at the process's page map and 
write segments piecemeal.

Jason


