Date:      Thu, 12 Jul 2007 12:52:53 -0500
From:      Craig Boston <craig@yekse.gank.org>
To:        Juergen Lock <nox@jelal.kn-bremen.de>
Cc:        attilio@freebsd.org, freebsd-emulation@freebsd.org, freebsd-ports@freebsd.org
Subject:   Re: experimental qemu-devel port update, please test!
Message-ID:  <20070712175252.GA77654@nowhere>
In-Reply-To: <200707092149.l69LnXe9023835@saturn.kn-bremen.de>
References:  <20070702203027.GA45302@saturn.kn-bremen.de> <46925324.9010908@freebsd.org> <3bbf2fe10707091140h6cdc7469nac5be03a8c8a60cb@mail.gmail.com> <200707092000.29768.dfr@rabson.org> <200707092149.l69LnXe9023835@saturn.kn-bremen.de>

On Mon, Jul 09, 2007 at 11:49:33PM +0200, Juergen Lock wrote:
> In article <3bbf2fe10707091218p713b7e3ela2833eec0ba2df13@mail.gmail.com> you write:
> >2007/7/9, Doug Rabson <dfr@rabson.org>:
> >> On Monday 09 July 2007, Attilio Rao wrote:
> >> > Please also note that stack here seems highly corrupted since values
> >> > passed to _vm_map_lock are not possible (or there is something
> >> > serious going on with them).
> >>
> >> I had this exact same crash when attempting to use kqemu on a recent
> >> current. It appears as if the value it got for curproc was bad. Is
> >> kqemu messing with the kernel's %fs value perhaps?
> >
> >I don't know about kqemu, but in this case I would expect somewhat
> >larger corruption due to the wider pcpu accesses done through %fs.
> 
> Actually it might use %fs while in the monitor (for running guest code),
> but if I read the code right it doesn't let host kernel code run while
> in there (it catches interrupts and leaves the monitor, restoring state,
> to run them.)
> 
>  Also, it still seems to be in kqemu_init when this happens, and I
> don't think it enters the monitor from there yet.

I took a look at it last night and found some very confusing results.
It's definitely happening during the KQEMU_INIT ioctl.  The stack is not
being corrupted, %fs has not been tampered with, and 0x1 is indeed
being passed to vm_map_wire.

For some reason when the ioctl is issued, curproc points to a totally
bogus proc structure.  curthread seems to be sane as far as I can tell,
but the process it claims to belong to is full of junk.
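
For reference, here's roughly how curthread and curproc are defined on
i386 (paraphrasing the stock pcpu.h/proc.h from memory, so treat it as
approximate): curthread is the only thing fetched through %fs, and
curproc is just derived from it:

    /* i386 machine/pcpu.h, approximately: pc_curthread is the first
     * member of struct pcpu, so the current thread is read straight
     * out of %fs:0. */
    static __inline struct thread *
    __curthread(void)
    {
            struct thread *td;

            __asm("movl %%fs:0,%0" : "=r" (td));
            return (td);
    }
    #define curthread       (__curthread())

    /* sys/proc.h: curproc is not a separate per-CPU fetch, it is
     * simply curthread's td_proc. */
    #define curproc         (curthread->td_proc)

So a bad %fs ought to show up as a garbage curthread as well; here
curthread looks fine and only the proc it points at is junk.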

Here is a dump of some miscellaneous values from curthread (labeled td
below) and curproc (cp) from a couple places along the call stack:

===== kqemu_ioctl (KQEMU_GET_VERSION) =====
%ds in kqemu_ioctl: 0x28
%fs in kqemu_ioctl: 0x8
td: 0xc7232400
td->td_proc: 0xc556e2ac
td->td_tid: 100141
td->td_wmesg: (null)
td->td_name: 
td->td_sleepqueue: 0xc4aa8b20
td->td_flags: 16777216
td->td_oncpu: 0
td->td_ucred: 0xc5f92b00
td->td_runtime: 493921239146
cp: 0xc556e2ac
cp->p_vmspace: 0x1
cp->p_pid: 1
cp->p_comm: 
cp->p_fd: 0
cp->p_pptr: 0
cp->p_magic: 0
cp->p_pgrp: 0
cp->p_numthreads: 2147483647

===== kqemu_ioctl (KQEMU_INIT) =====
%ds in kqemu_ioctl: 0x28
%fs in kqemu_ioctl: 0x8
td: 0xc7232400
td->td_proc: 0xc556e2ac
td->td_tid: 100141
td->td_wmesg: (null)
td->td_name: 
td->td_sleepqueue: 0xc4aa8b20
td->td_flags: 16777216
td->td_oncpu: 0
td->td_ucred: 0xc5f92b00
td->td_runtime: 498216206442
cp: 0xc556e2ac
cp->p_vmspace: 0x1
cp->p_pid: 1
cp->p_comm: 
cp->p_fd: 0
cp->p_pptr: 0
cp->p_magic: 0
cp->p_pgrp: 0
cp->p_numthreads: 2147483647

===== kqemu_lock_user_page (a result of the previous ioctl; it's still
above us in the call stack) =====
%ds in kqemu_lock_user_page: 0x28
%fs in kqemu_lock_user_page: 0x8
td: 0xc7232400
td->td_proc: 0xc556e2ac
td->td_tid: 100141
td->td_wmesg: (null)
td->td_name: 
td->td_sleepqueue: 0xc4aa8b20
td->td_flags: 16910337
td->td_oncpu: 0
td->td_ucred: 0xc5f92b00
td->td_runtime: 511101108330
cp: 0xc556e2ac
cp->p_vmspace: 0x1
cp->p_pid: 1
cp->p_comm: 
cp->p_fd: 0
cp->p_pptr: 0
cp->p_magic: 0
cp->p_pgrp: 0
cp->p_numthreads: 2147483647

A few things to note.  First, the initial dump from kqemu_ioctl
(KQEMU_GET_VERSION) already shows the bogus process, and that is well
before kqemu has a chance to do much of anything, much less start
mucking around with registers and VM mappings.

The thread pointer does seem to be valid.  Note that td_runtime
increases a little between the first two ioctl calls, and then more by
the time it gets to kqemu_lock_user_page.

curproc (td_proc) changes with each invocation of qemu, so despite the
fact that p_pid == 1, I'm certain that it's not really init.
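
This also ties the p_vmspace garbage back to the original panic:
struct vmspace has vm_map as its first member, so if the wiring path
goes through curproc->p_vmspace (a sketch of the shape of that call
below, not the literal kqemu-freebsd.c code), the 0x1 in p_vmspace is
exactly the 0x1 that ends up in vm_map_wire:

    /* Sketch only -- just the shape of the path.  Since vm_map is the
     * first member of struct vmspace, the map pointer passed down is
     * p_vmspace itself. */
    static int
    wire_one_page(vm_offset_t va)
    {
            vm_map_t map = &curproc->p_vmspace->vm_map; /* == (vm_map_t)0x1 */

            return (vm_map_wire(map, trunc_page(va),
                trunc_page(va) + PAGE_SIZE,
                VM_MAP_WIRE_USER | VM_MAP_WIRE_NOHOLES));
    }

That would also explain the impossible values Attilio saw being passed
to _vm_map_lock.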

Other than that, I'm not really sure why curproc is so screwed up.
Surely it has to be specific to the kqemu module; otherwise I can't see
how the kernel could function at all...
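
One cheap way to confirm that the pointer really isn't any live process
(and not just a misread init) would be to walk allproc from the ioctl
handler and see whether it's on the list at all -- a quick sketch, not
part of the current patch, using the stock allproc list and lock:

    /* Hypothetical debug check: is the thing curproc points at
     * actually on the allproc list? */
    struct proc *p;
    int found = 0;

    sx_slock(&allproc_lock);
    LIST_FOREACH(p, &allproc, p_list) {
            if (p == curproc) {
                    found = 1;
                    break;
            }
    }
    sx_sunlock(&allproc_lock);
    printf("kqemu: curproc %p %s on allproc\n", curproc,
        found ? "is" : "is NOT");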

Craig


