Date:      Mon, 8 Jul 2002 16:37:58 -0700 (PDT)
From:      David Xu <bsddiy@yahoo.com>
To:        John Baldwin <jhb@FreeBSD.org>, Jonathan Lemon <jlemon@flugsvamp.com>
Cc:        current@FreeBSD.ORG, David Schultz <dschultz@uclink.Berkeley.EDU>, David Xu <bsddiy@yahoo.com>
Subject:   Re: i386 trap code
Message-ID:  <20020708233758.1620.qmail@web20905.mail.yahoo.com>
In-Reply-To: <XFMail.20020708152746.jhb@FreeBSD.org>


--- John Baldwin <jhb@FreeBSD.org> wrote:
> 
> On 07-Jul-2002 Jonathan Lemon wrote:
> > On Sat, Jul 06, 2002 at 11:59:50PM -0700, David Xu wrote:
> >> Jonathan,
> >> 
> >>   I just used a DOS program as an example. For any program that wants to
> >> go into VM86 mode, it is very easy: it just calls i386_vm86() to initialize
> >> its VM86 pcb extension, sets up some memory area, then calls sigreturn() to
> >> switch into VM86 mode.
> >>   I think the global in_vm86call flag is a bug under SMP; it creates a
> >> race condition. Suppose this scenario:
> >>   CPU A is running a simple VM86 code program.
> >>   CPU B is running vm86_intcall() via some kernel driver (the vesa module?).
> >>   CPU B sets in_vm86call = 1.
> >>   CPU A gets a fault trap.
> >>   CPU A runs trap(), finds that in_vm86call is set, and handles the trap as
> >>         if it were running vm86_intcall(), which is not the case and makes
> >>         a mess.
> > 
> > Yes, as I mentioned earlier, the way the original vm86 bioscall worked 
> > was to prevent an AST until the BIOS was done.  This relied on the giant
> > lock for correctness, since we only allowed one CPU into the kernel at 
> > once.  There probably needs to be some work done for -current in this area.
> 
> Since vm86_lock is a spin lock, you could possibly make in_vm86call per-cpu
> or you could just check the lock instead of the variable to fix this.
> 
> -- 
> 
> John Baldwin <jhb@FreeBSD.org>  <><  http://www.FreeBSD.org/~jhb/
> "Power Users Use the Power to Serve!"  -  http://www.FreeBSD.org/

No, vm86_lock is not a spin lock, unless it has been changed very recently.
I see this line in vm86.c:
  mtx_init(&vm86_lock, "vm86 lock", NULL, MTX_DEF);
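
For comparison, a spin mutex would be initialized with MTX_SPIN and taken with
mtx_lock_spin(); MTX_DEF gives the regular blocking mutex. A minimal
illustration (not from vm86.c):

  #include <sys/param.h>
  #include <sys/lock.h>
  #include <sys/mutex.h>

  static struct mtx example_def_lock;   /* regular (sleep) mutex */
  static struct mtx example_spin_lock;  /* spin mutex */

  static void
  example_mtx_usage(void)
  {
          mtx_init(&example_def_lock, "example def", NULL, MTX_DEF);
          mtx_init(&example_spin_lock, "example spin", NULL, MTX_SPIN);

          mtx_lock(&example_def_lock);          /* may block (sleep) */
          mtx_unlock(&example_def_lock);

          mtx_lock_spin(&example_spin_lock);    /* spins, blocks interrupts */
          mtx_unlock_spin(&example_spin_lock);
  }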

Fixing in_vm86call is not difficult. The problem is that the old code stores
some parameters in the static vm86 pcb, so if the thread is preempted and later
switched back in, the parameters it reads from the pcb may already have been
modified by the cpu switch routine. The old code assumes it will never be
preempted until the BIOS returns; that is true under RELENG_4, but obviously
not in the CURRENT source.
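
One possible direction (a sketch only, with hypothetical names, not a claim
about how the real vm86 code is structured) is to restore the old assumption
explicitly by blocking preemption for the window where the parameters live in
the shared static pcb:

  #include <sys/param.h>
  #include <sys/systm.h>

  extern int vm86_do_intcall(int intnum);  /* hypothetical stand-in for the
                                              code that enters vm86 mode */

  int
  vm86_intcall_nopreempt(int intnum)
  {
          int error;

          critical_enter();     /* no preemption until critical_exit() */
          error = vm86_do_intcall(intnum);
          critical_exit();
          return (error);
  }

Whether holding a critical section across a whole BIOS call is acceptable is a
separate question; the alternative is to keep a private, per-thread copy of the
parameters instead of reading them back from the shared pcb.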

David Xu

