Date:      Thu, 04 Oct 2001 10:30:43 -0700 (PDT)
From:      John Baldwin <jhb@FreeBSD.org>
To:        "Brian F. Feldman" <green@FreeBSD.org>
Cc:        arch@FreeBSD.org
Subject:   Re: sx-ifying src/sys/vm
Message-ID:  <XFMail.011004103043.jhb@FreeBSD.org>
In-Reply-To: <200110041129.f94BTjl31274@green.bikeshed.org>

On 04-Oct-01 Brian F. Feldman wrote:
>> > Index: vm_glue.c
>> > ===================================================================
>> > RCS file: /home/freebsd/ncvs/src/sys/vm/vm_glue.c,v
>> > retrieving revision 1.119
>> > diff -u -r1.119 vm_glue.c
>> > --- vm_glue.c 2001/09/12 08:38:11     1.119
>> > +++ vm_glue.c 2001/10/04 00:41:37
>> > @@ -577,9 +577,7 @@
>> >                        * data structures there is a
>> >                        * possible deadlock.
>> >                        */
>> > -                     if (lockmgr(&vm->vm_map.lock,
>> > -                                     LK_EXCLUSIVE | LK_NOWAIT,
>> > -                                     NULL, curthread)) {
>> > +                     if (vm_map_trylock(&vm->vm_map)) {
>> >                               vmspace_free(vm);
>> >                               PROC_UNLOCK(p);
>> >                               goto nextproc;
>> 
>> Hrm, this reads weird.  It should be 'if (!vm_map_trylock()) {'.  I would
>> make vm_map_trylock() return true on success and false on failure.  Same
>> for vm_map_lock_upgrade().
> 
> I suppose, but errno return types are consistent with lockmgr.  I also 
> expect them a hell of a lot more than some other kind of int return values.

But you aren't _using_ that return value anywhere, and it makes the code more
confusing to read.
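
Something like this is what I'm picturing, just as a sketch (assuming your
patch sits on top of an sx_try_xlock()-style primitive; this isn't your
actual diff):

        /* Boolean-style trylock: non-zero on success, zero on failure. */
        static __inline int
        vm_map_trylock(vm_map_t map)
        {
                return (sx_try_xlock(&map->lock));
        }

Then the call site above reads the natural way:

        if (!vm_map_trylock(&vm->vm_map)) {
                vmspace_free(vm);
                PROC_UNLOCK(p);
                goto nextproc;
        }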
 
>> Another old bug: why is the lock being modified w/o holding an exclusive
>> lock?  Looks like this one is now moot, however.
> 
> What do you mean exactly?  The old way was to get the lockmgr's simplelock, 
> modify the lockmgr itself, then release it.

Ah, the simplelock handles it then.

>> Ok, I don't like the lockheld parameter here. :)  It looks like the
>> recursive locks are read locks.  Is the problem that you can call this
>> function with an exclusive lock held?  If so, I would rather that you add
>> an assertion SX_ASSERT_LOCKED(&map->lock) that the map is locked (either
>> read or write) by the caller.  Then vm_map_lookup() can simply assume the
>> map is already locked.
> 
> vm_map_lookup() calls vm_map_lock_upgrade(), and in this case it would call 
> vm_map_lock_upgrade() while already having two shared locks, thus making the 
> sx lock have an sx_cnt of 2 and the vm_map_lock_upgrade() would always 
> subsequently fail in sx_try_upgrade, would it not?

Ok, then add an assertion SX_ASSERT_SLOCKED(&map->lock) and require callers to
call vm_map_lookup() with the map already share-locked; then vm_map_lookup()
just does the upgrade/downgrade itself but never acquires a slock of its own.
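
In other words, drop the lockheld parameter and just assert the contract at
the top of vm_map_lookup(), something like this (sketch only, assuming an
SX_ASSERT_SLOCKED() macro along the lines of the SX_ASSERT_LOCKED() above):

        /*
         * Callers of vm_map_lookup() must already hold at least a shared
         * lock on the map; we only upgrade/downgrade here and never
         * acquire a shared lock of our own.
         */
        SX_ASSERT_SLOCKED(&map->lock);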

How did this work with lockmgr() btw?  I thought it didn't allow an upgrade of a
recursed share lock either.

>> > Index: vm_zone.c
>> > ===================================================================
>> > RCS file: /home/freebsd/ncvs/src/sys/vm/vm_zone.c,v
>> > retrieving revision 1.48
>> > diff -u -r1.48 vm_zone.c
>> > --- vm_zone.c 2001/08/05 03:55:02     1.48
>> > +++ vm_zone.c 2001/10/04 00:41:48
>> > @@ -409,11 +409,12 @@
>> >                * map.
>> >                */
>> >               mtx_unlock(&z->zmtx);
>> > -             if (lockstatus(&kernel_map->lock, NULL)) {
>> > +             if (vm_map_trylock(kernel_map)) {
>> >                       item = (void *) kmem_malloc(kmem_map, nbytes,
>> >                       M_WAITOK);
>> >                       if (item != NULL)
>> >                               atomic_add_int(&zone_kmem_pages, z->zalloc);
>> >               } else {
>> > +                     vm_map_unlock(kernel_map);
>> >                       item = (void *) kmem_alloc(kernel_map, nbytes);
>> >                       if (item != NULL)
>> >                               atomic_add_int(&zone_kern_pages, z->zalloc);
>> 
>> Question: why does the zone allocator care if the map is locked or not?
>> Also, this has a race: some other thread could easily come in and lock the
>> map in between the vm_map_unlock() and the kmem_alloc(), thus rendering
>> this test useless.  This entire test here is not thread safe and needs to
>> be rethought.
> 
> I believe the code is meaning to check lockstatus() is LK_EXCLOTHER or 
> LK_SHARED, and it would be broken if LK_EXCLUSIVE were returned here :)

The comment above explicitly mentions that it is avoiding recursion, which
means it does indeed want to do this in the LK_EXCLUSIVE case.  The problem is
that if the caller has xlock'd the kernel_map, you don't know if it is valid
or not.

> I really was wondering about this.  If you can't have faults in an interrupt
> context (well, you can't use lockmgr from an interrupt context either, as far
> as I know...), then in a non-SMP world this would be just fine, wouldn't it?

You can have a fault in an interrupt context.  Probably it's a fatal one like a
NULL pointer deref, but you can still have it.  However, interrupts aren't in
the same context as the interrupted process in -current anymore, so it's less
of an issue.

> Anyway, it would be easy to fix if kmem_alloc() didn't xlock the map itself, 
> which it does.  It would be easy to make it do that by generating, say, a 
> kmem_alloc2() ;)

Ugh, can't you pick a better function name?   :-P  You could also just add a
kmem_alloc_try() that used a trylock internally and returned NULL if it
couldn't get the lock.  Then you could code this like so:

                item = (void *) kmem_alloc_try(kernel_map, ...);
                if (item != NULL)
                        atomic_add_int(...);
                else {
                        item = (void *) kmem_malloc(kmem_map, ...);
                        if (item != NULL)
                                atomic_add_int(...);
                }

Apparently the code can be called with an xlock already held on the kernel_map,
in which case we want to fail.  I'm not sure why the comment cares if another
process has the map locked.  It shouldn't.  The lockstatus() really should have
only failed in the LK_EXCLUSIVE case, and tried to do an upgrade in the
LK_SHARED case and just blocked on the lock in the LK_EXCLOTHER case.
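
For reference, the kmem_alloc_try() I'm imagining would look roughly like
this (sketch only; the elided middle is just kmem_alloc()'s existing body
minus its own vm_map_lock() call, and it assumes a boolean-style
vm_map_trylock() as above):

        vm_offset_t
        kmem_alloc_try(vm_map_t map, vm_size_t size)
        {
                vm_offset_t addr;

                /* Give up right away if we can't get the map lock. */
                if (!vm_map_trylock(map))
                        return (0);
                /* ... rest of kmem_alloc(), filling in addr ... */
                vm_map_unlock(map);
                return (addr);
        }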

-- 

John Baldwin <jhb@FreeBSD.org> -- http://www.FreeBSD.org/~jhb/
PGP Key: http://www.baldwin.cx/~john/pgpkey.asc
"Power Users Use the Power to Serve!"  -  http://www.FreeBSD.org/
