Message-Id: <201308251439.r7PEdpav040729@svn.freebsd.org>
From: Jean-Sebastien Pedron
Date: Sun, 25 Aug 2013 14:39:51 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: svn commit: r254861 - head/sys/dev/drm2/ttm

Author: dumbbell
Date: Sun Aug 25 14:39:51 2013
New Revision: 254861
URL: http://svnweb.freebsd.org/changeset/base/254861

Log:
  drm/ttm: Import Linux commit 63d0a4195560362e2e00a3ad38fc331d34e1da9b

    Author: Maarten Lankhorst
    Date:   Tue Jan 15 14:56:37 2013 +0100

        drm/ttm: remove lru_lock around ttm_bo_reserve

        There should no longer be assumptions that reserve will always
        succeed with the lru lock held, so we can safely break the whole
        atomic reserve/lru thing.

        As a bonus this fixes most lockdep annotations for reservations.

        Signed-off-by: Maarten Lankhorst
        Reviewed-by: Jerome Glisse

Modified:
  head/sys/dev/drm2/ttm/ttm_bo.c
  head/sys/dev/drm2/ttm/ttm_bo_driver.h
  head/sys/dev/drm2/ttm/ttm_execbuf_util.c

Modified: head/sys/dev/drm2/ttm/ttm_bo.c
==============================================================================
--- head/sys/dev/drm2/ttm/ttm_bo.c      Sun Aug 25 14:33:49 2013        (r254860)
+++ head/sys/dev/drm2/ttm/ttm_bo.c      Sun Aug 25 14:39:51 2013        (r254861)
@@ -196,13 +196,13 @@ int ttm_bo_del_from_lru(struct ttm_buffe
         return put_count;
 }

-int ttm_bo_reserve_locked(struct ttm_buffer_object *bo,
+int ttm_bo_reserve_nolru(struct ttm_buffer_object *bo,
                           bool interruptible,
                           bool no_wait, bool use_sequence, uint32_t sequence)
 {
         int ret;

-        while (unlikely(atomic_read(&bo->reserved) != 0)) {
+        while (unlikely(atomic_xchg(&bo->reserved, 1) != 0)) {
                 /**
                  * Deadlock avoidance for multi-bo reserving.
                  */
@@ -224,22 +224,35 @@ int ttm_bo_reserve_locked(struct ttm_buf
                         return -EBUSY;

                 ret = ttm_bo_wait_unreserved_locked(bo, interruptible);
+
                 if (unlikely(ret))
                         return ret;
         }

-        atomic_set(&bo->reserved, 1);
         if (use_sequence) {
+                bool wake_up = false;
                 /**
                  * Wake up waiters that may need to recheck for deadlock,
                  * if we decreased the sequence number.
                  */
                 if (unlikely((bo->val_seq - sequence < (1 << 31))
                              || !bo->seq_valid))
-                        wakeup(bo);
+                        wake_up = true;
+                /*
+                 * In the worst case with memory ordering these values can be
+                 * seen in the wrong order. However since we call wake_up_all
+                 * in that case, this will hopefully not pose a problem,
+                 * and the worst case would only cause someone to accidentally
+                 * hit -EAGAIN in ttm_bo_reserve when they see old value of
+                 * val_seq. However this would only happen if seq_valid was
+                 * written before val_seq was, and just means some slightly
+                 * increased cpu usage
+                 */
                 bo->val_seq = sequence;
                 bo->seq_valid = true;
+                if (wake_up)
+                        wakeup(bo);
         } else {
                 bo->seq_valid = false;
         }

@@ -268,14 +281,14 @@ int ttm_bo_reserve(struct ttm_buffer_obj
         int put_count = 0;
         int ret;

-        mtx_lock(&glob->lru_lock);
-        ret = ttm_bo_reserve_locked(bo, interruptible, no_wait, use_sequence,
-            sequence);
-        if (likely(ret == 0))
+        ret = ttm_bo_reserve_nolru(bo, interruptible, no_wait, use_sequence,
+            sequence);
+        if (likely(ret == 0)) {
+                mtx_lock(&glob->lru_lock);
                 put_count = ttm_bo_del_from_lru(bo);
-        mtx_unlock(&glob->lru_lock);
-
-        ttm_bo_list_ref_sub(bo, put_count, true);
+                mtx_unlock(&glob->lru_lock);
+                ttm_bo_list_ref_sub(bo, put_count, true);
+        }

         return ret;
 }
@@ -488,7 +501,7 @@ static void ttm_bo_cleanup_refs_or_queue
         int ret;

         mtx_lock(&glob->lru_lock);
-        ret = ttm_bo_reserve_locked(bo, false, true, false, 0);
+        ret = ttm_bo_reserve_nolru(bo, false, true, false, 0);

         mtx_lock(&bdev->fence_lock);
         (void) ttm_bo_wait(bo, false, false, true);
@@ -580,7 +593,7 @@ static int ttm_bo_cleanup_refs_and_unloc
                 return ret;

         mtx_lock(&glob->lru_lock);
-        ret = ttm_bo_reserve_locked(bo, false, true, false, 0);
+        ret = ttm_bo_reserve_nolru(bo, false, true, false, 0);

         /*
          * We raced, and lost, someone else holds the reservation now,
@@ -644,7 +657,14 @@ static int ttm_bo_delayed_delete(struct
                         refcount_acquire(&nentry->list_kref);
                 }

-                ret = ttm_bo_reserve_locked(entry, false, !remove_all, false, 0);
+                ret = ttm_bo_reserve_nolru(entry, false, true, false, 0);
+                if (remove_all && ret) {
+                        mtx_unlock(&glob->lru_lock);
+                        ret = ttm_bo_reserve_nolru(entry, false, false,
+                            false, 0);
+                        mtx_lock(&glob->lru_lock);
+                }
+
                 if (!ret)
                         ret = ttm_bo_cleanup_refs_and_unlock(entry, false,
                             !remove_all);
@@ -796,7 +816,7 @@ static int ttm_mem_evict_first(struct tt

         mtx_lock(&glob->lru_lock);
         list_for_each_entry(bo, &man->lru, lru) {
-                ret = ttm_bo_reserve_locked(bo, false, true, false, 0);
+                ret = ttm_bo_reserve_nolru(bo, false, true, false, 0);
                 if (!ret)
                         break;
         }
@@ -1737,7 +1757,7 @@ static int ttm_bo_swapout(struct ttm_mem

         mtx_lock(&glob->lru_lock);
         list_for_each_entry(bo, &glob->swap_lru, swap) {
-                ret = ttm_bo_reserve_locked(bo, false, true, false, 0);
+                ret = ttm_bo_reserve_nolru(bo, false, true, false, 0);
                 if (!ret)
                         break;
         }

Modified: head/sys/dev/drm2/ttm/ttm_bo_driver.h
==============================================================================
--- head/sys/dev/drm2/ttm/ttm_bo_driver.h       Sun Aug 25 14:33:49 2013        (r254860)
+++ head/sys/dev/drm2/ttm/ttm_bo_driver.h       Sun Aug 25 14:39:51 2013        (r254861)
@@ -791,16 +791,7 @@ extern void ttm_mem_io_unlock(struct ttm
  * to make room for a buffer already reserved. (Buffers are reserved before
  * they are evicted). The following algorithm prevents such deadlocks from
  * occurring:
- * 1) Buffers are reserved with the lru spinlock held. Upon successful
- * reservation they are removed from the lru list. This stops a reserved buffer
- * from being evicted. However the lru spinlock is released between the time
- * a buffer is selected for eviction and the time it is reserved.
- * Therefore a check is made when a buffer is reserved for eviction, that it
- * is still the first buffer in the lru list, before it is removed from the
- * list. @check_lru == 1 forces this check. If it fails, the function returns
- * -EINVAL, and the caller should then choose a new buffer to evict and repeat
- * the procedure.
- * 2) Processes attempting to reserve multiple buffers other than for eviction,
+ * Processes attempting to reserve multiple buffers other than for eviction,
  * (typically execbuf), should first obtain a unique 32-bit
  * validation sequence number,
  * and call this function with @use_sequence == 1 and @sequence == the unique
@@ -833,7 +824,7 @@ extern int ttm_bo_reserve(struct ttm_buf


 /**
- * ttm_bo_reserve_locked:
+ * ttm_bo_reserve_nolru:
  *
  * @bo: A pointer to a struct ttm_buffer_object.
  * @interruptible: Sleep interruptible if waiting.
@@ -841,9 +832,7 @@ extern int ttm_bo_reserve(struct ttm_buf
  * @use_sequence: If @bo is already reserved, Only sleep waiting for
  * it to become unreserved if @sequence < (@bo)->sequence.
  *
- * Must be called with struct ttm_bo_global::lru_lock held,
- * and will not remove reserved buffers from the lru lists.
- * The function may release the LRU spinlock if it needs to sleep.
+ * Will not remove reserved buffers from the lru lists.
  * Otherwise identical to ttm_bo_reserve.
  *
  * Returns:
@@ -856,7 +845,7 @@ extern int ttm_bo_reserve(struct ttm_buf
  * -EDEADLK: Bo already reserved using @sequence. This error code will only
  * be returned if @use_sequence is set to true.
  */
-extern int ttm_bo_reserve_locked(struct ttm_buffer_object *bo,
+extern int ttm_bo_reserve_nolru(struct ttm_buffer_object *bo,
                                 bool interruptible,
                                 bool no_wait, bool use_sequence, uint32_t sequence);


Modified: head/sys/dev/drm2/ttm/ttm_execbuf_util.c
==============================================================================
--- head/sys/dev/drm2/ttm/ttm_execbuf_util.c    Sun Aug 25 14:33:49 2013        (r254860)
+++ head/sys/dev/drm2/ttm/ttm_execbuf_util.c    Sun Aug 25 14:39:51 2013        (r254861)
@@ -150,7 +150,7 @@ retry_locked:
                 struct ttm_buffer_object *bo = entry->bo;

 retry_this_bo:
-                ret = ttm_bo_reserve_locked(bo, true, true, true, val_seq);
+                ret = ttm_bo_reserve_nolru(bo, true, true, true, val_seq);
                 switch (ret) {
                 case 0:
                         break;
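For readers following the locking change: the heart of it is that ttm_bo_reserve() now takes the reservation first, via an atomic xchg on bo->reserved and without the lru_lock held, and only then takes the lru_lock briefly to pull the buffer off the LRU list. The sketch below is a minimal userspace model of that ordering, not the kernel code: C11 atomics and a pthread mutex stand in for the kernel primitives, and struct buf, reserve_nolru() and reserve() are illustrative names.

/* Minimal model of the new reserve-then-LRU ordering (illustration only). */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

struct buf {                            /* stands in for struct ttm_buffer_object */
        atomic_int reserved;            /* models bo->reserved */
        bool on_lru;                    /* models membership of the LRU list */
};

static pthread_mutex_t lru_lock = PTHREAD_MUTEX_INITIALIZER;

/* Like ttm_bo_reserve_nolru(): try-reserve without holding lru_lock. */
static int reserve_nolru(struct buf *bo)
{
        /* an old value of 0 means we now own the reservation */
        if (atomic_exchange(&bo->reserved, 1) != 0)
                return -1;              /* already reserved; the kernel would wait or return -EBUSY */
        return 0;
}

/* Like the new ttm_bo_reserve(): reserve first, touch the LRU only on success. */
static int reserve(struct buf *bo)
{
        int ret = reserve_nolru(bo);

        if (ret == 0) {
                pthread_mutex_lock(&lru_lock);
                bo->on_lru = false;     /* ttm_bo_del_from_lru() in the real code */
                pthread_mutex_unlock(&lru_lock);
        }
        return ret;
}

int main(void)
{
        struct buf bo = { .reserved = 0, .on_lru = true };

        return reserve(&bo);            /* 0: reserved and unlinked from the LRU */
}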
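ttm_bo_delayed_delete() deserves a note as well: it used to rely on ttm_bo_reserve_locked() blocking while the lru_lock was held. The new code always tries a no_wait reserve first and, only when remove_all requires a blocking reserve, drops the lru_lock around the wait and re-takes it afterwards. The following is a rough model of that retry pattern under the same caveat as above: pthread mutexes stand in for the buffer reservation and for ttm_bo_global::lru_lock, and delayed_delete_one() is an illustrative name.

/* Rough model of the try / drop-lock / blocking-retry pattern (illustration only). */
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lru_lock   = PTHREAD_MUTEX_INITIALIZER;  /* ttm_bo_global::lru_lock */
static pthread_mutex_t bo_reserve = PTHREAD_MUTEX_INITIALIZER;  /* the buffer reservation */

static int delayed_delete_one(bool remove_all)
{
        int ret;

        pthread_mutex_lock(&lru_lock);
        ret = pthread_mutex_trylock(&bo_reserve);       /* the no_wait attempt */
        if (remove_all && ret != 0) {
                /* Blocking with lru_lock held is no longer allowed, so drop
                 * it, sleep on the reservation, then re-take it. */
                pthread_mutex_unlock(&lru_lock);
                pthread_mutex_lock(&bo_reserve);        /* blocking reserve */
                ret = 0;
                pthread_mutex_lock(&lru_lock);
        }
        if (ret == 0) {
                /* cleanup would run here with the reservation held */
                pthread_mutex_unlock(&bo_reserve);
        }
        pthread_mutex_unlock(&lru_lock);
        return ret;
}

int main(void)
{
        return delayed_delete_one(true);
}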