From owner-svn-src-head@FreeBSD.ORG  Sun Aug 25 14:52:20 2013
Message-Id: <201308251452.r7PEqKWI048709@svn.freebsd.org>
From: Jean-Sebastien Pedron <dumbbell@FreeBSD.org>
Date: Sun, 25 Aug 2013 14:52:20 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: svn commit: r254864 - head/sys/dev/drm2/ttm

Author: dumbbell
Date: Sun Aug 25 14:52:20 2013
New Revision: 254864
URL: http://svnweb.freebsd.org/changeset/base/254864

Log:
  drm/ttm: Import Linux commit f2d476a110bc24fde008698ae9018c99e803e25c

    Author: Maarten Lankhorst
    Date:   Tue Jan 15 14:57:10 2013 +0100

      drm/ttm: use ttm_bo_reserve_slowpath_nolru in ttm_eu_reserve_buffers, v2

      This requires re-use of the seqno, which increases fairness slightly.
      Instead of spinning with a new seqno every time, we keep the current
      one, but still drop all other reservations we hold. Only when we
      succeed do we try to get back our other reservations again.

      This should increase fairness slightly as well.

      Changes since v1:
      - Increase val_seq before calling ttm_bo_reserve_slowpath_nolru and
        retrying to take all entries, to prevent a race.

      Signed-off-by: Maarten Lankhorst
      Reviewed-by: Jerome Glisse

  Approved by:	kib@

Modified:
  head/sys/dev/drm2/ttm/ttm_execbuf_util.c

Modified: head/sys/dev/drm2/ttm/ttm_execbuf_util.c
==============================================================================
--- head/sys/dev/drm2/ttm/ttm_execbuf_util.c	Sun Aug 25 14:47:22 2013	(r254863)
+++ head/sys/dev/drm2/ttm/ttm_execbuf_util.c	Sun Aug 25 14:52:20 2013	(r254864)
@@ -130,12 +130,16 @@ int ttm_eu_reserve_buffers(struct list_h
 	glob = entry->bo->glob;
 
 	mtx_lock(&glob->lru_lock);
-retry_locked:
 	val_seq = entry->bo->bdev->val_seq++;
 
+retry_locked:
 	list_for_each_entry(entry, list, head) {
 		struct ttm_buffer_object *bo = entry->bo;
 
+		/* already slowpath reserved? */
+		if (entry->reserved)
+			continue;
+
 		ret = ttm_bo_reserve_nolru(bo, true, true, true, val_seq);
 		switch (ret) {
 		case 0:
@@ -153,12 +157,26 @@ retry_locked:
 			/* fallthrough */
 		case -EAGAIN:
 			ttm_eu_backoff_reservation_locked(list);
+
+			/*
+			 * temporarily increase sequence number every retry,
+			 * to prevent us from seeing our old reservation
+			 * sequence when someone else reserved the buffer,
+			 * but hasn't updated the seq_valid/seqno members yet.
+			 */
+			val_seq = entry->bo->bdev->val_seq++;
+
 			ttm_eu_list_ref_sub(list);
-			ret = ttm_bo_wait_unreserved_locked(bo, true);
+			ret = ttm_bo_reserve_slowpath_nolru(bo, true, val_seq);
 			if (unlikely(ret != 0)) {
 				mtx_unlock(&glob->lru_lock);
 				return ret;
 			}
+			entry->reserved = true;
+			if (unlikely(atomic_read(&bo->cpu_writers) > 0)) {
+				ret = -EBUSY;
+				goto err;
+			}
 			goto retry_locked;
 		default:
 			goto err;
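
For readers who want the shape of the new loop without the surrounding TTM
machinery, here is a minimal, standalone C sketch of the retry pattern the
change introduces.  Every name in it (struct buf, try_reserve,
reserve_slowpath, backoff_all, the contended counter) is a hypothetical
stand-in for ttm_buffer_object, ttm_bo_reserve_nolru,
ttm_bo_reserve_slowpath_nolru and ttm_eu_backoff_reservation_locked, not the
real API; the lru_lock, the cpu_writers/-EBUSY check and error unwinding are
omitted.

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

#define NBUF	3

struct buf {
	bool	reserved;	/* plays the role of entry->reserved */
	int	contended;	/* fail this many fastpath attempts */
};

/*
 * Fastpath reservation: fails with -EAGAIN while someone else "holds"
 * the buffer (modelled here by the contended counter).
 */
static int
try_reserve(struct buf *b, unsigned int val_seq)
{
	(void)val_seq;		/* a real implementation records the seqno */
	if (b->contended > 0) {
		b->contended--;
		return (-EAGAIN);
	}
	b->reserved = true;
	return (0);
}

/* Slowpath: stands in for blocking until the contended buffer is free. */
static void
reserve_slowpath(struct buf *b, unsigned int val_seq)
{
	(void)val_seq;
	b->contended = 0;
}

/* Drop every reservation we hold, like ttm_eu_backoff_reservation_locked(). */
static void
backoff_all(struct buf *bufs, int n)
{
	for (int i = 0; i < n; i++)
		bufs[i].reserved = false;
}

int
main(void)
{
	struct buf bufs[NBUF] = { { false, 0 }, { false, 2 }, { false, 0 } };
	unsigned int val_seq = 0;

retry:
	val_seq++;		/* new sequence number on every retry (v2 fix) */
	for (int i = 0; i < NBUF; i++) {
		if (bufs[i].reserved)	/* already slowpath-reserved: skip */
			continue;
		if (try_reserve(&bufs[i], val_seq) == -EAGAIN) {
			/* Back off everything we hold, block on the loser. */
			backoff_all(bufs, NBUF);
			reserve_slowpath(&bufs[i], val_seq);
			bufs[i].reserved = true;
			goto retry;
		}
	}
	printf("all %d buffers reserved, final val_seq %u\n", NBUF, val_seq);
	return (0);
}

The point of the pattern is that the contended entry is reserved through the
blocking slowpath only after everything else has been backed off, is marked
reserved so the restarted pass skips it, and val_seq is bumped on every retry
so a stale sequence number from our own earlier attempt cannot be mistaken
for a valid reservation.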