Date:      Sat, 25 Sep 2004 13:29:14 -0400
From:      Stephan Uphoff <ups@tree.com>
To:        "freebsd-arch@freebsd.org" <freebsd-arch@freebsd.org>
Cc:        Julian Elischer <julian@elischer.org>
Subject:   sched_userret priority adjustment patch for sched_4bsd
Message-ID:  <1096133353.53798.17613.camel@palm.tree.com>


--=-Z7rROyI9E4Q5Yr8GPP56
Content-Type: text/plain
Content-Transfer-Encoding: 7bit

When a thread is about to return to user space it resets its priority to
the user-level priority.
However, after lowering its priority it needs to check whether its
priority is still better than that of all other runnable threads.
This check is currently not implemented.
Without it, the thread can block kernel or user threads with better
priority until a context switch is forced by an interrupt.

The attached patch checks the relevant runqueues, and the threads without
slots in the same ksegrp, and forces a thread switch if the currently
running thread is no longer the best thread to run after changing its
priority.

The patch should improve interactive response under heavy load somewhat.
It needs a lot of testing.

	Stephan





--=-Z7rROyI9E4Q5Yr8GPP56
Content-Disposition: attachment; filename=sched_userret_patch
Content-Type: text/x-patch; name=sched_userret_patch; charset=ASCII
Content-Transfer-Encoding: 7bit

Index: sched_4bsd.c
===================================================================
RCS file: /cvsroot/src/sys/kern/sched_4bsd.c,v
retrieving revision 1.65
diff -u -r1.65 sched_4bsd.c
--- sched_4bsd.c	16 Sep 2004 07:12:59 -0000	1.65
+++ sched_4bsd.c	25 Sep 2004 15:28:58 -0000
@@ -1102,10 +1102,64 @@
 	return (ke);
 }
 
+
+/*
+ * Find whether a better (lower numeric priority) non-empty queue than the
+ * one indicated by queue_pri exists.  This is done by scanning the status
+ * bits; a set bit indicates a non-empty queue.
+ */
+static __inline int
+runq_better_priority_exists(struct runq *rq, int queue_pri)
+{
+	struct rqbits *rqb;
+	int i;
+	int word_index;
+	rqb_word_t mask;
+	
+	word_index = RQB_WORD(queue_pri); 
+	
+	rqb = &rq->rq_status;
+	for (i = 0; i < word_index ; i++)
+		if (rqb->rqb_bits[i])
+			return 1;
+	
+	/* XXX Need a machine-dependent macro for this? */
+	mask = (RQB_BIT(queue_pri) - 1);
+	
+	if (rqb->rqb_bits[word_index] & mask)
+		return 1;
+	
+	return 0;
+}
+
+static __inline int
+sched_should_giveup_slot(struct thread *td)
+{
+	struct ksegrp *kg;
+	struct thread *td2;
+	
+	kg = td->td_ksegrp;
+	td2 = kg->kg_last_assigned;
+	if (td2 != NULL) {
+		td2 = TAILQ_NEXT(td2, td_runq);
+	} else {
+		td2 = TAILQ_FIRST(&kg->kg_runq);
+	}
+	
+	if (td2 != NULL && td2->td_priority < td->td_priority) {
+		/* There is a runnable thread in the ksegrp without a slot
+		 * and its priority is better than the current thread's.
+		 */
+		return 1;
+	} 
+	return 0;
+}
+
 void
 sched_userret(struct thread *td)
 {
-	struct ksegrp *kg;
+	int queue_pri;
+	struct ksegrp *kg;	
 	/*
 	 * XXX we cheat slightly on the locking here to avoid locking in
 	 * the usual case.  Setting td_priority here is essentially an
@@ -1119,6 +1173,22 @@
 	if (td->td_priority != kg->kg_user_pri) {
 		mtx_lock_spin(&sched_lock);
 		td->td_priority = kg->kg_user_pri;
+		
+		/* Since we changed the priority we may need to switch to
+		 * another thread: first check the global and per-CPU runqueues,
+		 * then check against the best slotless thread in our ksegrp. */
+		
+		queue_pri = td->td_priority / RQ_PPQ;
+		
+		if (runq_better_priority_exists(&runq, queue_pri) ||
+#ifdef SMP
+		    runq_better_priority_exists(&runq_pcpu[PCPU_GET(cpuid)], queue_pri) ||
+#endif
+		    sched_should_giveup_slot(td)) {
+			/* Need to run a thread with better priority */
+			mi_switch(SW_INVOL, NULL);
+		}
+		
 		mtx_unlock_spin(&sched_lock);
 	}
 }

--=-Z7rROyI9E4Q5Yr8GPP56--


