From owner-freebsd-arch@FreeBSD.ORG Fri Oct  7 11:38:38 2005
From: dima <_pppp@mail.ru>
To: Gleb Smirnoff
Cc: arch@FreeBSD.org
Date: Fri, 07 Oct 2005 15:38:36 +0400
In-Reply-To: <20051007102229.GL14542@cell.sick.ru>
Subject: Re[2]: [REVIEW/TEST] polling(4) changes

> d> The loop body should really look like
> d>     if( mtx_trylock( &iface_lock[i] ) ) {
> d>         pr[i].handler( pr[i].ifp, POLL_ONLY, count );
> d>         mtx_unlock( &iface_lock[i] );
> d>     }
> d> I skipped this at first to make the idea clearer.
>
> Yes, this approach should be better.
>
> d> > Really, we do not have several kernel threads in polling. netisr_poll()
> d> > is always run by one thread - swi1:net. Well, we also have the idle_poll
> d> > thread, but that is a very special case. Frankly speaking, it can't work
> d> > without help from netisr_poll(). The current polling is designed for a
> d> > single-threaded kernel, for RELENG_4. We can't achieve parallelization
> d> > without a strong redesign. The future plan is to create per-interface
> d> > CPU-bound threads. The plans can change. You are welcome to help.
> d>
> d> idle_poll can significantly increase network response time. I'd suggest
> d> per-CPU (not per-interface) threads. This would keep the user_frac code
> d> much simpler.
>
> No, please don't spawn more idle_poll threads! :)

Not idle_poll but swi threads, actually.

Btw, the loop discussed is exactly the same in ether_poll() and netisr_poll().
It could be split out into a separate (inline?) function. Such complex macros
aren't any good ;)

> As said, the idle_poll thread can't work on its own. idle_poll needs
> netisr_poll() to push it out of the priority pit sometimes. This is
> described in the first mail of this thread. idle_poll should surely
> remain a single entity.
>
> d> Not sure about coding help in the next few weeks. My current project is
> d> at the pre-release stage and my kid is going to be born soon. I can join
> d> a bit later, though.
>
> There are no promises in a free project. Join when you can.
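
For illustration, the helper dima suggests splitting out of ether_poll() and
netisr_poll() might look roughly like the sketch below. This is only a sketch
of the try-lock idea under discussion, not the actual patch under review:
poll_each_iface() and the per-interface iface_lock[] array are hypothetical
names from this thread, while pr[], poll_handlers, enum poll_cmd and
mtx_trylock()/mtx_unlock() are the existing kern_poll.c and mutex(9) interfaces.

	/*
	 * Hypothetical shared polling loop, assuming a per-interface
	 * mutex array iface_lock[] (proposed in this thread) and the
	 * existing registration table pr[] / poll_handlers counter
	 * from sys/kern/kern_poll.c.
	 */
	static __inline void
	poll_each_iface(enum poll_cmd cmd, int count)
	{
		int i;

		for (i = 0; i < poll_handlers; i++) {
			/* Skip an interface another thread is already polling. */
			if (!mtx_trylock(&iface_lock[i]))
				continue;
			if (pr[i].handler != NULL)
				pr[i].handler(pr[i].ifp, cmd, count);
			mtx_unlock(&iface_lock[i]);
		}
	}

Both ether_poll() and netisr_poll() (and the idle_poll path) would then call
this helper with their respective cmd and count instead of open-coding the
same loop in each place.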