From: David Xu <davidxu@freebsd.org>
To: freebsd-current@freebsd.org
Cc: Kip Macy, Ivan Voras
Date: Mon, 9 Oct 2006 06:34:31 +0800
Subject: Re: [PATCH] MAXCPU alterable in kernel config - needs testers
Message-Id: <200610090634.31297.davidxu@freebsd.org>
In-Reply-To: <4529667D.8070108@fer.hr>

On Monday 09 October 2006 04:58, Ivan Voras wrote:
> Kip Macy wrote:
> > It will only cover the single-chip Niagara 2 boxes.
>
> Oh right, they'll be doing multiple chips in Niagara 2 :) Go Sun :)
>
> Still, single T2 chips should be more common, so I'd guess it will pay
> to optimize for that case.
>
> (For the rest of the audience: Niagara 1 has 32 logical CPUs and
> supports only one physical CPU/socket; Niagara 2 will have 64 logical
> CPUs and support more than one CPU/socket, so a 2-socket Niagara 2 box
> will have 128 logical processors! Cue SciFi music...)
>
> Any word on how they will handle migration of threads across sockets
> (or will it be the OS's job)? Judging from the T1 architecture, I think
> such an event would create a very large performance penalty, but I'm
> not an expert.

The current 4BSD scheduler does not handle a large number of cores very
well, and the single sched_lock will be a bottleneck for such a
configuration.
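
(For readers unfamiliar with the sched_lock issue, here is a minimal
sketch of the contention pattern David Xu describes, not the actual
4BSD scheduler code: every CPU that wants to pick its next runnable
thread must take one global mutex, so with 128 logical CPUs the lock
itself becomes the limiting resource. All names below are made up for
illustration.)

    #include <pthread.h>
    #include <stddef.h>

    struct fake_thread {
            struct fake_thread *next;
            int priority;
    };

    static pthread_mutex_t sched_lock = PTHREAD_MUTEX_INITIALIZER;
    static struct fake_thread *runq_head;   /* one shared run queue */

    /*
     * Every "CPU" calls this to dequeue its next thread.  With many
     * logical CPUs, all of them funnel through the same lock, so the
     * lock hold time bounds the whole machine's scheduling throughput.
     */
    struct fake_thread *
    choose_next_thread(void)
    {
            struct fake_thread *td;

            pthread_mutex_lock(&sched_lock);  /* all CPUs contend here */
            td = runq_head;
            if (td != NULL)
                    runq_head = td->next;
            pthread_mutex_unlock(&sched_lock);
            return (td);
    }

This is why per-CPU run queues, each guarded by its own lock, scale
better: each CPU mostly touches only its own queue and contends only
when load is rebalanced between queues.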
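
(And for context on the patch in the subject line: MAXCPU is a
compile-time constant that sizes static per-CPU arrays in the kernel,
which is why it has to be changed in the kernel config and the kernel
rebuilt rather than tuned at boot. A hypothetical illustration of the
pattern, not the actual FreeBSD declarations; assuming the patch exposes
it as an ordinary "options MAXCPU=128" line in the config file:)

    /*
     * Hypothetical sketch: MAXCPU sizes static arrays at build time,
     * so raising the CPU limit means recompiling the kernel.  The
     * struct and array names here are invented; only the MAXCPU
     * pattern mirrors what the kernel config option would control.
     */
    #ifndef MAXCPU
    #define MAXCPU  32              /* default when the config omits it */
    #endif

    struct percpu_state {
            int load;               /* placeholder per-CPU data */
    };

    static struct percpu_state cpu_state[MAXCPU];  /* fixed at build time */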