From: Gerrit Nagelhout <gnagelhout@sandvine.com>
To: freebsd-hackers@freebsd.org
Date: Tue, 2 Dec 2003 17:00:57 -0500
Subject: Page size for mbufs

Hi,

As part of some performance tuning for a bridging-like application, I am
looking at the page sizes used for mbufs (headers and clusters).  As far as
I can tell, standard 4K pages are currently used for these.  In this
application (running on a 2.8 GHz Xeon) there seem to be large pipeline
stalls whenever new mbufs are accessed.  The number of active mbufs in the
system is about 4096, which works out to 2048 pages for the clusters alone.
Since the Xeon has only 64 TLB entries, I suspect that TLB thrashing has a
severe performance impact.

To get around this, I'd like to try changing the page size used for the
mbufs.  Does anybody have any ideas on the best/easiest way to try this out
and to measure the performance impact?  I know that the mbufs are allocated
out of mb_map, which is created by kmem_suballoc().  I have also noticed
some 4M page support in pmap.c, but I'm not sure how to tie the two
together.  Any suggestions?

Thanks,

Gerrit Nagelhout
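
P.S.  To make the question more concrete, below is roughly (and probably
naively) what I had in mind.  It is a completely untested sketch for i386
without PAE, it assumes PSE (4MB pages) is enabled, and the function name is
made up.  The idea is to carve clusters out of a physically contiguous,
4MB-aligned chunk and remap that chunk with a single PG_PS page directory
entry so that it only costs one TLB entry:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/malloc.h>
    #include <vm/vm.h>
    #include <vm/vm_extern.h>
    #include <vm/pmap.h>
    #include <machine/cpufunc.h>

    /*
     * Untested sketch: allocate a physically contiguous, 4MB-aligned
     * chunk and replace its 4K kernel mappings with one 4MB PDE.
     * Locking, error handling and PSE checks are omitted, the old page
     * table page is simply leaked, and in a real patch the PDE change
     * would have to show up in every process's page directory (compare
     * pmap_growkernel()), not just the current one.
     */
    static void *
    mbuf_chunk_4m(void)        /* hypothetical name */
    {
        void *va;
        vm_offset_t pa;

        /* physically contiguous, 4MB-aligned, anywhere below 4GB */
        va = contigmalloc(NBPDR, M_DEVBUF, M_NOWAIT,
            0, 0xffffffffUL, NBPDR, 0);
        if (va == NULL)
            return (NULL);

        pa = vtophys(va);

        /* one 4MB superpage mapping instead of 1024 4K mappings */
        PTD[(vm_offset_t)va >> PDRSHIFT] = pa | PG_V | PG_RW | PG_PS;
        invltlb();

        return (va);
    }

The cluster allocator would then have to hand out 2K clusters from this
chunk instead of grabbing pages from mb_map, which is the part I am least
sure how to do cleanly.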