From: Doug Barton <dougb@FreeBSD.org>
Date: Mon, 14 Jun 2010 19:14:38 -0700
To: Rene Ladan
Cc: danfe@FreeBSD.org, Christian Zander, alc@freebsd.org, Alan Cox,
    freebsd-current@freebsd.org
Subject: Re: nvidia-driver 195.22 use horribly broken on amd64 between r206173 and
Message-ID: <4C16E20E.9070309@FreeBSD.org>

On 06/14/10 14:30, Rene Ladan wrote:
> On 14-06-2010 14:48, John Baldwin wrote:
>> On Sunday 13 June 2010 11:23:07 pm Doug Barton wrote:
>>> On 06/13/10 19:09, Alan Cox wrote:
>>>> On Sun, Jun 13, 2010 at 8:38 PM, Doug Barton wrote:
>>>>> On 06/01/10 08:26, John Baldwin wrote:
>>>>>>
>>>>>> I've asked the driver author if the calls to vm_page_wire() and
>>>>>> vm_page_unwire() can simply be removed but have not heard back yet.
>>>>>
>>>>> Is there any news on this? I have updated to the latest current, so I'm
>>>>> running the nv driver now, but I'd like to get the nvidia driver
>>>>> running again.
>>>>
>>>> Yes, the unnecessary (and now problematic) wiring and unwiring calls
>>>> will be removed in a future release of the driver.
>>>
>>> Excellent! Any ETA? Or are there patches against an existing version
>>> of the driver?
>>
>> I would just remove the calls to vm_page_wire() and vm_page_unwire()
>> along with the immediately adjacent calls to vm_page_{un,}lock_queues().
>>
> Just to confirm, like the attached patch?
>
> This is with a GeForce GT 240M, current/amd64 r209035, nvidia-driver
> 195.36.15
>
> I haven't runtime-tested it yet...

This worked great, thanks! I'm re-attaching the patch for Alexey's
benefit, just in case.

Details: I'm running today's -current (r209174) and I've had it up for
4.5 hours already, which is 3 hours longer than I was able to manage
with any version newer than 195.22 for months. That includes full
"normal" use: lots of terminals, tbird, firefox, flash, etc.

Thanks again,

Doug

-- 
	... and that's just a little bit of history repeating.
			-- Propellerheads

	Improve the effectiveness of your Internet presence with
	a domain name makeover!    http://SupersetSolutions.com/

[Attachment: patch-jhb-current]

--- src/nvidia_subr.c.orig	2010-03-12 17:48:52.000000000 +0100
+++ src/nvidia_subr.c	2010-06-14 23:25:28.000000000 +0200
@@ -1301,9 +1301,6 @@
     for (i = 0; i < count; i++) {
         pte_array[i] = at->pte_array[i].physical_address;
-        vm_page_lock_queues();
-        vm_page_wire(PHYS_TO_VM_PAGE(pte_array[i]));
-        vm_page_unlock_queues();
         sglist_append_phys(at->sg_list, pte_array[i],
             PAGE_SIZE);
     }
@@ -1365,9 +1362,6 @@
     os_flush_cpu_cache();
     for (i = 0; i < count; i++) {
-        vm_page_lock_queues();
-        vm_page_unwire(PHYS_TO_VM_PAGE(at->pte_array[i].physical_address), 0);
-        vm_page_unlock_queues();
         kmem_free(kernel_map, at->pte_array[i].virtual_address,
             PAGE_SIZE);
         malloc_type_freed(M_NVIDIA, PAGE_SIZE);
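
For anyone skimming the archive rather than applying the file: the sketch
below is simply the post-patch shape of the two affected loops in
src/nvidia_subr.c, stitched together from the hunk context above. The
surrounding declarations (at, count, pte_array, i) and the driver's own
helpers are not shown and the indentation is approximate; this is not a
separate change, just the diff context with the removed lines gone.

    /*
     * Rough post-patch view of the two loops touched by
     * patch-jhb-current, reconstructed from the diff context above.
     */

    /* First hunk: collect physical addresses and build the sglist,
     * without touching the page queues or wiring the pages. */
    for (i = 0; i < count; i++) {
        pte_array[i] = at->pte_array[i].physical_address;
        sglist_append_phys(at->sg_list, pte_array[i], PAGE_SIZE);
    }

    /* ... */

    /* Second hunk: flush and free the per-page allocations, again
     * without the matching unwire. */
    os_flush_cpu_cache();
    for (i = 0; i < count; i++) {
        kmem_free(kernel_map, at->pte_array[i].virtual_address, PAGE_SIZE);
        malloc_type_freed(M_NVIDIA, PAGE_SIZE);
    }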