From: Konstantin Belousov <kostikbel@gmail.com>
Date: Fri, 3 Jun 2016 20:55:46 +0300
To: Alan Somers
Cc: Alan Cox, John Baldwin, alc@freebsd.org, Adrian Chadd, freebsd-current, performance@freebsd.org, "current@freebsd.org"
Subject: Re: PostgreSQL performance on FreeBSD
Message-ID: <20160603175546.GZ38613@kib.kiev.ua>

On Fri, Jun 03, 2016 at 11:29:13AM -0600, Alan Somers wrote:
> On Fri, Jun 3, 2016 at 11:26 AM, Konstantin Belousov wrote:
> > On Fri, Jun 03, 2016 at 09:29:16AM -0600, Alan Somers wrote:
> >> I notice that, with the exception of the VM_PHYSSEG_MAX change, these
> >> patches never made it into head or ports. Are they unsuitable for low
> >> core-count machines, or is there some other reason not to commit them?
> >> If not, what would it take to get these into 11.0 or 11.1?
> >
> > The fast page fault handler was redesigned and committed in r269728
> > and r270011 (with several follow-ups).
> > Instead of lock-less buffer queue iterators, Jeff changed the buffer
> > allocator to use uma, see r289279. Another improvement to the buffer
> > cache was committed as r267255.
> >
> > What was not committed is the aggressive pre-population of the phys
> > objects mem queue, and a knob to further split NUMA domains into
> > smaller domains. The latter change is rotten.
> >
> > In fact, I think that with that load, what you would see right now on
> > HEAD is contention on vm_page_queue_free_mtx. There are plans to
> > handle it.
>
> Thanks for the update. Is it still recommended to enable the
> multithreaded pagedaemon?

A single-threaded pagedaemon cannot maintain good system state even on
non-NUMA systems if the machine has a large amount of memory; this was
the motivation for the NUMA domain split patch. So yes, to get better
performance you should enable the VM_NUMA_ALLOC option.

Unfortunately, there were some code changes of quite low quality which
caused NUMA-enabled systems to randomly fail with a NULL pointer
dereference in the vm page allocation path. Supposedly that was fixed,
but you should verify that yourself. One result of the mentioned
changes was that nobody used or tested NUMA-enabled systems under any
significant load for quite a long time.
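For anyone wanting to try this, a minimal sketch of what enabling the
option involves. Only VM_NUMA_ALLOC comes from this thread; the config
name MYKERNEL, the include-GENERIC layout, and the build commands are
the standard FreeBSD kernel-build workflow, and the vm.ndomains sysctl
is assumed to be present on a NUMA-aware kernel:

```
# Kernel configuration fragment, e.g. sys/amd64/conf/MYKERNEL,
# starting from GENERIC and adding the option discussed above:
include GENERIC
ident   MYKERNEL
options VM_NUMA_ALLOC   # per-NUMA-domain page allocation

# Rebuild and install from the top of the source tree:
#   make buildkernel installkernel KERNCONF=MYKERNEL
#
# After rebooting, the number of memory domains the kernel detected
# can be inspected with:
#   sysctl vm.ndomains
```

If vm.ndomains reports 1 on hardware you believe is NUMA, check that
the BIOS exposes the SRAT/affinity tables to the OS.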