From owner-freebsd-fs@freebsd.org Mon Jan 18 22:37:19 2016
Date: Mon, 18 Jan 2016 17:37:09 -0500 (EST)
From: Rick Macklem <rmacklem@uoguelph.ca>
To: Raghavendra Gowdappa
Cc: Jeff Darcy, Raghavendra G, freebsd-fs, Hubbard Jordan, Xavier Hernandez, Gluster Devel
Message-ID: <1045057902.165261325.1453156629344.JavaMail.zimbra@uoguelph.ca>
In-Reply-To: <1256214214.7158114.1452310490692.JavaMail.zimbra@redhat.com>
Subject: Re: [Gluster-devel] FreeBSD port of GlusterFS racks up a lot of CPU usage

Raghavendra Gowdappa wrote:
>
> > ----- Original Message -----
> > From: "Rick Macklem"
> > To: "Jeff Darcy"
> > Cc: "Raghavendra G", "freebsd-fs", "Hubbard Jordan", "Xavier
> > Hernandez", "Gluster Devel"
> > Sent: Saturday, January 9, 2016 7:29:59 AM
> > Subject: Re: [Gluster-devel] FreeBSD port of GlusterFS racks up a lot of
> > CPU usage
> >
> > Jeff Darcy wrote:
> > > > > I don't know anything about gluster's poll implementation so I may
> > > > > be totally wrong, but would it be possible to use an eventfd (or a
> > > > > pipe if eventfd is not supported) to signal the need to add more
> > > > > file descriptors to the poll call?
> > > > >
> > > > > The poll call should listen on this new fd. When we need to change
> > > > > the fd list, we should simply write to the eventfd or pipe from
> > > > > another thread. This will cause the poll call to return and we will
> > > > > be able to change the fd list without having a short timeout nor
> > > > > having to decide on any trade-off.
> > > >
> > > > That's a nice idea. Based on my understanding of why timeouts are
> > > > being used, this approach can work.
> > >
> > > The own-thread code which preceded the current poll implementation did
> > > something similar, using a pipe fd to be woken up for new *outgoing*
> > > messages. That code still exists, and might provide some insight into
> > > how to do this for the current poll code.
> >
> > I took a look at event-poll.c and found something interesting...
> > - A pipe called "breaker" is already set up by event_pool_new_poll() and
> >   closed by event_pool_destroy_poll(), however it never gets used for
> >   anything.
>
> I did a check on history, but couldn't find any information on why it was
> removed. Can you send this patch to http://review.gluster.org ? We can
> review and merge the patch over there. If you are not aware, the
> development workflow can be found at:
>
> http://www.gluster.org/community/documentation/index.php/Developers

Actually, the patch turned out to be a flop. Sometimes a fuse mount would
end up with an empty file system with the patch.
(I don't know why it was broken, but maybe the original author ran into
issues as well?)

Anyhow, I am now using the 3.7.6 event-poll.c code, except that I have
increased the timeout from 1msec to 10msec. (Going from 1->5->10 didn't
seem to cause a problem, but I got slower test runs when I increased to
20msec, so I've settled on 10msec. This does reduce the CPU usage when the
GlusterFS file systems aren't active.) I will submit this one-line change
to your workflow if it continues to test ok.

Thanks for everyone's input, rick

> >
> > So, I added a few lines of code that writes a byte to it whenever the
> > list of file descriptors is changed and reads it when poll() returns,
> > if its revents is set. I also changed the timeout to -1 (infinity) and
> > it seems to work for a trivial test.
> > --> Btw, I also noticed the "changed" variable gets set to 1 on a
> >     change, but never reset to 0. I didn't change this, since it looks
> >     racy. (ie. I think you could easily get a race between a thread
> >     that clears it and one that adds a new fd.)
> >
> > A slightly safer version of the patch would set a long (100msec??)
> > timeout instead of -1.
> >
> > Anyhow, I've attached the patch in case anyone would like to try it
> > and will create a bug report for this after I've had more time to test
> > it. (I only use a couple of laptops, so my testing will be minimal.)
> >
> > Thanks for all the help, rick
> >
> > > _______________________________________________
> > > freebsd-fs@freebsd.org mailing list
> > > https://lists.freebsd.org/mailman/listinfo/freebsd-fs
> > > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"