Date: Tue, 23 Nov 2010 01:51:21 +0100
From: Ivan Voras <ivoras@freebsd.org>
To: freebsd-performance@freebsd.org
Subject: Re: PostgreSQL performance scaling
Message-ID: <icf36a$8ik$1@dough.gmane.org>
In-Reply-To: <icf1nk$192$1@dough.gmane.org>
References: <iccd37$lhh$1@dough.gmane.org> <op.vmj44dm634t2sn@skeletor.lan> <4CEA9C46.8010507@freebsd.org> <icf1nk$192$1@dough.gmane.org>
On 11/23/10 01:26, Ivan Voras wrote:
> On 11/22/10 17:37, David Xu wrote:
>> Mark Felder wrote:
>>> I recommend posting this on the Postgres performance list, too.
>>>
>>> Regards,
>>>
>>> Mark
>>
>> I think if PostgreSQL uses semaphores for inter-process locking, it
>> might be a good idea to use the POSIX semaphores in our head branch.
>> The new POSIX semaphore implementation now supports process-shared
>> semaphores and is more lightweight than SysV semaphores: if there is
>> no contention, a process need not enter the kernel to acquire or
>> release a lock. Note that I have just fixed a bug in the head branch.
>> However, RELENG_8 does not support process-shared semaphores yet.
>
> Another thing might be that, even though they appear to try to avoid
> it, they possibly have a large number of processes hanging on the same
> semaphore, leading to a thundering herd problem.
>
> There is already code for POSIX semaphores in PostgreSQL. It requires
> some manual fiddling with the configuration to enable it
> (USE_UNNAMED_POSIX_SEMAPHORES).
>
> However, I've just tried it on 9-CURRENT and it doesn't work:
>
> Nov 23 01:23:02 biggie postgres[1515]: [1-1] FATAL: sem_init failed:
> No space left on device

Ok, I've found the p1003_1b.sem_nsems_max sysctl. With it raised, POSIX
semaphores work in place of SysV semaphores and seem to help, but very
little:

sysv semaphores:

-c#   result
  4    33549
  8    64864
 12    79491
 16    79887
 20    66957
 24    52576
 28    50406
 32    49491
 40    45535
 50    39499
 75    29415

posix semaphores:

 16    79125
 20    70061
 24    55620

After 20 clients, sys time goes sharply up, like before:

 procs      memory      page                       disks     faults        cpu
 r  b  w     avm    fre   flt re pi po  fr sr mf0 mf1  in     sy     cs us sy id
 27 32  0 11887M  3250M 62442  0  0  0   0  0   0   0  10 255078 109047 18 73 10
 30 32  0 11887M  3162M 58165  0  0  0  12  0   0   1   7 272540 114416 17 75  9
 29 32  0 11887M  3105M 57487  0  0  0   0  0   0   0   8 279475 117891 15 75 10
 16 31  0 11887M  3063M 59215  0  0  0   0  0   0   0   6 295342 121090 16 70 13

and the overall behaviour is similar: the processes spend a lot of time
in the "sbwait" and "ksem" states.
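For anyone who wants to try this outside of PostgreSQL: below is a
minimal standalone sketch (my own code, not taken from PostgreSQL) of a
process-shared unnamed POSIX semaphore placed in anonymous shared
memory, which is roughly the kind of object USE_UNNAMED_POSIX_SEMAPHORES
asks for. The file name and structure are made up for illustration.

/*
 * Minimal sketch, not PostgreSQL code: an unnamed POSIX semaphore in
 * anonymous shared memory, shared between parent and child.
 */
#include <sys/mman.h>
#include <sys/wait.h>

#include <err.h>
#include <semaphore.h>
#include <unistd.h>

int
main(void)
{
	/* Shared mapping that the child inherits across fork(). */
	sem_t *sem = mmap(NULL, sizeof(*sem), PROT_READ | PROT_WRITE,
	    MAP_SHARED | MAP_ANON, -1, 0);
	if (sem == MAP_FAILED)
		err(1, "mmap");

	/*
	 * pshared != 0 makes the semaphore usable across processes;
	 * this is the call that fails with "No space left on device"
	 * in the PostgreSQL log above when the kernel limit is hit.
	 */
	if (sem_init(sem, 1, 1) == -1)
		err(1, "sem_init");

	pid_t pid = fork();
	if (pid == -1)
		err(1, "fork");
	if (pid == 0) {
		sem_wait(sem);		/* uncontended acquire */
		/* ... critical section ... */
		sem_post(sem);
		_exit(0);
	}

	wait(NULL);
	sem_destroy(sem);
	return (0);
}

Compiling with something like "cc -pthread -o semtest semtest.c" (the
file name is arbitrary) and running it is a quick way to check whether
plain sem_init() with pshared set works on a given branch: if it fails
with the same ENOSPC, the problem is the kernel limit rather than
anything PostgreSQL does.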