Date: Tue, 13 Jun 2017 18:47:56 +0000
From: "Caza, Aaron" <Aaron.Caza@ca.weatherford.com>
To: "freebsd-hackers@freebsd.org" <freebsd-hackers@freebsd.org>
Subject: RE: [EXTERNAL] Re: FreeBSD10 Stable + ZFS + PostgreSQL + SSD performance drop < 24 hours
Message-ID: <afab1b51be824a6e9368b2612c003516@DM2PR58MB013.032d.mgd.msft.net>
In-Reply-To: <cebeb433-005e-7cb9-9498-eb58d4b12858@mail.ru>
References: <79528bf7a85a47079756dc508130360b@DM2PR58MB013.032d.mgd.msft.net> <cebeb433-005e-7cb9-9498-eb58d4b12858@mail.ru>
In response to the question below: the MB models I'm currently testing on are all different.

Thanks for sharing your results - you're running the FreeBSD 10.3-RELEASE-p7 kernel from Aug 2016. All the 10.3 kernels I've tested with so far are stable/10 from Jan 2017 or later, so I might have to try going back earlier. Currently trying a checkout of base/releng/10.3. Unfortunately, it sometimes takes a while before the degradation hits, which slows down testing. Also testing stable/11 r307264 per a prior suggestion from Slawa. (A couple of rough repro sketches are appended after the quoted messages below.)

-----Original Message-----
From: owner-freebsd-hackers@freebsd.org [mailto:owner-freebsd-hackers@freebsd.org] On Behalf Of trafdev via freebsd-hackers
Sent: Tuesday, June 13, 2017 7:37 AM
To: freebsd-hackers@freebsd.org
Subject: [EXTERNAL] Re: FreeBSD10 Stable + ZFS + PostgreSQL + SSD performance drop < 24 hours

> Tested on half a dozen machines with different models of SSDs

Do they all share the same MB model?

I have a similar setup (OVH Enterprise SP-128-S dedicated server with 128GB RAM, 480GB SSDs in a ZFS mirror and an original, manually installed FreeBSD 10.3 image):

robert@sqldb:~ % uname -a
FreeBSD xxx.xxx.xxx 10.3-RELEASE-p7 FreeBSD 10.3-RELEASE-p7 #0: Thu Aug 11 18:38:15 UTC 2016     root@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC  amd64

robert@sqldb:~ % uptime
 6:27AM  up 95 days,  9:41, 1 user, load averages: 3.29, 4.26, 5.28

ZFS dataset created with:
zfs create -o recordsize=128k -o primarycache=all zroot/ara/sqldb/pgsql

Custom param in sysctl.conf:
vfs.zfs.metaslab.lba_weighting_enabled=0

robert@sqldb:~ % sudo dd if=/dev/urandom of=/ara/sqldb/pgsql/test.bin bs=1M count=16000
16000+0 records in
16000+0 records out
16777216000 bytes transferred in 283.185773 secs (59244558 bytes/sec)

robert@sqldb:~ % dd if=/ara/sqldb/pgsql/test.bin of=/dev/null bs=1m
16000+0 records in
16000+0 records out
16777216000 bytes transferred in 33.517116 secs (500556670 bytes/sec)

robert@sqldb:~ % sudo diskinfo -c -t -v ada0
ada0
        512             # sectorsize
        480103981056    # mediasize in bytes (447G)
        937703088       # mediasize in sectors
        4096            # stripesize
        0               # stripeoffset
        930261          # Cylinders according to firmware.
        16              # Heads according to firmware.
        63              # Sectors according to firmware.
        PHWA629405UP480FGN      # Disk ident.

I/O command overhead:
        time to read 10MB block      0.285341 sec  =  0.014 msec/sector
        time to read 20480 sectors   2.641372 sec  =  0.129 msec/sector
        calculated command overhead                =  0.115 msec/sector

Seek times:
        Full stroke:      250 iter in   0.016943 sec =    0.068 msec
        Half stroke:      250 iter in   0.016189 sec =    0.065 msec
        Quarter stroke:   500 iter in   0.022226 sec =    0.044 msec
        Short forward:    400 iter in   0.018208 sec =    0.046 msec
        Short backward:   400 iter in   0.019637 sec =    0.049 msec
        Seq outer:       2048 iter in   0.066197 sec =    0.032 msec
        Seq inner:       2048 iter in   0.054291 sec =    0.027 msec

Transfer rates:
        outside:       102400 kbytes in   0.671285 sec =   152543 kbytes/sec
        middle:        102400 kbytes in   0.640391 sec =   159902 kbytes/sec
        inside:        102400 kbytes in   0.328650 sec =   311578 kbytes/sec

On 06/10/17 09:25, Caza, Aaron wrote:
> Gents,
>
> I'm experiencing an issue where iterating over a PostgreSQL table of ~21.5 million rows (select count(*)) goes from ~35 seconds to ~635 seconds on Intel 540 SSDs. This is using a FreeBSD 10 amd64 stable kernel from Jan 2017. The SSDs are 2 drives in a ZFS mirrored zpool. I'm using PostgreSQL 9.5.7.
>
> I've tried:
>
> *       Using the FreeBSD10 amd64 stable kernel snapshot of May 25, 2017.
>
> *       Tested on half a dozen machines with different models of SSDs:
>
> o   Intel 510s (120GB) in ZFS mirrored pair
>
> o   Intel 520s (120GB) in ZFS mirrored pair
>
> o   Intel 540s (120GB) in ZFS mirrored pair
>
> o   Samsung 850 Pros (256GB) in ZFS mirrored pair
>
> *       Using bonnie++ to remove Postgres from the equation; performance does indeed drop.
>
> *       Rebooting the server and immediately re-running the test; performance is back to the original level.
>
> *       Trying Karl Denninger's patch from PR187594 (which took some work to find a kernel that the FreeBSD10 patch would both apply and compile cleanly against).
>
> *       Disabling ZFS lz4 compression.
>
> *       Running the same test on a FreeBSD9.0 amd64 system using PostgreSQL 9.1.3 with 2 Intel 520s in a ZFS mirrored pair. The system had 165 days of uptime and the test took ~80 seconds; after a reboot and re-run it was still at ~80 seconds (older processor and memory in this system).
>
> I realize there's a whole lot of info I'm not including (dmesg, zfs-stats -a, gstat, et cetera); I'm hoping some enlightened individual will be able to point me to a solution with only the above to go on.
>
> Cheers,
> Aaron
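P.S. For anyone wanting to reproduce the PostgreSQL side of this, the test is just a sequential scan via count(*). A minimal sketch for timing it from the shell (the database and table names here are placeholders, not the actual schema):

/usr/bin/time -h psql -d mydb -c "SELECT count(*) FROM big_table"

On a healthy system the scan of the ~21.5 million row table finishes in ~35 seconds; once the degradation sets in it takes ~635 seconds with no other change.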
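To watch for the degradation outside of PostgreSQL entirely, a rough sh sketch along the lines of the dd test quoted above (the file path, size and interval are placeholders; adjust for your pool, and note that with a large ARC part of the re-read may be served from cache rather than the SSDs):

#!/bin/sh
# Write a 16GB test file once, then re-read it every hour so the
# throughput dd reports can be compared over time.
TESTFILE=/tank/pgsql/test.bin    # placeholder path on the pool under test

dd if=/dev/urandom of="${TESTFILE}" bs=1m count=16000

while true; do
        date
        dd if="${TESTFILE}" of=/dev/null bs=1m
        sleep 3600
done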