Date: Tue, 13 Jun 2017 06:36:53 -0700
From: trafdev <trafdev@mail.ru>
To: freebsd-hackers@freebsd.org
Subject: Re: FreeBSD10 Stable + ZFS + PostgreSQL + SSD performance drop < 24 hours
Message-ID: <cebeb433-005e-7cb9-9498-eb58d4b12858@mail.ru>
In-Reply-To: <79528bf7a85a47079756dc508130360b@DM2PR58MB013.032d.mgd.msft.net>
References: <79528bf7a85a47079756dc508130360b@DM2PR58MB013.032d.mgd.msft.net>
> Tested on half a dozen machines with different models of SSDs

Do they all share the same motherboard model?
I have a similar setup (an OVH Enterprise SP-128-S dedicated server with
128GB RAM, 480GB SSDs in a ZFS mirror, and a vanilla FreeBSD 10.3 image
installed manually):
robert@sqldb:~ % uname -a
FreeBSD xxx.xxx.xxx 10.3-RELEASE-p7 FreeBSD 10.3-RELEASE-p7 #0: Thu Aug 11 18:38:15 UTC 2016     root@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC  amd64
robert@sqldb:~ % uptime
6:27AM up 95 days, 9:41, 1 user, load averages: 3.29, 4.26, 5.28
The ZFS dataset was created with:

zfs create -o recordsize=128k -o primarycache=all zroot/ara/sqldb/pgsql
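
A quick sanity check of those dataset properties (nothing exotic, just
zfs get):

robert@sqldb:~ % zfs get recordsize,primarycache zroot/ara/sqldb/pgsql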
Custom parameter in sysctl.conf:

vfs.zfs.metaslab.lba_weighting_enabled=0
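
Its runtime value can be confirmed with:

robert@sqldb:~ % sysctl vfs.zfs.metaslab.lba_weighting_enabled
vfs.zfs.metaslab.lba_weighting_enabled: 0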
robert@sqldb:~ % sudo dd if=/dev/urandom of=/ara/sqldb/pgsql/test.bin bs=1M count=16000
16000+0 records in
16000+0 records out
16777216000 bytes transferred in 283.185773 secs (59244558 bytes/sec)
robert@sqldb:~ % dd if=/ara/sqldb/pgsql/test.bin of=/dev/null bs=1m
16000+0 records in
16000+0 records out
16777216000 bytes transferred in 33.517116 secs (500556670 bytes/sec)
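
(Note: with 128GB of RAM, a 16GB read-back like this is served largely
from the ARC rather than the SSDs. To get closer to raw disk reads one
could use a test file larger than RAM, or cache only metadata on a
scratch dataset, e.g. the following sketch, where the dataset name is
made up:

sudo zfs create -o primarycache=metadata zroot/ara/scratch

and then repeat the dd test there.)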
robert@sqldb:~ % sudo diskinfo -c -t -v ada0
ada0
        512             # sectorsize
        480103981056    # mediasize in bytes (447G)
        937703088       # mediasize in sectors
        4096            # stripesize
        0               # stripeoffset
        930261          # Cylinders according to firmware.
        16              # Heads according to firmware.
        63              # Sectors according to firmware.
        PHWA629405UP480FGN      # Disk ident.
I/O command overhead:
        time to read 10MB block      0.285341 sec  =  0.014 msec/sector
        time to read 20480 sectors   2.641372 sec  =  0.129 msec/sector
        calculated command overhead                =  0.115 msec/sector
Seek times:
        Full stroke:      250 iter in   0.016943 sec =    0.068 msec
        Half stroke:      250 iter in   0.016189 sec =    0.065 msec
        Quarter stroke:   500 iter in   0.022226 sec =    0.044 msec
        Short forward:    400 iter in   0.018208 sec =    0.046 msec
        Short backward:   400 iter in   0.019637 sec =    0.049 msec
        Seq outer:       2048 iter in   0.066197 sec =    0.032 msec
        Seq inner:       2048 iter in   0.054291 sec =    0.027 msec
Transfer rates:
        outside:       102400 kbytes in   0.671285 sec =   152543 kbytes/sec
        middle:        102400 kbytes in   0.640391 sec =   159902 kbytes/sec
        inside:        102400 kbytes in   0.328650 sec =   311578 kbytes/sec
On 06/10/17 09:25, Caza, Aaron wrote:
> Gents,
>
> I'm experiencing an issue where iterating over a PostgreSQL table of
> ~21.5 million rows (select count(*)) goes from ~35 seconds to ~635
> seconds on Intel 540 SSDs. This is using a FreeBSD 10 amd64 stable
> kernel from Jan 2017. The SSDs are two drives in a ZFS mirrored zpool.
> I'm using PostgreSQL 9.5.7.
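>
> For illustration, the test itself is just a full count timed from the
> shell, roughly as follows (the database and table names here are
> placeholders):
>
> /usr/bin/time psql -d mydb -c "SELECT count(*) FROM big_table;"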
>
> I've tried:
>
> * Using the FreeBSD 10 amd64 stable kernel snapshot of May 25, 2017.
>
> * Tested on half a dozen machines with different models of SSDs:
>
> o Intel 510s (120GB) in ZFS mirrored pair
>
> o Intel 520s (120GB) in ZFS mirrored pair
>
> o Intel 540s (120GB) in ZFS mirrored pair
>
> o Samsung 850 Pros (256GB) in ZFS mirrored pair
>
> * Using bonnie++ to take PostgreSQL out of the equation; performance
>   does indeed drop (an example invocation is sketched after this list).
>
> * Rebooting the server and immediately re-running the test, after
>   which performance is back to the original.
>
> * Trying Karl Denninger's patch from PR187594 (it took some work to
>   find a kernel that the FreeBSD 10 patch would both apply to and
>   compile cleanly against).
>
> * Disabling ZFS lz4 compression (see the sketch after this list).
>
> * Running the same test on a FreeBSD 9.0 amd64 system using PostgreSQL
>   9.1.3 with two Intel 520s in a ZFS mirrored pair. The system had 165
>   days of uptime and the test took ~80 seconds; after a reboot and
>   re-run it was still at ~80 seconds (this system has an older
>   processor and memory).
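>
> For reference, the bonnie++ runs were along these lines (the directory,
> file size, and user are illustrative placeholders):
>
> bonnie++ -d /tank/pgsql -s 32g -u pgsql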
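>
> And disabling compression for a run amounted to something like this
> (the dataset name is a placeholder):
>
> zfs set compression=off tank/pgsql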
>
> I realize that there's a whole lot of info I'm not including (dmesg,
> zfs-stats -a, gstat, et cetera); I'm hoping some enlightened individual
> will be able to point me to a solution with only the above to go on.
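>
> (Should anyone want that info, it would be gathered with something
> like:
>
> dmesg | tail
> zfs-stats -a
> gstat -b -I 5s
> zpool status -v
>
> but I'm omitting the output here to keep this short.)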
>
> Cheers,
> Aaron
