Date: Sat, 17 Jun 2017 05:16:22 +0000
From: "Caza, Aaron" <Aaron.Caza@ca.weatherford.com>
To: "freebsd-hackers@freebsd.org" <freebsd-hackers@freebsd.org>
Subject: Re: FreeBSD10 Stable + ZFS + PostgreSQL + SSD performance drop < 24 hours
Message-ID: <4561529b83ce4270b09aa0e3b12f299f@BLUPR58MB002.032d.mgd.msft.net>
Regarding this issue, I've now conducted testing using only a stock FreeBSD 10.3-STABLE amd64 GENERIC kernel and dd to read a large file. The following is a log, taken hourly, of the degradation, which occurred at just over 9 hours of uptime. As the original is quite large, I've removed some sections; however, these can be supplied if desired.

Supplied are the initial dmesg and zpool status, logged only at startup, followed by uptime, uname -a, and zfs-stats -a output, each of which is logged hourly.
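For reference, each hourly run is essentially the following /bin/sh sketch (the file path and log destination here are illustrative placeholders, not the actual names on the test box; the dd arguments are reconstructed from the 16000 x 1 MiB transfers reported below):

#!/bin/sh
# Hourly SSD read test (illustrative sketch; paths are placeholders).
LOG=/var/log/ssd-perf.log          # log destination (placeholder)
TESTFILE=/wwbase/largefile         # ~16 GB test file on the ZFS pool (placeholder)

{
    echo "Testing SSD performance @ $(date -u)"
    uname -a
    uptime
    echo "Starting 'dd' test of large file...please wait"
    # 16000 x 1 MiB sequential reads = 16,777,216,000 bytes, matching the figures below
    dd if="$TESTFILE" of=/dev/null bs=1m count=16000
    zfs-stats -a
    echo "SSD performance testing completed @ $(date -u)"
} >> "$LOG" 2>&1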
Copyright (c) 1992-2017 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
The Regents of the University of California. All rights reserved.
FreeBSD is a registered trademark of The FreeBSD Foundation.
FreeBSD 10.3-STABLE #0 r319701: Mon Jun 12 19:23:44 UTC 2017
root@releng1.nyi.freebsd.org:/usr/obj/usr/src/sys/GENERIC amd64
FreeBSD clang version 3.4.1 (tags/RELEASE_34/dot1-final 208032) 20140512
CPU: Intel(R) Xeon(R) CPU E31240 @ 3.30GHz (3292.60-MHz K8-class CPU)
Origin="GenuineIntel"  Id=0x206a7  Family=0x6  Model=0x2a  Stepping=7
Features=0xbfebfbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CLFLUSH,DTS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE>
Features2=0x1dbae3ff<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,POPCNT,TSCDLT,XSAVE,OSXSAVE,AVX>
AMD Features=0x28100800<SYSCALL,NX,RDTSCP,LM>
AMD Features2=0x1<LAHF>
XSAVE Features=0x1<XSAVEOPT>
VT-x: PAT,HLT,MTF,PAUSE,EPT,UG,VPID
TSC: P-state invariant, performance statistics
real memory  = 8589934592 (8192 MB)
avail memory = 8219299840 (7838 MB)
Event timer "LAPIC" quality 600
ACPI APIC Table: <SUPERM SMCI--MB>
FreeBSD/SMP: Multiprocessor System Detected: 8 CPUs
FreeBSD/SMP: 1 package(s) x 4 core(s) x 2 SMT threads
cpu0 (BSP): APIC ID: 0
cpu1 (AP): APIC ID: 1
cpu2 (AP): APIC ID: 2
cpu3 (AP): APIC ID: 3
cpu4 (AP): APIC ID: 4
cpu5 (AP): APIC ID: 5
cpu6 (AP): APIC ID: 6
cpu7 (AP): APIC ID: 7
random: <Software, Yarrow> initialized
ioapic0 <Version 2.0> irqs 0-23 on motherboard
kbd1 at kbdmux0
cryptosoft0: <software crypto> on motherboard
acpi0: <SUPERM SMCI--MB> on motherboard
acpi0: Power Button (fixed)
cpu0: <ACPI CPU> on acpi0
cpu1: <ACPI CPU> on acpi0
cpu2: <ACPI CPU> on acpi0
cpu3: <ACPI CPU> on acpi0
cpu4: <ACPI CPU> on acpi0
cpu5: <ACPI CPU> on acpi0
cpu6: <ACPI CPU> on acpi0
cpu7: <ACPI CPU> on acpi0
attimer0: <AT timer> port 0x40-0x43 irq 0 on acpi0
Timecounter "i8254" frequency 1193182 Hz quality 0
Event timer "i8254" frequency 1193182 Hz quality 100
atrtc0: <AT realtime clock> port 0x70-0x71 irq 8 on acpi0
Event timer "RTC" frequency 32768 Hz quality 0
hpet0: <High Precision Event Timer> iomem 0xfed00000-0xfed003ff on acpi0
Timecounter "HPET" frequency 14318180 Hz quality 950
Event timer "HPET" frequency 14318180 Hz quality 550
Timecounter "ACPI-fast" frequency 3579545 Hz quality 900
acpi_timer0: <24-bit timer at 3.579545MHz> port 0x408-0x40b on acpi0
pcib0: <ACPI Host-PCI bridge> port 0xcf8-0xcff on acpi0
pci0: <ACPI PCI bus> on pcib0
em0: <Intel(R) PRO/1000 Network Connection 7.6.1-k> port 0xf020-0xf03f mem 0xfba00000-0xfba1ffff,0xfba24000-0xfba24fff irq 20 at device 25.0 on pci0
em0: Using an MSI interrupt
em0: Ethernet address: 00:25:90:76:6b:41
ehci0: <Intel Cougar Point USB 2.0 controller> mem 0xfba23000-0xfba233ff irq 16 at device 26.0 on pci0
usbus0: EHCI version 1.0
usbus0 on ehci0
pcib1: <ACPI PCI-PCI bridge> irq 17 at device 28.0 on pci0
pci1: <ACPI PCI bus> on pcib1
pcib2: <ACPI PCI-PCI bridge> irq 17 at device 28.4 on pci0
pci2: <ACPI PCI bus> on pcib2
em1: <Intel(R) PRO/1000 Network Connection 7.6.1-k> port 0xe000-0xe01f mem 0xfb900000-0xfb91ffff,0xfb920000-0xfb923fff irq 16 at device 0.0 on pci2
em1: Using MSIX interrupts with 3 vectors
em1: Ethernet address: 00:25:90:76:6b:40
ehci1: <Intel Cougar Point USB 2.0 controller> mem 0xfba22000-0xfba223ff irq 23 at device 29.0 on pci0
usbus1: EHCI version 1.0
usbus1 on ehci1
pcib3: <ACPI PCI-PCI bridge> at device 30.0 on pci0
pci3: <ACPI PCI bus> on pcib3
vgapci0: <VGA-compatible display> mem 0xfe000000-0xfe7fffff,0xfb800000-0xfb803fff,0xfb000000-0xfb7fffff irq 23 at device 3.0 on pci3
vgapci0: Boot video device
isab0: <PCI-ISA bridge> at device 31.0 on pci0
isa0: <ISA bus> on isab0
ahci0: <Intel Cougar Point AHCI SATA controller> port 0xf070-0xf077,0xf060-0xf063,0xf050-0xf057,0xf040-0xf043,0xf000-0xf01f mem 0xfba21000-0xfba217ff irq 19 at device 31.2 on pci0
ahci0: AHCI v1.30 with 6 6Gbps ports, Port Multiplier not supported
ahcich0: <AHCI channel> at channel 0 on ahci0
ahcich1: <AHCI channel> at channel 1 on ahci0
ahciem0: <AHCI enclosure management bridge> on ahci0
acpi_button0: <Power Button> on acpi0
atkbdc0: <Keyboard controller (i8042)> port 0x60,0x64 irq 1 on acpi0
atkbd0: <AT Keyboard> irq 1 on atkbdc0
kbd0 at atkbd0
atkbd0: [GIANT-LOCKED]
psm0: <PS/2 Mouse> irq 12 on atkbdc0
psm0: [GIANT-LOCKED]
psm0: model IntelliMouse Explorer, device ID 4
uart0: <16550 or compatible> port 0x3f8-0x3ff irq 4 flags 0x10 on acpi0
orm0: <ISA Option ROMs> at iomem 0xc0000-0xc7fff,0xc8000-0xc8fff on isa0
sc0: <System console> at flags 0x100 on isa0
sc0: VGA <16 virtual consoles, flags=0x300>
vga0: <Generic ISA VGA> at port 0x3c0-0x3df iomem 0xa0000-0xbffff on isa0
ppc0: cannot reserve I/O port range
est0: <Enhanced SpeedStep Frequency Control> on cpu0
est1: <Enhanced SpeedStep Frequency Control> on cpu1
est2: <Enhanced SpeedStep Frequency Control> on cpu2
est3: <Enhanced SpeedStep Frequency Control> on cpu3
est4: <Enhanced SpeedStep Frequency Control> on cpu4
est5: <Enhanced SpeedStep Frequency Control> on cpu5
est6: <Enhanced SpeedStep Frequency Control> on cpu6
est7: <Enhanced SpeedStep Frequency Control> on cpu7
ZFS filesystem version: 5
ZFS storage pool version: features support (5000)
Timecounters tick every 1.000 msec
md0: Preloaded image </boot/mfsroot> 17686528 bytes at 0xffffffff81daa1b8
random: unblocking device.
usbus0: 480Mbps High Speed USB v2.0
usbus1: 480Mbps High Speed USB v2.0
ugen0.1: <Intel EHCI root HUB> at usbus0
uhub0: <Intel EHCI root HUB, class 9/0, rev 2.00/1.00, addr 1> on usbus0
ugen1.1: <Intel EHCI root HUB> at usbus1
uhub1: <Intel EHCI root HUB, class 9/0, rev 2.00/1.00, addr 1> on usbus1
ada0 at ahcich0 bus 0 scbus0 target 0 lun 0
ada0: <Samsung SSD 850 PRO 256GB EXM03B6Q> ACS-2 ATA SATA 3.x device
ada0: Serial Number S39KNB0HB00482Y
ada0: 600.000MB/s transfers (SATA 3.x, UDMA6, PIO 512bytes)
ada0: Command Queueing enabled
ada0: 244198MB (500118192 512 byte sectors)
ada0: quirks=0x1<4K>
ada1 at ahcich1 bus 0 scbus1 target 0 lun 0
ada1: <Samsung SSD 850 PRO 256GB EXM03B6Q> ACS-2 ATA SATA 3.x device
ada1: Serial Number S39KNB0HB00473Z
ada1: 600.000MB/s transfers (SATA 3.x, UDMA6, PIO 512bytes)
ada1: Command Queueing enabled
ada1: 244198MB (500118192 512 byte sectors)
ada1: quirks=0x1<4K>
ses0 at ahciem0 bus 0 scbus2 target 0 lun 0
ses0: <AHCI SGPIO Enclosure 1.00 0001> SEMB S-E-S 2.00 device
ses0: SEMB SES Device
SMP: AP CPU #1 Launched!
SMP: AP CPU #6 Launched!
SMP: AP CPU #3 Launched!
SMP: AP CPU #5 Launched!
SMP: AP CPU #2 Launched!
SMP: AP CPU #4 Launched!
SMP: AP CPU #7 Launched!
Timecounter "TSC-low" frequency 1646298306 Hz quality 1000
Root mount waiting for: usbus1 usbus0
uhub1: 2 ports with 2 removable, self powered
uhub0: 2 ports with 2 removable, self powered
Root mount waiting for: usbus1 usbus0
ugen1.2: <vendor 0x8087 product 0x0024> at usbus1
uhub2: <vendor 0x8087 product 0x0024, class 9/0, rev 2.00/0.00, addr 2> on usbus1
ugen0.2: <vendor 0x8087 product 0x0024> at usbus0
uhub3: <vendor 0x8087 product 0x0024, class 9/0, rev 2.00/0.00, addr 2> on usbus0
Root mount waiting for: usbus1 usbus0
uhub2: 6 ports with 6 removable, self powered
uhub3: 6 ports with 6 removable, self powered
ugen1.3: <Weatherford SPD> at usbus1
Trying to mount root from ufs:/dev/md0 []...
bridge0: Ethernet address: 02:5d:9c:c3:f4:00
bridge0: link state changed to UP
em0: promiscuous mode enabled
em1: promiscuous mode enabled
em0: link state changed to UP
Zpool Status:
  pool: wwbase
 state: ONLINE
  scan: scrub repaired 0 in 0h1m with 0 errors on Sat Jun 10 18:01:26 2017
config:

        NAME                                          STATE     READ WRITE CKSUM
        wwbase                                        ONLINE       0     0     0
          mirror-0                                    ONLINE       0     0     0
            gpt/83b2ce34-4b9f-11e7-8d12-002590766b41  ONLINE       0     0     0
            gpt/8ec0d395-4b9f-11e7-8d12-002590766b41  ONLINE       0     0     0

errors: No known data errors
-------------------------------------------------------------------------------
Testing SSD performance @ Fri Jun 16 19:00:00 UTC 2017
FreeBSD xyz.test 10.3-STABLE FreeBSD 10.3-STABLE #0 r319701: Mon Jun 12 19:23:44 UTC 2017     root@releng1.nyi.freebsd.org:/usr/obj/usr/src/sys/GENERIC  amd64
7:00PM up 12 mins, 1 user, load averages: 0.00, 0.04, 0.07
Starting 'dd' test of large file...please wait
16000+0 records in
16000+0 records out
16777216000 bytes transferred in 18.844918 secs (890278004 bytes/sec)
------------------------------------------------------------------------
ZFS Subsystem Report                            Fri Jun 16 19:00:18 2017
------------------------------------------------------------------------
System Information:
Kernel Version:  1003514 (osreldate)
Hardware Platform:  amd64
Processor Architecture:  amd64
ZFS Storage pool Version:  5000
ZFS Filesystem Version:  5
FreeBSD 10.3-STABLE #0 r319701: Mon Jun 12 19:23:44 UTC 2017 root
7:00PM up 13 mins, 1 user, load averages: 2.27, 0.57, 0.25
------------------------------------------------------------------------
System Memory:
0.19%  14.90 MiB Active,  0.38%  29.62 MiB Inact
4.31%  339.96 MiB Wired,  0.00%  0 Cache
95.13%  7.33 GiB Free,  0.00%  4.00 KiB Gap
Real Installed:  8.00 GiB
Real Available:  99.18%  7.93 GiB
Real Managed:  97.11%  7.71 GiB
Logical Total:  8.00 GiB
Logical Used:  8.02%  656.83 MiB
Logical Free:  91.98%  7.36 GiB
Kernel Memory:  102.71 MiB
Data:  73.04%  75.02 MiB
Text:  26.96%  27.69 MiB
Kernel Memory Map:  7.71 GiB
Size:  2.48%  196.06 MiB
Free:  97.52%  7.51 GiB
------------------------------------------------------------------------
ARC Summary: (HEALTHY)
Memory Throttle Count:  0
ARC Misc:
Deleted:  2.06m
Recycle Misses:  0
Mutex Misses:  10.06k
Evict Skips:  1.49m
ARC Size:  103.44%  52.76 MiB
Target Size: (Adaptive)  100.00%  51.00 MiB
Min Size (Hard Limit):  98.04%  50.00 MiB
Max Size (High Water):  1:1  51.00 MiB
ARC Size Breakdown:
Recently Used Cache Size:  89.95%  47.45 MiB
Frequently Used Cache Size:  10.05%  5.30 MiB
ARC Hash Breakdown:
Elements Max:  11.92k
Elements Current:  73.83%  8.80k
Collisions:  19.65k
Chain Max:  2
Chains:  25
------------------------------------------------------------------------
ARC Efficiency:  4.06m
Cache Hit Ratio:  49.05%  1.99m
Cache Miss Ratio:  50.95%  2.07m
Actual Hit Ratio:  49.05%  1.99m
Data Demand Efficiency:  99.91%  1.94m
Data Prefetch Efficiency:  0.01%  2.05m
CACHE HITS BY CACHE LIST:
  Anonymously Used:  0.01%  159
  Most Recently Used:  99.82%  1.99m
  Most Frequently Used:  0.16%  3.28k
  Most Recently Used Ghost:  0.00%  9
  Most Frequently Used Ghost:  0.00%  83
CACHE HITS BY DATA TYPE:
  Demand Data:  97.40%  1.94m
  Prefetch Data:  0.01%  151
  Demand Metadata:  2.59%  51.44k
  Prefetch Metadata:  0.01%  100
CACHE MISSES BY DATA TYPE:
  Demand Data:  0.09%  1.76k
  Prefetch Data:  99.07%  2.05m
  Demand Metadata:  0.07%  1.50k
  Prefetch Metadata:  0.77%  16.01k
------------------------------------------------------------------------
L2ARC is disabled
------------------------------------------------------------------------
File-Level Prefetch: (HEALTHY)
DMU Efficiency:  46.70k
Hit Ratio:  34.40%  16.06k
Miss Ratio:  65.60%  30.64k
Colinear:  0
  Hit Ratio:  100.00%  0
  Miss Ratio:  100.00%  0
Stride:  0
  Hit Ratio:  100.00%  0
  Miss Ratio:  100.00%  0
DMU Misc:
Reclaim:  0
  Successes:  100.00%  0
  Failures:  100.00%  0
Streams:  0
  +Resets:  100.00%  0
  -Resets:  100.00%  0
Bogus:  0
------------------------------------------------------------------------
VDEV Cache Summary:  16.52k
Hit Ratio:  1.65%  272
Miss Ratio:  93.47%  15.44k
Delegations:  4.88%  806
------------------------------------------------------------------------
ZFS Tunables (sysctl):
kern.maxusers 843
vm.kmem_size 8273297408
vm.kmem_size_scale 1
vm.kmem_size_min 0
vm.kmem_size_max 1319413950874
vfs.zfs.trim.max_interval 1
vfs.zfs.trim.timeout 30
vfs.zfs.trim.txg_delay 32
vfs.zfs.trim.enabled 0
vfs.zfs.vol.unmap_enabled 1
vfs.zfs.vol.mode 1
vfs.zfs.version.zpl 5
vfs.zfs.version.spa 5000
vfs.zfs.version.acl 1
vfs.zfs.version.ioctl 7
vfs.zfs.debug 0
vfs.zfs.super_owner 0
vfs.zfs.sync_pass_rewrite 2
vfs.zfs.sync_pass_dont_compress 5
vfs.zfs.sync_pass_deferred_free 2
vfs.zfs.zio.dva_throttle_enabled 1
vfs.zfs.zio.exclude_metadata 0
vfs.zfs.zio.use_uma 1
vfs.zfs.cache_flush_disable 0
vfs.zfs.zil_replay_disable 0
vfs.zfs.min_auto_ashift 9
vfs.zfs.max_auto_ashift 13
vfs.zfs.vdev.trim_max_pending 10000
vfs.zfs.vdev.bio_delete_disable 0
vfs.zfs.vdev.bio_flush_disable 0
vfs.zfs.vdev.queue_depth_pct 1000
vfs.zfs.vdev.write_gap_limit 4096
vfs.zfs.vdev.read_gap_limit 32768
vfs.zfs.vdev.aggregation_limit 131072
vfs.zfs.vdev.trim_max_active 64
vfs.zfs.vdev.trim_min_active 1
vfs.zfs.vdev.scrub_max_active 2
vfs.zfs.vdev.scrub_min_active 1
vfs.zfs.vdev.async_write_max_active 10
vfs.zfs.vdev.async_write_min_active 1
vfs.zfs.vdev.async_read_max_active 3
vfs.zfs.vdev.async_read_min_active 1
vfs.zfs.vdev.sync_write_max_active 10
vfs.zfs.vdev.sync_write_min_active 10
vfs.zfs.vdev.sync_read_max_active 10
vfs.zfs.vdev.sync_read_min_active 10
vfs.zfs.vdev.max_active 1000
vfs.zfs.vdev.async_write_active_max_dirty_percent  60
vfs.zfs.vdev.async_write_active_min_dirty_percent  30
vfs.zfs.vdev.mirror.non_rotating_seek_inc  1
vfs.zfs.vdev.mirror.non_rotating_inc       0
vfs.zfs.vdev.mirror.rotating_seek_offset   1048576
vfs.zfs.vdev.mirror.rotating_seek_inc 5
vfs.zfs.vdev.mirror.rotating_inc 0
vfs.zfs.vdev.trim_on_init 1
vfs.zfs.vdev.cache.bshift 16
vfs.zfs.vdev.cache.size 10485760
vfs.zfs.vdev.cache.max 16384
vfs.zfs.vdev.metaslabs_per_vdev 200
vfs.zfs.txg.timeout 5
vfs.zfs.space_map_blksz 4096
vfs.zfs.spa_min_slop 134217728
vfs.zfs.spa_slop_shift 5
vfs.zfs.spa_asize_inflation 24
vfs.zfs.deadman_enabled 1
vfs.zfs.deadman_checktime_ms 5000
vfs.zfs.deadman_synctime_ms 1000000
vfs.zfs.debug_flags 0
vfs.zfs.debugflags 0
vfs.zfs.recover 0
vfs.zfs.spa_load_verify_data 1
vfs.zfs.spa_load_verify_metadata 1
vfs.zfs.spa_load_verify_maxinflight 10000
vfs.zfs.ccw_retry_interval 300
vfs.zfs.check_hostid 1
vfs.zfs.mg_fragmentation_threshold 85
vfs.zfs.mg_noalloc_threshold 0
vfs.zfs.condense_pct 200
vfs.zfs.metaslab.bias_enabled 1
vfs.zfs.metaslab.lba_weighting_enabled 1
vfs.zfs.metaslab.fragmentation_factor_enabled  1
vfs.zfs.metaslab.preload_enabled 1
vfs.zfs.metaslab.preload_limit 3
vfs.zfs.metaslab.unload_delay 8
vfs.zfs.metaslab.load_pct 50
vfs.zfs.metaslab.min_alloc_size 33554432
vfs.zfs.metaslab.df_free_pct 4
vfs.zfs.metaslab.df_alloc_threshold 131072
vfs.zfs.metaslab.debug_unload 0
vfs.zfs.metaslab.debug_load 0
vfs.zfs.metaslab.fragmentation_threshold  70
vfs.zfs.metaslab.gang_bang 16777217
vfs.zfs.free_bpobj_enabled 1
vfs.zfs.free_max_blocks -1
vfs.zfs.no_scrub_prefetch 0
vfs.zfs.no_scrub_io 0
vfs.zfs.resilver_min_time_ms 3000
vfs.zfs.free_min_time_ms 1000
vfs.zfs.scan_min_time_ms 1000
vfs.zfs.scan_idle 50
vfs.zfs.scrub_delay 4
vfs.zfs.resilver_delay 2
vfs.zfs.top_maxinflight 32
vfs.zfs.zfetch.array_rd_sz 1048576
vfs.zfs.zfetch.max_distance 8388608
vfs.zfs.zfetch.min_sec_reap 2
vfs.zfs.zfetch.max_streams 8
vfs.zfs.prefetch_disable 0
vfs.zfs.delay_scale 500000
vfs.zfs.delay_min_dirty_percent 60
vfs.zfs.dirty_data_sync 67108864
vfs.zfs.dirty_data_max_percent 10
vfs.zfs.dirty_data_max_max 4294967296
vfs.zfs.dirty_data_max 851961036
vfs.zfs.max_recordsize 1048576
vfs.zfs.send_holes_without_birth_time 1
vfs.zfs.mdcomp_disable 0
vfs.zfs.nopwrite_enabled 1
vfs.zfs.dedup.prefetch 1
vfs.zfs.l2c_only_size 0
vfs.zfs.mfu_ghost_data_esize 0
vfs.zfs.mfu_ghost_metadata_esize 39650304
vfs.zfs.mfu_ghost_size 39650304
vfs.zfs.mfu_data_esize 0
vfs.zfs.mfu_metadata_esize 2785280
vfs.zfs.mfu_size 2839040
vfs.zfs.mru_ghost_data_esize 253952
vfs.zfs.mru_ghost_metadata_esize 7712768
vfs.zfs.mru_ghost_size 7966720
vfs.zfs.mru_data_esize 40481280
vfs.zfs.mru_metadata_esize 3874816
vfs.zfs.mru_size 49604608
vfs.zfs.anon_data_esize 0
vfs.zfs.anon_metadata_esize 0
vfs.zfs.anon_size 28672
vfs.zfs.l2arc_norw 1
vfs.zfs.l2arc_feed_again 1
vfs.zfs.l2arc_noprefetch 1
vfs.zfs.l2arc_feed_min_ms 200
vfs.zfs.l2arc_feed_secs 1
vfs.zfs.l2arc_headroom 2
vfs.zfs.l2arc_write_boost 8388608
vfs.zfs.l2arc_write_max 8388608
vfs.zfs.arc_meta_limit 13369344
vfs.zfs.arc_free_target 14047
vfs.zfs.compressed_arc_enabled 1
vfs.zfs.arc_shrink_shift 7
vfs.zfs.arc_average_blocksize 8192
vfs.zfs.arc_min 52428800
vfs.zfs.arc_max 53477376
------------------------------------------------------------------------
SSD performance testing completed @ Fri Jun 16 19:00:19 UTC 2017
-------------------------------------------------------------------------------
This section was removed in the interests of brevity but can be supplied if required.
-------------------------------------------------------------------------------
Testing SSD performance @ Sat Jun 17 03:00:00 UTC 2017
FreeBSD xyz.test 10.3-STABLE FreeBSD 10.3-STABLE #0 r319701: Mon Jun 12 19:23:44 UTC 2017     root@releng1.nyi.freebsd.org:/usr/obj/usr/src/sys/GENERIC  amd64
3:00AM up 8:12, 0 users, load averages: 0.00, 0.00, 0.00
Starting 'dd' test of large file...please wait
16000+0 records in
16000+0 records out
16777216000 bytes transferred in 18.995611 secs (883215382 bytes/sec)
------------------------------------------------------------------------
ZFS Subsystem Report                            Sat Jun 17 03:00:19 2017
------------------------------------------------------------------------
System Information:
Kernel Version:  1003514 (osreldate)
Hardware Platform:  amd64
Processor Architecture:  amd64
ZFS Storage pool Version:  5000
ZFS Filesystem Version:  5
FreeBSD 10.3-STABLE #0 r319701: Mon Jun 12 19:23:44 UTC 2017 root
3:00AM up 8:13, 0 users, load averages: 3.41, 0.80, 0.29
------------------------------------------------------------------------
System Memory:
0.04%  2.78 MiB Active,  0.55%  43.71 MiB Inact
4.59%  361.96 MiB Wired,  0.01%  632.00 KiB Cache
94.82%  7.31 GiB Free,  0.00%  4.00 KiB Gap
Real Installed:  8.00 GiB
Real Available:  99.18%  7.93 GiB
Real Managed:  97.11%  7.71 GiB
Logical Total:  8.00 GiB
Logical Used:  8.14%  666.72 MiB
Logical Free:  91.86%  7.35 GiB
Kernel Memory:  103.60 MiB
Data:  73.27%  75.91 MiB
Text:  26.73%  27.69 MiB
Kernel Memory Map:  7.71 GiB
Size:  2.66%  210.04 MiB
Free:  97.34%  7.50 GiB
------------------------------------------------------------------------
ARC Summary: (HEALTHY)
Memory Throttle Count:  0
ARC Misc:
Deleted:  18.57m
Recycle Misses:  0
Mutex Misses:  91.37k
Evict Skips:  13.33m
ARC Size:  110.62%  56.41 MiB
Target Size: (Adaptive)  100.00%  51.00 MiB
Min Size (Hard Limit):  98.04%  50.00 MiB
Max Size (High Water):  1:1  51.00 MiB
ARC Size Breakdown:
Recently Used Cache Size:  84.43%  47.63 MiB
Frequently Used Cache Size:  15.57%  8.78 MiB
ARC Hash Breakdown:
Elements Max:  11.93k
Elements Current:  80.43%  9.59k
Collisions:  193.31k
Chain Max:  3
Chains:  30
------------------------------------------------------------------------
ARC Efficiency:  36.57m
Cache Hit Ratio:  49.18%  17.98m
Cache Miss Ratio:  50.82%  18.58m
Actual Hit Ratio:  49.18%  17.98m
Data Demand Efficiency:  99.96%  17.53m
Data Prefetch Efficiency:  0.00%  18.43m
CACHE HITS BY CACHE LIST:
  Anonymously Used:  0.00%  588
  Most Recently Used:  99.83%  17.95m
  Most Frequently Used:  0.16%  29.16k
  Most Recently Used Ghost:  0.00%  56
  Most Frequently Used Ghost:  0.00%  263
CACHE HITS BY DATA TYPE:
  Demand Data:  97.46%  17.53m
  Prefetch Data:  0.00%  633
  Demand Metadata:  2.54%  456.26k
  Prefetch Metadata:  0.00%  275
CACHE MISSES BY DATA TYPE:
  Demand Data:  0.04%  7.74k
  Prefetch Data:  99.15%  18.43m
  Demand Metadata:  0.03%  5.77k
  Prefetch Metadata:  0.77%  143.89k
------------------------------------------------------------------------
L2ARC is disabled
------------------------------------------------------------------------
File-Level Prefetch: (HEALTHY)
DMU Efficiency:  597.55k
Hit Ratio:  24.22%  144.70k
Miss Ratio:  75.78%  452.85k
Colinear:  0
  Hit Ratio:  100.00%  0
  Miss Ratio:  100.00%  0
Stride:  0
  Hit Ratio:  100.00%  0
  Miss Ratio:  100.00%  0
DMU Misc:
Reclaim:  0
  Successes:  100.00%  0
  Failures:  100.00%  0
Streams:  0
  +Resets:  100.00%  0
  -Resets:  100.00%  0
Bogus:  0
------------------------------------------------------------------------
VDEV Cache Summary:  138.31k
Hit Ratio:  0.57%  794
Miss Ratio:  98.82%  136.67k
Delegations:  0.61%  840
------------------------------------------------------------------------
ZFS Tunables (sysctl):
kern.maxusers 843
vm.kmem_size 8273297408
vm.kmem_size_scale 1
vm.kmem_size_min 0
vm.kmem_size_max 1319413950874
vfs.zfs.trim.max_interval 1
vfs.zfs.trim.timeout 30
vfs.zfs.trim.txg_delay 32
vfs.zfs.trim.enabled 0
vfs.zfs.vol.unmap_enabled 1
vfs.zfs.vol.mode 1
vfs.zfs.version.zpl 5
vfs.zfs.version.spa 5000
vfs.zfs.version.acl 1
vfs.zfs.version.ioctl 7
vfs.zfs.debug 0
vfs.zfs.super_owner 0
vfs.zfs.sync_pass_rewrite 2
vfs.zfs.sync_pass_dont_compress 5
vfs.zfs.sync_pass_deferred_free 2
vfs.zfs.zio.dva_throttle_enabled 1
vfs.zfs.zio.exclude_metadata 0
vfs.zfs.zio.use_uma 1
vfs.zfs.cache_flush_disable 0
vfs.zfs.zil_replay_disable 0
vfs.zfs.min_auto_ashift 9
vfs.zfs.max_auto_ashift 13
vfs.zfs.vdev.trim_max_pending 10000
vfs.zfs.vdev.bio_delete_disable 0
vfs.zfs.vdev.bio_flush_disable 0
vfs.zfs.vdev.queue_depth_pct 1000
vfs.zfs.vdev.write_gap_limit 4096
vfs.zfs.vdev.read_gap_limit 32768
vfs.zfs.vdev.aggregation_limit 131072
vfs.zfs.vdev.trim_max_active 64
vfs.zfs.vdev.trim_min_active 1
vfs.zfs.vdev.scrub_max_active 2
vfs.zfs.vdev.scrub_min_active 1
vfs.zfs.vdev.async_write_max_active 10
vfs.zfs.vdev.async_write_min_active 1
vfs.zfs.vdev.async_read_max_active 3
vfs.zfs.vdev.async_read_min_active 1
vfs.zfs.vdev.sync_write_max_active 10
vfs.zfs.vdev.sync_write_min_active 10
vfs.zfs.vdev.sync_read_max_active 10
vfs.zfs.vdev.sync_read_min_active 10
vfs.zfs.vdev.max_active 1000
vfs.zfs.vdev.async_write_active_max_dirty_percent  60
vfs.zfs.vdev.async_write_active_min_dirty_percent  30
vfs.zfs.vdev.mirror.non_rotating_seek_inc  1
vfs.zfs.vdev.mirror.non_rotating_inc       0
vfs.zfs.vdev.mirror.rotating_seek_offset   1048576
vfs.zfs.vdev.mirror.rotating_seek_inc 5
vfs.zfs.vdev.mirror.rotating_inc 0
vfs.zfs.vdev.trim_on_init 1
vfs.zfs.vdev.cache.bshift 16
vfs.zfs.vdev.cache.size 10485760
vfs.zfs.vdev.cache.max 16384
vfs.zfs.vdev.metaslabs_per_vdev 200
vfs.zfs.txg.timeout 5
vfs.zfs.space_map_blksz 4096
vfs.zfs.spa_min_slop 134217728
vfs.zfs.spa_slop_shift 5
vfs.zfs.spa_asize_inflation 24
vfs.zfs.deadman_enabled 1
vfs.zfs.deadman_checktime_ms 5000
vfs.zfs.deadman_synctime_ms 1000000
vfs.zfs.debug_flags 0
vfs.zfs.debugflags 0
vfs.zfs.recover 0
vfs.zfs.spa_load_verify_data 1
vfs.zfs.spa_load_verify_metadata 1
vfs.zfs.spa_load_verify_maxinflight 10000
vfs.zfs.ccw_retry_interval 300
vfs.zfs.check_hostid 1
vfs.zfs.mg_fragmentation_threshold 85
vfs.zfs.mg_noalloc_threshold 0
vfs.zfs.condense_pct 200
vfs.zfs.metaslab.bias_enabled 1
vfs.zfs.metaslab.lba_weighting_enabled 1
vfs.zfs.metaslab.fragmentation_factor_enabled  1
vfs.zfs.metaslab.preload_enabled 1
vfs.zfs.metaslab.preload_limit 3
vfs.zfs.metaslab.unload_delay 8
vfs.zfs.metaslab.load_pct 50
vfs.zfs.metaslab.min_alloc_size 33554432
vfs.zfs.metaslab.df_free_pct 4
vfs.zfs.metaslab.df_alloc_threshold 131072
vfs.zfs.metaslab.debug_unload 0
vfs.zfs.metaslab.debug_load 0
vfs.zfs.metaslab.fragmentation_threshold  70
vfs.zfs.metaslab.gang_bang 16777217
vfs.zfs.free_bpobj_enabled 1
vfs.zfs.free_max_blocks -1
vfs.zfs.no_scrub_prefetch 0
vfs.zfs.no_scrub_io 0
vfs.zfs.resilver_min_time_ms 3000
vfs.zfs.free_min_time_ms 1000
vfs.zfs.scan_min_time_ms 1000
vfs.zfs.scan_idle 50
vfs.zfs.scrub_delay 4
vfs.zfs.resilver_delay 2
vfs.zfs.top_maxinflight 32
vfs.zfs.zfetch.array_rd_sz 1048576
vfs.zfs.zfetch.max_distance 8388608
vfs.zfs.zfetch.min_sec_reap 2
vfs.zfs.zfetch.max_streams 8
vfs.zfs.prefetch_disable 0
vfs.zfs.delay_scale 500000
vfs.zfs.delay_min_dirty_percent 60
vfs.zfs.dirty_data_sync 67108864
vfs.zfs.dirty_data_max_percent 10
vfs.zfs.dirty_data_max_max 4294967296
vfs.zfs.dirty_data_max 851961036
vfs.zfs.max_recordsize 1048576
vfs.zfs.send_holes_without_birth_time 1
vfs.zfs.mdcomp_disable 0
vfs.zfs.nopwrite_enabled 1
vfs.zfs.dedup.prefetch 1
vfs.zfs.l2c_only_size 0
vfs.zfs.mfu_ghost_data_esize 0
vfs.zfs.mfu_ghost_metadata_esize 41795584
vfs.zfs.mfu_ghost_size 41795584
vfs.zfs.mfu_data_esize 0
vfs.zfs.mfu_metadata_esize 2670592
vfs.zfs.mfu_size 2719744
vfs.zfs.mru_ghost_data_esize 1622016
vfs.zfs.mru_ghost_metadata_esize 6819840
vfs.zfs.mru_ghost_size 8441856
vfs.zfs.mru_data_esize 42278912
vfs.zfs.mru_metadata_esize 4341760
vfs.zfs.mru_size 52709376
vfs.zfs.anon_data_esize 0
vfs.zfs.anon_metadata_esize 0
vfs.zfs.anon_size 147456
vfs.zfs.l2arc_norw 1
vfs.zfs.l2arc_feed_again 1
vfs.zfs.l2arc_noprefetch 1
vfs.zfs.l2arc_feed_min_ms 200
vfs.zfs.l2arc_feed_secs 1
vfs.zfs.l2arc_headroom 2
vfs.zfs.l2arc_write_boost 8388608
vfs.zfs.l2arc_write_max 8388608
vfs.zfs.arc_meta_limit 13369344
vfs.zfs.arc_free_target 14047
vfs.zfs.compressed_arc_enabled 1
vfs.zfs.arc_shrink_shift 7
vfs.zfs.arc_average_blocksize 8192
vfs.zfs.arc_min 52428800
vfs.zfs.arc_max 53477376
------------------------------------------------------------------------
SSD performance testing completed @ Sat Jun 17 03:00:19 UTC 2017
-------------------------------------------------------------------------------
Testing SSD performance @ Sat Jun 17 04:00:00 UTC 2017
FreeBSD xyz.test 10.3-STABLE FreeBSD 10.3-STABLE #0 r319701: Mon Jun 12 19:23:44 UTC 2017     root@releng1.nyi.freebsd.org:/usr/obj/usr/src/sys/GENERIC  amd64
4:00AM up 9:12, 0 users, load averages: 0.00, 0.00, 0.00
Starting 'dd' test of large file...please wait
16000+0 records in
16000+0 records out
16777216000 bytes transferred in 268.165167 secs (62562995 bytes/sec)
------------------------------------------------------------------------
ZFS Subsystem Report                            Sat Jun 17 04:04:28 2017
------------------------------------------------------------------------
System Information:
Kernel Version:  1003514 (osreldate)
Hardware Platform:  amd64
Processor Architecture:  amd64
ZFS Storage pool Version:  5000
ZFS Filesystem Version:  5
FreeBSD 10.3-STABLE #0 r319701: Mon Jun 12 19:23:44 UTC 2017 root
4:04AM up 9:17, 0 users, load averages: 1.05, 0.67, 0.30
------------------------------------------------------------------------
System Memory:
0.04%  2.80 MiB Active,  0.60%  47.30 MiB Inact
5.90%  465.19 MiB Wired,  0.07%  5.36 MiB Cache
93.40%  7.20 GiB Free,  0.00%  4.00 KiB Gap
Real Installed:  8.00 GiB
Real Available:  99.18%  7.93 GiB
Real Managed:  97.11%  7.71 GiB
Logical Total:  8.00 GiB
Logical Used:  9.40%  769.96 MiB
Logical Free:  90.60%  7.25 GiB
Kernel Memory:  108.62 MiB
Data:  74.51%  80.93 MiB
Text:  25.49%  27.69 MiB
Kernel Memory Map:  7.71 GiB
Size:  2.68%  211.31 MiB
Free:  97.32%  7.50 GiB
------------------------------------------------------------------------
ARC Summary: (HEALTHY)
Memory Throttle Count:  0
ARC Misc:
Deleted:  20.64m
Recycle Misses:  0
Mutex Misses:  682.38k
Evict Skips:  2.33b
ARC Size:  141.09%  71.95 MiB
Target Size: (Adaptive)  100.00%  51.00 MiB
Min Size (Hard Limit):  98.04%  50.00 MiB
Max Size (High Water):  1:1  51.00 MiB
ARC Size Breakdown:
Recently Used Cache Size:  66.20%  47.63 MiB
Frequently Used Cache Size:  33.80%  24.32 MiB
ARC Hash Breakdown:
Elements Max:  11.93k
Elements Current:  41.76%  4.98k
Collisions:  204.57k
Chain Max:  3
Chains:  7
------------------------------------------------------------------------
ARC Efficiency:  40.75m
Cache Hit Ratio:  49.30%  20.09m
Cache Miss Ratio:  50.70%  20.66m
Actual Hit Ratio:  49.29%  20.08m
Data Demand Efficiency:  99.96%  19.58m
Data Prefetch Efficiency:  0.00%  20.47m
CACHE HITS BY CACHE LIST:
  Anonymously Used:  0.02%  3.46k
  Most Recently Used:  99.63%  20.02m
  Most Frequently Used:  0.33%  66.92k
  Most Recently Used Ghost:  0.01%  2.68k
  Most Frequently Used Ghost:  0.00%  604
CACHE HITS BY DATA TYPE:
  Demand Data:  97.43%  19.57m
  Prefetch Data:  0.00%  639
  Demand Metadata:  2.54%  510.40k
  Prefetch Metadata:  0.03%  6.10k
CACHE MISSES BY DATA TYPE:
  Demand Data:  0.04%  8.56k
  Prefetch Data:  99.10%  20.47m
  Demand Metadata:  0.08%  16.61k
  Prefetch Metadata:  0.78%  160.96k
------------------------------------------------------------------------
L2ARC is disabled
------------------------------------------------------------------------
File-Level Prefetch: (HEALTHY)
DMU Efficiency:  775.18k
Hit Ratio:  20.75%  160.83k
Miss Ratio:  79.25%  614.36k
Colinear:  0
  Hit Ratio:  100.00%  0
  Miss Ratio:  100.00%  0
Stride:  0
  Hit Ratio:  100.00%  0
  Miss Ratio:  100.00%  0
DMU Misc:
Reclaim:  0
  Successes:  100.00%  0
  Failures:  100.00%  0
Streams:  0
  +Resets:  100.00%  0
  -Resets:  100.00%  0
Bogus:  0
------------------------------------------------------------------------
VDEV Cache Summary:  156.84k
Hit Ratio:  1.80%  2.83k
Miss Ratio:  97.53%  152.96k
Delegations:  0.67%  1.05k
------------------------------------------------------------------------
ZFS Tunables (sysctl):
kern.maxusers 843
vm.kmem_size 8273297408
vm.kmem_size_scale 1
vm.kmem_size_min 0
vm.kmem_size_max 1319413950874
vfs.zfs.trim.max_interval 1
vfs.zfs.trim.timeout 30
vfs.zfs.trim.txg_delay 32
vfs.zfs.trim.enabled 0
vfs.zfs.vol.unmap_enabled 1
vfs.zfs.vol.mode 1
vfs.zfs.version.zpl 5
vfs.zfs.version.spa 5000
vfs.zfs.version.acl 1
vfs.zfs.version.ioctl 7
vfs.zfs.debug 0
vfs.zfs.super_owner 0
vfs.zfs.sync_pass_rewrite 2
vfs.zfs.sync_pass_dont_compress 5
vfs.zfs.sync_pass_deferred_free 2
vfs.zfs.zio.dva_throttle_enabled 1
vfs.zfs.zio.exclude_metadata 0
vfs.zfs.zio.use_uma 1
vfs.zfs.cache_flush_disable 0
vfs.zfs.zil_replay_disable 0
vfs.zfs.min_auto_ashift 9
vfs.zfs.max_auto_ashift 13
vfs.zfs.vdev.trim_max_pending 10000
vfs.zfs.vdev.bio_delete_disable 0
vfs.zfs.vdev.bio_flush_disable 0
vfs.zfs.vdev.queue_depth_pct 1000
vfs.zfs.vdev.write_gap_limit 4096
vfs.zfs.vdev.read_gap_limit 32768
vfs.zfs.vdev.aggregation_limit 131072
vfs.zfs.vdev.trim_max_active 64
vfs.zfs.vdev.trim_min_active 1
vfs.zfs.vdev.scrub_max_active 2
vfs.zfs.vdev.scrub_min_active 1
vfs.zfs.vdev.async_write_max_active 10
vfs.zfs.vdev.async_write_min_active 1
vfs.zfs.vdev.async_read_max_active 3
vfs.zfs.vdev.async_read_min_active 1
vfs.zfs.vdev.sync_write_max_active 10
vfs.zfs.vdev.sync_write_min_active 10
vfs.zfs.vdev.sync_read_max_active 10
vfs.zfs.vdev.sync_read_min_active 10
vfs.zfs.vdev.max_active 1000
vfs.zfs.vdev.async_write_active_max_dirty_percent  60
vfs.zfs.vdev.async_write_active_min_dirty_percent  30
vfs.zfs.vdev.mirror.non_rotating_seek_inc  1
vfs.zfs.vdev.mirror.non_rotating_inc       0
vfs.zfs.vdev.mirror.rotating_seek_offset   1048576
vfs.zfs.vdev.mirror.rotating_seek_inc 5
vfs.zfs.vdev.mirror.rotating_inc 0
vfs.zfs.vdev.trim_on_init 1
vfs.zfs.vdev.cache.bshift 16
vfs.zfs.vdev.cache.size 10485760
vfs.zfs.vdev.cache.max 16384
vfs.zfs.vdev.metaslabs_per_vdev 200
vfs.zfs.txg.timeout 5
vfs.zfs.space_map_blksz 4096
vfs.zfs.spa_min_slop 134217728
vfs.zfs.spa_slop_shift 5
vfs.zfs.spa_asize_inflation 24
vfs.zfs.deadman_enabled 1
vfs.zfs.deadman_checktime_ms 5000
vfs.zfs.deadman_synctime_ms 1000000
vfs.zfs.debug_flags 0
vfs.zfs.debugflags 0
vfs.zfs.recover 0
vfs.zfs.spa_load_verify_data 1
vfs.zfs.spa_load_verify_metadata 1
vfs.zfs.spa_load_verify_maxinflight 10000
vfs.zfs.ccw_retry_interval 300
vfs.zfs.check_hostid 1
vfs.zfs.mg_fragmentation_threshold 85
vfs.zfs.mg_noalloc_threshold 0
vfs.zfs.condense_pct 200
vfs.zfs.metaslab.bias_enabled 1
vfs.zfs.metaslab.lba_weighting_enabled 1
vfs.zfs.metaslab.fragmentation_factor_enabled  1
vfs.zfs.metaslab.preload_enabled 1
vfs.zfs.metaslab.preload_limit 3
vfs.zfs.metaslab.unload_delay 8
vfs.zfs.metaslab.load_pct 50
vfs.zfs.metaslab.min_alloc_size 33554432
vfs.zfs.metaslab.df_free_pct 4
vfs.zfs.metaslab.df_alloc_threshold 131072
vfs.zfs.metaslab.debug_unload 0
vfs.zfs.metaslab.debug_load 0
vfs.zfs.metaslab.fragmentation_threshold  70
vfs.zfs.metaslab.gang_bang 16777217
vfs.zfs.free_bpobj_enabled 1
vfs.zfs.free_max_blocks -1
vfs.zfs.no_scrub_prefetch 0
vfs.zfs.no_scrub_io 0
vfs.zfs.resilver_min_time_ms 3000
vfs.zfs.free_min_time_ms 1000
vfs.zfs.scan_min_time_ms 1000
vfs.zfs.scan_idle 50
vfs.zfs.scrub_delay 4
vfs.zfs.resilver_delay 2
vfs.zfs.top_maxinflight 32
vfs.zfs.zfetch.array_rd_sz 1048576
vfs.zfs.zfetch.max_distance 8388608
vfs.zfs.zfetch.min_sec_reap 2
vfs.zfs.zfetch.max_streams 8
vfs.zfs.prefetch_disable 0
vfs.zfs.delay_scale 500000
vfs.zfs.delay_min_dirty_percent 60
vfs.zfs.dirty_data_sync 67108864
vfs.zfs.dirty_data_max_percent 10
vfs.zfs.dirty_data_max_max 4294967296
vfs.zfs.dirty_data_max 851961036
vfs.zfs.max_recordsize 1048576
vfs.zfs.send_holes_without_birth_time 1
vfs.zfs.mdcomp_disable 0
vfs.zfs.nopwrite_enabled 1
vfs.zfs.dedup.prefetch 1
vfs.zfs.l2c_only_size 0
vfs.zfs.mfu_ghost_data_esize 1933312
vfs.zfs.mfu_ghost_metadata_esize 45613056
vfs.zfs.mfu_ghost_size 47546368
vfs.zfs.mfu_data_esize 0
vfs.zfs.mfu_metadata_esize 475136
vfs.zfs.mfu_size 1127936
vfs.zfs.mru_ghost_data_esize 4771840
vfs.zfs.mru_ghost_metadata_esize 1060864
vfs.zfs.mru_ghost_size 5832704
vfs.zfs.mru_data_esize 0
vfs.zfs.mru_metadata_esize 0
vfs.zfs.mru_size 26115584
vfs.zfs.anon_data_esize 0
vfs.zfs.anon_metadata_esize 0
vfs.zfs.anon_size 147456
vfs.zfs.l2arc_norw 1
vfs.zfs.l2arc_feed_again 1
vfs.zfs.l2arc_noprefetch 1
vfs.zfs.l2arc_feed_min_ms 200
vfs.zfs.l2arc_feed_secs 1
vfs.zfs.l2arc_headroom 2
vfs.zfs.l2arc_write_boost 8388608
vfs.zfs.l2arc_write_max 8388608
vfs.zfs.arc_meta_limit 13369344
vfs.zfs.arc_free_target 14047
vfs.zfs.compressed_arc_enabled 1
vfs.zfs.arc_shrink_shift 7
vfs.zfs.arc_average_blocksize 8192
vfs.zfs.arc_min 52428800
vfs.zfs.arc_max 53477376
------------------------------------------------------------------------
SSD performance testing completed @ Sat Jun 17 04:04:28 UTC 2017
-------------------------------------------------------------------------------
Hopefully the above proves useful in tracking down this issue.
--
Aaron
