From: Karl Denninger <karl@denninger.net>
To: freebsd-stable@freebsd.org
Subject: Re: Concern: ZFS Mirror issues (12.STABLE and firmware 19 .v. 20)
Date: Sat, 20 Apr 2019 16:26:01 -0500

No; I can, but of course that's another ~8 hour (overnight) delay between
swaps.  That's not a bad idea, however....

On 4/20/2019 15:56, Steven Hartland wrote:
> Thanks for the extra info; the next question would be: have you ruled
> out that the corruption exists before the disk is removed?
>
> It would be interesting to add a zpool scrub to confirm this isn't the
> case before the disk removal is attempted.
>
>     Regards
>     Steve
>
> On 20/04/2019 18:35, Karl Denninger wrote:
>>
>> On 4/20/2019 10:50, Steven Hartland wrote:
>>> Have you eliminated geli as a possible source?
>> No; I could conceivably do so by re-creating another backup volume
>> set without geli-encrypting the drives, but I do not have an extra
>> set of drives of the capacity required lying around to do that.  I
>> would have to do it with lower-capacity disks, which I can attempt
>> if you think it would help.  I *do* have open slots in the drive
>> backplane to set up a second "test" unit of this sort.  For reasons
>> below it will take at least a couple of weeks to get good data on
>> whether the problem exists without geli, however.
>>>
>>> I've just set up an old server which has an LSI 2008 running old
>>> FW (11.0), so I was going to have a go at reproducing this.
>>>
>>> Apart from the disconnect steps below, is there anything else
>>> needed, e.g. a read/write workload during disconnect?
>>
>> Yes.  An attempt to recreate this on my sandbox machine using smaller
>> disks (WD RE-320s) and a decent amount of read/write activity (tens
>> to ~100 gigabytes) on a root mirror of three disks with one taken
>> offline did not succeed.  It *reliably* appears, however, on my
>> backup volumes with every drive swap.  The sandbox machine is
>> physically identical other than the physical disks; both are Xeons
>> with ECC RAM in them.
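
For reference, the sandbox attempt amounted to roughly the following,
shown here against a scratch pool rather than the sandbox's (geli-backed)
root mirror -- device names are placeholders:

    # create a scratch three-way test mirror (da1/da2/da3 are placeholders)
    zpool create test mirror da1 da2 da3

    # generate a few tens of gigabytes of write activity
    dd if=/dev/random of=/test/fill1 bs=1m count=50000

    # drop one leg, keep writing, then bring it back and let it resilver
    zpool offline test da3
    dd if=/dev/random of=/test/fill2 bs=1m count=20000
    zpool online test da3

    # once the resilver completes, scrub and check the CKSUM column
    zpool scrub test
    zpool status -v test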
>>
>> The only operational difference is that the backup volume sets have a
>> *lot* of data written to them via zfs send | zfs recv over the
>> intervening period, whereas with "ordinary" I/O activity (which was
>> the case on my sandbox) the I/O pattern is materially different.  The
>> root pool on the sandbox where I tried to reproduce it synthetically
>> *is* using geli (in fact it boots native-encrypted.)
>>
>> The "ordinary" resilver on a disk swap typically covers ~2-3Tb and is
>> a ~6-8 hour process.
>>
>> The usual process for the backup pool looks like this:
>>
>> Have 2 of the 3 physical disks mounted; the third is in the bank vault.
>>
>> Over the space of a week, the backup script is run daily.  It first
>> imports the pool, and then for each zfs filesystem it is backing up
>> (which is not all of them; I have a few volatile ones that I don't
>> care if I lose, such as object directories for builds and such, plus
>> some R/O data sets that are backed up separately) it does:
>>
>> If there is no "...@zfs-base": zfs snapshot -r ...@zfs-base; zfs send
>> -R ...@zfs-base | zfs receive -Fuvd $BACKUP
>>
>> else
>>
>> zfs rename -r ...@zfs-base ...@zfs-old
>> zfs snapshot -r ...@zfs-base
>>
>> zfs send -RI ...@zfs-old ...@zfs-base | zfs recv -Fudv $BACKUP
>>
>> .... if ok then zfs destroy -vr ...@zfs-old, otherwise print a
>> complaint and stop.
>>
>> When all are complete it then does a "zpool export backup" to detach
>> the pool, in order to reduce the risk of "stupid root user" (me)
>> accidents.
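
Condensed into a script, that per-filesystem logic is roughly the
following (a sketch; $BACKUP and the dataset names are placeholders, and
the real script also applies the exclusions noted above):

    #!/bin/sh
    # Minimal sketch of the daily backup pass described above.
    BACKUP=backup
    FILESYSTEMS="zsr/home zsr/data"      # placeholder dataset list

    zpool import $BACKUP || exit 1

    for fs in $FILESYSTEMS; do
        if ! zfs list -t snapshot "$fs@zfs-base" >/dev/null 2>&1; then
            # first pass for this filesystem: full replication stream
            zfs snapshot -r "$fs@zfs-base"
            zfs send -R "$fs@zfs-base" | zfs receive -Fuvd $BACKUP
        else
            # subsequent passes: incremental from the previous base
            zfs rename -r "$fs@zfs-base" "$fs@zfs-old"
            zfs snapshot -r "$fs@zfs-base"
            if zfs send -RI "$fs@zfs-old" "$fs@zfs-base" | zfs receive -Fudv $BACKUP; then
                zfs destroy -vr "$fs@zfs-old"
            else
                echo "incremental send of $fs failed" >&2
                exit 1
            fi
        fi
    done

    zpool export $BACKUP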
>>
>> In short, I send an incremental of the changes since the last backup,
>> which in many cases includes a bunch of automatic snapshots that are
>> taken on a frequent basis out of cron.  Typically there are a week's
>> worth of these that accumulate between swaps of the disk to the
>> vault, and the offlined disk remains that way for a week.  I also
>> wait for the zfs destroy on each of the targets to drain before
>> continuing, as not doing so back in the 9 and 10.x days was a good
>> way to stimulate an instant panic on re-import the next day due to
>> kernel stack page exhaustion if the previous operation destroyed
>> hundreds of gigabytes of snapshots (which does routinely happen, as
>> part of the backed-up data is Macrium images from PCs, so when a new
>> month comes around the PCs' backup routine removes a huge amount of
>> old data from the filesystem.)
>>
>> Trying to simulate the checksum errors in a few hours' time has thus
>> far failed.  But every time I swap the disks on a weekly basis I get
>> a handful of checksum errors on the scrub.  If I export and re-import
>> the backup mirror after that, the counters are zeroed -- the checksum
>> error count does *not* remain across an export/import cycle, although
>> the "scrub repaired" line remains.
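
One way to demonstrate (and keep a record of) that is to capture the
status output on both sides of an export/import cycle -- a sketch, with
a placeholder path for the log files:

    # capture the counters before the export zeroes them
    zpool status -v backup > /root/backup-zfs/status.before
    zpool export backup
    zpool import backup
    zpool status -v backup > /root/backup-zfs/status.after

    # the CKSUM column differs; the "scrub repaired" scan line does not
    diff /root/backup-zfs/status.before /root/backup-zfs/status.after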
>>
>> For example, after the scrub completed this morning I exported the
>> pool (the script expects the pool exported before it begins) and ran
>> the backup.  When it was complete:
>>
>> root@NewFS:~/backup-zfs # zpool status backup
>>   pool: backup
>>  state: DEGRADED
>> status: One or more devices has been taken offline by the administrator.
>>         Sufficient replicas exist for the pool to continue
>>         functioning in a degraded state.
>> action: Online the device using 'zpool online' or replace the device with
>>         'zpool replace'.
>>   scan: scrub repaired 188K in 0 days 09:40:18 with 0 errors on Sat
>>         Apr 20 08:45:09 2019
>> config:
>>
>>        NAME                      STATE     READ WRITE CKSUM
>>        backup                    DEGRADED     0     0     0
>>          mirror-0                DEGRADED     0     0     0
>>            gpt/backup61.eli      ONLINE       0     0     0
>>            gpt/backup62-1.eli    ONLINE       0     0     0
>>            13282812295755460479  OFFLINE      0     0     0  was /dev/gpt/backup62-2.eli
>>
>> errors: No known data errors
>>
>> It knows it fixed the checksums, but the error count is zero -- and I
>> did NOT run "zpool clear".
>>
>> This may have been present in 11.2; I didn't run that long enough in
>> this environment to know.  It definitely was *not* present in 11.1
>> and before; the same data structure and script for backups has been
>> in use for a very long time without any changes, and this first
>> appeared when I upgraded from 11.1 to 12.0 on this specific machine,
>> with the exact same physical disks having been in use for over a year
>> (they're currently 6Tb units; the last change-out was ~1.5 years ago
>> when I went from 4Tb to 6Tb volumes.)  I have both HGST-NAS and
>> He-Enterprise disks in the rotation and both show identical behavior,
>> so it doesn't appear to be related to a firmware problem in one disk
>> vs. the other (e.g. firmware that fails to flush the on-drive cache
>> before going to standby even though it was told to.)
>>
>>> mps0: port 0xe000-0xe0ff mem
>>> 0xfaf3c000-0xfaf3ffff,0xfaf40000-0xfaf7ffff irq 26 at device 0.0 on
>>> pci3
>>> mps0: Firmware: 11.00.00.00, Driver: 21.02.00.00-fbsd
>>> mps0: IOCCapabilities: 185c
>>>
>>>     Regards
>>>     Steve
>>>
>>> On 20/04/2019 15:39, Karl Denninger wrote:
>>>> I can confirm that 20.00.07.00 does *not* stop this.
>>>> The previous write/scrub on this device was on 20.00.07.00.  It was
>>>> swapped back in from the vault yesterday, resilvered without
>>>> incident, but a scrub says....
>>>>
>>>> root@NewFS:/home/karl # zpool status backup
>>>>   pool: backup
>>>>  state: DEGRADED
>>>> status: One or more devices has experienced an unrecoverable error.  An
>>>>         attempt was made to correct the error.  Applications are
>>>>         unaffected.
>>>> action: Determine if the device needs to be replaced, and clear the
>>>>         errors using 'zpool clear' or replace the device with 'zpool
>>>>         replace'.
>>>>    see: http://illumos.org/msg/ZFS-8000-9P
>>>>   scan: scrub repaired 188K in 0 days 09:40:18 with 0 errors on Sat
>>>>         Apr 20 08:45:09 2019
>>>> config:
>>>>
>>>>        NAME                      STATE     READ WRITE CKSUM
>>>>        backup                    DEGRADED     0     0     0
>>>>          mirror-0                DEGRADED     0     0     0
>>>>            gpt/backup61.eli      ONLINE       0     0     0
>>>>            gpt/backup62-1.eli    ONLINE       0     0    47
>>>>            13282812295755460479  OFFLINE      0     0     0  was /dev/gpt/backup62-2.eli
>>>>
>>>> errors: No known data errors
>>>>
>>>> So this is firmware-invariant (at least between 19.00.00.00 and
>>>> 20.00.07.00); the issue persists.
>>>>
>>>> Again, in my instance these devices are never removed "unsolicited",
>>>> so there can't be (or at least shouldn't be able to be) unflushed
>>>> data in the device or kernel cache.  The procedure is and remains:
>>>>
>>>> zpool offline .....
>>>> geli detach .....
>>>> camcontrol standby ...
>>>>
>>>> Wait a few seconds for the spindle to spin down.
>>>>
>>>> Remove disk.
>>>>
>>>> Then of course on the other side, after insertion and the kernel has
>>>> reported "finding" the device:
>>>>
>>>> geli attach ...
>>>> zpool online ....
>>>>
>>>> Wait...
>>>>
>>>> If this is a boogered TXG that's held in the metadata for the
>>>> "offline"'d device (maybe "off by one"?), that's potentially bad in
>>>> that if there is an unknown failure in the other mirror component,
>>>> the resilver will complete but data has been irrevocably destroyed.
>>>>
>>>> Granted, this is a very low-probability scenario (the area where the
>>>> bad checksums are has to be where the corruption hits, and it has to
>>>> happen between the resilver and access to that data.)  Those are
>>>> long odds, but nonetheless a window of "you're hosed" does appear
>>>> to exist.
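
Folding Steve's suggested scrub into that swap procedure would look
roughly like this -- a sketch only, with the gpt labels and the
camcontrol target as placeholders for whichever unit is being rotated
out:

    # before pulling the outgoing disk, verify the pool is clean
    zpool scrub backup
    # ...wait for the scrub to finish, then confirm the CKSUM column is zero
    zpool status backup

    # take the outgoing leg out of service
    zpool offline backup gpt/backup62-1.eli
    geli detach gpt/backup62-1.eli
    camcontrol standby da6          # da6 is a placeholder for the right unit

    # wait a few seconds for the spindle to stop, then pull the disk

    # after inserting the incoming disk and the kernel reports it:
    geli attach gpt/backup62-2
    zpool online backup gpt/backup62-2.eli

    # let the resilver complete, then scrub again and check for checksum errors
    zpool scrub backup
    zpool status backup

That front-loads the ~8-hour scrub I mentioned at the top, but it would
establish whether the checksum errors exist before the disk ever leaves
the chassis.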

>> -- 
>> Karl Denninger
>> karl@denninger.net
>> /The Market Ticker/
>> /[S/MIME encrypted email preferred]/
>
> _______________________________________________
> freebsd-stable@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-stable
> To unsubscribe, send any mail to "freebsd-stable-unsubscribe@freebsd.org"

-- 
Karl Denninger
karl@denninger.net
/The Market Ticker/
/[S/MIME encrypted email preferred]/