From owner-freebsd-xen@freebsd.org Fri Feb 15 07:22:52 2019
Subject: Re: Issues with XEN and ZFS
To: "Rodney W. Grimes", Roger Pau Monné
Cc: freebsd-xen@freebsd.org
From: Eric Bautsch <eric.bautsch@pobox.com>
Message-ID: <234bf1db-b9e9-f30d-f966-5b4b6973fee7@pobox.com>
In-Reply-To: <201902111543.x1BFhODs071427@pdx.rh.CN85.dnsmgr.net>
Date: Fri, 15 Feb 2019 07:22:24 +0000
Thanks all for your help and my apologies for the late reply, I was out on a
long weekend and then on customer site until Wednesday night....

Comments/answers inline.

Thanks again.
Eric

On 11/02/2019 15:43, Rodney W. Grimes wrote:
>> Thanks for the testing!
>>
>> On Fri, Feb 08, 2019 at 07:35:04PM +0000, Eric Bautsch wrote:
>>> Hi.
>>>
>>> Brief abstract: I'm having ZFS/Xen interaction issues with the disks being
>>> declared unusable by the dom0.
>>>
>>> The longer bit:
>>>
>>> I'm new to FreeBSD, so my apologies for all the stupid questions. I'm trying
>>> to migrate from Linux as my virtual platform host (very bad experiences with
>>> stability, let's leave it at that).
>>> I'm hosting mostly Solaris VMs (that being my choice of OS, but again,
>>> Betamax/VHS, need I say more), as well as a Windows VM (because I have to)
>>> and a Linux VM (as a future desktop via thin clients, as and when I have to
>>> retire my SunRay solution, which also runs on a VM, for lack of
>>> functionality).
>>>
>>> So, I got xen working on FreeBSD now after my newbie mistake was pointed
>>> out to me.
>>>
>>> However, I seem to be stuck again:
>>>
>>> I have, in this initial test server, only two disks. They are SATA, hanging
>>> off the on-board SATA controller. The system is one of those Shuttle XPC
>>> cubes, an older one I had hanging around, with 16GB memory and I think 4
>>> cores.
>>>
>>> I've given the dom0 2GB of memory and 2 cores to start with.
>> 2GB might be too low when using ZFS; I would suggest 4G as a minimum
>> for reasonable performance when using ZFS, maybe even 8G. ZFS is quite
>> memory hungry.
> 2GB should not be too low; I comfortably run ZFS in 1G. ZFS is a
> "free memory hog": by design it uses all the memory it can. Unfortunately,
> the "free" aspect is often overlooked and it does not return memory when
> it should, leading to OOM kills; those are bugs and need to be fixed.
>
> If you are going to run ZFS at all, I do strongly suggest overriding
> the ARC memory size with vfs.zfs.arc_max= in /boot/loader.conf to be
> something more reasonable than the default 95% of host memory.

On my machines, I tend to limit it to 2GB where there's plenty of memory about.
As this box only has 2GB, I didn't bother, but thanks for letting me know where
and how to do it, as I will need to know at some point... ;-)

> For a DOM0 I would start at 50% of memory (so 1G in this case) and
> monitor the DOM0 internally with top, slowly increasing this limit
> until the free memory drops to the 256MB region. If the workload
> on DOM0 changes dramatically, you may need to readjust.

>>> The root filesystem is zfs with a mirror between the two disks.
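For the archives, the vfs.zfs.arc_max override Rodney describes above goes in
/boot/loader.conf and takes effect on the next boot. A minimal sketch (the
1024M figure is just his 50%-of-a-2GB-dom0 starting point, not a tested
recommendation):

```shell
# /boot/loader.conf -- cap the ZFS ARC so dom0 keeps some free memory
# (illustrative value: ~50% of a 2GB dom0, per the advice above)
vfs.zfs.arc_max="1024M"
```

The active cap can then be read back at runtime with `sysctl vfs.zfs.arc_max`
(reported in bytes), and the actual ARC usage watched in top's ARC line.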
>>> The entire thing is dead easy to blow away and re-install: I was very
>>> impressed by how easy the FreeBSD automatic installer was to understand and
>>> pick up, so I have it all scripted. If I need to blow stuff away to test, no
>>> problem, and I can always get back to a known configuration.
>>>
>>> As I only have two disks, I have created a zfs volume for the Xen domU thus:
>>>
>>> zfs create -V40G -o volmode=dev zroot/nereid0
>>>
>>> The domU nereid is defined thus:
>>>
>>> cat - << EOI > /export/vm/nereid.cfg
>>> builder = "hvm"
>>> name = "nereid"
>>> memory = 2048
>>> vcpus = 1
>>> vif = [ 'mac=00:16:3E:11:11:51,bridge=bridge0',
>>>         'mac=00:16:3E:11:11:52,bridge=bridge1',
>>>         'mac=00:16:3E:11:11:53,bridge=bridge2' ]
>>> disk = [ '/dev/zvol/zroot/nereid0,raw,hda,rw' ]
>>> vnc = 1
>>> vnclisten = "0.0.0.0"
>>> serial = "pty"
>>> EOI
>>>
>>> nereid itself also auto-installs; it's a Solaris 11.3 instance.
>>>
>>> As it tries to install, I get this in the dom0:
>>>
>>> Feb  8 18:57:16 bianca.swangage.co.uk kernel: (ada1:ahcich1:0:0:0):
>>> WRITE_FPDMA_QUEUED. ACB: 61 18 a0 ef 88 40 46 00 00 00 00 00
>>> Feb  8 18:57:16 bianca.swangage.co.uk last message repeated 4 times
>>> Feb  8 18:57:16 bianca.swangage.co.uk kernel: (ada1:ahcich1:0:0:0): CAM
>>> status: CCB request was invalid
>> That's weird, and I would say it's not related to ZFS; the same could
>> likely happen with UFS, since this is an error message from the
>> disk controller hardware.
> CCB invalid, that's not good: we sent a command to the drive/controller that
> it does not like.
> This drive may need to be quirked in some way, or there may be
> some hardware issue here of some kind.

Should I have pointed out that these two disks are both identical and not SSDs?
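As an aside, for anyone reproducing the zvol setup quoted above, the creation
step plus a couple of sanity checks might look like this (a sketch using the
`zroot/nereid0` name from the config; `volsize` and `volmode` are standard ZFS
properties):

```shell
# Create the 40G volume exposed only as a raw device node
# (volmode=dev keeps GEOM from tasting/partitioning the zvol)
zfs create -V40G -o volmode=dev zroot/nereid0

# The device node the Xen disk= line points at should now exist
ls -l /dev/zvol/zroot/nereid0

# Confirm the size and volmode took effect
zfs get volsize,volmode zroot/nereid0
```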
Geom name: ada0
Providers:
1. Name: ada0
   Mediasize: 1000204886016 (932G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r2w2e3
   descr: ST1000LM035-1RK172
   lunid: 5000c5009d4d4c12
   ident: WDE0R5LL
   rotationrate: 5400
   fwsectors: 63
   fwheads: 16

>> Can you test whether the same happens _without_ Xen running?
>>
>> Ie: booting FreeBSD without Xen and then doing some kind of disk
>> stress test, like fio [0].

I've just run fio thus (sorry, not used it before; this seemed like a
reasonable set of options, but tell me if there's a better set):

fio --name=randwrite --iodepth=4 --rw=randwrite --bs=4k --direct=0 \
    --size=512M --numjobs=10 --runtime=1200 --group_reporting

Leading to this output when I stopped it:

randwrite: (groupid=0, jobs=10): err= 0: pid=68148: Thu Feb 14 09:50:08 2019
  write: IOPS=926, BW=3705KiB/s (3794kB/s)(2400MiB/663425msec)
    clat (usec): min=10, max=4146.6k, avg=9558.71, stdev=94020.98
     lat (usec): min=10, max=4146.6k, avg=9558.97, stdev=94020.98
    clat percentiles (usec):
     |  1.00th=[     47],  5.00th=[     52], 10.00th=[    100],
     | 20.00th=[    133], 30.00th=[    161], 40.00th=[    174],
     | 50.00th=[    180], 60.00th=[    204], 70.00th=[    249],
     | 80.00th=[    367], 90.00th=[   2008], 95.00th=[  10552],
     | 99.00th=[ 160433], 99.50th=[ 566232], 99.90th=[1367344],
     | 99.95th=[2055209], 99.99th=[2868904]
   bw (  KiB/s): min=    7, max=16383, per=16.36%, avg=606.11, stdev=1379.59,
     samples=7795
   iops        : min=    1, max= 4095, avg=151.06, stdev=344.94, samples=7795
  lat (usec)   : 20=0.51%, 50=2.53%, 100=6.88%, 250=60.31%, 500=12.97%
  lat (usec)   : 750=2.16%, 1000=1.64%
  lat (msec)   : 2=2.98%, 4=2.65%, 10=2.27%, 20=1.16%, 50=1.58%
  lat (msec)   : 100=0.95%, 250=0.63%, 500=0.22%, 750=0.17%, 1000=0.16%
  cpu          : usr=0.04%, sys=0.63%, ctx=660907, majf=1, minf=10
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,614484,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=4

Run status group 0 (all jobs):
  WRITE: bw=3705KiB/s (3794kB/s), 3705KiB/s-3705KiB/s (3794kB/s-3794kB/s),
    io=2400MiB (2517MB), run=663425-663425msec

I didn't manage to produce any errors in the log files...

Just to be on the safe side, I have changed the dom0 memory to 4GB and limited
the ZFS ARC to 1GB thus:

xen_cmdline="dom0_mem=4092M dom0_max_vcpus=2 dom0=pvh console=com1,vga com1=115200,8n1 guest_loglvl=all loglvl=all"
vfs.zfs.arc_max="1024M"

I've now re-created one of my domUs and I have not experienced any issues at
all this time. Of course, I now don't know if it was the limiting of the ZFS
ARC, the increase in memory, or both together that fixed it.

I will attempt further tests and update the list....

Thanks again.

Eric
-- 
Eric A. Bautsch
email: eric.bautsch@pobox.com