From nobody Tue Apr 5 15:22:39 2022
d="scan'208,223";a="68075823" ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=AU5oOZ5lG3Nt1Gu8Btz+hyWzG5Ygs0xC+OgDAa+syN2L8OaxtopYDa4PKTxdtjZvVuL1R70/I52LR+Tk5HYNFeuFS0vftkzQhIAfiMGnpgpKoySLtfy6f8Ih7vyAlsrDK2w4eIk11M7oTm0nGWBPJdUxHFHhtDZMUhWOXi0TMxnFCKQzOwVU3l5ooAdwqSeRza2unafmiiu4/0u6sQmkXcLSoqLmal870CvnwSIL7ROyma4l+kS7PtCDdwW3hTNqv6B7XIxhclOqURRzVLJZXaaeeD25rcw2fBChOrF6QrTM29MTiNP3fEeoEG0uIruUCzBF/7bkUG7QULz2GOIL8Q== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=rU3+G0jbKkVpndS40hA5Tdupv0Ej9e1bVJaRMaxeHPo=; b=E/lC+ZAPqdo3RjDuacjz1AUy2t0i9/U/rEgklkgGFIrfIALXqHLjJyJ8QgafgfsvWa8w2PIeU7nBT1SfhdfvjFSTuJKyrRzu2LqloiJM42grPQvvL5I0nVVGVaSR2Vw0tHzASNgvtctYx8bfndzrpuOXAxuW8OW9QKChemH1LV6XGzVJ/tCWdHJOUTpC/JKE4dJ/Klce/n+Pou64JSCrh1bsMSb2H7euKzQ9E8S8aCCI/vJjuGbV40vV7jDgGYTn8hFfk36m2S8/NvH2PiBpi2X89MZ0ImBCEdoOlvfB4TNAXNUhmbJAIJ47LoKjdWNVdV2OI0qfmpoCBUjxAgxxSQ== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com; dkim=pass header.d=citrix.com; arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=rU3+G0jbKkVpndS40hA5Tdupv0Ej9e1bVJaRMaxeHPo=; b=LFOUGTOpwPF7g1VcdoT+9iwvtXsUB+5x3c2QBfgwDb1eLYbcrMwoiiEiIrA8moskp7fubcfdGHcSVOPpqnw6dQjcYUOQ+hoSedjKoZLks2bd/ARHecj3izRwPsZsveoWe9LytMIjMgK8R9CR0YIHYP95pvMZNyCV+AewMsqsINM= Date: Tue, 5 Apr 2022 17:22:39 +0200 From: Roger Pau =?utf-8?B?TW9ubsOp?= To: Ze Dupsys CC: , Subject: Re: ZFS + FreeBSD XEN dom0 panic Message-ID: References: <088c8222-063a-1db5-da83-a5a0168d66c6@gmail.com> <639f7ce0-8a07-884c-c1cf-8257b9f3d9e8@gmail.com> <4da2302b-0745-ea1d-c868-5a8a5fc66b18@gmail.com> <48b74c39-abb3-0a3e-91a8-b5ab1e1223ce@gmail.com> <22643831-70d3-5a3e-f973-fb80957e80dc@gmail.com> Content-Type: multipart/mixed; boundary="4oNrmEULt/shPi4w" Content-Disposition: inline Content-Transfer-Encoding: 8bit In-Reply-To: <22643831-70d3-5a3e-f973-fb80957e80dc@gmail.com> X-ClientProxiedBy: LO2P265CA0250.GBRP265.PROD.OUTLOOK.COM (2603:10a6:600:8a::22) To DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18) List-Id: Discussion List-Archive: https://lists.freebsd.org/archives/freebsd-xen List-Help: List-Post: List-Subscribe: List-Unsubscribe: Sender: owner-freebsd-xen@freebsd.org X-BeenThere: freebsd-xen@freebsd.org MIME-Version: 1.0 X-MS-PublicTrafficType: Email X-MS-Office365-Filtering-Correlation-Id: 2180caab-aa97-4aa7-800f-08da171821a9 X-MS-TrafficTypeDiagnostic: BYAPR03MB4103:EE_ X-Microsoft-Antispam-PRVS: X-MS-Exchange-SenderADCheck: 1 X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: BCL:0; X-Microsoft-Antispam-Message-Info: 

--4oNrmEULt/shPi4w
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit

On Sun, Apr 03, 2022 at 09:54:24AM +0300, Ze Dupsys wrote:
> On 2022.03.27. 12:13, Roger Pau Monné wrote:
> > ..
> > Thanks, unfortunately that patch was incomplete.
> > I have an updated version that I think is better now, and I've
> > lightly tested it (creating and destroying a domain with it doesn't
> > seem to crash). Appended patch at the end of the message.
>
> Hi,
>
> This patch was far better; I almost wanted to say that it works. I
> stressed a system with 2G RAM and it did not even show signs of
> sysctl-var leaks. There were too many things going on, thus I most
> probably will not be able to reproduce this case, but just before the
> panic I ran the "xl list" command and it instantly crashed (new
> trace). What I noticed after restart is that again some default
> nightly script had created /var/backup/* files, which made the root
> file system full. Since the test spanned 2 nights, on the first day
> when the root disk was full the system did not panic; I just rm'ed
> the biggest file in /var/backup to make sure there were no problems.
> When I ran "xl list" both test VMs were running; their state I do not
> know.
>
> Full serial log is in the attachment; part 2 is where the most
> interesting stuff is. The ending is a bit of a mess, but:
> ..
> (XEN) d284v0: upcall vector 93
> Apr 3 08:10:11 lab-01 xenstored[937]: TDB: expand_file to 229376 failed (No
> space left on devicex)bbd24: Error 5 w
> riting backend/vbApd/284/51760/sectors
> r 3 08:10:11 lab-01 kernel: xbbd24: Fapid 9tal error. T37 ransitioning to
> Closing St(xensate
> tored), uid 0 inumber 2003596 on /: filesystekernel trap m ful12 lw
> ith interrupts disableApd
>
> r
> Fatal trap 3 08: 110:11 2: page fault while in kerlab-nel mode
> 0cp1uid = 0 ; apicx id = 00
> enstofault virtual address = 0red[9x20
> 37fault co]: code = surpervisor read data, page not presentru
> inptistruction pointer = 0x20:0xffffffff80c94e80on
> destack pointer = 0x28:0xfffffe0051tected8803c0
> frame pobyinter = 0x28:0xfffffe00518803d0
> connecode segment = basce 0x0, limit 0xfffff, type 0x1b
> tion = DPL 0, pres 1 , long 1, def32 0, gran 1
> proces0: sor eflags = resumerre, IOPL = 0 No sp
> current process = 16 (xenwatch)
> traap number ce = 12
> panic: page fault
> cpuid = 0
> time = 1648962612
> KDB: stack backtrace:
> #0 0xffffffff80c7c285 at kdb_backtrace+0x65
> #1 0xffffffff80c2e2e1 at vpanic+0x181
> #2 0xffffffff80c2e153 at panic+0x43
> #3 0xffffffff810c8b97 at trap+0xba7
> #4 0xffffffff810c8bef at trap+0xbff
> #5 0xffffffff810c8243 at trap+0x253
> #6 0xffffffff810a0848 at calltrap+0x8
> #7 0xffffffff80c0b87a at __mtx_unlock_sleep+0x7a
> #8 0xffffffff80a98724 at xbd_instance_create+0x7aa4
> #9 0xffffffff80a9abb0 at xbd_instance_create+0x9f30
> #10 0xffffffff80f95c64 at xenbusb_localend_changed+0x7c4
> #11 0xffffffff80ab0f04 at xs_unlock+0x704
> #12 0xffffffff80beaeee at fork_exit+0x7e
> #13 0xffffffff810a18be at fork_trampoline+0xe
> Uptime: 1d10h56m34s

Thanks, and sorry for the late reply; somehow the message slipped
through.

I've been able to get the file:line for those addresses, and the trace
is kind of weird; I'm not sure I know what's going on, TBH. It seems to
me the backend instance got freed while it was still in the process of
connecting.

I've made some changes that might mitigate this, but not having a clear
understanding of what's going on makes this harder. I've pushed the
changes to:

http://xenbits.xen.org/gitweb/?p=people/royger/freebsd.git;a=shortlog;h=refs/heads/for-leak

(This is on top of the main branch.) I'm also attaching the two patches
to this email. Let me know if they make a difference in stabilizing the
system.

Thanks, Roger.
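
PS: in case a compact reference is useful while reviewing, the 'state'
update in the first attached patch follows the usual xenstore
transaction-retry idiom. Below is a rough, self-contained sketch of
that idiom only (not the literal patch; the function name and
standalone form are made up for illustration):

/*
 * Sketch of a transaction-based 'state' write, mirroring what
 * 0001-xenbus-improve-device-tracking.patch does in
 * xenbusb_write_ivar().  Assumes a FreeBSD kernel context; "node"
 * stands in for the device's xenstore path (ivars->xd_node).
 */
#include <sys/param.h>
#include <xen/xenstore/xenstorevar.h>

static int
write_state_in_transaction(const char *node, int newstate)
{
	struct xs_transaction xst;
	int currstate, error;

	do {
		error = xs_transaction_start(&xst);
		if (error != 0)
			return (error);

		/*
		 * Read and write 'state' inside the same transaction: a
		 * concurrent removal of the node by the toolstack makes
		 * the read or the final commit fail, instead of racing
		 * with the write.
		 */
		error = xs_scanf(xst, node, "state", NULL, "%d", &currstate);
		if (error == 0)
			error = xs_printf(xst, node, "state", "%d", newstate);
		if (error != 0) {
			xs_transaction_end(xst, 1);	/* abort */
			return (error);
		}
		/* If the commit fails (e.g. the transaction raced), retry. */
	} while (xs_transaction_end(xst, 0));

	return (0);
}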

--4oNrmEULt/shPi4w
Content-Type: text/plain; charset=utf-8
Content-Disposition: attachment; filename="0001-xenbus-improve-device-tracking.patch"

>From 4cf5c9300bf8f9517b9b1acafcc95657dafd99de Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
Date: Mon, 21 Mar 2022 12:47:20 +0100
Subject: [PATCH 1/2] xenbus: improve device tracking

xenbus needs to keep track of the devices exposed on xenstore, so that
it can trigger frontend and backend device creation.

Removal of backend devices is currently detected by checking the
existence of the device (backend) xenstore directory, but that's prone
to races, as the device driver would usually add entries to such
directory itself, so under certain circumstances it's possible for a
driver to add a node to the directory after the toolstack has removed
it.  This leads to devices not being removed, which can eventually
exhaust the memory of FreeBSD.

Fix this by checking for the existence of the 'state' node instead of
the directory, as such node will always be present when a device is
active, and will be removed by the toolstack when the device is shut
down.

In order to avoid any races between FreeBSD updating the 'state' node
and the toolstack removing it, use a transaction in
xenbusb_write_ivar() for that purpose.

Reported by: Ze Dupsys
Sponsored by: Citrix Systems R&D
---
 sys/xen/xenbus/xenbusb.c | 55 +++++++++++++++++++++++-----------------
 1 file changed, 32 insertions(+), 23 deletions(-)

diff --git a/sys/xen/xenbus/xenbusb.c b/sys/xen/xenbus/xenbusb.c
index e026f8203ea1..b038b63bd289 100644
--- a/sys/xen/xenbus/xenbusb.c
+++ b/sys/xen/xenbus/xenbusb.c
@@ -254,7 +254,7 @@ xenbusb_delete_child(device_t dev, device_t child)
 static void
 xenbusb_verify_device(device_t dev, device_t child)
 {
-	if (xs_exists(XST_NIL, xenbus_get_node(child), "") == 0) {
+	if (xs_exists(XST_NIL, xenbus_get_node(child), "state") == 0) {
 		/*
 		 * Device tree has been removed from Xenbus.
 		 * Tear down the device.
@@ -907,6 +907,7 @@ xenbusb_write_ivar(device_t dev, device_t child, int index, uintptr_t value)
 	case XENBUS_IVAR_STATE:
 	{
 		int error;
+		struct xs_transaction xst;
 
 		newstate = (enum xenbus_state)value;
 		sx_xlock(&ivars->xd_lock);
@@ -915,31 +916,37 @@ xenbusb_write_ivar(device_t dev, device_t child, int index, uintptr_t value)
 			goto out;
 		}
 
-		error = xs_scanf(XST_NIL, ivars->xd_node, "state",
-		    NULL, "%d", &currstate);
-		if (error)
-			goto out;
-
-		do {
-			error = xs_printf(XST_NIL, ivars->xd_node, "state",
-			    "%d", newstate);
-		} while (error == EAGAIN);
-		if (error) {
-			/*
-			 * Avoid looping through xenbus_dev_fatal()
-			 * which calls xenbus_write_ivar to set the
-			 * state to closing.
-			 */
-			if (newstate != XenbusStateClosing)
-				xenbus_dev_fatal(dev, error,
-				    "writing new state");
-			goto out;
-		}
+		do {
+			error = xs_transaction_start(&xst);
+			if (error != 0)
+				goto out;
+
+			error = xs_scanf(xst, ivars->xd_node, "state", NULL,
+			    "%d", &currstate);
+			if (error)
+				goto out;
+
+			do {
+				error = xs_printf(xst, ivars->xd_node, "state",
+				    "%d", newstate);
+			} while (error == EAGAIN);
+			if (error) {
+				/*
+				 * Avoid looping through xenbus_dev_fatal()
+				 * which calls xenbus_write_ivar to set the
+				 * state to closing.
+				 */
+				if (newstate != XenbusStateClosing)
+					xenbus_dev_fatal(dev, error,
+					    "writing new state");
+				goto out;
+			}
+		} while (xs_transaction_end(xst, 0));
 		ivars->xd_state = newstate;
 
-		if ((ivars->xd_flags & XDF_CONNECTING) != 0
-		    && (newstate == XenbusStateClosed
-		    || newstate == XenbusStateConnected)) {
+		if ((ivars->xd_flags & XDF_CONNECTING) != 0 &&
+		    (newstate == XenbusStateClosed ||
+		     newstate == XenbusStateConnected)) {
 			struct xenbusb_softc *xbs;
 
 			ivars->xd_flags &= ~XDF_CONNECTING;
@@ -949,6 +956,8 @@ xenbusb_write_ivar(device_t dev, device_t child, int index, uintptr_t value)
 		wakeup(&ivars->xd_state);
 
 out:
+		if (error != 0)
+			xs_transaction_end(xst, 1);
 		sx_xunlock(&ivars->xd_lock);
 		return (error);
 	}
-- 
2.35.1


--4oNrmEULt/shPi4w
Content-Type: text/plain; charset=utf-8
Content-Disposition: attachment; filename="0002-xen-blkback-fix-tear-down-issues.patch"

>From 1525f8ea0b35edf6df7633ac217c32711583caec Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
Date: Sun, 27 Mar 2022 10:43:42 +0200
Subject: [PATCH 2/2] xen/blkback: fix tear-down issues

Handle tearing down a blkback instance that hasn't been fully
initialized.  This requires carefully checking that fields are
allocated before trying to access them.

Also, communication memory is allocated before setting
XBBF_RING_CONNECTED, so gating its freeing on XBBF_RING_CONNECTED being
set is wrong and will lead to memory leaks.

Also stop using xbb_disconnect() in error paths.  Use
xenbus_dev_fatal() and let the normal disconnection procedure take care
of the cleanup.

Reported by: Ze Dupsys
Sponsored by: Citrix Systems R&D
---
 sys/dev/xen/blkback/blkback.c | 63 +++++++++++++++++------------------
 1 file changed, 30 insertions(+), 33 deletions(-)

diff --git a/sys/dev/xen/blkback/blkback.c b/sys/dev/xen/blkback/blkback.c
index 33414295bf5e..15e4bbe78fc0 100644
--- a/sys/dev/xen/blkback/blkback.c
+++ b/sys/dev/xen/blkback/blkback.c
@@ -2774,19 +2774,12 @@ xbb_free_communication_mem(struct xbb_softc *xbb)
 static int
 xbb_disconnect(struct xbb_softc *xbb)
 {
-	struct gnttab_unmap_grant_ref ops[XBB_MAX_RING_PAGES];
-	struct gnttab_unmap_grant_ref *op;
-	u_int ring_idx;
-	int error;
-
 	DPRINTF("\n");
 
-	if ((xbb->flags & XBBF_RING_CONNECTED) == 0)
-		return (0);
-
 	mtx_unlock(&xbb->lock);
 	xen_intr_unbind(&xbb->xen_intr_handle);
-	taskqueue_drain(xbb->io_taskqueue, &xbb->io_task);
+	if (xbb->io_taskqueue != NULL)
+		taskqueue_drain(xbb->io_taskqueue, &xbb->io_task);
 	mtx_lock(&xbb->lock);
 
 	/*
@@ -2796,19 +2789,28 @@ xbb_disconnect(struct xbb_softc *xbb)
 	if (xbb->active_request_count != 0)
 		return (EAGAIN);
 
-	for (ring_idx = 0, op = ops;
-	     ring_idx < xbb->ring_config.ring_pages;
-	     ring_idx++, op++) {
-		op->host_addr = xbb->ring_config.gnt_addr
-		              + (ring_idx * PAGE_SIZE);
-		op->dev_bus_addr = xbb->ring_config.bus_addr[ring_idx];
-		op->handle = xbb->ring_config.handle[ring_idx];
-	}
+	if (xbb->flags & XBBF_RING_CONNECTED) {
+		struct gnttab_unmap_grant_ref ops[XBB_MAX_RING_PAGES];
+		struct gnttab_unmap_grant_ref *op;
+		unsigned int ring_idx;
+		int error;
+
+		for (ring_idx = 0, op = ops;
+		     ring_idx < xbb->ring_config.ring_pages;
+		     ring_idx++, op++) {
+			op->host_addr = xbb->ring_config.gnt_addr
+			              + (ring_idx * PAGE_SIZE);
+			op->dev_bus_addr = xbb->ring_config.bus_addr[ring_idx];
+			op->handle = xbb->ring_config.handle[ring_idx];
+		}
 
-	error = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, ops,
-	    xbb->ring_config.ring_pages);
-	if (error != 0)
-		panic("Grant table op failed (%d)", error);
+		error = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, ops,
+		    xbb->ring_config.ring_pages);
+		if (error != 0)
+			panic("Grant table op failed (%d)", error);
+
+		xbb->flags &= ~XBBF_RING_CONNECTED;
+	}
 
 	xbb_free_communication_mem(xbb);
 
@@ -2839,7 +2841,6 @@ xbb_disconnect(struct xbb_softc *xbb)
 		xbb->request_lists = NULL;
 	}
 
-	xbb->flags &= ~XBBF_RING_CONNECTED;
 	return (0);
 }
 
@@ -2963,7 +2964,6 @@ xbb_connect_ring(struct xbb_softc *xbb)
 				  INTR_TYPE_BIO | INTR_MPSAFE,
 				  &xbb->xen_intr_handle);
 	if (error) {
-		(void)xbb_disconnect(xbb);
 		xenbus_dev_fatal(xbb->dev, error, "binding event channel");
 		return (error);
 	}
@@ -3338,6 +3338,13 @@ xbb_connect(struct xbb_softc *xbb)
 		return;
 	}
 
+	error = xbb_publish_backend_info(xbb);
+	if (error != 0) {
+		xenbus_dev_fatal(xbb->dev, error,
+		    "Unable to publish device information");
+		return;
+	}
+
 	error = xbb_alloc_requests(xbb);
 	if (error != 0) {
 		/* Specific errors are reported by xbb_alloc_requests(). */
@@ -3359,16 +3366,6 @@ xbb_connect(struct xbb_softc *xbb)
 		return;
 	}
 
-	if (xbb_publish_backend_info(xbb) != 0) {
-		/*
-		 * If we can't publish our data, we cannot participate
-		 * in this connection, and waiting for a front-end state
-		 * change will not help the situation.
-		 */
-		(void)xbb_disconnect(xbb);
-		return;
-	}
-
 	/* Ready for I/O. */
 	xenbus_set_state(xbb->dev, XenbusStateConnected);
 }
-- 
2.35.1


--4oNrmEULt/shPi4w--