From: Ruslan Makhmatkhanov <rm@FreeBSD.org>
To: freebsd-stable@freebsd.org
Date: Mon, 24 Mar 2025 16:29:09 +0300
Subject: zfs panic upon file removal
Message-ID: <58ded6c6-e47f-424e-8204-830e1558829a@FreeBSD.org>

Hello,

tonight the server rebooted by itself.
zpool status showed:

  $ sudo zpool status -xv
    pool: system
   state: ONLINE
  status: One or more devices has experienced an error resulting in data
          corruption. Applications may be affected.
  action: Restore the file in question if possible. Otherwise restore the
          entire pool from backup.
     see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
  config:

          NAME        STATE     READ WRITE CKSUM
          system      ONLINE       0     0     0
            mirror-0  ONLINE       0     0     0
              ada0p3  ONLINE       0     0     0
              ada1p3  ONLINE       0     0     0

  errors: Permanent errors have been detected in the following files:

          //var/db/mysql/xxx/xxx.ibd

Every attempt to remove the file manually makes the system instantly
panic with:

  panic: Solaris(panic): zfs: adding existent segment to range tree
  (offset=address size=9000)

Please see the attached screenshot; it's low quality, but that's all I
was given. Is this a known problem, and if so, what steps should I take
to recover? The picture has been cropped, so here is a link:
https://i2.paste.pics/12d5a59a994bc93915ffecd470261674.png

-- 
Regards,
Ruslan T.O.S. Of Reality