From nobody Fri Feb 14 00:50:03 2025
X-Original-To: freebsd-virtualization@mlmmj.nyi.freebsd.org
List-Archive: https://lists.freebsd.org/archives/freebsd-virtualization
Sender: owner-freebsd-virtualization@FreeBSD.org
Content-Type: text/plain; charset=utf-8
Subject: Re: A way to have a console (aarch64) under macOS Parallels: build the kernel with nodevice virtio_gpu; any way with an official kernel build?
From: Mark Millard <marklmi@yahoo.com>
Date: Thu, 13 Feb 2025 16:50:03 -0800
To: Warner Losh
Cc: Virtualisation on FreeBSD, freebsd-arm

> On Feb 13, 2025, at 14:55, Warner Losh wrote:
> 
>> On Thu, Feb 13, 2025, 3:40 PM Mark Millard wrote:
>> I've been testing using FreeBSD under Parallels on a MacBook Pro M4 MAX,
>> although the issue below and its handling may not be specific to aarch64
>> contexts.
>> 
>> After (from a dmesg -a of a verbose boot):
>> 
>> . . .
>> 000.000078 [ 452] vtnet_netmap_attach vtnet attached txq=1, txd=128 rxq=1, rxd=128
>> pci0: at device 9.0 (no driver attached)
>> virtio_pci1: mem 0x10000000-0x17ffffff,0x18008000-0x18008fff,0x18000000-0x18003fff at device 10.0 on pci0
>> vtgpu0: on virtio_pci1
>> virtio_pci1: host features: 0x100000000
>> virtio_pci1: negotiated features: 0x100000000
>> virtio_pci1: attempting to allocate 1 MSI-X vectors (2 supported)
>> virtio_pci1: attempting to allocate 2 MSI-X vectors (2 supported)
>> pcib0: matched entry for 0.10.INTA
>> pcib0: slot 10 INTA hardwired to IRQ 39
>> virtio_pci1: using legacy interrupt
>> VT: Replacing driver "efifb" with new "virtio_gpu".
>> 
>> I end up having no console. I ended up in a state where it
>> turned out booting went to stand-alone mode for a manual
>> fsck. So: no ssh access or any other access. I ended up
>> using the Windows Dev Kit 2023 with the boot device in
>> order to figure out what was going on and to do the needed
>> fsck.
>> 
>> Turns out that if I'm building, installing, and booting
>> my own kernel, there is a way around that replacement
>> of efifb by using:
>> 
>> nodevice virtio_gpu
>> 
>> in the kernel configuration, so that the boot ends up
>> using efifb (no replacement).
>> 
>> Of course, this does not help with kernels from official
>> FreeBSD builds.
>> 
>> Is there a way to disable virtio_gpu for something that
>> runs an official kernel build (where virtio_gpu is
>> built into the kernel)?
> 
> 
> boot_serial=no
> 
> In loader.conf?

How would that lead to not doing:

VT: Replacing driver "efifb" with new "virtio_gpu".

? Using the menu, all 4 combinations stopped in the same place and in the
same way until I built and used a kernel that did not have virtio_gpu at
all, if I remember right. Both efifb and virtio_gpu seem to be for the
video side of the alternatives.

(It seems that something more is needed for virtio_gpu to end up providing
a console, if it can. Maybe the Parallels Toolbox for an actual Linux guest
provides what is missing in that kind of context? I'm not after X11 or
such, just an operational console for seeing information and dealing with
problems when ssh cannot be used.)

I've no clue whether the issue is specific to Parallels: I've really only
used Hyper-V (only getting it working for FreeBSD as a guest OS on amd64)
and Parallels (aarch64 currently). So I do not know if it would be worth a
tunable to, say, set the vd_priority offset from VD_PRIORITY_GENERIC, such
that it could end up not replacing efifb. (I looked in the source code a
little for this message.)

===
Mark Millard
marklmi at yahoo.com
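[For concreteness, the custom-kernel workaround described in this thread amounts to a kernel config along the following lines. This is a sketch: the config name and the arm64 path are illustrative, and only the nodevice line comes from the thread itself.]

```
# sys/arm64/conf/GENERIC-NOVTGPU (name is illustrative)
include GENERIC
ident   GENERIC-NOVTGPU

# Keep efifb as the vt(4) backend by leaving the
# virtio_gpu driver out of the kernel entirely.
nodevice virtio_gpu
```

Built and installed the usual way from /usr/src, e.g. `make buildkernel installkernel KERNCONF=GENERIC-NOVTGPU`.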