From owner-freebsd-hackers@freebsd.org Fri Jan 1 05:50:06 2021
Date: Thu, 31 Dec 2020 21:49:56 -0800
From: Neel Chauhan <neel@neelc.org>
To: Doug Ambrisko
Cc: Mark Johnston, freebsd-hackers@freebsd.org, ambrisko@freebsd.org
Subject: Re: Debugging a WIP PCI/ACPI patch: Bad tailq NEXT(0xffffffff81cde660->tqh_last) != NULL
In-Reply-To: <20201231200744.GA95383@ambrisko.com>
Message-ID: <4f3f6a02a452f766063ae2acb060dc64@neelc.org>
List-Id: Technical Discussions relating to FreeBSD
Hi Doug,

Thank you so much for this information.

On 2020-12-31 12:07, Doug Ambrisko wrote:
> FYI, looks like this needs to be ported over from Linux:
>
> static char __iomem *vmd_cfg_addr(struct vmd_dev *vmd, struct pci_bus *bus,
>                                   unsigned int devfn, int reg, int len)
> {
>         char __iomem *addr = vmd->cfgbar +
>                              ((bus->number - vmd->busn_start) << 20) +
>                              (devfn << 12) + reg;
>
> to
>
> vmd_read_config
>         offset = (b << 20) + (s << 15) + (f << 12) + reg;
>
> vmd_write_config(device_t dev, u_int b, u_int s, u_int f, u_int reg,
>         offset = (b << 20) + (s << 15) + (f << 12) + reg;
>
> ie.
>         offset = ((b - sc->vmd_bus_start) << 20) + (s << 15) + (f << 12) + reg;
>
> vmd_bus_start should be added to the softc as a uint8_t type and needs
> to be set via attach.  We need range checks to make sure
> vmd_write_config/vmd_read_config doesn't read something out of range
> since it has been reduced.

One thing I noticed is that the "b" variable (which corresponds to the
Linux bus->number) is 0 (thanks to printf). This should be the bus number
if we want to attach. If I use "b = pci_get_bus(dev);" in the attach, b is
still 0, and that leads to a kernel panic.

> Not sure what the shadow registers do.  These both seem to be new Intel
> features and Intel doc's have been minimal.  Looks like Intel is doing
> a sparse map now on newer devices.  I guess Linux is our best hope.

Unless the new Intel docs are the Linux kernel source.

> I'm concerned about the Linux comment of:
>  * Certain VMD devices may have a root port configuration option which
>  * limits the bus range to between 0-127, 128-255, or 224-255
>
> since I don't see anything to limit it between 0-127 only starting
> at 0, 128 or 224.  Maybe there is a max of 128 busses overall?

I could be wrong, but I guess that's a typo.

> I don't have this type of HW to test things.

I can use my hardware for testing.
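To make sure I understand the range check you're describing, here is a
rough userland sketch of the offset computation with the bus-start
adjustment. The struct and function names (vmd_softc_sketch,
vmd_cfg_offset, vmd_bus_count) are just illustrative, not the actual
driver's:

```c
#include <assert.h>
#include <stdint.h>

/* Per-function config-space layout implied by the shifts in the thread:
 * 1 MiB per bus, 32 KiB per slot, 4 KiB per function. */
#define VMD_BUS_SHIFT  20
#define VMD_SLOT_SHIFT 15
#define VMD_FUNC_SHIFT 12

/* Hypothetical stand-in for the driver softc fields discussed above. */
struct vmd_softc_sketch {
	uint8_t vmd_bus_start;	/* first bus number behind the VMD bridge */
	uint8_t vmd_bus_count;	/* number of buses CFGBAR covers */
};

/*
 * Compute the byte offset into CFGBAR for (bus, slot, function, reg),
 * rejecting buses outside [vmd_bus_start, vmd_bus_start + vmd_bus_count).
 * Returns -1 when the bus is out of range, so callers can fail the
 * config access instead of touching memory past the (reduced) window.
 */
static long
vmd_cfg_offset(const struct vmd_softc_sketch *sc, unsigned int b,
    unsigned int s, unsigned int f, unsigned int reg)
{
	if (b < sc->vmd_bus_start ||
	    b >= (unsigned int)sc->vmd_bus_start + sc->vmd_bus_count)
		return (-1);
	return (((long)(b - sc->vmd_bus_start) << VMD_BUS_SHIFT) +
	    ((long)s << VMD_SLOT_SHIFT) + ((long)f << VMD_FUNC_SHIFT) + reg);
}
```

With a 224-255 root-port configuration this would map bus 224 to offset
0 of CFGBAR, which matches the (b - sc->vmd_bus_start) rebasing in your
"ie." line.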
In the worst-case scenario, I can donate an entry-level 11th Gen/Tiger Lake
system if I have the funds and/or can get a tax credit.

> Doug A.

-Neel