From owner-freebsd-hackers@freebsd.org Sat Jan 2 19:06:51 2021
Date: Sat, 2 Jan 2021 11:06:44 -0800
From: Doug Ambrisko <ambrisko@ambrisko.com>
To: Neel Chauhan
Cc: Mark Johnston, freebsd-hackers@freebsd.org, ambrisko@freebsd.org
Subject: Re: Debugging a WIP PCI/ACPI patch: Bad tailq NEXT(0xffffffff81cde660->tqh_last) != NULL
Message-ID: <20210102190644.GB87535@ambrisko.com>
In-Reply-To: <7cda3be6594d5ad5bdc69019f72b03d3@neelc.org>
With VMD, the PCI "root" is hidden behind the VMD device. To access devices
behind it, a new PCI domain is created, and PCI config space accesses are
indexed through the VMD device via an offset. Intel seems to have reduced
the available bus space on some HW, so for a bus access below what the HW
implements we have to return an error indicating nothing is there. Then,
starting at the first implemented bus, we need to rebase accesses to 0 in
the HW.

The PCI probe will look for buses from 0 to 255. Judging from the Linux
driver, your HW only works from 224 to 255, so we need to fail anything
under 224 and, for bus requests of 224 and higher, subtract 224. Thus the
b - sc->vmd_bus_start part.

I'm not sure if we could do it the other way, in which we allow bus
requests 0-12 to pass and fail anything over that. I'm not sure if there is
any specific reason why that wouldn't work; Linux didn't do it that way,
but that doesn't mean it wouldn't work. It would be good to start with the
Linux method and then test 0 to n, where n is the max. number of buses the
HW allows. Anything at n or above would have to return a failure.

Doug A.

On Sat, Jan 02, 2021 at 09:20:20AM -0800, Neel Chauhan wrote:
| Just to ping you in case you may have missed my reply (I understand, New
| Years Day).
|
| Is there a reason why "b = pci_get_bus(dev);" returns 0 even when the bus
| number is shifted (as it is on Linux)?
|
| -Neel
|
| On 2020-12-31 21:49, Neel Chauhan wrote:
| > Hi Doug,
| >
| > Thank you so much for this information.
| >
| > On 2020-12-31 12:07, Doug Ambrisko wrote:
| >> FYI, looks like this needs to be ported over from Linux:
| >>
| >> static char __iomem *vmd_cfg_addr(struct vmd_dev *vmd, struct pci_bus *bus,
| >>                                   unsigned int devfn, int reg, int len)
| >> {
| >>         char __iomem *addr = vmd->cfgbar +
| >>                              ((bus->number - vmd->busn_start) << 20) +
| >>                              (devfn << 12) + reg;
| >>
| >> to
| >> vmd_read_config
| >>         offset = (b << 20) + (s << 15) + (f << 12) + reg;
| >>
| >> vmd_write_config(device_t dev, u_int b, u_int s, u_int f, u_int reg,
| >>         offset = (b << 20) + (s << 15) + (f << 12) + reg;
| >>
| >> i.e.
| >>         offset = ((b - sc->vmd_bus_start) << 20) + (s << 15) + (f << 12) + reg;
| >>
| >> vmd_bus_start should be added to the softc as a uint8_t type and needs
| >> to be set via attach. We need range checks to make sure
| >> vmd_read_config/vmd_write_config don't access something out of range,
| >> since the range has been reduced.
| >
| > One thing I noticed is that the "b" variable (which corresponds to the
| > Linux bus->number) is 0 (thanks to printf). This should be the bus
| > number if we want to attach.
| >
| > If I use "b = pci_get_bus(dev);" in the attach, b is still 0.
| >
| > And that leads to a kernel panic.
| >
| >> Not sure what the shadow registers do. These both seem to be new Intel
| >> features, and Intel docs have been minimal. Looks like Intel is doing
| >> a sparse map now on newer devices.
| >
| > I guess Linux is our best hope. Unless the new Intel docs are the Linux
| > kernel source.
| >
| >> I'm concerned about the Linux comment of:
| >>  * Certain VMD devices may have a root port configuration option which
| >>  * limits the bus range to between 0-127, 128-255, or 224-255
| >>
| >> since I don't see anything to limit it to 0-127, only ranges starting
| >> at 0, 128, or 224. Maybe there is a max of 128 buses overall?
| >
| > I could be wrong, but I guess that's a typo.
| >
| >> I don't have this type of HW to test things.
| >
| > I can use my hardware for testing. In the worst-case scenario, I can
| > donate an entry-level 11th Gen/TigerLake system if I have the funds
| > and/or can get a tax credit.
| >
| >> Doug A.
| >
| > -Neel