From owner-freebsd-hardware Wed May 21 06:26:02 1997
Date: Wed, 21 May 1997 15:25:44 +0200
From: Stefan Esser
To: Bruce Evans
Cc: garycorc@idt.net, HARDWARE@FreeBSD.ORG
Subject: Re: isa bus and boca multiport boards
References: <199705211236.WAA23416@godzilla.zeta.org.au>
In-Reply-To: <199705211236.WAA23416@godzilla.zeta.org.au>; from Bruce Evans on Wed, May 21, 1997 at 10:36:31PM +1000

On May 21, Bruce Evans wrote:
> I tried using 1 ins[bwl](port, dummyaddr, 1000) instead of 1000 inb's.
> The results were (perhaps not surprisingly) similar. They were within 1
> nsec for de0, varied a lot with the access size for the crtc, and were
> within 1 nsec for ins[bw] from wdc0 and twice as large for insl from
> wdc0 (1510 nsec instead of 755 nsec). The latter is a bit surprising
> - my wdc0 is configured for and uses 32-bit accesses and has a 16MB/s
> transfer speed, yet 4 bytes per 1510 nsec is only 2.65 MB/s. I guess
> the wdc data register is only valid when there is real data available :-).

I'm not sure I understand what happens, but it might be like this:

The PCI bus specs limit the time a single transfer may take to 16 bus
cycles. If a device can't deliver valid data within that time, it must
back off. (If a device knows it can't deliver data with little delay,
then it should acknowledge the address phase, but immediately signal
that a retry is necessary. This makes the bus available for other bus
masters, and when the host to PCI bridge requests the same data again,
a few hundred nanoseconds later, the slow chip should have prepared it
in its output buffer and should respond fast to this repeated request.)

Anyway, if the IDE chip finds it can't deliver any data, then it will
signal an abort (and probably set a flag in a status register). If you
are doing a DWORD register access, the PCI IDE interface will thereafter
try to retrieve the second 16bit word, and this will fail after the same
time. I guess that is the reason you see exactly twice the delay in the
insl case ...

The delay of some 750ns is equivalent to 25 PCI bus cycles. This is
slightly more than the allowed 16 wait cycles, but is in line with the
other delays you found (11 clocks best case). If the 25 cycles are 10
cycles for the fastest possible inb() and 15 wait cycles, then this
looks self-consistent, at least :)

I guess the 32bit IDE hardware is just too dumb to know it need not try
to fetch the second 16 bits after the first transfer failed to deliver
any data ...
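In case anyone wants to reproduce such numbers, here is a rough userland
sketch (my own, not Bruce's test program) for FreeBSD/i386. It assumes
you run it as root and open /dev/io to gain I/O privilege; the port
(0x80), the loop count and the 30ns/clock figure (33MHz PCI) are just
example values.

/*
 * Rough I/O read latency check (userland sketch, FreeBSD/i386).
 * Assumes /dev/io grants I/O privilege; run as root.
 * The port (0x80) and loop count are examples only.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>

static __inline unsigned char
my_inb(unsigned short port)
{
	unsigned char v;

	__asm__ __volatile__("inb %1,%0" : "=a" (v) : "d" (port));
	return (v);
}

int
main(void)
{
	struct timeval t0, t1;
	int i, n = 100000;
	unsigned short port = 0x80;	/* example port only */
	double ns;

	if (open("/dev/io", O_RDWR) < 0) {	/* enables user-mode inb */
		perror("/dev/io");
		exit(1);
	}
	gettimeofday(&t0, NULL);
	for (i = 0; i < n; i++)
		(void)my_inb(port);
	gettimeofday(&t1, NULL);
	ns = ((t1.tv_sec - t0.tv_sec) * 1e6 +
	    (t1.tv_usec - t0.tv_usec)) * 1000.0 / n;
	printf("%.0f ns per inb() = %.1f PCI clocks (at ~30ns/clock)\n",
	    ns, ns / 30.0);
	return (0);
}

Dividing the measured time per access by the ~30ns PCI clock period is
how the 25 cycle estimate above was obtained.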
> >2) The PCI bus may have been "parked" at some other
> > bus master. In order to give the output drivers
> > of the current master time to go into a high
> > impedance state, one cycle of delay is added.
>
> Clearly some magic (like data available :-) is required to
> get burst mode. Do you think 300+ nsec is typical for
> non-data registers?

Well, there is no such thing as burst mode I/O in PCI :)
Burst transfers are only defined for memory accesses ...

Regards, STefan