Date: Thu, 24 Apr 2008 17:28:58 +0100
From: Thomas Hurst <tom.hurst@clara.net>
To: Jeremy Chadwick <koitsu@freebsd.org>
Cc: Clayton Milos <clay@milos.co.za>, Kris Kennaway <kris@FreeBSD.ORG>, stable@FreeBSD.ORG, net@FreeBSD.ORG
Subject: Re: nfs-server silent data corruption
Message-ID: <20080424162858.GA46157@voi.aagh.net>
In-Reply-To: <20080421154333.GA96237@eos.sc1.parodius.com>
References: <wpmyno2kqe.fsf@heho.snv.jussieu.fr> <20080421094718.GY25623@hub.freebsd.org> <wp63ubp8e0.fsf@heho.snv.jussieu.fr> <20080421154333.GA96237@eos.sc1.parodius.com>
* Jeremy Chadwick (koitsu@freebsd.org) wrote:
> > I added it directly to the 2nd CPU (diagram on page 9 of
> > http://www.tyan.com/manuals/m_s2895_101.pdf) and the problem
> > seems to be the interaction between nfe0 and powerd .... :
>
> That board is the weirdest thing I've seen in years.

The K8WE is a very popular workstation board. I've been using one for
years.

> Two separate CPUs using a single (shared) memory controller,

Er, no. Where are you getting that? 4 DIMMs are connected per CPU,
though it's hardly strange to only populate one bank; that's just cheap
and nasty.

> two separate (and different!) nVidia chipsets, a SMSC I/O controller
> probably used for serial and parallel I/O,

Er, so? Sun X4x00 M2s do exactly the same; they run a 2200 off one CPU
and a 2050 off another (via an AMD 8132, no less). The non-M2 models did
much the same with a pair of AMD 8131s. They use SMSC I/O controllers
too:

http://www.sun.com/servers/entry/x4100/arch-wp.pdf
http://www.sun.com/servers/netra/x4200/wp.pdf

We've used dozens of these systems in production in various
configurations for years without a problem.

> two separate nVidia NICs with Marvell PHYs (yet somehow you can bridge
> the two NICs and PHYs?),

They're not separate; they hang off the same chip according to the
linked document. They are nve nonsense, though, not worth using imo.

> two separate PCI-e busses (each associated with a separate nVidia
> chipset), two separate PCI-X busses... the list continues.

Again, nothing surprising. Each CPU gets its own bus via its own HT
link. Back when the K8WE was first released, this was the only way to
get a pair of 16x PCIe slots.

> I know you don't need opinions at this point, but what a behemoth. I
> can't imagine that thing running reliably.

The only stability problems I've experienced have been the occasional
lockup using PowerNow since migrating from dual single-core to dual
dual-core.

-- 
Thomas 'Freaky' Hurst
http://hur.st/
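[For anyone chasing the suspected powerd/nfe0 interaction from the
original report, a quick way to rule powerd in or out is to pin the CPU
frequency and re-run the workload. This is only a sketch: the commands
are stock FreeBSD of that era, but the interface name nfe0 comes from
the original poster's setup and the frequency value is a placeholder you
must adjust to what dev.cpu.0.freq_levels reports on your own machine.]

```shell
# Hypothetical test procedure (run as root): stop powerd so it can no
# longer change the CPU frequency, then pin a fixed frequency manually.
/etc/rc.d/powerd stop

# Inspect the current frequency and the levels this CPU supports.
sysctl dev.cpu.0.freq
sysctl dev.cpu.0.freq_levels

# Pin an explicit level; 2600 is a placeholder, pick one of the values
# listed by freq_levels above.
sysctl dev.cpu.0.freq=2600

# Re-run the NFS workload, then check the interface for drops/errors.
netstat -I nfe0 -d
```

[If the corruption disappears with powerd stopped and the frequency
pinned, that points at the frequency-scaling interaction rather than
the NIC or NFS itself.]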