From owner-freebsd-arch@FreeBSD.ORG Thu Nov 15 22:07:07 2012
Subject: Re: [RFC] test layout/standardization for FreeBSD
From: Marcel Moolenaar <marcel@xcllnt.net>
In-Reply-To: <7099.1352886181@critter.freebsd.dk>
References: <7099.1352886181@critter.freebsd.dk>
Date: Thu, 15 Nov 2012 14:06:52 -0800
To: Poul-Henning Kamp
Cc: Garrett Cooper, George Neville-Neil, Matthew Fleming,
    "freebsd-arch@FreeBSD.org Arch"
List-Id: Discussion related to FreeBSD architecture

On Nov 14, 2012, at 1:43 AM, Poul-Henning Kamp wrote:

> --------
> In message , Garrett Cooper writes:
>
>> I asked for feedback from some of the stakeholders of
>> the FreeBSD test automation effort [...]
>
> Can I throw in something here?
>
> A very important thing is to have systematic metadata about test cases,
> in particular:
>
> A) Is the test 100% deterministic, or is there a risk of random failure?
>
> B) Is the test reliable on a heavily loaded machine?
>
> C) Estimated duration of the test

I can't disagree, but I would argue that it's not more important than
having test cases. I'm not trying to be flippant, but we need to get off
the ground before we try to retract the landing gear.

I would argue that if we build this stuff out, we either hit the
problems that metadata would solve or we don't. If we do, we also have
real examples to work with. If we don't, then we didn't "waste" time.
Since at this point in time it's just an academic exercise that doesn't
yield anything concrete, we're more likely than not wasting time.

For example: if a test is not reliable on a heavily loaded machine,
then the test is ipso facto not 100% deterministic. So, B implies A.
How is this different from a metadata perspective? And if we test an
RNG, are we ever going to be 100% deterministic?

Also, the estimated duration of a test is very platform specific. What
exactly does it mean when the actual runtime is shorter or longer than
the estimate? What error margin are we comfortable with? Why that
margin? Did the test FAIL if it ran too long or finished too quickly?
If not, then what is the estimate for?

In short: I'm not sure we can discuss anything concrete just yet, so
let's wait until we can. I don't expect a lot of difficulty collecting
statistics and data, so let's do that first. The easy cases will show
up soon, I think, and we'll sort those out first. Think functional
tests vs. performance tests, and white-box vs. black-box. Eventually
we'll have buckets, and we'll find that metadata is a lot easier to
define and use on a per-bucket basis... Maybe not... Let's just see...

$0.02

-- 
Marcel Moolenaar
marcel@xcllnt.net
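[As a rough illustration of the thread's point, not part of the original
exchange: the metadata fields Poul-Henning proposes, and Marcel's "B
implies A" observation, could be sketched as below. All names
(`TestMeta`, `consistent`, the field names) are hypothetical and do not
come from any actual FreeBSD test framework.]

```python
# Hypothetical sketch of per-test-case metadata. Field names are
# illustrative only; no FreeBSD framework defines this schema.
from dataclasses import dataclass


@dataclass
class TestMeta:
    name: str
    deterministic: bool    # A) no risk of random failure
    load_reliable: bool    # B) reliable on a heavily loaded machine
    est_duration_s: float  # C) estimated duration (platform specific)


def consistent(meta: TestMeta) -> bool:
    """Check Marcel's 'B implies A' rule: a test that is unreliable
    under load cannot be 100% deterministic. Equivalently, claiming
    deterministic=True while load_reliable=False is contradictory."""
    return not (meta.deterministic and not meta.load_reliable)


# A load-sensitive timing test claiming determinism is flagged:
ok = TestMeta("fs_basic", deterministic=True, load_reliable=True,
              est_duration_s=1.0)
bad = TestMeta("sched_timing", deterministic=True, load_reliable=False,
               est_duration_s=0.5)
print(consistent(ok))   # True
print(consistent(bad))  # False
```

The point of the sketch is only that one metadata field can constrain
another, which is part of why defining the schema before having real
test cases is premature.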