Date:      Wed, 27 Feb 2002 01:56:52 -0800 (PST)
From:      Matthew Dillon <dillon@apollo.backplane.com>
To:        Robert Watson <rwatson@FreeBSD.ORG>, current@FreeBSD.ORG
Subject:   Re: Discussion of guidelines for additional version control
Message-ID:  <200202270956.g1R9uqB25473@apollo.backplane.com>
References:   <Pine.BSF.4.21.0202261927170.97278-100000@InterJet.elischer.org> <p05101400b8a20deb7635@[128.113.24.47]>

    My general opinion is that a developer should not claim ownership of
    anything; ownership should simply be apparent from the traffic the
    developer posts to the public lists, from discussion, and from his
    commits.  This implies that the developer is actively working on only
    one thing at a time, at least with regard to non-trivial projects,
    which further implies that the work can be committed in smaller chunks
    (or as one smaller chunk) rather than all at once.  While this ideal
    cannot always be met, I believe it is a good
    basis for people working on a large project without formal management
    (i.e. open source).  In the relatively rare case where a large rip-up
    must be done all at once, an exception is made.  For the FreeBSD project
    such an exception would be something like CAM or KSEs, but virtually
    nothing else.

    As examples of this, here are some non-trivial things that I have
    done using this model:

	* Implementation of idle processes.  Anyone remember how long that
	  took me?  It turned out to be a good basis for further work now,
	  didn't it?

	* Pushing Giant through (most of) the syscall code and into the 
	  syscalls. 

	* The critical_*() patch is an excellent example of this.  The
	  engineering cycle was 3 days (not including all the crap that
	  is preventing me from committing it), and it is rock solid.

	* The rewrite of the swap system (in two pieces: added the radix
	  tree bitmap infrastructure, then switched out the swapper; a toy
	  sketch of the bitmap idea follows this list).  I think my
	  engineering cycle on this was 1.5 weeks.  DG might remember
	  better.
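
    To make the radix tree bitmap reference above concrete, here is a toy
    two-level bitmap allocator.  This is just a minimal sketch of the idea,
    not the code that actually went into the tree; the names (bmap_init,
    bmap_alloc, bmap_free) and the 32x32 block sizing are made up for
    illustration.  The point is that each upper-level bit summarizes
    whether a lower-level word still has free blocks, so an allocation
    search can skip exhausted regions without scanning them.

	#include <stdint.h>
	#include <strings.h>		/* ffs() */

	#define	LEAVES	32		/* 32 leaf words = 1024 blocks */

	static uint32_t summary;	/* bit i set => leaf[i] has free space */
	static uint32_t leaf[LEAVES];	/* bit j set => block j is free */

	void
	bmap_init(void)
	{
		int i;

		summary = 0xffffffffu;
		for (i = 0; i < LEAVES; i++)
			leaf[i] = 0xffffffffu;
	}

	/* Allocate one block; returns the block number, or -1 if full. */
	int
	bmap_alloc(void)
	{
		int i, j;

		if (summary == 0)
			return (-1);
		i = ffs(summary) - 1;		/* a leaf with free space */
		j = ffs(leaf[i]) - 1;		/* a free block in that leaf */
		leaf[i] &= ~(1U << j);
		if (leaf[i] == 0)		/* leaf exhausted: prune it */
			summary &= ~(1U << i);
		return (i * 32 + j);
	}

	void
	bmap_free(int blk)
	{
		int i = blk / 32, j = blk % 32;

		leaf[i] |= 1U << j;
		summary |= 1U << i;		/* leaf has free space again */
	}

    The real allocator generalizes this summarize-and-skip scheme into a
    deeper radix tree so it scales to large swap areas, but the principle
    is the same, and landing that infrastructure first is what let the
    swapper switch itself be a much smaller change.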

    So as you can see, from my viewpoint something like UCRED is just not
    that big a deal.  With the infrastructure incrementally committed and
    in place, the final UCRED pushdown is something one could write, test,
    and commit, from scratch, in just a few days.  That is far more
    efficient than trying to keep it in a private tree for months on end,
    having to constantly sync it with code and algorithmic changes occurring
    in the rest of the tree.  The same can be said for many of the other
    subsystems sitting in P4, like preemption.  Experimental code has
    its uses, but when I've gleaned the information I need from one of my
    experiments I typically scrap the source entirely so it doesn't get in
    the way of other work.  Later I rewrite the feature from scratch
    when the infrastructure has developed enough to support it.  It may seem
    inefficient, but the reality is that it speeds up my overall engineering
    and design cycles and, at least for me, the end result is pretty damn
    good code.  Because I typically focus on one thing at a time, 3 days
    to get something simple like critical_*() done often seems like an
    eternity to me.  I can't just switch my focus to something else;
    that isn't how I work.  I can do a few things in parallel, like helping
    find bugs in this or that, but real engineering cycles I do one at a
    time.

    Personally speaking, if I do something complex and instrumenting it
    is straightforward, I always go ahead and instrument it because it
    makes debugging by others easy.  That is why I have been instrumenting,
    and want to keep instrumenting, Giant in the places where it is
    otherwise being removed, and why, for example, I had a sysctl to allow
    the critical_*() behavior to change on the fly for testing purposes
    (that pattern is sketched below).  The thing about
    instrumentation is that it's easy to put in if you integrate it right
    off the bat, and utterly trivial to rip out months or years down the
    line when you don't need it any more.  I don't understand why people
    complain about 'putting in instrumentation that we'll just have to rip
    out later'.  That kind of attitude is tantamount to saying 'I'm not going
    to bother to make this code debuggable because I've found all the bugs
    in it'.  Yah, right.  From my point of view instrumentation reduces
    the overall time required to add and stabilize a new feature, whereas
    someone saving source lines by not instrumenting his code is simply
    setting himself up for a long, buggy engineering cycle down the line
    (and is not being a particularly good neighbor to his peers, either).
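
    For the curious, the sysctl toggle mentioned above looks roughly like
    the following.  This is only a sketch of the pattern, not the actual
    critical_*() patch; the knob name debug.critical_mode and the function
    names here are all invented for illustration.

	#include <sys/param.h>
	#include <sys/kernel.h>
	#include <sys/systm.h>
	#include <sys/sysctl.h>

	/* 1 = behavior under test, 0 = known-good fallback */
	static int critical_mode = 1;

	SYSCTL_INT(_debug, OID_AUTO, critical_mode, CTLFLAG_RW,
	    &critical_mode, 0, "Select critical_*() behavior at run time");

	static void
	crit_new_path(void)
	{
		/* placeholder: the new, cheaper implementation under test */
	}

	static void
	crit_old_path(void)
	{
		/* placeholder: the conservative fallback */
	}

	void
	instrumented_critical_enter(void)
	{
		/* Pick the code path based on the run-time knob. */
		if (critical_mode)
			crit_new_path();
		else
			crit_old_path();
	}

    A tester who hits a problem can flip 'sysctl debug.critical_mode=0' on
    a live system and immediately tell you whether the new path is at
    fault, which is exactly the kind of feedback you only get by exposing
    the code to everyone.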

    There is a damn good reason why I am rabid about instrumenting a
    complex piece of code.  I don't care *how* long a developer has tested
    a big piece of code; it is simply naive to believe that anything short
    of exposure to the entire development community will exercise the code
    enough to really give it a good test.  In that respect I have a strong
    dislike for the idea of sub-groups of developers testing a
    non-experimental feature (i.e. intended for commit) in a side-tree.
    I do not feel that it adds anything to the project and, in fact, I
    believe it is actively detrimental.

						-Matt

