Date:      Tue, 26 Aug 1997 11:57:10 -0700
From:      "Michael L. VanLoon -- HeadCandy.com" <michaelv@MindBender.serv.net>
To:        Simon Shapiro <Shimon@i-connect.net>
Cc:        "Jordan K. Hubbard" <jkh@time.cdrom.com>, current@freebsd.org, Ollivier Robert <roberto@keltia.freenix.fr>
Subject:   Re: IDE vs SCSI was: flags 80ff works (like anybody doubted it) 
Message-ID:  <199708261857.LAA23349@MindBender.serv.net>
In-Reply-To: Your message of Mon, 25 Aug 97 23:24:50 -0700. <XFMail.970825232450.Shimon@i-Connect.Net> 


>Hi "Jordan K. Hubbard";  On 25-Aug-97 you wrote: 
>>  Hmmm.  If we're going to talk SCSI perf, let's get seriously SCSI here
>>  then: Quantum XP39100W drive on 2940UW controller:
>>  root@time-> dd if=/dev/rsd0 of=/dev/null count=1600 bs=64k
>>  1600+0 records in
>>  1600+0 records out
>>  104857600 bytes transferred in 10.974902 secs (9554309 bytes/sec)

>*  Given unlimited CPU cycles, IDE is much ``better'' than SCSI:
>   a.  Much cheaper.  A simple IDE interface costs about $0.11 to build.
>   b.  Much simpler code.
>   c.  Much shorter latencies on a given command.
>   d.  Runs sequential tests much faster.

You forgot a condition: Given unlimited CPU cycles, and a limited
budget, IDE is much ``better'' than SCSI.

>*  But consider this;
>    a.  How do you put more than 2 devices on a cable?
>    b.  How do you make the cable longer than a child's step?
>    c.  How do you issue multiple commands to multiple devices and allow
>        them to disconnect and re-connect when done?
>    d.  How do you allow command sequences to be optimized by the device?

     e.  How do you get simultaneous, pipe-lined processing on all
         drives at once in a stripe set?

>Answer:  By replacing IDE with SCSI :-)

>Why do you guys always evaluate your disk systems with huge sequential
>reads?  How many times do you actually use your computer to do such I/O?
>(Yes, I burn rubber on my truck; it excites the boys to no end :-)

It's just one way to measure.  Honestly, not the best.  I think most
of us use more than this one measurement.

>Even access to raw devices is limited (for excellent reasons) to 64K at
>a time.  Measure your performance in operations/sec and you will be headed
>in the right direction.  Load the system with multiple processes and you
>will start getting an idea of how useful the system is as a server.

And start loading it with processes while accessing multiple drives
(possibly for interleaved swap, various disk-accessing processes, and/or
striped partitions).  You'll really wish you were using SCSI in that
scenario.
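
For what it's worth, here is roughly the kind of loaded test I have in
mind.  This is not Simon's st.c (I haven't seen it); it's just a quick
sketch, and the device name, worker count, block size, and run time are
all made-up knobs you'd tune for your own setup:

/*
 * ops.c -- minimal sketch, not st.c: fork N workers, each doing random
 * 64K reads against a raw device for RUNTIME seconds; the parent then
 * reports the aggregate operations per second.  Every constant below is
 * made up; point DEVICE at an idle raw disk and size NBLOCKS to fit it.
 */
#include <sys/types.h>
#include <sys/wait.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define DEVICE   "/dev/rsd0"      /* example raw device; use your own   */
#define NWORKERS 64               /* concurrent reader processes        */
#define BLKSIZE  (64 * 1024)      /* matches the 64K raw transfer size  */
#define NBLOCKS  10000UL          /* NBLOCKS * BLKSIZE must fit on disk */
#define RUNTIME  30               /* seconds each worker runs           */

static unsigned long
worker(void)
{
    char          *buf = malloc(BLKSIZE);
    unsigned long  ops = 0;
    time_t         stop = time(NULL) + RUNTIME;
    int            fd = open(DEVICE, O_RDONLY);

    if (fd < 0 || buf == NULL) {
        perror(DEVICE);
        _exit(1);
    }
    srandom((unsigned)getpid());
    while (time(NULL) < stop) {
        off_t off = (off_t)(random() % NBLOCKS) * BLKSIZE;

        if (lseek(fd, off, SEEK_SET) == (off_t)-1 ||
            read(fd, buf, BLKSIZE) != BLKSIZE) {
            perror("read");
            _exit(1);
        }
        ops++;
    }
    close(fd);
    return (ops);
}

int
main(void)
{
    unsigned long n, total = 0;
    int           fds[2], i;

    if (pipe(fds) < 0) {
        perror("pipe");
        return (1);
    }
    for (i = 0; i < NWORKERS; i++) {
        switch (fork()) {
        case -1:
            perror("fork");
            return (1);
        case 0:                          /* child: run, report, exit   */
            n = worker();
            write(fds[1], &n, sizeof(n));
            _exit(0);
        }
    }
    for (i = 0; i < NWORKERS; i++)       /* collect per-worker counts  */
        if (read(fds[0], &n, sizeof(n)) == sizeof(n))
            total += n;
    while (wait(NULL) > 0)
        ;
    printf("%d workers, %lu ops in %d sec = %lu ops/sec\n",
        NWORKERS, total, RUNTIME, total / RUNTIME);
    return (0);
}

Run it once with NWORKERS set to 1, then again at 64 or 256, and compare
the aggregate ops/sec; the gap between those numbers is where things like
disconnect/reconnect start to pay for themselves.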

>Example:
>This discussion is based on st.c, a random I/O generator I wrote some
>time ago.  As a matter of fact, I wrote it when I was trying to decide
>between Linux, FreeBSD, Solaris, UnixWare and NT (just to keep management
>happy).  St.c will randomly read from a file (or raw device; I always test
>raw devices, as filesystem performance is not what I am being paid for, and
>I am a very insignificant ``expert'' in that area).  You can ask st.c to
>either write back the read data, to write a pattern, to sequentially access
>the disk (two different ways), to lock, to flush caches, etc.  You get the
>idea.
>
>FreeBSD (current, as of last Friday) will start saturating (losing I/O
>rate) at around 256 processes.  This may be due to the hardware used, or
>maybe because of some other reason.  Since this is exactly where we want
>to be, we did not bother to find out why.
>
>Under 2.2, we see the saturation point at about 900 disk I/O ops/sec.
>Under 3.0 we see just over 1,400.  Again, the test method was different,
>so these results are not directly comparable.  Our target was a proven
>800.  We are happy.
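
I haven't seen st.c itself, so purely to make sure we're picturing the
same kind of tool, here is my guess at what the core of such a generator
does.  The device name, block count, op count, and mode switch below are
all mine, not Simon's, and the write modes are destructive, so scratch
disks only:

/*
 * Not st.c -- just a guess at such a generator's inner loop.  One pass
 * of NOPS operations against a raw scratch device, in one of three
 * modes: read-only, read then write the same data back, or read then
 * overwrite the block with a fixed pattern.
 */
#include <sys/types.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define DEVICE  "/dev/rsd1"        /* made-up scratch device            */
#define BLKSIZE (64 * 1024)
#define NBLOCKS 20000UL            /* keep well inside the device size  */
#define NOPS    10000

enum mode { READ_ONLY, WRITE_BACK, WRITE_PATTERN };

int
main(int argc, char **argv)
{
    enum mode  m = (argc > 1) ? (enum mode)atoi(argv[1]) : READ_ONLY;
    int        fd = open(DEVICE, m == READ_ONLY ? O_RDONLY : O_RDWR);
    char      *buf = malloc(BLKSIZE);
    off_t      off;
    int        i;

    if (fd < 0 || buf == NULL) {
        perror(DEVICE);
        return (1);
    }
    srandom((unsigned)getpid());
    for (i = 0; i < NOPS; i++) {
        off = (off_t)(random() % NBLOCKS) * BLKSIZE;
        if (lseek(fd, off, SEEK_SET) == (off_t)-1 ||
            read(fd, buf, BLKSIZE) != BLKSIZE) {
            perror("read");
            return (1);
        }
        if (m == WRITE_PATTERN)
            memset(buf, 0x5a, BLKSIZE);    /* overwrite with a pattern  */
        if (m != READ_ONLY &&
            (lseek(fd, off, SEEK_SET) == (off_t)-1 ||
             write(fd, buf, BLKSIZE) != BLKSIZE)) {
            perror("write");
            return (1);
        }
    }
    printf("completed %d random ops in mode %d\n", NOPS, (int)m);
    return (0);
}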

I would think the disk subsystem would be the primary limiting factor
here.  What mix of controllers and drives were these tests run on?

It would also be interesting to run this simulation against a striped
set of SCSI drives, and just as enlightening to run the same test against
your striped set of IDE drives.

-----------------------------------------------------------------------------
  Michael L. VanLoon                           michaelv@MindBender.serv.net
        --<  Free your mind and your machine -- NetBSD free un*x  >--
    NetBSD working ports: 386+PC, Mac 68k, Amiga, Atari 68k, HP300, Sun3,
        Sun4/4c/4m, DEC MIPS, DEC Alpha, PC532, VAX, MVME68k, arm32...
    NetBSD ports in progress: PICA, others...
-----------------------------------------------------------------------------


