Date:      Wed, 9 Feb 2005 01:02:48 -0400 (AST)
From:      "Marc G. Fournier" <scrappy@hub.org>
To:        Greg 'groggy' Lehey <grog@FreeBSD.org>
Cc:        freebsd-questions@freebsd.org
Subject:   Dual-Xeon vs Dual-PIII (Was: Re: vinum in 4.x poor performer?)
Message-ID:  <20050209004513.G94338@ganymede.hub.org>
In-Reply-To: <20050209002232.B94338@ganymede.hub.org>
References:  <20050208231208.B94338@ganymede.hub.org> <20050209002232.B94338@ganymede.hub.org>


The more I look at this, the less I believe my 'issue' is with 
vinum ... compared to one of my other machines, it just doesn't *feel* right 
...

I have two servers that are fairly similar in config ... both running 
vinum RAID5 over 4 disks ... one is the Dual-Xeon that I'm finding 
"problematic" with 73G Seagate drives, and the other is the Dual-PIII with 
36G Seagate drives ...

The reason that I'm finding it hard to believe that my problem is with 
vinum is that the Dual-PIII is twice as loaded as the Dual-Xeon, but 
hardly seems to break a sweat ...

In fact, out of all my servers (3xDual-PIII, 1xDual-Athlon and 
1xDual-Xeon), only the Dual-Xeon doesn't seem to be able to perform ...

Now, out of all of the servers, only the Dual-Xeon, of course, supports 
HTT, which I *believe* is disabled, but from dmesg:

Copyright (c) 1992-2004 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
         The Regents of the University of California. All rights reserved.
FreeBSD 4.10-STABLE #1: Fri Oct 22 15:06:55 ADT 2004
     root@neptune.hub.org:/usr/obj/usr/src/sys/kernel
Timecounter "i8254"  frequency 1193182 Hz
CPU: Intel(R) Xeon(TM) CPU 2.40GHz (2392.95-MHz 686-class CPU)
   Origin = "GenuineIntel"  Id = 0xf27  Stepping = 7
   Features=0xbfebfbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CLFLUSH,DTS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE>
   Hyperthreading: 2 logical CPUs
real memory  = 4026466304 (3932096K bytes)
avail memory = 3922362368 (3830432K bytes)
Programming 24 pins in IOAPIC #0
IOAPIC #0 intpin 2 -> irq 0
Programming 24 pins in IOAPIC #1
Programming 24 pins in IOAPIC #2
FreeBSD/SMP: Multiprocessor motherboard: 4 CPUs
  cpu0 (BSP): apic id:  0, version: 0x00050014, at 0xfee00000
  cpu1 (AP):  apic id:  1, version: 0x00050014, at 0xfee00000
  cpu2 (AP):  apic id:  6, version: 0x00050014, at 0xfee00000
  cpu3 (AP):  apic id:  7, version: 0x00050014, at 0xfee00000
  io0 (APIC): apic id:  8, version: 0x00178020, at 0xfec00000
  io1 (APIC): apic id:  9, version: 0x00178020, at 0xfec81000
  io2 (APIC): apic id: 10, version: 0x00178020, at 0xfec81400
Preloaded elf kernel "kernel" at 0x80339000.
Warning: Pentium 4 CPU: PSE disabled
Pentium Pro MTRR support enabled
Using $PIR table, 19 entries at 0x800f2f30

It's showing "4 CPUs" ... but:

machdep.hlt_logical_cpus: 1

which, according to /usr/src/UPDATING, indicates that the HTT "cpus" aren't enabled:

20031022:
         Support for HyperThread logical CPUs has now been enabled by
         default.  As a result, the HTT kernel option no longer exists.
         Instead, the logical CPUs are always started so that they can
         handle interrupts.  However, the extra logical CPUs are prevented
         from executing user processes by default.  To enable the logical
         CPUs, change the value of the machdep.hlt_logical_cpus from 1 to
         0.  This value can also be set from the loader as a tunable of
         the same name.

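For anyone wanting to double-check the same thing on their own box, the knobs that UPDATING entry is talking about look roughly like this (a sketch for a 4.x system like this one):

```shell
# Check whether the HTT logical CPUs are kept halted (1 = halted, the default):
sysctl machdep.hlt_logical_cpus

# To let the logical CPUs run user processes, flip it at runtime ...
sysctl machdep.hlt_logical_cpus=0

# ... or set the same name as a loader tunable in /boot/loader.conf:
#   machdep.hlt_logical_cpus="1"
```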
Finally ... top shows:

last pid: 73871;  load averages:  9.76,  9.23,  8.16                                                                       up 9+02:02:26  00:57:06
422 processes: 8 running, 413 sleeping, 1 zombie
CPU states: 19.0% user,  0.0% nice, 81.0% system,  0.0% interrupt,  0.0% idle
Mem: 2445M Active, 497M Inact, 595M Wired, 160M Cache, 199M Buf, 75M Free
Swap: 2048M Total, 6388K Used, 2041M Free

   PID USERNAME   PRI NICE  SIZE    RES STATE  C   TIME   WCPU    CPU COMMAND
28298 www         64   0 28136K 12404K CPU2   2  80:59 24.51% 24.51% httpd
69232 excalibur   64   0 80128K 76624K RUN    2   2:55 16.50% 16.50% lisp.run
72879 www         64   0 22664K  9444K RUN    0   0:12 12.94% 12.94% httpd
14154 www         64   0 36992K 22880K RUN    0  55:07 12.70% 12.70% httpd
69758 www         63   0 15400K  8756K RUN    0   0:18 11.87% 11.87% httpd
  7553 nobody       2   0   158M   131M poll   0  33:19  8.98%  8.98% nsd
70752 setiathome   2   0 14644K 14084K select 2   0:47  8.98%  8.98% perl
71191 setiathome   2   0 13220K 12804K select 0   0:29  8.40%  8.40% perl
70903 setiathome   2   0 14224K 13676K select 0   0:42  7.37%  7.37% perl
33932 setiathome   2   0 21712K 21144K select 0   2:23  4.59%  4.59% perl

In this case ... 0% idle, 81% in system?
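To figure out where that system time is going, something like the following might help (a sketch; these are stock FreeBSD tools, run on the problem box):

```shell
# Per-device interrupt counts and rates -- rules out an interrupt storm:
vmstat -i

# Context switches, traps and syscalls sampled every 5 seconds,
# to see whether the system time tracks syscall volume:
vmstat 5

# Include system (kernel) processes in top's display, in case the
# time is being burned in a kernel thread rather than user processes:
top -S
```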

As a comparison the Dual-PIII/vinum server looks like:

last pid: 90614;  load averages:  3.64,  2.41,  2.69                                                                                          up 3+08:45:17  00:59:27
955 processes: 12 running, 942 sleeping, 1 zombie
CPU states: 63.9% user,  0.0% nice, 32.6% system,  3.5% interrupt,  0.0% idle
Mem: 2432M Active, 687M Inact, 563M Wired, 147M Cache, 199M Buf, 5700K Free
Swap: 8192M Total, 12M Used, 8180M Free, 12K In

   PID USERNAME   PRI NICE  SIZE    RES STATE  C   TIME   WCPU    CPU COMMAND
90506 scrappy     56   0 19384K 14428K RUN    0   0:06 22.98% 16.41% postgres
90579 root        57   0  3028K  2156K CPU1   1   0:04 26.23% 14.45% top
90554 pgsql       -6   0 12784K  7408K RUN    1   0:04 18.76% 11.87% postgres
90529 pgsql       54   0 14448K  8568K RUN    0   0:03 16.90% 11.28% postgres
90560 scrappy     -6   0 97368K 56900K vrlock 1   0:03 18.50% 10.99% postgres
90433 root        -6   0   576K   392K piperd 1   0:02 10.47%  7.76% gzip
84754 scrappy      2   0 15508K  8380K sbwait 0   2:41  6.30%  6.30% postgres
90553 root         2   0    98M 99120K select 1   0:01  5.94%  3.76% pg_dump
  4621 scrappy      2   0 19544K 11988K sbwait 0   9:36  2.05%  2.05% postgres

The Dual-PIII is running an Oct 7th kernel while the Dual-Xeon is running an 
Oct 22nd one ... if that means anything ...

Is there anything I can look at or try?  More information I can provide?
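(If the vinum state itself would be useful, it can be dumped with something like the following -- assuming vinum(8)'s usual list commands:)

```shell
# Current vinum configuration and object states:
vinum printconfig
vinum list

# Verbose listing with statistics, to spot one subdisk/drive
# dragging the RAID5 plex down:
vinum list -v -s
```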

Thanks ...




On Wed, 9 Feb 2005, Marc G. Fournier wrote:

> On Wed, 9 Feb 2005, Greg 'groggy' Lehey wrote:
>
>> On Tuesday,  8 February 2005 at 23:21:54 -0400, Marc G. Fournier wrote:
>>> 
>>> I have a Dual-Xeon server with 4G of RAM, with its primary file system
>>> consisting of 4x73G SCSI drives running RAID5 using vinum ... the
>>> operating system is currently FreeBSD 4.10-STABLE #1: Fri Oct 22 15:06:55
>>> ADT 2004 ... swap usage is 0% (6149) ... and it performs worse than any of
>>> my other servers, even though I have less running on it than the other servers ...
>>> 
>>> I also have HTT disabled on this server ... and softupdates enabled on the
>>> file system ...
>>> 
>>> That said ... am I hitting limits of software raid or is there something I
>>> should be looking at as far as performance is concerned?  Maybe something
>>> I have misconfigured?
>> 
>> Based on what you've said, it's impossible to tell.  Details would be
>> handy.
>
> Like?  I'm not sure what would be useful for this one ... I just sent in my 
> current drive config ... something else useful?
>
> Does this systat -v output help?
>
>    4 users    Load  4.64  5.58  5.77                  Feb  9 00:24
>
> Mem:KB    REAL            VIRTUAL                     VN PAGER  SWAP PAGER
>        Tot   Share      Tot    Share    Free         in  out     in  out
> Act 1904768  137288  3091620   381128  159276 count
> All 3850780  221996  1078752   605460         pages
>                                                     7921 zfod   Interrupts
> Proc:r  p  d  s  w    Csw  Trp  Sys  Int  Sof  Flt    242 cow     681 total
>    24     9282       949 8414*****  678  349 8198 566916 wire        ahd0 irq16
>                                                  2527420 act      67 ahd1 irq17
> 54.6%Sys   0.2%Intr 45.2%User  0.0%Nice  0.0%Idl   608208 inact   157 em0 irq18
> |    |    |    |    |    |    |    |    |    |     146620 cache   200 clk irq0
> ===========================>>>>>>>>>>>>>>>>>>>>>    12656 free    257 rtc irq8
>                                                          daefr
> Namei         Name-cache    Dir-cache                7363 prcfr
>    Calls     hits    %     hits    %                     react
>    46106    46005  100       13    0                     pdwake
>                                                          pdpgs
> Disks   da0   da1   da2   da3   da4 pass0 pass1           intrn
> KB/t   5.32  9.50 12.52 16.00  9.00  0.00  0.00    204096 buf
> tps      23     2     4     3     1     0     0      1610 dirtybuf
> MB/s   0.12  0.01  0.05  0.04  0.01  0.00  0.00    512000 desiredvnodes
> % busy    3     1     1     1     0     0     0    397436 numvnodes
>                                                   166179 freevnodes
>
> Drives da1 -> da4 are used in the vinum array; da0 is just the system drive 
> ...
>
> ----
> Marc G. Fournier           Hub.Org Networking Services (http://www.hub.org)
> Email: scrappy@hub.org           Yahoo!: yscrappy              ICQ: 7615664
> _______________________________________________
> freebsd-questions@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-questions
> To unsubscribe, send any mail to "freebsd-questions-unsubscribe@freebsd.org"
>

----
Marc G. Fournier           Hub.Org Networking Services (http://www.hub.org)
Email: scrappy@hub.org           Yahoo!: yscrappy              ICQ: 7615664


