Date: Sat, 10 Mar 2012 15:30:05 +0000
From: Seyit Özgür <seyit.ozgur@istanbul.net>
To: "freebsd-net@freebsd.org" <freebsd-net@freebsd.org>
Subject: FreeBSD 9.0: some tuning for best performance with the Intel 82598 controller
Message-ID: <3807CE6F3BF4B04EB897F4EBF2D258CE5C044A49@GAMMA.magnetdigital.local>
References: <3807CE6F3BF4B04EB897F4EBF2D258CE5C030DB5@yuhanna.magnetdigital.local> <3807CE6F3BF4B04EB897F4EBF2D258CE5C038290@yuhanna.magnetdigital.local>
Hello,

I am trying to build a server that resists SYN floods. I bought an IBM x3650 with a 24-core Xeon CPU, 64 GB of RAM, and an SSD disk, and plugged an Intel® 82598-based 10 Gigabit XF SR Server Adapter into it. I installed FreeBSD 9.0-RELEASE, then found the ixgbe-2.4.4.tar.gz driver and installed it.

I checked Intel's documents and support pages but did not find any performance-tuning document for FreeBSD. There is one like this for Linux:

http://www.intel.com/content/www/us/en/ethernet-controllers/82575-82576-82598-82599-ethernet-controllers-latency-appl-note.html

but I don't see any FreeBSD equivalent.

I need to improve performance further. Can you help me with this case? I know this network card can handle more pps: with the configuration below I measured 600,000 pps of 46-byte SYN packets against an open port with no input errors, which is my best result so far, but I want to withstand a 2,000,000 pps SYN flood.

I see only 8 RX IRQ queues handling traffic, but I have a 24-core CPU. How can I assign more CPUs for better performance? (I attached a screenshot of the 8 queues.) How can I use more than 8 cores?

Finally, do I need to tune the ixgbe-2.4.4 driver, or change FreeBSD kernel parameters, for better performance?
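(Aside, not from the original mail: one thing worth trying for the more-than-8-queues question is a loader tunable. Some ixgbe 2.4.x builds let the queue count be overridden at boot; the tunable name below is an assumption about this particular driver version and should be checked against the driver's README before use. Note also that, as far as I know, RSS on the 82598 spreads traffic over at most 16 queues, so 24 cores cannot each get a queue in any case.)

```
# /boot/loader.conf -- hypothetical; only effective if this ixgbe build honors it
hw.ixgbe.num_queues=16    # request 16 queue pairs instead of the default 8
```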
Does anybody know about these counters?

dev.ix.0.dropped: 0
dev.ix.0.mbuf_defrag_failed: 0
dev.ix.0.no_tx_dma_setup: 0
dev.ix.0.watchdog_events: 0
dev.ix.0.tso_tx: 0
dev.ix.0.link_irq: 0
dev.ix.0.queue0.interrupt_rate: 0
dev.ix.0.queue0.txd_head: 0
dev.ix.0.queue0.txd_tail: 0
dev.ix.0.queue0.no_desc_avail: 0
dev.ix.0.queue0.tx_packets: 0
dev.ix.0.queue0.rxd_head: 0
dev.ix.0.queue0.rxd_tail: 0
dev.ix.0.queue0.rx_packets: 0
dev.ix.0.queue0.rx_bytes: 0
dev.ix.0.queue0.lro_queued: 0
dev.ix.0.queue0.lro_flushed: 0
dev.ix.0.queue1.interrupt_rate: 0
dev.ix.0.queue1.txd_head: 0
dev.ix.0.queue1.txd_tail: 0
dev.ix.0.queue1.no_desc_avail: 0
dev.ix.0.queue1.tx_packets: 0
dev.ix.0.queue1.rxd_head: 0
dev.ix.0.queue1.rxd_tail: 0
dev.ix.0.queue1.rx_packets: 0
dev.ix.0.queue1.rx_bytes: 0
dev.ix.0.queue1.lro_queued: 0
dev.ix.0.queue1.lro_flushed: 0
dev.ix.0.queue2.interrupt_rate: 0
dev.ix.0.queue2.txd_head: 0
dev.ix.0.queue2.txd_tail: 0
dev.ix.0.queue2.no_desc_avail: 0
dev.ix.0.queue2.tx_packets: 0
dev.ix.0.queue2.rxd_head: 0
dev.ix.0.queue2.rxd_tail: 0
dev.ix.0.queue2.rx_packets: 0
dev.ix.0.queue2.rx_bytes: 0
dev.ix.0.queue2.lro_queued: 0
dev.ix.0.queue2.lro_flushed: 0
dev.ix.0.queue3.interrupt_rate: 0
dev.ix.0.queue3.txd_head: 0
dev.ix.0.queue3.txd_tail: 0
dev.ix.0.queue3.no_desc_avail: 0
dev.ix.0.queue3.tx_packets: 0
dev.ix.0.queue3.rxd_head: 0
dev.ix.0.queue3.rxd_tail: 0
dev.ix.0.queue3.rx_packets: 0
dev.ix.0.queue3.rx_bytes: 0
dev.ix.0.queue3.lro_queued: 0
dev.ix.0.queue3.lro_flushed: 0
dev.ix.0.queue4.interrupt_rate: 0
dev.ix.0.queue4.txd_head: 0
dev.ix.0.queue4.txd_tail: 0
dev.ix.0.queue4.no_desc_avail: 0
dev.ix.0.queue4.tx_packets: 0
dev.ix.0.queue4.rxd_head: 0
dev.ix.0.queue4.rxd_tail: 0
dev.ix.0.queue4.rx_packets: 0
dev.ix.0.queue4.rx_bytes: 0
dev.ix.0.queue4.lro_queued: 0
dev.ix.0.queue4.lro_flushed: 0
dev.ix.0.queue5.interrupt_rate: 0
dev.ix.0.queue5.txd_head: 0
dev.ix.0.queue5.txd_tail: 0
dev.ix.0.queue5.no_desc_avail: 0
dev.ix.0.queue5.tx_packets: 0
dev.ix.0.queue5.rxd_head: 0
dev.ix.0.queue5.rxd_tail: 0
dev.ix.0.queue5.rx_packets: 0
dev.ix.0.queue5.rx_bytes: 0
dev.ix.0.queue5.lro_queued: 0
dev.ix.0.queue5.lro_flushed: 0
dev.ix.0.queue6.interrupt_rate: 0
dev.ix.0.queue6.txd_head: 0
dev.ix.0.queue6.txd_tail: 0
dev.ix.0.queue6.no_desc_avail: 0
dev.ix.0.queue6.tx_packets: 0
dev.ix.0.queue6.rxd_head: 0
dev.ix.0.queue6.rxd_tail: 0
dev.ix.0.queue6.rx_packets: 0
dev.ix.0.queue6.rx_bytes: 0
dev.ix.0.queue6.lro_queued: 0
dev.ix.0.queue6.lro_flushed: 0
dev.ix.0.queue7.interrupt_rate: 0
dev.ix.0.queue7.txd_head: 0
dev.ix.0.queue7.txd_tail: 0
dev.ix.0.queue7.no_desc_avail: 0
dev.ix.0.queue7.tx_packets: 0
dev.ix.0.queue7.rxd_head: 0
dev.ix.0.queue7.rxd_tail: 0
dev.ix.0.queue7.rx_packets: 0
dev.ix.0.queue7.rx_bytes: 0
dev.ix.0.queue7.lro_queued: 0
dev.ix.0.queue7.lro_flushed: 0

Here is my driver:

bsd# dmesg | grep ix
module_register: module pci/ixgbe already exists!
Module pci/ixgbe failed to register: 17
module_register: module pci/ixv already exists!
Module pci/ixv failed to register: 17
acpi0: Power Button (fixed)
ix0: <Intel(R) PRO/10GbE PCI-Express Network Driver, Version - 2.4.4> port 0x2000-0x201f mem 0x9ba40000-0x9ba5ffff,0x9ba00000-0x9ba3ffff,0x9ba60000-0x9ba63fff irq 30 at device 0.0 on pci31
ix0: Using MSIX interrupts with 9 vectors
ix0: RX Descriptors exceed system mbuf max, using default instead!
ix0: Ethernet address: 00:1b:21:cb:57:96
ix0: PCI Express Bus: Speed 2.5Gb/s Width x8

Also, here is my /boot/loader.conf:

gaze# cat /boot/loader.conf
# Useful when your software uses select() instead of kevent/kqueue,
# or when you are under DDoS; DNS accept filter available on 8.0+
accf_data_load="YES"
accf_http_load="YES"

# Async I/O system calls
aio_load="YES"

# Load the Intel 82598 driver
ixgbe_load="YES"

# Load CUBIC congestion control
cubic_load="YES"

#hw.bce.rxd=2048
#hw.igb.rxd=2048
#hw.bce.tso_enable=0
#hw.pci.enable_msix=0
#net.isr.direct_force=1
#net.isr.direct=1
#net.isr.maxthreads=12                   # max number of threads for NIC IRQ balancing
#net.isr.numthreads=12

autoboot_delay="3"                       # reduce boot menu delay from 10 to 3 seconds
#if_bce_load="YES"                       # load the Broadcom bce kernel module on boot
loader_logo="beastie"                    # old FreeBSD logo menu
#net.inet.tcp.syncache.hashsize=1024     # syncache hash size
#net.inet.tcp.syncache.bucketlimit=100   # syncache bucket limit
#net.inet.tcp.tcbhashsize=4096           # tcb hash size
net.isr.bindthreads=0                    # do not bind threads to CPUs
net.isr.direct=1                         # interrupt handling on multiple CPUs
net.isr.direct_force=1                   # "
net.isr.maxthreads=16                    # max number of threads for NIC IRQ balancing
hw.pci.enable_msix=1
hw.ix.rx_process_limit=1000000
hw.igb.rx_process_limit=1000000
hw.em.rxd=2048
hw.igb.rxd=2048

And my /etc/sysctl.conf:

# release/9.0.0/etc/sysctl.conf 112200 2003-03-13 18:43:50Z mux $
#
# This file is read when going to multi-user and its contents piped thru
# ``sysctl'' to adjust kernel values.  ``man 5 sysctl.conf'' for details.
#

# Uncomment this to prevent users from seeing information about processes that
# are being run under another UID.
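(Aside, not from the original mail: net.isr.bindthreads=0 above leaves the interrupt threads unpinned. Another common approach is to pin each queue's MSI-X interrupt to its own core with cpuset(1). A minimal sketch that only prints the commands; the IRQ numbers 256-263 are placeholders, the real ones come from `vmstat -i`:)

```shell
# Print one cpuset command per ix0 queue: pin IRQ (placeholder 256+q) to CPU q.
# Run the printed commands as root after substituting the real IRQ numbers.
for q in 0 1 2 3 4 5 6 7; do
    irq=$((256 + q))
    echo "cpuset -x $irq -l $q"
done
```

This spreads the 8 queue interrupts over 8 distinct cores instead of letting the scheduler migrate them.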
#security.bsd.see_other_uids=0

# Increase buffer size of connections
kern.ipc.nmbclusters=2560000

# Raise the interrupt storm threshold
hw.intr_storm_threshold=40000

# Decrease ACK, SYN-ACK, FIN-ACK wait time
net.inet.tcp.msl=5000

# Drop TCP connections to closed ports without an RST
net.inet.tcp.blackhole=2

# Drop UDP packets to closed ports without an ICMP unreachable
net.inet.udp.blackhole=1

# ICMP response limiting: max 50, above that do not respond
net.inet.icmp.icmplim=50

# Backlog queue
kern.ipc.somaxconn=65535

# Increase sockets
kern.ipc.maxsockets=20480000

# Increase socket buffer
kern.ipc.maxsockbuf=104857600

# For lower latency you can decrease the scheduler's maximum time slice
kern.sched.slice=1

# Every socket is a file, so increase these
kern.maxfiles=20480000
kern.maxfilesperproc=200000000
kern.maxvnodes=20000000

# Increase buffers
net.inet.tcp.recvspace=65536000
net.inet.tcp.recvbuf_max=1048576000
net.inet.tcp.recvbuf_inc=6553500
net.inet.tcp.sendspace=32768000
net.inet.tcp.sendbuf_max=2097152000
net.inet.tcp.sendbuf_inc=8192000

# The timestamp option is also useful when using syncookies
net.inet.tcp.rfc1323=1

# If you set this there is no need for the TCP_NODELAY sockopt (see man tcp)
net.inet.tcp.delayed_ack=0

# Turn off receive autotuning
# You can play with it.
#net.inet.tcp.recvbuf_auto=0
#net.inet.tcp.sendbuf_auto=0

# We assume we have very fast clients
#net.inet.tcp.slowstart_flightsize=100
#net.inet.tcp.local_slowstart_flightsize=100

# For outgoing connections only. Good for seed-boxes and ftp servers.
net.inet.ip.portrange.first=1024
net.inet.ip.portrange.last=65535

# stops route cache degradation during a high-bandwidth flood
#net.inet.ip.rtexpire=2
net.inet.ip.rtminexpire=2
net.inet.ip.rtmaxcache=1024

# Security
net.inet.ip.sourceroute=0
net.inet.ip.accept_sourceroute=0
net.inet.icmp.maskrepl=0
net.inet.icmp.log_redirect=0
net.inet.tcp.drop_synfin=1

# Security
net.inet.ip.redirect=0
net.inet.ip.sourceroute=0
net.inet.ip.accept_sourceroute=0
net.inet.icmp.log_redirect=0

# Max number of timewait sockets (maximum number of compressed TCP TIME_WAIT entries)
net.inet.tcp.maxtcptw=20000000

# FIN_WAIT_2 state fast recycle
net.inet.tcp.fast_finwait2_recycle=1

# Time before a TCP keepalive probe is sent
# default is 2 hours (7200000)
net.inet.tcp.keepidle=60000

# Should be increased until net.inet.ip.intr_queue_drops stays at zero
net.inet.ip.intr_queue_maxlen=4096

# enable send/recv autotuning
net.inet.tcp.sendbuf_auto=1
net.inet.tcp.recvbuf_auto=1

# increase autotuning step size
net.inet.tcp.sendbuf_inc=16384
net.inet.tcp.recvbuf_inc=524288

# turn off inflight limiting
#net.inet.tcp.inflight.enable=0

# set this on test/measurement hosts
net.inet.tcp.hostcache.expire=1

# Randomized process IDs
kern.randompid=348

# do not process any IP options in the IP headers
net.inet.ip.process_options=0

# disable path MTU discovery
net.inet.tcp.path_mtu_discovery=0

# Disable SACK
net.inet.tcp.sack.enable=0

kern.ipc.shmmax=5368709120
kern.ipc.shmall=13107200
kern.ipc.semmsl=1024

net.inet6.ip6.auto_linklocal=0
net.inet.tcp.syncookies=0

# BPF: increase buffer size
net.bpf.maxbufsize=1048576

kern.ipc.nmbjumbop=262144
kern.ipc.nmbjumbo16=32000
kern.ipc.nmbjumbo9=64000

dev.ix.0.fc=0

Best Regards..
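(Aside, not from the original mail: a quick sanity check on the two biggest limits in the sysctl.conf above, since they imply very large worst-case memory use. The arithmetic, assuming the standard 2 KB mbuf cluster size (MCLBYTES):)

```shell
# Worst-case memory pinned by kern.ipc.nmbclusters at 2 KB per cluster.
clusters=2560000
mclbytes=2048
echo "nmbclusters worst case: $((clusters * mclbytes / 1024 / 1024)) MB"

# kern.ipc.maxsockbuf is a per-socket cap; at 100 MB per socket, a few
# hundred full sockets would already consume a large share of 64 GB.
maxsockbuf=104857600
echo "maxsockbuf cap: $((maxsockbuf / 1024 / 1024)) MB per socket"
```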
Seyit Özgür
Network Administrator (Network Yöneticisi)
Magnet A.Ş.
Eski Üsküdar Cad. No:10 VIP Center Kat:7 İçerenköy Ataşehir İstanbul
t: 0216 577 33 11 | f: 0216 469 52 43
www.magnetdigital.com