Date: Fri, 25 Jan 2008 10:56:04 +0400 (GST)
From: Rakhesh Sasidharan <rakhesh@rakhesh.com>
To: Gilberto Villani Brito <linux@giboia.org>
Cc: freebsd-pf@freebsd.org
Subject: Re: ping: sendto: No buffer space available
Message-ID: <20080125105447.K51665@dogmatix.home.rakhesh.com>
In-Reply-To: <6e6841490801221122p108f8196x9c50f216cccac956@mail.gmail.com>
References: <20080122185929.A35598@obelix.home.rakhesh.com> <20080122193545.N35750@obelix.home.rakhesh.com> <6e6841490801221122p108f8196x9c50f216cccac956@mail.gmail.com>
Gilberto Villani Brito wrote:

> Try using these options in your pf.conf:
>
> set limit { states 1000000000, src-nodes 1000000000, frags 50000000 }

I did this. After about a day of usage and no significant uploads/
downloads (unlike the previous two times) I started getting the same
problems.

I am on FreeBSD 6.3/i386 now. Upgraded the day before.
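For reference, this is roughly what I plan to capture the next time the box
wedges, before touching pf at all. Treat it as a sketch -- the commands are
from memory on 6.x, and the comments are just my guesses at what to look for:

  # Is pf anywhere near its limits?
  pfctl -si       # "current entries" under State Table
  pfctl -sm       # the hard limits actually in effect

  # Is the kernel out of mbuf clusters? ("No buffer space available" is
  # ENOBUFS, and cluster exhaustion is one common way to get it.)
  netstat -m
  sysctl kern.ipc.nmbclusters

If the mbuf cluster "current" figure turns out to be pinned at the max only
while pf is enabled, that would point at something holding on to clusters
rather than at the state table itself.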
Thanks,
Rakhesh

>
> --
> Gilberto Villani Brito
> System Administrator
> Londrina - PR
> Brazil
> gilbertovb(a)gmail.com
>
>
> On 22/01/2008, Rakhesh Sasidharan <rakhesh@rakhesh.com> wrote:
>>
>> Update below ...
>>
>>> Hi,
>>>
>>> I am running PF on a FreeBSD 6.2/i386 machine. Started doing so about a
>>> week ago. In case it matters, this machine is the master in a CARP group
>>> with another machine. Both of them run PF and have pfsync to keep things
>>> in sync.
>>>
>>> What happens is that after a day or so of heavy usage (downloading some
>>> torrents and doing a portinstall/portupgrade/copying stuff to other
>>> machines on my LAN simultaneously), this PF FreeBSD machine stops
>>> responding to the network.
>>>
>>> The machine is perfectly fine. I can log in and do stuff; it's just as if
>>> it's disconnected from the network.
>>>
>>> When I ping another host on the LAN, this is what I get:
>>>
>>> PING 192.168.17.13 (192.168.17.13): 56 data bytes
>>> ping: sendto: No buffer space available
>>> ping: sendto: No buffer space available
>>> ping: sendto: No buffer space available
>>> ^C
>>> --- 192.168.17.13 ping statistics ---
>>>
>>> Now, if I disable PF (pfctl -d) things start to work!
>>>
>>> And after that if I enable PF (pfctl -e) things continue to work.
>>>
>>> So it pretty much looks like a PF problem. Searching this list's archives
>>> I found one old thread
>>> (http://article.gmane.org/gmane.os.freebsd.devel.pf4freebsd/1745) that
>>> mentions a similar problem. Only, there re-enabling PF didn't solve the
>>> problem (though reloading with a re-read of the rules helped).
>>>
>>> This problem has happened twice over the last week.
>>>
>>> Based on the previous thread, I thought the following outputs might be
>>> useful.
>>>
>>> Output of ''pfctl -si'':
>>>
>>> Interface Stats for xl0               IPv4             IPv6
>>>   Bytes In                      1778679531                0
>>>   Bytes Out                      424820294                0
>>>   Packets In
>>>     Passed                         2178377                0
>>>     Blocked                          14705                0
>>>   Packets Out
>>>     Passed                         1911568                0
>>>     Blocked                          74601                0
>>>
>>> State Table                          Total             Rate
>>>   current entries                      632
>>>   searches                        18330505        10534.8/s
>>>   inserts                           335629          192.9/s
>>>   removals                          334997          192.5/s
>>> Counters
>>>   match                             551629          317.0/s
>>>   bad-offset                             0            0.0/s
>>>   fragment                               0            0.0/s
>>>   short                                  0            0.0/s
>>>   normalize                              0            0.0/s
>>>   memory                                 0            0.0/s
>>>   bad-timestamp                          0            0.0/s
>>>   congestion                             0            0.0/s
>>>   ip-option                            21             0.0/s
>>>   proto-cksum                            0            0.0/s
>>>   state-mismatch                     12159            7.0/s
>>>   state-insert                          61            0.0/s
>>>   state-limit                            0            0.0/s
>>>   src-limit                              0            0.0/s
>>>   synproxy                             998            0.6/s
>>>
>>> I have the following line in my /etc/pf.conf file. So I suppose I'm not
>>> running out of state table entries either ...
>>>
>>> set limit { states 20000, frags 10000, src-nodes 2000 }
>>>
>>> Finally, here's the output of ''netstat -m'':
>>>
>>> 324/666/990 mbufs in use (current/cache/total)
>>> 322/308/630/32768 mbuf clusters in use (current/cache/total/max)
>>> 320/192 mbuf+clusters out of packet secondary zone in use (current/cache)
>>> 0/0/0/0 4k (page size) jumbo clusters in use (current/cache/total/max)
>>> 0/0/0/0 9k jumbo clusters in use (current/cache/total/max)
>>> 0/0/0/0 16k jumbo clusters in use (current/cache/total/max)
>>> 725K/782K/1507K bytes allocated to network (current/cache/total)
>>> 0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
>>> 0/0/0 requests for jumbo clusters denied (4k/9k/16k)
>>> 0/7/6656 sfbufs in use (current/peak/max)
>>> 0 requests for sfbufs denied
>>> 0 requests for sfbufs delayed
>>> 0 requests for I/O initiated by sendfile
>>> 67 calls to protocol drain routines
>>>
>>> Any suggestions what I can do to troubleshoot?
>>>
>>> Thanks.
>>> Rakhesh
>>>
>>> ps. Forgot to mention: yes, my rules have some ''rdr'' rules. That's
>>> another similarity with the problem in the previous thread.
>>>
>>> ps2. When the problem happens, this machine goes down to backup status
>>> (for CARP). However, once I restart PF, even though things work fine
>>> otherwise, the status does not return to master. Mentioning it in case
>>> that means something ... (I have the appropriate sysctls and advskew set
>>> for this machine to become master again when things are restored. It
>>> usually works, except in this situation.)
>>>
>>
>> Turns out disabling and enabling PF doesn't solve the problem permanently.
>> After trying an NFS copy, the machine started having problems again! I
>> don't think it copied anything more than 5-10MB of data before losing
>> connectivity!
>>
>> The only solution then was to do an ''/etc/rc.d/pf reload''. Since this
>> reloads the rules too, it solves the problem. So my problem is the same as
>> the one in the thread I mentioned.
>>
>> Please help.
>>
>> Thanks,
>> Rakhesh
>>
>> ---
>> http://rakhesh.net/
>> _______________________________________________
>> freebsd-pf@freebsd.org mailing list
>> http://lists.freebsd.org/mailman/listinfo/freebsd-pf
>> To unsubscribe, send any mail to "freebsd-pf-unsubscribe@freebsd.org"
>>
>

Rakhesh

---
http://rakhesh.net/
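ps. For anyone finding this in the archives: the part of ''/etc/rc.d/pf
reload'' that appears to matter is just the re-read of the ruleset, which by
hand would be roughly

  pfctl -f /etc/pf.conf

(''pfctl -d'' and ''pfctl -e'' only toggle pf off and on without re-reading
the rules, which would fit with the disable/enable trick no longer helping.)
I say "roughly" because I haven't compared it against the rc script line by
line.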