From owner-freebsd-stable@FreeBSD.ORG Mon Aug 9 15:12:47 2010
From: Ivan Voras
Date: Mon, 9 Aug 2010 17:12:21 +0200
To: Joshua Boyd
Cc: freebsd-stable@freebsd.org
Subject: Re: 8-STABLE Slow Write Speeds on ESXI 4.0

On 9 August 2010 16:55, Joshua Boyd wrote:
> On Sat, Aug 7, 2010 at 1:58 PM, Ivan Voras wrote:
>>
>> On 7 August 2010 19:03, Joshua Boyd wrote:
>> > On Sat, Aug 7, 2010 at 7:57 AM, Ivan Voras wrote:
>> >>
>> >> It's unlikely they will help, but try:
>> >>
>> >> vfs.read_max=32
>> >>
>> >> for read speeds (but test using the UFS file system, not as a raw
>> >> device like above), and:
>> >>
>> >> vfs.hirunningspace=8388608
>> >> vfs.lorunningspace=4194304
>> >>
>> >> for writes. Again, it's unlikely, but I'm interested in the results
>> >> you achieve.
>> >
>> > This is interesting. Write speeds went up to about 40 MB/s. Still slow,
>> > but 4x faster than before.
>> >
>> > [root@git ~]# dd if=/dev/zero of=/var/testfile bs=1M count=250
>> > 250+0 records in
>> > 250+0 records out
>> > 262144000 bytes transferred in 6.185955 secs (42377288 bytes/sec)
>> > [root@git ~]# dd if=/var/testfile of=/dev/null
>> > 512000+0 records in
>> > 512000+0 records out
>> > 262144000 bytes transferred in 0.811397 secs (323077424 bytes/sec)
>> >
>> > So read speeds are up to what they should be, but write speeds are
>> > still significantly below what they should be.
>>
>> Well, you *could* double the size of the "runningspace" tunables and
>> try that :)
>>
>> Basically, in tuning these two settings we are cheating: increasing
>> read-ahead (read_max) and write in-flight buffering (runningspace) to
>> offload as much IO to the controller (in this case VMware) as soon as
>> possible, so as to work around the horrible IO-caused context switches
>> VMware incurs. It will help sequential performance, but nothing can
>> help random IOs.
>
> Hmm. So what you're saying is that FreeBSD doesn't properly support the
> ESXi controller?

Nope, I'm saying you will never get raw-disk-like performance with any
"full" virtualization product, regardless of specifics. If you want
performance, go OS-level (like jails) or use some form of
paravirtualization.

> I'm going to try 7.3-RELEASE today, just to make sure that this isn't a
> regression of some kind. It seems from reading other posts that this
> used to work properly and satisfactorily.

Nope, I've been messing around with VMware for a long time and the
performance penalty was always there.
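
If you do want to try the doubling, something along these lines should do
it; this is only a sketch, untested on your setup, and the numbers are a
starting point rather than a recommendation:

  # apply at runtime with sysctl(8)
  sysctl vfs.hirunningspace=16777216
  sysctl vfs.lorunningspace=8388608

  # or persist across reboots in /etc/sysctl.conf
  vfs.read_max=32
  vfs.hirunningspace=16777216
  vfs.lorunningspace=8388608

You can always check the current values with "sysctl vfs.hirunningspace
vfs.lorunningspace vfs.read_max" before and after changing them.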
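
One more note on the numbers above: the read test ran dd with its default
512-byte block size (hence the 512000+0 records), and the file had just
been written, so part of it may still have been in the buffer cache. For a
comparison that matches the write test, something like this (again, just a
sketch) would be closer:

  dd if=/var/testfile of=/dev/null bs=1M

ideally with a test file noticeably larger than the guest's RAM, so the
cache doesn't skew the result.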