Date:      Wed, 10 Feb 2021 11:05:48 -0500
From:      Gunther Schadow <raj@gusw.net>
To:        freebsd-performance@freebsd.org
Subject:   Re: FreeBSD on Amazon AWS EC2 long standing performance problems
Message-ID:  <96d43dbe-ed50-8ba4-f676-673ee99725bd@gusw.net>
In-Reply-To: <5BC0DEF9-3D58-4FFC-9E20-311B0520A25A@longcount.org>
References:  <5BC0DEF9-3D58-4FFC-9E20-311B0520A25A@longcount.org>

I think we have gotten enough feedback from other professionals by now to suggest that it
would do the FreeBSD project good to acknowledge the issue and create some sort of
work-in-progress / project statement, perhaps a wiki, where people can flock to look for
workarounds and the current status, just so they don't feel so lost and lonely, wondering
if anybody cares at all. Such a meeting point could at least serve as something of a
self-help group, for emotional support and positive thinking =8).

I also have a slight sign of hope: in my latest Amazon Linux db server deployment
the performance is only half of what I got in my previous db server deployment.
Go figure. It's a fresh launch, and I can't figure out what else I had optimized
on the previous Amazon Linux box. So, while not improving anything, it at least
closes the gap between Linux and FreeBSD in the socialistic way ;)

Now I am trying the FreeBSD install again with the same disk setup, because one
thing I know clearly is that it makes a huge difference whether you have one large EBS
device with a single partition / file system, or the same large EBS device with
many partitions and file systems, separating tables, indexes, temporary sort space,
etc., so that there is less random-access contention on any single file system.
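For concreteness, here is a minimal sketch of the kind of split I mean, assuming a
PostgreSQL-style tablespace layout; the mount points and tablespace names are hypothetical,
only illustrating the idea of giving tables, indexes, and temporary sort space their own
file systems:

```python
# Hypothetical layout: each path is assumed to be a separately mounted
# file system (one per EBS volume or partition), so random I/O for tables,
# indexes, and temp sort space does not contend on a single file system.
import os

TABLESPACES = {
    "ts_tables":  "/vol/pg_tables",   # heap data
    "ts_indexes": "/vol/pg_indexes",  # index data
    "ts_temp":    "/vol/pg_temp",     # temporary sort/spill space
}

def emit_tablespace_sql(owner: str = "postgres") -> None:
    """Create the directories and print the CREATE TABLESPACE statements
    a DBA would then run; names and paths are illustrative only."""
    for name, path in TABLESPACES.items():
        os.makedirs(path, exist_ok=True)
        print(f"CREATE TABLESPACE {name} OWNER {owner} LOCATION '{path}';")

if __name__ == "__main__":
    emit_tablespace_sql()
```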

Why that matters so much, I actually wonder. It's not the underlying EBS itself that
makes the difference. In the bare-metal world, the approach would be separate disks
vs. striping (RAID-0) to get more spindles involved instead of having the disks seek.
But none of that should matter that much any more with SSD drives (which EBS gp2 and
gp3 are!)

Fortunately Amazon now also has gp3 volumes, which support 3000 I/O operations per
second (IOPS) and 250 or up to 1000 MB/s of throughput even when they are as small as
4 GB. So instead of a 1 TB gp2 volume carved into many partitions, I now make myself
over 20 separate smaller gp3 volumes; the advantage is that I can resize each one
individually without colliding with a neighboring partition.
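A minimal sketch of what that provisioning could look like with boto3 (the volume names,
sizes, and availability zone are my own placeholder values, not anything from this thread):

```python
# Sketch: create several small gp3 volumes, each with its own IOPS/throughput
# budget, instead of one large partitioned gp2 volume. All values are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hypothetical layout: logical role -> size in GiB
VOLUMES = {"pg-tables": 40, "pg-indexes": 20, "pg-temp": 8, "pg-wal": 8}

def create_gp3_volumes(az: str = "us-east-1a") -> dict:
    """Create one gp3 volume per logical role; returns role -> VolumeId."""
    ids = {}
    for name, size_gib in VOLUMES.items():
        resp = ec2.create_volume(
            AvailabilityZone=az,
            Size=size_gib,
            VolumeType="gp3",
            Iops=3000,        # gp3 baseline IOPS
            Throughput=250,   # MB/s
            TagSpecifications=[{
                "ResourceType": "volume",
                "Tags": [{"Key": "Name", "Value": name}],
            }],
        )
        ids[name] = resp["VolumeId"]
    return ids

def grow_volume(volume_id: str, new_size_gib: int) -> None:
    """Resize a single volume later without touching its neighbors."""
    ec2.modify_volume(VolumeId=volume_id, Size=new_size_gib)
```

After growing a volume you still have to extend the file system inside the instance (e.g.
growfs on FreeBSD), but that only touches the one volume, which is exactly the part a
single big partitioned volume makes painful.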

Tomorrow I should have better comparison numbers for my database on Linux and on
FreeBSD configured exactly the same way, and the results might be closer now. But sadly
only in the socialist way.

regards,
-Gunther

On 2/6/2021 9:55 AM, Mark Saad wrote:

> On Feb 6, 2021, at 7:07 AM, Łukasz Wąsikowski <lukasz@wasikowski.net> wrote:
> All
>    So what I was getting at, is do we have good data on what the issue is ? Can we make a new wiki page on the FreeBSD wiki to track what works what and  doesn’t .  Does one exist ?
>
> To be clear we should check if the issue something that aws is doing with their xen platform , kvm/qemu one or common to all ? Also does that same issue appear on google and Microsoft’s platforms?  This will at least get some bounds on the problem and  what if any fixes may exist .
> There are some commercial FreeBSD products running on aws . Maybe the vendors know some stuff that can help ?
>
>
> Thoughts ?
>
> ---
> Mark Saad | nonesuch@longcount.org
>
> _______________________________________________
> freebsd-performance@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-performance
> To unsubscribe, send any mail to "freebsd-performance-unsubscribe@freebsd.org"
>


