Date:      Mon, 13 Jul 2015 10:54:41 +0100
From:      Steven Hartland <killing@multiplay.co.uk>
To:        Yamagi Burmeister <lists@yamagi.org>
Cc:        freebsd-scsi@freebsd.org
Subject:   Re: Device timeouts(?) with LSI SAS3008 on mpr(4)
Message-ID:  <55A38AE1.5010204@multiplay.co.uk>
In-Reply-To: <20150713112547.8f044beabe26672fd13fc528@yamagi.org>
References:  <20150707132416.71b44c90f7f4cd6014a304b2@yamagi.org> <20150713110148.1a27b973881b64ce2f9e98e0@yamagi.org> <55A3813C.7010002@multiplay.co.uk> <20150713112547.8f044beabe26672fd13fc528@yamagi.org>


I assume da0 and da1 are a different type of disk then?

With regard to your disk setup: are all of your disks SSDs? If so, why do 
you have separate log and cache devices?

One thing you could try is to limit the delete size.

kern.geom.dev.delete_max_sectors limits the size of a single request 
allowed by GEOM, but individual requests can then be built back up again 
in CAM, so I don't think this will help you too much.
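
For context: assuming the usual 512-byte sectors, your current value works 
out to 262144 * 512 = 134217728 bytes, so GEOM splits deletes into chunks 
of at most 128 MiB before they reach CAM.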

Instead I would try limiting the individual device delete_max, so add 
one line per disk to /boot/loader.conf of the form:
kern.cam.da.X.delete_max=1073741824

You can actually change these on the fly using sysctl, but in order to 
catch any cleanup done on boot, /boot/loader.conf is the best place to 
tune them permanently.
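
For a quick experiment you can set the same knob at runtime, e.g. (da0 
here is just an example device):
sysctl kern.cam.da.0.delete_max=1073741824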

I've attached a little C util which you can use to issue direct disk 
deletes if you have a spare disk you can play with.

Be aware that most controllers optimise deletes away if they know the 
cells are already empty, hence you need to have written data to the 
sectors each time you test a delete.
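
For example, you could refill the test range before each timed delete 
with something along these lines (the device and sizes are placeholders):
dd if=/dev/random of=/dev/da8 bs=1m count=1024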

As the requests go through GEOM, anything over 
kern.geom.dev.delete_max_sectors will be split, but the pieces may well 
be recombined in CAM.

Another relevant setting is vfs.zfs.vdev.trim_max_active, which can be 
used to limit the number of outstanding GEOM delete requests to each 
device.
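
If you want to experiment with it, it can go in /boot/loader.conf like 
the delete_max tunables, e.g. to reduce the 64 shown in your output (16 
is just an illustrative value):
vfs.zfs.vdev.trim_max_active=16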

Oh, one other thing: it would be interesting to see the output from 
camcontrol identify <device>, e.g.
camcontrol identify da8
camcontrol identify da0

     Regards
     Steve

On 13/07/2015 10:25, Yamagi Burmeister wrote:
> On Mon, 13 Jul 2015 10:13:32 +0100
> Steven Hartland <killing@multiplay.co.uk> wrote:
>
>> What do you see from:
>> sysctl -a | grep -E '(delete|trim)'
> % sysctl -a | grep -E '(delete|trim)'
> kern.geom.dev.delete_max_sectors: 262144
> kern.cam.da.1.delete_max: 8589803520
> kern.cam.da.1.delete_method: ATA_TRIM
> kern.cam.da.8.delete_max: 12884705280
> kern.cam.da.8.delete_method: ATA_TRIM
> kern.cam.da.9.delete_max: 12884705280
> kern.cam.da.9.delete_method: ATA_TRIM
> kern.cam.da.3.delete_max: 12884705280
> kern.cam.da.3.delete_method: ATA_TRIM
> kern.cam.da.12.delete_max: 12884705280
> kern.cam.da.12.delete_method: ATA_TRIM
> kern.cam.da.7.delete_max: 12884705280
> kern.cam.da.7.delete_method: ATA_TRIM
> kern.cam.da.2.delete_max: 12884705280
> kern.cam.da.2.delete_method: ATA_TRIM
> kern.cam.da.11.delete_max: 12884705280
> kern.cam.da.11.delete_method: ATA_TRIM
> kern.cam.da.6.delete_max: 12884705280
> kern.cam.da.6.delete_method: ATA_TRIM
> kern.cam.da.10.delete_max: 12884705280
> kern.cam.da.10.delete_method: ATA_TRIM
> kern.cam.da.5.delete_max: 12884705280
> kern.cam.da.5.delete_method: ATA_TRIM
> kern.cam.da.0.delete_max: 8589803520
> kern.cam.da.0.delete_method: ATA_TRIM
> kern.cam.da.4.delete_max: 12884705280
> kern.cam.da.4.delete_method: ATA_TRIM
> vfs.zfs.trim.max_interval: 1
> vfs.zfs.trim.timeout: 30
> vfs.zfs.trim.txg_delay: 32
> vfs.zfs.trim.enabled: 1
> vfs.zfs.vdev.trim_max_pending: 10000
> vfs.zfs.vdev.bio_delete_disable: 0
> vfs.zfs.vdev.trim_max_active: 64
> vfs.zfs.vdev.trim_min_active: 1
> vfs.zfs.vdev.trim_on_init: 1
> kstat.zfs.misc.arcstats.deleted: 289783817
> kstat.zfs.misc.zio_trim.failed: 431
> kstat.zfs.misc.zio_trim.unsupported: 0
> kstat.zfs.misc.zio_trim.success: 6457142235
> kstat.zfs.misc.zio_trim.bytes: 88207753330688
>
>
>> Also, while you're seeing time-outs, what does the output from gstat -d -p
>> look like?
> I'll try to get that data but it may take a while.
>
> Thank you,
> Yamagi
>


[Attachment: ioctl-delete.c]

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <err.h>
#include <errno.h>
#include <fcntl.h>
#include <sys/disk.h>
#include <sys/time.h>
#include <libutil.h>
#include <unistd.h>

void
syntax(void)
{
	fprintf(stderr, "usage: ioctl-delete <device> <startblock> <blockcount>\n");
	exit(1);
}


/* Difference between two timevals, in seconds. */
double
timediff(const struct timeval *t1, const struct timeval *t2)
{
	double ret;

	ret = t2->tv_sec - t1->tv_sec;
	ret += (t2->tv_usec - t1->tv_usec) * 0.000001;

	return ret;
}

int
main(int argc, char **argv)
{
	off_t ioarg[2];
	int fd;
	off_t offset, size;
	char *device;
	struct timeval start, end;
	double tdiff;
	char buf[8];
	uint64_t bsec;
	unsigned int sector_size = 512;

	if (argc != 4)
		syntax();

	device = argv[1];
	offset = strtoull(argv[2], NULL, 10);
	size = strtoull(argv[3], NULL, 10);

	fprintf(stderr, "deleting: %jd, %jd\n", (intmax_t)offset, (intmax_t)size);

	if ((fd = open(device, O_RDWR)) < 0)
		err(1, "failed to open device '%s'", device);

	/*
	 * The start block and count are given in sectors, so fetch the
	 * device sector size to convert them to byte offsets.
	 */
	if (ioctl(fd, DIOCGSECTORSIZE, &sector_size) < 0)
		err(1, "DIOCGSECTORSIZE failed");

	ioarg[0] = offset * sector_size;
	ioarg[1] = size * sector_size;

	/* Issue a single BIO_DELETE directly to the device and time it. */
	gettimeofday(&start, NULL);
	if (ioctl(fd, DIOCGDELETE, ioarg) < 0)
		err(1, "delete failed");
	gettimeofday(&end, NULL);
	tdiff = timediff(&start, &end);
	if (tdiff <= 0)
		tdiff = 0.000001;	/* guard against a sub-microsecond result */

	bsec = (uint64_t)((long double)ioarg[1] / tdiff);
	humanize_number(buf, sizeof(buf), (int64_t)bsec, "/s", HN_AUTOSCALE,
	    HN_B | HN_NOSPACE | HN_DECIMAL);

	printf("deleted %jd bytes in %f seconds, %ju bytes per second (%s)\n",
	    (intmax_t)ioarg[1], tdiff, (uintmax_t)bsec, buf);

	close(fd);
	exit(0);
}
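
To build and try it: the utility links against libutil for 
humanize_number(3), so compile with -lutil. The device and block numbers 
below are only placeholders (2097152 sectors of 512 bytes is 1 GiB):

cc -O2 -o ioctl-delete ioctl-delete.c -lutil
./ioctl-delete /dev/da8 0 2097152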



