From owner-freebsd-hackers@freebsd.org Tue Nov 19 12:06:41 2019
Date: Tue, 19 Nov 2019 13:06:36 +0100 (CET)
From: Wojciech Puchar <wojtek@puchar.net>
To: freebsd-hackers@freebsd.org
Subject: geom_ssdcache
Today SSDs are really fast and quite cheap, but hard drives are still many times cheaper. Magnetic hard drives are fine for long sequential reads; they are just bad at seeks.

While ZFS is trendy nowadays, I would stick with UFS anyway. I try to keep most data on HDDs and use SSDs for small files and high-I/O needs. It works, but it needs too much manual and semi-automated work. It would be better to just use HDDs for storage, with part of the SSD as a cache and the rest for temporary storage only.

My idea is to write a GEOM layer that caches one GEOM provider (a magnetic disk/partition, or gmirror/graid5) using another GEOM provider (an SSD partition). I have no experience writing GEOM layer drivers, but I think geom_cache would be a fine starting point.

At first I would do read/write-through caching. Write-back caching would come next, if at all; it doesn't seem like a good idea unless you are sure the SSD won't fail.
But my question is really about UFS. In the GEOM layer I would like to know whether a read/write operation is an inode/directory/superblock access or a regular data access, so I could give the former higher priority. Regular data would not be cached at all, or only when the read size is below a defined value. Is it possible to modify the UFS code to pass a flag/value of some kind when issuing a read/write request to the device layer?